you probably don’t need desktop mfa

Update 9/25/24: I was informed after writing this post that NIST has published a draft of the latest revision of SP-800-63B, Digital Identity Guidelines. I recommend referring to section 3.1.7 “Multi-factor cryptographic authentication.”

We recently had a conversation about password rotations at work. One of our engineering managers suggested we implement a password rotation policy.

When I explained to them that this was no longer a recommended practice, another engineering colleague suggested we implement multi-factor authentication (MFA) on endpoints (laptops). At nearly the same time, Okta started pushing an add-on called Okta Device Access, where one of the main features is “Desktop MFA.”

(To be clear, I am a fan of Okta. Okta is a good company. They have been good to me in my career. I’m picking on them here because my current employer uses Okta, and therefore, I use Okta. Okta is not the only organization that offers Desktop MFA. I also have a Yubikey. Yubico is also a good company.)

Both of the conversations I had about implementing this feature were identical.

“We/you should require MFA to log in to desktops.”
“What do you mean?”
“You should require a second factor to get in besides just the password.”
“But, you have to have the device, right?”
“Yeah…”
“That is MFA. The device is something you have, and the password is something you know.”

The counter-argument I hear to this is “well, but if they have your device, the password is the weak link.” What people who make this argument actually mean is “possession of the device only authenticates the machine, not the user,” but we’ll get to that later.

Assume for a minute that trusted platform modules (TPMs) and cryptographic subsystems such as the Secure Enclave don’t exist. Now consider the relationship between a computer user and a computer: a series of interactions with hardware at a base layer and a collection of services on top of that hardware layer. The majority of the services are independent of the hardware beyond dependencies created by the chip architecture. It should therefore be possible to transplant a disk between like-for-like architectures and expect the hardware to load the services on that disk. AKA, computer turns on.

So, if we’ve established a logical separation between the hardware and the software, then what vendors really mean by “desktop MFA” is “operating system MFA.” You’re not accessing your computer; that would require a screwdriver. You’re accessing a service, the operating system. Per our scenario above, this is a bit problematic, since anyone could, in fact, take the boot storage out of the desktop, transfer it to another machine, and voila.

But in our scenario, we did not have a TPM.

You must have that TPM (which facilitates identifying you, and only you, on an encrypted disk assigned to you, and only you) in your possession, plus another factor, to successfully access that operating system. Just having the disk doesn’t work; you cannot transplant the disk or the TPM. Per Apple’s documentation on the Secure Enclave:

“the [unique] key hierarchy protecting the file system includes the [unique] UID, so if the internal SSD storage is physically moved from one device to another, the files are inaccessible.”

That. Is. MFA. Once we acknowledge that TPMs do exist, and that they can facilitate rapid full-disk encryption and secure authentication via a biometric, we can meet a secure standard for MFA, and that standard is pretty high.

Which brings us to the “device + sticky note” argument. This rebuttal is not an argument for device MFA. It’s an argument against passwords.

“Desktop MFA” seeks to solve a problem that no longer exists. If you don’t believe me, ask NIST, where they define a “multi-factor authenticator” in the context of defining “multi-factor authentication:”

“An authentication system that requires more than one distinct authentication factor for successful authentication. Multi-factor authentication can be performed using a multi-factor authenticator or by a combination of authenticators that provide different factors.”

“Multi-factor authentication requires 2 or more authentication factors of different types for verification.”

Different types. NOT different devices. The definition of a multi-factor authenticator acknowledges that you can pass the test for MFA as long as the factor types are logically separated. That the Secure Enclave and Touch ID reader exist in the same physical space as the rest of the system’s components does not matter.
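
To make that concrete, here’s a toy model of the logic – my own illustration, not anything from the NIST text – where what gets counted is distinct factor types:

```python
# Toy model: MFA is satisfied by distinct factor *types*, not distinct devices.
FACTOR_TYPES = {"knowledge", "possession", "inherence"}

def is_mfa(factors):
    """factors: list of (factor_type, source_device) tuples."""
    types = {ftype for ftype, _device in factors if ftype in FACTOR_TYPES}
    return len(types) >= 2

# A laptop with Touch ID: possession + biometric, same physical device. MFA.
print(is_mfa([("possession", "laptop"), ("inherence", "laptop")]))    # True
# Two possession factors are still one factor type, even on two devices.
print(is_mfa([("possession", "laptop"), ("possession", "yubikey")]))  # False
```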

Consider the much simpler scenario of accessing a phone. In almost all cases, that phone is designed to be used by you and only you. You must have physical possession of the phone, and only that phone, plus a way to provide a second factor (knowledge or biometric) to access it. Again, this is MFA. It’s so effective, in fact, that the government will compel you to sit in prison until you unlock the phone if they suspect you’ve committed a crime and need the phone’s contents for evidence, assuming you are alive. In the highly publicized case of the FBI “hacking in” to the San Bernardino shooter’s iPhone, the FBI gained access not by breaking the requirement for a second factor, but by breaking a mechanism in iOS that erases the iPhone after a ceiling of passcode guesses is hit. They were then able to brute-force the passcode.

Let’s say you do decide to implement desktop MFA anyway. How are you going to do it? Are you going to allow your users to use their personal phones with an authenticator app push? How is that app secured?

If the answer is “we will force the phone’s owner to use Face/Touch ID to open their phone,” congratulations:

  • you swapped one biometric for another, and you are relying on that biometric to work consistently on an enrolled device that you do not manage, unless you are issuing phones, which is no longer common
  • in other words, you just traded a biometric (a(1)) and a possession factor (b(1)) for a different biometric (a(2)) and possession factor (b(2))
  • if a(2) and b(2) count as MFA, why don’t a(1) and b(1)?
  • you have to support it when it doesn’t work
  • you’re still susceptible to MFA bombing

If the answer is “we will require the phone’s owner to use a password to open their phone,” congratulations:

  • you are now satisfying “true” MFA, but you’re relying on a knowledge factor, which is not phishing resistant
  • you have zero assurance that the password used to open the phone and the password used to log in to the endpoint are not the same password, and they probably are.

If the answer is “we will skip the authenticator app push and issue yubikeys,” congratulations:

  • You have to deal with managing yubikeys now, good luck!
  • There is not a material difference between using a yubikey and using integrated Touch ID or Windows Hello. Consider that all yubikeys are passkeys, but not all passkeys are yubikeys. “but the yubikey is separate” – the yubikey is only separate until the second it’s plugged into the device, and it can be safely assumed that the yubikey will nearly always be in close proximity to the device.

Look, if you’re still going to say “possession of the device you are authenticating to does not count as a possession factor,” that’s fine, but you’re specifically asking for 3FA or 2(n)FA, so just ask for that if it’s what you want. No vendor calls their solution 3FA or 2(n)FA because nobody uses those terms even if that’s what they really want, and if vendors started using them, organizations would have to think critically about things like “desktop MFA” and realize that it probably isn’t necessary.

OK, so let’s go back to the password thing for a minute.

We, as security practitioners, only feel uncomfortable with the idea of a multi-factor authenticator when one of its authentication mechanisms is a password. When it comes to cryptographic and biometric mechanisms, we have always felt a high degree of reliance and assurance. Certainly higher than passwords, anyway.

So, why can’t we just acknowledge that we have a real opportunity here to make everyone’s experience both much simpler and much more secure? I can’t believe I’m going to do this, but I’m actually going to quote Elon Musk: “The best part is no part.”

It would have been crazy to suggest 10 years ago, when Touch ID was first introduced, that biometric inputs and TPMs would be as ubiquitous as they are now. Apple performed a literal miracle here: they gave regular users a method by which they could authenticate to a device securely, and users wanted to do it.

Today, nearly everyone has one of these things both in their pockets and integrated into their endpoints, yet it feels like the industry writ large is literally looking for reasons to hold on to passwords for dear life. For whatever reason, we haven’t let this new paradigm catch up to us.

As Brian Moriarty says: “The treasure is right there.”

When multi-factor authentication first started becoming part of the cybersecurity discourse, it was commonplace and accepted to rely on an SMS message as a second factor. Now, nearly no serious cybersecurity practitioner would recommend the use of SMS. “Better than nothing, but not good” is the approximate industry take on that thing.

If we were able to successfully look back on SMS and agree that we’ve evolved away from relying on it as a factor – that we can truly reflect on what made sense at the time and no longer does – then we can do the same for passwords. When modern desktops are managed well and we stop relying so much on knowledge factors, implementing a solution like desktop MFA starts to look more like a way to maintain the existing paradigm than a way to evolve beyond it.

We are at an inflection point, and it’s time to make it the reflection point. We have WebAuthn, we have passkeys, we have biometrics, we have Touch ID, we have Windows Hello, we have certificates, and we have full-disk encryption that is nearly invisible to the user.

We know what we need to do. Are we ready to ask why we aren’t doing it?


rapid fire rfq

One of my favorite talks at RVASEC 2024 was one I was most surprised with, called “Social Engineering the Social Engineers” by David Girvin at Sumo Logic.

If you work in a technical leadership role, you should absolutely view this talk, because 1) it’s funny, and also 2) it will really help you understand the relationship between you and salespeople, the ways salespeople are incentivized, and how you can leverage the sales process to inform better decision making around tooling evaluation.

The net result of this enlightenment should be a shorter evaluation-to-deployment lifecycle, enabling you to extract more technical value from your tools. Girvin broke a lot of my assumptions about sales, and I’ve already started putting some of his tips into practice to good effect.

I want to pause for a minute here and say that I have worked with some truly great salespeople and some of them have become friends of mine. I have also worked with very poor salespeople. Tech sales is a tough job and I respect it, but I also have my own job I need to do.

Anyway, Girvin makes a big deal out of transparency and not keeping the competition a secret. In retrospect, that’s actually an obvious tip that goes back to Kerckhoffs’s principle, and it’s not like vendor A is unfamiliar with competing vendor B. They “know the system.”

I had a unique opportunity to put this tip into practice, and I ran an experiment in radical candor when my org had a popular enterprise security product up for renewal. The scenario was essentially “hey, we have this tool and it’s up for renewal, we need to put cost pressure on them.” The worst-case scenario was that we couldn’t get pricing, and we’d just apply indirect pressure – after all, “this is my best and final offer” usually isn’t.

So I contacted two competitors. Vendor A and Vendor B. I contacted them through the sales intake forms on their websites. An SDR responded to me from each vendor “wanting to know what we were looking for.” Here was my response:

Thanks for getting back to me. I want to be transparent about the outreach – we are currently $EXISTING_VENDOR customers, we are pretty happy with $EXISTING_VENDOR, we are probably not looking at an extended demo or POV, and our focus is getting comparable and competitive pricing. 

Of course, if it’s a slam dunk, there might be something here, and although right now we’re doing a primarily numbers/market-based evaluation, we’re happy to get a product/feature overview.

Vendor A played ball. We had a conversation, it was a good product, we got the product overview and a brief demo, and the next day we had a quote – in fact lower, though not by much, than the existing product’s. Vendor B never got back to me after my email above. But we only needed one response. We also now know that we were talking to a legit competitor, and even in this biz, relationships are king.

During the call, I presented this as us doing a “rapid-fire RFQ” (request for quote), and it became clear to me that the SDR thought this was an unusual tactic. “Do you do this for all of your renewals?”

Well, we do now.


a more secret santa

I ran a secret Santa for my distributed friend group this year. It being 2023 and all, I used ChatGPT to do the Santa assignments and generate some creative secret Santa messages, and it did an admirable job of both. As usual, more detail is better when it comes to that thing.

Here was my prompt:

Here is one of the results:

Unfortunately as the organizer, doing these things comes with the major drawback of “well, I’m the organizer, so I know who everyone is sending a gift to,” making it only a secret-ish Santa.

There are web-based services that do exactly what I was trying to do with this project, but since none of them have the prerequisite of being a huge nerd, I decided to home-grow my own solution using Python and Ollama, a free tool for running Meta’s Llama large language model locally.

I also figured I’d throw some encryption in the mix so the message would actually be a secret, not just to the Santa participants, but to the organizer. So I slapped it together and called it “Secreter Santa.”

If you want to try this thing, you need to be running Python 3, and a recent version, because there is some reliance on the correct ordering/index of lists. That’s something I can change, but this was just a fun thing and a proof of concept. You also need to run Ollama, and there are some libraries you need to import to be able to send the prompt to the LLM and do the cryptography.

Ollama is totally free, and you can download it at https://ollama.ai – the models it runs are not as good as OpenAI’s at handling prompts, which leads to some weird results. I’ll get into that later.

I used a hybrid encryption scheme, since the return from the LLM prompt is of arbitrary length, and you can’t use RSA to encrypt data longer than the key itself.

How it works:

  1. The organizer collects an RSA public key from each participant. There are a fair number of free tools online you can use to generate keys. I wouldn’t use them for production, but for testing they can help.
  2. The program runs and prompts for the name of each participant and their RSA public key, which can be pasted in; both are stored in a dict.
  3. The program uses the random.shuffle() method to assign the participants to each other.
  4. The program sends a prompt to Ollama, which is assumed to be listening on its default port.
  5. The program generates a random 16-character AES key for each participant and uses the key to encrypt the message returned from the prompt. The encrypted message is written to a file.
  6. The program takes each participant’s public RSA key and encrypts their corresponding AES key with it.
  7. The encrypted AES key is printed to the console. (A sketch of steps 4–6 follows below.)
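
Here’s a minimal sketch of steps 4 through 6 – my reconstruction, not the actual Secreter Santa source. The model name, filenames, and prompt are placeholders, and I’m using the requests and cryptography libraries to talk to Ollama and do the encryption:

```python
# Minimal sketch of steps 4-6: AES encrypts the (arbitrary-length) message,
# RSA encrypts the (fixed-length) AES key. Names and files are placeholders.
import os

import requests
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Step 4: ask the local Ollama instance (default port 11434) for a message.
resp = requests.post("http://localhost:11434/api/generate", json={
    "model": "llama2",  # placeholder model name
    "prompt": "Write a short, anonymous secret Santa note for Alice.",
    "stream": False,
})
message = resp.json()["response"].encode()

# Step 5: encrypt the message with a fresh 16-byte (128-bit) AES key.
aes_key = os.urandom(16)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, message, None)
with open("alice_message.bin", "wb") as f:
    f.write(nonce + ciphertext)  # prepend the nonce so the recipient can decrypt

# Step 6: encrypt (wrap) the AES key with the participant's RSA public key.
with open("alice_public.pem", "rb") as f:
    pub = serialization.load_pem_public_key(f.read())
wrapped = pub.encrypt(aes_key, padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(), label=None))
print(wrapped.hex())  # step 7: send this (and the file) to the participant
```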

Once the organizer sends both the encrypted message file and the encrypted key to each recipient, they can run the “Unsecret Santa” program to decrypt and display the contents.

Unsecret Santa prompts the user for their message file, their .pem (RSA private key) file, and the encrypted key. It does the work for you from there and displays the message.
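
The decryption side is the mirror image. Again, a sketch under the same assumptions, not the actual Unsecret Santa code:

```python
# Unwrap the AES key with your RSA private key, then decrypt the message.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

with open("alice_private.pem", "rb") as f:
    priv = serialization.load_pem_private_key(f.read(), password=None)

wrapped = bytes.fromhex(input("Paste your encrypted key: ").strip())
aes_key = priv.decrypt(wrapped, padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(), label=None))

with open("alice_message.bin", "rb") as f:
    blob = f.read()
nonce, ciphertext = blob[:12], blob[12:]  # the nonce was prepended at encryption
print(AESGCM(aes_key).decrypt(nonce, ciphertext, None).decode())
```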

So, from a pure security perspective, there are some holes here, but it’s still interesting – unless you’re yanking the unencrypted message out of memory before the encryption step, there’s no way to attribute the message to any author, because it wasn’t written by a human author, and the sender of the message has no idea what it contains. There’s some room for digital signing here too, but let’s not get too far ahead.

Anyway, this is where I ran into some limitations of Ollama, where it is just a little too eager to offer its…guidance on things it wasn’t prompted about.

This result was pretty good but it’s weird to me that it offered specific suggestions as to the gift without being prompted to do so, which is not something I ever experienced using ChatGPT.

In another return, the response offered “a hint” that the recipient “loved coffee” and specifically asked their Santa to order their recipient coffee for their gift.

The results of the prompt varied pretty wildly in weird and sometimes funny ways. Some of them include lots of Instagram-worthy hashtags, some are quite formal in nature, and others are only a couple of curt sentences. I recently saw a comparison chart of large language models making the rounds on LinkedIn, and I can’t lend it too much credibility (because LinkedIn), but it did have Llama at the bottom.

Still, ya can’t beat the price.


ssl decryption

I had a brief conversation with a friend (hi Brad, again) the other night about SSL decryption. I could tell he was wary of the idea of SSL decryption in the business, and rightfully so!

Your employer breaking open the encryption on your network traffic seems like a huge violation of your privacy. I don’t really have a hard-line stance on this at work – you likely signed away your expectation of privacy on your work network as part of an acceptable use policy – but most companies have some commitment to their customers’ and users’ privacy, and it’s a foregone conclusion that people are going to do some personal activities on their work devices. Does that mean your employer has access to all of your banking information? Probably not!

I’m not going to go through every nuance of SSL (TLS) and SSL decryption, but I will go through what SSL is, how it works, what I believe a sensible policy should include, and why decryption has become such a hot topic for businesses lately.

What is SSL?

Let’s start from the beginning. SSL stands for Secure Sockets Layer, which has a successor called TLS – Transport Layer Security.

(Going forward, I will just say SSL. The differences between the two are pretty minor and beyond the scope of this post.)

SSL is a cryptographic standard for encrypting data traversing a network, most commonly across the internet. It puts the “S” (secure) in “HTTPS.” SSL uses both asymmetric and symmetric-key cryptography via a public key infrastructure.

Here’s the deal. In the IT world, we usually try to use real-world analogies to explain technical concepts, as if the things we’re talking about somehow do not exist in the real world. Anyway, when you’re talking about cryptography, this is a hard thing to do.

I can’t take full credit for this “one-way box” explanation, but I don’t remember where I read it, and I’m going to add some of my own flair. Imagine you want to buy a car, but it’s the pandemic and you’re at home and you don’t want to spend hours at the dealership with a bunch of paperwork. “No problem,” the dealer says. “Just fill your information into a secure form online.”

Now imagine that the form is a physical item and you have to give it to them in person. (Bear with me.) The dealership has a “paperwork box” to drop the form in. But this is a weird box. It has one opening in the front, and another opening in the back. The back opening is locked with a strange lock with a keyhole, but it’s so complicated, you’ve never seen anything like it.

There’s another weird thing about this box. You put the form in, and it falls out the bottom. Huh? You try again, same thing happens. Then you notice a stack of envelopes on the top of the box, with a label that says “free, take one.” The envelope has some detailed information about the dealership, manager contact info, etc. There’s even a stamp on it from the chamber of commerce** with the business license number, and signed by their representative. Clearly, this dealer really is who they say they are. You put your payment info form into the envelope, and put it in the box. Voila. It’s accepted. Congratulations on your new…I dunno. Tesla.

If you read that and said, in your best The Simpsons Comic Book Guy voice, “sir, there is a glaring technical error in your analogy,” the glaring error being that encrypted data is not encapsulated in “an envelope” but algorithmically altered so as to be unreadable in transit – yes, you are correct.

But I don’t have a good analogy for this, and I’m not sure it exists. If it does, leave one in the comments!

Let’s break down the analogy into its technical elements, and talk about how this transaction would’ve occurred if you really could send this form electronically to the dealer. You go to the form on the dealer website, and you see the “secure” icon in your address bar, indicating your connection is secure.

Your browser just went to the dealer’s website and got the public key from their web server. This is the envelope with all of the information on it – the public key encrypts the data. Anyone can get the public key. It’s free. The public key has a mate, the private key. The private key decrypts the data, or can open the lock on the back of the box. It would be pretty bad if someone else got that key, so the dealer has taken extra steps to prevent its theft. Theoretically.

You can also generate a key pair whenever you want. “Does this mean I can just pretend to be the dealership?” Not exactly. Remember the stamp from the chamber of commerce? The one with the signature? That’s an independent third-party verifying the identity of the dealer. The web works the same way.

Secure website key pairs are generated as part of what’s called a certificate signing request (CSR). Basically: “hey, chamber of commerce, can you certify that I am who I say I am, and keep a public record of it?” When the request is approved by a certificate authority, the public key of the pair* is tied to a digital signature and returned to you as a certificate you can install on your web server. Every web browser is pre-configured with enough relevant information about the certificate authority (the chain of trust) that the user doesn’t need to take any other action here, just like you don’t need to take any action to trust that your chamber of commerce has accurately assigned business licenses. Neat!
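
If you’ve never seen one, here’s roughly what generating a key pair and a CSR looks like, sketched with Python’s cryptography library (the hostname is made up):

```python
# Generate an RSA key pair and a certificate signing request (CSR) for it.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "forms.dealer.example"),
    ]))
    .sign(key, hashes.SHA256())  # signed with the private key; the CA never sees it
)
# This PEM blob is what you send to the certificate authority.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```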

There is a little more to the process. When your browser verifies the certificate of the website, it uses their public key to encrypt some random data to send to the web server. Remember, only the web server can decrypt this data with its private key. This data becomes the session key between both machines. The web server decrypts this session key and returns a message to your browser, as if to say “I can prove to you that I have the key to this box, and I can open it.” Because both parties are now using the same symmetric key, data can go both ways. It’s pretty cool. This process is called the SSL handshake.
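
You can inspect the end result of that handshake with nothing but Python’s standard library, which is a handy way to see what your browser sees (the hostname is arbitrary):

```python
# Open a TLS connection and inspect what the handshake negotiated.
import socket
import ssl

ctx = ssl.create_default_context()  # uses the system's chain of trust
with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())      # e.g. 'TLSv1.3'
        print(tls.cipher())       # the symmetric cipher suite both sides agreed on
        cert = tls.getpeercert()  # the server's certificate, already verified
        print(cert["subject"], cert["issuer"])
```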

Here’s an image of the handshake from IBM. All credit to them.

This technology is really foundational to privacy and security on the internet. You can learn more about the encryption algorithms – there’s plenty of info out there.

SSL Decryption

This isn’t a foolproof process, though messing with it takes some resources. An organization can declare itself a certificate authority for its users and push its own root certificate to their endpoints. Since business endpoints will trust this enterprise certificate authority as a legitimate entity, the certificates they receive appear to be from the destination they’re trying to reach. This internal-only certificate is (definitely should be) tightly controlled and distributed only to managed endpoints.

From here, the decrypting device can act as a “man in the middle” and can proxy requests for secure websites. Because the endpoint trusts the decrypting device, and the decrypting device has (or has immediate access to) the private key, the device decrypts the traffic, inspects it, then forwards it to the real website using the same process we already went over. The real website doesn’t know any better.

So to sum that up, the decryption process necessitates:

1) A client’s willingness to have its chain of trust manipulated
2) The proper certificates to enable the decryption process
3) A device that can facilitate the work of performing the decryption
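
Incidentally, requirement 1 is also how you can tell whether your own traffic is being decrypted: the certificate your browser receives will chain to the enterprise certificate authority instead of a public one. A quick sketch (the issuer names here are made up):

```python
# Check who actually issued the certificate your machine received.
import socket
import ssl

ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        # getpeercert() returns the issuer as nested (key, value) tuples.
        issuer = dict(pair[0] for pair in tls.getpeercert()["issuer"])
        print(issuer.get("organizationName"))
        # A public CA (e.g. "DigiCert Inc") suggests no interception here;
        # something like "ExampleCorp Enterprise CA" means decryption is on.
```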

The question is, why?

Challenges in the Business Environment

Man, getting an SSL certificate used to be a process. In my day (combs neckbeard) you had to pay $20-ish for the certificate, then create the CSR, then upload it, then get the certificate, then there’d be some setting in IIS or httpd.conf you’d have to change, then invariably you messed something up, then you’d have half your site on not-encrypted http and other parts on https, then you’d have to restart httpd, then you’d look at some other thing for a while and forget what you were doing to begin with.

BOR-ING. Now you can just use Let’s Encrypt and certbot to get a free certificate installed on your web server in like a second. BOOM. You’re good to go faster than Sonic the Hedgehog after a pile of chili dogs. Sheeeeeeeeeeeeeeyit.

What’s the most common cyber-attack? DDoS? Maybe, but if phishing isn’t the most common by now, it’s extremely close. So let’s say you have a user base that isn’t the most technically inclined.

What have they been taught all their lives? Green padlock = safe! So when they click a link in a phishing email, and that link takes them to “their bank” or “the company benefits page,” they see the green padlock. “This is safe,” they think, then they put their credentials in and press enter.

For IT security enthusiasts, privacy advocates, and professionals, this truth is pretty uncomfortable. It’d be a reach to say something like “well, if everything is encrypted, nothing is encrypted,” but that’s sort of…we are on that bus. Don’t get me wrong, I think Let’s Encrypt is an amazing project and will continue to do great things for the internet. But encryption is a tool, and tools can be and are used for harm, and the people who stand to be most harmed are not C-level executives, but employees trying to do their best. There are indisputably personal and professional impacts to people acting in good faith but affected by cyber threats.

If C&C traffic, malware traffic, and phishing websites are able to operate and communicate in a way where researchers and defenders have no insight into them, I’m worried about what that means for the next conversation we have about encryption at large. Businesses having sensible, practical, people-first decryption policies is a decent set of brake pads we can put on the bus.

Sensible Policies

Back to the concern about banking. I believe all decryption devices are able to selectively apply decryption, and if you are looking at a device where that is not an option, please look elsewhere. The engineer who is configuring your decryption should be able to put sites like https://bankofamerica.com in their decryption exclusion list, where the enterprise certificate authority is not used to facilitate an SSL forward proxy, and the chain of trust is not altered for that session.

After you’ve reviewed the legal obligations in your area about encryption, and had a conversation with HR and your leadership team about the go/no-go, consider the way you’ll implement your policy on the whole, and how it can add value to your business without completely betraying the trust of your users. Remember that the practical applications of cybersecurity are, first and foremost, value-oriented activities. Consider the messaging you provide to your teams and stakeholders.

What sounds better to you?

“Beginning Monday, we will be implementing web decryption on our network. We expect you all to sign new acceptable use policies regarding the use of this new technology.”

“Given the recent expansion of email phishing, ransomware, and malware attacks on organizations across the country, we’ve decided to implement web decryption to keep our business assets and users safe. We’ve worked with our partners to come up with a deployment solution that only targets suspicious activities and have attached updated documentation that explains what this means for you.”

Have a strategy. Vendors are more than willing to work with you on this; because of the increased processing requirements for decryption, it’s often an avenue for them to make another sale. Palo Alto Networks has a very helpful page about coming up with a decryption strategy for your network, even if you don’t use their products.

Leverage your synergies.*** How does your decryption device fit into the rest of your network? Is it a firewall? Can you set up your decryption based on existing URL categories? For example, you might decrypt on “unknown” or “web-posting” (think pastebin) but not on “banking” or “ecommerce” – see the toy policy below. Are there any data loss prevention or credential theft features you can also take advantage of?
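
Here’s a toy version of that category logic – illustrative only, not any vendor’s actual configuration:

```python
# Toy category-based decryption policy: decrypt risky categories, skip private ones.
DECRYPT_CATEGORIES = {"unknown", "web-posting", "parked", "newly-registered"}
EXCLUDE_CATEGORIES = {"banking", "ecommerce", "health", "government"}

def should_decrypt(url_category):
    if url_category in EXCLUDE_CATEGORIES:
        return False  # privacy-sensitive: leave the chain of trust alone
    return url_category in DECRYPT_CATEGORIES

print(should_decrypt("web-posting"))  # True: think pastebin-style exfil
print(should_decrypt("banking"))      # False: the employer never sees it
```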

Be transparent. You are indisputably taking privacy away from your users here, even if they know they don’t have an expectation of it. You owe them a thorough explanation of the process and how they may be affected. What websites are you decrypting on? What was the business justification for decrypting that website or category?

I hope that’s a helpful primer to SSL, decryption, and why we’re seeing more and more of it at scale. Having to implement this technology in the business is a nose-holding endeavor, but I do see it as increasingly necessary as the majority of the internet goes secure and we see the continued proliferation of cyberattacks. If you’re leveraging this in your organization, how’s it going? Let me know in the comments.

*(The private key is too, because the public and private keys are inextricably linked, but the certificate authority doesn’t need your private key to generate the certificate. In fact, don’t send the private key to them. Don’t send it to anyone. Seriously.)

**(It has been pointed out to me that this is not actually something a chamber of commerce does. This is why I am a technologist and not a businessy businessperson. INTERNET: SERIOUS BUSINESS.)

***(Did I really write these three words?! In a row?!)