
color me perplexed

In the early through mid 2000s, my nerdier friends and I had a reputation for being able to remove malware (back then, we didn’t really call it “malware”) from computers. Save for a few bricked machines due to our novice and woefully incomplete grasp of the command regedit, we would clean the malware off of your computer for $20. If we couldn’t do it, there was no charge.

Back then, it was still common to have desktop computers set up at a computer desk in a “computer room” or “home office,” illuminated by an overhead boob light and the ethereal glow of the then-ubiquitous CRT monitor.

Revisiting the spyware era

The most difficult software to remove on computers in the 2000s was a flavor of malware called spyware (or “adware”, which is the same thing). The word “spyware” is not often used today because so much software can be classified as spyware in one form or another that the word has almost no meaning. The word was invented by Zone Labs (now Check Point) after a parent noticed an alert from their Zone Labs personal firewall about data being sent back to the Mattel toy company via the children’s edutainment program Reader Rabbit.

The majority of spyware, though, was delivered through a browser. Browser security controls, especially in Internet Explorer, were weak, and the web was still a Wild West of cobbled-together HTML, fan pages, and Flash Player content. If you remember web toolbars, you’ve come to the right era.

Spyware was particularly difficult to remove because its creators had the resources and the financial motivation to keep developing products that were compelling, purported to offer (or actually did offer) a legitimate service, and would only work effectively if they maintained deep persistence in the target operating system.

One of the most notorious spyware operators masquerading as a legitimate business was the Claria Corporation, which essentially invented behavioral marketing. Claria made a piece of pack-in software called Gator eWallet, included as an optional-but-easy-to-miss install with other software of the time (think Kazaa, which is not something I thought I would be writing about in 2025) that was free, of an otherwise questionable nature, or both. Gator eWallet was an autofill program that captured personal data and used it to sell advertisements, with very limited user understanding of how the program actually accomplished this (by displaying copious amounts of targeted and non-targeted advertising in the form of pop-up ads). If you want a more extensive history of the Gator eWallet program and how it worked, I found one written by Ernie Smith for Tedium in 2021. Ernie appears to be quite active on Bluesky (I don’t have a Bluesky).

Of note in Smith’s writeup is that an article decrying Claria/Gator for their practices is still available online through PCMatic, even in 2025.

Gator, like most Windows applications, had an uninstall capability baked into the Control Panel, but in practice the uninstall function did not work, or it only worked until the browser was re-launched and Gator reinstalled itself, or it would be installed again through some other vector. In any event, artifacts of Gator persisted deeply in Windows XP, and the company remained in operation until overwhelming negative consumer sentiment killed it, despite a couple of failed rebrands.

But it’s OK now…right?

Unfortunately, the negative consumer sentiment that killed Claria and made spyware an untenable business model did not persist. Whether by intent or not, and I suspect the former, tech companies have continued their slow and insidious war against individual privacy for nearly 30 years, and they’re winning.

Gator was the first software I thought of when I read an article from TechCrunch about the upcoming Perplexity browser, called Comet: “Perplexity CEO says its browser will track everything users do online to sell ‘hyper personalized’ ads.” To quote Julie Bort from TechCrunch:

“CEO Aravind Srinivas said this week on the TBPN podcast that one reason Perplexity is building its own browser is to collect data on everything users do outside of its own app. This so it can sell premium ads.”

And:

“Srinivas believes that Perplexity’s browser users will be fine with such tracking because the ads should be more relevant to them.”

In case it wasn’t already clear what I was getting at here: you, the user, should not be fine with this. Nor should you be fine with half-baked features like Recall, a Microsoft service that will “help you” remember your computer activities but will almost assuredly be used at some point to sell advertisements.

The takes I read on this thing from the webosphere are astounding. The prevailing sentiment is ambivalence. There is also a sense of fatalist defeatism, all the way to one commenter saying “well, Elon has all of the data anyway,” and other insidious variants of the old “I have nothing to hide” argument.

Y’all, this is not OK.

One of the most compelling papers I have read on the subject of privacy is not from the cybersecurity space directly, but from law. The 2007 paper is by Daniel Solove, a professor at The George Washington University Law School, and is called “‘I’ve Got Nothing to Hide’ and Other Misunderstandings of Privacy.” Solove decomposes the argument in plain language, and I highly recommend reading the paper in its entirety. Solove approaches the subject from the context of government surveillance – 2007 was near the apex of discourse around the NSA’s warrantless surveillance programs – but the current state of advertising tech applies wholesale, and I don’t figure Solove predicted (or could have predicted) the speed and scale of the erosion of digital privacy through 2025.

One of Solove’s more curious conclusions is that privacy is actually poorly defined:

“Ultimately, any attempt to locate a common core to the manifold things we file under the rubric of “privacy” faces a difficult dilemma.”

And lands here:

“The term privacy is best used as a shorthand umbrella term for a related web of things.”

And finally:

“In many instances, privacy is threatened not by singular egregious acts, but by a slow series of relatively minor acts which gradually begin to add up.”

One of the cruxes of Solove’s argument is that you need not be acutely injured to be a victim of the erosion of privacy. The rise and fall of spyware as a “legitimate” service shows that people did in fact perceive injury from the erosion of their privacy, from their poor understanding of how their privacy was being invaded, and from the constant interruption of pop-up ads. We have since come to accept the structural decline of privacy in exchange for being, ourselves, products to advertisers. Death by a thousand platforms.

How did this happen? We focused too narrowly on the quality of service we get for “free” in exchange for giving up privacy, and fell into the same trap of systematic and systemic myopia that fuels the “I’ve got nothing to hide” argument and (by extension) facilitates the continued enshittification of the web.

On top of all this is a practical consideration: in exchange for serving you “hyper-personalized ads,” what benefit does the Comet browser actually offer you?

For a social media platform, the tradeoff is pretty obvious. For Comet, the benefits aren’t really all that clear. It markets itself extensively as an AI-powered browser. I don’t really know anyone who is interested in an AI-powered browser, and don’t really understand why companies feel a need to bake AI into anything and everything from the bottom up. But I know what I do not want, and that is “hyper personalized ads.” If I want to use AI today, I can just go to ChatGPT, and ChatGPT does not serve ads (today).

Tech money strikes again

So here is where we get to the thing in the podcast with Perplexity’s CEO, Aravind Srinivas, that spawned the TechCrunch article. Buckle up:

“People like using AI, they think it’s giving them something Google doesn’t offer.”

Yes, the “something” is “information quickly without ads.”

“We want to get data even outside the app to better understand you, because some of the prompts that people do in these AIs is purely work related. It’s not, like, that personal. On the other hand, like, what are things you’re buying? Which hotels are you going…which restaurants are you going to? What are you spending time browsing – tells us so much more about you that we plan to use all the context to build a better user profile…and show some ads there.”

Look. I’m no CEO. I’m not trying to pick on this guy. He’s successful and has a business. I do not. I’m just some fucko on the internet. I get it.

But essentially what he is saying is this: “it’s a problem that people are able to use AI without ads, and we need to solve that problem by showing them ads, and the only way we will get them to pay attention to the ads is by spying on them extensively.”

That is the most big tech-ass big tech shit I have ever heard. I am sure someone in the boardroom at VC capital whatever came up with this, and everyone just thought it was a great idea and just decided to go with it. It has really nothing to do with providing a good user experience and everything to do with making money.

In conclusion

The point I am trying to make is to consider the erosion of privacy we have seen, and continue to see, beyond a fatalist and narrow framing like “well, they have my data anyway” or “well, I have nothing to hide.” OK, then stop giving it to them – and “having nothing to hide” is 1) not true and 2) not exactly the point.

I don’t really know how else I can illustrate the extent to which advertising tech creates opportunities for advertisers by using you as the product. I’m not going to suggest that every one of these tradeoffs is not worth it – sometimes it is. If you are getting real value out of a free social service, that’s fine; all I ask is that you zoom out a little bit to understand the context, value, and nature of the data you are generating for them.


i got laid off on paradise point

It was always going to be a long day. We were awoken on the 18th deck of the Norwegian Escape by engine noise, particularly unusual as our room was forward on the ship. In an impressive maneuver, the Escape’s captain had backed her up on the pier at St. Thomas, USVI, in front of the Enchanted Princess. We had tickets for the gondola up to Paradise Point, a scenic overlook, the base of which is about a half mile walk from the pier.

If you take this excursion, prepare for a long wait. The line to get into the gondola was over half an hour, but since St. Thomas is a US territory, our regular Verizon service kept the kids entertained (yes, we gave them THE PHONES) without any extra fees. The ride is standing room only and does indeed provide a magnificent view of the richly verdant St. Thomas and the surrounding azure seas of the Caribbean and Atlantic.

Paradise Point is something of an elaborate tourist trap that aggressively markets such overpriced libations as the “Bailey’s Bushwacker [sic],” which you should skip in favor of a frosty(TM) at the recently renovated Wendy’s at the bottom of the overlook. The bushwacker is advertised as a “chocolate piña colada” which really should be an indication that you should not get it, in spite of – or maybe because of – its total lack of piña and/or colada. Numerous tchotchkes and mediocre nachos were also available for sale. It was here we met a fellow tourist who, in her 50s, had inexplicably never heard the word “tchotchke,” which bodes poorly for us as tchotchke aficionados in our late 30s who enjoy cruising. Keep the AARP card warm for me.

At any rate, tchotchkes or not, the views were worth the cost of the ticket.

The kids, either unwilling or unable to appreciate the resplendence, were satiated by an oversized game of Connect Four on the point, which they played repeatedly despite not knowing at all how the game worked. If played by certain adults, the game could have been called “tariffs,” actually. This provided precious and brief time between our 3-year-old’s meltdowns to take some pictures and continue reflecting on my decision to spend $14 on Bailey’s, ice, and Kahlua.

Literally minutes after taking this picture I received a phone call from an unknown number that I ignored. The caller left me a voicemail; it was my manager’s manager’s manager calling me with some “important information.” I had an inkling as to what this was about, but I didn’t have my work phone, and I couldn’t be sure. So I called back, and it was the aforementioned 3x manager, my manager’s manager (the 2x), and HR. And sure enough, drink in hand, gazing upon the magical Wendy’s of the West Indies, I was cooked, or “RIF’d” in government parlance, albeit with a fairly generous severance package considering my short tenure.

I mouthed the words “LAID OFF” to Ashley, who shot me a distinct “aw shit” look. I was hired to guide strategic decision making at the IRS with regard to their cybersecurity program, which I guess is no longer an area of interest for the part of the government that collects money. In a cruel irony, I had escaped multiple rounds of layoffs at my last employer and was optimistic about the stability provided by a company like MITRE (I accepted the offer before Trump’s inauguration). Almost immediately after I started – maybe even before – I felt like I’d perhaps made a mistake in joining, given yet another round of “extraordinary times.” But MITRE, much like the rest of the country, was dealt a bad hand with Trump’s election. In fact, this wasn’t entirely a surprise; when talk of layoffs was picking up the previous week, I remarked in an internal team chat that if I were in a position of leadership at MITRE or any other federal contractor, I would be looking at people like myself (new, uncleared, an unapologetic exhibitor of dad humor and 90s karaoke) if I needed to quickly cut costs.

I’m angry and disappointed – not for my own career, which will survive, but that I too was summarily DOGE’d in the service of billionaires and our current president, noted adulterer and convicted felon. To be clear, I’m not against the idea of DOGE on principle, and would have been fine with being laid off had I worked myself out of a job. Maybe I would’ve earned a commemorative tchotchke for that one, maybe a novelty headstone adorned with a Shiba Inu. “Here lies Joe. He got DOGE’d.”

As for the vacation, we had paid for it in full several months ago, so we enjoyed it. Norwegian Cruise Line took good care of us, as they always do. It was truly a “there’s nothing I can do about this right now, today, tomorrow, or this week” situation, and I was grateful to not be the one in the extremely unenviable position (dear Elon: laying people off is supposed to feel bad) of making the calls. Now that I’m back, it does sting in a more tangible way, but I’m ready to move forward, because that’s all I can do. I’m grateful that I got to work at MITRE. A quote from the great sage Jimmy Buffett is apropos here: “if life gives you limes, make margaritas.”

Anyway. This too shall pass. I didn’t waste any of the Bailey’s though. Consider that bush wacked.


even more takes about phishing tests

I have been meaning to write this post since I saw the article from The Wall Street Journal about phishing tests come out, but life got in the way a little bit (always does) with the little ones and a job change. Anyway, the article is called “Phishing Tests, the Bane of Work Life, are Getting Meaner.” Of course the article is paywalled, but if you have some free article credits or an Apple News subscription, you can read it.

A few years ago I interviewed someone and I asked him a question I ask other people in senior+ security levels. The question is, and there is really no wrong answer, “what is your spiciest infosec/cyber hot take?” And he said, without missing a beat:

“I think phishing tests are total bullshit.”

I did not really agree with that statement at the time (we made him the offer!) and I don’t agree with it now. Most orgs are overthinking the value proposition of phishing tests, and should probably keep investing in them. On the other hand, there is this utterly bananas take from the article from a CIO of a health system:

“The first time employees…fail a phishing test, they lose external email access for three months. The second time, it gets cut for a year. The third, they’re fired…’I tell them it is draconian until we have an attack and we have to take our medical record system offline.'”

Hoo. Boy. Come to think of it, I might just “accidentally” fail those first two tests. Oops!

So let’s get a few things out of the way on that side:

  • It is OK to mandate training for people who click on links and OK to revoke their access if they do not complete the training.
  • It is not OK to deprive someone of their livelihood because they clicked on a link.
  • If you are a CIO/CISO, and your information security posture is so brittle that your critical infrastructure will fail if someone clicks on a link, it is you, not the employee, who is the problem.

Internet systems, like email, are designed to be end-to-end. It is the responsibility of the CIO/CISO, not employees, to protect their end. It’s good to get employees to participate in the process, and in fact acknowledging that security is a two-way contract with an organization is an effective strategy (I will write more on this another day). Threatening employees’ jobs because they are humans who are vulnerable to attacks that prey on human behavior is ridiculous.

There is broad consensus in information security that humans are, in fact, the weakest link. Rather than using that fact as a pretext to create misaligned policy, use it as data to inform your defense strategy.

In other words: there are reasons to fire employees for not participating in a security program. Clicking on a link is not one of them.

OK, so where does that leave phishing tests?

First of all, let’s enumerate the numerous shortcomings of phishing tests:

  • Most commercial phishing tests are out of step with how threat actors operate. The most effective phishing (in 2025) uses a variant of “living off the land” – leveraging service abuse to deliver a legitimate, not malicious, link to a subject. The legitimate link takes the form of a valid OneDrive or Google Drive link. That link leads to a valid document, and the document (be it an invoice or whatever) contains the malicious link. Offense in depth. These mails are able to bypass nearly all commercially available forms of email protection by hijacking the internet’s end-to-end principle: “just deliver the email.”
  • “Message header from” fields (trivially spoofed) from phishing simulation domains are easily detected and blocked by above-average users, skewing the reporting analytics.
  • Despite whatever quasi-objective measure of “phishing resistance” your vendor claims to provide, you have no assurance based on phish “reporting” that your employees are actually better at detecting phishing messages. You may have some assurance that they are good at detecting phishing messages from a particular phish testing platform.
  • Aggressive phishing tests undermine employees. I could not put it better than Matt Linton at Google, who said this in an interview with PCMag: “employees are upset by them and feel security is ‘tricking them,’ which degrades the trust with our users that is necessary for security teams to make meaningful systemic improvements and when we need employees to take timely actions related to actual security events.”

You don’t seem like you like phishing simulations very much.

Correct. However, I have seen them work the way they are intended to work: as a tool to raise security awareness and facilitate some cyber hygiene. Nothing more, nothing less.

And there is, usually, value there.

What the flu vaccine teaches us about phishing simulations

Every year, we are told over and over again to get the flu vaccine. Every year, we are also told that the flu vaccine is not very effective. Maybe 60% on a good year. The 2024-2025 flu vaccine was only 51% effective. But we still get the flu vaccine because preventing a pathology is infinitely better (and substantially less expensive) than trying to treat it.

That is what a phishing test is. Phishing simulations are flu vaccines for orgs:

  • Employees are already experts at using email; using email takes no additional special training
  • It takes relatively little training to get ordinary users to spot low-effort phishing emails
  • Mistakes made by employees in catching phishing tests can be corrected. Again, it is OK to tell an employee they need to do training, and they will lose access…if they don’t do it.
  • Testing is very inexpensive and can even be done in house. Do you suspect your users are conditioned to detect all emails from your vendor? Put that to the test! Buy a domain and send your own phishing sim! Take that back to your vendor!

Put a dollar value on the cost of an incident, even a minor one, even one with no material impact (it was just an “event”). Assuming even a small handful of employees were able to detect a true positive phishing email, you have likely recouped the value of the investment. If you are looking for a metric to measure the effectiveness of your phishing simulation program, focus on the number of emails reported and how many reported emails are true positives. That’s literally it.
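
To make that concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is a hypothetical placeholder (not a benchmark from this post or anywhere else); plug in your own incident, triage, and program costs.

```python
# Hypothetical back-of-the-envelope math for a phishing simulation program.
# All inputs are placeholders -- substitute your own reporting and cost figures.

reported_emails = 120          # emails reported by employees this quarter
true_positives = 9             # reports that were real phishing, not the sim or junk mail
triage_hours_saved_per_tp = 4  # analyst hours avoided because the email was reported early
analyst_hourly_cost = 85       # fully loaded hourly cost of a security analyst, in dollars
program_cost = 5000            # quarterly cost of the simulation platform, in dollars

true_positive_rate = true_positives / reported_emails
avoided_cost = true_positives * triage_hours_saved_per_tp * analyst_hourly_cost

print(f"True-positive report rate: {true_positive_rate:.1%}")
print(f"Estimated avoided triage cost: ${avoided_cost:,} against a program cost of ${program_cost:,}")
```

If the avoided cost covers the program cost, the math works; if it doesn’t, that’s worth knowing too.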

Trying to pretend that phishing simulations are actually an effective tool for security is nonsense. They aren’t. Someone will click. Do not fall for vendor marketing puffery trying to distract you by claiming phishing simulations will make your organization more secure. They won’t. But they might save you some time and a few headaches.

Defense is always behind offense, etc. Phishing sims can be an effective tool for cost savings: the time and hours your security team would spend on event analysis are saved when someone does not click – that work is simply never created. As a CIO/CISO, you need to figure out the dollar amounts and go from there.

Usually the math works. It’s really that simple.


reflections on a master’s degree

One of the things I heard a lot before I started my master’s was that it wasn’t really necessary to be successful in the technology space. Since I’ve completed my degree, I thought I would put some thoughts together about my experience and offer some perspective on the subject. I just earned a Master of Information Technology from Virginia Tech. The program site is here: https://vtmit.vt.edu

The tl;dr is that no, I don’t think a master’s is necessary to be successful in technology. But I have already realized benefits from my degree program, and intangibly it has made me a much stronger leader and engineer and established a deep confidence in my career and my ability to execute – which is the very thing I was expecting to get out of the degree in the first place.

As tangible benefits go, the master’s degree program cost me about $36,000. During the course of obtaining the degree, I have been promoted two times and have secured a role of technical leadership. Between a job change and the promotions (in three years), my annual salary has risen by almost $40,000. While obviously this is not entirely attributable to the degree, part of it certainly is. Since the salary increase is effectively permanent, and there are about 25 years left in my career, this was well worth it for me, financially speaking.

Why I went for a master’s

A high-up cybersecurity executive recently asked me why I was motivated to get a master’s degree. You should have a good sense of your “why” before you commit to a master’s program. Mine was originally practical: it would have led to a pay bump in my previous role and my previous employer would have covered part of the cost. I left that role, but stayed in the program anyway, because I wanted to step up in my career and be able to speak cybersecurity more eloquently to the business. I absolutely achieved that goal through my studies. I also wanted to have an opportunity to teach (undergraduates) part-time, which is really not possible with only a bachelor’s.

In fact, some of the best courses in the degree program were also part of the Pamplin MBA program, and for a brief period I considered switching to the MBA program entirely. In retrospect I am glad I did not do that because I valued the technical content I got out of the information technology courses.

Getting a master’s degree while working full-time and raising children is no joke.

I registered for my first class while we were at the hospital the day after my daughter was born. I knew it was going to be a challenge, and it was. It is not for everyone. If you want to do this, you will lose almost all of your free time. I also gained weight and my hair grayed.

I am writing this while we are on vacation and I was struck when I looked in the mirror and noticed that I looked younger. I have been walking regularly for a couple of weeks and have been sleeping relatively well for the first time in years. I’m looking forward to feeling like myself again.

It’s hard to overstate how much you give up when you commit to something like this. The feeling of achievement, however, is pretty remarkable.

Not every hiring org values a master’s degree…but most do

I have a lot of opinions about certifications in cybersecurity. Most of them are negative, with the notable exception of SANS – if you can take a SANS training, you should. I have held cyber certs in the past and almost always let them expire. I currently work for a company that, unfortunately, does not have the budget for certifications or professional development. I have taken calls from recruiters in the past few months that have gone like this:

“Do you have any certifications?”

“I have some inactive ones (CCNA, Security+, CompTIA CASP, SANS GCED), but right now I’m working on a master’s degree with a concentration in cyber.”

“Have you considered getting some certifications?”

“I will happily obtain a certification if the cost and time are covered by my employer, but my professional growth is focused on my degree program.”

A master’s degree relevant to the position should carry more weight/have more impact on a resume than a certification. If your organization’s recruiters disagree with this, I think they are wrong. I also think certifications by and large are a money grab, and again – a degree or a cert is not required to be successful in tech. But that is OK. My company also did not help me pay for my degree (I paid cash and lived conservatively, and got support from my wife, who does well for herself being an attorney). Smaller technology companies seem to value the master’s degree less tangibly, but again, I can attribute my promotions at least somewhat to the degree program.

What I want out of my career has changed

My career was already going in this direction, but I am much less interested in highly operational, hands-on work. I have embraced being a technical leader and moving the focus of my work into more strategic and less tactical decision making. I make this pretty clear when I have conversations with recruiters. The day-to-day is just not a good use of my skills anymore – getting a master’s will change the overall tenor of your skillset, especially if you are mid-career like I am.

I still firmly believe that the best technology leaders have hands-on experience, and there are areas of professional development that I neglected (especially in cloud) over my degree program, so there is definitely opportunity cost there. I don’t think I want to ever be completely hands off, because being hands on is what got me into tech in the first place, but I derive a lot of career joy out of mentoring and building, not so much operating. 

The degree program amplified that and my desire to work somewhere where what I can offer is more strategic and entrepreneurial. Many people think the goal of education is to increase your “known knowns.” This is partially true, but the real goal is to decrease “unknown unknowns.” Being successful in your career and in business is a product of being able to make accurate predictions and understanding risks. Education reduces the space where those predictions are wrong and enables you to take good risks. You (and I) will still be wrong, nobody is perfect, but you will be right more often on higher-paying bets. 

Since placing those bets well is a skill I learned through my studies (and through experience), I want to be somewhere that I have the agency and authority to place them, which will be more to the benefit of the organization than a role that is exclusively hands-on.

Requirements gathering is one of the most difficult things to nail down for any technology project. I happen to be quite good at requirements gathering, to be honest, though I attribute that more to raw career experience than the degree program. Anyway, once you have gathered your requirements you need some way to execute on them as efficiently as possible. If you know what the “dead ends” are in advance, you can proactively avoid them, but the only real way to know what the dead ends are is by experiencing failure. Education distills those experiences into something tangible by turning an “unknown unknown” into a “known unknown.”

This is why formal education is useful even though the coursework is always a step behind the bleeding edge. That lag is real, and I experienced it during my studies at Virginia Tech; however, people who instinctively dismiss the value of formal education because of it do so at their peril.

Of course, when you realize that you don’t know things, you realize how much you don’t know. And nothing drives that realization home (that I know nothing) like getting a master’s degree. It is both humbling and weird.

OK, so what?

I’m very satisfied with what having a master’s has brought me both tangibly and not. There are a lot of naysayers in the tech space around formal education. My advice is to ignore them, but also be aware of what you are getting into and have a strong sense of what you are trying to achieve with a master’s degree before embarking on the journey, because it is a long road.

It is also true that a degree (of any kind) is not strictly necessary to be successful in tech. Many people in tech are smarter and more successful than I am without any degree. That is great! But few things have brought me more personal and career satisfaction than obtaining it for myself. It’s hard to put a dollar value on the feeling of accomplishment, capability, and confidence.

That’s all for now. If you have questions about education in your career feel free to email me and we’ll set something up.


you probably don’t need desktop mfa

Update 9/25/24: I was informed after writing this post that NIST has published a draft of the latest revision of SP 800-63B, Digital Identity Guidelines. I recommend referring to section 3.1.7, “Multi-factor cryptographic authentication.”

We recently had a conversation about password rotations at work. One of our engineering managers suggested we implement a password rotation policy.

When I explained to them that was no longer a recommended practice, another engineering colleague suggested we implement multi-factor authentication (MFA) on endpoints (laptops). At nearly the same time, Okta started pushing an add-on called Okta Device Access, where one of the main features is “Desktop MFA.”

(To be clear, I am a fan of Okta. Okta is a good company. They have been good to me in my career. I’m picking on them here because my current employer uses Okta, and therefore, I use Okta. Okta is not the only organization that offers Desktop MFA. I also have a Yubikey. Yubico is also a good company.)

Both of the conversations I had about implementing this feature were identical.

“We/you should require MFA to log in to desktops.”
“What do you mean?”
“You should require a second factor to get in besides just the password.”
“But, you have to have the device, right?”
“Yeah…”
“That is MFA. The device is something you have, and the password is something you know.”

The counter-argument for this that I hear is “well but if they have your device, the password is the weak link.” What people who argue this actually mean is “possession of the device only authenticates the machine, not the user,” but we’ll get to this later.

Assume for a minute that trusted platform modules (TPMs)/cryptographic subsystems such as the Secure Enclave don’t exist. Now consider the relationship between a computer user and a computer, a series of interactions with hardware at a base layer and a collection of services on top of the hardware layer. The majority of the services are independent of the hardware beyond dependencies created by the chip architecture. It should therefore be possible to transplant a disk in like-for-like architectures and expect the hardware to load the services on that disk. AKA, computer turns on.

So, if we’ve established a logical separation between the hardware and the software, then what vendors really mean by “desktop MFA” is “operating system MFA.” You’re not accessing your computer; that would require a screwdriver. You’re accessing a service, the operating system. Per our scenario above, this is a bit problematic, since anyone could, in fact, take the boot storage out of the desktop, transfer it to another machine, and voila.

But in our scenario, we did not have a TPM.

You must have that TPM (which facilitates identifying you and only you on an encrypted disk assigned to you and only you) in your possession, plus another factor, to successfully access that operating system. Just having the disk doesn’t work; you cannot transplant the disk or the TPM. Per Apple’s documentation on the Secure Enclave:

”the [unique] key hierarchy protecting the file system includes the [unique] UID, so if the internal SSD storage is physically moved from one device to another, the files are inaccessible.”

That. Is. MFA. When we incorporate the fact that TPMs do exist and can facilitate rapid full-disk encryption and secure authentication via a biometric, we can meet a secure standard for MFA, and that standard is pretty high.

Which brings us to the “device + sticky note” argument. This rebuttal is not an argument for desktop MFA. It’s an argument against passwords.

“Desktop MFA” seeks to solve a problem that no longer exists. If you don’t believe me, ask NIST, where they define a “multi-factor authenticator” in the context of defining “multi-factor authentication:”

“An authentication system that requires more than one distinct authentication factor for successful authentication. Multi-factor authentication can be performed using a multi-factor authenticator or by a combination of authenticators that provide different factors.”

“Multi-factor authentication requires 2 or more authentication factors of different types for verification.”

Different types. NOT different devices. Multifactor authenticators acknowledge the possibility of passing a test for MFA as long as the factor types are logically separated. That the Secure Enclave and Touch ID reader exist in the same physical space as the rest of the system’s components does not matter.

Consider the much simpler scenario of accessing a phone. In almost all cases, that phone is designed to be used by you and only you. You must have physical possession of the phone and only that phone, and a way to provide a second factor (knowledge or biometric) to the phone to access it. Again, this is MFA. It’s so effective, in fact, that the government will compel you to sit in prison until you unlock the phone if they suspect you’ve committed a crime and need the phone’s contents for evidence, assuming you are alive. In the highly publicized case of the FBI “hacking in” to the San Bernardino shooter’s iPhone, the FBI was able to gain access not by breaking the requirement for a second factor, but by breaking a mechanism in iOS that erases the iPhone after hitting a ceiling of passcode guesses. They were then able to brute force the passcode.

Let’s say you do decide to implement desktop MFA anyway. How are you going to do it? Are you going to allow your users to use their personal phones with an authenticator app push? How is that app secured?

If the answer is “we will force the phone’s owner to use Face/Touch ID to open their phone,” congratulations:

  • you changed one biometric for another and are relying on that biometric to work consistently on an enrolled device that is not managed by you, unless you are issuing phones, which is no longer common
  • in other words, you just decided to swap a biometric and a possession factor – call them a(1) and b(1) – for a different biometric and possession factor, a(2) and b(2)
  • if a(2) and b(2) are acceptable factors, why are a(1) and b(1) not?
  • you have to support it when it doesn’t work
  • you’re still susceptible to MFA bombing

If the answer is “we will require the phone’s owner to use a password to open their phone,” congratulations:

  • you are now satisfying “true” MFA, but you’re relying on a factor (knowledge factor) that is not phishing resistant
  • you have zero assurance that the password used to open the phone and the password used to log in to the endpoint are not the same password, and they probably are.

If the answer is “we will skip the authenticator app push and issue yubikeys,” congratulations:

  • You have to deal with managing yubikeys now, good luck!
  • There is not a material difference between using a yubikey and using integrated Touch ID or Windows Hello. Consider that all yubikeys are passkeys, but not all passkeys are yubikeys. “but the yubikey is separate” – the yubikey is only separate until the second it’s plugged into the device, and it can be safely assumed that the yubikey will nearly always be in close proximity to the device.

Look, if you’re still going to say “possession of the device you are authenticating to does not count as a possession factor,” that’s fine, but you’re specifically asking for 3FA or 2(n)FA, so just ask for that if it’s what you want. No vendor calls their solution 3FA or 2(n)FA because nobody uses those terms even if that’s what they really want, and if vendors started using them, organizations would have to think critically about things like “desktop MFA” and realize that it probably isn’t necessary.

OK, so let’s go back to the password thing for a minute.

We, security practitioners, feel uncomfortable with the idea of a multi-factor authenticator only in the context where one of those authentication mechanisms is a password. Yet when it comes to cryptographic and biometric mechanisms, security practitioners have always placed a high degree of reliance and assurance in them. Certainly higher than in passwords, anyway.

So, why can’t we just acknowledge that we have a real opportunity here to make everyone’s experience both much more simple and much more secure? I can’t believe I’m going to do this, but I’m actually going to quote Elon Musk: “The best part is no part.”

It would have been crazy to suggest 10 years ago, when Touch ID was first introduced, that biometric inputs and TPMs would be as ubiquitous as they are now. Apple performed a minor miracle here: they actually gave regular users a method by which they could authenticate to a device securely, and users wanted to do it.

Today, nearly everyone has one of these things both in their pockets and integrated into their endpoints, yet it feels like the industry writ large is literally looking for reasons to hold on to passwords for dear life. For whatever reason, we haven’t let this new paradigm catch up to us.

As Brian Moriarty says: “The treasure is right there.”

When multi-factor authentication first started becoming part of the cybersecurity discourse, it was commonplace and accepted to rely on an SMS message as a second factor. Now, nearly no serious cybersecurity practitioner would recommend the use of SMS. “Better than nothing, but not good” is the approximate industry take on that thing.

If we were able to successfully look back on SMS and agree that we’ve evolved away from relying on it as a factor – that we can truly reflect on what made sense at the time and now does not – we can do the same for passwords. When modern desktops are managed well and we stop relying so much on knowledge factors, implementing solutions like desktop MFA starts to look more like a way to maintain the existing paradigm than a way to evolve beyond it.

We are at an inflection point, and it’s time to make it the reflection point. We have WebAuthN, we have passkeys, we have biometrics, we have Touch ID, we have Windows Hello, we have certificates, and we have full disk encryption that is nearly invisible to the user.

We know what we need to do. Are we ready to ask why we aren’t doing it?


rapid fire rfq

One of my favorite talks at RVASEC 2024 was the one that surprised me most, called “Social Engineering the Social Engineers” by David Girvin at Sumo Logic.

If you work in a technical leadership role, you should absolutely view this talk, because 1) it’s funny, and also 2) it will really help you understand the relationship between you and salespeople, the ways salespeople are incentivized, and how you can leverage the sales process to inform better decision making around tooling evaluation.

The net result of this enlightenment should be a shorter evaluation-to-deployment lifecycle, enabling you to extract more technical value out of your tools. Girvin broke a lot of my assumptions about sales, and I’ve already started putting some of his tips into practice to good effect.

I want to pause for a minute here and say that I have worked with some truly great salespeople and some of them have become friends of mine. I have also worked with very poor salespeople. Tech sales is a tough job and I respect it, but I also have my own job I need to do.

Anyway, Girvin makes a big deal out of transparency and not keeping the competition a secret. In retrospect, that’s actually an obvious tip that goes back to Kerckhoffs’s principle, and it’s not like vendor A is unfamiliar with competing vendor B. They “know the system.”

I had a unique opportunity to put this tip into practice, and did an experiment in radical candor when my org had a popular enterprise security product up for renewal. The scenario was essentially “hey, we have this tool and it’s up for renewal, we need to put cost pressure on them.” Worst case scenario was we couldn’t get pricing, and we’d just apply indirect pressure – after all, “this is my best and final offer” usually isn’t.

So I contacted two competitors. Vendor A and Vendor B. I contacted them through the sales intake forms on their websites. An SDR responded to me from each vendor “wanting to know what we were looking for.” Here was my response:

Thanks for getting back to me. I want to be transparent about the outreach – we are currently $EXISTING_VENDOR customers, we are pretty happy with $EXISTING_VENDOR, we are probably not looking at an extended demo or POV, and our focus is getting comparable and competitive pricing. 

Of course, if it’s a slam dunk, there might be something here, and although right now we’re doing a primarily numbers/market-based evaluation, we’re happy to get a product/feature overview.

Vendor A played ball on this. We had a conversation, it was a good product, we got the product overview and a brief demo, and the next day we had a quote, and it was in fact lower, though not by much, than the existing product. Vendor B did not get back to me after my email above. But we only needed one response. We also now know that we were talking to a legit competitor, and even in this biz, relationships are king.

During the call, I presented this as us doing a “rapid-fire RFQ” (request for quote), and it became clear to me that the SDR thought this was an unusual tactic. “Do you do this for all of your renewals?”

Well, we do now.


a more secret santa

I ran a secret Santa for my distributed friend group this year. It being 2023 and all, I used ChatGPT to do the Santa assignments and generate some creative secret Santa messages, and it did an admirable job. As usual, more detail is better when it comes to prompting.

Here was my prompt:

Here is one of the results:

Unfortunately, doing these things as the organizer comes with the major drawback of “well, I’m the organizer, so I know who everyone is sending a gift to,” making it only a secret-ish Santa.

There are web-based services that do exactly what I was trying to do with this project, but since none of them have the prerequisite of being a huge nerd, I decided to home-grow my own solution using Python and Ollama, a free tool that runs Meta’s Llama large language model locally.

I also figured I’d throw some encryption in the mix so the message would actually be a secret, not just to the Santa participants, but to the organizer. So I slapped it together and called it “Secreter Santa.”

If you want to try this thing, you need to be running Python 3, and a recent version, because there is some reliance on the correct ordering/index of lists. That’s something I can change, but this was just a fun thing and a proof of concept. You also need to run Ollama, and there are some libraries you need to import to be able to send the prompt to the LLM and do the cryptography.

Ollama is totally free, and you can download it at https://ollama.ai – the models it runs locally are not as good as OpenAI’s at handling prompting, which leads to some weird results. I’ll get into that later.

I used a hybrid encryption scheme to do this, since the return from the LLM prompt is an arbitrary length, and you can’t encrypt data with RSA that is longer than the key itself.

How it works:

  1. Organizer collects the RSA public keys from each participant. There are a fair number of free tools online you can use to generate keys. I wouldn’t use them for production, but for testing they can help.
  2. The program runs and prompts for the name of each participant and their RSA public key, which can be pasted in; both are stored in a dict.
  3. The program uses the random.shuffle() method to assign the participants to each other.
  4. The program sends a prompt to Ollama, which is assumed to be listening on its default port.
  5. The program generates a random 16-character AES key for each participant and uses the key to encrypt the message based on the prompt. The encrypted message is written to a file.
  6. The program takes the public RSA key for each participant and encrypts their corresponding AES key (see the sketch after this list).
  7. The encrypted AES key is printed to console.
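
The post doesn’t include the actual source, so here is a minimal sketch of the shuffle-and-encrypt steps above, written against the cryptography package. The function names (assign_santas, encrypt_for_participant) and the AES-GCM/OAEP choices are my own assumptions, not necessarily what Secreter Santa does.

```python
import os
import random
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def assign_santas(names):
    """Shuffle the participants and have each person gift the next one in the shuffled ring."""
    order = names[:]
    random.shuffle(order)
    return {giver: order[(i + 1) % len(order)] for i, giver in enumerate(order)}

def encrypt_for_participant(message: str, public_key_pem: bytes):
    """Hybrid encryption: a random AES key protects the message, RSA wraps the AES key."""
    aes_key = AESGCM.generate_key(bit_length=128)      # 16-byte symmetric key
    nonce = os.urandom(12)
    ciphertext = nonce + AESGCM(aes_key).encrypt(nonce, message.encode(), None)

    public_key = serialization.load_pem_public_key(public_key_pem)
    wrapped_key = public_key.encrypt(
        aes_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    # The ciphertext goes to a file; the wrapped key is what gets handed to the participant.
    return ciphertext, wrapped_key
```

The shuffled-ring assignment also guarantees nobody draws themselves, which a naive random pairing does not.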

Once the organizer sends both the encrypted message file and the encrypted key to each recipient, they can run the “Unsecret Santa” program to decrypt and display the contents.

Unsecret Santa prompts the user for their message file, their .pem (RSA private key) file, and the encrypted key. It does the work for you from there and displays the message.
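
The decryption side mirrors it. Again, a sketch that assumes the same AES-GCM and OAEP choices as above rather than whatever the real Unsecret Santa program does:

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def decrypt_message(ciphertext: bytes, wrapped_key: bytes, private_key_pem: bytes) -> str:
    """Unwrap the AES key with the participant's RSA private key, then decrypt the message."""
    private_key = serialization.load_pem_private_key(private_key_pem, password=None)
    aes_key = private_key.decrypt(
        wrapped_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    nonce, body = ciphertext[:12], ciphertext[12:]
    return AESGCM(aes_key).decrypt(nonce, body, None).decode()
```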

So, from a pure security perspective, there are some holes here, but it’s still interesting – unless you’re yanking the unencrypted message out of memory before the encryption step, there’s no way to attribute the message to any author, because it wasn’t written by an author, and the sender of the message has no idea as to the message’s contents. There is some element of digital signing that could happen here too, but let’s not get too far ahead.

Anyway, this is where I ran into some limitations of Ollama, where it is just a little too eager to offer its…guidance on things it wasn’t prompted about.

This result was pretty good but it’s weird to me that it offered specific suggestions as to the gift without being prompted to do so, which is not something I ever experienced using ChatGPT.

In another return, the prompt offered “a hint” that the recipient “loved coffee” and specifically asked their Santa to order their recipient coffee for their gift.

The results of the prompt varied pretty wildly in weird and sometimes funny ways. Some of them include lots of Instagram-worthy hashtags, some are quite formal in nature, and others are only a couple of curt sentences. I recently saw a comparison chart of large language models making the rounds on LinkedIn, and I can’t lend that too much credibility (because LinkedIn) but it did have Ollama at the bottom.

Still, ya can’t beat the price.


ssl decryption

I had a brief conversation with a friend (hi Brad, again) the other night about SSL decryption. I could tell he was wary of the idea of SSL decryption in the business, and rightfully so!

Your employer breaking open the encryption on your network traffic seems like a huge violation of your privacy. I don’t really have a hard-line stance about this at work – you likely signed away your expectation of privacy on your work network as part of an acceptable use policy – but most companies have some commitment to their customers’ and users’ privacy and it’s a foregone conclusion that people are going to do some personal activities on their work devices. Does that mean your employer has access to all of your banking information? Probably not!

I’m not going to go through every nuance of SSL (TLS) and SSL decryption but will go through what SSL is, how it works, and what I believe a sensible policy should include and why decryption has become such a hot topic lately for businesses.

What is SSL?

Let’s start from the beginning. SSL stands for secure sockets layer, which has a successor, called TLS – transport layer security.

(Going forward, I will just say SSL. The differences between the two are pretty minor and beyond the scope of this post.)

SSL is a cryptographic standard for encrypting data traversing a network, most commonly across the internet. It puts the “S” (secure) in “HTTPS.” SSL uses both asymmetric and symmetric-key cryptography via a public key infrastructure.

Here’s the deal. In the IT world, we usually try to use real-world analogies to explain technical concepts, as if the things we’re talking about somehow do not exist in the real world. Anyway, when you’re talking about cryptography, this is a hard thing to do.

I can’t take full credit for this “one-way box” explanation, but I don’t remember where I read it, and I’m going to add some of my own flair. Imagine you want to buy a car, but it’s the pandemic and you’re at home and you don’t want to spend hours at the dealership with a bunch of paperwork. “No problem,” the dealer says. “Just fill your information into a secure form online.”

Now imagine that the form is a physical item and you have to give it to them in person. (Bear with me.) The dealership has a “paperwork box” to drop the form in. But this is a weird box. It has one opening in the front, and another opening in the back. The back opening is locked with a strange lock with a keyhole, but it’s so complicated, you’ve never seen anything like it.

There’s another weird thing about this box. You put the form in, and it falls out the bottom. Huh? You try again, same thing happens. Then you notice a stack of envelopes on the top of the box, with a label that says “free, take one.” The envelope has some detailed information about the dealership, manager contact info, etc. There’s even a stamp on it from the chamber of commerce** with the business license number, and signed by their representative. Clearly, this dealer really is who they say they are. You put your payment info form into the envelope, and put it in the box. Voila. It’s accepted. Congratulations on your new…I dunno. Tesla.

If you read that and said, in your best The Simpsons Comic Book Guy voice, “sir, there is a glaring technical error in your analogy,” the glaring error being that encrypted data is not encapsulated in “an envelope” but algorithmically altered as to be unreadable in transit, yes you are correct.

But I don’t have a good analogy for this, and I’m not sure it exists. If it does, leave one in the comments!

Let’s break down the analogy into its technical elements, and talk about how this transaction would’ve occurred if you really could send this form electronically to the dealer. You go to the form on the dealer website, and you see the “secure” icon in your address bar, indicating your connection is secure.

Your browser just went to the dealer’s website and got the public key from their web server. This is the envelope with all of the information on it – the public key encrypts the data. Anyone can get the public key. It’s free. The public key has a mate, the private key. The private key decrypts the data, or can open the lock on the back of the box. It would be pretty bad if someone else got that key, so the dealer has taken extra steps to prevent its theft. Theoretically.

You can also generate a key pair whenever you want. “Does this mean I can just pretend to be the dealership?” Not exactly. Remember the stamp from the chamber of commerce? The one with the signature? That’s an independent third-party verifying the identity of the dealer. The web works the same way.

Secure website key pairs are generated in what’s called a certificate signing request. Basically, “hey, chamber of commerce, can you certify that I am who I say that I am, and keep a public record of it?” When the request is approved by a certificate authority, the public key of the pair* is tied to a digital signature, and returned to you as a certificate you can install on your web server. Every web browser is pre-configured to have enough relevant information about the certificate authority (the chain of trust) that the user doesn’t need to take any other action here, just like you don’t need to take any action to trust that your chamber of commerce has accurately assigned business licenses. Neat!
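
If you want to see what that request looks like in practice, here is a minimal sketch using Python’s cryptography package; the domain name is a made-up example, and in real life most people do this with openssl or a tool like certbot.

```python
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate the key pair. The private key never leaves your server.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Build the CSR: "chamber of commerce, please certify that I am dealership.example.com."
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "dealership.example.com")]))
    .sign(key, hashes.SHA256())
)

# The CSR (which contains the public key) is what goes to the certificate authority.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```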

There is a little more to the process. When your browser verifies the certificate of the website, it uses their public key to encrypt some random data to send to the web server. Remember, only the web server can decrypt this data with its private key. This data becomes the session key between both machines. The web server decrypts this session key and returns a message to your browser, as if to say “I can prove to you that I have the key to this box, and I can open it.” Because both parties are now using the same symmetric key, data can go both ways. It’s pretty cool. This process is called the SSL handshake.
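
You can peek at the result of this handshake from Python’s standard library. A quick sketch (example.com is just a placeholder host) that connects, verifies the chain of trust, and prints what was negotiated:

```python
import socket
import ssl

context = ssl.create_default_context()  # loads the system's trusted certificate authorities

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print("Protocol:", tls.version())        # e.g. TLSv1.3
        print("Cipher:", tls.cipher()[0])        # the negotiated cipher suite
        cert = tls.getpeercert()
        print("Subject:", dict(item[0] for item in cert["subject"]))
        print("Issuer:", dict(item[0] for item in cert["issuer"]))
```

If the certificate doesn’t check out against the chain of trust, wrap_socket raises an error instead of completing the handshake.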

Here’s an image of the handshake from IBM. All credit to them.

This technology is really foundational to privacy and security on the internet. You can learn more about the encryption algorithms – there’s plenty of info out there.

SSL Decryption

This isn’t a foolproof process, though messing with it takes some resources. An organization can declare itself a certificate authority for its users and issue certificates to user endpoints on its behalf. Since business endpoints are configured to trust this enterprise certificate authority as a legitimate entity, the certificates they receive appear to be from the destination they’re trying to reach. This internal-only certificate authority is not signed by a public CA; instead, its root certificate is pushed out to (and trusted by) your managed endpoints, typically through device management.

From here, the decrypting device can act as a “man in the middle” and can proxy requests for secure websites. Because the endpoint trusts the decrypting device, and the decrypting device has (or has immediate access to) the private key, the device decrypts the traffic, inspects it, then forwards it to the real website using the same process we already went over. The real website doesn’t know any better.

So to sum that up, the decryption process necessitates:

1) A client’s willingness to have its chain of trust manipulated
2) The proper certificates to enable the decryption process
3) A device that can facilitate the work of performing the decryption
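
One practical side effect: from an endpoint behind a forward proxy, the certificate you are handed is issued by the enterprise CA instead of a public one, and you can see that. A rough sketch (the hostname and the issuer string are placeholders):

```python
import socket
import ssl

def issuer_organization(host: str, port: int = 443) -> str:
    """Return the organization name of whoever issued the certificate we were handed."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            issuer = dict(item[0] for item in tls.getpeercert()["issuer"])
            return issuer.get("organizationName", "unknown")

org = issuer_organization("example.com")
if org == "Example Corp Internal CA":  # placeholder for your enterprise CA's name
    print("This session is being decrypted by the corporate forward proxy.")
else:
    print(f"Certificate issued by: {org}")
```

This only tells you anything on a machine that already trusts the enterprise root, which is exactly what pushing it out through device management accomplishes.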

The question is, why?

Challenges in the Business Environment

Man, getting an SSL certificate used to be a process. In my day (combs neckbeard) you had to pay $20-ish for the certificate, then create the CSR, then upload it, then get the certificate, then there’d be some setting in IIS or httpd.conf you’d have to change, then invariably you messed something up, then you’d have half your site on not-encrypted http and other parts on https, then you’d have to restart httpd, then you’d look at some other thing for a while and forget what you were doing to begin with.

BOR-ING. Now you can just use Let’s Encrypt and certbot to get a free certificate installed on your web server in like a second. BOOM. You’re good to go faster than Sonic the Hedgehog after a pile of chili dogs. Sheeeeeeeeeeeeeeyit.

What’s the most common cyber-attack? DDoS? Maybe, but if phishing isn’t the most common by now, it’s extremely close. So let’s say you have a user base that isn’t the most technically inclined.

What have they been taught all their lives? Green padlock = safe! So when they click on a link in a phishing email, and that link takes them to “their bank” or “the company benefits page” – they see the green padlock. “This is safe” they think, put their credentials in, then they press enter.

For IT security enthusiasts, privacy advocates, and professionals, this truth gets into the range of being pretty uncomfortable. It’d be a reach to say something like “well, if everything is encrypted, nothing is encrypted,” but that’s sort of…we are on that bus. Don’t get me wrong, I think Let’s Encrypt is an amazing project and will continue to do great things for the internet. But encryption is a tool, and tools can be and are used for harm, and the people who stand to be most harmed by it are not C-level executives but employees trying to do their best. There are indisputably personal and professional impacts to people acting in good faith but affected by cyber threats.

If C&C traffic, malware traffic, and phishing websites are able to operate/communicate in a way where researchers and defenders have no insight into them, I’m worried about what that means for the next conversation we have about encryption at large, so businesses having sensible, practical, people-first decryption policies is a decent set of brake pads we can put on the bus.

Sensible Policies

Back to the concern about banking. I believe all decryption devices are able to selectively apply decryption, and if you are looking at a device where that is not an option, please look elsewhere. The engineer who is configuring your decryption should be able to put sites like https://bankofamerica.com in their decryption exclusion list, where the enterprise certificate authority is not used to facilitate an SSL forward proxy, and the chain of trust is not altered for that session.

After you’ve reviewed the legal obligations in your area about encryption, and had a conversation with HR and your leadership team about the go/no-go, consider the way you’ll implement your policy on the whole, and how it can add value to your business without completely betraying the trust of your users. Remember that the practical applications of cybersecurity are, first and foremost, value-oriented activities. Consider the messaging you provide to your teams and stakeholders.

What sounds better to you?

“Beginning Monday, we will be implementing web decryption on our network. We expect you all to sign new acceptable use policies regarding the use of this new technology.”

“Given the recent expansion of email phishing, ransomware, and malware attacks on organizations across the country, we’ve decided to implement web decryption to keep our business assets and users safe. We’ve worked with our partners to come up with a deployment solution that only targets suspicious activities and have attached updated documentation that explains what this means for you.”

Have a strategy. Vendors are more than willing to work with you on this; because of the increased processing requirements for decryption, it’s often an avenue for them to make another sale. Palo Alto Networks has a very helpful page about coming up with a decryption strategy for your network, even if you don’t use their products.

Leverage your synergies.*** How does your decryption device fit into the rest of your network? Is it a firewall? Can you set up your decryption based on existing URL categories? For example, you might decrypt on “unknown” or “web-posting” (think pastebin) but not decrypt on “banking” or “ecommerce.” Are there any data loss prevention or credential theft features you can also take advantage of?

Be transparent. You are indisputably taking privacy away from your users here, even if they know they don’t have an expectation of it. You owe them a thorough explanation of the process and how they may be affected. What websites are you decrypting on? What was the business justification for decrypting that website or category?

I hope that’s a helpful primer on SSL, decryption, and why we’re seeing more and more of it at scale. Having to implement this technology in the business is a nose-holding endeavor, but I do see it as increasingly necessary as the majority of the internet goes secure and we see the continued proliferation of cyberattacks. If you’re leveraging this in your organization, how’s it going? Let me know in the comments.

*(The private key is too, because the public and private keys are inextricably linked, but the certificate authority doesn’t need your private key to generate the certificate. In fact, don’t send the private key to them. Don’t send it to anyone. Seriously.)

**(It has been pointed out to me that this is not actually something a chamber of commerce does. This is why I am a technologist and not a businessy businessperson. INTERNET: SERIOUS BUSINESS.)

***(Did I really write these three words?! In a row?!)