Categories
Cybersecurity

even more takes about phishing tests

I have been meaning to write this post since the Wall Street Journal article about phishing tests came out, but life got in the way a little bit (it always does) with the little ones and a job change. Anyway, the article is called “Phishing Tests, the Bane of Work Life, Are Getting Meaner.” Of course the article is paywalled, but if you have some free article credits or an Apple News subscription, you can read it.

A few years ago I interviewed someone and I asked him a question I ask other people in senior+ security levels. The question is, and there is really no wrong answer, “what is your spiciest infosec/cyber hot take?” And he said, without missing a beat:

“I think phishing tests are total bullshit.”

I did not really agree with that statement at the time (we made him the offer!) and I don’t agree with it now. Most orgs are overthinking the value proposition of phishing tests, and should probably keep investing in them. On the other hand, there is this utterly bananas take in the article from the CIO of a health system:

“The first time employees…fail a phishing test, they lose external email access for three months. The second time, it gets cut for a year. The third, they’re fired…’I tell them it is draconian until we have an attack and we have to take our medical record system offline.'”

Hoo. Boy. Come to think of it, I might just “accidentally” fail those first two tests. Oops!

So let’s get a few things out of the way on that side:

  • It is OK to mandate training for people who click on links and OK to revoke their access if they do not complete the training.
  • It is not OK to deprive someone of their livelihood because they clicked on a link.
  • If you are a CIO/CISO, and your information security posture is so brittle that your critical infrastructure will fail if someone clicks on a link, it is you, not the employee, who is the problem.

Internet systems, like email, are designed to be end-to-end. It is the responsibility of the CIO/CISO, not employees, to protect their end. It’s good to get employees to participate in the process, and in fact acknowledging that security is a two-way contract made with an organization is an effective strategy (I will write on this more another day). Threatening employees’ jobs because they are humans who are vulnerable to attacks that prey on human behavior is ridiculous.

There is broad consensus in information security that humans are, in fact, the weakest link. Rather than using that fact as a pretext to create misaligned policy, use it as data to inform your defense strategy.

In other words: there are reasons to fire employees for not participating in a security program. Clicking on a link is not one of them.

OK, so where does that leave phishing tests?

First of all, let’s enumerate the numerous shortcomings of phishing tests:

  • Most commercial phishing tests are out of step with how threat actors operate. The most effective phishing (in 2025) uses a variant of “living off the land” – leveraging service abuse to deliver a legitimate, not malicious, link to a subject. The legitimate link takes the form of a valid OneDrive or Google Drive link. The link points to a valid document, and the document (be it an invoice or whatever) contains the malicious link. Offense in depth. These emails are able to bypass nearly all commercially available forms of email protection by hijacking the internet’s end-to-end principle: “just deliver the email.”
  • “Message header from” fields (trivially spoofed) pointing at phishing simulation domains are easily detected and blocked by above-average users, skewing the reporting analytics.
  • Despite whatever quasi-objective measure of “phishing resistance” your vendor claims to provide, you have no assurance based on phish “reporting” that your employees are actually better at detecting phishing messages. You may have some assurance that they are good at detecting phishing messages from a particular phish testing platform.
  • Aggressive phishing tests undermine employees. I could not put it better than Matt Linton at Google, who said this in an interview with PCMag: “employees are upset by them and feel security is ‘tricking them,’ which degrades the trust with our users that is necessary for security teams to make meaningful systemic improvements and when we need employees to take timely actions related to actual security events.”

You don’t seem like you like phishing simulations very much.

Correct. However, I have seen them work the way they are intended to work: as a tool to raise security awareness and facilitate some cyber hygiene. Nothing more, nothing less.

And there is, usually, value there.

What the flu vaccine teaches us about phishing simulations

Every year, we are told over and over again to get the flu vaccine. Every year, we are also told that the flu vaccine is not very effective. Maybe 60% on a good year. The 2024-2025 flu vaccine was only 51% effective. But we still get the flu vaccine because preventing a pathology is infinitely better (and substantially less expensive) than trying to treat it.

That is what a phishing test is. Phishing simulations are flu vaccines for orgs:

  • Employees already know how to use email; it takes no additional special training
  • It takes relatively little training to get ordinary users to spot low-effort phishing emails
  • Mistakes made by employees in catching phishing tests can be corrected. Again, it is OK to tell an employee they need to do training, and they will lose access…if they don’t do it.
  • Testing is very inexpensive and can even be done in house. Do you suspect your users are conditioned to detect all emails from your vendor? Put that to the test! Buy a domain and send your own phishing sim! Take that back to your vendor!
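The in-house option in that last bullet is genuinely small in scope. Here is a minimal sketch in Python using only the standard library; the sender domain, subject line, and relay host are all placeholders you would replace with your own:

```python
import smtplib
from email.message import EmailMessage

def build_sim_email(to_addr: str) -> EmailMessage:
    """Build a low-effort phishing simulation email from a domain you
    bought yourself (example-payrolls.com is a made-up placeholder)."""
    msg = EmailMessage()
    msg["From"] = "HR Team <benefits@example-payrolls.com>"
    msg["To"] = to_addr
    msg["Subject"] = "Action required: updated benefits enrollment"
    msg.set_content(
        "Please review your updated benefits election here:\n"
        "https://example-payrolls.com/enroll?uid=test\n"
    )
    return msg

def send_sim(msg: EmailMessage, relay_host: str = "smtp.internal.example") -> None:
    # Point this at a relay you control; do NOT send outside your org.
    with smtplib.SMTP(relay_host) as smtp:
        smtp.send_message(msg)

msg = build_sim_email("employee@example.com")
print(msg["Subject"])  # Action required: updated benefits enrollment
```

The point is to test your users against a sender your commercial vendor has never used, then take the results back to that vendor.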

Put a dollar value on the cost of an incident, even a minor one, even if the incident had no material impact (it was just an “event”). Assuming even a small handful of employees were able to detect a true positive phishing email, you have likely captured the value of the investment. If you are looking for a metric to measure the effectiveness of your phishing simulation program, focus on the number of emails reported and how many reported emails are true positives. That’s literally it.
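A sketch of that metric, with entirely made-up numbers, might look like this:

```python
def reporting_metrics(emails_reported: int, true_positives: int,
                      sims_sent: int) -> dict:
    """Summarize the only two numbers worth tracking: how many emails
    get reported, and how many of those reports are the real thing."""
    return {
        "reported": emails_reported,
        "true_positive_rate": true_positives / emails_reported,
        "reports_per_sim": emails_reported / sims_sent,
    }

# Illustrative inputs only: 400 sims sent, 120 reports, 18 real phish caught.
metrics = reporting_metrics(emails_reported=120, true_positives=18, sims_sent=400)
print(metrics["true_positive_rate"])  # 0.15
```

If the true-positive count trends up over time, the program is doing its job; everything else is vendor dashboard noise.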

Trying to pretend that phishing simulations are actually an effective tool for security is nonsense. They aren’t. Someone will click. Do not fall for vendor marketing puffery trying to distract you by claiming phishing simulations will make your organization more secure. They won’t. But they might save you some time and a few headaches.

Defense is always behind offense, etc. Phishing sims can be an effective tool for cost savings: the time and hours your security team spends on event analysis is saved when someone does not click – that work is simply never created. As a CIO/CISO, you need to figure out the dollar amounts and go from there.
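That cost argument fits in a few lines. The inputs below (avoided clicks, analyst hours per event, loaded hourly cost) are illustrative assumptions, not benchmarks:

```python
def annual_savings(clicks_avoided_per_year: int,
                   analyst_hours_per_event: float,
                   loaded_hourly_cost: float) -> float:
    """Rough dollar value of the triage work that never gets created
    when users don't click. Every input is a number you must estimate
    for your own org."""
    return clicks_avoided_per_year * analyst_hours_per_event * loaded_hourly_cost

# e.g. 50 avoided clicks a year, 3 analyst-hours per event, $90/hour loaded cost
print(annual_savings(50, 3.0, 90.0))  # 13500.0
```

Compare the output against what your sim vendor charges and the math usually resolves itself.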

Usually the math works. It’s really that simple.

Categories
Cybersecurity Education

reflections on a master’s degree

One of the things I heard a lot before I started my master’s was that it wasn’t really necessary for success in the technology space. Since I’ve completed my degree, I thought I would put some thoughts together about my experience and offer some perspective on the subject. I just earned a Master of Information Technology from Virginia Tech. The program site is here: https://vtmit.vt.edu

The tl;dr is that no, I don’t think it’s necessary to pursue a master’s to be successful in technology. But I have already realized benefits from my degree program: intangibly, it has made me a much stronger leader and engineer, and it has established a deep confidence in my career and my ability to execute, which is the very thing I was hoping to get out of the degree in the first place.

As tangible benefits go, the master’s degree program cost me about $36,000. During the course of obtaining the degree, I was promoted twice and secured a technical leadership role. Between a job change and the promotions (in three years), my annual salary has risen by almost $40,000. While this is obviously not entirely attributable to the degree, part of it certainly is. Since the salary increase is effectively permanent, and there are about 25 years left in my career, it was well worth it for me, financially speaking.
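For anyone weighing the same decision, the arithmetic sketches out like this; the attribution share is a guess you have to make for yourself:

```python
def career_roi(degree_cost: float, annual_raise: float,
               attributable_share: float, years_remaining: int) -> float:
    """Crude, undiscounted return on a degree. attributable_share is
    your own estimate of how much of the raise the degree caused."""
    return annual_raise * attributable_share * years_remaining - degree_cost

# Even attributing only 25% of a $40k raise to the degree, over 25 years:
print(career_roi(36_000, 40_000, 0.25, 25))  # 214000.0
```

A real analysis would discount future dollars, but the point stands at almost any attribution share you pick.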

Why I went for a master’s

A high-up cybersecurity executive recently asked me why I was motivated to get a master’s degree. You should have a good sense of your “why” before you commit to a master’s program. Mine was originally practical: it would have led to a pay bump in my previous role and my previous employer would have covered part of the cost. I left that role, but stayed in the program anyway, because I wanted to step up in my career and be able to speak cybersecurity more eloquently to the business. I absolutely achieved that goal through my studies. I also wanted to have an opportunity to teach (undergraduates) part-time, which is really not possible with only a bachelor’s.

In fact, some of the best courses in the degree program were also part of the Pamplin MBA program, and for a brief period I considered switching to the MBA program entirely. In retrospect I am glad I did not do that because I valued the technical content I got out of the information technology courses.

Getting a master’s degree while working full-time and raising children is no joke.

I registered for my first class while we were at the hospital the day after my daughter was born. I knew it was going to be a challenge, and it was. It is not for everyone. If you want to do this, you will lose almost all of your free time. I also gained weight and my hair grayed.

I am writing this while we are on vacation and I was struck when I looked in the mirror and noticed that I looked younger. I have been walking regularly for a couple of weeks and have been sleeping relatively well for the first time in years. I’m looking forward to feeling like myself again.

It’s hard to overstate how much you give up when you commit to something like this. The feeling of achievement, however, is pretty remarkable.

Not every hiring org values a master’s degree…but most do

I have a lot of opinions about certifications in cybersecurity. Most of them are negative, with the notable exception of SANS – if you can take a SANS training, you should. I have held cyber certs in the past and almost always let them expire. I currently work for a company that, unfortunately, does not have the budget for certifications or professional development. I have taken calls from recruiters in the past few months that have gone like this:

“Do you have any certifications?”

“I have some inactive ones (CCNA, Security+, CompTIA CASP, SANS GCED), but right now I’m working on a master’s degree with a concentration in cyber.”

“Have you considered getting some certifications?”

“I will happily obtain a certification if the cost and time are covered by my employer, but my professional growth is focused on my degree program.”

A master’s degree relevant to the position should carry more weight/have more impact on a resume than a certification. If your organization’s recruiters disagree with this, I think they are wrong. I also think certifications by and large are a money grab, and again – a degree or a cert is not required to be successful in tech. But that is OK. My company also did not help me pay for my degree (I paid cash and lived conservatively, and got support from my wife, who does well for herself being an attorney). Smaller technology companies seem to value the master’s degree less tangibly, but again, I can attribute my promotions at least somewhat to the degree program.

What I want out of my career has changed

My career was already going in this direction, but I am much less interested in highly operational, hands-on work. I have embraced being a technical leader and moving the focus of my work into more strategic and less tactical decision making. I make this pretty clear when I have conversations with recruiters. The day-to-day is just not a good use of my skills anymore – getting a master’s will change the overall tenor of your skillset, especially if you are mid-career like I am.

I still firmly believe that the best technology leaders have hands-on experience, and there are areas of professional development that I neglected (especially in cloud) over my degree program, so there is definitely opportunity cost there. I don’t think I want to ever be completely hands off, because being hands on is what got me into tech in the first place, but I derive a lot of career joy out of mentoring and building, not so much operating. 

The degree program amplified that and my desire to work somewhere where what I can offer is more strategic and entrepreneurial. Many people think the goal of education is to increase your “known knowns.” This is partially true, but the real goal is to decrease “unknown unknowns.” Being successful in your career and in business is a product of being able to make accurate predictions and understanding risks. Education reduces the space where those predictions are wrong and enables you to take good risks. You (and I) will still be wrong, nobody is perfect, but you will be right more often on higher-paying bets. 

Since placing those bets well is a skill I learned through my studies (and through experience), I want to be somewhere that I have the agency and authority to place them, which will be more to the benefit of the organization than a role that is exclusively hands-on.

Requirements gathering is one of the most difficult things to nail down for any technology project. I happen to be quite good at requirements gathering, to be honest, though I attribute that more to raw career experience than the degree program. Anyway, once you have gathered your requirements you need some way to execute on them as efficiently as possible. If you know what the “dead ends” are in advance, you can proactively avoid them, but the only real way to know what the dead ends are is by experiencing failure. Education distills those experiences into something tangible by turning an “unknown unknown” into a “known unknown.”

This is why formal education is useful even though the coursework is always a step behind the bleeding edge. That lag is real, and it is something I experienced during my studies at Virginia Tech. However, people who instinctively dismiss the value of formal education because of it do so at their peril.

Of course, when you realize that you don’t know things, you realize how much you don’t know. And nothing drives home this realization (that I know nothing) like getting a master’s degree. It is both humbling and weird.

OK, so what?

I’m very satisfied with what having a master’s has brought me both tangibly and not. There are a lot of naysayers in the tech space around formal education. My advice is to ignore them, but also be aware of what you are getting into and have a strong sense of what you are trying to achieve with a master’s degree before embarking on the journey, because it is a long road.

It is also true that a degree (of any kind) is not strictly necessary to be successful in tech. Many people in tech are smarter and more successful than I am without any degree. That is great! But few things have brought me more personal and career satisfaction than obtaining it for myself. It’s hard to put a dollar value on the feeling of accomplishment, capability, and confidence.

That’s all for now. If you have questions about education in your career feel free to email me and we’ll set something up.

Categories
Cybersecurity General

you probably don’t need desktop mfa

Update 9/25/24: I was informed after writing this post that NIST has published a draft of the latest revision of SP 800-63B, Digital Identity Guidelines. I recommend referring to section 3.1.7, “Multi-factor cryptographic authentication.”

We recently had a conversation about password rotations at work. One of our engineering managers suggested we implement a password rotation policy.

When I explained to them that was no longer a recommended practice, another engineering colleague suggested we implement multi-factor authentication (MFA) on endpoints (laptops). At nearly the same time, Okta started pushing an add-on called Okta Device Access, where one of the main features is “Desktop MFA.”

(To be clear, I am a fan of Okta. Okta is a good company. They have been good to me in my career. I’m picking on them here because my current employer uses Okta, and therefore, I use Okta. Okta is not the only organization that offers Desktop MFA. I also have a Yubikey. Yubico is also a good company.)

Both of the conversations I had about implementing this feature were identical.

“We/you should require MFA to log in to desktops.”
“What do you mean?”
“You should require a second factor to get in besides just the password.”
“But, you have to have the device, right?”
“Yeah…”
“That is MFA. The device is something you have, and the password is something you know.”

The counter-argument for this that I hear is “well but if they have your device, the password is the weak link.” What people who argue this actually mean is “possession of the device only authenticates the machine, not the user,” but we’ll get to this later.

Assume for a minute that trusted platform modules (TPMs) and cryptographic subsystems such as the Secure Enclave don’t exist. Now consider the relationship between a computer user and a computer: a series of interactions with hardware at a base layer, and a collection of services on top of that hardware layer. Most of those services are independent of the hardware beyond dependencies created by the chip architecture. It should therefore be possible to transplant a disk between like-for-like architectures and expect the hardware to load the services on that disk. AKA, computer turns on.

So, if we’ve established a logical separation between the hardware and the software, then what vendors really mean by “desktop MFA” is “operating system MFA.” You’re not accessing your computer; that would require a screwdriver. You’re accessing a service, the operating system. Per our scenario above, this is a bit problematic, since anyone could, in fact, take the boot storage out of the desktop, transfer it to another machine, and voila.

But in our scenario, we did not have a TPM.

You must have that TPM (which identifies you, and only you, on an encrypted disk assigned to you, and only you) in your possession, plus another factor, to successfully access that operating system. Just having the disk doesn’t work; you cannot transplant the disk or the TPM. Per Apple’s documentation on the Secure Enclave:

”the [unique] key hierarchy protecting the file system includes the [unique] UID, so if the internal SSD storage is physically moved from one device to another, the files are inaccessible.”

That. Is. MFA. When we acknowledge that TPMs do exist and can facilitate rapid full-disk encryption and secure authentication via a biometric, we can meet a secure standard for MFA, and that standard is pretty high.

Which brings us to the “device + sticky note” argument. This rebuttal is not an argument for desktop MFA. It’s an argument against passwords.

“Desktop MFA” seeks to solve a problem that no longer exists. If you don’t believe me, ask NIST, which defines a “multi-factor authenticator” in the context of defining “multi-factor authentication”:

“An authentication system that requires more than one distinct authentication factor for successful authentication. Multi-factor authentication can be performed using a multi-factor authenticator or by a combination of authenticators that provide different factors.”

“Multi-factor authentication requires 2 or more authentication factors of different types for verification.”

Different types. NOT different devices. Multifactor authenticators acknowledge the possibility of passing a test for MFA as long as the factor types are logically separated. That the Secure Enclave and Touch ID reader exist in the same physical space as the rest of the system’s components does not matter.
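The NIST definition reduces to a check on factor types, not devices. A toy model (my own framing, not anything NIST publishes as code) makes the point:

```python
from enum import Enum

class Factor(Enum):
    KNOWLEDGE = "something you know"
    POSSESSION = "something you have"
    INHERENCE = "something you are"

def is_mfa(factors_presented: list[Factor]) -> bool:
    # NIST-style test: two or more *distinct* factor types.
    # Nothing here cares how many physical devices are involved.
    return len(set(factors_presented)) >= 2

# TPM-backed laptop with Touch ID: possession (the enclave-bound device)
# plus inherence (the fingerprint), all in one chassis.
print(is_mfa([Factor.POSSESSION, Factor.INHERENCE]))  # True
# Two passwords are still one factor type.
print(is_mfa([Factor.KNOWLEDGE, Factor.KNOWLEDGE]))   # False
```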

Consider the much simpler scenario of accessing a phone. In almost all cases, that phone is designed to be used by you and only you. You must have physical possession of the phone and only that phone, and a way to provide a second factor (knowledge or biometric) to the phone to access it. Again, this is MFA. It’s so effective, in fact, that the government will compel you to sit in prison until you unlock the phone if they suspect you’ve committed a crime and need the phone’s contents for evidence, assuming you are alive. In the highly-publicized case of the FBI “hacking in” to the San Bernardino shooter’s iPhone, the FBI was able to gain access not by breaking the requirement for a second factor, but by breaking a mechanism in iOS that erases the iPhone after hitting a ceiling of passcode guesses. They were then able to brute force the passcode.

Let’s say you do decide to implement desktop MFA anyway. How are you going to do it? Are you going to allow your users to use their personal phones with an authenticator app push? How is that app secured?

If the answer is “we will force the phone’s owner to use Face/Touch ID to open their phone,” congratulations:

  • you changed one biometric for another and are relying on that biometric to work consistently on an enrolled device that is not managed by you, unless you are issuing phones, which is no longer common
  • in other words, you just decided to swap using a biometric and a possession factor for a different biometric and possession factor
  • if the biometric and possession factors count on the phone, why don’t the equivalent biometric and possession factors count on the endpoint?
  • you have to support it when it doesn’t work
  • you’re still susceptible to MFA bombing

If the answer is “we will require the phone’s owner to use a password to open their phone,” congratulations:

  • you are now satisfying “true” MFA, but you’re relying on a factor (knowledge factor) that is not phishing resistant
  • you have zero assurance that the password used to open the phone and the password used to log in to the endpoint are not the same password, and they probably are.

If the answer is “we will skip the authenticator app push and issue yubikeys,” congratulations:

  • You have to deal with managing yubikeys now, good luck!
  • There is not a material difference between using a yubikey and using integrated Touch ID or Windows Hello. Consider that all yubikeys are passkeys, but not all passkeys are yubikeys. “but the yubikey is separate” – the yubikey is only separate until the second it’s plugged into the device, and it can be safely assumed that the yubikey will nearly always be in close proximity to the device.

Look, if you’re still going to say “possession of the device you are authenticating to does not count as a possession factor,” that’s fine, but you’re specifically asking for 3FA or 2(n)FA, so just ask for that if it’s what you want. No vendor calls their solution 3FA or 2(n)FA because nobody uses those terms even if that’s what they really want, and if vendors started using them, organizations would have to think critically about things like “desktop MFA” and realize that it probably isn’t necessary.

OK, so let’s go back to the password thing for a minute.

We, security practitioners, feel uncomfortable with the idea of a multi-factor authenticator only when one of the authentication mechanisms is a password. Yet when it comes to cryptographic and biometric mechanisms, we have always felt a high degree of reliance and assurance. Certainly higher than for passwords, anyway.

So, why can’t we just acknowledge that we have a real opportunity here to make everyone’s experience both much simpler and much more secure? I can’t believe I’m going to do this, but I’m actually going to quote Elon Musk: “The best part is no part.”

It would have been crazy to suggest 10 years ago, when Touch ID was first introduced, that biometric inputs and TPMs would be as ubiquitous as they are now. Apple performed something of a miracle here: they gave regular users a method to authenticate to a device securely, and users actually wanted to use it.

Today, nearly everyone has one of these things both in their pockets and integrated into their endpoints, yet it feels like the industry writ large is looking for reasons to hold on to passwords for dear life. For whatever reason, we haven’t caught up to this new paradigm.

As Brian Moriarty says: “The treasure is right there.”

When multi-factor authentication first started becoming part of the cybersecurity discourse, it was commonplace and accepted to rely on an SMS message as a second factor. Now, almost no serious cybersecurity practitioner would recommend the use of SMS. “Better than nothing, but not good” is the approximate industry take.

If we were able to look back on SMS and agree that we have evolved away from relying on it as a factor, truly reflecting on what made sense at the time and no longer does, then we can do the same for passwords. When modern desktops are managed well and we stop relying so much on knowledge factors, solutions like desktop MFA start to look like a way to maintain the existing paradigm rather than evolve beyond it.

We are at an inflection point, and it’s time to make it the reflection point. We have WebAuthn, we have passkeys, we have biometrics, we have Touch ID, we have Windows Hello, we have certificates, and we have full disk encryption that is nearly invisible to the user.

We know what we need to do. Are we ready to ask why we aren’t doing it?

Categories
Cybersecurity General

rapid fire rfq

One of my favorite talks at RVASEC 2024 was one I was most surprised with, called “Social Engineering the Social Engineers” by David Girvin at Sumo Logic.

If you work in a technical leadership role, you should absolutely view this talk, because 1) it’s funny, and also 2) it will really help you understand the relationship between you and salespeople, the ways salespeople are incentivized, and how you can leverage the sales process to inform better decision making around tooling evaluation.

The net result of this enlightenment should be a shorter evaluation to deployment lifecycle, enabling you to extract more technical value out of your tools. Girvin broke a lot of my assumptions about sales and I’ve already started putting some of his tips into practice to good effect.

I want to pause for a minute here and say that I have worked with some truly great salespeople and some of them have become friends of mine. I have also worked with very poor salespeople. Tech sales is a tough job and I respect it, but I also have my own job I need to do.

Anyway, Girvin makes a big deal out of transparency and not keeping the competition a secret. In retrospect, that’s actually an obvious tip that goes back to Kerckhoffs’s principle, and it’s not like vendor A is unfamiliar with competing vendor B. They “know the system.”

I had a unique opportunity to put this tip into practice, and did an experiment in radical candor when my org had a popular enterprise security product up for renewal. The scenario was essentially “hey, we have this tool and it’s up for renewal, we need to put cost pressure on them.” Worst case scenario was we couldn’t get pricing, and we’d just apply indirect pressure – after all, “this is my best and final offer” usually isn’t.

So I contacted two competitors. Vendor A and Vendor B. I contacted them through the sales intake forms on their websites. An SDR responded to me from each vendor “wanting to know what we were looking for.” Here was my response:

Thanks for getting back to me. I want to be transparent about the outreach – we are currently $EXISTING_VENDOR customers, we are pretty happy with $EXISTING_VENDOR, we are probably not looking at an extended demo or POV, and our focus is getting comparable and competitive pricing. 

Of course, if it’s a slam dunk, there might be something here, and although right now we’re doing a primarily numbers/market-based evaluation, we’re happy to get a product/feature overview.

Vendor A played ball. We had a conversation, it was a good product, we got the product overview and a brief demo, and the next day we had a quote – one that was in fact lower, though not by much, than the existing product’s. Vendor B never got back to me after my email above. But we only needed one response. We also now know that we were talking to a legit competitor, and even in this biz, relationships are king.

During the call, I presented this as us doing a “rapid-fire RFQ” (request for quote), and it became clear to me that the SDR thought this was an unusual tactic. “Do you do this for all of your renewals?”

Well, we do now.

Categories
General

linkedin is bad, actually

The week before this last vacation, I wrote a lengthy post on LinkedIn discussing the CrowdStrike RCA report. I thought it was helpful in explaining some of the details of the report, the actions you should or should not take as a CrowdStrike customer, and how I thought this was going to play out with their customer base.

That post got almost no engagement: no likes, reposts, or comments, nothing spurring further genuine discussion or pointing out things I had missed.

It was at this point that I devised a theory – that if I wrote a post about Hawk Tuah Girl AKA Hailey Welch, it would generate better engagement than the illustrative and professional content I wrote about a recent cyber event.

I didn’t use the words “hawk tuah girl” in the post, but I barely disguised the fact that I was talking about this woman, and used her full name. I pre-empted any accusation of unprofessional content by citing the Vanity Fair and NYMag articles about her.

Sure enough, hawk tuah generated significantly more engagement than my Crowdstrike RCA writeup. It was at this point, and among many other reasons, that I decided I’d had enough and closed my LinkedIn account.

There are people I connected with on LinkedIn that I genuinely did enjoy interacting with, and I will miss that. I felt the same way about Twitter/X. I have not had a Twitter account in a number of years. But, when I ask myself the question: “what have I really gotten out of this platform” – in the long run I just can’t come up with a good answer other than “actually, I think this may have been a colossal waste of my time.”

I have never networked my way into a job from LinkedIn. I came close once, when a very good recruiter came across my profile and reached out, but I ended up withdrawing and took the job I’m in now that I got from old-fashioned networking. All of the other jobs I’ve gotten have either been through applying directly or from in-person networking.

The vast, vast majority of recruiters who reached out to me on LinkedIn seemed to follow a spray-and-pray strategy, often advertising jobs that, had they actually read my profile (which was 1:1 with my resume), they would have known I was not qualified for.

When I closed my account, I had 176 open requests to connect. Almost all of them were from salespeople. I have worked with some great salespeople, however:

1) I’m really not interested in talking to a stranger about their product. I am aware that BDRs/SDRs make money based on the number of appointments they book, and that is the endgame here. It’s cliché, but I’m well aware of what my pain points are. I will come to you if I need a problem solved.

2) When you connect with salespeople on LinkedIn, your feed fills up with content for salespeople. Ultimately, this is probably doing salespeople a disservice.

I know people find career success on LinkedIn, but it has never worked for me personally. I am not interested in sales calls.

I am also appalled at the amount of straight-up bad behavior there is on LinkedIn, including the stolen and re-hashed content, content obviously generated by AI (it’s really obvious), grifters, and a frankly disturbing number of people who use mental illness as a shield to deflect any claim that they’re being rude and shitty.

So, I am gone now. I maintain a private Instagram account with a small number of followers for the use of my close friends and family, and have no other social media presence.

And I’m upset about this, mostly at myself, for the time I wasted on another ARR generator for Microsoft. For the time I could’ve spent with my family, coding, working on a side project, or even on this website, which I operate and maintain entirely on my own.

I’m upset about the value they captured that I didn’t.

Categories
Uncategorized

task failed successfully

I am not entirely sure what has happened to hiring that the process has become so complicated, but I suspect it is something like “we have hundreds of applicants and they all have the same resume that they wrote with ChatGPT, so we don’t have any way to discern between candidates.”

As a result, hiring orgs have implemented novel things to try to weed out candidates who don’t make the cut on a technical level. In the past, for technical roles, this has taken the form of take-home assignments. I have done a couple of these and gotten offers afterward. One was to write an essay about a GRC-ish topic, and the other was to create a reasonably basic network architecture diagram. Each took me about 90 minutes to two hours, which felt like a reasonable amount of time to ask a candidate to spend on a take-home.

Recently, I suspect with the advent of GenAI, we have seen the rise of the “live coding assignment.” I was approached by a really solid org about an enterprise security engineering role and was told that I would indeed be encountering a live coding assignment, and that it would be OK as long as I knew how to code against APIs, which is something I do know how to do reasonably well. Beyond the API part, I was not really sure what to expect; for the record, my Python is…competent.

Spoiler alert: I did not get the job.

I appreciate that they did not give me a bullshit task (it wasn’t “fizz buzz,” which I know how to solve, and which is the most bullshit of bullshit tasks). They asked me to get some data out of an API and format it in such a way that it could be piped to a SIEM.

I did make a reasonable amount of progress, and it was the kind of thing I could’ve crushed as a take-home in two hours, but 40 minutes was just not enough time for me to deliver, and nerves probably got the best of me. As much as I find it distasteful that an org would disqualify an experienced candidate without a software engineering background by evaluating their ability to engineer software, this is unfortunately the world we live in now. So let me (the not-software-engineer) provide some tips on how to solve this kind of thing, and what I would’ve done differently.
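For the curious, the shape of the task can be sketched in a few lines of Python. Everything here is a stand-in: the endpoint, the token, and the event fields are hypothetical, and newline-delimited JSON is just one common format SIEMs will happily ingest.

```python
import json
from urllib import request

# Hypothetical endpoint and token; stand-ins, not the actual interview task.
API_URL = "https://api.example.com/v1/alerts"
API_TOKEN = "redacted"

def fetch_events(url: str, token: str) -> list:
    """Pull JSON from the API; a real API's docs will give the exact header format."""
    req = request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

def to_siem_lines(events: list) -> str:
    """Format events as newline-delimited JSON, one event per line."""
    return "\n".join(json.dumps(e, sort_keys=True) for e in events)

# Offline demo with a canned payload shaped like a typical API response:
sample = [{"id": 1, "severity": "high"}, {"id": 2, "severity": "low"}]
print(to_siem_lines(sample))
```

Separating the fetch from the formatting also means you can test the formatting without hitting the network, which is handy when the interview sandbox misbehaves.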

Practice in the Tool First

When I work with an API, I like to have a copy of the output in front of me in an easily readable form before trying to sort out whatever field or key-value pair I’m trying to get. Look, I’m in my late 30s and I like things that are easy to read.

I killed a bunch of time in this interview trying to pretty print the data to the console, only to find out that the console just would not display the pretty printed output. They had offered me sandbox access to the tool (it was CoderPad) ahead of time, and aside from making sure I could import the requests library (which worked), I didn’t mess around with it. After the fact, I pasted in other working code that did pretty print locally, and I still couldn’t get CoderPad to play ball.
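For the record, pretty printing locally is a one-liner with the standard library; the payload below is made up, and this is exactly the kind of thing worth confirming works in the sandbox before the clock starts:

```python
import json

# A nested payload like you'd get back from an API:
payload = {"user": {"name": "lee", "roles": ["admin", "dev"]}, "active": True}

# json.dumps with indent gives a readable view, when the console cooperates:
pretty = json.dumps(payload, indent=2, sort_keys=True)
print(pretty)
```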

So, practice in the actual tool. If a sandbox is not offered, ask for one. If one is not available at all, maybe run away?

The Docs are the Source of Truth

Read the API docs. They are the source of truth. Almost every API has example output and input. I have never worked with an API in security engineering that wasn’t accessible with a URL and returned its data in JSON. Usually the data is returned as a list, or a dict, or a list of dicts. This API was just returning a dict, so it was reasonably easy to extract the values from the keys. For a list of dicts, it is a little more complicated, but it’s not bad as long as you understand how lists work.
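Here is what that extraction looks like in practice, with made-up train data standing in for a real API response:

```python
# A plain dict response: grab values directly by key.
single = {"station": "Clarendon", "line": "OR", "minutes": 4}
next_train = single["minutes"]

# A list-of-dicts response: iterate and pull the same key from each element.
trains = [
    {"line": "OR", "minutes": 4},
    {"line": "SV", "minutes": 9},
]
arrivals = [t["minutes"] for t in trains]
print(next_train, arrivals)
```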

Anyway, the API docs will tell you how to build your headers. Just follow them. If you are spending time thinking about doing it a different way because you know that way works, you are wasting time. Just read the docs.

Write Good Code

This is a part where the interviewer said I was doing a good job. You should know the fundamentals of how to write code. I don’t think at all that you need to be a coding expert to be good at cybersecurity, but you do need to know the basics. If you are writing everything as one big main function, your code is going to be impossible to maintain.

I originally learned to code in Java so this was basically beaten into me. Small pieces loosely joined. Small batches.

Also, do leave comments for where you think you might want to make improvements or add features. I have a bad habit of leaving comments after I am done coding rather than while I am coding, so I left a few on my live task, but it was pretty half-assed.

Talk about what you’re doing

I did not do this and should’ve. It was a stream of consciousness, but it was like, nonsensical. Essentially I was just thinking out loud and probably embarrassed myself. In fact, the interviewer probably got off the call with me and was like “this guy is insane, we can’t hire him.” I didn’t have any strategy and wasn’t intentional about talking through what I was doing. I think this cost me.

Which, after the fact, is like, “duh,” right? I would’ve practiced a presentation before giving it; why didn’t I do that here? I know how to work with an API and have done some pretty cool things with the Okta API (the one I use most often), because Okta is terrible about letting you do operations in bulk in the GUI.

Find a way to practice/free API resources

This came down to some weird timing for me, but I should’ve found/made more time to practice before going into the interview so I had more confidence. I had just gotten back from a week’s vacation and spent the following week catching up on work, so I didn’t do any coding, and by then it had been several weeks since I’d coded anything at all. I didn’t make good use of the time, and ended up constantly referencing a bunch of API code I had already written for work or academic projects instead of having confidence in what I was doing.

”But how do I practice on an API without paying $100,000 for a SaaS tool?”

APIs are great because they can teach you about coding and give you a fun little project where you can see results immediately, and there are tons of great APIs that are totally free!

Washington Metropolitan Area Transit Authority API: https://developer.wmata.com – this one is fun because it presents a lot of information as a list of dicts.

Massachusetts Bay Transportation Authority API: https://www.mbta.com/developers/v3-api

Bureau of Labor Statistics API, which I have used for academic work: https://www.bls.gov/bls/api_features.htm

This list from GitHub on free APIs – a lot of these are kinda more “just for fun,” so if you want to know how many minutes until the next Orange Line train leaves Clarendon, it’s not for you:

https://github.com/public-apis/public-apis

Better luck next time

Considering I had never done a live coding exercise before, I feel like I did okay. I definitely did not implode, but I didn’t excel and my result didn’t deliver everything someone experienced would be able to do in the time allotted. I was disappointed, but not entirely surprised, when I heard I was not moving forward.

I didn’t make it and I own that, and it was helpful to have the interviewer acknowledge that, no, this is not really how we work in real life. But I’ll be reflecting on this for next time (and there will be a next time) and hope these tips are helpful. For better or worse, this now appears to be the norm in technical interviews.

Categories
Uncategorized

cyber mai tais

Have you ever had a Mai Tai? If you drink, the answer is probably “yes.”

Was it made correctly? The answer to this one is “probably not.”

There is some debate about the origin of the venerable tiki cocktail, but I personally believe the original recipe for the Mai Tai should be attributed to Vic Bergeron, aka “Trader Vic.”

This is the recipe:

  • 2 ounces Jamaican rum (I use 1.5 ounces Appleton Estate and 0.5 ounce Diplomático, which is Venezuelan, but it’s damn good)
  • 1/2 ounce orange curacao (Cointreau is fine even though it’s technically a triple sec)
  • 1/2 ounce orgeat syrup (Check Wegmans)
  • 1/4 ounce simple syrup (cane sugar)
  • Juice of one whole lime

Pour into a shaker with a lot of cracked ice, and shake vigorously. You really want to froth it up in that thing. Then pour the whole thing, with the ice, into a double old fashioned glass. Top with a sprig of mint and one half of your spent lime shell. It’s supposed to look like an island with a palm tree. Look, I don’t make the rules, just go with it.

If this sounds odd to you, it’s because most Mai Tais today are made with a mix, and usually with fruit juices like pineapple, and/or coconut flavorings. Then a pineapple stick with a maraschino cherry is added. I do add the cherry myself, but the original recipe doesn’t call for that, or any of the other stuff. Over the years, it’s been twisted into something commercial, and often looking like it literally came from a commercial.

The “real” Mai Tai is a simple drink that takes good, fresh ingredients and turns them into something pretty remarkable. In fact, you might read the ingredients on their own and wonder if this is actually any good, but the whole really is much greater than the sum of its parts.

Once a mix is put together, it can’t be disassembled. It’s OK to use mixes if they’re good, like Zing Zang, but when the mix you’re drinking differs wildly from what the cocktail was trying to do in the first place, that should arouse some suspicion. The intent of the Mai Tai: 1) rum can be appreciated, and 2) simple is better.

So what can cyber practitioners learn from the storied history of the Mai Tai? Do things simply, and execute them well. K.I.S.S. “Complexity is the enemy of security,” so too for a good cocktail. We’ve seen broad consolidation of tooling in the security market in 2024. Vendors herald this as a great thing for their customers, and naturally we’ve seen the return of the 2010s-era marketing around visibility through “a single pane of glass.” 

I can see through a bottle of cocktail mix just fine; I still have no idea what’s actually inside it, or whether it was mixed correctly. Practitioners should be wary of this scheme. A single pane of glass isn’t valuable if you can’t really tell what’s on the other side of the pane, or if the pane only becomes clearer with an additional license. Nothing in security begets more work than tooling: an endless series of options that generate more work for you and more ARR for your vendor.

The “zero trust” marketing from vendors hit a fever pitch about two years ago, then when generative AI became popular, it seemed to all go out the window. Why? Maybe it’s because zero trust is real work, or maybe it’s because good security practitioners can look at zero trust as an opportunity to stop using all of their expensive tooling and focus on basics, and vendors started to realize that too. 

Palo Alto has disclosed multiple vulnerabilities in 2024 at a rate I haven’t seen before. Time will tell if this is a one-off, or if it’s because they’ve finally spread themselves too thin. Which is it? Their firewalls were the rum, the services were the syrups, and Panorama was the lime. You knew what it all did. Look at Palo Alto’s website now. Do you have that same assurance today that these parts are still just as effective? What about CrowdStrike? Are all of these features delivering value, or are they a solution in search of a problem, and if so, is the problem “we need to figure out how to deliver shareholder value?”

The fresh Mai Tai is unpretentious and needs no unnecessary adornment or sweetener. Don’t take my word for it, mix it yourself. There is no beverage on Earth more balanced (except for maybe Coca-Cola, which coincidentally also retains its original recipe); it’s sweet, but just so. Tart, but not too much. It is wholly punchy and refreshing. It knows what it wants to do and does it without getting in its own way, which is something security practitioners seem to continuously forget how to do.

It works smart, not hard.

You probably want to limit yourself to two though, unless you’ve got a couple of plates of skewered chicken and crab rangoon. Aloha!

Categories
Cybersecurity General

a more secret santa

I ran a secret Santa for my distributed friend group this year. It being 2023 and all, I used ChatGPT to do the Santa assignments and generate some creative secret Santa messages, and it did an admirable job. As usual, more detail in the prompt is better.

Here was my prompt:

Here is one of the results:

Unfortunately as the organizer, doing these things comes with the major drawback of “well, I’m the organizer, so I know who everyone is sending a gift to,” making it only a secret-ish Santa.

There are web-based services that do exactly what I was trying to do with this project, but since none of them have the prerequisite of being a huge nerd, I decided to home-grow my own solution using Python and Ollama, a free tool that runs Meta’s Llama large language models locally.

I also figured I’d throw some encryption in the mix so the message would actually be a secret, not just to the Santa participants, but to the organizer. So I slapped it together and called it “Secreter Santa.”

If you want to try this thing, you need to be running a recent version of Python 3, because there is some reliance on the correct ordering/indexing of lists. That’s something I can change, but this was just a fun proof of concept. You also need to run Ollama, and there are some libraries you need to import to send the prompt to the LLM and do the cryptography.

Ollama is totally free, and you can download it at https://ollama.ai – the local Llama models are not as good as OpenAI’s at handling prompts, which leads to some weird results. I’ll get into that later.

I used a hybrid encryption scheme to do this, since the return from the LLM prompt is an arbitrary length, and you can’t encrypt data with RSA that is longer than the key itself.
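To make that length limit concrete, here is a quick back-of-the-envelope in Python, assuming OAEP padding with SHA-256 (the formula is from RFC 8017; the padding scheme is my assumption, not something the Santa code dictates):

```python
# RFC 8017: for RSA-OAEP, plaintext length <= k - 2*hLen - 2, where k is the
# modulus size in bytes and hLen is the hash output size (SHA-256 -> 32 bytes).
def max_oaep_plaintext(modulus_bits: int, hash_len: int = 32) -> int:
    k = modulus_bits // 8
    return k - 2 * hash_len - 2

# A 2048-bit key caps out at 190 bytes, far shorter than an LLM-written message,
# hence encrypting the message with AES and only the AES key with RSA.
print(max_oaep_plaintext(2048))
```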

How it works:

  1. The organizer collects an RSA public key from each participant. There are a fair number of free tools online you can use to generate keys. I wouldn’t use them for production, but for testing they can help.
  2. The program runs and prompts for the name of each participant and their RSA public key, which can be pasted in; both go into a dict.
  3. The program uses the random.shuffle() method to assign the participants to each other.
  4. The program sends a prompt to Ollama, which is assumed to be listening on its default port.
  5. The program generates a random 16-character AES key for each participant and uses the key to encrypt the message returned from the prompt. The encrypted message is written to a file.
  6. The program takes the public RSA key for each participant and encrypts their corresponding AES key.
  7. The encrypted AES key is printed to the console.
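The assignment step is simple enough to sketch with the standard library alone. This is my own minimal take, not the actual program; it assumes at least two participants (with one, the loop would never terminate):

```python
import random

def assign_santas(names: list) -> dict:
    """Shuffle recipients until nobody draws themselves, then pair giver -> recipient."""
    recipients = names[:]
    while any(g == r for g, r in zip(names, recipients)):
        random.shuffle(recipients)
    return dict(zip(names, recipients))

pairs = assign_santas(["Ana", "Ben", "Cam", "Dee"])
print(pairs)
```

Retry-until-valid is the lazy approach; rotating the shuffled list by one position would guarantee a valid assignment in a single pass, at the cost of always producing one big gift cycle.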

Once the organizer sends both the encrypted message file and the encrypted key to each recipient, they can run the “Unsecret Santa” program to decrypt and display the contents.

Unsecret Santa prompts the user for their message file, their .pem (RSA private key) file, and the encrypted key. It does the work for you from there and displays the message.

So, from a pure security perspective, there are some holes here, but it’s still interesting – unless you’re yanking the unencrypted message out of memory before the encryption step, there’s no way to attribute the message to any author, because it wasn’t written by an author, and the sender of the message has no idea as to the message’s contents. There is some element of digital signing that could happen here too, but let’s not get too far ahead.

Anyway, this is where I ran into some limitations of Ollama, which is just a little too eager to offer its…guidance on things it wasn’t prompted for.

This result was pretty good but it’s weird to me that it offered specific suggestions as to the gift without being prompted to do so, which is not something I ever experienced using ChatGPT.

In another return, the prompt offered “a hint” that the recipient “loved coffee” and specifically asked their Santa to order their recipient coffee for their gift.

The results of the prompt varied pretty wildly in weird and sometimes funny ways. Some of them include lots of Instagram-worthy hashtags, some are quite formal in nature, and others are only a couple of curt sentences. I recently saw a comparison chart of large language models making the rounds on LinkedIn, and I can’t lend that too much credibility (because LinkedIn) but it did have Ollama at the bottom.

Still, ya can’t beat the price.

Categories
General

eoy 2023 update

Hi, I’m here. Life was essentially plunged into chaos, I didn’t reissue my SSL certificate, and I just let the site lapse, which was bad.

In 2022 I decided to re-activate my LinkedIn profile and ended up doing more of my posting on LinkedIn, until I thought, “hey, I have an actual website that I pay real money for, on which I can post things, and those things aren’t the property of Microsoft.”

So I’d like to do more posting here instead and may cross-post some of my longer stuff that I already wrote on LinkedIn back in here, then I can have two places where I’ve written some stuff that nobody will read.

I have some cleaning up I need to do on the site; my resume has gotten better with a new role, and I can happily (I think) say that I’m done with 7 of 11 classes in my master’s program at Virginia Tech. We also had a daughter, who turns 2 in a few days, and Oliver is 4, which is wild.

We also moved into a new house, which is great despite being also something of a shitshow, though we’re doing our damndest to let it go. The house itself is pretty baller; the last place was just getting too small, so we needed something where each of us could have a dedicated office space.

My hosting package for this site goes through the middle of the year. We’ll see how it goes, I’m considering packing it up and moving it over to something like an Azure static page, but flitting through the WordPress options and remembering how easy it is to use, I may just keep it here.

Anyway, that’s all for now.

Categories
Cybersecurity General

ssl decryption

I had a brief conversation with a friend (hi Brad, again) the other night about SSL decryption. I could tell he was wary of the idea of SSL decryption in the business, and rightfully so!

Your employer breaking open the encryption on your network traffic seems like a huge violation of your privacy. I don’t really have a hard-line stance about this at work – you likely signed away your expectation of privacy on your work network as part of an acceptable use policy – but most companies have some commitment to their customers’ and users’ privacy and it’s a foregone conclusion that people are going to do some personal activities on their work devices. Does that mean your employer has access to all of your banking information? Probably not!

I’m not going to go through every nuance of SSL (TLS) and SSL decryption but will go through what SSL is, how it works, and what I believe a sensible policy should include and why decryption has become such a hot topic lately for businesses.

What is SSL?

Let’s start from the beginning. SSL stands for Secure Sockets Layer; its successor is TLS, Transport Layer Security.

(Going forward, I will just say SSL. The differences between the two are pretty minor and beyond the scope of this post.)

SSL is a cryptographic standard for encrypting data traversing a network, most commonly across the internet. It puts the “S” (secure) in “HTTPS.” SSL uses both asymmetric and symmetric-key cryptography via a public key infrastructure.

Here’s the deal. In the IT world, we usually try to use real-world analogies to explain technical concepts, as if the things we’re talking about somehow do not exist in the real world. Anyway, when you’re talking about cryptography, this is a hard thing to do.

I can’t take full credit for this “one-way box” explanation, but I don’t remember where I read it, and I’m going to add some of my own flair. Imagine you want to buy a car, but it’s the pandemic and you’re at home and you don’t want to spend hours at the dealership with a bunch of paperwork. “No problem,” the dealer says. “Just fill your information into a secure form online.”

Now imagine that the form is a physical item and you have to give it to them in person. (Bear with me.) The dealership has a “paperwork box” to drop the form in. But this is a weird box. It has one opening in the front, and another opening in the back. The back opening is locked with a strange lock with a keyhole, but it’s so complicated, you’ve never seen anything like it.

There’s another weird thing about this box. You put the form in, and it falls out the bottom. Huh? You try again, same thing happens. Then you notice a stack of envelopes on the top of the box, with a label that says “free, take one.” The envelope has some detailed information about the dealership, manager contact info, etc. There’s even a stamp on it from the chamber of commerce** with the business license number, and signed by their representative. Clearly, this dealer really is who they say they are. You put your payment info form into the envelope, and put it in the box. Voila. It’s accepted. Congratulations on your new…I dunno. Tesla.

If you read that and said, in your best The Simpsons Comic Book Guy voice, “sir, there is a glaring technical error in your analogy,” the glaring error being that encrypted data is not encapsulated in “an envelope” but algorithmically altered as to be unreadable in transit, yes you are correct.

But I don’t have a good analogy for this, and I’m not sure it exists. If it does, leave one in the comments!

Let’s break down the analogy into its technical elements, and talk about how this transaction would’ve occurred if you really could send this form electronically to the dealer. You go to the form on the dealer website, and you see the “secure” icon in your address bar, indicating your connection is secure.

Your browser just went to the dealer’s website and got the public key from their web server. This is the envelope with all of the information on it – the public key encrypts the data. Anyone can get the public key. It’s free. The public key has a mate, the private key. The private key decrypts the data, or can open the lock on the back of the box. It would be pretty bad if someone else got that key, so the dealer has taken extra steps to prevent its theft. Theoretically.

You can also generate a key pair whenever you want. “Does this mean I can just pretend to be the dealership?” Not exactly. Remember the stamp from the chamber of commerce? The one with the signature? That’s an independent third-party verifying the identity of the dealer. The web works the same way.

Secure website key pairs are generated in what’s called a certificate signing request. Basically, “hey, chamber of commerce, can you certify that I am who I say that I am, and keep a public record of it?” When the request is approved by a certificate authority, the public key of the pair* is tied to a digital signature, and returned to you as a certificate you can install on your web server. Every web browser is pre-configured to have enough relevant information about the certificate authority (the chain of trust) that the user doesn’t need to take any other action here, just like you don’t need to take any action to trust that your chamber of commerce has accurately assigned business licenses. Neat!

There is a little more to the process. When your browser verifies the certificate of the website, it uses their public key to encrypt some random data to send to the web server. Remember, only the web server can decrypt this data with its private key. This data becomes the session key between both machines. The web server decrypts this session key and returns a message to your browser, as if to say “I can prove to you that I have the key to this box, and I can open it.” Because both parties are now using the same symmetric key, data can go both ways. It’s pretty cool. This process is called the SSL handshake.

Here’s an image of the handshake from IBM. All credit to them.

This technology is really foundational to privacy and security on the internet. You can learn more about the encryption algorithms – there’s plenty of info out there.

SSL Decryption

This isn’t a foolproof process, though messing with it takes some resources. An organization can declare itself a certificate authority for its users and issue certificates to user endpoints on behalf of the destinations they’re trying to reach. Since business endpoints trust this enterprise certificate authority as a legitimate entity, the certificates they receive appear to come from the real destination. This internal-only CA certificate is (definitely should be) deployed only to managed endpoints and added to their trust stores; no public certificate authority will sign an interception CA.

From here, the decrypting device can act as a “man in the middle” and can proxy requests for secure websites. Because the endpoint trusts the decrypting device, and the decrypting device has (or has immediate access to) the private key, the device decrypts the traffic, inspects it, then forwards it to the real website using the same process we already went over. The real website doesn’t know any better.

So to sum that up, the decryption process necessitates:

1) A client’s willingness to have its chain of trust manipulated
2) The proper certificates to enable the decryption process
3) A device that can facilitate the work of performing the decryption

The question is, why?

Challenges in the Business Environment

Man, getting an SSL certificate used to be a process. In my day (combs neckbeard) you had to pay $20-ish for the certificate, then create the CSR, then upload it, then get the certificate, then there’d be some setting in IIS or httpd.conf you’d have to change, then invariably you messed something up, then you’d have half your site on not-encrypted http and other parts on https, then you’d have to restart httpd, then you’d look at some other thing for a while and forget what you were doing to begin with.

BOR-ING. Now you can just use Let’s Encrypt and certbot to get a free certificate installed on your web server in like a second. BOOM. You’re good to go faster than Sonic the Hedgehog after a pile of chili dogs. Sheeeeeeeeeeeeeeyit.

What’s the most common cyber-attack? DDoS? Maybe, but if phishing isn’t the most common by now, it’s extremely close. So let’s say you have a user base that isn’t the most technically inclined.

What have they been taught all their lives? Green padlock = safe! So when they click on a link in a phishing email, and that link takes them to “their bank” or “the company benefits page” – they see the green padlock. “This is safe” they think, put their credentials in, then they press enter.

For IT security enthusiasts, privacy advocates, and professionals, this truth gets into the range of being pretty uncomfortable. It’d be a reach to say something like “well, if everything is encrypted, nothing is encrypted,” but that’s sort of…we are on that bus. Don’t get me wrong, I think Let’s Encrypt is an amazing project and will continue to do great things for the internet. But encryption is a tool, and tools can be and are used for harm, and the people who stand to be the most harmed by it are not C-level executives, but employees trying to do their best. There are indisputably personal and professional impacts to people acting in good faith but affected by cyber threats.

If C&C traffic, malware traffic, and phishing websites are able to operate/communicate in a way where researchers and defenders have no insight into them, I’m worried about what that means for the next conversation we have about encryption at large, so businesses having sensible, practical, people-first decryption policies is a decent set of brake pads we can put on the bus.

Sensible Policies

Back to the concern about banking. I believe all decryption devices are able to selectively apply decryption, and if you are looking at a device where that is not an option, please look elsewhere. The engineer who is configuring your decryption should be able to put sites like https://bankofamerica.com in their decryption exclusion list, where the enterprise certificate authority is not used to facilitate an SSL forward proxy, and the chain of trust is not altered for that session.

After you’ve reviewed the legal obligations in your area about encryption, and had a conversation with HR and your leadership team about the go/no-go, consider the way you’ll implement your policy on the whole, and how it can add value to your business without completely betraying the trust of your users. Remember that the practical applications of cybersecurity are, first and foremost, value-oriented activities. Consider the messaging you provide to your teams and stakeholders.

What sounds better to you?

“Beginning Monday, we will be implementing web decryption on our network. We expect you all to sign new acceptable use policies regarding the use of this new technology.”

“Given the recent expansion of email phishing, ransomware, and malware attacks on organizations across the country, we’ve decided to implement web decryption to keep our business assets and users safe. We’ve worked with our partners to come up with a deployment solution that only targets suspicious activities and have attached updated documentation that explains what this means for you.”

Have a strategy. Vendors are more than willing to work with you on this; because of the increased processing requirements for decryption, it’s often an avenue for them to make another sale. Palo Alto Networks has a very helpful page about coming up with a decryption strategy for your network, even if you don’t use their products.

Leverage your synergies.*** How does your decryption device fit into the rest of your network? Is it a firewall? Can you set up your decryption based on existing URL categories? For example, you might decrypt on “unknown” or “web-posting” (think pastebin) but not decrypt on “banking” or “ecommerce.” Are there any data loss prevention or credential theft features you can also take advantage of?
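As a sketch, a category-driven policy boils down to a lookup. The category names here are hypothetical and won’t match any particular vendor’s list:

```python
# Hypothetical policy table; real devices let you match on vendor URL categories.
DECRYPT_CATEGORIES = {"unknown", "web-posting"}
EXCLUDE_CATEGORIES = {"banking", "ecommerce", "health-and-medicine"}

def should_decrypt(category: str) -> bool:
    """Exclusions win; everything else is decrypted only if explicitly listed."""
    if category in EXCLUDE_CATEGORIES:
        return False
    return category in DECRYPT_CATEGORIES

print(should_decrypt("web-posting"), should_decrypt("banking"))
```

Defaulting to “don’t decrypt” for unlisted categories keeps the policy conservative; you could flip that default, but then your exclusion list has to be exhaustive.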

Be transparent. You are indisputably taking privacy away from your users here, even if they know they don’t have an expectation of it. You owe them a thorough explanation of the process and how they may be affected. What websites are you decrypting on? What was the business justification for decrypting that website or category?

I hope that’s a helpful primer on SSL, decryption, and why we’re seeing more and more of it at scale. Having to implement this technology in the business is a nose-holding endeavor, but I do see it as increasingly necessary as the majority of the internet goes secure and we see the continued proliferation of cyberattacks. If you’re leveraging this in your organization, how’s it going? Let me know in the comments.

*(The private key is too, because the public and private keys are inextricably linked, but the certificate authority doesn’t need your private key to generate the certificate. In fact, don’t send the private key to them. Don’t send it to anyone. Seriously.)

**(It has been pointed out to me that this is not actually something a chamber of commerce does. This is why I am a technologist and not a businessy businessperson. INTERNET: SERIOUS BUSINESS.)

***(Did I really write these three words?! In a row?!)