Categories
Cybersecurity General

you probably don’t need desktop mfa

Update 9/25/24: I was informed after writing this post that NIST has published a draft of the latest revision of SP 800-63B, Digital Identity Guidelines. I recommend referring to Section 3.1.7, “Multi-factor cryptographic authentication.”

We recently had a conversation about password rotations at work. One of our engineering managers suggested we implement a password rotation policy.

When I explained to them that was no longer a recommended practice, another engineering colleague suggested we implement multi-factor authentication (MFA) on endpoints (laptops). At nearly the same time, Okta started pushing an add-on called Okta Device Access, where one of the main features is “Desktop MFA.”

(To be clear, I am a fan of Okta. Okta is a good company. They have been good to me in my career. I’m picking on them here because my current employer uses Okta, and therefore, I use Okta. Okta is not the only organization that offers Desktop MFA. I also have a Yubikey. Yubico is also a good company.)

Both of the conversations I had about implementing this feature were identical.

“We/you should require MFA to log in to desktops.”
“What do you mean?”
“You should require a second factor to get in besides just the password.”
“But, you have to have the device, right?”
“Yeah…”
“That is MFA. The device is something you have, and the password is something you know.”

The counter-argument I hear to this is “well, but if they have your device, the password is the weak link.” What people who argue this actually mean is “possession of the device only authenticates the machine, not the user,” but we’ll get to that later.

Assume for a minute that trusted platform modules (TPMs) and cryptographic subsystems such as the Secure Enclave don’t exist. Now consider the relationship between a computer user and a computer: a series of interactions with hardware at a base layer, and a collection of services on top of that hardware layer. Most of the services are independent of the hardware beyond dependencies created by the chip architecture. It should therefore be possible to transplant a disk between like-for-like architectures and expect the hardware to load the services on that disk. AKA, computer turns on.

So, if we’ve established a logical separation between the hardware and the software, then what vendors really mean by “desktop MFA” is “operating system MFA.” You’re not accessing your computer; that would require a screwdriver. You’re accessing a service, the operating system. Per our scenario above, this is a bit problematic, since anyone could, in fact, take the boot storage out of the desktop, transfer it to another machine, and voila.

But in our scenario, we did not have a TPM.

You must have that TPM (which facilitates identifying you and only you on an encrypted disk assigned to you and only you) in your possession and another factor to successfully access that operating system. Just having the disk doesn’t work; you cannot transplant the disk or the TPM. Per Apple’s documentation on the Secure Enclave:

“the [unique] key hierarchy protecting the file system includes the [unique] UID, so if the internal SSD storage is physically moved from one device to another, the files are inaccessible.”

That. Is. MFA. Once we acknowledge that TPMs do exist and can facilitate rapid full-disk encryption and secure authentication via a biometric, we can meet a secure standard for MFA, and that standard is pretty high.

Which brings us to the “device + sticky note” argument. This rebuttal is not an argument for device MFA. It’s an argument against passwords.

“Desktop MFA” seeks to solve a problem that no longer exists. If you don’t believe me, ask NIST, where they define a “multi-factor authenticator” in the context of defining “multi-factor authentication:”

“An authentication system that requires more than one distinct authentication factor for successful authentication. Multi-factor authentication can be performed using a multi-factor authenticator or by a combination of authenticators that provide different factors.”

“Multi-factor authentication requires 2 or more authentication factors of different types for verification.”

Different types. NOT different devices. Multifactor authenticators acknowledge the possibility of passing a test for MFA as long as the factor types are logically separated. That the Secure Enclave and Touch ID reader exist in the same physical space as the rest of the system’s components does not matter.

Consider the much simpler scenario of accessing a phone. In almost all cases, that phone is designed to be used by you and only you. You must have physical possession of the phone and only that phone, plus a way to provide a second factor (knowledge or biometric) to the phone, to access it. Again, this is MFA. It’s so effective, in fact, that the government will compel you to sit in prison until you unlock the phone if they suspect you’ve committed a crime and need the phone’s contents for evidence, assuming you are alive. In the highly publicized case of the FBI “hacking in” to the San Bernardino shooter’s iPhone, the FBI gained access not by breaking the requirement for a second factor, but by breaking a mechanism in iOS that erases the iPhone after hitting a ceiling of passcode guesses. They were then able to brute force the passcode.

Let’s say you do decide to implement desktop MFA anyway. How are you going to do it? Are you going to allow your users to use their personal phones with an authenticator app push? How is that app secured?

If the answer is “we will force the phone’s owner to use Face/Touch ID to open their phone,” congratulations:

  • you traded one biometric for another and are relying on that biometric to work consistently on an enrolled device that is not managed by you, unless you are issuing phones, which is no longer common
  • in other words, you just swapped a biometric and a possession factor for a different biometric and possession factor
  • if the phone’s factors, a(2) and b(2), are acceptable, why aren’t the endpoint’s, a(1) and b(1)?
  • you have to support it when it doesn’t work
  • you’re still susceptible to MFA bombing

If the answer is “we will require the phone’s owner to use a password to open their phone,” congratulations:

  • you are now satisfying “true” MFA, but you’re relying on a factor (knowledge factor) that is not phishing resistant
  • you have zero assurance that the password used to open the phone and the password used to log in to the endpoint are not the same password, and they probably are.

If the answer is “we will skip the authenticator app push and issue YubiKeys,” congratulations:

  • You have to deal with managing YubiKeys now, good luck!
  • There is not a material difference between using a YubiKey and using integrated Touch ID or Windows Hello. Consider that all YubiKeys are passkeys, but not all passkeys are YubiKeys. “But the YubiKey is separate” – the YubiKey is only separate until the second it’s plugged into the device, and it can be safely assumed that the YubiKey will nearly always be in close proximity to the device.

Look, if you’re still going to say “possession of the device you are authenticating to does not count as a possession factor,” that’s fine, but you’re specifically asking for 3FA or 2(n)FA, so just ask for that if it’s what you want. No vendor calls their solution 3FA or 2(n)FA because nobody uses those terms even if that’s what they really want, and if vendors started using them, organizations would have to think critically about things like “desktop MFA” and realize that it probably isn’t necessary.

OK, so let’s go back to the password thing for a minute.

We security practitioners only feel uncomfortable with the idea of a multi-factor authenticator when one of its authentication mechanisms is a password. When it comes to cryptographic and biometric mechanisms, we have always felt a high degree of reliance and assurance. Certainly higher than in passwords, anyway.

So, why can’t we just acknowledge that we have a real opportunity here to make everyone’s experience both much simpler and much more secure? I can’t believe I’m going to do this, but I’m actually going to quote Elon Musk: “The best part is no part.”

It would have been crazy to suggest 10 years ago, when Touch ID was first introduced, that biometric inputs and TPMs would be as ubiquitous as they are now. Apple performed a literal miracle here: they actually gave regular users a method by which they could authenticate to a device securely, and users wanted to do it.

Today, nearly everyone has one of these things both in their pocket and integrated into their endpoint, yet it feels like the industry writ large is looking for reasons to hold on to passwords for dear life. For whatever reason, we haven’t let this new paradigm catch up to us.

As Brian Moriarty says: “The treasure is right there.”

When multi-factor authentication first started becoming part of the cybersecurity discourse, it was commonplace and accepted to rely on an SMS message as a second factor. Now, nearly no serious cybersecurity practitioner would recommend the use of SMS. “Better than nothing, but not good” is the approximate industry take on that thing.

If we can look back on SMS and agree that we’ve evolved away from relying on it as a factor, truly reflecting on what made sense at the time and no longer does, we can do the same for passwords. When modern desktops are managed well and we stop relying so much on knowledge factors, implementing solutions like desktop MFA starts to look more like a way to maintain the existing paradigm than a way to evolve beyond it.

We are at an inflection point, and it’s time to make it the reflection point. We have WebAuthn, we have passkeys, we have biometrics, we have Touch ID, we have Windows Hello, we have certificates, and we have full disk encryption that is nearly invisible to the user.

We know what we need to do. Are we ready to ask why we aren’t doing it?

Categories
Cybersecurity General

rapid fire rfq

One of my favorite talks at RVASEC 2024 was the one that surprised me most, called “Social Engineering the Social Engineers” by David Girvin at Sumo Logic.

If you work in a technical leadership role, you should absolutely view this talk, because 1) it’s funny, and also 2) it will really help you understand the relationship between you and salespeople, the ways salespeople are incentivized, and how you can leverage the sales process to inform better decision making around tooling evaluation.

The net result of this enlightenment should be a shorter evaluation-to-deployment lifecycle, enabling you to extract more technical value out of your tools. Girvin broke a lot of my assumptions about sales, and I’ve already started putting some of his tips into practice to good effect.

I want to pause for a minute here and say that I have worked with some truly great salespeople and some of them have become friends of mine. I have also worked with very poor salespeople. Tech sales is a tough job and I respect it, but I also have my own job I need to do.

Anyway, Girvin makes a big deal out of transparency and not keeping the competition a secret. In retrospect, that’s actually an obvious tip that goes back to Kerckhoffs’s principle, and it’s not like vendor A is unfamiliar with competing vendor B. They “know the system.”

I had a unique opportunity to put this tip into practice, and did an experiment in radical candor when my org had a popular enterprise security product up for renewal. The scenario was essentially “hey, we have this tool and it’s up for renewal, we need to put cost pressure on them.” The worst-case scenario was that we couldn’t get pricing, and we’d just apply indirect pressure – after all, “this is my best and final offer” usually isn’t.

So I contacted two competitors. Vendor A and Vendor B. I contacted them through the sales intake forms on their websites. An SDR responded to me from each vendor “wanting to know what we were looking for.” Here was my response:

Thanks for getting back to me. I want to be transparent about the outreach – we are currently $EXISTING_VENDOR customers, we are pretty happy with $EXISTING_VENDOR, we are probably not looking at an extended demo or POV, and our focus is getting comparable and competitive pricing. 

Of course, if it’s a slam dunk, there might be something here, and although right now we’re doing a primarily numbers/market-based evaluation, we’re happy to get a product/feature overview.

Vendor A played ball on this. We had a conversation, it was a good product, we got the product overview and a brief demo, and the next day we had a quote, and it was in fact lower, though not by much, than the existing product. Vendor B did not get back to me after my email above. But we only needed one response. We also now know that we were talking to a legit competitor, and even in this biz, relationships are king.

During the call, I presented this as us doing a “rapid-fire RFQ” (request for quote), and it became clear to me that the SDR thought this was an unusual tactic. “Do you do this for all of your renewals?”

Well, we do now.

Categories
General

linkedin is bad, actually

The week before this last vacation, I wrote a lengthy post on LinkedIn discussing the CrowdStrike RCA report. I thought it was helpful in explaining some of the details of the report, the actions you should or should not take as a CrowdStrike customer, and how I thought this thing was going to play out with their customer base.

That post got almost no engagement: no likes, reposts, or comments, nothing spurring further genuine discussion or pointing out things I had missed.

It was at this point that I devised a theory – that if I wrote a post about Hawk Tuah Girl AKA Hailey Welch, it would generate better engagement than the illustrative and professional content I wrote about a recent cyber event.

I didn’t use the words “hawk tuah girl” in the post, but I barely disguised the fact that I was talking about this woman, and used her full name. I pre-empted any accusation of unprofessional content by citing the Vanity Fair and NYMag articles about her.

Sure enough, hawk tuah generated significantly more engagement than my CrowdStrike RCA writeup. It was at this point, among many other reasons, that I decided I’d had enough and closed my LinkedIn account.

There are people I connected with on LinkedIn that I genuinely did enjoy interacting with, and I will miss that. I felt the same way about Twitter/X. I have not had a Twitter account in a number of years. But, when I ask myself the question: “what have I really gotten out of this platform” – in the long run I just can’t come up with a good answer other than “actually, I think this may have been a colossal waste of my time.”

I have never networked my way into a job from LinkedIn. I came close once, when a very good recruiter came across my profile and reached out, but I ended up withdrawing and took the job I’m in now that I got from old-fashioned networking. All of the other jobs I’ve gotten have either been through applying directly or from in-person networking.

The vast, vast majority of recruiters who reached out to me on LinkedIn seemed to be following a spray-and-pray strategy, often advertising jobs that, had they actually read my profile (which was 1:1 with my resume), they would’ve known I was not qualified for.

When I closed my account, I had 176 open requests to connect. Almost all of them were from salespeople. I have worked with some great salespeople, however:

1) I’m really not interested in talking to a stranger about their product. I am aware that BDRs/SDRs make money based on the number of appointments they book, and that is the endgame here. It’s cliché, but I’m well aware of what my pain points are. I will come to you if I need a problem solved.

2) When you connect with salespeople on LinkedIn, your feed fills up with content for salespeople. Ultimately, this is probably doing salespeople a disservice.

I know people find career success on LinkedIn, but it has never worked for me personally. I am not interested in sales calls.

I am also appalled at the amount of straight-up bad behavior there is on LinkedIn, including the stolen and re-hashed content, content obviously generated by AI (it’s really obvious), grifters, and a frankly disturbing number of people who use mental illness as a shield to deflect any claim that they’re being rude and shitty.

So, I am gone now. I maintain a private Instagram account with a small number of followers for the use of my close friends and family, and have no other social media presence.

And I’m upset about this, mostly at myself, for the wasted time I invested in another ARR generator for Microsoft. For the time I could’ve spent with my family, coding, or working on a side project, or even this website, which I operate and maintain entirely on my own.

I’m upset about the value they captured that I didn’t.

Categories
Uncategorized

task failed successfully

I am not entirely sure what has happened to hiring that the process has become so complicated, but I suspect it is something like “we have hundreds of applicants and they all have the same resume that they wrote with ChatGPT, so we don’t have any way to discern between candidates.”

As a result, hiring orgs have implemented novel things to try to weed out candidates who don’t make the cut on a technical level. In the past, for technical roles, this has taken the form of take-home assignments. I have done a couple of these and gotten offers after the fact. One was to write an essay about a GRC-ish topic, and the other was to create a reasonably basic network architecture diagram. Each of these took me about 90 minutes to two hours, which felt like a reasonable amount of time to ask a candidate to spend on a take-home.

Recently, I suspect with the advent of GenAI, we have seen the rise of the “live coding assignment.” I was approached by a really solid org about an enterprise security engineering role, and was told that I would indeed be encountering a live coding assignment, and that it would be OK as long as I knew how to code against APIs, which is something I do know how to do reasonably well. I was not really sure what to expect in said assignment other than the API thing, and my Python is…competent.

Spoiler alert: I did not get the job.

I appreciate that they did not give me a bullshit task (it wasn’t “fizz buzz”, and I know how to solve “fizz buzz,” and it is the most bullshit of bullshit tasks), and they asked me to just get some data out of an API and format it in such a way that it could be piped to a SIEM.

I did make a reasonable amount of progress, and it was the kind of thing I could’ve crushed as a take-home in two hours, but 40 minutes was just not enough time for me to deliver, and nerves probably got the best of me. As much as I find it distasteful that an org would disqualify an experienced candidate without a software engineering background by evaluating their ability to engineer software, this is unfortunately the world we live in now, so let me (the not-software-engineer) provide some tips on how to solve this kind of thing and what I would’ve done differently.

Practice in the Tool First

When I work with an API, I like to have a copy of the output in front of me in an easily readable form before trying to sort out whatever field or key-value pair I’m after. Look, I’m in my late 30s and I like things that are easy to read.

I killed a bunch of time in this interview trying to pretty print the data to the console, only to find out that the console just would not display the pretty printed output. They had offered me access to the tool (it was Coderpad) ahead of time with a sandbox, and aside from making sure I could import the requests library (which worked), I didn’t mess around with it. I tried pasting some other working code into the sandbox after the fact that did pretty print locally, and I still couldn’t get Coderpad to play ball.
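For reference, the pretty printing itself is only a couple of lines of standard Python; a minimal sketch (the endpoint is made up for illustration):

```python
import json

import requests

# Hypothetical endpoint, purely for illustration.
resp = requests.get("https://api.example.com/v1/events", timeout=10)

# Pretty print the JSON payload so fields are easy to scan.
print(json.dumps(resp.json(), indent=2, sort_keys=True))
```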

So, practice in the actual tool. If a sandbox is not offered, ask for one. If one is not available at all, maybe run away?

The Docs are the Source of Truth

Read the API docs. They are the source of truth. Almost every API has example output and input. I have never worked with an API in security engineering that wasn’t accessible via a URL and didn’t return its data in JSON. Usually the data comes back as a list, a dict, or a list of dicts. This API was just returning a dict, so it was reasonably easy to extract the values from the keys. A list of dicts is a little more complicated, but it’s not bad as long as you understand how lists work.
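A quick sketch of the difference, with payload shapes invented for illustration:

```python
# A single dict: grab values directly by key.
event = {"id": "evt-1", "severity": "high", "source_ip": "10.0.0.5"}
print(event["severity"])  # -> high

# A list of dicts: iterate (or use a comprehension) and grab keys per item.
events = [
    {"id": "evt-1", "severity": "high"},
    {"id": "evt-2", "severity": "low"},
]
print([e["severity"] for e in events])  # -> ['high', 'low']
```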

Anyway, the API docs will tell you how to build your headers. Just follow them. If you are spending time thinking about doing it a different way because you know that way works, you are wasting time. Just read the docs.

Write Good Code

This is one part where the interviewer said I was doing a good job. You should know the fundamentals of how to write code. I don’t think you need to be a coding expert to be good at cybersecurity, but you do need to know the basics. If you are writing everything as one big main function, your code is going to be impossible to maintain.

I originally learned to code in Java so this was basically beaten into me. Small pieces loosely joined. Small batches.
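As a sketch of what that looks like for a task like the one in my interview (the function names, endpoint, and auth scheme here are mine, not the actual assignment):

```python
import json

import requests

API_BASE = "https://api.example.com/v1"  # hypothetical


def fetch_events(token: str) -> list:
    """Pull raw events from the API, with headers built per the (assumed) docs."""
    headers = {"Authorization": f"Bearer {token}"}
    resp = requests.get(f"{API_BASE}/events", headers=headers, timeout=10)
    resp.raise_for_status()  # fail loudly instead of parsing an error page
    return resp.json()


def format_for_siem(events: list) -> list:
    """Serialize each event as one JSON line, a shape most SIEMs can ingest."""
    return [json.dumps(event) for event in events]


def main() -> None:
    for line in format_for_siem(fetch_events("YOUR_API_TOKEN")):
        print(line)


if __name__ == "__main__":
    main()
```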

Also, do leave comments for where you think you might want to make improvements or add features. I have a bad habit of leaving comments after I am done coding rather than while I am coding, so I left a few on my live task, but it was pretty half-assed.

Talk about what you’re doing

I did not do this and should’ve. It was a stream of consciousness, but it was like, nonsensical. Essentially I was just thinking out loud and probably embarrassed myself. In fact, the interviewer probably got off the call with me and was like “this guy is insane, we can’t hire him.” I didn’t have any strategy and wasn’t intentional about talking through what I was doing. I think this cost me.

Which, after the fact, is like, “duh” right? I would’ve practiced a presentation before I gave it, why didn’t I do this here? I know how to work with an API and have done some pretty cool things with the Okta API (the one I use the most often) because Okta is terrible about letting you do operations in bulk in the GUI.

Find a way to practice/free API resources

This came down to some weird timing for me, but I should’ve found or made more time to practice before going into the interview so I had more confidence. At that point I had just gotten back from a week’s vacation and spent the following week catching up on work, so I didn’t do any coding, and by then it had been several weeks since I’d coded anything at all. I didn’t make good use of the time, and just ended up constantly referencing a bunch of API code I had already written for work or academic projects instead of having confidence in what I was doing.

”But how do I practice on an API without paying $100,000 for a SaaS tool?”

APIs are great because they can teach you about coding and give you a fun little project where you can see results immediately, and there are tons of great APIs that are totally free!

Washington Metropolitan Area Transit Authority API: https://developer.wmata.com – this one is fun because it presents a lot of information as a list of dicts.

Massachusetts Bay Transportation Authority API: https://www.mbta.com/developers/v3-api

Bureau of Labor Statistics API, which I have used for academic work: https://www.bls.gov/bls/api_features.htm

This list from GitHub on free APIs – a lot of these are kinda more “just for fun,” so if you want to know how many minutes until the next Orange Line train leaves Clarendon, it’s not for you:

https://github.com/public-apis/public-apis
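As a worked example against the WMATA API above (the endpoint, header name, and station code are from my memory of their docs, so verify them; you’ll need their free API key):

```python
import requests

# WMATA passes the API key in an "api_key" header.
headers = {"api_key": "YOUR_WMATA_API_KEY"}

# Real-time rail predictions; "K02" should be Clarendon's station code.
url = "https://api.wmata.com/StationPrediction.svc/json/GetPrediction/K02"

resp = requests.get(url, headers=headers, timeout=10)
resp.raise_for_status()

# The payload is a dict whose "Trains" key holds a list of dicts.
for train in resp.json().get("Trains", []):
    print(train["Line"], train["Destination"], train["Min"])
```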

Better luck next time

Considering I had never done a live coding exercise before, I feel like I did okay. I definitely did not implode, but I didn’t excel, and my result didn’t deliver everything someone experienced would have delivered in the time allotted. I was disappointed, but not entirely surprised, when I heard I was not moving forward.

I didn’t make it, and I own that; it was helpful to have the interviewer acknowledge that, no, this is not really how we work in real life. But I’ll be reflecting on this for next time (and there will be a next time) and hope these tips are helpful. For better or worse, this now appears to be the norm in technical interviews.

Categories
Uncategorized

cyber mai tais

Have you ever had a Mai Tai? If you drink, the answer is probably “yes.”

Was it made correctly? The answer to this one is “probably not.”

There is some debate about the origin of the venerable tiki cocktail, but I personally believe the original recipe for the Mai Tai should be attributed to Vic Bergeron, aka “Trader Vic.”

This is the recipe:

  • 2 ounces Jamaican rum (I use 1.5 ounces Appleton Estate and 0.5 ounce Diplomatico, which is Venezuelan, but it’s damn good)
  • 1/2 ounce orange curacao (Cointreau is fine even though it’s technically a triple sec)
  • 1/2 ounce orgeat syrup (Check Wegmans)
  • 1/4 ounce simple syrup (cane sugar)
  • Juice of one whole lime

Pour into a shaker with a lot of cracked ice, and shake vigorously. You really want to froth it up in that thing. Then pour the whole thing, with the ice, into a double old fashioned glass. Top with a sprig of mint and one half of your spent lime shell. It’s supposed to look like an island with a palm tree. Look, I don’t make the rules, just go with it.

If this sounds odd to you, it’s because most Mai Tais today are made with a mix, and usually with fruit juices like pineapple, and/or coconut flavorings. Then a pineapple stick with a maraschino cherry is added. I do add the cherry myself, but the original recipe doesn’t call for that, or any of the other stuff. Over the years, it’s been twisted into something commercial, and often looking like it literally came from a commercial.

The “real” Mai Tai is a simple drink that takes good, fresh ingredients and turns them into something pretty remarkable. In fact, you might read the ingredients on their own and wonder if this is actually any good, but the whole really is much greater than the sum of its parts.

Once a mix is put together, it can’t be disassembled. It’s OK to use mixes if they’re good, like Zing Zang, but when the mix you’re drinking differs wildly from the original intent of the cocktail, that should arouse some suspicion. The Mai Tai’s intent was that 1) rum can be appreciated and 2) simple is better.

So what can cyber practitioners learn from the storied history of the Mai Tai? Do things simply, and execute them well. K.I.S.S. “Complexity is the enemy of security,” so too for a good cocktail. We’ve seen broad consolidation of tooling in the security market in 2024. Vendors herald this as a great thing for their customers, and naturally we’ve seen the return of the 2010s-era marketing around visibility through “a single pane of glass.” 

I can well see through a bottle of cocktail mix. I have no idea what’s actually inside the bottle, or if it was mixed correctly. Practitioners should be wary of this scheme. A single pane of glass isn’t valuable if you can’t really tell what’s on the other side of the pane, or if the pane only becomes clearer with an additional license. Nothing in security begets more work than tooling, an endless series of options that generate more work for you and more ARR for your vendor.

The “zero trust” marketing from vendors hit a fever pitch about two years ago, then when generative AI became popular, it seemed to all go out the window. Why? Maybe it’s because zero trust is real work, or maybe it’s because good security practitioners can look at zero trust as an opportunity to stop using all of their expensive tooling and focus on basics, and vendors started to realize that too. 

Palo Alto has disclosed multiple vulnerabilities in 2024 at a rate I haven’t seen before. Time will tell if this is a one-off, or if it’s because they’ve finally spread themselves too thin. Which is it? Their firewalls were the rum, the services were the syrups, and Panorama was the lime. You knew what it all did. Look at Palo Alto’s website now. Do you have that same assurance today that these parts are still just as effective? What about Crowdstrike? Are all of these features delivering value, or are they a solution in search of a problem, and if so, is the problem “we need to figure out how to deliver shareholder value?”

The fresh Mai Tai is unpretentious and needs no unnecessary adornment or sweetener. Don’t take my word for it, mix it yourself. There is no beverage on Earth more balanced (except for maybe Coca-Cola, which coincidentally also retains its original recipe); it’s sweet, but just so. Tart, but not too much. It is wholly punchy and refreshing. It knows what it wants to do and does it without getting in its own way, which is something security practitioners seem to continuously forget how to do.

It works smart, not hard.

You probably want to limit yourself to two though, unless you’ve got a couple of plates of skewered chicken and crab rangoon. Aloha!

Categories
Cybersecurity General

a more secret santa

I ran a secret Santa for my distributed friend group this year. It being 2023 and all, I used ChatGPT to do the Santa assignments and generate some creative secret Santa messages, and it did an admirable job. As usual, more detail is better when it comes to that thing.


Unfortunately as the organizer, doing these things comes with the major drawback of “well, I’m the organizer, so I know who everyone is sending a gift to,” making it only a secret-ish Santa.

There are web-based services that do exactly what I was trying to do with this project, but since none of them have the prerequisite of being a huge nerd, I decided to home-grow my own solution using Python and Ollama, a free tool for running large language models like Meta’s Llama locally.

I also figured I’d throw some encryption in the mix so the message would actually be a secret, not just to the Santa participants, but to the organizer. So I slapped it together and called it “Secreter Santa.”

If you want to try this thing, you need to be running Python 3, and a recent version, because there is some reliance on the correct ordering/indexing of lists. That’s something I can change, but this was just a fun thing and a proof of concept. You also need to run Ollama, and there are some libraries you need to install to be able to send the prompt to the LLM and do the cryptography.

Ollama is totally free, and you can download it at https://ollama.ai – the models it runs are not as good as OpenAI’s at handling prompting, which leads to some weird results. I’ll get into that later.

I used a hybrid encryption scheme to do this, since the return from the LLM prompt is an arbitrary length, and you can’t encrypt data with RSA that is longer than the key itself.

How it works:

  1. The organizer collects the RSA public keys from each participant. There are a fair number of free tools online you can use to generate keys. I wouldn’t use them for production, but for testing they can help.
  2. The program runs and prompts for the name of each participant and their RSA public key, which can be pasted in; each name/key pair goes into a dict.
  3. The program uses the random.shuffle() method to assign the participants to each other.
  4. The program sends a prompt to Ollama, which is assumed to be listening on its default port, to generate the Santa message.
  5. The program generates a random 16-character AES key for each participant and uses that key to encrypt the message returned from the prompt. The encrypted message is written to a file. (A sketch of this step and the next follows the list.)
  6. The program takes the public RSA key for each participant and encrypts their corresponding AES key.
  7. The encrypted AES key is printed to console.
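The post’s actual code isn’t reproduced here, but as a minimal sketch of steps 5 and 6 using the pyca/cryptography library (the AES mode and padding are my choices; the original implementation may differ):

```python
import os

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def encrypt_for(recipient_pem: bytes, message: str):
    """Hybrid encryption: AES-GCM for the message, RSA-OAEP for the AES key."""
    aes_key = AESGCM.generate_key(bit_length=128)  # 16 bytes, as in the post
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, message.encode(), None)

    public_key = serialization.load_pem_public_key(recipient_pem)
    encrypted_key = public_key.encrypt(
        aes_key,
        padding.OAEP(
            mgf=padding.MGF1(algorithm=hashes.SHA256()),
            algorithm=hashes.SHA256(),
            label=None,
        ),
    )
    return nonce, ciphertext, encrypted_key
```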

Once the organizer sends both the encrypted message file and the encrypted key to each recipient, they can run the “Unsecret Santa” program to decrypt and display the contents.

Unsecret Santa prompts the user for their message file, their .pem (RSA private key) file, and the encrypted key. It does the work for you from there and displays the message.
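The decryption side is the mirror image; a sketch under the same assumptions as above:

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def decrypt_message(nonce: bytes, ciphertext: bytes, encrypted_key: bytes,
                    private_pem: bytes) -> str:
    """Recover the AES key with the RSA private key, then decrypt the message."""
    private_key = serialization.load_pem_private_key(private_pem, password=None)
    aes_key = private_key.decrypt(
        encrypted_key,
        padding.OAEP(
            mgf=padding.MGF1(algorithm=hashes.SHA256()),
            algorithm=hashes.SHA256(),
            label=None,
        ),
    )
    return AESGCM(aes_key).decrypt(nonce, ciphertext, None).decode()
```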

So, from a pure security perspective, there are some holes here, but it’s still interesting – unless you’re yanking the unencrypted message out of memory before the encryption step, there’s no way to attribute the message to any author, because it wasn’t written by an author, and the sender of the message has no idea as to the message’s contents. There is some element of digital signing that could happen here too, but let’s not get too far ahead.

Anyway, this is where I ran into some limitations of Ollama, where it is just a little too eager to offer its…guidance on things it wasn’t prompted about.

One result was pretty good, but it was weird to me that it offered specific suggestions as to the gift without being prompted to do so, which is not something I ever experienced using ChatGPT.

In another return, the model offered “a hint” that the recipient “loved coffee” and specifically asked their Santa to order their recipient coffee for their gift.

The results of the prompt varied pretty wildly in weird and sometimes funny ways. Some of them include lots of Instagram-worthy hashtags, some are quite formal in nature, and others are only a couple of curt sentences. I recently saw a comparison chart of large language models making the rounds on LinkedIn, and I can’t lend it too much credibility (because LinkedIn), but it did have Ollama at the bottom.

Still, ya can’t beat the price.

Categories
General

eoy 2023 update

Hi, I’m here. I didn’t reissue my SSL certificate; our life was essentially plunged into chaos, so I just let it go, which was bad.

In 2022 I decided to re-activate my LinkedIn profile and in the ensuing events ended up doing more posts on LinkedIn, until I thought, “hey, I have an actual website that I pay real money for, on which I can post things, and then those things aren’t the property of Microsoft.”

So I’d like to do more posting here instead and may cross-post some of my longer stuff that I already wrote on LinkedIn back in here, then I can have two places where I’ve written some stuff that nobody will read.

I have some cleaning up I need to do on the site; my resume has gotten better with a new role, and I can happily (I think) say that I’m done with 7 of 11 classes in my master’s program at Virginia Tech. We also had a daughter, who turns 2 in a few days, and Oliver is 4, which is wild.

We also moved into a new house, which is great despite being also something of a shitshow, though we’re doing our damndest to let it go. The house itself is pretty baller; the last place was just getting too small, so we needed something where each of us could have a dedicated office space.

My hosting package for this site goes through the middle of the year. We’ll see how it goes, I’m considering packing it up and moving it over to something like an Azure static page, but flitting through the WordPress options and remembering how easy it is to use, I may just keep it here.

Anyway, that’s all for now.

Categories
Cybersecurity General

ssl decryption

I had a brief conversation with a friend (hi Brad, again) the other night about SSL decryption. I could tell he was wary of the idea of SSL decryption in the business, and rightfully so!

Your employer breaking open the encryption on your network traffic seems like a huge violation of your privacy. I don’t really have a hard-line stance about this at work – you likely signed away your expectation of privacy on your work network as part of an acceptable use policy – but most companies have some commitment to their customers’ and users’ privacy and it’s a foregone conclusion that people are going to do some personal activities on their work devices. Does that mean your employer has access to all of your banking information? Probably not!

I’m not going to go through every nuance of SSL (TLS) and SSL decryption but will go through what SSL is, how it works, and what I believe a sensible policy should include and why decryption has become such a hot topic lately for businesses.

What is SSL?

Let’s start from the beginning. SSL stands for Secure Sockets Layer; it has a successor called TLS, Transport Layer Security.

(Going forward, I will just say SSL. The differences between the two are pretty minor and beyond the scope of this post.)

SSL is a cryptographic standard for encrypting data traversing a network, most commonly across the internet. It puts the “S” (secure) in “HTTPS.” SSL uses both asymmetric and symmetric-key cryptography via a public key infrastructure.

Here’s the deal. In the IT world, we usually try to use real-world analogies to explain technical concepts, as if the things we’re talking about somehow do not exist in the real world. Anyway, when you’re talking about cryptography, this is a hard thing to do.

I can’t take full credit for this “one-way box” explanation, but I don’t remember where I read it, and I’m going to add some of my own flair. Imagine you want to buy a car, but it’s the pandemic and you’re at home and you don’t want to spend hours at the dealership with a bunch of paperwork. “No problem,” the dealer says. “Just fill your information into a secure form online.”

Now imagine that the form is a physical item and you have to give it to them in person. (Bear with me.) The dealership has a “paperwork box” to drop the form in. But this is a weird box. It has one opening in the front, and another opening in the back. The back opening is locked with a strange lock with a keyhole, but it’s so complicated, you’ve never seen anything like it.

There’s another weird thing about this box. You put the form in, and it falls out the bottom. Huh? You try again, same thing happens. Then you notice a stack of envelopes on the top of the box, with a label that says “free, take one.” The envelope has some detailed information about the dealership, manager contact info, etc. There’s even a stamp on it from the chamber of commerce** with the business license number, and signed by their representative. Clearly, this dealer really is who they say they are. You put your payment info form into the envelope, and put it in the box. Voila. It’s accepted. Congratulations on your new…I dunno. Tesla.

If you read that and said, in your best The Simpsons Comic Book Guy voice, “sir, there is a glaring technical error in your analogy,” the glaring error being that encrypted data is not encapsulated in “an envelope” but algorithmically altered as to be unreadable in transit, yes you are correct.

But I don’t have a good analogy for this, and I’m not sure it exists. If it does, leave one in the comments!

Let’s break down the analogy into its technical elements, and talk about how this transaction would’ve occurred if you really could send this form electronically to the dealer. You go to the form on the dealer website, and you see the “secure” icon in your address bar, indicating your connection is secure.

Your browser just went to the dealer’s website and got the public key from their web server. This is the envelope with all of the information on it – the public key encrypts the data. Anyone can get the public key. It’s free. The public key has a mate, the private key. The private key decrypts the data, or can open the lock on the back of the box. It would be pretty bad if someone else got that key, so the dealer has taken extra steps to prevent its theft. Theoretically.

You can also generate a key pair whenever you want. “Does this mean I can just pretend to be the dealership?” Not exactly. Remember the stamp from the chamber of commerce? The one with the signature? That’s an independent third-party verifying the identity of the dealer. The web works the same way.

Secure website key pairs are generated in what’s called a certificate signing request. Basically, “hey, chamber of commerce, can you certify that I am who I say that I am, and keep a public record of it?” When the request is approved by a certificate authority, the public key of the pair* is tied to a digital signature, and returned to you as a certificate you can install on your web server. Every web browser is pre-configured to have enough relevant information about the certificate authority (the chain of trust) that the user doesn’t need to take any other action here, just like you don’t need to take any action to trust that your chamber of commerce has accurately assigned business licenses. Neat!
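For the curious, generating a key pair and a CSR takes only a few lines with the pyca/cryptography library; a sketch using a made-up domain:

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Generate the key pair. The private key never leaves your server.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Build the CSR: "hey, chamber of commerce, certify that I am dealership.example"
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "dealership.example"),
    ]))
    .sign(key, hashes.SHA256())
)

print(csr.public_bytes(serialization.Encoding.PEM).decode())
```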

There is a little more to the process. When your browser verifies the certificate of the website, it uses their public key to encrypt some random data to send to the web server. Remember, only the web server can decrypt this data with its private key. This data becomes the session key between both machines. The web server decrypts this session key and returns a message to your browser, as if to say “I can prove to you that I have the key to this box, and I can open it.” Because both parties are now using the same symmetric key, data can go both ways. It’s pretty cool. This process is called the SSL handshake.
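You can watch the result of this handshake yourself with nothing but Python’s standard library; a minimal sketch:

```python
import socket
import ssl

hostname = "www.example.com"
context = ssl.create_default_context()  # ships with a chain of trust, like a browser

with socket.create_connection((hostname, 443)) as sock:
    # wrap_socket performs the handshake described above.
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())  # negotiated protocol, e.g. TLSv1.3
        print(tls.cipher())   # negotiated cipher suite
        cert = tls.getpeercert()
        print(cert["subject"], "expires", cert["notAfter"])
```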

Here’s an image of the handshake from IBM. All credit to them.

This technology is really foundational to privacy and security on the internet. You can learn more about the encryption algorithms – there’s plenty of info out there.

SSL Decryption

This isn’t a foolproof process, though messing with it takes some resources. An organization can declare itself a certificate authority for its users and issue certificates to user endpoints on destinations’ behalf. Since business endpoints will trust this enterprise certificate authority as a legitimate entity, the certificates they receive appear to be from the destination they’re trying to reach. This internal-only certificate is (or definitely should be) properly signed by a “real” certificate authority.

From here, the decrypting device can act as a “man in the middle” and can proxy requests for secure websites. Because the endpoint trusts the decrypting device, and the decrypting device has (or has immediate access to) the private key, the device decrypts the traffic, inspects it, then forwards it to the real website using the same process we already went over. The real website doesn’t know any better.

So to sum that up, the decryption process necessitates:

1) A client’s willingness to have its chain of trust manipulated
2) The proper certificates to enable the decryption process
3) A device that can facilitate the work of performing the decryption
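A practical aside: you can often tell whether your own traffic is being decrypted by checking who issued the certificate you received. A sketch reusing the standard-library approach from earlier (note that on a managed endpoint, Python may need access to the OS trust store for the enterprise CA to validate):

```python
import socket
import ssl


def issuer_of(hostname: str) -> str:
    """Return the issuer of the certificate presented for a hostname."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # The issuer is a tuple of RDN tuples; flatten it into a readable string.
    return ", ".join("=".join(pair) for rdn in cert["issuer"] for pair in rdn)


# On a decrypted network this prints your enterprise CA, not the site's real CA.
print(issuer_of("www.example.com"))
```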

The question is, why?

Challenges in the Business Environment

Man, getting an SSL certificate used to be a process. In my day (combs neckbeard) you had to pay $20-ish for the certificate, then create the CSR, then upload it, then get the certificate, then there’d be some setting in IIS or httpd.conf you’d have to change, then invariably you messed something up, then you’d have half your site on not-encrypted http and other parts on https, then you’d have to restart httpd, then you’d look at some other thing for a while and forget what you were doing to begin with.

BOR-ING. Now you can just use Let’s Encrypt and certbot to get a free certificate installed on your web server in like a second. BOOM. You’re good to go faster than Sonic the Hedgehog after a pile of chili dogs. Sheeeeeeeeeeeeeeyit.

What’s the most common cyber-attack? DDoS? Maybe, but if phishing isn’t the most common by now, it’s extremely close. So let’s say you have a user base that isn’t the most technically inclined.

What have they been taught all their lives? Green padlock = safe! So when they click on a link in a phishing email, and that link takes them to “their bank” or “the company benefits page” – they see the green padlock. “This is safe” they think, put their credentials in, then they press enter.

For IT security enthusiasts, privacy advocates, and professionals, this truth gets pretty uncomfortable. It’d be a reach to say something like “well, if everything is encrypted, nothing is encrypted,” but that’s sort of…we are on that bus. Don’t get me wrong, I think Let’s Encrypt is an amazing project and will continue to do great things for the internet. But encryption is a tool, and tools can be and are used for harm, and the people who stand to be most harmed are not C-level executives, but employees trying to do their best. There are indisputably personal and professional impacts to people acting in good faith but affected by cyber threats.

If C&C traffic, malware traffic, and phishing websites are able to operate/communicate in a way where researchers and defenders have no insight into them, I’m worried about what that means for the next conversation we have about encryption at large, so businesses having sensible, practical, people-first decryption policies is a decent set of brake pads we can put on the bus.

Sensible Policies

Back to the concern about banking. I believe all decryption devices are able to selectively apply decryption, and if you are looking at a device where that is not an option, please look elsewhere. The engineer who is configuring your decryption should be able to put sites like https://bankofamerica.com in their decryption exclusion list, where the enterprise certificate authority is not used to facilitate an SSL forward proxy, and the chain of trust is not altered for that session.

After you’ve reviewed the legal obligations in your area about encryption, and had a conversation with HR and your leadership team about the go/no-go, consider the way you’ll implement your policy on the whole, and how it can add value to your business without completely betraying the trust of your users. Remember that the practical applications of cybersecurity are, first and foremost, value-oriented activities. Consider the messaging you provide to your teams and stakeholders.

What sounds better to you?

“Beginning Monday, we will be implementing web decryption on our network. We expect you all to sign new acceptable use policies regarding the use of this new technology.”

“Given the recent expansion of email phishing, ransomware, and malware attacks on organizations across the country, we’ve decided to implement web decryption to keep our business assets and users safe. We’ve worked with our partners to come up with a deployment solution that only targets suspicious activities and have attached updated documentation that explains what this means for you.”

Have a strategy. Vendors are more than willing to work with you on this; because of the increased processing requirements for decryption, it’s often an avenue for them to make another sale. Palo Alto Networks has a very helpful page about coming up with a decryption strategy for your network, even if you don’t use their products.

Leverage your synergies.*** How does your decryption device fit into the rest of your network? Is it a firewall? Can you set up your decryption based on existing URL categories? For example, you might decrypt on “unknown” or “web-posting” (think Pastebin) but not decrypt on “banking” or “ecommerce.” Are there any data loss prevention or credential theft features you can also take advantage of?

Be transparent. You are indisputably taking privacy away from your users here, even if they know they don’t have an expectation of it. You owe them a thorough explanation of the process and how they may be affected. What websites are you decrypting on? What was the business justification for decrypting that website or category?

I hope that’s a helpful primer to SSL, decryption, and why we’re seeing more and more of it at scale. Having to implement this technology in the business is a nose-holding endeavor, but I do see it as increasingly necessary as the majority of the internet goes secure and we see the continued proliferation of cyberattacks. If you’re leveraging this in your organization, how’s it going? Let me know in the comments.

*(The private key is too, because the public and private keys are inextricably linked, but the certificate authority doesn’t need your private key to generate the certificate. In fact, don’t send the private key to them. Don’t send it to anyone. Seriously.)

**(It has been pointed out to me that this is not actually something a chamber of commerce does. This is why I am a technologist and not a businessy businessperson. INTERNET: SERIOUS BUSINESS.)

***(Did I really write these three words?! In a row?!)

Categories
Home

adieu daft punk

I remember the first CD I purchased. It was actually two CDs. One was Chumbawamba’s Tubthumper. The other one was Daft Punk’s Discovery.

There used to be a music store in Fredericksburg, The Blue Dog. They were known for being able to procure pretty much any CD or vinyl you could want, back when that skill had value. My dad had a lot of love for that store – you could ask them to crack open pretty much any CD you wanted, and they’d do it, and they had a listening area – a try before you buy – with some expensive cans and multi-disc CD changers, and leather couches. It was cool.

I was not cool. I was an awkward 14-year-old with a blue Sony Walkman CD player stuffed haphazardly into the pocket of his cargo shorts. But I did buy some CDs, and I did have (plenty of) cargo shorts to fit them in, and I do remember buying Discovery, with its weird liquid metal album cover.

At that age, I didn’t know what electronic music even was. But this was great. A literal “discovery” of an entire genre of music I had never even heard of. Every track meant something. Every pulse, untz, and blasting synth. I couldn’t tell you the number of times I’ve listened to Discovery, and (like many nerdy teenagers) fell into the allure of Interstellar 5555, the trippy concept anime based on the album. Most other people I knew listened to other stuff. I have never been into other stuff. But I was always into Discovery.

When I was 16, I went to a youth tech program in California. It was basically a week-long camp for computer-y people about how they might become a CEO, or something. This was when Silicon Valley was a thing, but not a thing that you could make a TV show about and have people think it was funny. It was just a place where you could go to work if you were a smart person, so I’m not sure why my parents decided to send me to this thing. As part of this event, you could take a tour of some companies in the Bay Area. One of the options was Apple, so I went on a tour of the Apple campus.

No visit like this would be complete without a trip to the Apple company store. I remember buying a t-shirt with a grey Apple logo on it. I also bought (to the disdain of my parents, who funded this trip with a debit card) a small, white box with four buttons on it called an iPod.

“You bought a what?”

In these times, of course, you couldn’t get anything wirelessly, so I got some weird electronic music from a friend (hi Matt) by plugging the iPod into his laptop and taking whatever he had.

(Incidentally, that music ended up being, mostly, that of Neil Cicierega, whom you’ve heard of if you’ve heard of Harry Potter Puppet Pals.)

As soon as I got home from California, it was my life’s mission to get Daft Punk on this thing. Blue Dog was old hat at this point, so I pulled all of the songs off of Discovery (remember “Rip, Mix, Burn”?) and turned to the nascent iTunes Music Store. That next year, Pepsi had a promotion where there’d be a code for a free song under the cap of every nth bottle of Pepsi, and if you looked up at the bottle from underneath just so and tilted it just right, you could see if you had a code. I bought a lot of Pepsi, and redeemed most of the codes for other Daft Punk albums.

Daft Punk always persisted. In college I had some money, so I bought Human After All from the iTunes Store and downloaded it onto my iPod. Human After All wasn’t and isn’t as good as Discovery, but it still had some bangers. I figured out ways to sneak Daft Punk songs onto mix CDs, and give them to girls I liked. This was a thing you did. Or I did. But it worked in High Fidelity, so.

The iTunes Store had stopped being so nascent by this point, and had become the powerhouse for buying music and downloading it to your iPod. The Blue Dog had closed.

Around 2006, Daft Punk announced they were going on tour, and releasing an album of the tour, Alive 2007. I bonded with one of my best friends (hi Brad) a great deal about Daft Punk and this album. An incredible production, though my age has made it sort of a tiring listen – it’s just so damn loud. Brad loved Alive 2007. It became a hallmark of the late 2000s that Brad would drive around College Avenue in Fredericksburg in his orange Mitsubishi Eclipse convertible, blasting it as loud as it could possibly go. You could hear Brad and his rolling disco from half a mile away.

Things blur a bit here. Alive 2007 wasn’t much of anything new, more original mixes of their existing stuff, but turned up to 11. I had long since moved on from blue Walkmans and iPods and was in a blue Civic Si, a fun car that had a CD player – cars today don’t even have CD players anymore. I had moved on to using Spotify, and moved up to Boston where I met my now-wife.

Then, when I turned 24, I heard that there was a Daft Punk movie coming out. It was called Tron: Legacy. I think Tron: Legacy is actually a fine movie. The soundtrack was the first truly original stuff from Daft Punk in a while, and it was mostly great – though there are some tracks in there that they had to make for the movie, and they don’t really sound like Daft Punk. By and large, though, it was solid stuff.

And they are in the movie!

I was at work when the teaser for Random Access Memories dropped in the form of a clip from “Get Lucky.” This was classic Daft Punk sound. The most Daft Punk of Daft Punk-ness, without Disney’s meddling. Then the album launched, and people were…confused.

Random Access Memories has gotten better with age. When I first listened to it, I thought it was weird. It was Daft Punk, but a new direction. Unafraid of the consequences, they made something they knew wouldn’t be for everyone. A concept in the same vein as Dark Side of the Moon. Less iconic, but certainly a grand work of art, and it did win the Grammy for Album of the Year in 2014. In retrospect, I know better now – that its old, weird sound, evocative of the late 70s and into the 80s, was the point. It was eight years ago, but it’s easier now to look back on Random Access Memories and see it as an album created in Daft Punk’s twilight. Maybe they saw it, too.

They’ve done some work since then – I remember hearing The Weeknd’s “I Feel it Coming” for the first time and saying out loud to my wife, “hey, this sounds like Daft Punk,” and she says “it is Daft Punk.” Because really, only Daft Punk sounds like Daft Punk. And sure enough, they’re in that music video too, as alluring and mysterious as ever.

I heard about the news today from none other than Brad. His text read “love is over,” with a link to the Pitchfork article.

“Daft Punk Break Up”

I watched the Epilogue video they put up on YouTube, where one of the duo decides to have the other activate their self destruct sequence. (Is that one Daft, or is that one Punk? Or are they both just what they are? Are we all Dafts and Punks?) Funny and bizarre, and a little surreal.

Then, the chorus from “Touch” comes into the focus through my headphones, and as it rises and falls, I feel my chest tighten and ache, the emotion comes to my eyes. Grief. The song cuts away, and I am left in an existential ennui. A reflection of my own self through music that’s been with me well into my adult life. A reflection of changing times and changing technologies. No more.

When you get into your 30s, you get to be old enough to lose things you care about. (Chumbawamba is gone too.) As time continues its inexorable march, the more of that lost stuff you hold on to. I remember the long road trips in my dad’s pickup truck to my grandfather’s old house in Pittsburgh, listening to The Doors, and my mom’s “cleaning parties” where she’d listen to Blondie. I am one of those people now, another one of the olds, still listening to “Aerodynamic” and “Human After All” and even “Around the World.” The gen-z’ers are already old enough now to realize that their elders are wrong and bad, as is the way of the universe. What will my son say about Daft Punk when he’s old enough to have a real opinion? “Dad, this is crap.” Maybe he’s right. Maybe everything we hold on to is just nostalgia, the idea of the thing. Like an ex-girlfriend. I hope I’m wrong about that, and that there really is something timeless to Daft Punk’s sound. We’ll see.

“If love is the answer, you’re home. Hold on.”

My son retreats into his iPhone 17, listening to the latest from whatever genre the music industry decides to remix from the past. The phone knows what he wants to listen to before he does. Meanwhile, Taylor Swift, greying, plays an intimate but sold-out show at The Birchmere. I get into my family-hauling vehicle, press the start button to the hum of the electric motor, and ask it to queue up Discovery. One more time.

Categories
General

Rote Monetization

This is one of those posts that isn’t that long, but since I have some aversion to Twitter threads, I didn’t want to turn it into some “1/n” thing.

I’m going to start by talking about network requirements, even though that’s not really what this post is about, because network requirements are interesting and topical. The “new normal” has created considerations and constraints that most organizations don’t usually have to contend with, like “how do we deliver this service reliably to people who are hundreds of miles away and to people in our office at the same time?” At my place of employment, there has been some discussion about network requirements and bandwidth needs for schools implementing hybrid learning; a school running teleconference equipment with teachers live-streaming their classes has greater network requirements than one that doesn’t.

The typical solution to this problem is simple: overprovisioning, or buying more bandwidth than you think you would ever possibly need. For an organization of means, this is a fine solution, although inelegant if you subscribe to the systems engineering principle of “free capacity is wasted.” However, bandwidth is relatively cheap, and future use cases are hard to predict.

The typical thought process for capacity planning is something like “if I have 10 users who need 10 Mbps each, I need a 100 Mbps connection.” That’s wrong for a number of reasons, but most obviously, it assumes each capacity-consuming entity is using its full 10 Mbps all the time, which it almost certainly is not, even in 2021. There are other factors in play, too, and networks are bottlenecked all the way down to the endpoint.
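
To put rough numbers on that contrast, here’s a toy sketch in Python. It’s a minimal illustration of my own – the per-user rate and the peak-concurrency ratio are made-up values, not figures from any of the sources discussed below – and real capacity planning would substitute measured utilization for these guesses.

```python
# Toy capacity math: naive worst-case sizing vs. a concurrency-adjusted
# estimate. All numbers below are invented for illustration.

def naive_capacity_mbps(users: int, per_user_mbps: float) -> float:
    """Assume every user consumes their full rate at the same time."""
    return users * per_user_mbps

def adjusted_capacity_mbps(users: int, per_user_mbps: float,
                           peak_concurrency: float) -> float:
    """Scale by the fraction of users expected to be active at the busiest moment."""
    return users * per_user_mbps * peak_concurrency

users, rate = 10, 10.0  # 10 users at 10 Mbps each

print(naive_capacity_mbps(users, rate))          # 100.0 -> "buy a 100 Mbps pipe"
print(adjusted_capacity_mbps(users, rate, 0.3))  # 30.0 -> if only ~3 of 10 peak together
```

Even the adjusted figure is an oversimplification – real traffic is bursty, so you’d look at utilization over peak intervals rather than a single ratio – but it shows why the naive math overbuys.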

This is actually interesting stuff; if you want to get into it, you have to dig a bit. I wanted to refresh my memory on it, so I decided to do some of that digging.

I was fairly disappointed in what I found, a feeling I’ve been having more and more often when I use Google. Google is becoming less and less useful, partially because I’d consider myself mid-career at this point, and I know what I don’t know. But Google as a tool is increasingly a victim of the very things that make it a viable product, in the “we have this thing and it needs to generate revenue” sense.

Anyway, after reviewing multiple sources, I found that a pioneer of this kind of “network math” is William Stallings, an MIT-educated author who has written many books about computer science and systems engineering. He writes about calculating network load in his book Local and Metropolitan Area Networks, which appears to have been last updated in 2000.

Bandwidth was not cheap in 2000.

My source for this information ended up being a Cisco Press text – Top-Down Network Design – which I accessed via my O’Reilly subscription. The O’Reilly subscription is an excellent and comprehensive technical resource. A Library of Alexandria, but for neckbeards. It is also $499 per year if you are an individual; I’m fortunate enough to receive a subscription to the service through my employer. Some libraries may offer it as well.

Google would have you believe that network capacity planning is a problem that can only be solved by one of the many vendors leveraging inbound marketing and optimized SEO to give you a surface-level understanding, with maybe a smattering of technical information, of the content you are trying to understand. If you do a Google search for “network capacity planning” (without the quotes), 9 of the first 10 results are vendors. On the second page, I finally get a result that looks like an actual guide.

It looks like it would be pretty good, but (like many SEO-optimized inbound posts) it’s not really a “guide to bandwidth capacity planning.” More like “a list of bad things that will happen if you don’t pay us to do this work for you.” (God help you if you need to do work on your house and you search for something.)

What this marketing vehicle provides can be pretty informative, and is often engaging, but it is not really a path to understanding; it’s a path to purchase. To be clear:

  1. I understand this is a valuable business practice, and some of this content is truly useful. Compared to the advertising of the early web, inbound marketing is much less sinister. It’s also easier to get: I used to ask vendors for whitepapers all the time, and the information here is no different.
  2. Most people are pretty good at knowing when they’re being sold something, especially when given time and space to read and reflect on it.
  3. Almost all vendors set up the next step of the engagement for you, which is usually getting ahold of someone inside the sales organization of that vendor.

So what’s the problem? Getting paid is good. I guess.

The early internet was created and used to support defense programs and research institutions. A place where knowledge existed for the sake of knowing it. Like many things, capitalism has turned this model on its head. Rather than helping people find information, many now leverage the internet to placate an algorithm they don’t really have any meaningful control over, regardless of the quality of their content or its factual accuracy. The marketplace of ideas has become…a marketplace. The impressions seem to matter – the SEO, the algorithm – but little else. Whatever draws your eyeballs the fastest, not whatever is best. This is dismaying, and it has made Google less useful – which of course it has, when you consider Google’s incentives as the mediator in this exchange.

Then, if you actually want to get to quality information, you have to pay for it, behind a paywall or an article limit or a subscription, like the one I used to eventually find the information I was originally looking for. I have as much sympathy for publishers here as I have disdain, because quality content takes time, effort, and resources to create and distribute, and that effort should be rewarded. At the same time, these controls over information enable questionable content (in accuracy and intent) to thrive, for free. I don’t know what the solution is, because the problem seems intractable, but I imagine it will include free or mostly free access to the internet at some point.

Maybe this is all one big nostalgia trip for a bygone era, when we sat wide-eyed at our desks in college and embraced the quirky and unknowable “Web 2.0.” Before the machines of capital and monetization repaved the information superhighway into a toll road, adorned with billboards for the highest bidder. Garish, wide, a sort of subliminal horror. As I look in the rearview mirror, I wonder if this is the best we could’ve done.

Maybe I’ll try Bing.