“Someday, though not soon, Mr. Bernstein feels a program may be designed that will enable the computer to profit by its own mistakes, and improve…on the basis of its experience against human opponents.”
– Horizons of Science, Vol. 1, No. 4, 1958.
In 1960, a hulking new IBM 709 mainframe was installed at the Massachusetts Institute of Technology as part of a ten-year collaboration between MIT and IBM called the MIT Computation Center. While the over-one-ton vacuum tube computer’s lifespan was short (a notable upgrade soon replaced its tubes with transistors), the $2.6m behemoth ($25-30m in 2026 dollars) found a good home at MIT.
So good, in fact, that a system of “time-sharing” called the Compatible Time-Sharing System (CTSS) was developed to enable more than one user to use the computer. CTSS was primitive, but it led to the development of Multics and, later, of Unix at Bell Labs. Time-sharing became the de facto solution to the problem of enabling multiple users to operate the computer at the same time. At least, that is how it appeared to the user – in reality, the computer offered each user or task a small fraction of processor time in rapid rotation, a process called time-slicing. The financial implications of such a system are obvious: if you know who the user is, and you know how much time they’ve used, you can bill them for it.
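To make the billing mechanics concrete, here is a toy sketch – not CTSS itself, just the idea: a scheduler hands out fixed slices of processor time in round-robin order, and a meter tracks each user’s consumption so the bill can follow. The slice length, the rate, and the user names are all arbitrary assumptions for illustration.

```python
from collections import deque

SLICE_MS = 100        # each turn on the CPU lasts 100 ms (arbitrary)
RATE_PER_SEC = 0.75   # hypothetical rate: 75 cents per CPU-second

def run_time_shared(jobs):
    """Round-robin time-slicing with per-user metering.

    `jobs` maps a user to the total CPU milliseconds their job needs.
    Returns each user's bill once every job completes.
    """
    queue = deque(jobs.items())
    used_ms = {user: 0 for user in jobs}
    while queue:
        user, remaining = queue.popleft()
        slice_ms = min(SLICE_MS, remaining)  # the user gets one slice...
        used_ms[user] += slice_ms            # ...and the meter runs
        if remaining > slice_ms:
            queue.append((user, remaining - slice_ms))  # back of the line
    return {user: ms / 1000 * RATE_PER_SEC for user, ms in used_ms.items()}

print(run_time_shared({"alice": 350, "bob": 900}))
# {'alice': 0.2625, 'bob': 0.675} -- the bill follows consumption exactly
```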
Consumption-based time-sharing schemes for what we now broadly call “compute” lasted into the early 1990s, and they feature prominently in Dr. Clifford Stoll’s 1989 book The Cuckoo’s Egg, in which Stoll ran a complex, lengthy, and at times overdramatic operation to catch a malicious hacker – an investigation that began with a 75-cent accounting discrepancy on the Lawrence Berkeley National Laboratory’s computer system. These schemes were eventually displaced by the advent of cheap, widespread, and accessible personal computers.
At least, they were for a while.
“Sooner or later, everything old is new again.”
– Stephen King, The Colorado Kid, 2005.
Generative AI has brought us full circle. We are, essentially, back to time-sharing. We have returned to consumption-based billing for scarce resources, leaving us in an alien and uncomfortable position – one we haven’t experienced in the last 30 years of information technology.
Some will interject here with a counterargument that I see regularly in internet discourse: that cloud computing is essentially time-sharing too, and that we have already come full circle since the advent of *-as-a-service models pioneered by cloud service providers like AWS. This notion, however, misunderstands the cause and effect that willed time-sharing into being.
Time-sharing – or any operation that leverages consumption-based billing, like generative AI – and cloud computing solve fundamentally different problems for most customers. Time-sharing solves a problem of scarcity; cloud computing solves a problem of abundance. The raison d’être for AWS is simple in retrospect: computers were so cheap that AWS was able to monetize excess compute to customers who were fed up with buying more capacity than they really needed. All AWS did was figure out a novel way for customers to turn many large computers into many more smaller ones on demand. This method took on the name of elasticity, and it worked spectacularly well.
But computers are expensive again. The good times are over.
This essay will not seek to explain why generative AI is computationally expensive, but it doesn’t take a PhD to know that it is – and that’s not the whole story. For the first time in history, video game consoles – of all things – are appreciating assets, a shift set in motion before AI agents took over the infernal machine of work. Tariffs, COVID-19, the (more or less) failure of the CHIPS Act as a hedge on the global semiconductor supply chain, and, of course, generative AI. All in service of that damn computer.
All have led us back to this point – what’s old is new again.
General-purpose technologies don’t create sustainable competitive advantages for their customers in a vacuum. The general-purpose tools made available to you by firms, even public benefit corporations like Anthropic, are available to everyone. That’s the point. The Harvard Business Review explores the conundrum in detail in its September 2024 issue. The MIT Sloan Management Review addresses it in a similar article in its Summer 2025 issue:
“If a technology is valuable but not unique, then it is not an advantage; similarly, if a technology is unique to a company but not valuable, it is not an advantage. If a technology is valuable and unique to a company but can be imitated by others, then it does not confer a sustainable advantage. AI is unquestionably valuable, but it fails the other two tests because it is neither unique to any organization nor inimitable.”
The late anthropologist Dr. David Graeber also explores this idea in his 2015 book The Utopia of Rules:
“Competition forces factory owners to mechanize production, to reduce labor costs, but while this is to the short-term advantage of the firm, mechanization’s effect is to drive down the general rate of profit.”
In other words, the problem with the AI game is that everyone is playing it.
The most common narrative about AI displacing jobs is simply that: jobs are truly being replaced wholesale by AI, and CEOs see the endgame as leaner, more nimble enterprises without those pesky employees, who must be paid, fed, trained, provided healthcare, and given all of the other things that make for a fulfilling career.
This may be partially true, but as a technologist myself, I find the argument unsatisfying – partially because of my own experiences in using AI, but mostly because corporate leaders have thus far struggled to rigorously quantify the impact of AI on their businesses as it relates to human productivity.
As a case study, let’s look at the layoff letter posted on X by Brian Armstrong on May 5th, 2026, which looks essentially the same as every other one of the letters I’ve seen to this effect:
“AI is changing how we work. Over the past year, I’ve watched engineers use AI to ship in days what used to take a team weeks. Non-technical teams are now shipping production code and many of our workflows are being automated.”
The problem with this statement is twofold:
- There is no basis for the number of roles eliminated (for Coinbase, it’s about 700 people). Where is the evidence that 700 roles’ worth of work was replaced by AI? This evidence is conspicuously absent in every case of “AI layoff,” not just Coinbase’s.
- It’s incompatible with the idea that AI does not provide any sustainable advantage. Are your workflows being automated well? If so, so are your competitors’.
To be clear, I am not arguing that AI does not improve productivity. That is clearly untrue, including in my own experience. This puts insightful employers in an advantageous position – they can leverage employees’ pre-existing skills and AI to develop their real competitive advantage more quickly. So what’s with the layoffs?
The most generous read I can offer on the situation is this:
- Consumption-based token costs are significantly higher than companies expected (a back-of-envelope sketch follows below).
- Companies must continue to use AI to appease investors and shareholders, who are demanding a return on these investments.
- Frontier operators are propped up on a gigantic investment machine, keeping compute costs too high for most companies to deploy models in-house on their own hardware.
- The money to pay for the cost of AI has to come from somewhere.
- While AI is clearly improving productivity, it is not as good as its biggest boosters say it is.
In summary: operators of frontier models, shareholders, and investors have AI-using companies, to use a technical term, “by the balls.”
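To put some rough numbers behind the first point in that list, here is a back-of-envelope sketch of token spend for a mid-sized engineering organization. Every figure in it – headcount, request volume, token counts, prices – is an assumption for illustration, not a quote from any provider’s price sheet.

```python
# Back-of-envelope token spend for an agent-heavy engineering org.
# Every figure below is a hypothetical assumption, not a real price.
engineers = 200
requests_per_day = 50                 # assumed agentic requests per engineer
input_tokens_per_request = 40_000     # agents resend large contexts
output_tokens_per_request = 2_000

price_in_per_m = 3.00    # assumed $ per million input tokens
price_out_per_m = 15.00  # assumed $ per million output tokens

daily_in = engineers * requests_per_day * input_tokens_per_request
daily_out = engineers * requests_per_day * output_tokens_per_request
daily_cost = (daily_in / 1e6) * price_in_per_m + (daily_out / 1e6) * price_out_per_m

print(f"${daily_cost:,.0f}/day, ~${daily_cost * 22:,.0f}/month")
# $1,500/day, ~$33,000/month -- before retries, evals, or CI bots
```

The surprise is rarely any single line item; it’s that agentic workflows multiply input tokens, because the whole context is resent on every request – and under consumption billing, the meter never stops.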
If companies were really seeing major returns on their AI investments, they wouldn’t have to justify the expense by cutting labor. Their bottom lines would be boosted in proportion to their AI investments. But that isn’t happening – despite, or maybe because of, the market rhetoric about “becoming AI-native.”
I believe this is why managers are now demanding obsessive levels of tracking around AI usage by their workforces. Of course, Goodhart’s law applies: “when a measure becomes a target, it ceases to be a good measure.” Demanding employees use AI leads to…more AI consumption, which leads to higher costs, which leads to more desperate attempts to justify the cost, whether that desperation takes the form of layoffs or other (arguably more) pernicious schemes like cutting benefits.
It has always interested me that there are AI PaaS companies like Base44 and Lovable – the companies that claim they can build entire apps for you using generative AI.
Here’s a question: if AI were really that capable, wouldn’t those companies just sell the apps? Marketing your product as a “platform” is a convenient exit for an uncomfortable “marketing meets reality” truth: the capability is not there. The gulf between an AI-built scaffold and working, maintainable software is massive.
Look, there is certainly no shortage of low-quality SaaS out there, and some of these solutions really are going to be displaced by AI. But the “SaaSpocalypse” predicted by tech pundits feels very far away, and you are right to be skeptical of the idea that AI will simply replace developers wholesale.
I do not know where this ends. In his 2024 book Co-Intelligence, Wharton professor Dr. Ethan Mollick defines a set of principles for using AI effectively. One of his principles is “assume this is the worst AI you will ever use.” What he means is that he expects the capabilities of AI to continue to improve over time.
I agree. While I hesitate to call this point “a bubble,” we are at an uncomfortable interlude where the capabilities of generative AI are good enough to enhance productivity but not good enough to truly supercharge growth. We are in an earlier phase of generative AI than I think most leaders would like to admit. To get out of this phase, assuming generative AI remains in the lane of a general-purpose technology, a few things need to happen:
- Organizations using AI need to figure out a way to create sustainable advantages, enhance their existing sustainable advantages, or both. The most likely path for most organizations is leveraging their own data in training the AI, but that path runs into structural and even regulatory barriers around how institutional data is shared.
- The capabilities of AI need to improve dramatically as a general-purpose technology to create additional opportunities for profit.
- The cost of compute needs to decline and/or we need to see increased competition around frontier models in a way that drives down costs.
- Following from the previous point, large players may be able to justify running models on their own hardware to escape the time-sharing model.
None of these solutions seems imminent. If you are an employee at a company aggressively pursuing AI, especially a tech company, I think we are in for a bad time, barring some miracle of progress in AI development.
While we are in this moment, here is my practical advice for you as an employee trying to navigate it: if you are an AI skeptic, this is not the right time to showcase your perspective in view of your employer. YouTuber Mo Bitar has a hyperbolic but achingly funny video about this moment, and it was one of the reasons I decided to write this essay. It aches not because it is so funny, but because of how right he really is.
I say all this because the moment is seeking an answer to the question of whether AI will durably replace human labor or merely raise expectations for its users’ performance and output. Whatever the answer, “I don’t want to use AI because it sucks” fails. You will be using AI. Pandora’s box is wide open. I would again turn you to a suggestion from Dr. Mollick – “always invite AI to the table.” You’ll laugh, you’ll cry, you’ll be surprised at how capable (or hopelessly incapable) it is for a given task.
Time-sharing died when the computer became available to everyone – portable, reasonably reliable, and low-cost. Microprocessors ushered in a new era of affordable and plentiful computing. The microprocessor may yet save us again, as history repeats itself.
What if the solution is not, as leaders like Sam Altman say, “AI as a utility,” but lightweight, portable models tailored to the specific purpose of the person in control of them and to the specific needs of the company deploying them?
We solved the problem of wasted compute with cloud computing – the excess capacity was repackaged and efficiently resold as smaller, purpose-built technologies like EC2 instances, S3, and RDS. Where is the wasted compute now? I would suggest that it is on your developers’ endpoints. The $3500 MacBook Pros with 48GB of RAM. Why are we not taking advantage of that capacity with a purpose-built model? Small, lightweight, and developer-centric. No consumption. As legendary game developer Brian Moriarty says: “The treasure is right there.”
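As a sketch of what that could look like – assuming a local runtime such as Ollama is already serving a small model on the developer’s laptop (the model name below is a placeholder) – the entire time-sharing apparatus collapses into a localhost call with no meter attached:

```python
import requests

def ask_local_model(prompt: str) -> str:
    """Query a small model served locally via Ollama's HTTP API."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

# No tokens metered, no bill at month's end -- just the laptop's own silicon.
print(ask_local_model("Summarize the failing test output pasted below: ..."))
```

Whether a model small enough to run there is capable enough to be worth running is, of course, the open question.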
This is just an idea. Will it work? I don’t know. But we need more of them, and probably not from AI. A few things are abundantly clear: we must reduce costs, we must optimize for competitive advantage, and we must realize that AI as a general-purpose technology is not quite at the point where we are able to reap revolutionary benefits, nor do we know if it ever will be.
Until we figure that out, we will remain stuck in the AI middle, time-sharing with Claude, the computer that profits by its own mistakes.