AI can indeed do our jobs

Feb 1, 2026 • Yousef Amar • 3 min read

Cory Doctorow, the famous sci-fi author who coined the term "enshittification", recently wrote an article about a future where AI serves us versus one where we serve it. In the first case, AI helps us and catches our mistakes, for example as a second opinion on a radiologist's work. In the second, it does the bulk of the work, jobs are lost, and the remaining juniors check its work but mostly act as scapegoats when both the AI radiologist and the junior doctor miss a tumour.

I think there are problems with this view. First off, in many cases AI is simply better than humans. I don't just mean that it's more productive or doesn't tire; for a narrow use case like spotting tumours, it has lower false-positive and false-negative rates than humans do. So I was surprised that he used that example. You can also just replace AI with "software" in many cases (or hardware: a 4-row harvester might not get every potato, but it's still better than a human with a tiller). Factor in fatigue as well, and it would be crazy to let humans operate heavy machinery or even drive a car if the roads are statistically safer with AI at the wheel. On top of that, the example inverts his first point: shouldn't AI overlords checking your work be the dystopian future, and humans wrangling fleets of AI be the future that keeps us in control?

The second issue is the idea that juniors keep their jobs while expensive, mouthy seniors are the ones getting fired. I can't speak for the medical profession, but at least in software engineering (which he touches on), that is certainly not the case, according to a 2025 Stanford study. Anecdotally, I see this too -- because these models are not (yet) that good, they're equivalent to a highly productive junior-to-mid-level engineer, and they need a senior to supervise them, just as you'd supervise a junior human. And when a junior human messes up, you're accountable as their manager, which is as it should be.

I must say, though, that especially in the past few months they've gotten better than most humans. They don't really make the kinds of mistakes Cory talks about anymore, so long as you use them properly: have them run a linter to catch their own syntax errors, make them write and run their own unit tests, and so on, the same way you'd help a junior developer avoid mistakes. It's possible Cory is thinking of the code agents of ~6 months ago, which shouldn't feel like an eternity, but in this case it is! People have already adapted.
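
Those guardrails boil down to a simple gate: only accept the agent's output once the mechanical checks pass. A minimal sketch in Python, assuming you drive the tools via subprocess (the linter and test commands in the usage comment are illustrative, not prescribed by any particular agent setup):

```python
import subprocess

def checks_pass(commands, cwd="."):
    """Return True only if every command exits 0 -- a simple
    'accept the agent's patch' gate. The commands are whatever
    your project already uses, e.g. a linter then the test suite."""
    return all(
        subprocess.run(cmd, cwd=cwd, capture_output=True).returncode == 0
        for cmd in commands
    )

# Illustrative usage (tool names are assumptions):
# checks_pass([["ruff", "check", "."], ["pytest", "-q"]])
```

The point isn't the specific tools; it's that the agent's work is rejected automatically unless it survives the same checks a human junior's work would.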

The consequence is that entry-level jobs are disappearing while demand for seniors has actually gone up. I don't say that out of denial (disclaimer: I'm as senior as it gets and the CTO of an AI company). It's because the path to becoming a senior is suddenly very narrow, so there are fewer and fewer future seniors. The only way out is if AI gets good enough, fast enough, to take over the senior role too. But for now, companies will fire (or more accurately, not hire) 10 juniors in favour of 1 senior with a Claude Max subscription. I suspect we'll see a lot more solo-founder startups appear as a result.

The true risk here is knowledge collapse: if AI doesn't get good enough (or if one day it all disappeared for some reason), suddenly there's nobody left who can fix the machines that build machines, or the final machines themselves. This happens in less dramatic ways all the time with technology and automation, and sometimes enough specialists remain who still know the Old Ways that we don't need to rebuild the knowledge from first principles.

I agree that there's an AI bubble, of course, and that it will pop despite AI being genuinely useful (the same way the internet is useful, and the dotcom bubble still had to pop). But I don't think we need to do anything to help it along -- it will pop no matter what. In his final two paragraphs, Cory tries to explain what we need to do to pop the bubble in a way that minimises harm to people. He says we should become aware of the fact that AI can't actually do our jobs. But it can, and it is!