I think this is far too nuanced. I am terrified by what the civilization we have known will become. People living in less advanced economies will do OK, but the rest of us not so much. We stand on the brink of a world where some wealthy people will get more wealthy, but very many will struggle without work or prospects.
A society where a large percentage have no income is unsustainable in the short term, and ultimately liable to turn to violence. I can see it ending badly. Trouble is, who in power is willing to stop it?
I have deep concerns surrounding LLM-based systems in general, which you can see discussed in my other threads and comments. However, in this particular article's case, I feel the fears outlined largely predate mass LLM adoption.
If you substitute "artificial intelligence" with offshored labor ("actually indo-asians" meme moniker) you have some parallels: cheap spaghetti code that "mostly works", just written by farms of humans instead of farms of GPUs. The result is largely the same. The primary difference is that we've now subsidized (through massive, unsustainable private investment) the cost of "offshoring" to basically zero. Obviously that has its own set of problems, but the piper will need to be paid eventually...
Hopefully this does not count as being uncivil, I just want to cut through what feels like insanity to put my (and at least a few others’) feelings plainly:
If you are one of those devs who heavily uses LLMs at work and you are in a position of relative authority, either as senior+ or something else, and you hand off your LLM code to others to review or “build off of”… we hate you. We don’t want to be your voluntold slop jannies. LLM overuse and vibe coding are taking a fairly enjoyable job and making it insufferable. Now I have to sift through 3-10x more lines of code that are written in a non-human thought process using terrible naming schemes and try to find the bug… just to realize that the code isn’t even solving the correct or underlying problem. Every time I have to interact with a co-worker’s LLM code, my tasks take weeks longer than they would have. This includes the ones who claim to be experts in prompting and harnessing and whatever skibidi buzzword is out this week.
You are not saving time; you only think you are because you don’t look closely at the output and send it off to your lowly janitors to deal with. And to the people who claim to be running 20 or 30 AI tasks at once: what are you even building? If you aren’t literally shipping the next Amazon, that’s just embarrassing.
I cannot wait for people to wake from this bizarre mass psychosis. I already see co-workers’ context windows getting smaller than the free version of ChatGPT in an incognito window.
Generative AI is completely in line with the rest of the industrial milieu: pumping out product as quickly and as cheaply as possible. "Good enough" was often the standard even before the industrial mode, but the industrial mode allows "good enough" to compound exponentially until you've got an edifice of trash that continues tumbling downhill on sheer momentum while we all scramble to fix the thing in situ.
This is how our world works, and until it hits the proverbial wall, this is how it will continue to work, because it's too big to be detoured or course-adjusted.
> 90% is a lot. Will you care about the last 10%? I'm terrified that you won't.
I would argue most never did.
If you spend time in the startup world you quickly realize how little the average developer cares about craftsmanship or quality.
The startup world is full of mantras like “move fast and break things,” or “if you are not embarrassed by your MVP, it’s not an MVP.”
LLMs are an embodiment of the Pareto principle. Turns out that if you can get an 80% solution in 1% of the time, no one gives a shit about the remaining 20%. I agree that’s terrifying. The existential AI risk crowd is afraid we’ll produce gods to destroy us. The reality is we’ve instead exposed a major weakness in our culture: we’ve trained ourselves to care nothing about quality and instead to maximize consumption.
This isn’t news really. Content farms already existed. Amusing Ourselves to Death was written in 1985. Critiques of the culture exist way before that. But the reality of seeing the end game of such a culture laid bare in the waste of the data center buildout is shocking and repulsive.
What terrifies me is the total and utter potential disruption to our economies in very rapid order.
Software is just a proxy for the thing that we want, which is data. The same way an electric drill is a proxy to a hole. Since it's impossible to sell holes, there's a market for selling electric drills to make holes.
A lot of economic activity is based on these proxies, and the same is true in the software world. Even though it's data that we're after, many successful software businesses have been started to sell the tools, i.e. software products for people to make their digital "holes".
Now imagine if you could just suddenly 3D print your electric drill. Or your frying pan. Or your garden shears. What would happen to the economies based on selling these tools?
Once you can prompt your way to any digital creation what happens to the economies based on making the digital tools?
It's not there yet, but if/when it gets there, it's going to be a complete economic restructuring that will affect many. Careers will be wiped out; livelihoods will be lost.
> 90% is a lot. Will you care about the last 10%? I'm terrified that you won't.
I feel like long before LLMs, people already didn't care about this.
If anything software quality has been decreasing significantly, even at the "highest level" (see Windows, macOS, etc). Are LLMs going to make it worse? I'm skeptical, because they might actually accelerate shipping bug fixes that (pre-LLMs) would have required more time and management buy-in, only to be met with "yeah don’t bother, look at the usage stats, nobody cares".
> I'm terrified that our craft will die, and nobody will even care to mourn it.
"Terrified" is a strong word for the death of any craft. And as long as there are thousands that love the craft, then it will not have died.
As much as we speak about slop in the context of AI, slop as the cheap low-quality thing is not a new concept.
As lots of people seem to always prefer the cheaper option, we now have single-use plastics, ultra-fast fashion, plastic stuff that'll break in the short term, brittle plywood furniture, cheap ultra-processed food, etc.
Classic software development always felt like a tailor-made job to me, and of course it's slow and expensive, but done by professionals it can give excellent results. Now, if you can get crappy but cheap and good-enough results, of course that'll be the preferred option for mass production.
Commercial ventures already had to care exactly to the extent that they are financially motivated by competitive forces and by regulation.
In my experience coding agents are actually better at doing the final polish and plugging in gaps that a developer under time pressure to ship would skip.
I was watching a YouTube video the other day where the guy was complaining that his website was dropping off the Google search results. Long story short, he reworded it according to advice from Gemini; the more he did it, the better it performed, but he was reflecting on how the website no longer represented him.
Soon, we'll all just be meatpuppets, guided by AI to suit AI.
The terrifying part isn't obsolescence. It's mediocrity becoming the ceiling.
AI produces code that technically runs but lacks the thoughtfulness that makes software maintainable or elegant. The "90% solution" ships because economic pressure rewards speed over quality.
What haunts me: compilers don't make design decisions. IDEs don't choose architecture. AI does both, and most users accept those choices uncritically. We're already seeing juniors who've never debugged without a copilot.
The author's real question: what if most people genuinely don't care about the last 10%? Not from laziness, but because "good enough" is cheaper and we're all exhausted.
Dismissing this as "just another moral panic" feels too easy. The handcraft isn't dying because AI is too good. It's dying because mediocrity is profitable.
From The Free Press article:
In a 1995 interview with Inc. magazine, author Kurt Vonnegut was asked what he thought about living in an increasingly digitized world. His response is so perfect that it’s worth reprinting in full:
I work at home, and if I wanted to, I could have a computer right by my bed, and I’d never have to leave it. But I use a typewriter, and afterwards I mark up the pages with a pencil. Then I call up this woman named Carol out in Woodstock and say, “Are you still doing typing?” Sure she is, and her husband is trying to track bluebirds out there and not having much luck, and so we chitchat back and forth, and I say, “OK, I’ll send you the pages.”
Then I’m going down the steps, and my wife calls up, “Where are you going?” I say, “Well, I’m going to go buy an envelope.” And she says, “You’re not a poor man. Why don’t you buy a thousand envelopes? They’ll deliver them, and you can put them in a closet.” And I say, “Hush.” So I go down the steps here, and I go out to this newsstand across the street where they sell magazines and lottery tickets and stationery. I have to get in line because there are people buying candy and all that sort of thing, and I talk to them. The woman behind the counter has a jewel between her eyes, and when it’s my turn, I ask her if there have been any big winners lately. I get my envelope and seal it up and go to the postal convenience center down the block at the corner of 47th Street and 2nd Avenue, where I’m secretly in love with the woman behind the counter. I keep absolutely poker-faced; I never let her know how I feel about her. One time I had my pocket picked in there and got to meet a cop and tell him about it. Anyway, I address the envelope to Carol in Woodstock. I stamp the envelope and mail it in a mailbox in front of the post office, and I go home. And I’ve had a hell of a good time. And I tell you, we are here on Earth to fart around, and don’t let anybody tell you any different.
We’re dancing animals. How beautiful it is to get up and go do something.
"terrified".... overused word. As a man I literally can't relate. I get terrified when I see a shark next to me in the ocean. I get impatient when code is hard to debug.
I’m a systems person too, and I don’t see mediocrity as inevitable.
The slop problem isn’t just model quality. It’s incentives and decision making at inference time. That’s why I’m working on an open source tool for governance and validation during inference, rather than trying to solve everything in pre-training.
Better systems can produce better outcomes, even with the same models.
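To make that concrete, here is a minimal sketch of what inference-time validation can look like in general: a rule pipeline wrapped around a model call, where outputs that fail the rules are rejected and re-prompted. The rules and the generate() callable are hypothetical placeholders for illustration, not any specific tool's API.

```python
# Minimal sketch of inference-time validation, assuming a hypothetical
# generate() callable and illustrative rules (not a real tool's API).
from typing import Callable, Optional

# A rule inspects an output and returns an error message, or None if it passes.
Rule = Callable[[str], Optional[str]]

def no_placeholders(output: str) -> Optional[str]:
    """Reject outputs that ship unfinished stubs."""
    return "output contains unfinished placeholders" if "TODO" in output else None

def max_length(limit: int) -> Rule:
    """Build a rule that caps output size."""
    def rule(output: str) -> Optional[str]:
        return f"output exceeds {limit} characters" if len(output) > limit else None
    return rule

def validated_generate(generate: Callable[[str], str], prompt: str,
                       rules: list[Rule], max_retries: int = 2) -> str:
    """Call the model, feeding rule violations back until the output passes."""
    violations: list[str] = []
    for _ in range(max_retries + 1):
        output = generate(prompt)
        violations = [msg for rule in rules if (msg := rule(output)) is not None]
        if not violations:
            return output
        # Re-prompt with the concrete failures so the next attempt can fix them.
        prompt += "\n\nPrevious attempt was rejected: " + "; ".join(violations)
    raise ValueError(f"no output passed validation: {violations}")
```

Usage would be something like validated_generate(my_model, prompt, [no_placeholders, max_length(4000)]): same model, but its outputs have to clear explicit checks before anyone downstream sees them.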
AI slop is similar to the cheap tools at Harbor Freight. We used to have to buy really expensive tools that were designed to last forever and perform a ton of jobs. Now we can just go to Harbor Freight and get a tool that is good enough for most people.
"80% of good" may be reframed as "100% OK for 80% of the people." It's when you are in the minority that cares about or needs that last 20% that it's a problem, because the 80% were subsidizing your needs by buying more than they need.
> You get AI that can make you like 90% of a thing! 90% is a lot. Will you care about the last 10%? I'm terrified that you won't.
Based on the Adobe stock price, the market thinks AI slop software will be good enough for about 20% of Adobe users (or Adobe will need to make its software 20% cheaper, or most likely somewhere in between).
Interestingly, Workday, which is possibly slightly simpler software more easily replicable using coding agents, is down about the same (26%).
I don't think craft dies, but I do think it retreats.
As it should.
Dr. Strangelove or: How I Learned to Stop Worrying and Love the Slop
If slop doesn't get better, it would mean that at least I get to keep my job. In the areas where the remaining 10% don't matter, maybe I won't. I'm struggling to come up with an example of such software outside of one-off scripts and some home automation though.
The job is going to be much less fun, yes, but I won't have to learn from scratch and compete with young people in a different area (and which I will enjoy less, most likely). So, if anything slop gives me hope.
Why is slop assumed inevitable? These models are plagiarism and copyright laundering machines. We need a great AI model reset whereby all published works are assumed to opt out of training and companies pay to train on your data. We've seen what AI can do; now fund the creators.
We should have also been talking about "devops slop" since 2007! "It's good enough": we've heard this for how many decades?
>What if the future of computing belongs not to artisan developers or Carol from Accounting, but to whoever can churn out the most software out the fastest? What if good enough really is good enough for most people?
Sounds like the cost of everything goes down. Instead of subscription apps, we have free F-Droid apps. Instead of only the 0.1% commissioning art, all of humanity gets to commission art.
And when we do pay for things, instead of an app doing 1 feature well, we have apps doing 10 features well with integration. (I am living this: instead of shipping software with 1 core feature, I can do 1 core feature and 6 different options for free, no change order needed.)
I deeply hate the people who use AI to poison the music, video, or articles that I consume. However, I really feel that it can possibly make software cheaper.
A couple of years ago, I worked for an agency as a dev. I had a chat with one of the sales people, and he said clients asked him why custom apps were so expensive, when the hardware had gotten relatively cheap. He had a much harder time selling mobile apps.
Possibly, this will bring a new era of decent macOS desktop and mobile apps, not another web app that I have to run in my browser and have no control over.
One of the biggest problems with AI slop (maybe the biggest) is that we aren't discerning or critical enough to ignore the bad stuff. It should be fine for people to use AI to generate tons of crap, so long as people curate the good stuff to the top.
The slop is sad but a mild irritation at most.
It's the societal-level impact of recent advances that I'd call "terrifying". There is a non-zero chance we end up with a "useless" class that can't compete against AI & machines - like, at all, on any metric. And there doesn't seem to be much of a game plan for dealing with that without the social fabric tearing.
The cream rises to the top. If someone's shit-coded program hangs and crashes frequently, in this day and age, we don't have to put up with it any longer. That lazy half-assed feature that everyone knows sucks but we're forced to use anyway? The competition just vibe coded a hyper-specific version of that app that doesn't suck for everyone involved. We start looking at who's requiring what, what's an interface, and what's required to use it. If there's an endpoint that I can hit, but someone has a better, more polished UI that users prefer, let the markets decide.
My favorite pre-LLM thing in this area is Flighty. It's a flight tracking app that takes available data and presents it in the best possible way. Another one is that EU border visa residency app that came through here a couple of months ago.
Standards for interchange formats have now become paramount.
API access is another hinge point.
Meh. Slop is not a danger, because in software, quantity of code does not have a quality of its own - or if it does, it is not a good quality. And bad software costs money. The problem with Temu for the West is not that the things sold there are bad. The real problem arose in the last 2-3 years when they became good.
I use AI/LLMs hard for my programming.
They allow me to do work I could never have done before.
But there’s no chance at all of an LLM one shotting anything that I aim to build.
Every single step in the process is an intensely human grind trying to understand the LLM and coax it to make the thing I have in mind.
The people who are panicking aren’t using this stuff in depth. If they were, then they would have no anxiety at all.
If only the LLM was smart enough to write the software. I wish it could. It can’t, nor even close.
As for web browsers built in a few hours: no. No LLM is coming anywhere near building a web browser in a few hours. Unless you're talking about some super simple, super minimal toy with some of the surface appearance of a web browser.
Our paper on removing AI slop got accepted to ICLR 2026, and it's under consideration for an Ig Nobel prize:
https://arxiv.org/abs/2510.15061
Our definition of slop (repetitive characteristic language from LLMs) is the original one as articulated by the LLM creative writing community circa 2022-2023. Folks trying to redefine it today to mean "lazy LLM outputs I don't like" should have chosen a different word.
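Under that original definition, slop is a measurable corpus property rather than a vibe. As a toy illustration only (hypothetical, not the linked paper's method): one can flag n-grams that are heavily over-represented in LLM output relative to a human-written baseline.

```python
# Toy illustration of "repetitive characteristic language": flag n-grams that
# are far more frequent in an LLM corpus than in a human baseline. Corpora,
# smoothing, and threshold are illustrative assumptions, not the paper's method.
from collections import Counter

def ngram_counts(text: str, n: int = 3) -> Counter:
    """Count word n-grams in a lowercased, whitespace-tokenized text."""
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def slop_phrases(llm_text: str, human_text: str, n: int = 3,
                 min_ratio: float = 5.0) -> list[tuple[str, float]]:
    """Return n-grams over-represented in LLM text, sorted by over-representation."""
    llm, human = ngram_counts(llm_text, n), ngram_counts(human_text, n)
    llm_total = sum(llm.values()) or 1
    human_total = sum(human.values()) or 1
    flagged = []
    for gram, count in llm.items():
        llm_rate = count / llm_total
        human_rate = (human.get(gram, 0) + 1) / human_total  # add-one smoothing
        ratio = llm_rate / human_rate
        if ratio >= min_ratio:
            flagged.append((" ".join(gram), ratio))
    return sorted(flagged, key=lambda pair: -pair[1])
```

Calling slop_phrases(llm_corpus, human_corpus) on large enough corpora would surface the stock phrases people recognize as slop, which is the point: the original sense is statistical, not "lazy output I don't like".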
The Butlerian Jihad has to happen. Destroy the datacenters and give the oligarchs the French treatment!
Slop existed before AI came along.
It's often lamented that the World Wide Web used to be controlled by indie makers, but now belongs to a handful of megacorp websites and ad networks pushing addictive content. But, the indie maker era was just a temporary market inefficiency, from before businesses fully knew how to harness the technology.
I think software development has gone through a similar change. At one point software companies cared about software quality, but this too was just an idealist, engineer-driven market inefficiency. Eventually business leaders realized they can make just as much money (but make it faster) by shoveling out rushed, bloated, garbage software, since even though poor-quality software aggravates people, it doesn't aggravate enough for the average person to switch vendors over it. (Case in point - I'm regularly astounded at how buggy the YouTube app is on Android of all platforms. I have to force-kill it semi-regularly to get it working right. But am I gonna stop watching YouTube because of this? Admittedly, no, probably not.)
>What if AI stops getting better and what if people stop caring?
Seems an unlikely problem. It'll get better, which may cause its own problems.
Now that generative AI products are becoming more widely used, it's a little depressing how folks don't seem to view the world with a broad historical context.
The "AI effect" on the world has many similarities to previous events and in many ways changes very little about how the world works.
> I'm terrified of the good enough to ship—and I'm terrified of nobody else caring.
For almost every product/service ever offered, it was possible to scale the "quality" of the offering while largely keeping the function or outcome static. In fact, lots of capitalistic activity is basically a search for the cheapest and fastest way to accomplish a minimum set of requirements. This leads folks (including me!!) to lament the quality of certain products/services.
For example, it's possible to make hiking boots that last a lot longer than others. But if the requirement is to have it last for just 20 miles, it's better to pay less for one that won't last as long.
Software is the same way. Most users just absolutely do not know about, care about, or worry about security, privacy, maintainability, robustness, or a host of other things. For some reason this is continually terrifying and shocking to many.
There is nothing surprising here, it's been this way for many years and will continue.
Obviously there are exceptions, but for the most part it's best to assume the above.