Model collapse is already happening (cacm.acm.org)
18 points by zdw 6 hours ago | 17 comments
- chromacity 6 hours ago
There's some comedy in this article having all the hallmarks of LLM writing.
- justonceokay 6 hours ago
Yeah, a typo in the subtitle does not especially inspire confidence.
- niccl 6 hours ago
you've got me. What's the typo?
- justonceokay 6 hours ago
It seems to me there is a word or two missing between “rich” and “slowly”. If I read the whole thing aloud I cannot parse it into a sentence. Or the word “rich” could be removed. That would be clunky but at least grammatically sensible.
“Make data get smoothed out” is a very strange way of saying “smooths out data”
- atmavatar 5 hours ago
I read the subtitle as
> The weird, rare, surprising patterns [that make data rich] slowly get smoothed out when an AI model trains on outputs from a previous model.
i.e., the patterns are responsible for making data rich, and they are slowly lost as each new generation model trains on the prior generation's output.
Or, if you'd prefer an analogy, we're using a copy machine to output new documents by taking the last copy spit out by the machine, adding some marks to it, and running it through the copier again. Over time, details present in much older copies blur and fade away in Nth generation copies.
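You can watch that copier effect in a toy simulation (my own sketch, not from the article; all numbers are made up): fit a Gaussian to the current data, sample the next "generation" from the fit, and repeat. The spread, which is where the rare and surprising detail lives, drifts toward zero.

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(0.0, 1.0, size=100)       # generation 0: "real" data

    for gen in range(1, 1001):
        mu, sigma = data.mean(), data.std()     # "train": fit a model to the current data
        data = rng.normal(mu, sigma, size=100)  # next generation sees only model output
        if gen % 200 == 0:
            print(f"gen {gen:4d}: sigma = {sigma:.4f}")

Each refit loses a little tail mass to sampling error, and nothing ever puts it back, so sigma drifts toward zero on average over the generations.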
- quantified 5 hours ago
It might be weird if you haven't read a lot of English. It's actually quite normal to say that process X is a way to make effect Y happen. "Makes your mouth water" is more effective than "waters your mouth". "Makes your breath fresh and tolerable" is better than "freshens and tolerablerizes your breath". Etc.
Actually, what you are describing is what happens when LLM-generated prose cycles and then trains humans to use equally dull thinking.
- justonceokay 58 minutes ago
I have read a lot of English. That’s why it’s weird.
- SunshineTheCat 6 hours ago
I always find articles like this very odd and nebulous because they act as though AI models are just Google.
Type request, get info.
But that's such a narrow, one-dimensional view of how LLMs are used. They can gather data or write an article, but that's probably a minority of use cases.
People have casual conversations with them, get code written, hold brainstorming sessions, dictate voice-recorded notes, and the list goes on.
While the data a model is trained on is important, the supposition is that this data consists only of what sits out there on the interwebs.
That's as opposed to user input/interaction, which, I'm guessing, plays a pretty large role in training models. Maybe even more so in some cases than AI-written blog spam.
- kimi 6 hours ago
I have a pet peeve with this. As a non-native English speaker, I find it very useful to dictate multiple notes, in different languages, and have the LLM produce clear English prose out of them. The prose may be LLM-generated, but I edit it when needed to make sure that the content is 100% mine.
It's like dictating to a typist like they did in the '60s: he will make sure that your letter looks professional and will fix your grammar, but you will sign the letter. This is totally different from LLM spam, the kind that inflates a sentence into a three-page article full of nothing.
So: is it a problem if the language reverts to a mean? That is the point of a shared language, right?
- mvdtnz 4 hours ago
It's not just the language that reverts to a mean, it's the knowledge embedded in the model. If you're interested in discussing niche topics with ChatGPT, the further the model collapses the less likely you are to get meaningful results from the "tail" - the areas of knowledge that fall at the far ends of the model's bell curve.
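A toy version of that tail loss (my sketch; the topic probabilities are invented): treat the model's knowledge as a categorical distribution and let each generation refit, by simple maximum likelihood, to a finite sample of the previous generation's output.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical "knowledge" distribution: a few common topics plus
    # a thin tail of niche ones.
    p = np.array([0.50, 0.30, 0.15, 0.04, 0.01])

    for gen in range(1, 51):
        counts = rng.multinomial(200, p)  # one generation's finite training corpus
        p = counts / counts.sum()         # the next model's "knowledge"
        if gen % 10 == 0:
            print(f"gen {gen:2d}: niche-topic mass = {p[-2:].sum():.3f}")

The key property: once a niche topic draws zero samples in any generation, its probability is zero forever after; later generations can't resample what the corpus no longer contains.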
- FeepingCreature 6 hours ago
Source: a bad study from 2023.
- slowmovintarget 6 hours ago
Why is the study bad?
- FeepingCreature 2 hours ago
Because they exclusively used a model that was about as big as the original GPT-2.
Which, I mean, fair enough within these constraints, but it's cited like it's a universal law.
Really all that can be taken away from the study is "we trained a very small model on data generated from it in a particular way, and this was eventually harmful for the model."
Also note that models nowadays are trained on large amounts of self-generated data (task RL post-training), and it seems to significantly improve their performance.
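The difference is the filter. A rough sketch of that kind of loop (my pseudocode; model_sample and run_tests are hypothetical stand-ins, not any real API): candidates are generated by the model, but only the ones that pass an external check are kept for training, which injects signal the model didn't already have.

    # Sketch of rejection-sampling-style data generation with verification.
    # `model_sample` and `run_tests` are hypothetical stand-ins.
    def build_training_set(prompts, model_sample, run_tests, k=8):
        kept = []
        for prompt in prompts:
            for _ in range(k):                # draw up to k candidates
                candidate = model_sample(prompt)
                if run_tests(candidate):      # external ground truth filters failures
                    kept.append((prompt, candidate))
                    break
        return kept  # only verified outputs go back into training

Naive self-training feeds everything back in; this keeps only outputs the environment has vetted, which is why it can improve a model instead of collapsing it.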
- levocardia 6 hours ago
Evidence: trust me bro. Really, where is the actual evidence that models are "collapsing" from too much AI-generated training material? Evals are up, subjective perception of model usefulness is up (for me, certainly), and if anything the slop levels are down, or at least stable. I find it hard to believe that seven-figure software engineers at top labs aren't being careful about how much post-ChatGPT-era internet content is going into their training data.
- jrmg 6 hours ago
> I find it hard to believe that seven-figure software engineers at top labs aren't being careful about how much post-ChatGPT-era internet content is going into their training data.
I agree - but as the Internet descends into all-slop-all-the-time (seriously, just do a search for reviews or travel advice or technical questions, or most anything, to see it), where do you expect the high-quality training material on future things to come from? I have a hard time imagining it.
- ctoth 6 hours ago
Your Claude Code sessions. Every interaction. Every time the model is asked to do something and then gets feedback on that something ("this didn't work, I got this traceback").
Textbooks, company wikis, news corpora, structured reports of all kinds from far more sources than what is available on the web.
- Terretta 4 hours ago
On your first line: is it clear that's a good thing? Massive "it depends".
Sadly, enterprise fizzbuzz style is wildly successful compared to ghostty style.
Put another way, a gem of code versus the masses of mess. It's amazing new models aren't worse. And now most of this human interaction is with vibers.
LLMs trained by the crowd risk being medianizers, or rather, mediocritizers.
One need not look further than "Absolutely!" to see this in play: user selection shapes the corpus, and the corpus shapes the model. Suddenly content everywhere is “Little houses, all alike.”
On your second line: I couldn't agree more strongly.
ANTHROP\C has been sitting inside high-performance white-collar industries with top builders; that signal is priceless compared to feedback farms in Kenya.
Bet on models that see spiky, pointy mastery at play.