Semantic quibbling is one of the least interesting kinds of internet debate, so replace the word “understanding” with whatever word makes you happy. I continued with “and talking about” right afterwards so you can just delete the word entirely and the sentence still works fine. You could have just kept reading.
Since you didn’t read the rest of my comment, I should note that everything after that sentence is about the other issue OP raised and isn’t about model collapse at all.
Anyway. The article about model collapse that I see still crop up every once in a while is this one. It’s not that it has “methodological errors”, though, it’s just that it uses a very artificial training protocol to illustrate model collapse that doesn’t align with how LLMs are actually trained in real life. It’s like demonstrating the effects of inbreeding in animals by crossing brothers and sisters for twenty generations straight - you’ll almost certainly see some strong evidence, but it’s not a pattern of breeding that you are actually going to see in the wild.
Semantic quibbling is one of the least interesting kinds of internet debate
Why do you engage in it then?
In my opinion, a debate about the semantics of understanding and intelligence in the context of AI is highly interesting, and a huge issue for worldwide politics and policy, but you do you.
Facedeer pretends to be above the thing he’s doing because he’s a pretty well-known troll who believes nothing and will say opposite statements just to promote AI…