It gets me how every single comment is the same three tired old jokes about AI. Between "It's only good for imagining up stuff that has no right answer", "And so the downward spiral begins as LLMs are trained on the output of previous LLMs", and "But somehow it still can't remember what I said one minute ago", we've covered every single anti-LLM talking point.
For one, it looks like the actual training data hasn't changed; even though the model has changed and has been able to access the internet for a while now, this is non-news. For another, a lot of people haven't tried GPT-4 and are just complaining about the free version sucking. Well, things you get for free often suck.
And so the downward spiral begins as LLM’s are trained on the output of previous LLM’s
I think we can be fairly confident that the people in charge of training the LLMs have heard this too and are probably on top of it.