I’ve been using LLMs a lot. I use GPT-4 to help edit articles, answer nagging questions I can’t be bothered to research myself, and handle other random things, such as cooking advice.
It’s fair to say, I believe, that all general-purpose LLMs like this are plagiarizing all of the time. Much in the way my friend Patrick doesn’t give me sources for all of his opinions, GPT-4 doesn’t tell me where it got its info on baked corn. The disadvantage is that I can’t trust it any more than I can trust Patrick. When it’s important, I ALWAYS double-check. The advantage is that I don’t have to take the time to compare, contrast, and discover sources myself. It’s a trade-off.
From my perspective, the theoretical advantage of Bing’s or Google’s implementation is ONLY that they provide you with sources. I actually use Bing’s implementation of GPT when I want a quick, real-world reference for an answer.
Google will be making a big mistake by sidelining its sources when open-source LLMs are already overtaking Google’s Bard in quality. Why get questionable advice from Google when I can get slightly less questionable advice from GPT, my phone assistant, or actual inline citations from Bing?