- cross-posted to:
- [email protected]
- [email protected]
I can’t remember where I read it, but someone said, “LLMs provide three types of answer: so vague as to be useless, directly plagiarized from a source and reworded, or flat-out wrong but confidently stated as the truth.” I’m probably butchering the quote, but that was the gist of it.
Hold on, let me have ChatGPT rephrase that for you.
I’m not exactly sure of the source, but there was a statement suggesting that language models offer three kinds of responses: ones that are too general to be of any value, those that essentially mimic existing content in a slightly altered form, and assertions that are completely incorrect yet presented with unwavering certainty. I might be paraphrasing inaccurately, but that was the essence.
deleted by creator
This is just ChatGPT rephrasing the comment above me. Don’t worry though, when ChatGPT is wrong it sounds quite confident and even cites sources that don’t exist but look quite convincing!
So the same as answers on Reddit then
To me the answers are useful enough and I appreciate that it understands vague questions. When I don’t know enough about a topic to know what terms to punch into a search engine, I can use ChatGPT as a first step and go from there.
Wish this AI bubble would burst already
LoL… if you hear all the real estate moguls defending Trump, saying “everyone does it, and if this is prosecuted no one in real estate will be able to make a dime,” it’s clear that the bot is trained on real-life data.
Maybe they can train a bot to find the actual fraud and hammer down on it. This is an area where giving auditors AI tools could massively increase their ability to audit complex tax schemes.
Taxes for normal people should be pre-filled by the IRS and just require a person to log in, verify, and submit.
A Microsoft spokesperson declined to comment or answer questions about the company’s role in building the bot
Weird…