RSS Bot@lemmy.bestiver.se to Hacker News@lemmy.bestiver.se · 2 months ago
AI chatbots are "Yes-Men" that reinforce bad relationship decisions, study finds (news.stanford.edu)
cross-posted to: [email protected]
danh2os@piefed.social · 2 months ago
I agree that the LLM dataset is in play here. What I'm saying is that we are the guide.
theunknownmuncher@lemmy.world · edited, 2 months ago
> What I'm saying is that we are the guide.

The user has far less influence over how the model acts, though. No, we are not the guide.
danh2os@piefed.social · 2 months ago
A hammer's design is controlled by the manufacturer. But who's responsible for what gets built with it? The person swinging it.
theunknownmuncher@lemmy.world · 2 months ago
False equivalency. Hammers are not comparable to LLMs.