- cross-posted to:
- [email protected]
Locking social norms at some predetermined stage is a great way to curb all progress. Like, slavery was a social norm at some point.
Yeah, I suggest using Signal to communicate with friends…
I haven’t read the article. Are you being sarcastic? Or is it a more secure option?
It’s probably the most secure commonly available messaging platform right now. They keep a bare minimum of metadata on their servers: basically just enough to link your account on the platform. Beyond that, everything is e2e encrypted and they can’t tell authorities anything.
Other platforms are a sliding scale, from keeping to/from/timestamp data all the way up to storing full messages.
How many advances were considered against social norms…?
I’m pretty sure it was all of them.
Alternately, you could be getting a personal AI buddy who can whisper a warning in your ear when you’re about to misread the room and do something that’ll cause you a lot of trouble.
deleted by creator
If they used social media as training data, it will say everything is normal human behaviour…
They also used ChatGPT-3 to decide what’s normal.
Well, we’re all fucked.
Sorry, as a large language model…
Wait until they train it on porn sites. The AI may decide seppuku is the only option.
Is that where they all stand in a circle and jizz on the girl’s face?
Minority Report anyone?
But there was nothing wrong with the basic idea of the tech in Minority Report. It worked. They saved many lives by preventing imminent murders with it. The main problem in the movie was that they leapt straight from “your name came out of this machine” to “ten years dungeon. No trial.”
Movies are designed to sell as many tickets as possible by presenting scenarios that provoke endorphins. They’re not serious scenarios you should be making real-world decisions based off of.
Don’t worry guys, I’m sure there’s a very good reason for this.
It appears there is. They’re using it to gauge an area’s general feelings toward a US military presence (whether the local population feels they need help from the US or not), as a means of helping determine the best locations for setting up garrisons and bases during a conflict. Which makes sense: you don’t want to choose an area that really doesn’t want you there, as the locals would likely become an asset to the enemy and put your soldiers at risk.
I, for one, welcome our new robot overlords.
We should violate anything the Pentagon considers to be a norm. Especially when it wants to control social norms.
DAE feel like they woke up one day recently and “AI” suddenly has the answer to EVERY SINGLE PROBLEM EVER? Yet, nothing is getting noticeably better?
“AI” doesn’t have to work a dead-end job to feed its family, or turn to alcohol because it’s lonely and scared of being forgotten. Its training data is a curated version of the human experience based on the Internet!
It’s playing human instead of being human and ALL of its solutions will assume that’s “normal.”
Imagine a five-star general googling “should I attack this country?” That’s silly, right? Well, that’s what’s happening. It’s just being wrapped in a way that makes it look novel.
These are algorithms designed to mimic humans. When faced with any actual controversy they must be persuaded to answer in an “acceptable” and predetermined manner.
The golden rule.