The big question is how? The algorithms aren’t the root cause of the problem, they are just amplifying natural human behaviour.
People have always fallen down these rabbit holes and any algorithm based on predicting what a person will be interested in will suffer a similar problem. How can you regulate what topics a person is interested in?
Do we need algorithms that predict what we’re interested in though? At what point do we go “ah this is actually causing more trouble than it’s worth?”
I’d be perfectly fine browsing content by category rather than having it fed to me based on some sort of black-box weighting system with no clear way for me to correct it. I mean it works great here on Lemmy.
Lemmy literally has an algorithm to rank posts
Or do you sort your posts by new?
What would you propose for YouTube?
Do you ever sort posts by “hot”, “active” or even “top 6 hours”? They’re all algorithms that predict what you’re interested in. Less complex than something like YouTube or Instagram, but the same core principle.
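Even those simple sorts boil down to a scoring function. As a rough illustration (a toy sketch of the general shape of time-decayed ranking, not Lemmy's or Reddit's actual formula), a "hot" sort typically gives votes logarithmic credit and divides by a power of the post's age:

```python
import math
from datetime import datetime, timezone

def hot_rank(score: int, published: datetime, now: datetime) -> float:
    """Toy 'hot' ranking: vote score damped by post age.

    Hypothetical, for illustration only -- the common shape is
    logarithmic credit for votes divided by polynomial time decay,
    so fresh posts get a window to compete with older high scorers.
    """
    hours = max(0.0, (now - published).total_seconds() / 3600)
    return math.log(max(1, score) + 1) / (hours + 2) ** 1.8
```

With a decay like this, a one-hour-old post with 10 upvotes outranks a day-old post with 100: the sort predicts what's interesting *right now* without needing any profile of the individual user.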
The amount of content published on the internet each day makes some kind of sorting necessary. Browsing YouTube by “new” would be a cluttered mess, even with fairly narrow categories. Over 11,000 hours of new video are posted every hour - we need some way to automatically sort the wheat from the chaff, and that means some sort of algorithm.
So how do we build an algorithm that delivers what we want, without giving people too much of what they want if they want something potentially harmful? As far as I know, nobody has found a good answer to that.
Well I mean obviously I’m not against algorithms in general. They’re just mathematical functions to achieve a goal. Each HTTP request generally uses both encryption and compression algorithms and that’s highly useful.
I’m questioning the usefulness of profiling and targeting users with specific content. The Lemmy algorithm isn’t that complex; it doesn’t build a user profile on you, it just goes by general user engagement. That’s fine. Further, by virtue of being open source, Lemmy has no “black box” — its ranking is open for anyone to view and analyse.
Comparing Lemmy to YouTube/Instagram/Facebook/Twitter and the like makes for a rather poor comparison.
Lemmy’s simpler algorithm still has the same problem though. That’s been seen time and time again on Reddit. Humans will actively curate a feed of content they find engaging and avoid content they disagree with. This leads down exactly the same rabbit holes as if you let an algorithm curate a personalised feed for that user.
My theory is society has a suppressing effect on these things… It’s not nice to be a Nazi, or to mistreat people you don’t like, so these things get hidden.
Algorithms do the opposite. Now someone with Nazi tendencies is surrounded by them and encouraged. Posts hating trans people get pushed by algorithms because they drive engagement (even if all the initial responses are negative, it’s still engagement to the algorithm, which will then boost the ‘popular’ post).
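The point that negative responses still count can be made concrete. In a sign-blind engagement metric (hypothetical numbers and weighting, purely to illustrate the argument above), a widely hated post can easily outscore a quietly appreciated one:

```python
def engagement_score(upvotes: int, downvotes: int, replies: int) -> float:
    """Toy engagement metric of the kind described above.

    Hypothetical weighting, for illustration: every interaction counts
    as engagement regardless of sentiment, so the sign of a vote and
    the tone of a reply are invisible to the ranker.
    """
    return upvotes + downvotes + 2 * replies

# A post people hate, with a heated reply thread:
hated = engagement_score(upvotes=5, downvotes=200, replies=150)
# A post people quietly like:
liked = engagement_score(upvotes=120, downvotes=3, replies=10)
```

Here the hated post scores roughly three times higher, so an engagement-maximising feed boosts it — exactly the dynamic described above.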
Things like Lemmy and Mastodon don’t do that and end up nicer places as a result.
@dojan @Mr_Will
I’d recommend you read ‘Weapons of Math Destruction’.
Algorithms are usually developed with the best of intentions but no one really knows how they will behave out in the wild.
#algorithms
Thank you! I looked it up, and it sounds really interesting. Will have a deeper dive into it!