The Fediverse, a decentralized social network made up of interconnected spaces that are each independently managed with their own rules and cultural norms, has seen a surge in popularity. […]
Not the best news in this report. We need to find ways to do more.
Because it’s another “WON’T SOMEONE THINK OF THE CHILDREN” hysteria bait post.
They found 112 images of cp in the whole Fediverse. That’s a very small number. We’re doing pretty good.
It is not “in the whole fediverse”, it is out of approximately 325,000 posts analyzed over a two-day period, i.e. roughly one known match per 2,900 posts. And that is just for known images that matched the hash. Quoting the entire paragraph:

“Out of approximately 325,000 posts analyzed over a two day period, we detected 112 instances of known CSAM, as well as 554 instances of content identified as sexually explicit with highest confidence by Google SafeSearch in posts that also matched hashtags or keywords commonly used by child exploitation communities. We also found 713 uses of the top 20 CSAM-related hashtags on the Fediverse on posts containing media, as well as 1,217 posts containing no media (the text content of which primarily related to off-site CSAM trading or grooming of minors). From post metadata, we observed the presence of emerging content categories including Computer-Generated CSAM (CG-CSAM) as well as Self-Generated CSAM (SG-CSAM).”
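For a concrete sense of the two detection mechanisms in that paragraph, here is a minimal sketch in Python. It uses the open-source imagehash library as a stand-in for the proprietary perceptual hashing that production systems run against industry hash databases (such as NCMEC's), plus the Google Cloud Vision client for SafeSearch; KNOWN_HASHES and HAMMING_THRESHOLD are hypothetical placeholders, not values from the report.

```python
# Sketch of the two checks the quoted paragraph describes: a lookup
# against a database of hashes of known images, and a SafeSearch
# likelihood query. imagehash stands in for the proprietary hashes
# used in production; KNOWN_HASHES and HAMMING_THRESHOLD are
# hypothetical placeholders.
import imagehash
from PIL import Image
from google.cloud import vision

KNOWN_HASHES: list[imagehash.ImageHash] = []  # hypothetical: loaded from a vetted hash list
HAMMING_THRESHOLD = 8  # assumed tolerance for re-encoded copies, not a published value


def matches_known_hash(path: str) -> bool:
    """Perceptual-hash lookup: flags known images even after resizing or re-encoding."""
    h = imagehash.phash(Image.open(path))
    # Subtracting two ImageHash values yields their Hamming distance.
    return any(h - known <= HAMMING_THRESHOLD for known in KNOWN_HASHES)


def safesearch_flags_adult(path: str) -> bool:
    """'Identified with highest confidence' corresponds to the VERY_LIKELY likelihood."""
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    annotation = client.safe_search_detection(image=image).safe_search_annotation
    return annotation.adult == vision.Likelihood.VERY_LIKELY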
How are the authors distinguishing between posts made by actual pedophiles and posts by law enforcement agencies known to be operating honeypots?
Still, that number should be zero.
In an ideal-world sense, I agree with you: nobody should abuse children, so media of people abusing children should not exist.
In a practical sense, whether talking about moderation or law enforcement, a rate of zero requires very intrusive measures such as moderators checking every post before others are allowed to see it. There are contexts in which that is appropriate, but I doubt many people would like it for the Fediverse at large.
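To make that trade-off concrete, here is a toy sketch (all names hypothetical) of the pre-moderation model the comment describes, where nothing becomes visible until a moderator approves it:

```python
# Toy illustration of a pre-moderation gate: every post is held in a
# queue and becomes visible only after a human moderator approves it.
# All names here are hypothetical, not any real Fediverse software API.
from dataclasses import dataclass, field


@dataclass
class PreModeratedTimeline:
    pending: list[str] = field(default_factory=list)    # held, invisible to others
    published: list[str] = field(default_factory=list)  # approved, visible

    def submit(self, post: str) -> None:
        """Nothing goes live on submission; it waits for review."""
        self.pending.append(post)

    def review(self, approve: bool) -> None:
        """A moderator decision on the oldest held post."""
        post = self.pending.pop(0)
        if approve:
            self.published.append(post)
        # rejected posts are simply dropped in this toy model
```

Every post pays the full review latency in submit(), which is the intrusiveness being objected to; typical Fediverse software instead moderates reactively, after publication.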