A South Korean man has been sentenced to jail for using artificial intelligence to generate exploitative images of children, the first case of its kind in the country as courts around the world encounter the use of new technologies in creating abusive sexual content.
What do you think those AI models are trained on?
deleted by creator
I think that astronaut has hooves for hands
deleted by creator
So it wasn’t trained on pictures of astronauts and pictures of horses?
deleted by creator
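For anyone who hasn’t seen it, the astronaut-riding-a-horse image is the stock demo of a diffusion model combining two concepts it never saw together in training. A minimal sketch using the Hugging Face diffusers library, assuming a Stable Diffusion v1.5-class checkpoint (the model ID below is just an example, not the one from the deleted comment):

```python
# Minimal sketch: generate the canonical "astronaut riding a horse" image.
# Assumes the diffusers library and a publicly hosted SD v1.5 checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision to fit consumer GPUs
)
pipe = pipe.to("cuda")

# The training set contains astronauts and horses separately; the model
# composes them, which is why artifacts like hooves-for-hands show up.
image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut_horse.png")
```

The hooves-for-hands artifact mentioned above is typical of this kind of composition: the model blends nearby concepts statistically rather than copying any single training image.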
As someone who’s spent a couple of weeks down a Stable Diffusion rabbit hole, I can attest that they don’t need to be trained on CP to generate CP content. Using some very popular checkpoints, I inadvertently created some images I found questionable enough to immediately delete, and I wasn’t even using prompts to generate young girls. With the right prompts, I can easily see some of the more popular checkpoints pumping out CP.
I think if it becomes widespread, like you want it to be, models that generate CSAM will be trained on such material, yes.
deleted by creator
Not child porn. AI produces images all the time of things that aren’t in its training set. That’s kind of the point of it.
AI models learn statistical connections from the data they’re provided. They’re going to see connections we can’t, but they’re not going to create things that are not connected to their training data. The closer the connection, the better the result.
From that, it’s a pretty easy conclusion that CSAM will be used to train such models, and since training requires lots of data, and new data to create different and better models…
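To put the “statistical connections” point in concrete terms: Stable Diffusion-style models are trained with the standard denoising objective from the DDPM line of work (sketched here at a high level, glossing over the latent-space details):

$$\mathcal{L}(\theta) = \mathbb{E}_{x_0 \sim p_{\text{data}},\; \epsilon \sim \mathcal{N}(0, I),\; t}\left[\lVert \epsilon - \epsilon_\theta(x_t, t, c) \rVert^2\right]$$

where $x_t$ is a training image with noise added at step $t$ and $c$ is the text conditioning. The model only ever fits the statistics of $p_{\text{data}}$; anything it generates is a recombination of connections learned from that data, which is the sense in which closer connections give better results.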
Real material is being used to train some models, but suggesting that it will encourage the creation of more “data” is silly. The amount required to finetune a model is tiny compared to the amount that is already known to exist. Just like how regular models haven’t driven people to create even more data to train on.
It has driven companies to try to get access to more of the data people generate, to train the models on.
Like ChatGPT on copyrighted books, or Google on emails, docs, etc.
And what does that have to do with the production of CSAM? In the example given, the data already existed; they’ve just been more aggressive about collecting it.
Well, now in addition to regular pedos consuming CSAM, there are additional consumers: people using huge datasets of it to train models.
If there is an increase in demand, the supply will increase as well.
Not necessarily. The same images would be consumed by both groups; there’s no need for new data. This is exactly what artists are afraid of: image generation increases supply dramatically without increasing demand. The amount of data required is also pretty negligible. Maybe a few thousand images.