You genuinely don’t think CSAM is used in the training of these AI models…? And then you used a chat model to essentially google the differences in text and not visually?..
Why did you feel the need to jump in and defend stuff like this?
Didn't they then post a link showing that DALL-E could combine two different things into something it's never seen before?
Did you read the whole comment? Even if the text model describing things is irrelevant, the DALL-E part is not.
It is irrelevant. Armchairs are not people. DALL-E does not know what is inside those objects, or what's under their fabric, for instance. Ask DALL-E to cut open the avocado armchair.
I’m sorry if I’m not buying your defense of CSAM.
But the DALL-E use case of "an illustration of a baby daikon radish in a tutu walking a dog" can't possibly be the best example to use here to defend child porn.
Thanks for making it clear you’re either arguing in bad faith, or that you’re incapable of talking about actual issues the moment anyone mentions CSAM.
The original comment said it's impossible for a model to produce CP if it was never exposed to it.
They were uninformed, so as someone who works with machine learning, I informed them. If your argument relies on ignorance, it's bad.
Re: the text model, someone already addressed this. If you're going to make arguments and assumptions about things I share without reading them, there's no need for me to bother with my time. You can lead a horse to water, but you can't make it drink.
Have a good one!