- cross-posted to:
- [email protected]
- [email protected]
WhatsApp’s AI shows gun-wielding children when prompted with ‘Palestine’. By contrast, prompts for ‘Israeli’ do not generate images of people wielding guns, even in response to a prompt for ‘Israel army’.
I’d like to point out that not everything generative is built on machine learning, so biases in training datasets do not affect all generative systems.
That’s off topic, but I’m playing with generative music right now. I started with SuperCollider, but it was too hard (maybe not anymore, to be fair; recycling a phrase, for example, would probably be much easier and faster there than in my macaroni shell script). So now I just generate ABC notation, convert it to MIDI with various instruments, and play the result with FluidSynth.
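For anyone curious, the pipeline can be sketched roughly like this. The tune-generation rule here is a toy deterministic stand-in (my actual script is messier), and the soundfont path and tool names (`abc2midi` from the abcMIDI package, `fluidsynth`) are assumptions about what you have installed:

```shell
#!/bin/sh
# Toy generative-ABC pipeline: emit an ABC tune, then (hypothetically)
# convert it to MIDI and render it with FluidSynth.

notes="C D E G A c"   # C major pentatonic, cycled as a crude "melody"

{
  # Minimal ABC header: index, title, meter, default note length, key.
  printf 'X:1\nT:Generated tune\nM:4/4\nL:1/8\nK:C\n'
  bar=""
  i=0
  for n in $notes $notes $notes; do
    bar="$bar$n"
    i=$((i + 1))
    # Close a bar every 8 eighth notes (one 4/4 measure).
    if [ $((i % 8)) -eq 0 ]; then
      printf '%s|\n' "$bar"
      bar=""
    fi
  done
  # Flush any leftover partial bar.
  if [ -n "$bar" ]; then printf '%s|\n' "$bar"; fi
} > tune.abc

# The actual playback steps, assuming abc2midi and fluidsynth are
# installed and a GM soundfont exists at the path below:
#   abc2midi tune.abc -o tune.mid
#   fluidsynth -ni /usr/share/sounds/sf2/FluidR3_GM.sf2 tune.mid

cat tune.abc
```

Swapping instruments is then just a matter of adding `%%MIDI program` directives to the ABC before conversion, or pointing FluidSynth at a different soundfont.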