I've noted before that because AI detectors produce false positives, it's unethical to use them to detect cheating.
Now there's a new study that shows it's even worse. Not only do AI detectors falsely flag human-written text as AI-written, but the way they do it is biased.
This is one more reason not to use them.
But it is intelligence, just a very different form than the one we are used to. It's not entirely trustworthy or accurate in its output yet, but that's OK for what is effectively early-stage AI. Humans have never been fully functional or reliable either, yet they can still be useful. We have fully functional agents capable of doing complex things like building a working computer out of eBay listings, or ordering a pizza in the style you request. I've trained less reliable and less capable human beings. It is not sentient, and it is not perfect or completely reliable, but it is more than just a parrot: it is capable of generating novel output and responding to some novel situations. Of course there is still a lot more to be worked on.
Do you think there is no stochastic element to our natural use of language? Are you never confused by a word that comes out of your mouth that, upon immediate reflection, isn't one you intended to say at all? What we have built is just one piece of the puzzle, but it's not stopping there.
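To be concrete about what "stochastic" means here: a language model picks each next token by sampling from a probability distribution, not by always emitting the single most likely word. Here is a minimal illustrative sketch of temperature-based sampling; the logits below are made-up toy values, not any real model's output:

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one token from a softmax distribution over the given logits.

    Higher temperature flattens the distribution (more randomness);
    temperature near zero approaches a greedy, deterministic choice.
    """
    # Scale logits by temperature, then apply a numerically stable softmax.
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    max_logit = max(scaled.values())
    weights = {tok: math.exp(v - max_logit) for tok, v in scaled.items()}
    total = sum(weights.values())
    probs = {tok: w / total for tok, w in weights.items()}
    # Draw one token according to the resulting probabilities.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Toy logits for the next word after "The cat sat on the ..."
logits = {"mat": 2.0, "sofa": 1.0, "keyboard": 0.5}
print(sample_next_token(logits, temperature=0.7))
```

Run it a few times and "mat" comes out most often, but not always; that controlled randomness is the stochastic element, and it is no more disqualifying for a model than the occasional wrong word is for us.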
There is a lot of work still to be done in mechanistic interpretability and alignment, and users also need to understand the abilities and limitations of the tool, but it's absurd not to be impressed and excited by the current state of neural networks.