anarchoilluminati [comrade/them]

  • 1 Post
  • 367 Comments
Joined 1 year ago
Cake day: November 14th, 2023

  • In the US, especially in the ’90s, there were a lot of promotional mailers, TV ads, and so on designed to prey on people’s desperation and ignorance, telling the recipient they had “a chance to win $1M!” or “may have already won!”

    Of course, I’ve never heard of anyone who actually won anything from them. It was probably just a data-collection scheme, or a way to get people to send $20 to enter a fake lottery; I’m not sure. But, I have to admit, I did enter a lot of those because I was a kid and I thought it’d be fun or easy, and my dad would humor me by going through the process, or at least pretending that he did. We never won anything, though.

  • But, ironically, the Chinese Room Argument you’re bringing up supports what others here are saying: that LLMs do not ‘understand’ anything.

    It seems to me that you are defining ‘understanding’ in a functionalist sense, so that producing the right output for a given input counts as understanding and the measurable process itself can be said to show ‘understanding’. But that is not what Searle, and seemingly the others here, mean by ‘understanding’. As Searle argues, the question is not purely about syntactic manipulation but about semantics. In other words, these LLMs do not “know” the information they provide; they are just repeating it based on the input/output process with which they were programmed. LLMs do not project or internalize any meaning onto that input/output process. If they had some reflexive consciousness and any ‘understanding’, they could critically assess the meaning of the information and check its validity against the facts, rather than naïvely proclaiming that cockroaches got their name because they like to crawl into penises at night. Do you believe LLMs are conscious?