• SkyeStarfall@lemmy.blahaj.zone
    5 months ago

    While I agree it’s a relatively low percentage, not being sure and having people pick effectively randomly is still an interesting result.

    The alternative would be for them to never say that GPT-4 is a human, rather than saying so 50% of the time.

        • Hackworth@lemmy.world
          5 months ago

          Aye, I’d wager Claude would be closer to 58-60%. And with the model-probing research Anthropic has been publishing, we could get to ~63% on average in the next couple of years? Those last few percent will be difficult for an indeterminate amount of time, I imagine. But who knows. We’ve already blown past a ton of “limitations” that I thought I might not live long enough to see.

          • dustyData@lemmy.world
            5 months ago

            The problem with that is that you can change the percentage of people who correctly identify other humans as humans simply by changing the way you set up the test. If you tell people they will, for certain, be talking to x number of bots, they will make their answers conform to that expectation, and the correctness of their answers drops to 50%. Humans are really bad at determining whether a chat is with a human or a bot, and AI is no better. These kinds of tests mean nothing.

            • Hackworth@lemmy.world
              5 months ago

              Humans are really bad at determining whether a chat is with a human or a bot

              Eliza is not indistinguishable from a human at 22%.

              Passing the Turing test remained largely out of reach for 70 years precisely because humans are pretty good at spotting counterfeit humans.

              This is a monumental achievement.

              • dustyData@lemmy.world
                5 months ago

                First, that is not how that statistic works; you are reading it entirely wrong.

                Second, this test is intentionally designed to be misleading. Comparing ChatGPT to Eliza is the equivalent of me claiming that the Chevy Bolt is the fastest car to ever enter a highway by comparing it to a 1908 Ford Model T. It completely ignores a huge history of technological developments. There were equally successful chatbots before ChatGPT; they just weren’t LLMs, and they were measured by other methods and systematic trials. The Turing test is not actually a scientific test of anything, so it isn’t standardized in any way. Anyone is free to claim to run a Turing test whenever and however they like, without much control. It is meaningless and proves nothing.