The aircraft flew at speeds of up to 1,200 mph. DARPA did not reveal which aircraft won the dogfight.

  • calcopiritus@lemmy.world (+32/−2) · 8 months ago

    In video games, the AI has access to all the data in the game. In real life, both the human and the AI have access to the same (maybe imprecise) sensor data. There are also physical limitations in the real world. I don’t think it’s the same scenario.

    • BluesF@lemmy.world (+5) · 8 months ago

      Not exactly: AI would be able to interpret sensor data in a more complete and thorough way. A person can only take in so much information at once; AI isn’t so limited.

      • calcopiritus@lemmy.world (+3) · 8 months ago

        Don’t get me wrong. Humans have many limitations in this scenario that an AI doesn’t. I’m not saying that a human would do better. For example, as others have stated, an AI doesn’t suffer from G forces like a human does. An AI also reads the raw sensor data instead of a screen.

        All I’m saying is that this case is not the same as a video game.

        • intensely_human@lemm.ee (+3) · 8 months ago

          Video games can model point of view and limit the AI to what it can legitimately see, while still taking the governor chip off its aiming and reaction-time performance.
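
          Not from any real engine, just a rough Python sketch of that idea (the field-of-view cone, visual range, and target layout are made up for illustration):

          ```python
          import math

          FOV_DEGREES = 110.0   # assumed view cone the AI is allowed to "see"
          MAX_RANGE = 2000.0    # assumed visual range in game units

          def visible_targets(ai_pos, ai_heading_deg, targets):
              """Return only the targets the AI could legitimately see."""
              seen = []
              for t in targets:
                  dx, dy = t["x"] - ai_pos[0], t["y"] - ai_pos[1]
                  dist = math.hypot(dx, dy)
                  if dist > MAX_RANGE:
                      continue  # beyond visual range
                  bearing = math.degrees(math.atan2(dy, dx))
                  # smallest angular difference between heading and bearing
                  off_axis = abs((bearing - ai_heading_deg + 180) % 360 - 180)
                  if off_axis <= FOV_DEGREES / 2:
                      seen.append(t)  # in view; aiming/reaction stay uncapped
              return seen

          # The AI sits at the origin facing along +x: it sees only the target ahead.
          targets = [{"x": 500, "y": 50}, {"x": -300, "y": 0}]
          print(visible_targets((0, 0), 0.0, targets))  # -> only the first target
          ```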

            • VirtualOdour@sh.itjust.works (+1) · 8 months ago

              AI can balance a physical triple pendulum and move it between positions fluidly using vision alone; a human has no chance of coming close.
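
              (To be clear, the real triple-pendulum demos use model-based or learned controllers; this is only a toy single-pendulum PD loop in Python, with made-up gains and a placeholder vision step, to illustrate the control-from-camera idea.)

              ```python
              KP, KD = 40.0, 8.0      # assumed proportional/derivative gains
              DT = 1.0 / 60.0         # assumed camera frame rate (60 fps)

              def estimate_angle_from_frame(frame):
                  """Placeholder for a vision pipeline returning the pole angle (rad)."""
                  return frame["angle"]   # in reality: detect markers, fit the pole, etc.

              def control_step(frame, prev_angle):
                  """One PD update: push the cart so the pole angle is driven back to 0."""
                  angle = estimate_angle_from_frame(frame)
                  angular_rate = (angle - prev_angle) / DT
                  force = -(KP * angle + KD * angular_rate)   # oppose tilt and tilt rate
                  return force, angle

              # Fake frames: the pole starts tilted 0.2 rad and settles toward upright.
              angle, prev = 0.2, 0.2
              for step in range(3):
                  force, prev = control_step({"angle": angle}, prev)
                  print(f"step {step}: angle={angle:.2f} rad -> cart force {force:+.1f} N")
                  angle *= 0.6   # pretend the push reduces the tilt each frame
              ```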

              We’re genuinely in the sci-fi “robots are better than humans” phase of history. By 2030 you’ll be used to seeing impressive things done by robots, like Dude Perfect-style videos where people set up crazy challenges: “I got my robot to throw THIS egg through THIS obstacle course and you’ll never believe how it did it!”

    • NeatNit@discuss.tchncs.de (+3) · 8 months ago

      I think even the imperfect sensor data is enough to beat a human. My main argument for why self-driving cars will eventually be objectively safer than the best human drivers (no comment on whether that point has already been reached) is this:

      A human can only look at one thing at a time. Compared to a computer, we see slow, think slow, react slow, move slow. A computer can look in all directions all the time, and react to danger coming from any of those directions faster than a human driver would even if they were lucky enough to be looking in the right direction. Add to that the fact that it can take in much more sensor data that either isn’t available to the driver at all, or would cost the driver precious looking-at-the-road time to check, such as wind resistance, engine RPM, or what have you (I’m actually not a car guy, so my examples aren’t the best).

      Bottom line: the AI has a direct connection to more data, can take more of it in at once, and can make faster decisions based on all of it. It’s inherently better. The “only” hurdles are making it actually interpret its sensors effectively (i.e. understand what the cameras are seeing) and make good decisions based on that data. We can argue about how well either of those works in the current state of the technology, but IMO they’re both good enough today to massively outperform a human in most scenarios.
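
      To make that concrete, here’s a toy Python sketch of the “watch every direction at once” idea (the sensor layout, thresholds, and Reading type are all made up, not from any real self-driving stack):

      ```python
      from dataclasses import dataclass

      BRAKE_TTC_SECONDS = 2.0    # assumed: brake if anything will hit us within 2 s

      @dataclass
      class Reading:
          direction: str      # e.g. "front", "rear-left"
          distance_m: float   # range to the nearest object in that direction
          closing_mps: float  # closing speed; positive means the gap is shrinking

      def most_urgent_hazard(readings):
          """Return (direction, time_to_collision) for the most urgent hazard."""
          worst = None
          for r in readings:
              if r.closing_mps <= 0:
                  continue                       # not approaching, ignore
              ttc = r.distance_m / r.closing_mps
              if worst is None or ttc < worst[1]:
                  worst = (r.direction, ttc)
          return worst

      def decide(readings):
          hazard = most_urgent_hazard(readings)
          if hazard and hazard[1] < BRAKE_TTC_SECONDS:
              return f"brake: {hazard[0]} hazard in {hazard[1]:.1f}s"
          return "continue"

      # A driver glancing forward would miss the car closing fast from the rear-left.
      print(decide([
          Reading("front", 40.0, 2.0),        # 20 s away, fine
          Reading("rear-left", 6.0, 4.0),     # 1.5 s away -> triggers braking
      ]))
      ```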

      All of this applies to an AI plane as well. So my money is on the AI.