“Translation: all the times Tesla has vowed that all of its vehicles would soon be capable of fully driving themselves may have been a convenient act of salesmanship that ultimately turned out not to be true.”

Another way to say that: Tesla scammed all of its customers, since, you know, everyone saw this coming…

  • MajorHavoc@programming.dev · 10 days ago

    That out of the way, FSD sucks, and it’s getting worse, not better.

    It’s almost like they bet on the AI to teach the AI, rather than continuing to pay for skilled engineers.

    Buckle up folks, we’re going to see a lot more of this, across every industry, before the lawsuits go into high gear and anything gets better.

    • capital@lemmy.world · 10 days ago

      Since the first time I heard about FSD, I’ve been wondering why Tesla (or others) doesn’t set up a system where drivers opt in (not opted in by default) to sending anonymized driving data to help train the model. The vast majority of the time it would probably capture OK driving, or at least driving with no accidents. But the shitty driving and the accidents are also useful, as data about what to avoid.

      Maybe they’re already doing this? But then I wonder why their FSD is getting shittier rather than improving. One would think more driving data, with both good and bad examples, could only help. (A rough sketch of what such an opt-in pipeline might look like is below.)
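      As a thought experiment, here is a minimal sketch of the opt-in, anonymized collection idea described above. Everything in it is invented for illustration (the TelemetryClient and DrivingEvent names, the fields, the labels); it is not any real Tesla interface:

      ```python
      # Hypothetical opt-in driving-telemetry client (illustrative only).
      # Demonstrates: disabled by default, explicit opt-in, anonymized records,
      # and keeping "bad" events (disengagements, collisions) as negative
      # training examples instead of discarding them.
      import hashlib
      import json
      import time
      from dataclasses import dataclass, asdict


      @dataclass
      class DrivingEvent:
          vehicle_id: str          # raw ID; never leaves the car un-hashed
          speed_kmh: float
          steering_angle: float
          label: str               # "normal", "disengagement", "collision", ...
          timestamp: float


      class TelemetryClient:
          def __init__(self) -> None:
              self.opted_in = False        # off by default, as proposed above
              self.buffer: list[dict] = []

          def opt_in(self) -> None:
              self.opted_in = True

          def record(self, event: DrivingEvent) -> None:
              if not self.opted_in:
                  return                   # drop everything unless the driver opted in
              row = asdict(event)
              # Anonymize: replace the vehicle ID with a one-way hash so records
              # can be grouped per vehicle without identifying it.
              row["vehicle_id"] = hashlib.sha256(event.vehicle_id.encode()).hexdigest()[:16]
              self.buffer.append(row)

          def flush(self) -> str:
              # In a real system this batch would be uploaded to a training-data
              # endpoint; here we just serialize it.
              payload = json.dumps(self.buffer)
              self.buffer = []
              return payload


      client = TelemetryClient()
      client.opt_in()
      client.record(DrivingEvent("5YJ3E1EA7KF000000", 62.0, -1.5, "normal", time.time()))
      client.record(DrivingEvent("5YJ3E1EA7KF000000", 38.0, 12.0, "disengagement", time.time()))
      print(client.flush())
      ```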

    • GenosseFlosse@feddit.org · 10 days ago

      I don’t believe that you can use traditional algorithms to teach the car street driving, because there are too many different variations of intersections, traffic signs, and special conditions like accidents, heavy rain or fog, road closures, or construction sites to get it right every time. Even if your autopilot is 99% correct and you drive 20,000 km a year, you still drive 200 km of it wrong.

      This doesn’t mean that AI will be better, because then you don’t even have source code to track down where it went wrong and correct it in future updates.
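      To make both points concrete, here is a toy sketch (the function and class names are invented): first the error arithmetic from above, then the contrast between a hand-written rule you can step through line by line and a learned policy whose behavior lives in its weights:

      ```python
      # 99% correct over 20,000 km/year still leaves 200 km of wrong driving.
      km_per_year = 20_000
      accuracy = 0.99
      print(f"{km_per_year * (1 - accuracy):.0f} km driven wrong per year")  # 200

      # A traditional rule: when it misbehaves, you can read the exact line at fault.
      def rule_based_speed_limit(posted_kmh: float, heavy_rain: bool) -> float:
          if heavy_rain:
              return posted_kmh * 0.8   # a wrong factor here is findable and fixable
          return posted_kmh

      # A learned policy: the same decision, but the "why" lives in the weights.
      class OpaquePolicy:
          def __init__(self, weights: list[float]) -> None:
              self.weights = weights    # no line of code to point at when it's wrong

          def speed_limit(self, posted_kmh: float, rain_signal: float) -> float:
              # Some learned combination of inputs; fixing a mistake means
              # retraining, not reading source.
              return posted_kmh * (self.weights[0] + self.weights[1] * rain_signal)

      print(rule_based_speed_limit(100.0, heavy_rain=True))     # 80.0, traceable
      print(OpaquePolicy([1.0, -0.2]).speed_limit(100.0, 1.0))  # 80.0, but why?
      ```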

      • MajorHavoc@programming.dev · 9 days ago

        I don’t believe that you can use traditional algorithms to teach the car street driving, because there are too many different variations… Even if your autopilot is 99% correct and you drive 20,000 km a year, you still drive 200 km of it wrong.

        Exactly!

        And this is why, if the problem is solvable, it must be solved by learning models shepherded by expert engineers. The LLMs can take care of the long, boring stretches, freeing skilled engineers to fine-tune an LLM/algorithm hybrid for the tricky bits.

        I’m inclined to believe the problem is solvable, but since I’m not selling anything, I’m allowed to say “if”. Heh.
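        For what it’s worth, one toy way to picture that hybrid (all names invented; this is a sketch of the idea, not anyone’s actual architecture): a dispatcher that lets the learned model handle the routine stretches and routes recognized hard cases to engineer-owned, reviewable rules.

        ```python
        # Toy dispatcher for the "learning models shepherded by engineers" idea:
        # the learned model drives the routine cases, and explicitly recognized
        # hard cases are routed to hand-maintained, auditable logic.
        from enum import Enum, auto


        class Scenario(Enum):
            ROUTINE_HIGHWAY = auto()
            CONSTRUCTION_ZONE = auto()
            HEAVY_FOG = auto()


        class LearnedDriver:
            def plan(self, scenario: Scenario) -> str:
                return "model trajectory"  # stands in for a neural planner


        class HandWrittenFallback:
            # Engineer-owned rules: testable and fixable line by line.
            def plan(self, scenario: Scenario) -> str:
                if scenario is Scenario.HEAVY_FOG:
                    return "slow down, increase following distance"
                if scenario is Scenario.CONSTRUCTION_ZONE:
                    return "follow temporary signage, cap speed"
                return "hand control back to the driver"


        TRICKY = {Scenario.CONSTRUCTION_ZONE, Scenario.HEAVY_FOG}


        def plan(scenario: Scenario, model: LearnedDriver, fallback: HandWrittenFallback) -> str:
            if scenario in TRICKY:
                return fallback.plan(scenario)  # the fine-tuned "tricky bits" path
            return model.plan(scenario)         # the long, boring stretches


        for s in Scenario:
            print(s.name, "->", plan(s, LearnedDriver(), HandWrittenFallback()))
        ```

        The point of the split is that the fallback path stays small and auditable, so the tricky bits remain debuggable in exactly the way the comment above says a pure learned model isn’t.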