Would it be possible to run AI on an AMD GPU instead of Nvidia?

  • keepthepace@slrpnk.net · 9 months ago

    That’s the opposite of the feedback I got. AMD claims to support the whole transformers library, but many people report that this is a lie.

    I have no love for companies that establish de-facto monopolies, but that is indeed what Nvidia has right now. Everything is built on CUDA; AMD has a lot of catching up to do.

    I have the impression that Apple chips support more things than AMD does.

    There are some people making things work on AMD, and I cheer for them, but let’s not pretend it is as easy as with Nvidia. Most packages depend on CUDA for GPU acceleration.
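
    A minimal sketch of what that dependence looks like in practice, assuming a stock PyTorch install (ROCm builds of PyTorch reuse the same torch.cuda namespace via HIP, which is how AMD gets into CUDA-centric code at all):

    ```python
    import torch

    # The near-universal fallback pattern: packages probe for "cuda" by name.
    # On ROCm wheels this same check can return True, because HIP impersonates
    # CUDA behind the torch.cuda namespace.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    x = torch.ones(3, 3, device=device)
    print(x.device)  # a cuda device on Nvidia/ROCm GPUs, "cpu" otherwise
    ```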

      • keepthepace@slrpnk.net · 9 months ago

        Can’t wait! But really, this kind of thing is what makes it hard for me to cheer for AMD:

        For reasons unknown to me, AMD decided this year to discontinue funding the effort and not release it as any software product. But the good news was that there was a clause in case of this eventuality: Janik could open-source the work if/when the contract ended.

        I wish we had a champion of openness, but in that respect AMD just looks like a worse version of Nvidia. Hell, even Intel has been a better player!

        • remotelove@lemmy.ca · 9 months ago

          I just got DirectML working with torch in WSL2, which was fairly painless.

          I am wondering if that isn’t a better option than trying to emulate CUDA directly. Love it or hate it, Microsoft did a fairly good job wrangling different platforms with DirectX.
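
          For anyone curious, a minimal sketch of the torch + DirectML route, assuming the optional torch-directml package is installed (pip install torch-directml); the device is selected explicitly rather than through torch.cuda:

          ```python
          import torch
          import torch_directml

          dml = torch_directml.device()    # first available DirectML device
          x = torch.randn(4, 4).to(dml)    # moved explicitly, as with .to("cuda")
          y = (x @ x).to("cpu")            # the matmul dispatches through DirectML
          ```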

    • genie@lemmy.world · 9 months ago

      This is especially true in the research space (where 90% of this stuff is being built :)