OpenAI collapses media reality with Sora AI video generator | If trusting video from anonymous sources on social media was a bad idea before, it’s an even worse idea now

Hello, cultural singularity—soon, every video you see online could be completely fake.

  • gerryflap@feddit.nl · +99/-2 · 9 months ago

    I looked at these videos with very mixed emotions. On the one hand, I marveled at how far we’ve gotten. In a few years we went from generating sort-of-okay images in a very confined domain, essentially uncontrollable, to generating high-resolution video that at first glance looks real.

    But then the sadness struck me. I think we’re entering the post-truth era, where the truth is harder and harder to find because all the fake stuff looks so real. We can generate text, images, sound, and now also video of whatever we want in the blink of an eye. Combine this with the tendency of people to accept any “information” that fits their view, and the filter bubbles that already exist, and we can see that humanity will start living in separate bubbles. Every bubble will have its own truth, and even if someone proves that a video or image is fake, that information will probably not even reach them because the truth doesn’t generate enough clicks.

    I want to stay optimistic; we’ve overcome so much as a species, so maybe we’ll right the ship at some point. But with all the shit that is already going on in the world, the last thing we need is the ability to fake videos like this in no time at all. At some point the separate filter bubbles will tear apart the stable western world as we knew it, and we’ll see shit like WWII again. The situation is already heating up.

    • shneancy@lemmy.world · +30 · 9 months ago

      It’s funny that in human history there will be a gap of around 100 years where photos and video were considered solid proof, evidence that could determine the outcome of somebody’s future.

      we’re back at square one I guess

      • General_Effort@lemmy.world · +8/-2 · 9 months ago

        Naah, that was never a thing. E.g.: in 1917, two young girls created some photographs of fairies, the Cottingley Fairies. Arthur Conan Doyle, the creator of Sherlock Holmes, endorsed them as real. “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.” That quote is terrible advice.

        The last decade or two were really the golden age of credible evidence. Everyone has a video camera and can upload footage almost immediately (proving that the videos were not edited later). Yet, at the same time, misinformation has become a huge topic.


        We’re not back to square one, either. You can still immediately upload a video (or a hash of it, or get it certified in some way). Say you do this with dashcam footage after a collision, ASAP. That makes it almost unassailable as evidence, because you can’t have had time to forge it; certainly not in a way that is congruent with independent evidence and testimony.
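
        As a rough illustration of that hash idea, here is a minimal sketch in Python (the file name is a made-up placeholder): compute a digest of the footage right away and publish only the digest somewhere that gets an independent timestamp.

        import hashlib

        def fingerprint(path: str) -> str:
            # Hash the file in 1 MiB chunks so large videos don't need to fit in memory.
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        # "dashcam.mp4" is a placeholder file name for this example.
        digest = fingerprint("dashcam.mp4")
        # Post this digest anywhere that gets an independent timestamp (a public
        # comment, a timestamping service). Later, anyone can re-hash the file
        # and confirm it hasn't been touched since that moment.
        print(digest)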

        If several people upload videos of the same event at about the same time, then either they are all in it together and carefully prepared the videos beforehand, or the footage is genuine.

    • Dadifer@lemmy.world · +11/-1 · 9 months ago

      I’m actually glad that AI is making people realize that what they see is likely not real. For most of the history of media, the default has been for the written word, images, or video to be taken as 100% truth, when in reality it has always been very easy to deceive and manipulate. Now that we will suspect everything, maybe there will finally be critical thinking.

    • eluvatar@programming.dev · +9 · 9 months ago

      Honestly, I think we’ve been there for a while. The only difference now is that it’s very easy for anyone to fake something, which might actually force us to face it? Or not, who knows.

  • 1984@lemmy.today · +59/-4 · 9 months ago (edited)

    Another stepping stone to a much worse world. We won’t know what is real anymore.

    I think it’s very cool technology, but in the hands of governments and psyops, it’s going to brainwash entire countries.

    Want another 9/11? Sure, no problem. Blow up a building, tell people you have some random video of what happened captured by civilians… place evidence in locations where it will be found.

      • 1984@lemmy.today · +7/-1 · 9 months ago

        I think some governments already had tech like this but not all.

        It will be interesting to follow this. One likely consequence is lots of fake videos on YouTube depicting events that never happened, made to stir up aggression.

        • BearOfaTime@lemm.ee · +1/-1 · 9 months ago

          Photoshop has existed for a long time. Three Letter Agencies have been faking stuff forever. Not new.

          Will this make it easier/faster? For sure. The one upside I can see is that it brings the conversation to everyone, even those folks who don’t want to acknowledge that government is as bad an actor as anyone else.

    • CosmoNova@lemmy.world · +7 · 9 months ago

      We won’t know what is real anymore.

      Of all the things, this really scares me. Many people scroll through their socials so quickly that they will definitely not be able to tell generated clips apart from real ones. And the generated ones will only get better. One generation later, nobody will believe anything they see on a screen. And no, I don’t think regulation can do much here, as it will only end up heavily censoring everything, leading to more distrust in media.

  • Rayquetzalcoatl@lemmy.world · +56/-6 · 9 months ago

    This kind of AI stuff bums me out. You get people legitimately sharing AI images (and potentially videos in the future) and saying “look what I made!”. It’s totally inauthentic.

    My boss loves this shit, on the other hand. Looking forward to the day she can automate our jobs away, I assume.

    • mods_are_assholes@lemmy.world · +3 · 9 months ago

      Because the rich profit the most when everyone else is in fear and confusion.

      So they generate it as much as possible and reap the rewards of ignorance and knee-jerk policies.

    • BearOfaTime@lemm.ee · +5/-5 · 9 months ago (edited)

      Because we’re all born selfish assholes*, and some people never learn to not be so.

      *We’re all born as selfish idiots; how could we be otherwise? We’re helpless at birth, thrust from perfect comfort and safety into discomfort, utterly ignorant and wholly dependent, with no knowledge that there are others who are just as dependent and helpless when they’re born. Learning about others, and how to get along with them, is part of maturing.

      • mods_are_assholes@lemmy.world · +3 · 9 months ago

        Sorry no, your perceptions are skewed by how well society rewards selfish assholes.

        Most humans are inherently empathic and compassionate; it’s just that the tiny handful of sociopaths who run everything are projecting.

  • Dogyote@slrpnk.net · +25 · 9 months ago

    It’s like we’re going back to the pre-internet era but it’s obviously a little different. Before the internet, there were just a few major media providers on TV plus lots of local newspapers. I would say that, for the most part in the USA, the public trusted TV news sources even though their material interests weren’t aligned (regular people vs big media corporations). It felt like there wasn’t a reason not to trust them, since they always told an acceptable version of the truth and there wasn’t an easy way to find a different narrative (no internet or crazy cable news). Local newspapers were usually very trusted, since they were often locally owned and part of the community.

    The internet broke all of those business models. Local newspapers died because why do you need a paper when there are news websites? Major media companies were big enough to weather the storm and could buy up struggling competitors. They consolidated and one in particular started aggressively spinning the news to fit a narrative for ratings and political gain of the ownership class. Other companies followed suit.

    This, paired with the thousands of available narratives online, weakened the credibility of the major media companies. Anyone could find the other side of the story or fact check whatever was on TV.

    Now what is happening? The internet is being polluted with garbage and lies. It hasn’t been good for some time now. Obviously anyone could always type up bullshit, but for a while photos were considered reliable proof (usually). Then photoshopping something became easier and easier, which made video the new standard of reliable proof (in most cases).

    But if anything can be fake now and difficult to identify as fake, then how can you fact check anything? Only those with the means will be able to produce undeniably real news with great difficulty, which I think will return power to major news companies or something equivalent.

    I’m probably wrong about what the future holds, so what do you think is going to happen?

    • kromem@lemmy.world · +4 · 9 months ago (edited)

      I don’t think you’re wrong, I have been thinking the same thing.

      Everyone has been worried about “AI misinformation” - but if misinformation becomes so commoditized online that someone convinced the moon landing is fake finds two dozen different AI-generated sources agreeing with them but disagreeing with each other (e.g. a video of Orson Welles filming it but also a video of Stanley Kubrick filming it), we may well end up in a world where people just stop paying attention to the bullshit online that has been destroying people’s minds for years now.

      Couple this with advances in AI correctly identifying misinformation and live fact-checking it with citations to reputable and/or certified sources. Add things like Elon Musk’s ‘uncensored’ Grok turning around and calling his conservative Twitter fans racist and small-minded morons while pointing out why they are wrong, or Gab’s literal Adolf Hitler AI telling a user they were disgusting for asking if Jews were vermin, and we may just end up on a narrow path out of the mess we found ourselves in well before AI was suddenly a thing.

      I had been really worried about the AI misinformation angle, but given some recent developments in the past few months I’m actually hopeful about the future of a better informed public for the first time in years.

      • Drewelite@lemmynsfw.com · +2 · 9 months ago (edited)

        Agreed. People are up in arms that misinformation will become easier to produce, but I think the naive idea that the internet is inherently a reliable source of truth, when it is already laced with subtler forms of misinformation, is much more insidious. Journalism used to be a highly respected field before we all forgot why it was so important.

      • le_saucisson_masquay@sh.itjust.works · +1 · 9 months ago

        People just want confirmation of what they already believe. With the amount of fake news out there, people are already getting dumber because their beliefs never face criticism.

        If I believed the moon landing was fake, I would previously have had hundreds of sources telling me I’m wrong and only a few scammy documentaries agreeing with my belief. But now there are fakes to confirm any belief I have. Aliens are real? Check this video proving it. Zuckerberg is a lizard? There are dozens of photos and videos on Twitter. And so on.

        I’m really not optimistic about that at all.

    • treadful@lemmy.zip · +2 · 9 months ago

      Now what is happening? The internet is being polluted with garbage and lies. It hasn’t been good for some time now.

      Social media as content aggregation is generally garbage, but it’s a far stretch to apply that to the Internet or even the Web as a whole. Don’t forget Wikipedia is still a thing and almost every creator of primary source data publishes online.

      But if anything can be fake now and difficult to identify as fake, then how can you fact check anything? Only those with the means will be able to produce undeniably real news with great difficulty, which I think will return power to major news companies or something equivalent.

      That’s kind of always been true. And I agree, we need to find a way to maintain information-sourcing organizations (e.g. news outlets) that we can trust as the arbiters of this information. If the Washington Post can actually put credible reporters on the ground to confirm something, and I know I can trust WaPo, I can say with some confidence that it’s good information.

      I think we all (or some of us at least) just need to be willing to pay for this service.

    • erwan@lemmy.ml · +1/-1 · 9 months ago

      Fake photos existed before Photoshop, made with scissors and glue.

    • captsneeze@lemmy.one · +5 · 9 months ago

      FANTASTIC reference! This movie is so funny and awesome, and it seems to have completely disappeared from pop culture. I never understood why Conan looms so large in our collective memory, but this movie totally vanished.

  • OpenHammer6677@lemmy.world · +23/-5 · 9 months ago (edited)

    Genuine question: why do we need this type of thing?

    Especially in view of the harm it can cause, what’s the point of creating this aside from generating shareholder value?

    Sure, creating a video out of text quickly is cool, but is there an actual need for this?

    • YungOnions@sh.itjust.works · +34/-2 · 9 months ago

      I mean, the ability to generate whatever video you want without having to pay the costs normally associated with filming, location, actors etc. is going to be very appealing to people like advertisers. This way you can have a few seconds of a beach for your travel company advert, for example, without having to pay for the stock footage or film it yourself. In fact I can see this transforming stock footage in general. Why bother to pay someone to make a generic video of ‘people having a meeting’ when an AI can do it for free in half the time? It doesn’t even need to be that good if you’re only using it briefly in a presentation. Not saying any of this is a good thing, but here we are…

        • YungOnions@sh.itjust.works · +8 · 9 months ago (edited)

          I mean, yes, insofar as it opens those options up to people who may not have been able to afford them before. Whether that’s a ‘need’ or not depends on your opinion of the company, I guess.

          Also there are other applications beyond this, of course. Easily made videos could help reduce the costs associated with treating some mental health issues for example.

          Then you have the ability for novice filmmakers to make content, a bit like how engines like Unity have made it easier for people to make games. Sure, there’ll be shit-tier tat, but there’ll also be content made by creators who may never have had the chance otherwise.

          • OpenHammer6677@lemmy.world · +4/-2 · 9 months ago

            Which then just feeds the system. But as this is a globally impactful thing, is there any real-world need that outweighs the harm?

            • Blue_Morpho@lemmy.world · +2 · 9 months ago

              Cost savings are a need. It frees resources for everyone. Sure the vast majority of the profit goes to the shareholders but that’s true of every labor saving device.

              Do we NEED computers? You can hire people to do calculations by hand. The word Computer used to mean a job title, not a device.

            • abhibeckert@lemmy.world · +5/-8 · 9 months ago (edited)

              OpenAI’s take is that someone will create this technology anyway - it might as well be them, since their motivation is relatively pure. OpenAI is a non-profit and they do work hard to minimise the damage their tech can cause, which is why this video generation feature has not been launched yet.

              • 31337@sh.itjust.works · +7 · 9 months ago

                OpenAI is no longer “pure.” They are not open. They do not publish the details of any of the discoveries they’ve made (which used to be standard practice, even in the private sector). Their leadership is now in the “effective accelerationism” camp that worships capitalism, and sees developing AGI as their moral obligation, regardless of what harm it may cause to society. (They are also delusional, because it’s very unlikely AGI will be developed anytime soon).

              • bassomitron@lemmy.world · +5 · 9 months ago (edited)

                OpenAI is only technically a non-profit. They’re a proxy for Microsoft in all but name. They started out mostly pure, but their dickhead CEO has worked hard to undo all of that and has created parallel companies for OpenAI that can absolutely make a profit while the main company gets to keep its non-profit status. That was literally the entire basis for the board firing him (the CEO) a few months back.

      • CosmoNova@lemmy.world · +11/-3 · 9 months ago

        This way you can have a few seconds of a beach for your travel company advert, for example, without having to pay for the stock footage or film it yourself.

        Advertising holidays at places that do not exist! Exactly what we needed!

        • I_Has_A_Hat@lemmy.world · +8/-1 · 9 months ago

          So no different from now? Pictures and videos in ads already get heavy editing and post-processing to make them look better.

          • CosmoNova@lemmy.world · +1 · 9 months ago

            It’s highly debatable how much editing is okay and when it gets deceptive. Video can only show you so much of an actual experience; senses like smell, warmth, and the passage of time need some editing to be communicated at all. That’s just filmmaking.

            For an AI-generated clip it’s not even a debate: the place doesn’t exist, and the clip isn’t advertising anything real. And since it’s so much easier and cheaper than doing all of the above… well, let’s just say the incentive to deceive was just pushed up to 11.

          • CosmoNova@lemmy.world · +1 · 9 months ago

            Usually their computer animations are a pretty obvious style choice. This is completely different from that, and even more different from photos.

      • ours@lemmy.world · +6 · 9 months ago

        Weird music videos. Just this week YouTube pushed music videos at me, all done in that weird, warpy AI style.

        It’s kind of cool and, simultaneously, already feels like a tired fad.

    • BearOfaTime@lemm.ee · +8/-2 · 9 months ago

      Who said anything about need?

      It was created because someone thought of it. How it’s used is a measure of the person using it.

      People will find ways to utilize whatever someone creates. And usually in ways the creator never envisioned.

      “Needing” something comes after a tool becomes ubiquitous. Imagine trying to drive a Phillips screw with a slotted screwdriver - you’d need a Phillips driver because those screws are now ubiquitous (and I can’t wait for them to go away. All hail our stripped-screw saviour, Torx!)

    • mods_are_assholes@lemmy.world · +1 · 9 months ago

      Genuine question: When have we ever needed the things corporations push as disruptive technologies?

      This isn’t being done because we asked for it, it’s being done by the owner class as a method of control and cover.

      It isn’t ‘we’, and never has been.

  • Mac@mander.xyz · +16/-1 · 9 months ago

    OpenAI singlehandedly breaking the internet. Props, tbh.

    • Thorny_Insight@lemm.ee · +12 · 9 months ago

      They really are. I don’t know about Google, but with DDG, when I search for information, I feel like most of the top results are articles written by AI. Luckily it’s still somewhat easy to recognize, but that’s not going to be the case for long. It’s inevitable though, so I don’t really blame them. If not OpenAI, it would have just been someone else. I’m just worried about where this is going. I can think of more ways this could go wrong than right.

  • RBG@discuss.tchncs.de · +16/-9 · 9 months ago

    Isn’t this a bit overdramatic, seeing as we’ve had deepfake tech for a while now?

    • M500@lemmy.ml · +30/-1 · 9 months ago

      I don’t think so, as deepfake stuff was about switching faces and voices. You needed actual footage to train it on.

      So if you wanted to stage something, it would take considerable effort, money, time, and manpower.

      Now anyone will be able to just type in a prompt and have a video generated.

      We already saw the deepfake Joe Biden calls telling people not to vote for him. Now just wait until videos of him saying it are sent out en masse.

    • ch00f@lemmy.world · +18/-2 · 9 months ago

      Imagine generating 5,000 videos of different people (likenesses pulled from Facebook) reacting to a fake calamity staged in a certain city.

      • CosmoNova@lemmy.world · +9/-1 · 9 months ago

        Imagine seeing it every day, to the point that you can’t see a real calamity coming because you’ve stopped believing in them entirely.

    • jantin@lemmy.world · +18 · 9 months ago

      The link does not say in what way people were not supposed to share.

      The link is the same kind of self-delusion people show around all of these generative tools: “look, the faces are weird, the bird has the wrong feathers, the cat has only 2 legs, nothing to worry about”, while forgetting that almost everything else in a clip works well, and that these are the first-of-the-first releases, which will get gradually better.

      • Rekonok@sh.itjust.works · +1 · 9 months ago

        This shit is sold as the new Pixar.

        Yeah, in a few months it will be a tool to help animators with shadows and lighting. In a few years it could produce a decent GIF of Pepe having sex with Sonic.

        At least it is less of a scam than an NFT. Still a scam, though.

        Still a waste of money and energy resources.

        Invest in it if you can; sure, you can get a nice tax refund in 3-5 years.

    • CosmoNova@lemmy.world · +5 · 9 months ago

      Well, I get that AI companies over-promise and stuff, but that opinion piece really just confirms what we’re already able to see in said clips. Sure, many animals look eerie as hell and that monobloc excavation video is one hell of an acid trip, but there’s already a lot there. More than I’m comfortable with.

  • Alpha71@lemmy.world · +3/-15 · 9 months ago

    I don’t really see the big problem yet. There’s still a hint of uncanny valley in that video.

    • Pirky@lemmy.world · +10 · 9 months ago

      This is only the beginning. It’s only going to get harder and harder to know what is and isn’t real online.
      Sure, you and I are aware of this and have an idea of what to look out for. But do my older parents or grandparents know about this stuff and what to look for? I seriously doubt it.

    • chuckleslord@lemmy.world · +6 · 9 months ago

      With the number of people who are either lying or genuinely can’t tell when images are made by AI… I’m scared.

    • Stanwich@lemmy.world · +6/-1 · 9 months ago

      Show it to your parents and ask what they think. Guaranteed they can’t tell it’s fake.

      • BearOfaTime@lemm.ee · +1/-1 · 9 months ago

        My mother is about the only person of that vintage who would probably say “this doesn’t look right”.

        Even my older siblings (boomers) would probably fall for it.