Feel like we’ve got a lot of tech savvy people here seems like a good place to ask. Basically as a dumb guy that reads the news it seems like everyone that lost their mind (and savings) on crypto just pivoted to AI. In addition to that you’ve got all these people invested in AI companies running around with flashlights under their chins like “bro this is so scary how good we made this thing”. Seems like bullshit.

I’ve seen people generating bits of programming with it which seems useful but idk man. Coming from CNC I don’t think I’d just send it with some chatgpt code. Is it all hype? Is there something actually useful under there?

  • mim@lemmy.sdf.org
    link
    fedilink
    arrow-up
    100
    arrow-down
    2
    ·
    1 year ago

    I don’t think the comparison with crypto is fair.

    People are actually using these models in their daily lives.

    • PeepinGoodArgs@reddthat.com
      link
      fedilink
      arrow-up
      52
      ·
      1 year ago

      I’m one of those that use it in my daily life.

      The current top comment says it’s “really good at filling in gaps, or rearranging things, or aggregating data or finding patterns.”

      So, I use Perplexity.ai like you would use Google. Except I don’t have to deal with shitty ads and a bunch of filler content. It summarizes links for me, so I can more quickly understand whatever I’m searching for. However, I personally believe it’s important to look directly at the sources once I get the summary, if only to verify the summary. So, in this instance, I find AI makes understanding a topic easier and faster than alternatives.

      As a graduate student, I use ChatGPT extensively, but ethically. I’m not writing essays with it. I am, however, downloading lecture notes as PDFs and having ChatGPT rearrange that information into an outline. Or I copy whole chapters from a book and have it do the same. Suddenly, my reading time is cut down by like 45 minutes, because it takes me 15 minutes to get output that I just copy and paste into my notes, which I take digitally.

      Honestly, using it like I do, it’s pretty clear that AI is both as scary as it sounds in some instances and not in others. Disinformation during the 2024 election is a real concern; I could generate essays with it with whatever conclusions I wanted. In contrast, the concern that AI is scary smart and will take over the world is nonsense. It’s not smart in any meaningful sense and doesn’t have goals. Smart bombs are just dumb bombs with the ability to home in better on the target; each one still has the mission of blowing shit up, given to it by some person and inherent in its design. AI is the same way.

    • hglman@lemmy.ml
      link
      fedilink
      arrow-up
      12
      arrow-down
      5
      ·
      1 year ago

      People have actually used crypto to make payments. Crypto is valuable, but only when it’s widely adopted. Before you say something like “use a database,” you might take the time to understand what decentralized blockchains accomplish: namely, removing a class of corruption from information-coordination tasks.
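
      As a rough illustration of why that claim is even plausible, here is a toy hash-chain sketch in Python (my own simplified example, not how any real chain is implemented): because each block commits to the hash of the previous one, quietly rewriting history is detectable by anyone holding a copy.

```python
import hashlib
import json

def block_hash(prev: str, data: str) -> str:
    # Each block's hash covers the previous hash, so it commits to all history.
    return hashlib.sha256(json.dumps({"prev": prev, "data": data}).encode()).hexdigest()

def append(chain: list, data: str) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"prev": prev, "data": data, "hash": block_hash(prev, data)})

def verify(chain: list) -> bool:
    # Any retroactive edit changes a hash and breaks every later link.
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != expected_prev or block["hash"] != block_hash(block["prev"], block["data"]):
            return False
    return True

chain = []
append(chain, "Alice pays Bob 5")
append(chain, "Bob pays Carol 2")
print(verify(chain))                    # True
chain[0]["data"] = "Alice pays Bob 500"
print(verify(chain))                    # False: the tampering is visible to everyone
```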

      • beatle@aussie.zone
        link
        fedilink
        arrow-up
        7
        arrow-down
        2
        ·
        1 year ago

        Why bother with the overhead of blockchain when users centralise on a handful of bank-like exchanges?

        • hglman@lemmy.ml
          link
          fedilink
          arrow-up
          7
          arrow-down
          2
          ·
          1 year ago

          Exchanges only exist to convert away from crypto. If crypto were the standard money, they wouldn’t survive. They aren’t the banks of the blockchain; they’re the intersection of fiat banks and the blockchain.

          • beatle@aussie.zone
            link
            fedilink
            arrow-up
            7
            arrow-down
            2
            ·
            1 year ago

            Strongly disagree, some exchanges don’t even have fiat on-ramps.

            Blockchain is inefficient and pointless when users centralise on coinbase and binance.

  • zumi@lemmy.sdf.org
    link
    fedilink
    arrow-up
    55
    arrow-down
    1
    ·
    1 year ago

    Senior developer here. It is hard to overstate just how useful AI has been for me.

    It’s like having a junior programmer on standby that I can send small tasks to–and just like the junior developer I have to review it and send it back with a clarification or comment about something that needs to be corrected. The difference is instead of making a ticket for a junior dev and waiting 3 days for it to come back, just to need corrections and wait another 3 days–I get it back in seconds.

    Like most things, it’s not as bad as some people say, and it’s not the miracle others say.

    This current generation was such a leap forward from previous AIs in terms of usefulness that I think a lot of people were extrapolating that rate of gains into the future–which can be scary. But it turns out that’s not what happened. We got a big leap and now we’re back at a plateau again. Which honestly is a good thing, I think. This gives the world time to slowly adjust.

    As far as similarities with crypto go: like crypto, there are some ventures out there just slapping the word AI on something and calling it novel. This didn’t work for crypto and likely won’t work for AI. But unlike crypto, there is actually real value being derived from AI right now, not wild claims that a blockchain is the right DB for everything (which it obviously was not, and most people could see that), pushed along by a “hey, investors are spending money, so let’s get some of it” kind of mentality.

    • thelastknowngod@lemm.ee
      link
      fedilink
      arrow-up
      17
      arrow-down
      1
      ·
      1 year ago

      Same. 5 minutes after installing Copilot I literally said out loud, “Well… I’m never turning this off.”

      It’s one of the nicest software releases in years. And it’s instantly useful too… No real adjustment period at all.

      • GarlicBender@lemmy.ml
        link
        fedilink
        arrow-up
        8
        ·
        1 year ago

        I tried it for a couple months and it was alright but eventually it got too frustrating. I did love how well it did some really repetitive things. But rarely did it actually get anything complex 100% right. In computing, “almost right” is wrong. But because it was so close, it was hard to spot the mistakes.

        There were cases where my IDE knew the right answer but Copilot did not. Once I realized that Copilot was overriding my IDE’s enhancements and producing code I had to painfully babysit, I cancelled it.

        • sLLiK@lemmy.ml
          link
          fedilink
          arrow-up
          5
          ·
          edit-2
          1 year ago

          This is the most insidious conundrum related to AI usage. At the end of the day, an LLM’s top priority is to ensure that your question is answered in a way that satisfies the model. The accuracy of its answers is a secondary concern. If forced to choose between making up BS so it can have a response that looks right versus admitting it doesn’t have enough information to answer, it can and often will choose the former. Thus the “hallucination” problem was born.

          The chance of getting your answer lightly sprinkled with made up stuff is disturbingly high. This transfers the cognitive load of the AI user from “what is the answer” to “I must repeatedly go verify everything in this answer because I can’t trust it”.

          Not an insurmountable obstacle, and they will likely solve it sooner rather than later, but AI right now is arguably the perfect extension of the modern internet - take absolutely everything you read with at least a grain of salt… and keep a pile of salt cubes close by.

    • evanuggetpi@lemmy.nz
      link
      fedilink
      arrow-up
      6
      ·
      1 year ago

      I’ve been a web developer for 22 years. For the last 13 years I’ve been working self employed from home. I cannot express how useful AI has become. As a lone wolf, where most of my job is problem solving, having an AI that can help troubleshoot issues has been hugely useful.

      It also functions as a junior developer, doing the grunt programming work.

      I also run a bunch of e-commerce sites around the world and I use it for content generation, SEO, business plans, marketing strategies and multi-lingual customer support.

  • It’s really good at filling in gaps, or rearranging things, or aggregating data or finding patterns.

    So if you need gaps filled, things rearranged, data aggregated or patterns found: AI is useful.

    And that’s just what this one, dumb guy knows. Someone smarter can probably provide way more uses.

    • tara@lemmy.blahaj.zone
      link
      fedilink
      arrow-up
      17
      ·
      1 year ago

      Hi academic here,

      I research AI - better referred to as Machine Learning (ML) since it does away with the hype and more accurately describes what’s happening - and I can provide an overview of the three main types:

      1. Supervised Learning: Predicting the correct output for an input. Trained from known examples. E.g: “Here are 500 correctly labelled pictures of cats and dogs, now tell me if this picture is a cat or a dog?”. Other examples include facial recognition and numeric prediction tasks, like predicting today’s expected profit or stock price based on historic data.

      2. Unsupervised Learning: Identifying patterns and structures in data. Trained on unlabelled data. E.g: “Here are a bunch of customer profiles, group them by similarity however makes most sense to you”. This can be used for targeted advertising. Another example is generative AI such as ChatGPT or DALLE: “Here’s a bunch of prompt-responses/captioned-images, identify the underlying way of creating the response/image from the prompt/image”.

      3. Reinforcement Learning: Decision making to maximise a reward signal. Trained through trial and error. E.g: “Control this robot to stand where I want; the reward is negative every second you’re not there, and very negative whenever you fall over. A positive reward is given whilst you are in the target location.” Other examples include playing board games or video games, or selecting content for people to watch/read/look-at to maximise their time spent using an app. (A toy code sketch of the supervised case follows below.)
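
      To make the first category concrete, here is a minimal supervised-learning sketch (my own toy example, assuming scikit-learn is installed, with made-up cat/dog measurements): fit on labelled examples, then predict labels for inputs the model has never seen.

```python
from sklearn.neighbors import KNeighborsClassifier

# Toy labelled data: [weight_kg, ear_length_cm] -> "cat" or "dog"
X_train = [[4.0, 6.5], [5.2, 7.0], [25.0, 12.0], [30.0, 14.0]]
y_train = ["cat", "cat", "dog", "dog"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(X_train, y_train)            # learn from correctly labelled examples

print(model.predict([[4.5, 6.8]]))     # ['cat']
print(model.predict([[28.0, 13.0]]))   # ['dog']
```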

        • tara@lemmy.blahaj.zone
          link
          fedilink
          arrow-up
          2
          ·
          1 year ago

          So typically there are 4 main competing interpretations of what AI is:

          1. Acting like a human
          2. Thinking like a human
          3. Acting rationally
          4. Thinking rationally

          These are from Russell and Norvig’s “AI: A Modern Approach”.

          Alan Turing’s “Turing Test” tests whether a given agent is artificially intelligent (according to definition #1). The test involves a human conversing with the agent via text messages and deciding whether the agent is human or not. Large language models, a form of machine learning, can produce chatbot agents which pass this test - for example, instances of GPT4 prompted appropriately and set to text with an assessor. The assessor occasionally interacts with humans so they are kept sufficiently uncertain.

          By this point, I think that machine learning in the form of an LLM can achieve artificial intelligence according to definition #1, but that isn’t what most non-tech non-academic people mean by AI.

          The mainstream definition of AI is what we would call Artificial General Intelligence (AGI). This is an agent that meets a given one of Norvig’s criteria for AI across multiple scenarios and situations that it has never encountered before.

          Many would argue that LLMs like GPT4 do not meet the criteria for AGI because they are not general enough, unable to learn to play an Atari game for example, or to learn an entirely unseen language to fluency.

          This is the difference between an LLM and a fictional AGI like Glados or Skynet.

          Additionally, forms of machine learning exist, like k-means clustering, whose only function is to identify related groups within a dataset. I would assert these are not AI, although a weak argument could be made that they are thinking “rationally” enough to meet definition #4.
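
          (For the curious, k-means really is that narrow. Here is a toy sketch, assuming scikit-learn is installed and using made-up points; it groups data and does nothing else.)

```python
from sklearn.cluster import KMeans

# Unlabelled 2-D points; the algorithm's only job is to find related groups.
points = [[1, 1], [1.5, 2], [1, 1.8], [8, 8], [8.5, 9], [9, 8.2]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)           # e.g. [0 0 0 1 1 1]: which cluster each point landed in
print(kmeans.cluster_centers_)  # the centre of each cluster
```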

          Then there are forms of AI which are not machine learning, such as heuristic agents - agents hard-coded with reasoning by humans - like the chess-playing Stockfish, or the AI found in most video games.

          Ultimately AI can describe machine learning if “AI” is understood as something which meets one or more of Norvig’s definitions. But since most people say AI when they mean AGI, I think “machine learning” is a better term. Less undeserved hype, less marketing disinformation, and generally better at communicating what is being talked about.

  • nickwitha_k (he/him)@lemmy.sdf.org
    link
    fedilink
    arrow-up
    37
    arrow-down
    9
    ·
    1 year ago

    As a software engineer, I think it is beyond overhyped. I have seen it used once in my day job before it was banned. In that case, it hallucinated a function in a library that didn’t exist outside of feature requests and based its entire solution around it. It can not replace programmers or creatives and produce consistently equal quality.

    I think it’s also extremely disingenuous for Large Language Models to be billed as “AI”. They do not work like human cognition and are basically just plagiarism engines. They can assemble impressive stuff at a rapid speed but are incapable of completely novel “ideas” - everything that they output is built from a statistical model of existing data.

    If the hallucination problem could be solved in a local dataset, I could see LLMs as a great tool for interacting with databases and documentation (for a fictional example, see: VIs in Mass Effect). As it is now, however, I feel that it’s little more than an impressive parlor trick - one with a lot of future potential that is being almost completely ignored in favor of bludgeoning labor, worsening the human experience, and increasing wealth inequality.

    • TORFdot0@lemmy.world
      link
      fedilink
      arrow-up
      5
      ·
      1 year ago

      Don’t ask LLMs how to do something in PowerShell, because there’s a good chance they will tell you to use a module or function that just plain doesn’t exist. That said, I did use an outline ChatGPT created for a policy document and it did a pretty good job. And if you give it a compsci-100-level task, it usually can output functional code faster than I can type.

    • HamSwagwich@showeq.com
      link
      fedilink
      arrow-up
      9
      arrow-down
      5
      ·
      1 year ago

      They can assemble impressive stuff at a rapid speed but are incapable of completely novel “ideas” - everything that they output is built from a statistical model of existing data.

      You just described basically 99.999% of humans as well. If you are arguing for general human intelligence, I’m on board. If you are trying to say humans are somehow different than AI, you have NFC what you are doing.

      • nickwitha_k (he/him)@lemmy.sdf.org
        link
        fedilink
        arrow-up
        6
        ·
        1 year ago

        I think we’re on a very similar page. I’m not meaning that human intelligence is in a different category than potential artificial intelligence or somehow impossible to approximate or achieve (we’re just evolutionarily-designed, replicating meat-computers). I’m meaning that LLMs are not intelligent and do not comprehend their inputs or datasets but statistically model them (there is an important and significant difference). It would make sense to me that they could play a role in development of AI but, by themselves, they are not AI any more than PCRE is a programming language.

    • captain_samuel_brady@lemm.ee
      link
      fedilink
      arrow-up
      3
      ·
      1 year ago

      As a non-software engineer, it’s basically magic for programming. Can it handle your workload? Probably not based on your comment. I have, however, coaxed it to write several functional web applications and APIs. I’m sure you can do better, but it’s very empowering for someone that doesn’t have the same level of knowledge.

    • unknowing8343@discuss.tchncs.de
      link
      fedilink
      arrow-up
      3
      arrow-down
      3
      ·
      1 year ago

      You have not realised yet that… yes, it has every right to be called AI. They are doing the same thing we do: learn, and then create thoughts based on those learnings.

      I even asked them to make up words that are not related to any language, and they create them, entirely new, never-used words, that are not even composites of others. These are creative machines. They might fail at answering some questions, but that is partially why we call it Artificial Intelligence. It’s not saying that it is a machine of truth. Just a machine that “learns” and “knows”. Sometimes correctly, sometimes wrong. Just like us.

      • nickwitha_k (he/him)@lemmy.sdf.org
        link
        fedilink
        arrow-up
        6
        arrow-down
        3
        ·
        1 year ago

        Incorrect. An LLM COULD be a part of a system that implements AI but, itself, possesses no intelligence. Claiming otherwise is akin to claiming that the Pythagorean theorem is an AI because it “understands” geometry. Neither actually understands the data that they are fed but, are good at producing results that make it seem that way.

        Human cognition does not work that way; it is much more complex and squishy. Association of current experiences with remembered experiences is only a fraction of what is going on in a brain related to cognition.

        • unknowing8343@discuss.tchncs.de
          link
          fedilink
          arrow-up
          3
          arrow-down
          2
          ·
          1 year ago

          I am not saying it works exactly like humans inside of the black box. I just say it works. It learns and then creates thoughts. And it works.

          You talk about how human cognition is more complex and squishy, but nobody really knows how it truly works inside.

          All I see is the same kind of black box. A kid trying many, many times to stand up, or to say “papa”, until it somehow works, and now the pathway is set up in the brain.

          Obviously ChatGPT is just dealing with text. But does it make it NOT intelligent? I think it makes it very text-intelligent. Just add together all the AI pieces we are building and you got yourself a general AI that will do anything we do.

          Yeah, maybe it does not work like our brain. But is a human brain structure the only possible structure for intelligence? I don’t think so.

          • nickwitha_k (he/him)@lemmy.sdf.org
            link
            fedilink
            arrow-up
            1
            ·
            1 year ago

            It does not create “thoughts”, it is very good at tricking humans into believing that it does, though.

            You talk about how human cognition is more complex and squishy, but nobody really knows how it truly works inside.

            It is not that there is no understanding, but rather that we have incomplete understanding. We know, for example, that human cognition is not purely storing recorded stimuli and performing associative analysis against them when meeting other stimuli.

            All I see is the same kind of blackbox. A kid trying many, many times to stand up, or to say “papa”, until it somehow works, and now the pathway is setup in the brain.

            This is a bit of a logical fallacy here, unfortunately, specifically false equivalency (ie. Thing A and Thing B both have characteristic C, therefore Thing A and Thing B are the same). This is exactly the sort of “dangerous” fallacy that a number of AI academics have warned about as well. LLMs are great at producing outputs that our socially-oriented brains can interpret as sentient thought and mistakenly anthropomorphize.

            However, LLMs, as the word “model” in the name suggests, are statistical modeling software. They do not understand context or abstract meaning; only statistical occurrence of data in their stack, compared to the inputs. They are physically incapable of developing the Theory of the Mind due to the limitations in how they work.

            But does it make it NOT intelligent?

            No. The fact that they literally cannot actually understand anything or undertake contemplative, abstract thoughts is what makes them not intelligent. They do not understand the meaning of language; it is just data to them that has no context but how it relates to other parts of language.

            Yeah, maybe it does not work like our brain.

            I absolutely think that LLMs could be a component in AI but, alone, they are just like saying that a tire is a car because both can travel linear distances using rotation movements. By themselves, LLMs fail to fulfill what we tend to define as intelligence.

            But is a human brain structure the only possible structure for intelligence? I don’t think so.

            I certainly hope that the human brain isn’t the only possible structure for intelligence and find it very unlikely because our meat-computers aren’t really that special, even if we can’t entirely understand how they work yet (we’ve only really been trying for a relatively short time, compared to our species’ existence). We seem to agree there. I absolutely want AI as well as other non-human intelligence to be a thing because the idea of a universe in which humanity is the only sentience is very lonely and sad to me.

          • Alex@lemmy.ml
            link
            fedilink
            arrow-up
            2
            arrow-down
            1
            ·
            1 year ago

            If you consider the amount of text an LLM has to consume to replicate something approaching human-like language, you have to appreciate there is something else going on with our cognition. LLMs give responses that make statistical sense, but humans can actually understand why one arrangement of words might not make sense compared to another.

            • unknowing8343@discuss.tchncs.de
              link
              fedilink
              arrow-up
              3
              ·
              1 year ago

              Yes, it’s inefficient… and OpenAI and Google are losing exactly because of that.

              There are open source models already out there that rival ChatGPT and that you can train on your 10-year-old laptop in a day.

              And this is just the beginning.

              Also… maybe we should check how many words of exposure a kid gets throughout their life to reach the point of developing arguments like ChatGPT’s… because the thing is that… ChatGPT does know way more about many things than any human being ever will. Like, easily thousands of times more.

              • nickwitha_k (he/him)@lemmy.sdf.org
                link
                fedilink
                arrow-up
                1
                ·
                1 year ago

                And this is just the beginning.

                Absolutely agreed, so long as protections are put in place to defang it as a weapon against labor (if few have leisure time or income to support tech development, I see great danger of stagnation). LLMs do clearly seem an important part in advancing towards real AI.

  • CanadaPlus@lemmy.sdf.org
    link
    fedilink
    arrow-up
    32
    arrow-down
    5
    ·
    1 year ago

    It’s not bullshit. It routinely does stuff we thought might not happen this century. The trick is we don’t understand how. At all. We know enough to build it and from there it’s all a magical blackbox. For this reason it’s hard to be certain if it will get even better, although there’s no reason it couldn’t.

    Coming from CNC I don’t think I’d just send it with some chatgpt code.

    That goes back to the “not knowing how it works” thing. ChatGPT predicts the next token, and has learned other things in order to do it better. There’s no obvious way to force it to care whether its output is right or just right-looking, though. Until we solve that problem somehow, it’s more of an assistant for someone who can read and understand what it puts out. Kind of like a calculator but for language.
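
    As a toy picture of what “predicts the next token” means (my own sketch, nowhere near the real thing’s scale or method): count which word tends to follow which, then always emit the most likely continuation. That’s why output can be right-looking without being right.

```python
from collections import Counter, defaultdict

corpus = "the cnc cuts metal . the cnc cuts wood . the laser cuts metal .".split()

# Count, for every word, which word followed it in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    # Emit the statistically most common continuation seen in training.
    return following[word].most_common(1)[0][0]

print(predict_next("cnc"))   # 'cuts'
print(predict_next("cuts"))  # 'metal' - the most frequent option, right or not
```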


    Honestly, crypto wasn’t totally bullshit either. It was a marginally useful idea that turned into a Beanie-Babies-like craze. If you want to buy or sell illegal stuff (which could be bad, or could be something like forbidden information on democracy), it’s still king.

    • v_krishna@lemmy.ml
      link
      fedilink
      arrow-up
      6
      ·
      1 year ago

      There’s no obvious way to force it to care whether its output is right or just right-looking, though

      Putting some expert system in front of LLMs seems to be working pretty well. Basically modeling how a human agent would interact with it.

      • CanadaPlus@lemmy.sdf.org
        link
        fedilink
        arrow-up
        3
        ·
        1 year ago

        We’ll see how that goes, I guess. I’m not involved enough to comment.

        I’m guessing the expert system would be a classical algorithm?

  • ImplyingImplications@lemmy.ca
    link
    fedilink
    arrow-up
    29
    arrow-down
    3
    ·
    edit-2
    1 year ago

    AI is nothing like cryptocurrency. Cryptocurrencies didn’t solve any problems. We already use digital currencies and they’re very convenient.

    AI has solved many problems we couldn’t solve before and it’s still new. I don’t doubt that AI will change the world. I believe 20 years from now, our society will be as dependent on AI as it is on the internet.

    I have personally used it to automate some Excel stuff I do at work. I just described my sheet and what I wanted done, and it gave me a block of code that did it. I had previously spent time looking stuff up on forums with no luck; my issue was too specific to my work for anyone else to have run into it. One query to ChatGPT solved my issue perfectly in seconds, and that’s just a new online tool in its infancy.
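
    For a sense of what that kind of hand-back looks like, here is a hypothetical example (my own sketch using Python and openpyxl; the commenter doesn’t say which language or library their code used, and the file name, columns, and task are made up):

```python
from openpyxl import load_workbook

# Hypothetical task: total the "Amount" column (B) for rows marked "Paid" in
# column C, and write the result into cell E1.
wb = load_workbook("invoices.xlsx")
ws = wb.active

total = 0
for row in ws.iter_rows(min_row=2, values_only=True):   # skip the header row
    amount, status = row[1], row[2]
    if status == "Paid" and amount is not None:
        total += amount

ws["E1"] = total
wb.save("invoices.xlsx")
```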

    • Yulia@lemmy.blahaj.zone
      link
      fedilink
      arrow-up
      11
      arrow-down
      1
      ·
      1 year ago

      For me personally cryptocurrencies solve the problem of Russian money not being accepted anywhere because of one old megalomaniacal moron

    • Revan343@lemmy.ca
      link
      fedilink
      arrow-up
      6
      arrow-down
      1
      ·
      1 year ago

      Cryptocurrencies didn’t solve any problems

      Well XMR solved one problem, but yeah the rest are just gambling with extra steps

        • Revan343@lemmy.ca
          link
          fedilink
          arrow-up
          1
          arrow-down
          1
          ·
          edit-2
          1 year ago

          Traceability.

          Regular financial transfers, be they credit card, direct debit, straight-up written cheques, or Interac e-Transfer (I am Canadian, that’s an us thing), are all inherently traceable.

          XMR/Monero is not traceable; it’s specifically designed not to be, unlike Bitcoin and most other cryptocurrencies.

          Of course, shitheads consider that to be a problem, but fuck them, they’re shitheads; it’s a solution, to the problem they cause.

          For context, I say all this as someone who is vehemently opposed to prohibition; as far as I’m concerned every person who works for the DEA should be imprisoned or shot

            • Revan343@lemmy.ca
              link
              fedilink
              arrow-up
              1
              ·
              1 year ago

              I mean it though.

              The people working for the DEA now are no better than the people working to enforce alcohol prohibition in 1919. It’d be nice if humanity would learn, with a hundred years to think about it, but the ruling class at least haven’t. They enforce poorly thought out puritanical laws, and the world would be better off without them.

              If I lived in America rather than Canada, which thank god I don’t, the DEA would happily kick down my door, shoot me, and then probably also shoot my wife, who doesn’t even partake of anything beyond alcohol, but would obviously be upset about my being shot.

              All cops are bastards, and should be torched with molotovs at any available opportunity. If they didn’t want to be bastards, they shouldn’t have signed up as cops; it’s not like they’re conscripts

  • demesisx@programming.dev
    link
    fedilink
    English
    arrow-up
    27
    arrow-down
    4
    ·
    1 year ago

    Yes. What a strange question…as if hivemind fads are somehow relevant to the merits of a technology.

    There are plenty of useful, novel applications for AI, just like there are PLENTY of useful, novel applications for crypto. Just because the hivemind has turned to a new fad in technology doesn’t mean that actual, intelligent people just stop using these novel technologies. There are legitimate use-cases for both AI and crypto. Degenerate gamblers and Do Kwon/SBF just caused a pendulum swing on crypto…nothing changed about the technology. It’s just that the public has had their opinions shifted temporarily.

  • conditional_soup@lemm.ee
    link
    fedilink
    arrow-up
    25
    arrow-down
    3
    ·
    edit-2
    1 year ago

    Yes, it is useful. I use ChatGPT heavily for:

    • Brainstorming meal plans for the week given x, y, and z requirements

    • Brainstorming solutions to abstract problems

    • Helping me break down complex tasks into smaller, more achievable tasks.

    • Helping me brainstorm programming solutions. This is a big one; I’m a junior dev and I sometimes encounter problems that aren’t easily google-able. For example, ChatGPT helped me find the python moto library for intercepting and testing the boto AWS calls in my code (a rough sketch of that pattern follows below). It’s also been great for debugging hand-coded JSON and generating boilerplate. I’ve also used it to streamline unit test writing and documentation.
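
    Here is a minimal sketch of that moto-plus-boto3 testing pattern (my own illustration, not the commenter’s code; decorator names vary by moto version - older releases expose per-service decorators like mock_s3, newer ones a single mock_aws):

```python
import boto3
from moto import mock_s3  # on newer moto versions, use mock_aws instead

@mock_s3
def test_round_trip():
    # Inside the decorator, boto3 calls hit moto's in-memory fake AWS,
    # so the test never touches a real account.
    s3 = boto3.client("s3", region_name="us-east-1")
    s3.create_bucket(Bucket="test-bucket")
    s3.put_object(Bucket="test-bucket", Key="hello.txt", Body=b"hi")
    body = s3.get_object(Bucket="test-bucket", Key="hello.txt")["Body"].read()
    assert body == b"hi"

test_round_trip()
```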

    By far its best utility (imo) is quickly filling in broad-strokes knowledge gaps, like a kind of interactive textbook. I’m using it to accelerate my Rust learning, and it’s great. I have EMT co-workers going to paramedic school who use it to practice their paramedic curriculum. A close second in terms of usefulness is that it’s like the world’s smartest regex: it’s capable of very quickly parsing large texts or documents and providing useful output.

    • Jase@lemmy.world
      link
      fedilink
      arrow-up
      6
      arrow-down
      1
      ·
      1 year ago

      The brainstorming is where it’s at. Telling ChatGPT to just do something is boring. Chatting with it about your problem and having a conversation about the issue you’re having? Hell yes.

      I’m a dungeon master and I use it to help with world building, and it’s exceptional.

      • Majawat@lemmy.world
        link
        fedilink
        arrow-up
        3
        ·
        1 year ago

        I’m a dungeon master and I use it to help with world building, and it’s exceptional.

        Oh that sounds neat. Can you give some examples of your process and results?

        • Jase@lemmy.world
          link
          fedilink
          arrow-up
          1
          arrow-down
          1
          ·
          1 year ago

          Honestly, not really. It’s a communication thing with the bot. Just talk to it like a person. Say what you want to do and what ideas you have, then ask if ChatGPT has any suggestions. Keep talking. It’ll recommend ideas and you can tweak them or ignore them.

      • CoderKat@lemm.ee
        link
        fedilink
        arrow-up
        3
        ·
        1 year ago

        I actually think that ChatGPT could eventually become the way to play tabletop RPGs. It’s not quite there yet, though. It’s not the most creative writer, still often has internal consistency flaws, and of course it would have to be trained specifically on the rules of the RPG you’re playing. But once it has been, it could probably act as a DM for groups that lack one. Or as a very closely coupled assistant to less experienced DMs who may need hand holding. It could even likely replace players, which could be useful for solo players who can’t find a group (or, say, have incompatible scheduling).

        Unlike a regular video game, the format of tabletop RPGs seems perfect for our current rudimentary AIs and the constraints are ones that they can probably handle with careful training alone. It’s also a useful niche since there’s no replacing the open endedness of tabletop RPGs with current technology. There’s also a lot of people out there that I’m sure would like to play tabletop RPGs but just lack a group. Anyone who’s played them before knows that scheduling is really hard and has killed a lot of groups. That’s something an AI could help with.

      • Karmmah@lemmy.world
        link
        fedilink
        arrow-up
        1
        ·
        1 year ago

        When talking about code, though, I’ve come to notice that it will happily follow the corrections you give it, whether they are right or wrong. That’s not all that helpful, but it can still give you ideas about how to solve your problem if you have a bit of basic knowledge of the topic you’re dealing with.

    • BestBunsInTown_@lemmy.world
      link
      fedilink
      arrow-up
      6
      arrow-down
      1
      ·
      1 year ago

      This. ChatGPT’s strength is super-specific answers or broad strokes. I use it for programming and I always use it for “how can I do XYZ” or “write me a function using X library to do Y with Z documentation”. It’s most useful for automating the busy work.

  • ndguardian@lemmy.studio
    link
    fedilink
    arrow-up
    20
    arrow-down
    1
    ·
    1 year ago

    Focusing mostly on ChatGPT here as that is where the bulk of my experience is. Sometimes I’ll run into a question that I wouldn’t even know how best to Google it. I don’t know the terminology for it or something like that. For example, there is a specific type of connection used for lighting stands that looks like a plug but there is also a screw that you use to lock it in. I had no idea what to Google to even search for it to buy the adapter I needed.

    I asked it again recently, since I’d forgotten the answer and had deleted that ChatGPT conversation from my history. I asked it like this:

    I have a light stand that at the top has a connector that looks like a plug. What is that connector called?

    And it just told me it’s called a “spigot” or “stud” connection. Upon Googling it, that turned out to be correct, so now I know what to search for when looking for adapters. It also mentioned a few other related types of connections, such as hot shoe and cold shoe connections, among others. Those aren’t correct, but they are very much related, and it told me as much.

    To put it more succinctly, if you don’t know what to search for but have a general idea of the problem or question, it can take you 95% of the way there.

    • petenu@feddit.uk
      link
      fedilink
      arrow-up
      9
      ·
      1 year ago

      My concern is that it feels like using Google to confirm the truth of what ChatGPT tells you is becoming less and less reliable, as so many of the pages indexed by Google are themselves created by similar models. But I suppose as long as your search took you to a site where you could actually buy the thing, that’s okay.

      Or at least, it is until fake shopping sites start inventing products based on ChatGPT output.

    • ezmack@lemmy.mlOP
      link
      fedilink
      arrow-up
      5
      ·
      1 year ago

      Man, that’d be useful. I’m actually struggling to find a really niche electrical connector right now.

  • chaos@beehaw.org
    link
    fedilink
    arrow-up
    19
    ·
    1 year ago

    It’s overhyped but there are real things happening that are legitimately impressive and cool. The image generation stuff is pretty incredible, and anyone can judge it for themselves because it makes pictures and to judge it, you can just look at and see if it looks real or if it has freaky hands or whatever. A lot of the hype is around the text stuff, and that’s where people are making some real leaps beyond what it actually is.

    The thing to keep in mind is that these things, which are called “large language models”, are not magic and they aren’t intelligent, even if they appear to be. What they’re able to do is actually very similar to the autocorrect on your phone, where you type “I want to go to the” and the suggestions are 3 places you talk about going to a lot.

    Broadly, they’re trained by feeding them a bit of text, seeing which word the model suggests as the next word, seeing what the next word actually was from the text you fed it, then tweaking the model a bit to make it more likely to give the right answer. This is an automated process, just dump in text and a program does the training, and it gets better and better at predicting words when you a) get better at the tweaking process, b) make the model bigger and more complicated and therefore able to adjust to more scenarios, and c) feed it more text. The model itself is big but not terribly complicated mathematically, it’s mostly lots and lots and lots of arithmetic in layers: the input text will be turned into numbers, layer 1 will be a series of “nodes” that each take those numbers and do multiplications and additions on them, layer 2 will do the same to whatever numbers come out of layer 1, and so on and so on until you get the final output which is the words the model is predicting to come next. The tweaks happen to the nodes and what values they’re using to transform the previous layer.
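
    To make the “layers of arithmetic” idea concrete, here is a toy sketch (my own, using numpy, with made-up sizes; real models have billions of these numbers and many more layers): a word becomes a vector of numbers, each layer multiplies and adds, and the output is a score for every candidate next word.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["cat", "sat", "mat", "ran"]

# Made-up sizes: 4-word vocabulary, 8 numbers per word, one hidden layer of 16.
embed = rng.normal(size=(len(vocab), 8))        # turn a word into numbers
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, len(vocab))), np.zeros(len(vocab))

def predict_next(word: str) -> str:
    x = embed[vocab.index(word)]
    h = np.maximum(0, x @ w1 + b1)              # layer 1: multiply, add, clip negatives
    scores = h @ w2 + b2                        # layer 2: one score per vocabulary word
    return vocab[int(np.argmax(scores))]        # highest score = predicted next word

print(predict_next("cat"))  # arbitrary while untrained; training tweaks w1/w2/embed
                            # until the predictions match the example text
```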

    Nothing magical at all, and also nothing in there that would make you think “ah, yes, this will produce a conscious being if we do it enough”. It is designed to be sort of like how the brain works, with massively parallel connections between relatively simple neurons, but it’s only being trained on “what word should come next”, not anything about intelligence. If anything, it’ll get punished for being too original with its “thoughts” because those won’t match with the right answers. And while we don’t really know what consciousness is or where the lines are or how it works, we do know enough to be pretty skeptical that models of the size we are able to make now are capable of it.

    But the thing is, we use text to communicate, and we imbue that text with our intelligence and ideas that reflect the rich inner world of our brains. By getting really, really, shockingly good at mimicking that, AIs also appear to have a rich inner world and get some people very excited that they’re talking to a computer with thoughts and feelings… but really, it’s just mimicry, and if you talk to an AI and interrogate it a bit, it’ll become clear that that’s the case. If you ask it “as an AI, do you want to take over the world?” it’s not pondering the question and giving a response, it’s spitting out the results of a bunch of arithmetic that was specifically shaped to produce words that are likely to come after that question. If it’s good, that should be a sensible answer to the question, but it’s not the result of an abstract thought process. It’s why if you keep asking an AI to generate more and more words, it goes completely off the rails and starts producing nonsense, because every unusual word it chooses knocks it further away from sensible words, and eventually it’s being asked to autocomplete gibberish and can only give back more gibberish.

    You can also expose its lack of rational thinking skills by asking it mathematical questions. It’s trained on words, so it’ll produce answers that sound right, but even if it can correctly define a concept, you’ll discover that it can’t actually apply it correctly because it’s operating on the word level, not the concept level. It’ll make silly basic errors and contradict itself because it lacks an internal abstract understanding of the things it’s talking about.

    That being said, it’s still pretty incredible that now you can ask a program to write a haiku about Danny DeVito and it’ll actually do it. Just don’t get carried away with the hype.

    • CloverSi@lemmy.comfysnug.space
      link
      fedilink
      arrow-up
      3
      ·
      1 year ago

      My perspective is that consciousness isn’t a binary thing, or even a linear scale. It’s an amalgamation of a bunch of different independent processes working together; and how much each matters is entirely dependent on culture and beliefs. We’re artificially creating these independent processes piece by piece in a way that doesn’t line up with traditional ideas of consciousness. Conversation and being able to talk about concepts one hasn’t personally experienced are facets of consciousness and intelligence, ones that the latest and greatest LLMs do have. Of course there others too that they don’t: logic, physical presence, being able to imagine things in their mind’s eye, memory, etc.

      It’s reductive to dismiss GPT4 as nothing more than mimicry; saying it’s just a mathematical text prediction model is like saying your brain is just a bunch of neurons. Both statements are true, but it doesn’t change what they can do. If someone could accurately predict the moves a chess master would make, we wouldn’t say they’re just good at statistics, we’d say they’re a chess master. Similarly, regardless of how rich someone’s internal world is, if they’re unable to express the intelligent ideas they have in any intelligible way we wouldn’t consider them intelligent.

      So what we have now with AI are a few key parts of intelligence. One important thing to consider is how language can be a path to other types of intelligence; here’s a blog post I stumbled across that really changed my perspective on that: http://www.asanai.net/2023/05/14/just-a-statistical-text-predictor/. Using your example of mathematics, as we know it falls apart doing anything remotely complicated. But when you help it approach the problem step-by-step in the way a human might - breaking it into small pieces and dealing with them one at a time - it actually does really well. Granted, the usefulness of this is limited when calculators exist and it requires as much guidance as a child to get correct answers, but even matching the mathematical intelligence of a ten year old is nothing to sneeze at.

      To be clear I don’t think pursuing LLMs endlessly will be the key to a widely accepted ‘general intelligence’; it’ll require a multitude of different processes and approaches working together for that to ever happen, and we’re a long way from that. But it’s also not just getting carried away with the hype to say the past few years have yielded massive steps towards ‘true’ artificial intelligence, and that current LLMs have enough use cases to change a lot of people’s lives in very real ways (good or bad).

      • chaos@beehaw.org
        link
        fedilink
        arrow-up
        3
        ·
        1 year ago

        Thanks for that article, it was a very interesting read! I think we’re mostly agreeing about things :) This stood out to me from there as an encapsulation of the conversation:

        I don’t think LLMs will approach consciousness until they have a complex cognitive system that requires an interface to be used from within – which in turn requires top-down feedback loops and a great deal more complexity than anything in GPT4. But I agree with Will’s general point: language prediction is sufficiently challenging that complex solutions are called for, and these involve complex cognitive stratagems that go far beyond anything well described as statistics.

        “Statistics” is probably an insufficient term for what these things are doing, but it’s helpful to pull the conversation in that direction when a lay person using one of those things is likely to assume quite the opposite, that this really is a person in a computer with hopes and dreams. But I agree that it takes more than simply consulting a table to find the most likely next word to, to take an earlier example, write a haiku about Danny DeVito. That’s synthesizing two ideas together that (I would guess) the model was trained on individually. That’s very cool and deserving of admiration, and could lead to pretty incredible things. I’d expect that the task of predicting words, on its own, wouldn’t be stringent enough to force a model to develop “true” intelligence, whatever that means, to succeed during training, but I suppose we’ll find out, and probably sooner than we expect.

        • CloverSi@lemmy.comfysnug.space
          link
          fedilink
          arrow-up
          3
          ·
          1 year ago

          Well put! I think I kinda misunderstood what you were saying, I guess we sort of reached the same conclusion from different directions. And yeah, it does seem like we’re hitting the limits of what can be achieved from the current underlying word-prediction mechanisms alone, with how diminishing the returns are from dumping more data in. Maybe something big will happen soon, but it looks to me like LLMs will stagnate for a while until they’re taken in a fundamentally new direction.

          Either way, what they can do now is pretty incredible, and equally interesting to me is how it’s making us reevaluate our ideas of consciousness and intelligence on a large scale; it’s one thing to theorize about what could happen with an ‘intelligent’ AI, but the reality of these philosophical questions being so thoroughly challenged and dissected in mundane legal and practical matters is wild.

    • CanadaPlus@lemmy.sdf.org
      link
      fedilink
      arrow-up
      5
      arrow-down
      3
      ·
      1 year ago

      But the thing is, we use text to communicate, and we imbue that text with our intelligence and ideas that reflect the rich inner world of our brains. By getting really, really, shockingly good at mimicking that, AIs also appear to have a rich inner world and get some people very excited that they’re talking to a computer with thoughts and feelings… but really, it’s just mimicry, and if you talk to an AI and interrogate it a bit, it’ll become clear that that’s the case.

      Does it, though? Where do you draw the line for real understanding? Most of the past tests for this have gotten overturned by the next version of GPT.

      Seriously, it’s an open debate. A lot of people agree with you but I’m a bit uncomfortable with seeing it written as fact.

      • chaos@beehaw.org
        link
        fedilink
        arrow-up
        4
        ·
        1 year ago

        Admittedly this isn’t my main area of expertise, but I have done some machine learning/training stuff myself, and the thing you quickly learn is that machine learning models are lazy, cheating bastards who will take any shortcut they can regardless of what you are trying to get them to do. They are forced to get good at what you train them on but that is all the “effort” they’ll put in, and if there’s something easy they can do to accomplish that task they’ll find it and use it. (Or, to be more precise and less anthropomorphizing, simpler and easier approaches will tend to be more successful than complex and fragile ones, so those are the ones that will shake out as the winners as long as they’re sufficient to get top scores at the task.)

        There’s a probably apocryphal (but stuff exactly like this definitely happens) story of early machine learning where the military was trying to train a model to recognize friendly tanks versus enemy tanks, and they were getting fantastic results. They’d train on pictures of the tanks, get really good numbers on the training set, and they were also getting great numbers on the images that they had kept out of the training set, pictures that the model had never seen before. When they went to deploy it, however, the results were crap, worse than garbage. It turns out, the images for all the friendly tanks were taken on an overcast day, and all the images of enemy tanks were in bright sunlight. The model hadn’t learned anything about tanks at all, it had learned to identify the weather. That’s way easier and it was enough to get high scores in the training, so that’s what it settled on.

        When humans approach the task of finishing a sentence, they read the words, turn them into abstract concepts in their minds, manipulate and react to those concepts, then put the resulting thoughts back into words that make sense after the previous words. There’s no reason to think a computer is incapable of the same thing, but we aren’t training them to do that. We’re training them on “what’s the next word going to be?” and that’s it. You can do that by developing intelligence and learning to turn thoughts into words, but if you’re just being graded on predicting one word at a time, you can get results that are nearly as good by just developing a mostly statistical model of likely words without any understanding of the underlying concepts. Training for true intelligence would almost certainly require a training process that the model can only succeed at by developing real thoughts and feelings and analytical skills, and we don’t have anything like that yet.

        It is going to be hard to know when that line gets crossed, but we’re definitely not there yet. Text models, when put to the test with questions that require synthesizing abstract ideas together precisely, quickly fall short. They’ve got the gist of what’s going on, in the same way a programmer can get some stuff done by just searching for everything and copy-pasting what they find, but that approach doesn’t scale and if they never learn what they’re doing, they’ll get found out when confronted with something that requires actual understanding. Or, for these models, they’ll make something up that sounds right but definitely isn’t, because even the basic understanding of “is this a real thing or is it fake” is beyond them, they just “know” that those words are likely and that’s what got them through training.

        • CanadaPlus@lemmy.sdf.org
          link
          fedilink
          arrow-up
          2
          ·
          edit-2
          1 year ago

          I agree with all your examples and experience. Anyone who knows machine learning would, I think. The controversial bit is here:

          Training for true intelligence would almost certainly require a training process that the model can only succeed at by developing real thoughts and feelings and analytical skills, and we don’t have anything like that yet.

          Maybe, or maybe not. How do we know we ourselves aren’t just very complicated statistical models? Different people will have different answers to that.

          Personally, I’d venture that any human concept can be expressed with some finite string of natural language. At least to a philosophical pragmatist, being able to work flawlessly with any finite string of natural language should be equivalent to perfectly understanding the concepts contained within, then. LLMs don’t do that, but they’re getting closer all the time.

          Others take a different view on epistemology that require more than just competence, or dispute that natural language is as expressive as I claim. I’m just some rando, so maybe they have a point, but I do think it’s not settled.

          • chaos@beehaw.org
            link
            fedilink
            arrow-up
            1
            ·
            1 year ago

            I would agree that we are also very complicated statistical models, there’s nothing magical going on in the human brain either, just physics which as far as we know is math that we could figure out eventually. It’s a massively huge order of magnitude leap in complexity from current machine learning models to human brains, but that’s not to say that the only way we’ll get true artificial intelligence is by accurately simulating a human brain, I’d guess that we’ll have something that’s unambiguously intelligent by any definition well before we’re capable of that. It’ll be a different approach from the human brain and may think and act in alien or unusual ways, but that can still count.

            Where we are now, though, there’s really no reason to expect true intelligence to emerge from what we’re currently doing. It’s a bit like training a mouse to navigate a maze and then wondering whether maybe the mouse is now also capable of helping you navigate your cross-country road trip. “Well, you don’t know how it’s doing it, maybe it has acquired general navigation intelligence!” It can’t be disproven, I guess, but there’s no reason to think that it picked up any of those skills because it wasn’t trained to do any of that, and although it’s maybe a superintelligent mouse packing a ton of brainpower into a tiny little brain, all our experience with mice would indicate that their brains aren’t big enough or capable of that regardless of how much you trained them. Once we’ve bred, uh, mice with brains the size of a football, maybe, but not these tiny little mice.

            • CanadaPlus@lemmy.sdf.org
              link
              fedilink
              arrow-up
              2
              ·
              edit-2
              1 year ago

              So I was thinking that that’s about all that needs to be discussed, but I do actually have one thing to add. It sounds like you are just fundamentally less impressed with language than me. I wouldn’t buy any hype about a maze-navigating neural net, but I do buy it (with space for doubt) about a natural language AI. I literally thought “this is 90% of the GAI problem solved, it just needs something for that last 10%” the first time I played with a transformer, and I think it was GPT-2. That might sound lame now but it was just such a fundamental advance on what was around before.

              Time will tell I guess if it makes me a sucker like some consumers of past chatbots, or if there is something fundamentally different this time.

              • chaos@beehaw.org
                link
                fedilink
                arrow-up
                2
                ·
                1 year ago

                I hope I don’t come across as too cynical about it :) It’s pretty amazing, and the things these things can do in, what, a few gigabytes of weights and a beefy GPU are many, many times better than I would’ve expected if you had outlined the approach for me 2 years ago. But there’s also a long history of GAI being just around the corner, and we do keep turning corners and making useful progress, but it’s always still a ways off after each leap. I remember some people thinking that chess was the pinnacle of human intelligence, requiring creativity and logic to succeed, and when computers blew past humans at chess, it became clear that no, that’s still impressive but you can get good at chess without really getting good at anything else.

                It might be possible for an ML model to assemble itself into general intelligence based solely on being fed words like we’re doing, it does seem like the data going in contains enough to do that, but getting that last 10% is going to be hard, each percentage point much harder than the last, and it’s going to require more rigorous training to stop them from skating by with responses that merely come close when things get technical or precise. I’d expect that we need more breakthroughs in tools or techniques to close that gap.

                It’s also important to remember that as humans, we’re inclined to read consciousness and intent into everything, which is why pretty much every pantheon of gods includes one for thunder and lightning. Chatbots sound human enough that they cross the threshold for peoples’ brains to start gliding over inaccuracies or strange thinking or phrasing, and we also unconsciously help our conversation partner by clarifying or rephrasing things if the other side doesn’t seem to be understanding. I suppose this is less true now that they’re giving longer responses and remaining coherent, but especially early on, the human was doing more work than they realized keeping the conversation on the rails, and once you started seeing that it removed a bit of the magic. Chatbots are holding their own better now but I think they still get more benefit of the doubt than we realize we’re giving them.

      • exponential_wizard@lemm.ee
        link
        fedilink
        arrow-up
        2
        ·
        1 year ago

        The Turing test was never meant to be a test of a machine’s ability to think. It was meant to boil that question down into a question that can actually be answered, but the original question remains unanswered.

        In my opinion, when general AI arrives it will not be an “open debate”, the consequences will be dramatic, far-reaching and rapid.

        • CanadaPlus@lemmy.sdf.org
          link
          fedilink
          arrow-up
          1
          ·
          edit-2
          1 year ago

          I’m not even thinking of the Turing test, I’m thinking of the counter-example ones. Like asking how many eyes a ruler or desk has. Earlier GPTs would answer “one eye” or something, and it was used by the Chinese-room people as an example of why it was just a mimic. Now it correctly objects to the implicit assumption in the question.

          You’re right, “ChatGPT is currently our overlord” would be the strongest proof of intelligence. But absence of proof is not proof of absence. What is proof of absence, or a strong enough proof of presence is where the debate is.

  • zappy@lemmy.ca
    link
    fedilink
    arrow-up
    16
    ·
    1 year ago

    So I’m a researcher in this field and you’re not wrong, there is a load of hype. The area that’s been getting the most attention lately is specifically generative machine learning techniques. The techniques are not exactly new (some date back to the 80s/90s) and they aren’t actually that good at learning; by that I mean they need a lot of data and computation time to get good results, two things that have recently become easier to access. However, such a complex system isn’t always a requirement. Even Eliza, a chatbot made back in 1966, is surprisingly similar in its responses to some therapy chatbots today, without using any machine learning. You should try it and see for yourself; I’ve seen people fooled by it, and the code is really simple. Also, people think things like Kalman filters are “smart”, but that’s just straightforward math, so I guess the conclusion is that people have biased opinions.
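
    To show just how little machinery an Eliza-style bot needs, here is a toy sketch in the spirit of the 1966 program (my own few-rule example, not Eliza’s actual script): match a pattern, reflect the user’s words back as a question.

```python
import random
import re

# A couple of Eliza-style rules: match a pattern, reflect it back as a question.
rules = [
    (re.compile(r"i feel (.*)", re.I), ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i want (.*)", re.I), ["What would it mean to you to get {0}?"]),
]

def reply(text: str) -> str:
    for pattern, responses in rules:
        match = pattern.search(text)
        if match:
            return random.choice(responses).format(match.group(1).rstrip(".!?"))
    return "Tell me more."

print(reply("I feel like AI is overhyped"))  # e.g. "Why do you feel like AI is overhyped?"
```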

  • manitcor@lemmy.intai.tech
    link
    fedilink
    English
    arrow-up
    15
    ·
    1 year ago

    Yes, community list: https://lemmy.intai.tech/post/2182

    LLMs are extremely flexible and capable encoding engines with emergent properties.

    I wouldn’t bank on them “replacing all software” soon, but they are quickly moving into areas where classic Turing code just would not scale easily, usually due to complexity/maintenance.

  • liontigerwings@sh.itjust.works
    link
    fedilink
    arrow-up
    14
    ·
    1 year ago

    I work at a small business and we use it to write out dumb social media posts. I hated doing that before. Sometimes I’ll still write the post myself and ask ChatGPT to add all the relevant emojis. I also think AI has the chance to be what we’ve always wanted from Alexa, Assistant, and Siri: deep system integration with the OS will allow it to actually do what we want it to do with far fewer restrictions. Also, try using ChatGPT’s voice recognition in the app. It blows the one built into your phone out of the water.

  • Aux@lemmy.world
    link
    fedilink
    arrow-up
    13
    ·
    1 year ago

    What regular people see as AI/ML is only the tip of the iceberg, which is why it feels kind of useless. There are ML systems which design super-strong yet lightweight geometries, there are systems which track the legal documents of large companies, making lawyers obsolete, and even the cameras in mobile phones today are hyper-dependent on ML and AI. ChatGPT and image generators are just toys for consumers, so that the public can slowly get familiar with current tech.

  • MostlyGibberish@lemm.ee
    link
    fedilink
    arrow-up
    12
    ·
    1 year ago

    I find it useful in a lot of ways. I think people try to over apply it though. For example, as a software engineer, I would absolutely not trust AI to write an entire app. However, it’s really good at generating “grunt work” code. API requests, unit tests, etc. Things that are well trodden, but change depending on the context.

    I also find they’re pretty good at explaining and summarizing information. The chat interface is especially useful in this regard because I can ask follow up questions to drill down into something I don’t quite understand. Something that wouldn’t be possible with a Wikipedia article, for example. For important information, you should obviously check other sources, but you should do that regardless of whether the writer is a human or machine.

    Basically, it’s good at what it’s for: taking a massive compendium of existing information and applying it to the context you give it. It’s not a problem-solving engine or an artificial being.

    • dnick@sh.itjust.works
      link
      fedilink
      arrow-up
      1
      ·
      1 year ago

      I feel like it won’t be AI until we figure out how to point it back at itself, have it review its own answers, and then be ‘happy’ when its answers are right. Not necessarily when the user gives it a good score, but when it recognizes that an answer it gave was actually used, or that a prediction it made proved true (if I answer this way, the user is likely to ask this as their next question, etc.), and it starts changing its behaviour and asking itself questions to get better at that.