LLMs are a rather fascinating subject, with a lot of high-level stuff that can be very daunting to understand. My layman's understanding is that generative systems such as ChatGPT are almost entirely probability based.
Say I gave an LLM the following fill-in-the-blank prompt: "In 2023 the current president of the United States is _____." It would compare the prompt to its database of text and return Joe Biden with, say, 95% probability, Donald Trump with 4%, and some other random stuff for the remaining 1%. It doesn't actually KNOW what the prompt is, and it doesn't reason out the answer. It's only comparing what's in its database to what's been given to it.
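You can actually watch this happen with a small open model. Here's a minimal sketch using the Hugging Face transformers library and GPT-2 (my choice purely for illustration; the comment above doesn't name a model, and the exact words and percentages you'll see depend entirely on the model used):

```python
# Minimal sketch: inspect a language model's next-token probabilities.
# GPT-2 is tiny compared to a modern chatbot, but the mechanism is the same:
# every token in the vocabulary gets a probability of coming next.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "In 2023 the current president of the United States is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Turn the scores for the next position into a probability distribution.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# Show the five most likely continuations.
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}: {prob.item():.1%}")
```

Generation is just this step repeated: pick a token from a distribution like that one, append it to the prompt, and score the vocabulary again.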
This is why the larger the database it has, the better the answers it can return. True AI would be able to give you responses to anything having only been fed the contents of the dictionary; it could then reason out what each word means and creatively form words into legible responses.
If you are interested in a very cool semi-interactive explanation of how generative systems work, check out this website. I found it very fascinating.
https://ig.ft.com/generative-ai/
> It doesn't actually KNOW what the prompt is and it doesn't reason the answer.
Right, no AI "knows" anything. To know something implies understanding, and, like we said earlier, AIs are just computer programs; they don't know anything, they just produce an output based on an input.
Two areas where it's fun to play with LLMs are math and cooking. Math has crisp rules, and answers can be shown to be right or wrong. LLMs get a lot of complex math problems wrong. They will give you something that looks right, because their model includes what an answer should look like, but all they're doing is producing an answer that, according to their model, looks like it satisfies the prompt.
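This one is also easy to try yourself. A minimal sketch, again using GPT-2 via transformers as a stand-in (the comment doesn't name a model, and the specific numbers below are just arbitrary operands I picked), that compares a model's completion of an arithmetic prompt against the real answer:

```python
# Minimal sketch: ask a small language model to complete an arithmetic
# prompt, then check it against the actual product. Small models like
# GPT-2 typically produce a plausible-looking but wrong number here.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

a, b = 3517, 8923
prompt = f"{a} * {b} = "
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: always take the single most likely next token.
output_ids = model.generate(
    **inputs, max_new_tokens=10, do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
completion = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:])

print("Model says:", completion.strip())
print("Actual answer:", a * b)
```

The completion will usually look like the right kind of thing, a string of digits, because the model has seen countless "x * y = z" patterns, but nothing in the decoding step actually multiplies anything.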
Cooking is similar. There's no crisp right or wrong, but the same process is at play. If you ask for a recipe for something uncommon, it's going to give you one, and it will likely have the right kind of ingredients, but if you make it there's a decent chance it will taste terrible.