Can someone explain why AI on your phone is so revolutionary and desirable?
Access. If it can truly run fully locally, then companies can have their LLM model in everyone’s pockets, basically.
As far as desirability goes, it depends on the person. And it’s more about what people don’t want than what they want. Google has been promising various AI features that people will be able to run on the device, but so far most of them send the data to Google’s servers and then return the result.
Less compute overhead on those giant and expensive datacentres, as well as the ability to work offline.
That explains why Google wants it, but what do phone owners get? What of value is being offered? I keep hearing all this talk about this wave of AI on phones, but I don’t see what it’s providing.
Yes, same here. More compute on my phone to me means one thing: battery drain.
Also, with companies getting more and more comfortable with looking at your searches for their “advertising,” what else can they invent with an onboard brain? Nothing we would like, I’m afraid. There are no protections for something like that, and you know what companies do when there are no restrictions…
Yeah, at present it’s mostly a gimmick feature. The next step will be for it to be integrated into the OS, where it can understand what’s going on with your phone and you can ask it, for example, “make my phone secure” and it’ll execute a bunch of steps to accomplish this, or “tell me why this app just crashed” and it’ll review the logs and tell you (or the app developer) what happened. The ultimate goal is to sell you a truly personal AI assistant. The first company that can truly achieve this will mint gold.
Doesn’t that require all of those things to be developed manually, though? It’s not like the LLM can just access your logs. I guess the thought is that the LLM can better generalize commands, but it seems like a lot of the AI there would actually be hand-developed.
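To make that point concrete, here’s a minimal sketch (every name here is hypothetical, not any real phone API) of how such an assistant would plausibly be wired: the on-device LLM only classifies the user’s free-form request into an intent name, and every actual capability — reading logs, changing settings — is a hand-written handler behind that name:

```python
# Hypothetical sketch: the LLM only picks an intent; each capability
# below is a hand-developed handler, exactly as the comment above argues.

def diagnose_crash():
    # A real handler would need OS-level permission to read app logs.
    return "Last crash: OutOfMemoryError in com.example.app"

def secure_phone():
    # A real handler would walk through the system settings APIs.
    return ["enable screen lock", "disable unknown sources", "update apps"]

# Only intents someone bothered to implement exist at all.
HANDLERS = {
    "diagnose_crash": diagnose_crash,
    "secure_phone": secure_phone,
}

def handle(intent_name):
    """Dispatch an intent name (as classified by a stubbed-out LLM)."""
    handler = HANDLERS.get(intent_name)
    if handler is None:
        return "Sorry, I can't do that yet."  # nobody hand-built this one
    return handler()

print(handle("secure_phone"))
```

The dictionary is the whole point: the LLM generalizes phrasing (“lock my phone down” vs. “make my phone secure”) into a key, but each key only works because a developer manually wrote its handler.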
GPT4All already lets you do this. You need at least 8 GB of unused memory to run it, and it can be a bit slow, but it works well for the most part.
This way you can use a GPT-style model without sharing your data online, and it will work without internet access as well.
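The 8 GB figure lines up with a back-of-the-envelope estimate for the model weights alone (assuming a 7B-parameter model, typical of the GPT4All-class models — the exact size varies by model):

```python
# Rough memory estimate for a locally run 7-billion-parameter model.
params = 7_000_000_000

# 4-bit quantization: half a byte per weight.
weights_gb_q4 = params * 0.5 / 1e9   # ~3.5 GB

# Unquantized 16-bit weights, for comparison: 2 bytes per weight.
weights_gb_f16 = params * 2 / 1e9    # ~14 GB

print(weights_gb_q4, weights_gb_f16)
```

On top of the quantized weights you still need room for the inference runtime, the context cache, and everything else the OS is running, which is why roughly 8 GB of free memory is a comfortable floor even for a ~3.5 GB model file.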
The ones you run locally can be uncensored, meaning they will not avoid certain subjects that Google and OpenAI deem not good to talk about.
Interesting. I’m just paranoid when the super-brain has been programmed by a corporation “to help you.” Corporations are self-serving; that’s just business.
I get it, but this seems to be an instance where corporate interests and consumer interests align. I’m sure it will still collect a bunch of data from you, but that would happen anyway.
One thing that is important to me is the on-device subtitles. Pixel phones can caption any audio played on the device. That includes music, movies, random internet videos, phone calls, video calls, anything. For someone with hearing issues, this is a huge benefit. I’m not aware of any other phone that can do this.