In the last week, we’ve had no fewer than three different pieces asking whether the massive proliferation of data centers is a bubble, and though they, at times, seem to take the default position of AI’s inevitable value, they’ve begun to sour on the idea that…
Oh god, another AI hot take 🙄
Yes, OpenAI and Cursor both are waaaaayyyy overhyped & overvalued.
So were pets.com and yahoo.com back in 1999. But that didn’t stop FAANG from honestly reaching trillion-dollar valuations, because while there was breathless Internet hype, the Internet really was about to completely change the way the world works.
AI today is like the Internet in 1999.
I’ve seen this argument way too often and it is completely pointless. Arguing that this will succeed because something in the past succeeded is exactly the same as arguing it will fail because something in the past failed.
If you want to draw the conclusion that they’re similar enough to use history as a predictor, you’ll have to show that they’re similar and make a case for why those similarities are relevant.
I haven’t seen anyone making this argument bother with this exercise, but I have seen people who actually look at the economics discuss why they’re different animals.
There is also the tech itself.
internet - connect everything together across vast distances. Obviously limitless possibilities.
smartphones (you didn’t mention them here, but they’re the other example people use for this argument most frequently) - anything a computer can do, in the palm of your hand.
llms - can do some powerful stuff like rifle through and summarize text, or generate text, or generate code… Except you can’t really trust them to do any of these things accurately, and that is a fundamental aspect of how the technology works rather than something that can be fixed, so they can’t be used responsibly for anything critical.
People immediately knew how the internet could help us, even during the dot-com bubble. Anyone who had used Google (or before that, Yahoo) would immediately fall in love with how it helped their life. AI (LLMs)? Not so.
The Internet boom didn’t have the weird you’re-holding-it-wrong vibe either. Legit “it doesn’t help with my use case” concerns seem to all too often get answered with choruses of “but have you tried this week’s model? Have you spent enough time playing with it and tweaking it to get something more like what you want?” Don’t admit limits to the tech, just keep hitting the gacha.
I’ve had people say I’m not approaching AI in “good faith”. I say that you didn’t need “good faith” to see that Lotus 1-2-3 was more flexible and faster than tallying up inventory on paper, or that AltaVista was faster than browsing a card catalog.
Perhaps you are unaware that AI has solved the proteome. This was expected to be a 100-year project.
I’m aware of machine learning being used in all kinds of science, but that is not llms and therefore not the topic of discussion here.
Au contraire. The proteome was solved by LLM transformers trained on genetic strings.
https://en.wikipedia.org/wiki/AlphaFold
A transformer model isn’t always an llm, nor does a type of algorithm/data model/whatever being useful for one purpose mean it is equally useful for all other purposes.
I was at a startup in 1999 … in Seattle. I actually ducked out because it was clear that about all they could do was arrange outings for the staff.
Ah, yes, Yahoo!, the elephant graveyard of good ideas.