Yes, I think there is a bubble. I think AI may become extremely impressive but still be limited.
A lot of tasks require context to produce a correct solution. AI can whip up algorithms in a vacuum, but it doesn't know about the custom data format you have to import. It doesn't understand how to map that to the target schema and will blindly import it in a way that is subtly wrong. It doesn't have the context provided by chit-chat and vague statements made in an email.
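To make that concrete, here's a hypothetical sketch (the CSV columns, field names, and formats are invented for illustration) of the kind of subtly wrong importer an AI will happily write when it lacks that context:

    # Hypothetical importer: maps a vendor's custom CSV into a target schema.
    # The code "works", but without context it bakes in wrong assumptions.
    import csv
    from datetime import datetime

    def import_orders(path):
        orders = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                orders.append({
                    # Subtly wrong: the vendor exports DD/MM/YYYY, not the US
                    # format assumed here, so 03/04/2024 imports as March 4th
                    # instead of April 3rd.
                    "order_date": datetime.strptime(row["date"], "%m/%d/%Y").date(),
                    # Subtly wrong: the vendor's "total" column is in cents,
                    # but the target schema expects dollars, so every order is
                    # imported 100x too large.
                    "total_usd": float(row["total"]),
                })
        return orders

Nothing in the file itself tells you which assumption is right; that knowledge lives in the email thread with the vendor.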
No matter how impressively AI works in a vacuum, the context-heavy problems are going to be an issue. Someone mentioned the "last mile", and that's where things fall apart. Same with self-driving: it is impressive at first, until a road is blocked and there are some poorly marked detour signs routing you through non-standard paths.
Same with a plumber. You might build a robot that can fix many plumbing problems, but there are always custom nooks and crannies you have to contort yourself into, and a pipe you have to saw off in a very specific way that only tons of context would let you even know where to begin.
So until AI can consume context the way humans can, it's going to be limited to "autocompletion on steroids". Which is valuable, but not the end of human developers. Only time will tell.
AI is actually more revolutionary than people believe: it can do repetitive human labour 10-100 times faster and doesn't have societal issues, so in business terms it makes companies more productive, allowing the economy to grow.
But instead of a natural progression, forces in the industry have been hyping up AI for their own benefit, which of course will have consequences in the long term. I don't think there will be a burst in the next 5 years, but I'd say the chances increase sharply in the next two years, if certain conditions are met.
Disclosure: I'm not an economist; this observation is based on my understanding of the propagation of AI as a developer and founder.
Does it matter?
Personally I love it when I see people dismiss AI as a “stochastic parrot” or whatever other snarky phrase is currently popular on HN.
I just keep getting tons of value out of AI and am more productive than I ever have been in my life. If the competition wants to shoot themselves in the foot, I’m not going to wrestle the gun out of their hands.
Ask HN: Could AI be a dot com sized bubble?
159 points|jameslk|8 months ago|130 comments
https://news.ycombinator.com/item?id=40739431
Ask HN: Is commoditization of AI finally going to burst the AI bubble/hype?
16 points|behnamoh|7 months ago|13 comments
https://news.ycombinator.com/item?id=41134422
Ask HN: When will the AI bubble burst?
14 points|roschdal|10 months ago|25 comments
https://news.ycombinator.com/item?id=40259289
Ask HN: Are we in an AI / ML bubble?
10 points|orbOfOrthanc|5 years ago|8 comments
https://news.ycombinator.com/item?id=21737972
Right, but it's Feb 2025.
Fair enough, AI moves fast, but little has changed besides DeepSeek and the reasoning models. AI agents hit an inflection point around November, but people had always bet on them anyway.
I don't consider GPT-4.5 or Claude 3.7 major enough, and based on the versioning, their creators don't consider them major either.
TL;DR: Yes, because current AI has the same last-mile problem as the tech around self-driving cars.
We'll see if there is more financial loss due to bad AI, similar to self-driving fatalities. It's a calculated risk.
We've seen companies held liable for AI chatbot statements and lawyers penalized for confabulated case law.