The current state of LLMs, despite what you may have heard, is not far from “predictive text on steroids with a response structure engine”. It’s a marvel of Natural Language Processing technology, but it can’t think or make decisions effectively.
It can’t apply critical thought. It can’t do the same repetitive task consistently.
The marketers behind the hype are the same marketers that drove previous failed advancements. They’re just too good at what they do. They remind me of Hans from the Disney movie “Frozen”, where he serves as the unneeded rescuer of a princess in distress, with the secret plan of rising to power by marrying her and gaining control of the resources of her kingdom.
There are lots of engineers trapping themselves in a hole with their codebases right now, and then getting royally screwed because they can’t troubleshoot the code they’re responsible for after it has been turned into “bot spaghetti”. Then it gets pushed to production so they can tell their sales team, who will tell you: “it’s real, we’re using it right now at such-and-such website, live in action!” Then a persistent threat actor with a little know-how, or even a junior security analyst with a penetration testing framework, comes along, and after much public embarrassment the company goes under, gets court-ordered to pay restitution to the customers whose data it mishandled, and watches its reputation in the industry take a hit. Their “solution” missed obvious, critical security and stability problems that a development team would have, and should have, noticed, and they paid bitterly for it.
I hear about it every day. And their next project? The same thing. Why? Because they’ve built their entire public persona around evangelizing a product type that cannot deliver. This is the sunk cost fallacy in action. Unfortunately, it’s not just their own companies they’re doing it to; sometimes it’s yours.
There will soon be a credibility shift on AI, on LLMs in particular, on “vibe coding”, and on “agentic” anything. It’s already started to happen a little, but I think it’s a few years off before it hits a point of criticality. As soon as results are actually graded by default, instead of the industry coasting on market momentum, this will fall apart. Marketers designed the hype to exploit gaps in decision-makers’ awareness of the technology. That takes time to unravel.
We did the same thing with Quantum Computing. We did the same thing with Augmented Reality. We did the same thing with Virtual Reality.
Every 10 years or so, these marketers stir up the next big thing, get all the attendees at the conventions to drink the Kool-Aid they’ve served up in the form of product features or engineering systems, and then laugh at the detractors while the only people making money are the ones selling promises that are later realized to have never been delivered.
Yes, it’s a bleak outlook. The IT industry has earned that outlook. A few years ago, we tried, unsuccessfully, to invalidate the smartest segment of IT workers on the planet by pretending that Kubernetes, or containerization in general, would somehow magically undo the need for Linux engineers, because marketers didn’t know the Operating System still exists in a containerized environment and still has hard problems to solve. Now that the Kubernetes people have “learned a new word”, we’re on to “new mistakes”.
We seem to keep making the same mistakes, too. Anybody remember Google Glass? It failed for the same reason LLMs will fail with the advent of MCP (Model Context Protocol). This was a product type that would have changed the evolution of our species if it had been successful. Instead, Google built a proprietary landscape, a walled garden, that it could not operate outside of, and abandoned the hardware ecosystem lessons we learned with x86 machines: it was closed source, with no exchangeable-parts ecosystem and no ability to modify it or run custom operating systems on it. It deservedly died on arrival.
MCP is actually the perfect example of why LLMs are a terminal R&D track, because it locks rapidly advancing model designs into a fixed, restrictive architecture: it forces the technology it’s meant to standardize to conform rather than advance. Since everybody is adopting MCP, it will lock the whole LLM ecosystem into a fixed design state and prevent the industry from making breakthroughs beyond trivial incremental bumps. Model advancements that break the MCP interface will become difficult to make profitable, or even visible, in the ocean of LLMs vying for your capital. It creates an ecosystem where the very standardization that makes the technology profitable prevents anything built on that standard from advancing the rest. This means advancements will come in the form of layered architectures built from MCP-interfaced components, where each added layer increases the expense for little value returned. LLMs are already dead; they just don’t know it, yet.
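To make the lock-in argument concrete, here’s a minimal sketch of the kind of fixed envelope MCP standardizes. It mirrors the JSON-RPC style of MCP tool calls, but the shapes below are simplified for illustration; they are not a complete or authoritative rendering of the spec:

```python
import json

# A sketch of the fixed request/response envelope the argument is about.
# Field names follow the JSON-RPC style MCP uses, simplified for illustration.

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Every capability, no matter how novel the model behind it,
    must be flattened into this one request shape."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

def make_tool_result(request_id: int, text: str) -> str:
    """...and every answer must come back as the same result shape."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "result": {"content": [{"type": "text", "text": text}]},
    })

if __name__ == "__main__":
    # Whatever architectural advance a model makes is invisible at this
    # boundary: the caller only ever sees the standardized envelope.
    print(make_tool_call(1, "search_docs", {"query": "quarterly report"}))
    print(make_tool_result(1, "3 matching documents found."))
```

The envelope itself is fine engineering. The problem is what happens after adoption: once every vendor’s differentiation has to squeeze through the same envelope, the envelope becomes the ceiling.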
We’ve told this lie so many times. We’ve made this mistake so many times. Now, granted, every once in a while, maybe about every 20 years or so, one of these things really is “what’s next” and it changes everything. There are indicators to look for when that’s the case. They aren’t here. Marketers have realized this, and they are busy forging those indicators for your procurement team right now, hoping you won’t notice before the deal closes.
To sum it all up, folks, marketers are great at selling promises. Engineers are great at standardizing. Now we’ve got standardized promises.
Total number of jobs replaced by AI to date: 0, and counting rapidly!
It’s a brave new world.
This article was cross-posted on LinkedIn:
https://www.linkedin.com/pulse/brave-new-world-christopher-m-punches-z9vyc