Beyond the Hype: What AI Can Actually Do for Your Engineering Team

Stop waiting for AGI. Start building systems that work today.
Mac Heller-Ogden, Senior Principal Software Engineer
November 6, 2025
Artificial intelligence will revolutionize everything! Artificial general intelligence is imminent! Artificial superintelligence will reshape the world! Or so the hype goes. The reality? AI is a powerful tool—but without careful integration, it’s slowing teams down.
We fell for the voice. As writer Kara Wynn Long compellingly argues, language is a poor heuristic for intelligence. We confused fluency with understanding. The confident tone labs trained these models to use only deepened that confusion.
Even Richard Sutton, the 2024 Turing Award winner and one of the field’s most respected voices, has argued that LLMs are fundamentally limited. In a recent interview, he points out that without experience, goals, or feedback from the world, there’s no grounding for real learning. His work, of course, is focused on general intelligence—a horizon we’re still far from. But the takeaway for engineering teams today is simple: treat LLMs as statistical mirrors of language, not reasoning engines.
We’re only now starting to recognize the consequences of this misjudgment. AI can generate content faster than we can review it, and in many teams, it’s introducing friction, not speed.
A rigorous study by METR found that experienced open-source developers using AI tools took 19% longer to complete tasks than without AI. Even more striking: developers expected AI to speed them up by 24%, and after experiencing the slowdown, they still believed AI had sped them up by 20%. The gap between perception and reality couldn’t be clearer. (Thanks to João Pedro de Amorim Paula for bringing this study to my attention.)
So where does that leave us?
Gartner, the generally accepted steward of current corporate thinking, released their 2025 Hype Cycle for Artificial Intelligence. Their assessment? AI-Ready Data and AI Agents were last year’s disappointing hype. This year, the focus has shifted to “AI-native software engineering,” defined as “a set of practices and principles optimized for using AI-based tools to develop and deliver software applications.”
Notice the shift—from autonomous agents that would revolutionize everything to tools that augment existing practices.
Andrej Karpathy, one of the most well-informed voices on the technical aspects of AI, gave a keynote at Y Combinator’s AI Startup School in June 2025 titled “Software Is Changing (Again).” His message? Keep AI on a leash. Design solutions in which humans remain first-class and the AI is only partially autonomous.
Karpathy talks about being at the start of a decade of learning how to adapt our software practices. He’s stated publicly that he’s “very concerned” when he hears people say things like “2025 will be the year of agents.” And yet he remains bullish on the technology and its opportunities. He’s simply realistic about the tech we actually have versus the tech we imagine.
The world is finally waking up to the fact that AGI (artificial general intelligence—systems that reason broadly like humans) isn’t around the corner as promised. In fact, it might be a long while coming.
Here’s the important part: this isn’t a reason to abandon AI initiatives. It’s a reason to approach them with clear eyes and practical goals.
The key insight is understanding AI as a tool for augmentation rather than replacement. When Karpathy talks about keeping AI “on a leash,” he’s describing a design principle, not a limitation. Human-in-the-loop isn’t a fallback position until the technology gets better. It’s the architecture that works.
This matters for how we build. Instead of asking “What can we fully automate?” we should ask “Where can we amplify human judgment?” Instead of “What can we replace?” we should ask “What can we accelerate?”
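To make that distinction concrete, here is a minimal sketch of a human-in-the-loop gate. The helpers (draft_change, reviewer_approves, apply_change) are hypothetical placeholders for whatever your tooling actually provides; the only property that matters is that the model proposes and a person decides.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Suggestion:
    """A change the model proposes; nothing here is applied automatically."""
    summary: str
    diff: str

def assisted_change(
    draft_change: Callable[[], Suggestion],           # hypothetical: calls your model of choice
    reviewer_approves: Callable[[Suggestion], bool],  # hypothetical: CLI prompt, PR review, ticket workflow
    apply_change: Callable[[Suggestion], None],       # hypothetical: applies the change to your system
) -> None:
    """The model drafts, a human decides, and only then does anything ship."""
    suggestion = draft_change()
    print(f"Proposed: {suggestion.summary}\n{suggestion.diff}")
    if reviewer_approves(suggestion):
        apply_change(suggestion)
    # A rejection is a normal outcome, not a failure of the workflow.
```

The design choice worth noticing is that the approval surface is injected, not bypassed: the gate is the architecture, not a temporary safety rail waiting to be removed.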
At Gateless, our product approach to intelligent automation is built on a core principle: codifying human knowledge and intelligence, not replacing human judgment. There’s a critical difference. Codification means capturing patterns, workflows, and decision frameworks that experts use. It means building systems that make expertise more accessible and repeatable. It doesn’t mean pretending the system has become the expert.
That same philosophy guides how I think about internal AI tools. I’m focused on finding opportunities where these new capabilities can genuinely help our teams—developers, customer support, product, sales. Everyone benefits when we design around human capabilities rather than trying to circumvent them.
The highest-leverage opportunities lie where engineers currently spend the most cognitive effort: understanding how systems behave in real-world conditions, tracing the flow of data or state across layers, and diagnosing unexpected behavior. These are the moments where context is fragmented, surprises are common, and expertise is spread thin—and where well-integrated AI could provide real support.
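As one illustration (a sketch under assumptions, not a description of our tooling), an assistant aimed at those moments would spend its effort collecting and organizing scattered context rather than asserting a root cause. The fetch_recent_logs, fetch_traces, and summarize_for_human hooks below are hypothetical stand-ins for whatever observability stack a team already runs.

```python
from typing import Callable

def gather_context(
    incident_id: str,
    fetch_recent_logs: Callable[[str], list[str]],  # hypothetical hook into your log store
    fetch_traces: Callable[[str], list[str]],       # hypothetical hook into your tracing backend
    summarize_for_human: Callable[..., str],        # hypothetical model call that condenses the evidence
) -> str:
    """Collate fragmented evidence for an engineer; the diagnosis stays with the human."""
    logs = fetch_recent_logs(incident_id)
    traces = fetch_traces(incident_id)
    # Summarize and cross-reference the evidence, but do not claim a conclusion.
    return summarize_for_human(incident_id=incident_id, logs=logs, traces=traces)
```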
The cost of getting this wrong isn’t just wasted investment, though that matters. It’s the opportunity cost—focusing on flashy but impractical applications while missing the genuinely useful ones. Teams become skeptical of AI because they were oversold. Rightly so.
But the opportunity of getting this right is building systems that make your team more effective. Not someday. Not when AGI arrives. Now. This quarter. Systems that experts actually want to use because they genuinely help, not because they were mandated from above.
We’re at the beginning of a decade of learning how to integrate AI thoughtfully into our development practices. That’s Karpathy’s timeline, and it feels right. Not a quarter, not a year. A decade of experimentation, refinement, and gradual improvement.
The teams that will succeed aren’t the ones betting everything on AGI. They’re the ones building practical tools, measuring real outcomes, and iterating on what actually works. The ones treating AI as a powerful but limited tool—not magic.
That’s the approach we’re taking at Gateless, and it’s the one I’d encourage any engineering leader to consider. Focus on augmentation over replacement. Build for human-in-the-loop workflows. Start with internal applications where you can learn safely. Measure ruthlessly and iterate constantly.
The hype will continue. AGI will remain perpetually two years away. Meanwhile, there’s real work to do—building AI systems that make our teams more effective today.
That’s where the actual opportunity lies.
This blog is provided for informational purposes only and should not be relied upon as legal, financial, or compliance advice. Please consult appropriate professionals regarding your specific situation.