A significant report recently revealed that companies are not achieving the return on investment they hoped for with AI implementations. Organizations that invested heavily in artificial intelligence tools and technologies are discovering that the promised productivity gains and cost savings haven’t materialized as expected. This reality check comes as AI hype dies down and the massive hiring frenzy in the sector has calmed considerably.
This is completely normal within the cycle of software development. When AI hit the mainstream, doomsayers predicted mass job displacement. However, for developers struggling to find work as juniors, the issue wasn’t AI replacing jobs—it was a return to normal hiring patterns after an artificial boom created by COVID-19 and widespread lockdowns.
During that period, companies engaged in what’s called defensive hiring. They were desperate to acquire any talent available, hiring everyone and their neighbors at unprecedented rates. If you simply completed a React course and posted some projects you copied from that tutorial on GitHub, you could likely land a position. The demand was so intense that barriers to entry dropped dramatically.
Many people who secured jobs during this period wouldn’t have qualified in a normal market cycle. They rode the wave of exceptional circumstances. Now that the market has normalized, possibly dipping below normal levels, junior developers face harder prospects. You must demonstrate real capability rather than just completing tutorials. This represents a return to established professional standards rather than an AI-driven apocalypse.
The Fundamentals Still Matter
What I’ve been teaching for many years remains true. Learn your fundamentals deeply. Go execute two to three small freelance projects for free with local nonprofits or community organizations. Recently, a nonprofit reached out seeking developer assistance, representing exactly the type of opportunity early developers should pursue. Think of your first two or three development jobs as your stage work—the final stage of learning where you’re not yet highly valuable to paying clients, but building practical experience.
If you’ve only completed tutorials, especially those offered online or through boot camps, you face challenges. Many boot camp instructors have never written professional code, and they don’t properly prepare students for real-world demands. While boot camps serve some purposes, they often overpromise and underdeliver on genuine professional capability.
The same pattern applies to AI doomsayers. I believe the vast majority of those predicting catastrophic job displacement haven’t actually worked in real-world development environments. They see code generation tools outputting impressive amounts of code and assume that represents development. However, code generation is only one part of development. Writing syntax represents a fraction of the professional developer’s actual work.
Two Approaches to AI Development
There are two ways to use AI to develop applications. First, you can use AI to augment and accelerate traditional development, which everyone should be doing. Second, and much more interesting in my opinion, you can explore AI-first development—building applications that are 90% AI and 10% traditional development. This is where I'm seeing the biggest opportunities lately.
Over the last three weeks, I've had four companies approach me about developing AI-first applications: projects that would be either impossible or extremely difficult with traditional development alone. However, the problem emerges when you hit a large language model without proper prompting and training. It might get 90% of the work right, but that last 10% is everything. The final 10% determines whether an AI implementation succeeds or fails.
If your AI implementation doesn’t properly handle that last 10%, it becomes pointless. You get wrong answers, products go down the wrong path, and entire systems fail. That critical 10% requires proper prompt engineering and edge case mitigation.
The Edge Case Challenge
Edge case mitigation involves providing the AI with a very precise, structured framework within which the LLM will operate. Then you must specifically address outlier events. You need to explicitly instruct the system: if this happens, do this. If that happens, do that. If another scenario occurs, handle it this specific way.
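In code, that framework often ends up as an explicit rule table compiled into the system prompt, so the model never improvises in outlier scenarios. A sketch of the pattern, with made-up rules for a fitness assistant:

```python
# Hypothetical edge-case rules: each pairs a trigger condition with an
# explicit instruction ("if this happens, do this").
EDGE_CASE_RULES = [
    ("the user reports an injury",
     "stop programming workouts and advise seeing a doctor"),
    ("the user asks about medication",
     "decline and refer them to a medical professional"),
    ("the user requests a crash diet",
     "refuse and restate the sustainable-deficit principle"),
]

def build_system_prompt(persona: str) -> str:
    """Assemble a system prompt: the persona first, then one explicit
    conditional instruction per known edge case."""
    lines = [persona, "", "Edge case handling:"]
    for condition, instruction in EDGE_CASE_RULES:
        lines.append(f"- If {condition}, then {instruction}.")
    return "\n".join(lines)

prompt = build_system_prompt("You are a fitness and health coach.")
```

The rule list grows over months of testing, exactly the kind of back-and-forth iteration described below: each time the model mishandles an outlier, you capture it as a new explicit instruction.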
This is exactly what I've been doing for the last three and a half to four months with my custom GPT called Brad, a fitness- and health-focused assistant built on the principles I used to lose 50 to 60 pounds. The LLM alone, without all the prompt engineering and edge case management I implemented, would still be pretty good. However, that work made it dramatically more effective. It took months of back and forth, understanding how the AI behaves and systematically addressing its limitations.
AI is powerful and you should absolutely use it, but it’s not a panacea. It’s not some superintelligence coming to destroy all jobs and wreck the economy. That narrative is nonsense. I’ve seen this pattern before with other technologies. Even if AI eventually reaches those capabilities, it’s going to take considerable time.
The Brittleness of AI Systems
We saw AI's brittleness when GPT-5 was recently released and broke many existing custom GPTs built on GPT-4. If AI were the AGI-adjacent superintelligent system that makes everything easy, a model upgrade wouldn't have broken a bunch of custom GPTs. One of the major complaints is that GPT-5 lost its personality, which is a relatively minor issue. More critically, it destroyed custom GPTs that people spent months developing and training.
They also removed memory functionality. I had been dependent on memory within my custom GPT, and they took it away. This demonstrates a critical risk: with proprietary hosted models like OpenAI's, the provider can simply flip a switch and break all your work and applications. That gives me real pause, and it's problematic behavior.
OpenAI shouldn't force changes on users without warning. They initially removed access to previous models and forced GPT-5 on everybody, a mistake they later reversed. In software development, there's a basic rule: you don't change underlying APIs and codebases without giving everyone notice. Programming languages deprecate features before removing them, and deprecated features are often never actually removed because people still depend on them.
Deprecation means warning users that something will be removed at some point in the future, giving them time to prepare. In practice, three decades of experience shows that most deprecated features never disappear because too much depends on them. I can understand the obsessive-compulsive impulse to clean up languages and remove old libraries, but you don't do it unless it's absolutely necessary for security or critical functionality.
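Python's standard library shows what this looks like in practice: the old feature keeps working, but it emits a warning so dependents have time to migrate. A minimal sketch with hypothetical function names:

```python
import warnings

def new_helper(x):
    """The replacement API that callers should migrate to."""
    return x * 2

def old_helper(x):
    """Hypothetical legacy function: still works, but warns callers
    that it is scheduled for removal."""
    warnings.warn(
        "old_helper() is deprecated and will be removed in a future "
        "release; use new_helper() instead.",
        DeprecationWarning,
        stacklevel=2,  # point the warning at the caller, not this frame
    )
    return new_helper(x)

# Callers see a DeprecationWarning, but their code keeps running.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_helper(21)
```

This is the contract model retirement could follow: the old model stays callable, a deprecation notice goes out, and users switch at their leisure.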
There's lots of old code out there that's perfectly functional. For a platform provider like OpenAI to remove functionality without telling people is foolish. Forcing everybody onto new models before they could test them would be disastrous for businesses with thousands of users. Hopefully OpenAI learns from this mistake and handles model retirement much like programming languages handle deprecation, keeping old models available so people can switch at their leisure.
Frequently Asked Questions
Q: Why are companies reporting poor ROI from AI investments?
A: Many companies rushed into AI adoption without proper implementation strategies, missing proper prompt engineering and edge case management that makes the difference between successful AI integration and wasted investment.
Q: Has AI hiring really slowed down?
A: Yes, but this represents a return to normal hiring patterns after the artificial boom caused by COVID-19. Companies are no longer engaging in desperate defensive hiring but returning to more selective professional standards.
Q: What’s the difference between using AI and AI-first development?
A: Using AI means augmenting traditional development with AI tools. AI-first development creates applications that are 90% AI and 10% traditional code, opening possibilities that weren’t feasible with traditional methods alone.
Q: Why is the last 10% of AI implementation so critical?
A: AI might get 90% of tasks right, but that final 10% determines success or failure. Without proper prompts and edge case management, AI generates wrong answers and takes projects down incorrect paths.
Q: What are edge cases and why do they matter for AI?
A: Edge cases are outlier events that require specific handling. You must provide frameworks telling AI exactly what to do in each scenario: “if this happens, do this. If that happens, do that.”
Q: Should developers worry about AI replacing them?
A: No. Many AI doomsayers haven’t worked in real development environments. AI is powerful but brittle. Code generation is just one part of development, and proper AI integration still requires deep understanding of systems, architecture, and problem-solving.




