Alright, listen.
I have seen this story before. Different labels on the servers, same smell in the smoke. Everyone is cheering for the new co-pilot. The dashboards open fast, the buttons click, the demo hums. It looks finished. That is the trick. It always looks finished right before it collapses.
There used to be a floor you learned on. Tickets with splinters. Fix the CSV importer that chokes on Windows line endings. Patch the cron job that skips the last record on the last day of the month. Clean the blood off staging after someone dropped a column without a migration. Entry-level meant entry into pain. You touched the electrified fence and remembered the sound. That is how instincts are made.
Now entry-level means a login to a coding assistant, a shortcut key, a pasted stack trace, and a green checkmark from CI. The code “works.” No one can explain why. They just know it compiled. That is not engineering. That is superstition with unit tests.
I watched an SE2 this morning. He opened Cursor, pasted a 200-line exception, asked for a patch, and shipped the diff in ten minutes. Comment read, “AI fix, lmk.” The PR sailed through because the board said four points due by noon and the burndown looked hungry. Review happened. Eyes skimmed. Approve. Merge. The pipeline lit up like a Christmas tree. Everyone clapped at standup. It was efficient. It was also a loaded gun with the safety off.
Later, production started swallowing orders. Not failing. Worse. Succeeding twice. Idempotency broken by a handler that “looked fine.” The Stripe dashboard showed duplicate charges. Finance pinged. Legal hovered. The senior on call rolled back, traced the request path, and found the helper the assistant hallucinated. It passed tests that never tested time. Of course it did. Time is where systems go to die.
Management will call the outage a “learning moment.” They will ask for a postmortem with action items. They will want automation. More guardrails. More PR templates. Fewer blockers. Ten times the output. A hundred times the productivity. They will never say the quiet part. The quiet part is simple. There is no training ground left. There is only the stage, the spotlight, and a loaded script.
Remove entry-level work and watch what rises to replace it. You get pristine résumés from freshly minted degree programs, people who can derive Dijkstra’s algorithm on a whiteboard and will, with perfect penmanship. They have never watched a database swap to disk under load. They have never tailed logs at 4 a.m. They have never pulled tcpdump because the trace “looks wrong.” They don’t know the smell of a memory leak. That is not a character flaw. That is a missing chapter.
Or you get the self-taught. Hungry. Clever. Shipping clones to prove they can. The work looks complete because the framework did most of the heavy lifting and the assistant filled the gaps. They skipped the painful scaffolding because the tools made it vanish. They are praying for a lucky break into mid or senior. They will be dropped into a fire and asked to walk out holding a report. Some will. Most will not. Not because they lack talent. Because they were never allowed to learn where the heat lives.
I can map the chain of causality with a marker on a napkin. Erase entry-level tickets. Replace them with auto-generated patches. Pressure mid-levels for throughput. Turn senior reviews into rubber stamps because cycle time is now a KPI. Push. Pray. Repeat. Incidents increase. Trust falls. Add more assistants. Add more abstraction. Fewer people understand the system. More people “operate” it. You do not get resilience from that spiral. You get a glossy crash.
Want anecdotes. Fine.
A company shipped a glowing “MVP” in a week. AI wrote most of it. The demo was slick. They forgot to lock down the bucket that stored uploads. Government IDs, utility bills, all there. The link got indexed. The breach notification cost more than the product ever would have earned. They called it unlucky. It was not unlucky. It was predictable.
Another shop let the assistant pick the dependency for a “simple” image resize service. It grabbed a package with a cute name and a postinstall script that phoned home. No one looked. Why would they. The pipeline was green. The server started mining on weekends. Cloud bill spiked. They laughed about it on Slack, then quietly rotated every credential in the org. “No customer data touched.” Sure. That is what you say when no one knows for certain.
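The cute-name package is why dependency lists deserve human eyes before a pipeline ever runs them. A minimal sketch of the kind of CI gate that would have flagged it, assuming a requirements.txt-style list; the package names and allowlist here are hypothetical, not from that shop.

```python
# Sketch of a CI gate: fail the build on any dependency a human has
# not reviewed. Parsing is deliberately naive; real requirements files
# have more syntax than this handles.

def parse_requirements(text):
    names = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        # Take the bare package name before any version specifier.
        for sep in ("==", ">=", "<=", "~=", ">", "<"):
            if sep in line:
                line = line.split(sep, 1)[0]
                break
        names.add(line.strip().lower())
    return names

# Hypothetical allowlist: reviewed by a person, not an assistant.
ALLOWLIST = {"requests", "pillow"}

def unreviewed(requirements_text):
    """Return the set of dependencies nobody has signed off on."""
    return parse_requirements(requirements_text) - ALLOWLIST
```

A nonempty return value blocks the merge. The point is not the parsing; the point is that adding a dependency becomes a decision someone owns, instead of something the assistant grabbed because the name fit.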
One more. A team moved to serverless because the deck said cost savings. Cold starts were “fine” in staging. In prod the p95 crossed the SLA because of a naive JSON validation the assistant composed from four blog posts. The fix was three lines and a decade of experience. The person with the experience was on vacation. The alert fired for nine hours.
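The “three lines and a decade of experience” class of fix usually amounts to moving expensive setup out of the request path. A hedged sketch of the shape of it, assuming a hypothetical order_id field and format; the thing to look at is the compile-once line at module scope, which stays warm across invocations instead of being rebuilt per request.

```python
import json
import re

# Compiled once at module import, not inside the handler. In a
# serverless runtime this work survives across warm invocations;
# doing it per request is the naive version that blew the p95.
# The field name and pattern are illustrative.
_ORDER_ID = re.compile(r"^[A-Z]{2}\d{6}$")

def validate_order(payload):
    """Return True if the payload is JSON with a well-formed order_id."""
    try:
        data = json.loads(payload)
    except json.JSONDecodeError:
        return False
    order_id = data.get("order_id", "")
    return isinstance(order_id, str) and bool(_ORDER_ID.fullmatch(order_id))
```

The blog-post version the assistant stitched together rebuilt its validation machinery on every call. Nothing in staging traffic was heavy enough to show the difference. Prod was.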
This is not about hating tools. Tools are fine. I like levers. I like explosives when I know where the blast will travel. This is about what gets hollowed out when the lever is so good that no one learns what it is moving.
There is still a way to make this work, but it is not the way the slides promise. It looks slower. It looks expensive. It looks like mentorship that costs burn rate. It looks like carving out live, contained pain for people to feel early and often. Give them a sandbox that bites. Make them run the on-call with a net. Let them fail against a real thing that will not flatter them. Teach review like surgery, not like stamping parking tickets. Reward the fix that empties a queue at 2 a.m., not the deck that explains why the queue filled.
Because here is the part no one likes to say out loud. Senior engineers do not scale by cloning themselves. They scale by creating scars that become instincts in other people. If the machines remove the scars, the instincts never form. Then the seniors retire, or burn out, or get promoted into a meeting calendar, and the organization keeps the shiny surface with nothing underneath it. That is how systems die. Quietly at first. Then all at once.
What happens next is not mysterious. More “co-piloted” commits. More green pipelines. More incident retros that blame “unknown edge cases.” More managers chasing throughput metrics like merge count and lead time while the system rots where metrics do not look. Eventually the outage is not a blip. It is a chapter. The kind that changes hiring plans, insurance premiums, and careers.
I am not warning. I already warned. This is the I told you so. This is the prophet part no one enjoys. The future is simple. Either rebuild the ladder and pay for the pain on purpose. Or keep deleting rungs and pay for the collapse by surprise.
There is a picture in my head. An office at 9:41 a.m. Coffee rings on a desk. A junior merges a fix he does not understand. A mid-level opens a new branch and asks the assistant to wire an integration he has never touched. A senior scans six PRs in ten minutes because leadership needs the release by lunch. At 1:13 p.m., the first alert. At 1:16, the second. By 1:27, the status page goes yellow. By 1:32, red. Finance asks if charges duplicated. Legal asks if data leaked. Marketing asks if the tweet should be drafted now or later. No one breathes. The senior grabs the console and finds it in five minutes because she has the scars. She rolls it back. She writes the hotfix by hand. She patches the test that never tested time. The room exhales. Leadership calls it a win.
It is not a win. It is a bill. And the interest rate is going up.
Keep the assistants. Fine. But bring back the floor that hurts. Give the beginners real bugs, real logs, real outages inside fences. Let mid-levels write microservices without a ghost writer, then make them own the pager for what they shipped. Make review a fight worth having. Slow what must be slow. Fast is for demos. Durable is for reality.
Or keep doing what everyone is doing. Keep pretending the castle is stone because the paint is convincing. Keep asking for 10x and 100x. Keep calling push and pray a process. When it falls, do not look surprised. I will not be. I already pictured the smoke.