Like every well-adjusted middle-aged man, I’ve taken a recent interest in Warhammer 40k. There’s something about painting dystopian nightmares that is ever so soothing. And while it’s not a cheap hobby, the price is much less than a mid-life crisis car. Much to the joy of my wife.

My faction of choice is the Adeptus Mechanicus. I find it hilarious to mix technology and science with cult-like fervor. Working complex machines with the discipline of a hardcore religion. Memorizing rituals to operate machines. Burning incense to appease the machine spirit. Speaking a special language that lets them interface with the machines faster. Their depiction has changed throughout the years: sometimes they’re the top scientists of the setting, sometimes a cargo cult that doesn’t actually understand what it’s doing. They just pray, follow dogma, and hope the machine does what’s intended.

As a software engineer I find myself feeling more and more like the latter kind of Magos in the Cult Mechanicus.

The AI Wave

The AI Enthusiast paradise is a world where engineers, or any layman, do not code anymore. Rather, they speak to an intermediary that transcribes their requirements into code. The pitch: for those who have never coded, it’s a gateway into a skill they never wanted to learn; for those who have, the intermediary can do it faster than you can. In theory you can break a problem into multiple steps and have multiple agents code it for you. It’s just like having a handful of junior devs, if those junior devs never got better, never challenged you, and never learned the business in any meaningful capacity.

My personal attempts at vibe coding have been hit or miss. If I give a prompt to create something relatively innocuous, it can generate a lot of boilerplate and some convincing layouts. If I give it a hard problem, it can sometimes figure it out, or it will get stuck in a loop working on the wrong thing and burn all of your tokens on a pile of garbage that you have to go and manually salvage. Getting a chatbot to extract CSS from various React components and centralize it was an exercise in futility I do not look forward to repeating. It’s also not deterministic, so the same prompt can yield different code.

There are times AI agents are a time-saver and other times they’re awful. I’m simply whelmed by it. That sentiment is heresy in the tech sector nowadays.

I’m sure I’m not the only software engineer being badgered by management and the C-Suite to “use AI” on the regular. A response of “eh, it’s okay I guess for small things” never elicits the correct response. In fact, some C-Suites go as far as demanding that all coding be done agent-first and agent-only. Throw away your IDE, hell, even Cursor! Talk to the agent only. This takes engineers from being authors and makes them more like tech priests whispering to the machine, hoping to get the right output. Sometimes you get it; sometimes the machine does not cooperate and takes further prompting to get right. Or you prayed to the wrong model and needed to use the “correct” one. The engineer is usually left not understanding what they wrote or why, just that the LLM spat something out, they need to ship it, and some poor unsuspecting dev is going to have to review a monstrous PR that the author can’t explain.

All messaging around LLMs from the top is the same. There are these mythological “productivity gains” that can be unlocked using recent advancements in generative AI. Your competitors haven’t found them yet, but they will unless we find them first. So stop what you are doing and find them, dammit! Everyone is shipping features at the speed of sound but us, and we need to pay for all the tokens and have our staff be the ones to unlock the magic.

There’s something cult-like in the Gen AI wave. A corporate pulse survey I got a while back asked the following question:

Is your team using AI to increase productivity gains?

  1. Yes we are fully utilizing AI
  2. Yes we have started on the journey
  3. No but we will be starting soon

Notice there is no “No it’s not useful to us”. You can’t say “I don’t think it’s that useful” in the industry without getting strange looks and being “out of touch”. Some companies have started including “AI Proficiency” in their performance reviews that measures how much of an AI-aficionado you truly are. A scale of “Just dabbling” to “Mastery”. Never a “N/A” to be found.

So if you’ll permit me I’m going to dip my hands in some heresy and poke at the things I don’t understand.

I found Bigfoot!

Execs and Thought Leaders preach about the ever-growing productivity boosts in this new age of AI, but does this mythological productivity unlock actually exist?

Any engineer worth their salt will tell you that writing code isn’t the slow or hard part of the job. The slow and hard parts are:

  • Nailing requirements down from the business
  • Getting other teams aligned on dependencies and timelines
  • Fighting against the technical debt of old decisions
  • For some: So much Oncall / SRE work that you can’t dig yourself out of the hole

An LLM isn’t going to speed much of that up. It all involves interfacing with people and getting alignment (or, in the oncall case, a culture shift from a level or two up the chain). Human touchpoints are the hard part, especially in orgs dominated by microservices, where as many as 5-10 teams can be necessary to make one workflow run.

If productivity were soaring you’d think open source would flourish. Instead, so many projects get so many slop AI PRs that they just can’t handle them, with some LLM agents going as far as publishing hit pieces when rejected. Every one of these PRs throws the onus onto the reviewer to test, understand, and verify the change.

That’s the general feeling, but what about actual numbers? I’ll let someone who has done more digging on this than me shed some light:

This creates a particularly egregious contradiction. Microsoft trumpets 55% productivity gains from AI tools in earnings calls (Microsoft, 2023). Meanwhile, independent research shows:

  • 69% of developers lose 8+ hours weekly to inefficiencies (Atlassian, 2024)
  • Only 20% of professional developers report being happy with their jobs (Stack Overflow, 2024)
  • Tech worker burnout jumped from 49% to 68% in just three years (Techotlist, 2024)
  • Developer productivity is neither well-understood nor enabled, according to Atlassian’s research (InfoWorld, 2024)

So which is it? Are developers 55% more productive, or are they losing 20% of their time to inefficiencies and burning out at record rates?

The answer: executives are measuring—and reporting—what makes their stock price rise, not what’s actually happening on the ground.

-Bob Marshall

What are we building?

Let’s say that this ever-sought performance MacGuffin is found and we enter the magical Golden Era of software. We’re building at the speed of sound, shipping feature after feature.

What’s going to come of it? I’d love to say great things but given what we’ve seen in the last 5-10 years it looks bleak.

We’re currently in the Enshittification cycle of software.

Here is how platforms die: First, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.

I call this enshittification, and it is a seemingly inevitable consequence arising from the combination of the ease of changing how a platform allocates value, combined with the nature of a “two-sided market,” where a platform sits between buyers and sellers, holding each hostage to the other, raking off an ever-larger share of the value that passes between them.

-Cory Doctorow

This was written about TikTok but I’d argue people have been hitting this with a lot of the platforms they’ve used in the past.

Subscriptions get more expensive.

Restricting previous functionality in order to drive numbers up while gaslighting you into thinking they’re “adding” value.

Value is added not by innovation but rather by bleeding customers dry and destroying innovation with acts like buying 40% of the RAM supply to keep others from having access.

Hell, anyone remember on Soundcloud when you could just, y’know, go to the page and listen to what was uploaded?

Now you can’t touch anything without being greeted by

log in to breathe

A (not so) fun fact: I originally went there to demo how you couldn’t manually select where to play in a track without a login. You used to be able to play a track; you just couldn’t skip forwards or backwards without a login. It’s somehow enshittified further than I remember.

The web itself has gotten more openly hostile to usage than ever before.

So as we enter this untold productivity era, how is anything that gets built going to be useful for consumers?

Let’s take an Allrecipes recipe without using adblock (a horrible choice for anyone; use uBlock Origin!).

Its beauty is in its clutter

Not bad, but you can still sort of read the recipe there in a clear fashion. Claude, let’s make this better

Modify the frontend to render an advertisement that crops up right as the user gets to the ingredients. It should have a timer that makes it impossible to close for 30 seconds and hijacks the back button so that the user can’t leave.

Presto!

There we go! We did this in record time! I can smell the unique impressions from here.

I realize this line of thinking is overtly cynical for a software engineer. I say this less to discredit the craft and more to make this argument: coding time is not the reason for stagnation. It’s the drive to score profits at the expense of the users you worked so hard to cultivate.

So, who benefits?

Well, in most gold rushes the only people that benefit are the shovel salesmen.

You see people trying to establish themselves as thought leaders. For those blissfully unaware, thought leaders are losers who post on LinkedIn and blogs, determined to convince you that they know best so that you’ll pay them for consulting gigs or a higher paycheck on hire.1 Or they do it as part of their job to give the company clout. The above is a bit harsh and cynical, I’ll admit, but it’s difficult to wade through the sheer volume of “gurus” on these platforms pontificating on things they have no insight into, like a fortune teller reading tea leaves. My personal favorite was reading one of those gurus espouse how the app at one of my companies was so UI-focused that it didn’t need a chatbot…while I was using the beta version of the app that was testing a chatbot integration.

Public companies underwater can use AI as a scapegoat for doing layoffs. Oh no you see, we’re not in dire straits. We’re just so efficient with AI that we don’t need to hire anymore!

Really though, OpenAI, Anthropic, and Google all benefit from this mentality. A future where all engineering is gated behind your platform and each feature from every company requires paying you a toll of an unpredictable number of “tokens”? It’s a dystopian form of rent-seeking. Like adding a DoorDash fee to every software feature. Every thought, attempt, failure, and success, nickel-and-dimed by large entities promising the golden age of software. No wonder big tech is throwing money into these contraptions.

I know open source implementations exist, but I can’t speak to their efficacy; I haven’t tried running open models through Ollama. It would trade the rent-seeking above for needing a beefy machine, and good luck getting one of those with the RAM shortage. Thanks, Sam Altman!

So…

Does this mean I’m 100% anti-AI? Not really. I think there could be uses, much like code generators and the like. Something I like doing is having LLMs craft unit tests for changes I’ve written. I loved the concept of “snapshot testing” in React: building a component and taking a “snapshot” of what it rendered at a point in time was a cool idea, and I was sad when people migrated away from it. Using LLMs to generate specs gives a similar vibe. It’s easy to review the changes and ensure the output isn’t heinous (and if it is, you git restore and just do it by hand), but it can save you some time manually typing out a suite and mock objects.2
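For anyone who never used it, here’s a minimal sketch of the snapshot idea stripped out of any framework. All the names here are made up for illustration; real tools like Jest persist the snapshots to disk instead of a Map:

```typescript
// A toy version of snapshot testing: the first run records what a
// "component" rendered, and later runs compare against that recording.
type Snapshots = Map<string, string>;

const store: Snapshots = new Map();

// Stand-in "component": renders props to a string of markup.
function greeting(name: string): string {
  return `<h1>Hello, ${name}!</h1>`;
}

// First call records the snapshot; later calls compare against it.
function matchSnapshot(key: string, rendered: string): boolean {
  const saved = store.get(key);
  if (saved === undefined) {
    store.set(key, rendered); // first run: record the snapshot
    return true;
  }
  return saved === rendered; // later runs: flag any drift
}

console.log(matchSnapshot("greeting", greeting("Magos")));   // true (recorded)
console.log(matchSnapshot("greeting", greeting("Magos")));   // true (matches)
console.log(matchSnapshot("greeting", greeting("Heretek"))); // false (drifted)
```

The appeal is the same as reviewing LLM-generated specs: you don’t hand-write the expectation, you review a recorded one and decide whether the drift is intentional.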

I’d love to look at optimizing some of the rough edges of software engineering. Maybe look at ways to reduce or remove night-time pages from a team’s rotation. Optimize some meetings out of existence. But if it doesn’t work for any of those, I’m comfortable enough saying “it didn’t work, leave it and move on”.

But that’s not the ask. What I don’t like is the clear and obvious attempt at replacing thinking and understanding that execs and leadership are trying to force. Don’t think, just prompt and shovel. Consequences be damned. Some may say “we care about code too” with a wink and a nudge, but anything related to process and tracking around that is never brought up. No, it’s prompt prompt prompt, baby!

This bleeds into other aspects of the job as well. Have a doc you need to write? Ask AI to do it! Have a cross-team process that you’re currently fleshing out? “How can AI help us move this along?” You never hear someone ask “how can a socket wrench help us with this problem”, so why do we do it with LLMs?

The specific push of quantity over quality isn’t new to software engineering. In fact, part of what makes a good engineer is being able to balance velocity and quality and advocate for a good path forward. But what I’m seeing here isn’t that. It’s dogma. It’s not a matter of trying out something shiny and new. It’s the fuuuuutuurree and if you’re not in, you’re out! It’s a reduction of labor to blathering at the machines, praying it works, and just trying to keep it stitched together. Having worked on teams like that before, I can tell you the results are never pretty. You cut so many corners that there’s nothing left to shave off, and you’re left with an unmaintainable husk of software that just can’t do anything. The machine spirit long faded.

And what’s worse is engineers are jumping on this thinking that if they adopt first they’ll be the ones on top, with some going as far as saying stupid things like “prompts will replace source code” (something I’d love to write about later). At some point I just feel gaslit, as I don’t see the wondrous advancements that others are seeing. At least with previous hype cycles (remember when everything had to be on the cloud and microservice-first?) I could find tangential benefits. Here, I don’t see the wonderful results that everyone else does.

So my question is: is our future a bleak existence where engineers are commoditized into nothing but soothsayers? Constantly squawking or speaking to the machines and hoping they cooperate? Possibly getting an implant so they can speak binaric to interface with the machine faster3? Constantly churning out the same garbage to make the web more hostile than it’s ever been?

If the answer to the above is yes, I ask our C-Suites…Can you at least give me a sick red robe and an Omnissian Axe? If I need to live in a bleak future I want to look cool doing it.

The Omnissiah sees, hears, and knows all


  1. The irony is not lost on me. I’m a loser, but with no desire for hawking email subscriptions or consulting fees. ↩︎

  2. To the purist TDD enthusiasts weeping at this concept: I do hear you. If you use unit tests as a way to build out requirements before writing a line of code, my suggestion would give you an ulcer. I recognize this concept might be heresy, and if you think it’s a bad application of AI, then good! Stick to your guns and don’t let anyone take that from you. ↩︎

  3. The company will pay for it, but the procedure and material will count as a source of income that you will have to foot the tax bill on. Can’t have companies paying taxes! ↩︎