Struggle Over Solution
My hot take is that AI, in its current form, won't obsolete software engineers anytime soon.
I saw a writer on Threads a while back commenting on the impact of AI and Large Language Models like ChatGPT on writing as a discipline. They explained that there's a component of writing where doing the work to get your ideas out of your brain and onto a page will, itself, force you to refine those ideas and wrestle with the conflicts they create with other parts of the story, document, or article you're working on. Writing is a tool for communication, yes, but it's also a critical tool for organizing thoughts in a structured way. As someone who was only a few credits shy of an English minor in college, I find this viscerally resonant.
There's a parallel to this problem in the technology industry. As new engineers start their careers with tools like Copilot and Claude at their disposal, I'm concerned they're missing out on the learning gained in the struggle to write code, thanks to the cultural pressure at many companies to "ship ASAP." This, in turn, would create engineers who don't deeply understand what they're shipping.
I'm not alone in this concern.[1] However, I've been told multiple times that this concern is silly. There's a narrative that once we have AI, the need to know how any of this works under the hood goes away. One day soon, someone will describe to an LLM how they want an app to work, it'll spit out the code, and that will be that. "Will we even need software engineers at all?" they ask.
From purely anecdotal analysis, I find a positive correlation between the strength of this belief and an individual's distance from systems of any significant complexity. Everyone I've met who is "on the front lines" with me finds this laughable. Barring something happening that I can't predict, I'm inclined to agree. These tools aren't as good as a layperson may think they are.[2]
Today I'm going to attempt to explain why.
Abstractions trend leaky
Let's talk about abstractions first because, ultimately, a Large Language Model used for code generation is just another abstraction. Joel Spolsky is widely credited with coining the Law of Leaky Abstractions[3], which states:
All non-trivial abstractions, to some degree, are leaky.
An abstraction is any layer that simplifies something much more complicated going on under the hood. We do a lot of this in the software world. A "good" abstraction relieves you of having to be aware of many of the implementation details of how the work actually happens. Sometimes, however, abstractions end up being "leaky" - which is to say they fail to insulate you from all of those implementation details.
Joel's argument is that any abstraction that is not trivial falls into this "leaky" category to some degree or another. Certainly, some abstractions are better than others in terms of how much or how little they leak.
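To make that concrete, here's a minimal sketch of a leak in an everyday abstraction. It's my own toy example in Python (the `_settings_store` dict and `theme_for` function are invented for illustration), not something from Joel's essay: `functools.lru_cache` promises "the same function, just faster," right up until the world underneath it changes.

```python
from functools import lru_cache

# Toy "settings store": the dict stands in for external state that can
# change at runtime (a config service, a database row, etc.).
_settings_store = {"alice": {"theme": "light"}}

@lru_cache(maxsize=None)
def theme_for(user):
    # The decorator hides *when* the underlying lookup actually runs.
    return _settings_store[user]["theme"]

print(theme_for("alice"))                   # "light" (computed, then cached)

_settings_store["alice"]["theme"] = "dark"  # the world underneath changes

print(theme_for("alice"))                   # still "light": stale data leaks out
theme_for.cache_clear()                     # fixing it requires knowing how the
print(theme_for("alice"))                   # cache works under the hood -> "dark"
```

The abstraction didn't absolve anyone of understanding memoization; it just deferred the moment when that understanding was needed.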
Alongside this, I'd like to add the following observation from my experience in the technology industry:
All systems trend toward increased complexity the longer they evolve.
Complex is the antonym of trivial. Taking these two properties together, then, we should conclude that all abstractions will trend toward being leaky over time. If I'm right about that, we should expect that as LLMs become more competent at writing increasingly complex code, they too will become more complex, and therefore more leaky. Furthermore, the longer you maintain a codebase, the more complex it's going to become - and if you're maintaining it primarily with the help of an LLM, you're going to create more leakiness whenever the LLM doesn't know how to handle that complexity.
To put this another way: it's a fallacy to assume that the mere existence of an abstraction fully absolves you of knowing its implementation details.
Three raccoons in a trench coat
Next, we should be clear that programming languages are themselves already leaky abstractions. They're designed to let you accomplish objectives more quickly, but they rarely, if ever, fully excuse you from understanding what's going on at the next level down.
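A small, familiar illustration (Python here, though the same leak shows up in most languages that expose IEEE 754 floating point):

```python
# The language hands you "numbers", but the hardware's binary floating point
# representation leaks right through that abstraction.
print(0.1 + 0.2)           # 0.30000000000000004
print(0.1 + 0.2 == 0.3)    # False

# Getting the behavior you expect means dropping down a level and reasoning
# about representation, e.g. with an explicit decimal type.
from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```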
This is unlikely to change for two main reasons:
- Any general purpose programming language is inherently (and necessarily) non-trivial, because manufactured triviality requires making assumptions that a general purpose language cannot make.
- No interesting system exists entirely in isolation from some kind of environment. Future changes in that environment will cause the code to deviate from its intended behavior. We call this effect bitrot. No programming language insulates you from it.
Given this, advocates of AI tools replacing the need to understand code are effectively expecting an LLM that generates code to provide a less-leaky abstraction than the code itself can provide.
Saying that AI can somehow magically absolve you of the responsibility of understanding what your program is actually doing is roughly equivalent to stacking three raccoons in a trench coat and calling it Inspector Lestrade: it's probably not going to go the way you want it to go.[4]
Exercises left to the student
In school, I hated having to work things out by hand - especially in classes like Calculus, where a TI-89 could just give me the answer I needed and I could move on with my day. I understand the appeal of these AI coding tools, and I use them myself to handle mundane things or to generate some initial code that I can use as a template for something I want to write. They are a great tool to have in our pocket.
Now that these tools exist, I think the challenge for us as software engineers is to figure out what "exercises left to the student" need to exist for folks to build enough depth in their understanding of the field to be successful. I remain convinced that these tools will let us do more, faster - but an abstraction on top of a leaky abstraction is itself going to be leaky. The details of what's actually going on in the CPU are always going to bleed through at some point or another.
Much like the TI-89 wasn't the end of folks needing to actually understand mathematics, AI coding tools will not be the end of folks needing to understand what their code is actually doing under the hood when it goes to production. That struggle of learning is valuable in its own right (to you and to your company), above and beyond the solution you ultimately produce.
[1] See the blog post "New Junior Developers Can't Actually Code" for an example of this, along with the commentary on Wojtek's LinkedIn post. I also reposted this and had some discussion that inspired some of the content of this post.
[2] There may yet be a "major breakthrough" in AI technology that changes my views here. There's a lot of hoopla about Artificial General Intelligence being "just around the corner," but we've been saying the same thing about Quantum Computing for decades, and that's only now seeing interesting developments. So, I'll believe it when I see it. That said, most discussion on this doesn't predicate it on some major, unknown breakthrough: folks really seem to earnestly believe that LLMs as they exist today (or some linear progression thereof) will replace engineers. I do not.
[3] The Law of Leaky Abstractions
[4] Worth noting that Holmes might, at times, have preferred to work with the raccoons, though.