Coding Was Never The Job

Last week was an interesting week at work. I moved to Canada midway through January and, despite being at Google for nearly 7.5 years, I’m a “Noogler” again and got to go through Noogler training. Almost everyone in my cohort last week was a new grad from the University of Waterloo or the University of Toronto (Canada, eh?), and it was really interesting hearing the questions they had about both working at Google and being out in industry for the first time.

In particular, I wound up having breakfast with one of these new-grad Nooglers, and he asked an interesting question: what does it “take” to go from an L3 engineer (early-career) to an L5 engineer (senior)? So we got to talk about impact and ambiguity—what they are, how you prove you have them and can work with them, and how you build the trust to be given more impactful (and more ambiguous) work—that kinda stuff. A few more Chromies came in and sat with us, we all started talking about this, and a consensus quickly emerged: the value of a senior+ engineer isn’t how fast they can code, or that they write the best code; it’s that they’ve seen stuff. They have deep product and domain expertise, understand software architecture and how to plan for software longevity, and leverage that to bring together multiple teams to solve hard problems for their organizations. Often their most important contributions aren’t code at all. It also happened to be AI Week.

AI Week at Chrome is kind of like a week-long internal AI conference. People give talks (I gave one on my evolving understanding of agentic coding), we had fireside chats with org leaders, and we were given time both to have conversations about AI and to play with it, without any expectation of making it work in a project. As with any “conference”, though, the hallway track is at least as interesting as the talks. Sure enough, I had some interesting conversations, and a pattern started to emerge. A lot of us who have gone deep on AI coding converged on the same thought, and it dovetailed nicely with the conversation I’d had with the Noogler: the actual job of a software engineer is systems design aimed at building long-term sustainable software products. The medium we practice this in, and where we learn it, is code, but writing code was never the end goal—code is a means to an end.

Now, this isn’t to say that code as craft, as creative expression, isn’t a thing. Or that you can’t have fun or get great satisfaction out of hand-crafting code. Or even that, for some developers, writing code is all they want or need from their jobs. Rather, it’s an observation driven by anecdata that, especially in software engineering organizations, the value of senior and staff+ engineers lies much less in the code they write than in the kinds of problems they can solve. Because by the time you’re at those levels, you’ve learned how to work within organizational structures to build consensus, tease apart ambiguity and break it down into digestible pieces, and prioritize work for maximum impact. You’re a mentor, a coach, and, yes, a technical leader. You’ll often find yourself reviewing much more code than you write. But most of this work, even the code reviews, isn’t a set of code problems; it’s a set of people problems. Some say there are only two hard problems in software engineering: naming things, cache invalidation, and off-by-one errors. But I think what’s getting lost in the discourse around AI coding (and software engineering in general) is that the hard problems are often not technical.

AI (really, coding-tuned LLMs) today can only do a portion of a software engineer’s work, and arguably the easiest bit—writing code. But AI tooling is just a pattern matcher. It’s not creative, it can’t think critically about what it’s doing, and it doesn’t have “taste” (learned, earned experience of what does and doesn’t work, and why) about what it’s writing. It’s an incredibly powerful pattern matcher in the right hands—I’m much more ambitious in my personal projects because of it—but it’s still just a pattern matcher. We need a deeper, grounded understanding of how these models work in order to have meaningful conversations about how AI affects our work, especially as software engineers.

This is my concern with the current AI coding discourse: it overestimates what AI is good at and underestimates the actual work of software engineers. It prioritizes coding productivity (hey, we’ve heard this one before!) and ignores the people work required to do software engineering at scale. Sure, you can have AI write up code to “do notifications” for your app, but coming up with something like Slack’s notification decision tree requires much more than just pattern-matching code.
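To make that concrete, here’s a minimal sketch of what even a tiny slice of notification logic looks like. This is hypothetical and heavily simplified—the names and rules are mine, not Slack’s actual logic—but notice that every branch encodes a product decision about when it’s acceptable to interrupt a human, a decision someone had to argue for, test with users, and defend. That judgment is the hard part, not the code:

```python
from dataclasses import dataclass

# Hypothetical, heavily simplified notification logic, loosely inspired by
# Slack's published notification flowchart. All names and rules here are
# illustrative assumptions, not Slack's real behavior.

@dataclass
class User:
    dnd_active: bool = False          # has the user enabled "Do Not Disturb"?
    notify_everything: bool = False   # opted into all channel activity?

@dataclass
class Channel:
    muted: bool = False               # has the user muted this channel?

@dataclass
class Message:
    is_dm: bool = False               # direct message to the user?
    mentions_user: bool = False       # does it @-mention the user?

def should_notify(user: User, channel: Channel, msg: Message) -> bool:
    if user.dnd_active:
        return False                  # product decision: DND always wins
    if channel.muted and not msg.is_dm:
        return False                  # muting silences channels, even mentions
    if msg.is_dm or msg.mentions_user:
        return True                   # direct attention surfaces by default
    return user.notify_everything     # otherwise, defer to user preference
```

Each `return` is a policy choice. Should a mention override a muted channel? Should DMs pierce Do Not Disturb? An AI can write this function in seconds; deciding what it should do is the work.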

“AI layoffs” often result in rehiring for the same positions once leaders learn that AI hype can’t replace the work that software engineers actually get done day-to-day. And a reduction in early-career hiring will dry up the pipeline through which engineers gain the non-coding skills needed to become senior. We need a conversation about what software engineering looks like in an age where writing code is no longer the bottleneck in software production, but where writing good code, driven by taste and experience, becomes even more important for solving impactful, ambiguous problems at scale. We’ll need to jump off the AI hype train and ground ourselves in what AI can and can’t do before we can understand the art of the possible. Only then can we begin to talk about the future of software engineering.

I do believe AI will change software engineering, but not because people won’t be writing code anymore.

Coding was never the job.