Don Juan et la mort (1926) by Alexandra Exter, from Décors de théâtre (1930), via MoMA.
In online spaces dedicated to coding, I often encounter two archetypal tribes, each with its own set of go-to talking points.
The bears are convinced that LLMs are destroying software engineering, filling codebases with mistake-filled code that no one understands, and that no competent engineer is using LLMs because their problems are too complex for the models to understand.1
The bulls, on the other hand, embrace LLMs for nearly every task, seeing them not only as a way to automate tedium and tinker with experiments, but as a perfectly good, and likely superior, way to write production code at their jobs.
These two groups collide in the comments section of your favorite website, exchanging insults and downvotes, absolutely baffled that anyone could be a member of the other group.
I leaned bearish while the tools were still immature, but those days are nearing their end. The models are powerful enough that to ignore them is to rob yourself of a valuable skill in a contracting job market. And it is a skill. We may not need to prompt-engineer our inputs to death just to get the models to behave, but we still need to learn how to use LLMs effectively, just as we had to learn our IDEs, browsers, and command lines.
When I peeked my hibernating bear head out of my cave, I found that my peers and colleagues had been getting a lot done without me, and I’ve since reinvested. I like to think I’m an LLM moderate now: I try to leverage their strengths while keeping an eye out for their weaknesses, and I stay abreast of advancements with cautious optimism.
Rather than fuel the flame wars or ramble on (too much) about the philosophical implications, below I’ve tried to lay out in concrete terms the ways I get things done with LLMs.
I’m a huge fan of Simon Willison’s “llm” command line tool. I pop open a terminal with alt-tab, and `llm chat` instantly starts a conversation with Claude, ChatGPT, local models, and so on. I use this throughout the day for my primary LLM use case: treating the model as a personalized one-on-one tutor. (When I’m on the go, I use the Claude app for Android.) Whether it’s a topic that merely strikes my curiosity (“what are the different cables on a telephone pole?”) or something I need to understand better for a project (“what is the difference between a message queue, pub/sub, and an event stream?”), I lean heavily on language models as an educational resource.
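If you want to try the same setup, it only takes a couple of commands. Here’s a minimal sketch (the `llm-anthropic` plugin and the model alias are just what I happen to use; swap in whatever provider you like):

```sh
# Install the llm CLI (pipx keeps it isolated from other Python tools)
pipx install llm

# Add a provider plugin and store its API key; llm-anthropic covers Claude
llm install llm-anthropic
llm keys set anthropic

# See which models you have available, then start an interactive conversation
llm models
llm chat -m claude-3.5-sonnet

# Or ask a one-off question without entering chat mode
llm "what are the different cables on a telephone pole?"
```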
“But Dan,” I can hear you say, “these models make things up all the time!” It’s true, but I’ve found it’s getting better, and you develop a bit of a bullshit detector that needles you to double-check anything that seems off. We use that same skill all the time on the internet, right?
LLMs haven’t replaced my Wikipedia addiction; they complement it. Wikipedia and web searches in general are great for looking up facts, but I find that talking through concepts with the model does more for my understanding.
Early on, ChatGPT could blow people away by spitting out a surprisingly competent React app, but it’s the more tightly integrated tools that have let programmers iterate on existing codebases: fixing bugs, refactoring, and building new features.
I’ve tried a few of them: GitHub Copilot and Cursor have both gotten extremely good, and I use both in my work. For side projects (where I’m the one paying), I like the “build-your-own assistant” flexibility of Continue, but I’ve also been playing around with Claude Code and I love the experience (just keep an eye on its API usage).
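If you want to give Claude Code a spin yourself, the setup is minimal, and it has a built-in way to watch your spend. A quick sketch, based on Anthropic’s documented install command and `/cost` slash command:

```sh
# Install Claude Code globally, then launch it from your project root
npm install -g @anthropic-ai/claude-code
cd your-project && claude

# Inside the session, type /cost to see what you've spent so far.
# This is how I keep an eye on API usage.
```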
I have to admit that I simply can’t abide the always-on inline suggestions; I find them so disruptive and annoying that I disabled them almost immediately. Instead, I use a combination of inline chat (“add a docstring to this function”) and the chat/agent sidebar.
Chat mode is for talking through ideas: “how might I go about refactoring this to support ABC scenario? would XYZ work?” Agent mode excels at actually executing changes. I typically prompt it to work step by step, because I like to read, review, and understand each diff before approving it (I often edit or reject diffs, explaining my feedback). Asking for one huge diff that attempts the entire change at once has a lower success rate and is harder to digest.
The aforementioned Simon Willison has a good quote in a recent New York Times piece about the way AI is transforming coding jobs: “It’s more fun to write code than to read code,” and coding with LLMs is more of the latter. The idea of feeling like a “bystander” strikes a chord with me.
In my free time I’m often coding for the joy of it, so when I prompted Claude to help me sketch a design for something I wanted to try and it damn near implemented the whole thing, I couldn’t help but feel a little deflated: where’s the joy in that? On the other hand, I’m typically happy to tackle tedium or get unstuck using LLMs at work. I imagine some photographers were eager to abandon the smelly, temperamental darkroom for Photoshop, while others couldn’t help but feel something important was lost in translation. One man’s joy is another’s drudgery, and maybe we’ll be called Luddites for getting sentimental about writing code “by hand.”
We’re reckoning with these questions as an industry. In the meantime, I’ve developed a rule of thumb to preserve the joy of coding: let the LLM help you with the boring parts, save the fun parts for yourself, and have even more fun using the models to do things you wouldn’t have been able to do on your own.
Something that didn’t resonate with me in the Times piece was Harper Reed’s comment that we may no longer need to be “precious” about understanding the code. In fact, I’d argue that since the AI explosion is likely to bring software into more facets of our lives than ever (AI therapists, brokers, interviewers), the importance of understanding what we’re putting out into the world is only growing.
Surprisingly enough, given that they are language models, writing is where I’ve gotten the least value from LLMs. I’m not interested in having a model produce text on my behalf: my writing is an expression of my thoughts, and I want it to come directly from me. Plus, the prose always comes out with that obsequious LLM voice. I tried using Claude Code to edit this post for brevity (always a struggle for me) and found it eager to excise multiple paragraphs at a time and rewrite my points in a way that felt foreign to me. Maybe I’m just being “precious,” maybe those edits are what I needed, but for now I’ll draw the somewhat arbitrary line here.