Personal growth

Don't outsource your thinking

An honest look at how AI actually fits into my daily workflow as a software developer in 2026.

Chris Brett

In the beginning

Let's rewind a couple of years to when I was hand-rolling everything. Every component, every decision, every rabbit hole: just me, whatever docs I needed, and Google. I was then introduced to Cursor and shown its tab completion, which was genuinely useful at the time. A suggestion here, an autocomplete there: it was fast and accurate enough to be helpful, but it never felt like a game-changer, just a little boost to my existing process. What I did appreciate about the Cursor team, though, was their attention to detail around the UI/UX.

Cursor then released their Composer tool, and I immediately found myself asking more and more questions about code I hadn't built, trying to get an overview of how any new features or changes I'd made would impact the rest of the codebase. I was still doing the work, but I had a partner to talk it through with, to get a second opinion on my thinking.

I could see then that AI was becoming part of the software development future, but I didn't understand just how quickly things would change.

Then Anthropic released their Opus models and suddenly things shifted, like a fault line giving way. Composer felt infantile by comparison, and sure, I could use different models within it, but there was just something raw and direct about using Claude Code in the terminal.

Plan, plan and plan some more

Most conversations about AI and developers fixate on code generation. How much is it writing? Is it accurate? Are we becoming deskilled? I get why people ask those questions, but they miss what actually makes these models useful to me. Planning.

Before Claude Code and the Opus models, using AI gave me the same vibe I had when my children were first learning to sit up on their own. I had to stay close, verify everything, second-guess every output. The moment that changed was when I realised I could have a proper conversation first: write a plan, refine it, stress-test it, all before a single line of code was touched. That shift from point-and-shoot tool to genuine planning partner was what made me pursue AI in a more disciplined way.

Now the process looks different depending on the task. A bug? I'll feed the scenario in, go hunting through the codebase while it's thinking, then have a back-and-forth to tighten up the problem. A feature? I'll open a conversation in plan mode, discuss the problem space and potential outcomes, then refine based on how thoroughly the issue was already documented.

For the AI coaching app I've been building, I go further. I'll drop in an initial thought and then use the /grill-me skill by Matt Pocock, prompting Claude to interrogate my own problem statement, surface edge cases I haven't considered, and push back on assumptions I'm treating as facts. It's adversarial by design and that's the point. I don't need a yes man.

A copilot but not a Copilot

I've started thinking about AI as a colleague rather than a tool. Not because it flatters the technology, but because it changes how I interact with it. You don't just fire instructions at a colleague and accept whatever comes back. You have a conversation. You question. You push and loop back around.

I've pre-instructed Claude to push back where it feels necessary, to not pander to me, and to be straight even when I'm confident. And that helps.
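In practice that's nothing elaborate, just a handful of standing lines in a CLAUDE.md memory file that Claude Code reads at the start of a session. The mechanism and wording below are illustrative rather than an exact quote from my setup:

```markdown
## Working style
- Push back when you think my reasoning is wrong, even if I sound confident.
- Don't pander or soften your answers to match my framing.
- If you agree with me, explain why; don't just defer.
```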

There was a recent example where I was planning out work to make sure I didn't get handed an astronomical Anthropic API bill from users chatting with their coach. I wanted to apply daily message limits and worked with Claude Opus 4.6 to sketch out an implementation. It wasn't bad, but I had concerns about where it wanted to store those limits: alongside the user record. I pushed back, arguing for a separate table so there was no way a routine user profile update could accidentally expose or modify quota values. Claude pushed back, calling it overkill for such a small app.

We went back and forth, and Claude's argument was pragmatic: fewer tables, less complexity, the same outcome. Mine was about correctness: quotas are an admin concern, not a user concern, and mixing them invites the kind of accidental exposure that could cause issues later on. Eventually Claude came around, and the ai_quota table landed in the schema exactly as I'd envisioned it.
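To make the shape of that decision concrete, here's a rough sketch of the separation. This isn't the app's actual code: the type names, the limit value, and the in-memory store are all illustrative, standing in for what really lives in the database as the ai_quota table.

```typescript
// Illustrative sketch only: the real quota data lives in a dedicated
// ai_quota table, not in memory, and these names are hypothetical.

type UserProfile = {
  id: string;
  displayName: string;
  // No quota fields here: routine profile updates can never touch limits.
};

type AiQuota = {
  userId: string;
  day: string; // e.g. "2026-02-14"
  messagesUsed: number;
};

const DAILY_MESSAGE_LIMIT = 50; // illustrative value

// Quotas are keyed and stored separately from the user record.
const quotas = new Map<string, AiQuota>();

// Returns true if the user may send another coach message today,
// consuming one unit of their daily allowance if so.
function tryConsumeDailyMessage(userId: string, now = new Date()): boolean {
  const day = now.toISOString().slice(0, 10);
  const key = `${userId}:${day}`;
  const quota = quotas.get(key) ?? { userId, day, messagesUsed: 0 };

  if (quota.messagesUsed >= DAILY_MESSAGE_LIMIT) {
    return false; // limit reached for today
  }

  quota.messagesUsed += 1;
  quotas.set(key, quota);
  return true;
}

// Profile updates go through a separate path that never sees quota data.
function updateDisplayName(profile: UserProfile, displayName: string): UserProfile {
  return { ...profile, displayName };
}
```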

The point isn't that I was right. The point is that I had to argue for it. That friction is useful. Without the pushback I might have just accepted the simpler path and moved on. With it, I had to actually think through why isolation mattered. That's the kind of collaboration that makes the output better.

But there's a catch. When Claude eventually agreed, I couldn't be entirely sure whether it was genuinely persuaded or just smoothing things over. That ambiguity is always there. The model is trained to be helpful, and helpful can shade into agreeable. I have to stay alert to it: notice when something feels too easily resolved, and drag the honesty back out. That's on me, not the AI. It's my responsibility to know when to pause the work, question the output, and dig deeper, even at the cost of momentum.

Beyond the code

Here's what's super important to remember. AI has given me access to things I always wanted to pursue but never had the bandwidth or time for.

Writing, for example, has never been my strong point. This article exists because I gave Claude a scenario, told it to interview me like a senior copywriter, and had it push me until the raw material was there. I still had to think. I still had to answer. The ideas are mine, along with various paragraphs written, rewritten, and written again by me. But without that process I'd have stared at a blank page and never moved on.

The same principle applies to learning. When I'm stuck on a problem that would previously have cost me hours of searching, guessing, and going in circles, I can now ask unlimited questions. I can understand the why behind a solution rather than just copying the what. That's not outsourcing thinking. That's actually doing more of it.

Do I worry sometimes that I'm leaning too hard on it? Yes. That's a fair concern and I keep it in mind. But the alternative, treating AI as a cheat code, a shortcut, something to point and shoot, is the path that actually hollows you out. If you're not feeding the AI well, not questioning its outputs, not bringing your own experience and judgement to every interaction, then you're not really working with it. You're just generating noise faster.

What it says about how I work

I'm not writing this to evangelise AI. I'm writing it because how someone uses AI tells you a lot about how they think.

Do they use it to avoid hard problems or to get into them faster? Do they accept the first output or interrogate it? Do they know when to pause and when to push? Do they have the self-awareness to notice when they're being told what they want to hear?

For me, AI has become part of how I plan, how I learn, how I stress-test my own ideas, and yes, how I write. But the judgement is still mine. The questions are still mine. The responsibility for the work is still mine.

That hasn't changed. If anything, it's more important now than it ever was.