Emadideen Ghannam
6 months with Claude Code: what changed in my workflow
Honest, unhyped review. What actually changed in my day-to-day, what didn't, and the trap I keep falling into on Friday afternoons.
Six months ago I added Claude Code to my daily workflow at $work and across my side projects. I was skeptical. I have been programming for 14 years and most “AI for developers” demos in 2024 made things slower, not faster.
Six months in, I keep using it. Some things genuinely changed. Some things didn’t. And there is one trap I fall into every Friday afternoon.
What changed
Code review on PRs is faster. I used to read a 600-line PR top to bottom, leave six comments, miss the seventh thing. Now I drop the diff into Claude Code and ask for “the things I’d push back on if I were the reviewer.” It produces a draft list. I keep maybe 60 percent. The 40 percent I drop are still useful - they remind me which considerations I deliberately chose to ignore. Reviews of code built on patterns I designed myself benefit the most.
Refactors that used to take a Saturday now take 90 minutes. Last month I renamed a Prisma model across a side-project repo - schema, migrations, NestJS modules, frontend contracts in libs/contracts, the admin UI. The mechanical work used to be the slow part. Claude Code does the mechanical work in passes. I do the taste work - “no, the singular form there, the plural elsewhere, the API stays this version.”
I write more tests. The friction dropped. “Cover this NestJS controller” used to be a 20-minute commit I’d put off. Now it’s 5 minutes and the coverage actually improves.
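For a sense of what that five-minute commit looks like: here is a minimal sketch of the kind of controller test I mean. The controller, route, and payload are all hypothetical, and I have stripped the NestJS decorators so the sketch runs standalone - a real suite would build the controller with `@nestjs/testing` instead.

```typescript
// Hypothetical controller, stripped of NestJS decorators so the sketch
// runs on its own; a real suite would use Test.createTestingModule.
class StatusController {
  constructor(private readonly uptimeSeconds: () => number) {}

  // GET /status - returns a small health payload
  getStatus(): { ok: boolean; uptime: number } {
    const uptime = this.uptimeSeconds();
    return { ok: uptime >= 0, uptime };
  }
}

// The five-minute test: pin down the payload shape and one edge case.
const controller = new StatusController(() => 120);
const res = controller.getStatus();
console.assert(res.ok === true, "healthy when uptime is non-negative");
console.assert(res.uptime === 120, "passes uptime through unchanged");
```

The point is not the test itself - it is that the fixed cost of scaffolding this is now low enough that I actually write it.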
I talk to it like a junior pair. This is the biggest behavioural shift and the least flashy. When I have to explain a constraint to it - “this can’t add a new dependency, the team agreed last sprint” - I’m forcing myself to make the constraint legible. Half the time I realise the constraint was sloppy thinking.
What didn’t change
Architecture decisions still need a human review. Claude Code proposes plausible-sounding choices that mask the real tradeoff. It will recommend a microservice when you asked about a service. It will recommend Redis when you asked about a queue. Plausibility is the failure mode. A human reviewer (me) still has to ask “what risk is this absorbing” - the question I wrote a whole separate post about.
Reading new code: I’m still faster on my own. When a third-party SDK lands in the repo, or I’m onboarding to a vendor library, I read the source. Claude Code can summarise it, but the summary is sometimes wrong in load-bearing ways, and I cannot tell which times those are without reading anyway.
Greenfield code: I still write the structure. The first hour of a new module - the file layout, the seams between layers, the names - is mine. Claude Code is fast at filling in known shapes. It is not good at proposing the shape.
The trap
The trap is “ask, don’t think.” I notice it most on Friday afternoons. I have a hard problem, I’m tired, I open Claude Code and start asking. It produces something. I tweak it. I ship.
Monday I open the diff and realise I shipped a solution to a problem I never bothered to define. The code works. It is not the code I would have written if I had stopped to think first. It is the code Claude Code defaulted to because I never told it the constraint.
The fix that works for me: write the plan in plain English first, in a markdown file or a comment block, then delegate fragments. The act of writing the plan makes me think. Claude Code follows the plan well. It does not write the plan well.
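For concreteness, here is the shape of one of those plain-English plans. The feature, constraint, and file paths are invented for illustration - the real ones live in my repos - but the structure is what I actually write:

```markdown
## Plan: rate-limit the public search endpoint

Constraint: no new dependencies (team agreed last sprint).
Constraint: single instance in prod, so an in-memory counter is acceptable.

1. Add a guard in the search module that counts requests per IP per minute.
2. Return 429 with a Retry-After header when the limit is hit.
3. Cover the guard: under limit, at limit, counter reset after the window.

Out of scope: distributed rate limiting. Revisit if we go multi-instance.
```

Writing the two constraint lines is where the thinking happens. Everything below them is delegation.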
I picked this habit up after the third Friday in a row I shipped something I half-regretted.
How I actually use it
A few specifics, in case it helps:
- Permission mode set to ask before destructive operations. No `rm -rf`, no `git reset --hard`, no `git push --force` without an explicit OK.
- Plan mode for non-trivial changes. Anything more than a single-file edit, I make it write a plan first. Tasks list, files affected, risks. Then I review the plan. Then I let it execute.
- Subagents for parallelisable work. Code review on three different PRs at once - one subagent each. They run in parallel and report back. I consolidate.
- A `tasks/lessons.md` file in every repo. When Claude Code makes a mistake, I write the rule in that file. Next session it reads the file. The mistake rate drops over time.
- A `CLAUDE.md` per project with the project’s conventions, so I don’t repeat them every session.
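For a sense of scale, the lessons file is just short imperative rules. These entries are invented examples - yours will come from your own mistakes - but the shape is the real one:

```markdown
# tasks/lessons.md

- Never edit generated files under libs/contracts; change the schema and regenerate.
- Migrations are append-only. Do not rewrite an existing migration file.
- Tests build fixtures through the factory helpers, not inline object literals.
```

Each line exists because Claude Code got it wrong once. Writing the rule takes thirty seconds; not re-explaining it every session is the payoff.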
The broader pattern
Claude Code is not a junior engineer. A junior would learn from yesterday’s mistakes without me writing them down. Claude Code does not. So I write the lessons down. The investment in writing them is small. The compound benefit, after six months, is large.
I also stopped expecting it to know what’s in my head. The clearer I am about constraints, the better it performs. Vague prompts produce vague work. Specific prompts produce specific work. Same as briefing a contractor, except faster.
Take
It’s not a junior. It’s a fast, tireless pair who needs your taste.