Behind the Scenes

My AI Development Workflow with Claude Code

Abe Reyes
February 6, 2026 · 7 min read


I built NeedThisDone.com in two and a half months. 1,300+ commits. 74 API routes. 160+ React components. 71 test files. All while running a consulting practice and learning new tech stacks.

That pace isn't sustainable through caffeine and late nights. It's sustainable because I treat AI as a force multiplier, not a replacement. Here's the workflow that makes it work.

Why AI-Assisted Development Matters in 2026

Let's get one thing straight: AI doesn't replace developers. It replaces the tedious parts of development.

What AI handles:

  • Writing boilerplate (API routes, type definitions, database migrations)
  • Generating test scaffolding (E2E tests, accessibility tests)
  • Refactoring repetitive patterns into reusable utilities
  • Drafting documentation and commit messages

What I still do:

  • Architecture decisions (which database for what data?)
  • UX design (how should this flow feel?)
  • Business logic (what rules govern subscriptions?)
  • Security review (are these auth checks sufficient?)

The productivity gain isn't "AI writes the code for me." It's "AI handles the boring stuff so I can focus on the hard problems."

My Claude Code Workflow

I use Claude Code, not ChatGPT or GitHub Copilot. Here's why: Claude Code operates at the project level with full codebase context, not just the current file. It reads my coding standards, design system, and project memory before suggesting changes.

Step 1: Project Instructions (CLAUDE.md)

Every project starts with a CLAUDE.md file in the root directory. This is my rulebook for AI:

```markdown
# IFCSI Framework
When writing anything—cover letters, proposals, marketing copy, even commit messages—move through these five tones in order:
1. Inviting — Start with something that makes them want to keep reading
2. Focused — Get to the point
3. Considerate — Show you understand their situation
4. Supportive — Back it up with examples
5. Influential — Land the plane with next steps

# Quick Reference
| Task | Command |
|------|---------|
| Start dev server | cd app && npm run dev |
| Run tests | cd app && npm run test:e2e |
| Draft commit | Run /dac |
```

This file also includes design system rules (BJJ belt color progression for NeedThisDone), testing patterns (TDD-first), and deployment guidelines. Claude reads this every time.

Step 2: TDD Cycle with AI

I follow strict test-driven development, even with AI assistance:

RED → GREEN → REFACTOR

  1. RED: I describe the failing test I want.
    • "Write an E2E test that verifies typing in the FAQ answer field updates the content"
    • Claude generates the Playwright test; I run it, and it fails.
  2. GREEN: Claude suggests the minimal fix.
    • It updates InlineEditContext.tsx to sync selectedItem.content with pageContent.
    • I review, and the test passes.
  3. REFACTOR: I ask Claude to clean up.
    • "Extract this state synchronization logic into a helper function"
    • Claude refactors; the tests still pass.

The key: I run the tests myself. Claude suggests code, I verify it works. This catches hallucinations immediately.
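The helper extracted in the REFACTOR step might look something like this. This is a sketch with illustrative names and types (the real InlineEditContext code will differ):

```typescript
// Hypothetical sketch of the extracted state-sync helper.
// The item shape and function name are illustrative, not the real code.
interface EditableItem {
  id: string;
  content: string;
}

// Keep a selected item's content in sync with the latest page content.
// Returns a new object (no mutation) so React state updates propagate,
// and returns the same reference when nothing changed.
function syncSelectedContent(
  selected: EditableItem | null,
  pageContent: Record<string, string>
): EditableItem | null {
  if (selected === null) return null; // nothing selected yet
  const latest = pageContent[selected.id];
  if (latest === undefined || latest === selected.content) {
    return selected; // nothing to sync
  }
  return { ...selected, content: latest }; // sync with page content
}
```

Returning the same reference on no-op matters in React: it lets downstream memoized components skip re-rendering.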

Step 3: Code Review Loop

Claude isn't perfect. Every suggestion goes through this filter:

  1. Does this follow project conventions? (Check against CLAUDE.md rules)
  2. Does this introduce security risks? (Review auth checks, input validation)
  3. Does this break existing functionality? (Run E2E tests)
  4. Is this the simplest solution? (Avoid over-engineering)

If the answer to any is "no," I reject the suggestion and prompt Claude to try again with constraints:

"This approach introduces a new dependency. Refactor using only existing utilities in app/lib/."

Effective Prompting Strategies

Good prompts get good results. Here's what works:

Be Specific with Context

Bad prompt: "Fix the cart bug"

Good prompt: "The CartContext is not updating when I add a subscription product. The addItem function should create a Medusa cart if one doesn't exist, then add the variant. Check app/context/CartContext.tsx and ensure the flow matches the pattern in lib/medusa-client.ts."
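The pattern that prompt describes (create a cart lazily, then add the variant) reduces to something like this sketch. The in-memory "store" stands in for the real Medusa client, and all names are illustrative:

```typescript
// Simplified sketch of the lazy-cart addItem flow described above.
// The in-memory Map stands in for calls through lib/medusa-client.ts.
interface CartLine {
  variantId: string;
  quantity: number;
}
interface Cart {
  id: string;
  lines: CartLine[];
}

let nextCartId = 1;
const carts = new Map<string, Cart>();

function createCart(): Cart {
  const cart: Cart = { id: `cart_${nextCartId++}`, lines: [] };
  carts.set(cart.id, cart);
  return cart;
}

// Create a Medusa cart if one doesn't exist, then add the variant.
function addItem(cartId: string | null, variantId: string, quantity = 1): Cart {
  const cart =
    cartId !== null ? carts.get(cartId) ?? createCart() : createCart();
  const existing = cart.lines.find((l) => l.variantId === variantId);
  if (existing) {
    existing.quantity += quantity; // merge duplicate lines
  } else {
    cart.lines.push({ variantId, quantity });
  }
  return cart;
}
```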

Provide Examples from the Codebase

Bad prompt: "Create a new API route for product search"

Good prompt: "Create a new API route at /api/shop/search that queries Medusa products by title. Follow the pattern in /api/pricing/products/route.ts for error handling and response formatting."

Iterate on Failures

If Claude's first attempt doesn't work, I give it the error message:

"This failed with TypeError: Cannot read property 'id' of undefined. The issue is that selectedItem is null when the user first clicks an item. Add a null check before accessing selectedItem.content."
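The resulting fix is tiny but representative. A minimal sketch (the item shape is illustrative, not the real component's types):

```typescript
// Sketch of the null-check fix described in the error report above.
interface Item {
  content: string;
}

// On first click nothing is selected yet, so guard before accessing
// selectedItem.content and fall back to an empty string.
function readContent(selectedItem: Item | null): string {
  return selectedItem?.content ?? "";
}
```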

What Claude Code Excels At

After 1,300+ commits using Claude Code, here's where it shines:

Boilerplate Generation

Creating a new API route with validation, error handling, and TypeScript types used to take 20 minutes. Now it takes 2.

Prompt: "Create an API route at /api/admin/campaigns with GET (list campaigns) and POST (create campaign). Use Zod validation for the request body. Follow the pattern in /api/admin/reviews."

Result: Fully typed route with error handling, CORS headers, admin auth checks.
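The core of a route generated from that prompt looks roughly like this. It's a framework-agnostic sketch so it stands alone: in the real project the handlers would be Next.js route exports, and `validateCampaign` plays the role a Zod schema would play:

```typescript
// Sketch of the generated route's logic, decoupled from Next.js.
// All names and shapes are illustrative assumptions.
interface Campaign {
  name: string;
  budget: number;
}
interface ApiResponse {
  status: number;
  body: unknown;
}

const campaigns: Campaign[] = [];

// Hand-rolled guard standing in for a Zod schema's safeParse.
function validateCampaign(input: unknown): Campaign | null {
  if (typeof input !== "object" || input === null) return null;
  const { name, budget } = input as Record<string, unknown>;
  if (typeof name !== "string" || name.length === 0) return null;
  if (typeof budget !== "number" || budget < 0) return null;
  return { name, budget };
}

// GET /api/admin/campaigns — list campaigns after an admin auth check.
function listCampaigns(isAdmin: boolean): ApiResponse {
  if (!isAdmin) return { status: 403, body: { error: "Forbidden" } };
  return { status: 200, body: campaigns };
}

// POST /api/admin/campaigns — auth check, validate, then create.
function createCampaign(isAdmin: boolean, input: unknown): ApiResponse {
  if (!isAdmin) return { status: 403, body: { error: "Forbidden" } };
  const campaign = validateCampaign(input);
  if (campaign === null) {
    return { status: 400, body: { error: "Invalid campaign" } };
  }
  campaigns.push(campaign);
  return { status: 201, body: campaign };
}
```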

Test Scaffolding

I write the test assertions, Claude generates the setup:

Prompt: "Write a Playwright test that navigates to /shop, adds a product to cart, and verifies the cart count updates. Use the existing CartFixture from e2e/fixtures/cart-fixture.ts."

Result: E2E test with proper page object patterns, ready to run.

Refactoring Repetitive Patterns

When I notice I'm copying the same 20 lines across 5 components, Claude extracts it into a reusable hook:

Prompt: "I'm using this mergeWithDefaults logic in 5 page components. Create a useEditableContent hook in app/lib/hooks/ that handles this pattern."

Result: Hook with memoization, TypeScript generics, and usage examples.
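The heart of that hook is the merge logic itself, which you could sketch as a generic utility. This is a hypothetical reading of what mergeWithDefaults does (the real hook also wraps it in memoization):

```typescript
// Hypothetical sketch of the merge logic a useEditableContent hook wraps.
// Saved overrides win; anything missing falls back to the defaults.
function mergeWithDefaults<T extends Record<string, unknown>>(
  defaults: T,
  overrides: Partial<T> | null | undefined
): T {
  if (overrides == null) return defaults; // same reference when no overrides
  // Drop explicitly-undefined values so they don't clobber defaults on spread.
  const defined = Object.fromEntries(
    Object.entries(overrides).filter(([, v]) => v !== undefined)
  );
  return { ...defaults, ...defined } as T;
}
```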

Documentation

I hate writing docs. Claude loves it.

Prompt: "Generate JSDoc comments for all functions in app/lib/medusa-client.ts."

Result: Fully documented API client with parameter descriptions and return types.
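The output reads something like this. The function below is made up to show the shape of the generated JSDoc, not the actual medusa-client.ts code:

```typescript
/**
 * Fetch a product by its handle from the storefront API.
 * (Illustrative sketch, not the real medusa-client.ts implementation.)
 *
 * @param handle - The URL-safe product handle, e.g. "blue-belt-tee".
 * @param region - Optional region code used for localized pricing.
 * @returns The matching product, or null if the handle is empty.
 */
function getProductByHandle(
  handle: string,
  region?: string
): { handle: string; region: string } | null {
  if (handle.length === 0) return null;
  return { handle, region: region ?? "us" };
}
```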

What Still Needs Human Expertise

AI can't replace judgment. Here's where I still do all the thinking:

Architecture Decisions

"Should user data live in Supabase or Medusa?" — This requires understanding the full system, data access patterns, and future scaling needs. Claude can explain tradeoffs, but I make the call.

UX Design

"How should the checkout flow feel?" — AI can suggest patterns, but designing delightful experiences requires empathy and iteration. I prototype, test with users, and refine.

Business Logic

"What happens when a subscription fails to renew?" — This involves business rules, customer communication, and edge cases. Claude can write the code once I define the logic.

Security Review

"Are these admin auth checks sufficient?" — AI might suggest if (user.role === 'admin'), but I verify it's checking against server-side session data, not client-side cookies.
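In code, that distinction looks like this. A minimal sketch: the in-memory session store stands in for the project's real server-side session lookup:

```typescript
// Sketch contrasting a client-trusting role check with a server-side one.
// The Map below stands in for a real server-side session store.
interface Session {
  userId: string;
  role: "admin" | "user";
}

const serverSessions = new Map<string, Session>([
  ["sess_abc", { userId: "u1", role: "admin" }],
]);

// BAD: trusts a role value the client sent, e.g. read from a cookie the
// user fully controls and can simply set to "admin".
function isAdminUnsafe(clientClaimedRole: string): boolean {
  return clientClaimedRole === "admin";
}

// GOOD: only trusts the opaque session ID, and reads the role from
// server-side state the client cannot tamper with.
function isAdminSafe(sessionId: string): boolean {
  return serverSessions.get(sessionId)?.role === "admin";
}
```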

Productivity Metrics: Real Examples

Here's the speed difference on actual NeedThisDone.com features:

| Task | Without AI | With Claude Code |
|------|-----------|------------------|
| Create loyalty points system (API + UI + tests) | 8 hours | 2 hours |
| Refactor 20 components to use centralized color system | 4 hours | 45 minutes |
| Write E2E tests for checkout flow | 3 hours | 1 hour |
| Generate blog post markdown templates | 2 hours | 10 minutes |
| Draft 10 commit messages with context | 30 minutes | 5 minutes |

Estimated time saved over 2.5 months: 120+ hours.

That's three full work weeks. Instead of typing boilerplate, I spent that time on architecture, UX polish, and writing blog posts about what I learned.

The Honest Limitations

AI-assisted development isn't magic. Here's what still frustrates me:

Hallucinations: Claude sometimes invents APIs that don't exist. I catch this by running tests immediately.

Context limits: On large refactors, Claude loses track of changes across 10+ files. I break these into smaller prompts.

Over-engineering: Claude loves abstractions. I have to push back with "keep it simple, we only have 2 use cases."

Outdated knowledge: Claude's training data cuts off in early 2025. New Next.js 15 features require me to provide docs.

Get AI-Powered Development

This workflow works because I treat Claude Code as a senior pair programmer, not a junior dev I don't review. The productivity gains are real, but only if you maintain quality standards.

If you want a custom app built with this methodology—fast iteration, production quality, modern tech stacks—I can help.

View My Services · Get in Touch

Let's ship something great.
