AI-generated code handles about half of a modern web app in 2026. The other half, the part that matters most, still needs a human architect who owns the full context of the codebase. This post is the take I have earned from shipping 13 production apps with AI-assisted workflows, and from debugging plenty of AI-generated code that looked fine until it did not.
## What AI Does Well in 2026
The routine work: CRUD endpoints, auth scaffolding, database migrations, form validation, typed API clients, test fixtures, and documentation. These are pattern-matching tasks. The model has seen ten thousand variations and produces a correct one in 3 to 10 seconds.
Specifically, AI tooling (Claude Code, Cursor, GitHub Copilot) will handle:
- Boilerplate for a new Next.js route with typed inputs and outputs
- Prisma or Drizzle schema migrations from a natural-language description
- Zod validation schemas matching existing TypeScript types
- Tailwind component styling from a Figma screenshot
- Test cases covering happy path plus a few edge cases
- Shell scripts, CI YAML, config files, documentation
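To make the validation item concrete: the pattern a tool generates with Zod's `z.object(...)` is a runtime schema that mirrors a compile-time type. Here is a dependency-free sketch of the same idea using a plain type guard (the `Invoice` type and field names are invented for illustration, not from any real codebase):

```typescript
// Hypothetical type, standing in for an existing application type.
interface Invoice {
  id: string;
  amountCents: number;
  paid: boolean;
}

// Runtime check that an unknown value matches Invoice.
// A generated Zod schema performs the same field-by-field validation.
function isInvoice(value: unknown): value is Invoice {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.id === "string" &&
    typeof v.amountCents === "number" &&
    Number.isInteger(v.amountCents) &&
    typeof v.paid === "boolean"
  );
}

// Usage at an API boundary: parse untrusted JSON, then narrow the type.
const payload: unknown = JSON.parse(
  '{"id":"inv_1","amountCents":4200,"paid":false}'
);
if (isInvoice(payload)) {
  // payload is narrowed to Invoice inside this branch
  console.log(payload.amountCents);
}
```

This is exactly the kind of mechanical type-to-validator mapping that models get right on the first pass, because the output is fully determined by the input type.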
On a typical project, this is 40 to 60 percent of the code by line count. It is not the part that matters most to the product outcome, but it is real work that previously took real hours.