This site — the one you're reading right now — went from a 42-route dark-mode marketing site to a 67-page production platform in a single working session. 14 commits. 20+ parallel Claude Code agents. Chrome MCP running the entire time for live auditing. Here's exactly what happened, what broke, and what the numbers look like on the other side.
The Starting Point
ModernGrindTech.com already existed before this session. It was a Next.js 16 app on Vercel with Tailwind v4, dark-mode-only design, and 42 routes. It had the core pages — homepage, about, services, portfolio — but it was missing the connective tissue that turns a marketing site into something that actually works. No analytics. No blog. No interactive tools. No mobile CRO. The console had 90+ CORS errors from a Discord widget. Lighthouse scores were decent but not clean.
The site looked good. It didn't perform. There's a difference.
The Session: 14 Commits, 20+ Agents
I opened Claude Code and started dispatching agents in parallel. Not sequentially — simultaneously. The key to a productive session like this is task decomposition. Every agent gets an independent unit of work with no dependencies on other agents' output. If agent 3 needs agent 1's output, you're serializing your workflow and losing the entire benefit of parallelism.
Here's what the dispatch pattern looked like across the session:
- Wave 1: Analytics infrastructure (Plausible setup, PageViewTracker component, ScrollReadTracker component), blog system scaffolding (posts.ts metadata, [slug] dynamic route, blog index page), and the CORS bug investigation
- Wave 2: First batch of blog posts (4 posts written in parallel), /estimate wizard build, ShipTicker component, LiveBuildFeed component
- Wave 3: Remaining blog posts (10 more in parallel), filterable case studies page, mobile CRO audit and fixes
- Wave 4: Performance optimization pass, Lighthouse audit, console error cleanup, final integration testing
Chrome MCP was open the entire time. After every commit, I had an agent take a screenshot, run a Lighthouse audit, and check the console for errors. Live feedback loops. No guessing whether something worked — verify it immediately and course-correct if it didn't.
14 commits landed. Each one was a coherent unit: a feature, a fix, or a content batch. No "WIP" commits. No "fix typo" cleanup passes. Each commit moved the site forward in a measurable way.
Analytics: Plausible Custom Events
The site had zero analytics before this session. No idea how many people visited, what they read, or where they dropped off. I chose Plausible over Google Analytics for three reasons: no cookie banners required, lightweight script (under 1KB), and the custom events API is dead simple.
The implementation has three layers:
PageViewTracker: A client component that fires on route changes. Next.js 16 with the App Router doesn't trigger full page loads on navigation, so the default Plausible script misses client-side transitions. The PageViewTracker hooks into Next.js routing events and sends a pageview event on every navigation. Simple problem, simple fix — but you have to know it's a problem in the first place.
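Stripped of the React wiring, the core of that tracker is a dedupe over the current pathname. Here's a framework-free sketch — the function names are mine for illustration, not the component's actual internals; in the real component this logic would run inside a useEffect keyed on usePathname(), with the send function calling window.plausible:

```typescript
// The send function is injected so the logic stays testable; in production
// it would call window.plausible('pageview', ...) from the Plausible script.
type SendPageview = (path: string) => void;

function createPageviewTracker(send: SendPageview) {
  let lastPath: string | null = null;
  // Called on every route change; returns true when a pageview was sent.
  return (path: string): boolean => {
    if (path === lastPath) return false; // re-render of the same route: skip
    lastPath = path;
    send(path);
    return true;
  };
}
```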
ScrollReadTracker: This one fires a custom event when a visitor scrolls past 75% of the page content. Not 100% — nobody scrolls to the absolute bottom. 75% depth is the threshold I use for "this person actually read the content" versus "this person bounced after the first paragraph." On blog posts, this gives me a real read-through rate instead of just pageview counts.
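The threshold check itself is simple enough to sketch. Names and signatures here are illustrative, not the component's actual code — depth is measured as how far the bottom of the viewport has travelled through the page:

```typescript
// Fraction of the page the visitor has scrolled past, 0..1.
function scrollDepth(scrollY: number, viewportHeight: number, pageHeight: number): number {
  if (pageHeight <= viewportHeight) return 1; // page fits in one viewport
  return Math.min(1, (scrollY + viewportHeight) / pageHeight);
}

// Fire-once guard: returns true exactly the first time depth crosses the
// threshold. In the real component, that's where the custom event fires.
function createReadTracker(threshold = 0.75) {
  let fired = false;
  return (scrollY: number, viewportHeight: number, pageHeight: number): boolean => {
    if (fired) return false;
    if (scrollDepth(scrollY, viewportHeight, pageHeight) >= threshold) {
      fired = true;
      return true;
    }
    return false;
  };
}
```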
Custom goal events: CTA clicks, estimate form submissions, case study filter interactions, and blog category selections all fire named events. Every interactive element on the site now reports what visitors actually do, not just what pages they land on.
Total implementation time for the analytics system: about 20 minutes across two agents. One agent built the components, the other wired up the custom events across existing pages. Plausible's dashboard started showing data within minutes of deployment.
The CORS Bug: 90 Errors from One Widget
Open DevTools on the old version of the site and the console was a wall of red. 90+ CORS errors. Every single one came from the same source: a Discord server widget embedded in a client component.
The Discord embed widget makes cross-origin requests to Discord's API to pull server member counts, online status, and the invite link. Embed it inside a Next.js client component and it still pre-renders on the server first, which App Router client components do by default. The server render attempts those cross-origin fetches in a Node.js context where browser CORS policies don't apply, and the resulting hydration mismatch between server and client output triggered a cascade of CORS warnings in the browser console.
The fix was straightforward: wrap the Discord widget in a proper client boundary with dynamic import and ssr: false. The widget only renders on the client, the cross-origin requests only happen in the browser where they're supposed to, and the console goes silent. One component change eliminated 90+ errors. That's the kind of bug that looks catastrophic in the console but has a surgical fix once you trace the root cause.
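The boundary itself is a couple of lines with next/dynamic. A sketch — the file path and component names are placeholders, not the site's actual files:

```tsx
'use client';

import dynamic from 'next/dynamic';

// ssr: false keeps the widget out of the server render entirely, so its
// cross-origin requests only ever run in the browser.
// './DiscordWidget' is an illustrative path, not the real file.
const DiscordWidget = dynamic(() => import('./DiscordWidget'), {
  ssr: false,
});

export default function CommunityPanel() {
  return <DiscordWidget />;
}
```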
Interactive Features
A marketing site that just displays text is a brochure. I wanted the site to do things.
The /estimate wizard: An interactive project estimator that walks visitors through their project scope and produces a ballpark price range. It asks about project type (landing page, web app, full platform), complexity factors (auth, payments, integrations, mobile), timeline, and existing assets. At the end, it generates a range based on the same pricing tiers I use for real client engagements. This does two things: it pre-qualifies leads before they ever fill out a contact form, and it demonstrates that MGT's pricing is transparent and systematic — not made up on the spot. Try it yourself.
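The shape of that calculation is easy to sketch. Every number and multiplier below is made up for illustration — it is not MGT's actual pricing, just the structure of base-range-times-complexity:

```typescript
// Illustrative tiers only; real pricing lives elsewhere.
type ProjectType = 'landing' | 'webapp' | 'platform';

const BASE: Record<ProjectType, [number, number]> = {
  landing: [2_000, 5_000],
  webapp: [8_000, 20_000],
  platform: [25_000, 60_000],
};

// Each selected complexity factor (auth, payments, integrations, mobile)
// widens the range by a flat multiplier.
function estimateRange(type: ProjectType, factors: string[]): [number, number] {
  const multiplier = 1 + 0.25 * factors.length;
  const [lo, hi] = BASE[type];
  return [Math.round(lo * multiplier), Math.round(hi * multiplier)];
}
```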
ShipTicker: A real-time ticker component on the homepage that displays the projects I've shipped with their status (live, in development, deployed). It pulls from a static data source and animates through entries. It's proof of output — not a claim about capability, but a list of things that actually exist and are running in production right now.
LiveBuildFeed: A component that shows what I'm actively working on. Updated from the same project data that feeds MGT Studio. It gives the homepage a sense of motion — the site isn't static, the work isn't theoretical, things are actively being built.
Filterable case studies: The case studies page went from a static grid to a filterable, categorized portfolio. Visitors can filter by project type (SaaS, client site, gaming, automation), by tech stack, or by industry. Each case study has an architecture breakdown, timeline, and honest retrospective. Not a sales pitch — a build log.
Mobile CRO Audit
I ran a full mobile conversion rate optimization audit using Chrome MCP's device emulation. Here's what I found and fixed:
Above-fold CTAs were missing on mobile. The desktop homepage had clear CTAs in the hero section, but on mobile the hero text pushed the first CTA below the fold. Visitors on phones had to scroll before they could take any action. I restructured the mobile hero layout to put a primary CTA within the first viewport height. First screen, first action.
Touch targets were too small. Several navigation links and buttons were under the 44px minimum recommended touch target size. On desktop, a 32px-tall link is fine. On a phone screen, it's a frustration machine. I audited every interactive element and brought all touch targets to 44px minimum height with adequate spacing between adjacent targets.
iOS zoom on form inputs. Any input field with a font size below 16px triggers Safari's auto-zoom behavior on iOS. The estimate wizard and contact forms had 14px input text. On iPhone, tapping into a form field zoomed the page in and didn't zoom back out. I set all form inputs to 16px minimum font size. It's a one-line fix that eliminates one of the most common iOS UX complaints.
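Both fixes boil down to a few lines of CSS. A sketch with illustrative selectors, not the site's actual class names:

```css
/* 44px minimum touch targets for tappable elements. */
nav a,
button {
  min-height: 44px;
  min-width: 44px;
}

/* 16px inputs: anything smaller triggers Safari's auto-zoom on iOS. */
input,
select,
textarea {
  font-size: 16px;
}
```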
These aren't glamorous fixes. They don't show up in screenshots. But they're the difference between a site that converts on mobile and a site that looks good on mobile. Those are not the same thing.
Performance: Hero Video and Core Web Vitals
The homepage had a hero background video. On desktop, it looks great — ambient motion that makes the above-fold section feel alive without distracting from the content. On mobile, it was a 14MB file that loaded before anything else on the page. On a 4G connection, visitors were staring at a loading screen for 3-4 seconds before seeing any content.
The fix: skip the video entirely on mobile and serve a static poster image instead. The poster is 62KB. That's a 99.6% reduction in above-fold payload for mobile visitors. The detection combines a viewport media query with the prefers-reduced-motion media feature — if you're on a small screen or you've told your OS you prefer reduced motion, you get the static image. No JavaScript required for the decision; it's pure CSS media query logic.
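A sketch of that swap, with class names assumed. One caveat worth a comment: a video that's merely display: none can still download, so the element should also set preload="none" to keep the file off the wire on mobile:

```css
/* Class names are illustrative. Pair the hidden video with
   preload="none" so display:none doesn't fetch the file anyway. */
.hero-video  { display: block; }
.hero-poster { display: none; }

@media (max-width: 767px), (prefers-reduced-motion: reduce) {
  .hero-video  { display: none; }
  .hero-poster { display: block; }
}
```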
Final Core Web Vitals after the optimization pass:
- LCP (Largest Contentful Paint): 416ms. The hero content renders in under half a second. Well below the 2.5s "good" threshold.
- CLS (Cumulative Layout Shift): 0.00. Zero layout shift. Every element has explicit dimensions. No fonts causing reflow. No images loading without aspect ratios. The page renders once and stays put.
- Lighthouse scores: 100 Performance / 100 Accessibility / 100 Best Practices. Not "high nineties." Triple hundreds across the board.
These numbers aren't from a local dev server. They're from Lighthouse running against the live Vercel deployment, tested via Chrome MCP's audit tool during the session.
14 Blog Posts in One Session
The blog didn't exist before this session. By the end, there were 14 published posts covering project builds, engineering deep dives, tooling breakdowns, pricing transparency, and client work retrospectives. Posts run 900 to 1,800 words. None of them are filler. Every post covers a real project or a real decision with specific numbers, specific tradeoffs, and specific outcomes.
Writing 14 posts in one session sounds aggressive. The trick is that these aren't research articles — they're build logs. I'm writing about projects I already built. The technical details are in my head and in my persistent memory files. The writing process is extraction, not creation. An agent that has access to the project's memory file, the codebase, and the deployment history can draft a post in minutes. I review, edit for voice, cut the slop, and publish. Read the full archive on the blog.
The Final Numbers
Before the session vs. after:
- Routes: 42 → 67 (25 new pages including all blog posts, estimate wizard, and case study filters)
- Blog posts: 0 → 14
- Analytics events: 0 → full Plausible integration with custom events on every interactive element
- Console errors: 90+ → 0
- Lighthouse: Mixed scores → 100/100/100
- LCP: Untracked → 416ms
- CLS: Untracked → 0.00
- Mobile hero payload: 14MB → 62KB
- Interactive features: 0 → 4 (estimate wizard, ShipTicker, LiveBuildFeed, filterable case studies)
- Commits: 14
- Parallel agents used: 20+
One session. One developer. A lot of agents.
What This Proves
This isn't a flex post. It's a proof point for the workflow I describe in How AI Agents Actually Fit Into a Solo Dev Workflow. The parallel agent model works. Persistent memory works. Chrome MCP for live auditing works. The output isn't theoretical — you can inspect every page on this site right now. View source, run Lighthouse, open DevTools, check the console. It's all there.
If you want to see what MGT builds for clients, check the case studies. If you want a ballpark on your own project, run through the estimate wizard. If you want to know who's behind it, that's on the about page.
The site is live. The numbers are real. And it all shipped in one session.