ModernGrindTech.com started as nothing. No repo. No design. No content. Two working sessions later, it had 67 routes, 14 blog posts, 6 case studies, an interactive project estimator, full Plausible analytics, and Lighthouse scores of 100/100/100. This post is the full story of how it happened, the decisions behind it, and what made it different from every other portfolio site.
Starting From Scratch: Next.js 16
I scaffolded the project with create-next-app targeting Next.js 16 with the App Router, TypeScript strict mode, and Turbopack as the default bundler. Tailwind v4 went in immediately with the new CSS-first @theme configuration. No tailwind.config.js. Everything in globals.css.
The tech stack was non-negotiable from the start: Next.js 16, Tailwind v4, Framer Motion for animations, Vercel for deployment. No CMS. No database. No backend. A marketing site does not need a backend. Every piece of content lives in the codebase as typed data or inline JSX. That means the entire site is statically generated at build time, cached at the edge, and served in under 500ms globally. Read the Next.js 16 deep dive for the technical details.
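The "content as typed data" approach can be sketched like this. This is an illustrative shape, not the actual MGT source; the interface, slugs, and helper names are hypothetical.

```typescript
// Hypothetical sketch of content living in the codebase as typed data.
// Field names and example posts are illustrative, not the real site's.
interface BlogPost {
  slug: string;
  title: string;
  publishedAt: string; // ISO date
  tags: string[];
}

const posts: BlogPost[] = [
  { slug: "nextjs-16-deep-dive", title: "Next.js 16 Deep Dive", publishedAt: "2025-01-10", tags: ["nextjs"] },
  { slug: "anti-slop-design", title: "Anti-Slop Design Rules", publishedAt: "2025-01-12", tags: ["design"] },
];

// With content in the repo, the static params for a route like
// generateStaticParams() are just a map over the array -- no CMS
// round trip at build time.
function getStaticSlugs(): { slug: string }[] {
  return posts.map((p) => ({ slug: p.slug }));
}

function getPostBySlug(slug: string): BlogPost | undefined {
  return posts.find((p) => p.slug === slug);
}
```

Because everything is a plain module, the build can statically generate every route from these arrays, which is what makes the no-backend constraint workable.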
Design Decisions: Dark Mode Only, Anti-Slop
The site has no light mode toggle. It is dark mode only. The background is hsl(201, 100%, 5%), which is a deep navy that reads as black on most monitors but has enough blue to avoid feeling flat. Surface elements use hsl(201, 80%, 8%) and hsl(201, 60%, 12%) for layering. The accent color is hsl(192, 100%, 50%), a bright cyan that pops against the dark backgrounds without being neon.
Why no light mode? Because this is a developer portfolio, not a news site. My target audience (potential clients, other developers, people evaluating my work) is overwhelmingly browsing with dark mode on. Building and maintaining a light mode doubles the design surface area for a theme that maybe 5% of visitors would use. I spent that time on content instead.
The anti-slop rules were strict from day one. No gradient hero backgrounds. No three-card feature grids. No testimonial carousels. No stock photos. No "We leverage cutting-edge solutions to drive innovation." Every AI-default pattern got flagged and replaced with something that looks like a human made decisions about it. I run an unslop tool against generated output specifically to catch these patterns.
How nano-banana Pro Generated the Images
Every image on the site was generated with nano-banana pro, the AI image generation tool I use for all MGT projects. The process is straightforward: I write a detailed prompt specifying the exact composition, color palette (matching the site's HSL values), mood, and technical constraints. No random generation. Every image is intentional.
The key to making AI-generated images not look like AI-generated images is specificity. Generic prompts produce generic output. I specify exact color values, exact compositions, exact lighting conditions. The OG images for blog posts follow a consistent template: dark background matching the site theme, the post title in Space Grotesk (the display font), a subtle cyan accent element. They look like they were designed in Figma because the prompts are as specific as a Figma spec would be.
Total time for all site images: about 45 minutes. A traditional approach (hire a designer, brief them, review rounds, export assets) takes a week minimum. The quality is comparable. The turnaround is roughly two orders of magnitude faster.

The Build Process: Two Sessions, 67 Routes
Session one built the foundation. Homepage, about page, services pages, portfolio grid, contact form, basic routing. 42 routes by the end of the first session. The site looked good but did not perform. No analytics. No blog. No interactive tools. No mobile CRO. The console had 90+ CORS errors from a Discord widget.
Session two was the transformation. I dispatched 20+ parallel agents across four waves:
- Wave 1: Analytics infrastructure (Plausible setup, PageViewTracker, ScrollReadTracker), blog system scaffolding, CORS bug investigation
- Wave 2: First batch of blog posts (4 in parallel), /estimate wizard, ShipTicker component, LiveBuildFeed component
- Wave 3: Remaining blog posts (10 in parallel), filterable case studies, mobile CRO audit
- Wave 4: Performance optimization, Lighthouse audit, console error cleanup, integration testing
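The wave pattern itself is simple to express: waves run sequentially, and the tasks inside each wave run concurrently. A minimal sketch, with task contents obviously standing in for real agent work:

```typescript
// Minimal sketch of the wave dispatch pattern: each wave's tasks fan
// out in parallel, and the next wave only starts once the whole
// previous wave has settled.
type Task = () => Promise<string>;

async function runWaves(waves: Task[][]): Promise<string[]> {
  const results: string[] = [];
  for (const wave of waves) {
    // Promise.all runs this wave's tasks concurrently and waits
    // for all of them before the loop advances to the next wave.
    results.push(...(await Promise.all(wave.map((t) => t()))));
  }
  return results;
}
```

Dependencies live between waves (analytics scaffolding in wave 1 before the posts that use it in wave 2), never inside one, which is what makes the fan-out safe.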
14 commits. Each one a coherent unit. 25 new routes. The details are in the session build log.
Plausible Analytics: Custom Events Everywhere
I chose Plausible over Google Analytics for three reasons: no cookie banners, a lightweight script (under 1KB), and a clean custom events API. The implementation has three layers.
A PageViewTracker component fires on every client-side navigation. Next.js App Router does not trigger full page loads on route changes, so the default Plausible script misses client transitions. The tracker hooks into routing events and sends a pageview on every navigation.
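The core of such a tracker can be written without any framework code. In this sketch the Plausible call is injected as a callback (standing in for `window.plausible`), and the React wiring (an effect keyed on the App Router pathname) is omitted; the function name is my invention.

```typescript
// Core of a client-side pageview tracker, framework-free.
// `send` stands in for window.plausible; in the real component this
// would be called from an effect that re-runs on pathname changes.
type SendPageview = (url: string) => void;

function createPageviewNotifier(send: SendPageview) {
  let lastPath: string | null = null;
  return (pathname: string) => {
    // Client-side re-renders can fire without an actual navigation;
    // only report paths that genuinely changed.
    if (pathname === lastPath) return;
    lastPath = pathname;
    send(pathname);
  };
}
```

Deduplicating on the path matters because effects can re-run more often than routes change, and double-counted pageviews would quietly inflate every metric downstream.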
A ScrollReadTracker fires a custom event when a visitor scrolls past 75% of the page content. Not 100%, because nobody scrolls to the absolute bottom. 75% is the threshold for "this person actually read the content." On blog posts, this gives a real read-through rate instead of just pageview counts.
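The 75% check reduces to a pure function over scroll geometry. This is a sketch of the arithmetic, not the actual component; a ScrollReadTracker could call it from a scroll listener and fire the Plausible event the first time it returns true.

```typescript
// Has the reader scrolled past `threshold` of the page?
// scrollTop + viewportHeight is the bottom edge of what they've seen.
function readThresholdReached(
  scrollTop: number,
  viewportHeight: number,
  pageHeight: number,
  threshold = 0.75,
): boolean {
  if (pageHeight <= viewportHeight) return true; // nothing to scroll
  const seenFraction = (scrollTop + viewportHeight) / pageHeight;
  return seenFraction >= threshold;
}
```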
Custom goal events fire on every interactive element: CTA clicks, estimate form submissions, case study filter interactions, blog category selections. Every button, every link, every form on the site reports what visitors actually do.
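With that many custom events, a typed event map keeps names and props consistent across the site. The event names and prop shapes below are illustrative, and the sink is injected so the sketch stays independent of the Plausible script.

```typescript
// Hypothetical typed goal-event map: the compiler rejects misspelled
// event names or wrong props anywhere in the codebase.
interface GoalEvents {
  "CTA Click": { location: string };
  "Estimate Submitted": { budget: string };
  "Case Study Filter": { tag: string };
}

type Sink = (name: string, props: Record<string, string>) => void;

// The sink stands in for the Plausible custom-events call.
function makeTrackGoal(sink: Sink) {
  return <K extends keyof GoalEvents>(name: K, props: GoalEvents[K]) =>
    sink(name, props);
}
```

A single typed entry point like this is also what makes it practical to instrument "every button, every link, every form" without the event taxonomy drifting.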
Performance: Lighthouse 100/100/100
The performance optimization pass focused on three things: above-fold payload, layout stability, and JavaScript budget.
Hero video skip on mobile. The homepage has a background video that looks great on desktop. On mobile, it was a 14MB file loading before anything else. The fix: skip the video entirely on mobile and serve a 62KB static poster image instead. A 99.6% payload reduction for mobile visitors. Detection is handled entirely by CSS media queries and prefers-reduced-motion; no JavaScript is required for the decision.
Dynamic imports. Components that are not visible above the fold get loaded with next/dynamic. The estimate wizard, the blog comment system, the Discord widget: all lazy-loaded. The initial JavaScript bundle for a blog page is 47KB gzipped. On Next.js 14 with client rendering, the same page was 89KB.
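The caching idea underneath lazy loading can be reduced to a memoized loader: the expensive import runs at most once, on first demand. `next/dynamic` adds code splitting and a loading state on top; this sketch (function name mine) shows only the load-once behavior.

```typescript
// Memoized lazy loader: the first call kicks off the (expensive)
// load, and every later call reuses the same in-flight promise.
function lazyOnce<T>(loader: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | undefined;
  return () => {
    if (!cached) cached = loader(); // runs at most once
    return cached;
  };
}
```

Caching the promise rather than the resolved value also deduplicates concurrent calls: two components requesting the widget at once still trigger a single load.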
Image optimization. Every image has explicit width and height attributes to prevent layout shift. The Next.js Image component handles format conversion (WebP/AVIF), responsive sizing, and lazy loading. No images load until they enter the viewport. CLS stays at 0.00 across every page.
Final numbers against the live Vercel deployment, verified via Chrome MCP:
- LCP: 416ms (threshold is 2,500ms)
- CLS: 0.00 (threshold is 0.1)
- Lighthouse: 100 Performance / 100 Accessibility / 100 Best Practices
- Client JS bundle (blog page): 47KB gzipped
- Build time: Under 8 seconds for all 67 pages
What Made It Different
Most developer portfolio sites are templates with swapped content. This one is not a template. Every design decision was intentional. The dark-only theme, the anti-slop rules, the interactive estimate wizard, the filterable case studies, the real analytics, the blog with 14 posts of actual technical depth. None of that comes from a template.
The other difference is that the site is a proof point for the workflow it describes. The AI agents post talks about running 10 parallel agents. This site was built that way. The tooling ecosystem post talks about 3,900+ skills. Those skills generated the scaffolding for every page. The site is not making claims about productivity. It is the evidence.
67 routes. Two sessions. One developer. Lighthouse triple hundreds. If you want to see what the same workflow produces for client projects, browse the case studies. If you have a project that needs this level of execution, reach out.