jacken@blog:~$ cat building-performant-nextjs-applications.md

Building Performant Next.js Applications: A Complete Guide

December 7, 2025 · 8 min read · by Jacken Holland
Web Development · Next.js · Performance · React · Server Components

In my recent work with Next.js 15, I've been consistently hitting 95+ Lighthouse scores without resorting to heavy optimization tricks. The framework has matured to the point where great performance comes from understanding the fundamentals rather than fighting the tools.

Let me share what's actually working in production applications I've built and maintained throughout 2025.

Why Next.js 15 Changed My Performance Approach

The shift to Server Components as the default rendering strategy fundamentally changed how I think about React applications. Instead of optimizing hydration and bundle sizes after the fact, I now start with zero JavaScript and only add interactivity where it's genuinely needed.

In a dashboard project I completed last month, this mindset shift resulted in a 71% reduction in initial JavaScript—from 165KB to 48KB. The improvement wasn't from aggressive code-splitting or lazy loading tricks. It came from rendering most of the UI on the server where it belonged.

The Server-First Mental Model

Here's the pattern I follow for every new page:

  1. Start with a Server Component (the default)
  2. Fetch data directly in the component
  3. Only mark specific interactive pieces as Client Components
  4. Keep the Client Component boundaries as small as possible

When I'm working with AI assistants to scaffold new features, I use prompts like this:

Prompt for Claude/GPT:

"Create a Next.js 15 Server Component for a product details page. Fetch product data directly in the component using the fetch API with Next.js caching. Include a client-side 'Add to Cart' button as a separate Client Component. Follow the server-first pattern."

The AI typically generates a clean separation between server and client concerns, which is exactly what Next.js 15 excels at.
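A minimal sketch of that separation, under the server-first pattern described above (the file paths, `ProductPage`, `AddToCartButton`, and the example.com endpoint are all illustrative, not from a real project):

```typescript
// app/products/[id]/page.tsx — a Server Component by default (no 'use client')
import { AddToCartButton } from './AddToCartButton'

export default async function ProductPage({
  params,
}: {
  params: Promise<{ id: string }> // params is a Promise in Next.js 15
}) {
  const { id } = await params

  // Fetch directly in the component; Next.js caches and revalidates hourly
  const res = await fetch(`https://example.com/api/products/${id}`, {
    next: { revalidate: 3600 },
  })
  const product = await res.json()

  return (
    <article>
      <h1>{product.name}</h1>
      <p>{product.description}</p>
      {/* The only client-side island on the page */}
      <AddToCartButton productId={product.id} />
    </article>
  )
}
```

Everything above the button ships zero JavaScript to the browser; only the button's component is hydrated.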

When Client Components Make Sense

I only reach for 'use client' when I need:

  • User interaction (clicks, form inputs, hover states)
  • Browser APIs (localStorage, geolocation, WebGL)
  • React hooks (useState, useEffect, useContext)
  • Third-party libraries that depend on browser APIs

Everything else stays on the server. Your navigation header with static links? Server Component. Your blog post content? Server Component. Product listings? Server Component with data fetched directly.
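The interactive island itself stays tiny. A hedged sketch of what such a Client Component boundary looks like (`AddToCartButton` and its props are hypothetical):

```typescript
// app/products/[id]/AddToCartButton.tsx
'use client'

import { useState } from 'react'

export function AddToCartButton({ productId }: { productId: string }) {
  // useState is why this must be a Client Component
  const [added, setAdded] = useState(false)

  return (
    <button onClick={() => setAdded(true)} data-product={productId}>
      {added ? 'Added' : 'Add to Cart'}
    </button>
  )
}
```

Only this file and its imports end up in the client bundle; the surrounding page stays server-rendered.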

Streaming: The Performance Win You're Not Using

Streaming has been Next.js's killer feature since version 13, but I rarely see it used effectively. The concept is simple: don't wait for slow data to hold up fast content.

In a recent e-commerce rebuild, I had a product page where reviews took 2+ seconds to load from a third-party API. Without streaming, users stared at a blank screen for 2 seconds. With Suspense boundaries, they saw the product details instantly while reviews loaded in the background.

The Pattern That Works

I wrap any component with slow data dependencies in Suspense:

  • Third-party API calls
  • Complex database queries
  • External service integrations
  • Large data transformations

The key insight: your page skeleton, navigation, and primary content should render in under 500ms. Everything else can stream in progressively.

Prompt for AI assistance:

"Refactor this Next.js page to use Suspense boundaries. The main product data should render immediately. Reviews, related products, and recommendations should stream in with loading skeletons. Show me the component structure."

Caching in Next.js 15: What Actually Changed

Next.js 15 made significant changes to caching defaults, and understanding them prevents a lot of confusion. The framework no longer caches fetch requests by default, which is honestly a relief—it eliminates the "why isn't my data updating?" debugging sessions.

My Caching Strategy for Production

I've settled on this approach across multiple projects:

Static content (docs, marketing pages): Let the Full Route Cache handle it. These pages build at deploy time and serve instantly.

Semi-static content (product catalogs, blog posts): Use revalidate with time-based invalidation:

fetch(url, { next: { revalidate: 3600 } }) // Revalidate hourly

Dynamic content (user dashboards, real-time data): Opt out with cache: 'no-store' or mark the route as dynamic.

User-specific content: Mark routes with export const dynamic = 'force-dynamic' to prevent any caching.

The mistake I made early on was trying to cache everything. Some data just shouldn't be cached, and that's fine. Next.js is fast enough to fetch and render dynamic data on every request.
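The tiers above boil down to a handful of one-liners. A sketch (URLs are placeholders; the fetches live inside an async Server Component):

```typescript
// Inside an async Server Component

// Semi-static content: cached, revalidated hourly
const products = await fetch('https://example.com/api/products', {
  next: { revalidate: 3600 },
})

// Dynamic content: opt a single fetch out of the cache
const stats = await fetch('https://example.com/api/stats', {
  cache: 'no-store',
})

// User-specific content: opt the whole route out instead, via a
// route segment config export at the top level of page.tsx:
// export const dynamic = 'force-dynamic'
```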

Image Optimization: Still Your Biggest Opportunity

Images are still the largest performance bottleneck in most applications I audit. Next.js's Image component handles the basics, but you need to configure it intelligently.

What I've Learned About next/image

The priority prop matters more than I initially thought. Use it on your Largest Contentful Paint (LCP) element—usually a hero image or main product photo. This tells Next.js to preload the image and fetch it with high priority.

For everything else, the default lazy loading is excellent. Images load as they enter the viewport with zero configuration.

Key settings I use:

  • quality={85} for most images (default 75 is often too aggressive)
  • sizes prop for responsive images (lets browser choose optimal size)
  • placeholder="blur" for better perceived performance
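Put together, a hero image configured along those lines might look like this (the file path and dimensions are illustrative; with a string `src`, `placeholder="blur"` needs an explicit `blurDataURL`, elided here):

```typescript
import Image from 'next/image'

export function Hero() {
  return (
    <Image
      src="/images/hero.jpg"
      alt="Product hero shot"
      width={1200}
      height={630}
      priority                                  // preload: this is the LCP element
      quality={85}                              // default 75 is often too aggressive
      sizes="(max-width: 768px) 100vw, 1200px"  // let the browser pick a size
      placeholder="blur"
      blurDataURL="data:image/jpeg;base64,..."  // tiny inline preview
    />
  )
}
```

Images elsewhere on the page skip `priority` and get lazy loading for free.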

Modern Image Formats in 2025

Next.js automatically serves WebP and AVIF when browsers support them. I've seen 40-60% file size reductions compared to PNG/JPG without quality loss. The Image component handles this automatically—you just need to ensure your production environment supports image optimization.

If you're on Vercel, it's built-in. For self-hosting, you'll need to configure sharp or another image optimization service.

Partial Prerendering: The Future is Here

Next.js 15 introduced Partial Prerendering (PPR) as an experimental feature, and I've been testing it in production. The concept: static shell with dynamic content streamed in.

Think of it as the best of both worlds—instant page load with static content, combined with dynamic data that's always fresh.

I'm using it on a blog where the layout and navigation are static, but view counts and related posts are dynamic. The page feels instant because the static parts are pre-rendered, while the dynamic data streams in seamlessly.

To enable it, add the flag to next.config.js:

// next.config.js
module.exports = {
  experimental: {
    ppr: true
  }
}

Then mark specific parts of your layout as dynamic while keeping the shell static. It's still experimental in late 2025, but incredibly promising.
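In practice, "marking parts as dynamic" just means Suspense boundaries: everything outside them becomes the prerendered static shell, and the holes inside stream at request time. A sketch with a hypothetical `ViewCount` component:

```typescript
// app/blog/[slug]/page.tsx
import { Suspense } from 'react'
import { ViewCount } from './ViewCount' // async Server Component hitting live data

export default function PostPage() {
  return (
    <article>
      {/* Static shell: prerendered at build time, served instantly */}
      <h1>Post title</h1>
      <p>Post body…</p>

      {/* Dynamic hole: streamed in at request time */}
      <Suspense fallback={<span>Loading views…</span>}>
        <ViewCount />
      </Suspense>
    </article>
  )
}
```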

Bundle Optimization: What Actually Moves the Needle

I spent way too much time early in my career micro-optimizing bundle sizes. Here's what actually matters:

Use the Bundle Analyzer Strategically

Install @next/bundle-analyzer and run it when you notice bundle growth:

ANALYZE=true npm run build
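That command assumes the analyzer is wired into your config, roughly like this (shown as next.config.ts, which Next.js 15 supports):

```typescript
// next.config.ts
import bundleAnalyzer from '@next/bundle-analyzer'

// Only runs the analyzer when ANALYZE=true is set
const withBundleAnalyzer = bundleAnalyzer({
  enabled: process.env.ANALYZE === 'true',
})

export default withBundleAnalyzer({
  // …your existing Next.js config
})
```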

I do this monthly on active projects. Usually the culprits are:

  • Moment.js (use date-fns instead)
  • Entire icon libraries (use dynamic imports)
  • Lodash without tree-shaking (import specific functions)
  • Duplicate dependencies (check with pnpm why or npm explain)

Dynamic Imports for Heavy Components

For large interactive components that aren't immediately needed, I use dynamic imports:

import dynamic from 'next/dynamic'

const HeavyChart = dynamic(() => import('@/components/HeavyChart'), {
  loading: () => <ChartSkeleton />,
  ssr: false // Skip SSR if it uses browser APIs
})

This shaves kilobytes off your initial bundle and defers loading until the component is actually rendered.

Real Performance Gains from AI-Assisted Development

Here's something I didn't expect: using AI coding assistants actually improved my application performance. Not because the AI writes better code, but because it lets me iterate faster on performance experiments.

Example prompt I use for optimization:

"Analyze this Next.js component for performance issues. Look for: unnecessary Client Components, missing Suspense boundaries, unoptimized images, and heavy synchronous operations. Suggest specific improvements."

The AI catches things I miss in code review—like Client Components that could be Server Components, or missing loading.tsx files that would enable streaming.

The Performance Checklist I Actually Use

Before deploying any significant Next.js update, I run through this:

  1. Lighthouse CI on staging - Automated check for performance regressions
  2. Bundle size comparison - Did JavaScript bundles grow unexpectedly?
  3. Core Web Vitals - LCP under 2.5s, CLS under 0.1, INP under 200ms
  4. Loading states - Do all slow operations have Suspense boundaries?
  5. Image optimization - Priority on LCP image, lazy loading elsewhere
  6. Cache configuration - Is frequently changing data being cached when it shouldn't be?

Most performance issues I catch are from step 4 (missing loading states) or step 6 (over-aggressive caching).

Tools I Rely On Daily

Vercel Speed Insights: Real user monitoring in production. Shows actual Core Web Vitals from real users, not lab tests.

Lighthouse CI: Automated performance testing in GitHub Actions. Catches regressions before they reach production.

React DevTools Profiler: Identifies unnecessary re-renders and expensive components. Indispensable for Client Component optimization.

next/bundle-analyzer: Monthly bundle audits to catch bloat early.

What's Next for Next.js Performance

Looking ahead to 2026, I'm watching:

  • Partial Prerendering GA - Once stable, this will be the default for most apps
  • React Compiler - Automatic optimization of Client Components
  • Improved streaming - Better handling of nested Suspense boundaries
  • Edge runtime maturation - More APIs available at the edge

The trajectory is clear: Next.js is making it progressively harder to build slow applications. The defaults are good, and the escape hatches (when you need full control) are well-designed.

Final Thoughts

Performance in Next.js 15 isn't about memorizing optimization tricks. It's about understanding the rendering model and working with the framework's strengths.

Start with Server Components. Add interactivity sparingly. Use Suspense for progressive enhancement. Configure caching based on actual data freshness requirements. Optimize images properly.

Do these fundamentals well, and you'll hit 95+ Lighthouse scores without heroics. I've proven it across a dozen production applications in 2025.

The framework has finally matured to the point where great performance is the default, not the exception.