jacken@blog:~$ cat developer-guide-to-prompt-engineering.md

The Developer's Guide to Prompt Engineering That Actually Works

December 8, 2025 · 8 min read · by Jacken Holland
AI · Prompt Engineering · Development · Productivity

Introduction

In early 2024, I asked ChatGPT: "Write a login function."

It gave me code that used MongoDB (we use PostgreSQL), had no error handling, and didn't match any of our patterns. I spent 20 minutes fixing it. Would've been faster to write it myself.

Fast forward to late 2025. I ask AI to generate code constantly. And 80% of the time, it's usable with minimal tweaks.

The difference? I learned how to prompt.

This isn't theory. These are the exact prompting patterns I use daily—the ones that consistently give me code I can actually use.

Why Most Prompts Suck (Including My Early Ones)

The problem with "write a login function" isn't that it's wrong. It's that it's missing 90% of the information AI needs to write code that fits your project.

Here's what AI assumes when you're vague:

  • Stack: Whatever's most common in its training data
  • Patterns: Generic best practices
  • Error handling: Maybe basic try-catch
  • Testing: Probably nothing
  • Style: Random

Result: Code that works in theory but doesn't fit your codebase.

The Framework That Changed Everything

I stumbled on this pattern after months of frustration. Now I use it for every code generation request:

The 5-Part Prompt Structure

What: [One sentence - what you want]

Stack: [Your specific technologies]

Requirements:
- [Specific requirement 1]
- [Specific requirement 2]
- [Specific requirement 3]

Pattern to follow:
[Paste an example of your existing code pattern]

Watch out for:
[Common mistakes or things to avoid]

Let me show you the difference.

Bad prompt I used to write:

Write a function to fetch user data from an API

Good prompt I write now:

What: Fetch user data from our REST API with error handling and retry logic

Stack:
- Next.js 14 App Router
- TypeScript (strict mode)
- React Query for data fetching

Requirements:
- Retry 3 times on failure with exponential backoff
- Return type: { success: boolean, data?: User, error?: string }
- Include proper TypeScript types
- Handle network errors, 404s, and 500s differently

Pattern to follow:
export async function fetchUsers() {
  try {
    const response = await fetch('/api/users');
    if (!response.ok) return { success: false, error: 'Fetch failed' };
    const data = await response.json();
    return { success: true, data };
  } catch (error) {
    logger.error('fetchUsers failed', { error });
    return { success: false, error: 'Network error' };
  }
}

Watch out for:
- Don't use axios (we standardized on fetch)
- Don't throw errors (we return error objects)

The first prompt gives me generic code I have to rewrite.

The second prompt gives me code that's 80% ready to commit.
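For context, the response to a prompt like that lands close to the following sketch. The `User` shape, the injectable `fetchFn`, and the endpoint are illustrative assumptions, not my actual code:

```typescript
// Sketch of the kind of code the detailed prompt above tends to produce; the
// `User` shape, result type, and injectable fetchFn are illustrative assumptions.
type User = { id: string; name: string };
type Result<T> = { success: boolean; data?: T; error?: string };

// Minimal fetch-shaped interface so the sketch runs without a real network
type FetchLike = (url: string) => Promise<{
  ok: boolean;
  status: number;
  json: () => Promise<unknown>;
}>;

export async function fetchUserWithRetry(
  url: string,
  fetchFn: FetchLike,
  retries = 3,
  baseDelayMs = 250
): Promise<Result<User>> {
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      const response = await fetchFn(url);
      // 404 and other 4xx are not retried -- retrying won't change the answer
      if (response.status === 404) return { success: false, error: 'User not found' };
      if (response.status >= 500) throw new Error(`Server error: ${response.status}`);
      if (!response.ok) return { success: false, error: `Fetch failed: ${response.status}` };
      return { success: true, data: (await response.json()) as User };
    } catch {
      // Transient failure (network error or 5xx): back off exponentially, then retry
      if (attempt === retries - 1) return { success: false, error: 'Network error' };
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  return { success: false, error: 'Network error' };
}
```

Note how every requirement in the prompt maps to a visible decision in the code: the result shape, the backoff, the different handling for 404s versus 500s.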

The Prompts I Actually Use Every Day

Here are my go-to prompts. Copy them and adjust for your stack:

1. Code Generation

Generate [language] code for [specific functionality].

Stack:
- [List your exact stack]

Requirements:
- [List 3-5 specific requirements]
- [Include any constraints]

Follow this pattern:
[Paste actual code from your codebase showing the pattern]

Types:
[If TypeScript, paste relevant types]

Error handling:
[Describe your error handling approach]

Edge cases to handle:
- [Case 1]
- [Case 2]
- [Case 3]

Real example I used last week:

Generate TypeScript code for a webhook handler that processes Stripe payment events.

Stack:
- Next.js 14 API route
- TypeScript strict mode
- Prisma ORM
- Stripe SDK

Requirements:
- Verify webhook signature
- Handle payment_intent.succeeded and payment_intent.failed events
- Update order status in database
- Send confirmation email on success
- Idempotent (handle duplicate webhooks)
- Proper error logging

Follow this pattern:
export async function POST(req: Request) {
  try {
    const body = await req.text();
    // Validation and processing
    return Response.json({ success: true });
  } catch (error) {
    logger.error('Webhook failed', { error });
    return Response.json({ success: false }, { status: 500 });
  }
}

Error handling:
- Log all errors with context
- Return 200 even on processing errors (so Stripe doesn't retry)
- Return 400 on signature verification failure

Edge cases to handle:
- Duplicate webhook deliveries
- Events for deleted orders
- Missing customer data

This gave me production-ready code with one minor adjustment.
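The requirement worth dwelling on is idempotency, since Stripe can deliver the same webhook more than once. A sketch of the dedup check, using an in-memory Set as a stand-in for the durable store (a database table keyed by event id) you'd use in real code:

```typescript
// Sketch: idempotent webhook processing. The in-memory Set stands in for a
// durable store (a database table keyed by event id in real code); the event
// type names mirror Stripe's, everything else is illustrative.
type WebhookEvent = { id: string; type: string };

const processedEventIds = new Set<string>();

export function handleEvent(
  event: WebhookEvent,
  onPaymentSucceeded: (eventId: string) => void
): 'processed' | 'duplicate' | 'ignored' {
  // Duplicate deliveries are acknowledged without re-running side effects
  if (processedEventIds.has(event.id)) return 'duplicate';
  processedEventIds.add(event.id);
  if (event.type === 'payment_intent.succeeded') {
    onPaymentSucceeded(event.id);
    return 'processed';
  }
  return 'ignored';
}
```

Spelling "idempotent (handle duplicate webhooks)" out as a requirement is what makes the AI produce this check at all; leave it out and you get a handler that sends two confirmation emails.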

2. Refactoring

Refactor this [language] code to [goal].

Current code:
[Paste code]

Goals:
- [What to improve - e.g., readability]
- [What to improve - e.g., performance]
- [What to improve - e.g., testability]

Constraints:
- [What must stay the same - e.g., API signature]
- [What to avoid - e.g., new dependencies]

Maintain exact same functionality. Explain what changed and why.

Real example:

Refactor this React component to improve performance.

Current code:
export function UserList({ users }) {
  return (
    <div>
      {users.map(user => (
        <div key={user.id}>
          <img src={user.avatar} />
          <span>{user.name}</span>
          <button onClick={() => handleEdit(user)}>Edit</button>
          <button onClick={() => handleDelete(user)}>Delete</button>
        </div>
      ))}
    </div>
  );
}

Goals:
- Reduce re-renders (users list has 500+ items)
- Improve scroll performance
- Keep code readable

Constraints:
- Keep same props interface
- No new dependencies (already using React 18)
- Must work on mobile

Maintain exact same functionality. Explain what changed and why.

AI suggested React.memo, virtualization with react-window, and useCallback for event handlers. Explained why each helped. Perfect.

3. Debugging

Debug this [problem type].

Error:
[Full error message and stack trace]

Code:
[Relevant code]

Context:
- Stack: [your stack]
- When it happens: [reproduction steps]
- Environment: [where it fails - local/prod/staging]
- Recent changes: [what changed recently]

What I've tried:
- [Thing 1 - didn't work]
- [Thing 2 - didn't work]

What are the top 3 most likely causes?
For each, explain:
1. Why it might be the cause
2. How to verify
3. How to fix if confirmed

I use this constantly. Having AI generate ranked hypotheses instead of one guess saves tons of time.

4. Code Review

Review this code for [specific concerns].

Code:
[Paste code]

Focus on:
- [Concern 1 - e.g., security]
- [Concern 2 - e.g., performance]
- [Concern 3 - e.g., edge cases]

Context:
- This runs [how often - e.g., "on every page load"]
- Expected scale: [e.g., "100 req/sec"]
- Users: [e.g., "authenticated users with sensitive data"]

For each issue found:
- Severity (Critical/Warning/Info)
- Explanation of the problem
- Suggested fix

Skip style nitpicks, focus on actual issues.

Real example:

Review this database query code for security and performance.

Code:
export async function searchUsers(query: string) {
  const sql = `SELECT * FROM users WHERE name LIKE '%${query}%'`;
  const result = await db.query(sql);
  return result.rows;
}

Focus on:
- SQL injection vulnerabilities
- Performance at scale
- Missing pagination

Context:
- This runs on every keystroke in search input
- Users table has 100,000 rows
- Expected: <200ms response time

For each issue found explain severity, problem, and fix.

AI immediately flagged the SQL injection and the unindexed leading-wildcard scan. Suggested parameterized queries and pagination. Exactly what I needed.
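The safe version it suggested was along these lines. This is a sketch: the `$1`/`$2` placeholder syntax assumes PostgreSQL, and the wildcard escaping assumes backslash as the LIKE escape character:

```typescript
// Sketch of the safe version: a parameterized placeholder plus a LIMIT.
// The $1/$2 placeholder syntax assumes PostgreSQL; LIKE wildcards in user
// input are escaped so they match literally instead of acting as patterns.
export function buildSearchQuery(query: string, limit = 20) {
  const escaped = query.replace(/[\\%_]/g, (c) => `\\${c}`);
  return {
    text: 'SELECT id, name FROM users WHERE name ILIKE $1 ORDER BY name LIMIT $2',
    values: [`%${escaped}%`, limit],
  };
}
```

User input never touches the SQL text, so the injection class disappears entirely rather than being filtered case by case.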

5. Learning/Explanation

Explain [concept] for a developer who [your experience level].

My understanding so far:
[What you know]

What confuses me:
[Specific confusion points]

My use case:
[Why you're learning this]

Provide:
1. Simple explanation without jargon
2. Practical code example for my use case
3. Common mistakes to avoid
4. When to use this vs alternatives

Real example:

Explain React Server Components for a developer who knows React well but hasn't used Next.js 13+.

My understanding so far:
- Server Components render on the server
- Different from SSR somehow
- Related to Next.js App Router
- Something about reducing client bundle size

What confuses me:
- How they differ from getServerSideProps
- When to use Server vs Client components
- How to fetch data in Server Components
- Interaction with client state

My use case:
Migrating a Next.js 12 app (pages router) to Next.js 14 (app router)

Provide explanation, examples, mistakes to avoid, and decision framework.

AI gave me a clear explanation with side-by-side comparisons. Way better than docs.

Advanced Techniques That Work

Once you nail the basics, these techniques level up your prompting:

Chain Multiple Prompts

Don't ask for everything at once. Iterate:

First prompt:

What are 3 different approaches to implement real-time notifications
in a Next.js app? Compare pros/cons for each.

Second prompt (based on AI's answer):

Let's go with Server-Sent Events. Implement SSE for Next.js 14 with:
- Heartbeat every 30s
- Automatic reconnection
- User-specific event filtering
[Include your pattern and requirements]

Third prompt:

Add error handling for when the event stream fails to send.
Handle both client and server errors.

Each iteration refines based on previous context. More accurate than one giant prompt.
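To make it concrete where that chain ends up, the wire format the second prompt asks for is simple enough to sketch. The `event:` and `data:` field names come from the SSE spec; everything else here is an assumption:

```typescript
// Sketch: serialize one server-sent event. The `event:` and `data:` field
// names come from the SSE spec; JSON.stringify never emits raw newlines, so a
// single data: line is enough here.
export function formatSSE(eventName: string, data: unknown): string {
  return `event: ${eventName}\ndata: ${JSON.stringify(data)}\n\n`;
}

// The 30s heartbeat is just an SSE comment line (leading colon), which keeps
// idle connections from being closed by proxies along the way
export function heartbeat(): string {
  return ': heartbeat\n\n';
}
```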

Show Examples of Good and Bad

Write validation for user registration.

Good example (what to copy):
[Paste your best validation code]

Bad example (what to avoid):
[Paste problematic code you've seen]

Follow the good pattern, avoid the bad patterns.

This trains AI on your specific preferences.

Ask for Multiple Options

Show me 3 different ways to implement caching here.

For each approach:
- Code example
- Pros/cons
- When to use it

Then recommend which to use for [your specific context].

Better than getting one approach that might not fit.

Use Negative Constraints

Implement pagination but:
- Don't use offset (doesn't scale)
- Don't fetch count(*) on every request (too slow)
- Don't use GraphQL (not in our stack)

Here's what we use instead: [your approach]

Tells AI what to avoid upfront instead of fixing it later.
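To make the "no offset" constraint concrete, the keyset alternative I point the AI at looks roughly like this (column names and the `$n` placeholder syntax are assumptions):

```typescript
// Sketch: keyset (cursor) pagination, the alternative to OFFSET. Filtering on
// the last-seen id stays fast at any page depth, because the database seeks
// the index instead of counting skipped rows. Column names are assumptions.
export function buildPageQuery(afterId: number | null, pageSize = 20) {
  if (afterId === null) {
    // First page: no cursor yet
    return { text: 'SELECT id, name FROM users ORDER BY id LIMIT $1', values: [pageSize] };
  }
  return {
    text: 'SELECT id, name FROM users WHERE id > $1 ORDER BY id LIMIT $2',
    values: [afterId, pageSize],
  };
}
```

Pasting something like this under "Here's what we use instead" means the AI imitates it rather than reaching for the OFFSET pattern its training data prefers.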

The Mistakes I Made (So You Don't Have To)

Mistake 1: Not Being Specific About Your Stack

Early on I'd say "React app." AI would use class components, outdated patterns, wrong hooks.

Now I say: "React 18 functional component with TypeScript, using hooks (no classes)."

Specificity matters.

Mistake 2: Accepting the First Response

AI's first attempt is often 70% right. But if I push back and iterate, it gets to 90%.

You: [Prompt]
AI: [Code]
You: "This works but uses a dependency we avoid. Rewrite without lodash."
AI: [Better code]
You: "Good, but add error handling for when the API times out."
AI: [Production-ready code]

Don't settle for the first draft.

Mistake 3: Not Providing Examples

When I say "follow our error handling pattern" without showing an example, AI guesses.

When I paste an actual example, AI copies that pattern exactly.

Always include examples of your patterns.

Mistake 4: Prompting for Huge Features at Once

"Build a complete authentication system" gives terrible results.

Break it down:

  1. "Create login endpoint"
  2. "Add JWT generation"
  3. "Implement refresh tokens"
  4. "Add password reset flow"

Each step builds on the previous one, with much better results.

My Prompt Library Setup

I keep a prompts/ folder in my notes with templates I reuse:

prompts/
  api-endpoint.md
  react-component.md
  database-query.md
  debugging.md
  refactoring.md
  test-generation.md

Each file is a template with placeholders:

# API Endpoint Prompt

Create a Next.js 14 API route for [FUNCTIONALITY].

Stack:
- Next.js 14 App Router
- TypeScript
- Prisma ORM
- Zod validation

Requirements:
- [LIST REQUIREMENTS]

Pattern:
[PASTE EXAMPLE]

Error handling:
- Return { success: boolean, data?: T, error?: string }
- Log errors with context

Edge cases:
- [LIST EDGE CASES]

When I need to generate an endpoint, I copy the template, fill in the placeholders, and paste to AI.

Saves time and ensures consistency.
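The fill-in step is mechanical enough to script if you want to. A hypothetical helper that swaps the `[PLACEHOLDER]` tokens and reports anything you forgot:

```typescript
// Sketch: fill the [PLACEHOLDER] tokens in a saved prompt template. Unfilled
// placeholders are reported so a half-complete prompt never gets pasted.
export function fillTemplate(
  template: string,
  values: Record<string, string>
): { prompt: string; missing: string[] } {
  const missing: string[] = [];
  const prompt = template.replace(/\[([A-Z ]+)\]/g, (match, key: string) => {
    const value = values[key];
    if (value !== undefined) return value;
    missing.push(key);
    return match; // leave the placeholder visible if no value was provided
  });
  return { prompt, missing };
}
```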

What Changed from 2024 to 2025

The big improvements in AI prompting:

  1. Context windows are massive: I can now paste entire files and multiple examples without hitting limits

  2. Better at following patterns: When I show an example, 2025 models copy the style much better than 2024 models did

  3. Multi-turn conversations work better: Iterative prompting (ask, refine, ask again) is much smoother

  4. Understands modern frameworks: Next.js 14, React Server Components, Bun—AI knows current patterns, not just 2021 best practices

But the core principle hasn't changed: Garbage in, garbage out. Specific prompts get specific results.

Your Action Plan

Want to get better at prompt engineering?

This Week:

Next time you ask AI for code, use the 5-part structure:

  1. What (one sentence)
  2. Stack (specific versions)
  3. Requirements (3-5 bullet points)
  4. Pattern (paste your example)
  5. Watch out for (things to avoid)

Compare results to your old vague prompts. Notice the difference.

This Month:

Create your first 3 prompt templates for:

  1. The type of code you generate most often
  2. Your most common debugging scenario
  3. Your standard refactoring pattern

Save them. Reuse them. Refine them.

Within 3 Months:

Build a personal prompt library with 10-15 templates covering your common tasks.

Track which prompts work best. Refine based on results.

You'll be generating usable code consistently instead of fighting with AI.

Final Thoughts

Prompt engineering isn't about memorizing templates. It's about learning to communicate precisely with AI.

Think of it like talking to a really smart junior developer who knows every programming language and framework but doesn't know your codebase or preferences.

You wouldn't tell a junior dev "write a login function" and expect production-ready code that fits your patterns.

You'd explain:

  • What you're using (stack)
  • What you need (requirements)
  • How you do things (pattern)
  • What to avoid (constraints)

That's all prompt engineering is. Clear communication.

The developers getting the most value from AI in 2025 aren't the ones using the fanciest tools. They're the ones who learned to communicate their needs precisely.

Start with one template. Use it for a week. Refine based on results. Build from there.

In a month, you'll wonder how you ever used AI without proper prompts. The difference is that dramatic.