šŸ›£ļø Road to 10x (Part 3): Next.js Routes, Validation & Deployment

#TypeScript Ā· #Voice AI Ā· #Next.js Ā· #Zod Ā· #API Routes

šŸ“š Series Navigation

Part 1: API Discovery & Type Safety āœ…
Part 2: Building Scalable Clients & the Result Pattern āœ…
Part 3 (this article): Next.js Routes, Validation & Deployment


TL;DR

In Parts 1 & 2, we built a rock-solid foundation:

  • Discovered APIs and created type-safe interfaces
  • Built clients with the Result pattern
  • Architected for scalability

Now we're wiring it all together with:

  • Next.js API routes that voice agents call
  • Zod validation for runtime type safety
  • Self-correcting feedback loops
  • Complete end-to-end testing

This is where everything comes together!


Recap: What We've Built So Far

Part 1: āœ… Type-safe interfaces āœ… API endpoint documentation
Part 2: āœ… External clients āœ… Result pattern āœ… Data transformation

Now: Voice agent → Your API routes → Clients → External APIs

Let's build that API layer!


Step 4: Create the Next.js API Routes

šŸ§‘ā€šŸ’» "Finally, I create clean API routes that the voice agent actually calls:"

// app/api/voice/properties/search/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { z } from 'zod';
import { createRealEstateClient } from '@/lib/real-estate-client';
// Helper that turns Property[] into a spoken sentence (sketched after this snippet; the path is an assumption)
import { formatPropertiesForSpeech } from '@/lib/voice-format';
// Define the request schema
const PropertySearchSchema = z.object({
  bedrooms: z.number().min(1).max(10).optional(),
  maxPrice: z.number().min(0).optional(),
  minPrice: z.number().min(0).optional(),
  location: z.string().min(1).max(100).optional(),
  propertyType: z.enum(['house', 'apartment', 'condo', 'townhouse']).optional(),
});

export async function POST(request: NextRequest) {
  try {
    const body = await request.json();

    // Validate the request body with Zod
    const validatedData = PropertySearchSchema.parse(body);

    // Initialize client with environment variables
    const client = createRealEstateClient({
      apiKey: process.env.REAL_ESTATE_API_KEY!,
      clientId: process.env.REAL_ESTATE_CLIENT_ID!,
      clientSecret: process.env.REAL_ESTATE_CLIENT_SECRET!,
    });

    // Call the client function with validated data
    const result = await client.searchProperties(validatedData);

    // Handle Result pattern
    if (!result.success) {
      return NextResponse.json(
        {
          speechText: result.error,
          error: true
        },
        { status: result.statusCode || 500 }
      );
    }

    // Format for voice output
    const voiceResponse = {
      speechText: formatPropertiesForSpeech(result.data),
      properties: result.data,
      count: result.data.length,
    };

    return NextResponse.json(voiceResponse);
  } catch (error) {
    if (error instanceof z.ZodError) {
      console.log('Validation error:', error);
      return NextResponse.json(
        {
          speechText: "I need valid search criteria to find properties for you.",
          error: true,
          validationErrors: error.errors
        },
        { status: 400 }
      );
    }

    console.error('Property search error:', error);
    return NextResponse.json(
      {
        speechText: "I'm having trouble searching for properties right now. Please try again.",
        error: true
      },
      { status: 500 }
    );
  }
}
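
The route above leans on a formatPropertiesForSpeech helper that never appears in the snippet. Here's a minimal sketch of what it could look like - the '@/lib' paths and the Property field names (bedrooms, propertyType, location, price) are assumptions based on the search schema, so adapt them to your own types from Part 2:

// lib/voice-format.ts - minimal sketch (Property shape and path are assumptions)
import type { Property } from '@/lib/types';

export function formatPropertiesForSpeech(properties: Property[]): string {
  if (properties.length === 0) {
    return "I couldn't find any properties matching your criteria. Want to try different filters?";
  }

  // Summarize the top few results instead of reading out the whole list
  const top = properties.slice(0, 3);
  const summaries = top.map(
    (p) => `a ${p.bedrooms}-bedroom ${p.propertyType} in ${p.location} for ${p.price.toLocaleString()} dollars`
  );

  return `I found ${properties.length} properties matching your criteria. For example: ${summaries.join('; ')}.`;
}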

šŸ™‹ "Hold up - what's this Zod thing? Why do we need schema validation?"

šŸ§‘ā€šŸ’» "Great question! Zod is a TypeScript-first schema validation library. Think of it as a runtime type checker that validates data at the API boundary."


Deep Dive: Runtime Validation with Zod

TypeScript vs Zod - What's the Difference?

šŸ™‹ "But... don't we already have TypeScript types?"

šŸ§‘ā€šŸ’» "Yes, but TypeScript types only exist at compile time. Once your code is running, those types disappear. Zod validates data at runtime - when your API actually receives a request."

The Gap TypeScript Can't Fill:

// TypeScript thinks this is fine at compile time:
interface SearchParams {
  bedrooms: number;
}

// But JSON parsed at runtime is typed as `any`, so TypeScript can't check
// what the voice agent actually sent - e.g. { "bedrooms": "three" }  😱
const params = (await request.json()) as SearchParams;  // we *claim* bedrooms is a number

// The code compiles, then misbehaves at runtime:
const doubled = params.bedrooms * 2;  // "three" * 2 → NaN!

Zod catches this at runtime:

const SearchSchema = z.object({
  bedrooms: z.number()
});

// When the voice agent sends bad data:
try {
  const validated = SearchSchema.parse({ bedrooms: "three" });
} catch (error) {
  // Zod throws with clear error:
  // "Expected number, received string at path bedrooms"
}
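
If you'd rather not throw and catch, Zod also offers safeParse, which returns a discriminated union instead of throwing - it pairs nicely with the Result pattern from Part 2. A quick sketch using the same schema:

const parsed = SearchSchema.safeParse({ bedrooms: "three" });

if (!parsed.success) {
  // parsed.error is a ZodError with the same issue details
  console.log(parsed.error.issues);
  // e.g. [{ path: ['bedrooms'], message: 'Expected number, received string', ... }]
} else {
  // parsed.data is fully typed: { bedrooms: number }
  console.log(parsed.data.bedrooms * 2);
}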

Why Zod is Critical for Voice AI Agents

1. Voice agents make mistakes

They might send "bedrooms": "three" instead of "bedrooms": 3. Zod catches this and provides clear error messages.
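
A related trick worth knowing (not used in the article's schema above): Zod's z.coerce helpers can normalize numeric strings like "3" before validating, while still rejecting input that genuinely isn't a number:

// Handy if agents tend to send numeric strings instead of numbers
const LenientBedrooms = z.object({
  bedrooms: z.coerce.number().min(1).max(10),
});

LenientBedrooms.parse({ bedrooms: "3" });     // → { bedrooms: 3 }
LenientBedrooms.parse({ bedrooms: "three" }); // still throws - "three" doesn't coerce to a valid number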

2. Self-correction loop

When validation fails, the agent sees the error message and can correct itself:

{
  "speechText": "I need valid search criteria to find properties for you.",
  "error": true,
  "validationErrors": [
    {
      "path": ["bedrooms"],
      "message": "Expected number, received string"
    }
  ]
}

The agent reads this and tries again with bedrooms: 3!

3. Prevents downstream failures

Better to fail fast at the API boundary than crash deep in your business logic.

4. Living documentation

The schema IS your API contract. Any developer (or AI agent) can see exactly what's expected:

// This is self-documenting!
bedrooms: z.number().min(1).max(10).optional()
// "bedrooms must be a number between 1-10, and it's optional"

The Console.log Trick

šŸ™‹ "That console.log in the Zod error handler - is that on purpose?"

šŸ§‘ā€šŸ’» "Absolutely! That's not just for debugging - the agent can see those logs and learn from them."

console.log('Validation error:', error);

When the agent sees:

"Expected number, received string"

It knows to convert '3' to 3 next time. It's part of the feedback loop!
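
If the raw ZodError is too noisy in your logs, error.flatten() gives you a compact per-field view - a small optional tweak on the snippet above, not something the route strictly needs:

if (error instanceof z.ZodError) {
  // { formErrors: [], fieldErrors: { bedrooms: ['Expected number, received string'] } }
  console.log('Validation error:', error.flatten());
}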


Clean Separation of Concerns

šŸ™‹ "So the voice agent never directly touches the external API?"

šŸ§‘ā€šŸ’» "Never! It only knows about our /api/voice/properties/search endpoint. Clean separation of concerns."

The Flow:

  1. Voice Agent → Calls /api/voice/properties/search
  2. Your API Route → Validates with Zod, calls client
  3. Your Client → Transforms data, calls external API
  4. External API → Returns messy data
  5. Your Client → Transforms to clean internal format
  6. Your API Route → Formats for voice, returns to agent

Benefits:

  • Voice agent doesn't know about auth, rate limits, or external API quirks
  • External API changes don't affect voice agent code
  • You can swap providers without touching voice logic (the shared client interface is sketched just below)
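
What makes swapping painless is that every client implements the same tiny interface. A minimal sketch of that contract, reusing the Result and Property types from Part 2 (type names and the '@/lib/types' path are assumptions):

// lib/property-client.ts - the contract every provider client satisfies (sketch)
import type { Result, Property, PropertySearchParams } from '@/lib/types';

export interface PropertyClient {
  searchProperties(params: PropertySearchParams): Promise<Result<Property[]>>;
}

// Both createRealEstateClient and createZillowClient return a PropertyClient,
// so the API route only ever depends on this interface.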

Step 5: Test the Complete Flow

šŸ§‘ā€šŸ’» "Before deploying, I test the entire chain in my .rest file:"

### Test Voice API Route
POST {{base_url}}/api/voice/properties/search
Content-Type: application/json

{
  "bedrooms": 3,
  "maxPrice": 500000,
  "location": "downtown"
}

### Expected Response:
# {
#   "speechText": "I found 5 properties matching your criteria...",
#   "properties": [...],
#   "count": 5
# }

### Test Validation Error ("bedrooms" sent as a string instead of a number)
POST {{base_url}}/api/voice/properties/search
Content-Type: application/json

{
  "bedrooms": "three",
  "maxPrice": 500000
}

### Expected Error Response:
# {
#   "speechText": "I need valid search criteria to find properties for you.",
#   "error": true,
#   "validationErrors": [
#     {
#       "path": ["bedrooms"],
#       "message": "Expected number, received string"
#     }
#   ]
# }

### Test Rate Limiting (rapid requests)
POST {{base_url}}/api/voice/properties/search
Content-Type: application/json

{
  "bedrooms": 3,
  "maxPrice": 500000
}

### Should automatically retry after 429

What you're testing:

āœ… Happy path - valid requests work
āœ… Validation - bad data returns helpful errors
āœ… Rate limiting - interceptor handles 429s
āœ… Error messages - voice-friendly responses
āœ… End-to-end - complete flow works (you can also script this check, as sketched below)
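
For a quick scripted version of the same end-to-end check, here's a sketch using Node 18+'s built-in fetch - the base URL and script path are assumptions, match them to your dev setup:

// scripts/smoke-test.ts - hit the route and assert the voice-friendly shape
const BASE_URL = process.env.BASE_URL ?? 'http://localhost:3000';

async function smokeTest() {
  const res = await fetch(`${BASE_URL}/api/voice/properties/search`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ bedrooms: 3, maxPrice: 500000, location: 'downtown' }),
  });

  const body = await res.json();

  // Every response - success or error - must carry speechText the agent can read aloud
  if (typeof body.speechText !== 'string') {
    throw new Error(`Missing speechText in response: ${JSON.stringify(body)}`);
  }

  console.log(`āœ… ${res.status}: ${body.speechText}`);
}

smokeTest().catch((err) => {
  console.error('Smoke test failed:', err);
  process.exit(1);
});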


The Complete Architecture in Action

šŸ™‹ "And this whole architecture scales?"

šŸ§‘ā€šŸ’» "Like a dream! Let me show you what happens when we add a second provider..."

Adding a New API Provider

// lib/zillow-client.ts (NEW FILE - 2 hours of work)
export const createZillowClient = (config: ZillowConfig) => {
  // Same interface, different implementation
  const searchProperties = async (params: PropertySearchParams): Promise<Result<Property[]>> => {
    // Zillow-specific logic (zillowAPI stands in for the provider's SDK or HTTP calls)
    const data = await zillowAPI.search(params);
    return { success: true, data: transformZillowToProperty(data) };
  };

  return { searchProperties };
};

// In your route (ONE LINE CHANGE):
const client = useZillow
  ? createZillowClient(zillowConfig)
  : createRealEstateClient(realEstateConfig);

// Everything else stays the same!
const result = await client.searchProperties(validatedData);

Your voice agent code? Zero changes. āœ…
Your API route? Zero changes. āœ…
Your types? Zero changes. āœ…

New code: One client file. That's it.


Key Takeaways: The Full Picture

What You've Built

Layer 1: Type Foundation

  • TypeScript interfaces for all data structures
  • Clear separation: external vs internal types
  • Compile-time safety everywhere

Layer 2: Client Architecture

  • Result pattern for explicit error handling
  • External → internal data transformation
  • Rate limiting and retry logic
  • Multi-provider support built-in

Layer 3: API Routes

  • Zod validation for runtime safety
  • Voice-friendly error messages
  • Self-correcting feedback loops
  • Clean separation of concerns

The Benefits

For Users:

  • Reliable voice agents that handle errors gracefully
  • Fast responses (multi-provider, parallel calls)
  • Consistent experience across data sources

For Developers:

  • Type safety at compile-time AND runtime
  • External API changes don't cascade
  • Add new providers in hours, not weeks
  • Debug one layer at a time
  • Sleep well at night 😓

Your Action Plan

Ready to build this yourself? Here's the checklist:

Phase 1: Discovery (1-2 hours)

  • Create .rest file and test all API endpoints
  • Document request/response formats
  • Note quirks: rate limits, auth, weird fields

Phase 2: Types (1 hour)

  • Define TypeScript interfaces for config, requests, responses
  • Separate external types from internal types

Phase 3: Client (3-4 hours)

  • Build client functions returning Result types
  • Add transformation logic (external → internal)
  • Implement error handling and rate limiting

Phase 4: Routes (2-3 hours)

  • Create Next.js API routes with Zod validation
  • Format responses for voice agents
  • Test complete flow with .rest file

Phase 5: Scale (as needed)

  • Add new providers by creating client files
  • Keep the same internal interface
  • Voice routes remain untouched!

Total time: 8-12 hours for first implementation. Each additional provider: 2-4 hours.


What's Next?

This trilogy covered the architecture. Want to see it in action?

Coming soon:

  • Building the actual voice AI agent
  • Connecting to this API architecture
  • Advanced patterns: caching, webhooks, streaming responses

Series Wrap-Up

Part 1: Built the foundation with API discovery and type-safe interfaces
Part 2: Architected scalable clients with the Result pattern
Part 3: Wired everything together with Next.js routes and Zod validation

You now have a production-ready, scalable architecture for voice AI integrations! šŸš€

No more guessing, no more surprises - just reliable, scalable voice AI integrations!


← Part 2: Building Scalable Clients

Thanks for following along! If you build something with this architecture, I'd love to hear about it.

- Seif šŸ§‘ā€šŸ’»