Integrate Narev with Portkey for LLM Cost Optimization

Use Narev to test and validate model configurations before deploying them to Portkey. Reduce LLM costs by up to 99% while maintaining quality through systematic A/B testing.

Portkey manages your infrastructure. Narev tells you what to deploy. Portkey gives you observability, reliability, and control over your LLM applications. But which models should you route to? What's the actual cost difference? Will quality suffer if you switch? Narev answers these questions before you change production.

The Problem with Portkey Alone

Portkey is an excellent AI gateway—it provides observability, fallbacks, load balancing, and caching for your LLM infrastructure. But that infrastructure still needs optimization: you need to know what to route.

With dozens of models to choose from and complex routing strategies, teams often:

  • Stick with expensive defaults (GPT-4) because switching feels risky
  • Test models manually by deploying to production and hoping for the best
  • Guess at which model offers the best cost-quality-latency tradeoff
  • Miss optimization opportunities because testing is time-consuming

The result? Most teams overspend on LLMs by 10-100x because they lack systematic testing.

How Narev + Portkey Work Together

Narev and Portkey complement each other perfectly:

Tool    | Purpose                                                                  | When You Use It
--------|--------------------------------------------------------------------------|------------------------------
Narev   | Test models systematically to find the optimal configuration            | Before changing production
Portkey | Manage production LLM infrastructure with observability and reliability | In production, after testing

The workflow:

  1. Export production traces from Portkey's observability dashboard
  2. Test alternative configurations in Narev with A/B experiments
  3. Deploy winners to Portkey with confidence
  4. Monitor results using Portkey's analytics and repeat continuously

Integration Guide

Step 1: Export Your Portkey Usage Data

Narev works with your existing Portkey logs to create realistic test scenarios. Export your recent prompts, model selections, and response patterns from Portkey's observability dashboard to build experiments that reflect your actual production workload.
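
If you don't yet have a pipeline for this, a small script can distill an exported log file into a prompt set. The sketch below assumes a generic JSON export with a request.messages array per record; the field names are illustrative, not a fixed Portkey or Narev schema, so adjust them to match your actual export.

// transform-logs.ts: a minimal sketch. Field names are assumptions about the
// shape of your log export, not a specific Portkey or Narev schema.
import { readFileSync, writeFileSync } from 'node:fs';

interface LogRecord {
  model: string;
  request: { messages: { role: string; content: string }[] };
}

const logs: LogRecord[] = JSON.parse(readFileSync('portkey-logs.json', 'utf8'));

// Keep the last user message from each traced request as a test prompt.
const prompts = logs
  .map((log) => log.request.messages.filter((m) => m.role === 'user').at(-1)?.content)
  .filter((p): p is string => Boolean(p));

writeFileSync('test-prompts.json', JSON.stringify(prompts, null, 2));
console.log(`Extracted ${prompts.length} prompts from ${logs.length} logged requests`);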

Step 2: Create Your First Experiment

Let's say you're currently running claude-3-5-haiku-20241022 through Portkey and want to explore whether gpt-4o-mini offers better performance at a lower cost.

Create an experiment in Narev testing:

Variant A (Baseline): claude-3-5-haiku-20241022 (current production model)

  • Cost: $35.85/1M requests
  • Latency: 713.4ms
  • Quality: 60%

Variant B: gpt-4o-mini (alternative to test)

  • Cost: $18.36/1M requests (49% cheaper)
  • Latency: 623.4ms (13% faster)
  • Quality: 80% (33% better)

Narev will test both variants on the same prompts and measure:

  • Cost per request and per million tokens
  • Latency (time to first token, total response time)
  • Quality (accuracy, completeness, tone)
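
Conceptually, the experiment is just the two variants above plus the prompt set from Step 1 and the metrics to score. As a rough sketch only (the object shape below is illustrative, not Narev's actual configuration schema):

// Illustrative only: not Narev's real schema, just a way to picture the A/B setup.
const experiment = {
  name: 'haiku-vs-gpt-4o-mini',
  prompts: 'test-prompts.json', // prompt set exported from Portkey in Step 1
  variants: [
    { id: 'A', model: 'claude-3-5-haiku-20241022' }, // baseline: current production model
    { id: 'B', model: 'gpt-4o-mini' },               // alternative to test
  ],
  metrics: ['cost', 'latency', 'quality'],
};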

Step 3: Analyze Results with Confidence

Narev provides clear data on which model performs best:

[Screenshot: Variant comparison results showing cost, quality, and latency metrics]

Step 4: Update Your Portkey Configuration

With data-backed confidence, update your Portkey integration:

// Before: Using Claude 3.5 Haiku
import Portkey from 'portkey-ai';

const portkey = new Portkey({
  apiKey: process.env.PORTKEY_API_KEY,
  virtualKey: process.env.ANTHROPIC_VIRTUAL_KEY,
});

const response = await portkey.chat.completions.create({
  model: "claude-3-5-haiku-20241022", // ← Old default
  messages: [...],
});

// After: Switch to GPT-4o mini based on Narev results
const portkey = new Portkey({
  apiKey: process.env.PORTKEY_API_KEY,
  virtualKey: process.env.OPENAI_VIRTUAL_KEY, // ← Updated virtual key
});

const response = await portkey.chat.completions.create({
  model: "gpt-4o-mini", // ← Tested winner
  messages: [...],
});
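
Keeping both virtual keys configured in Portkey makes the change easy to roll back if production metrics ever diverge from what the experiment predicted.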

Step 5: Monitor and Iterate

Portkey's analytics dashboard will show you the real-world performance. Use Narev to:

  • Test new models before adding them to Portkey configs
  • Experiment with prompt variations
  • Validate routing strategies and fallback configurations
  • A/B test temperature and parameter changes
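
Taking the last point as an example, a parameter change is tested the same way as a model change: validate it in Narev first, then carry the winning value into your Portkey call. A minimal sketch, assuming the client from Step 4 and a hypothetical result favoring a lower temperature:

const response = await portkey.chat.completions.create({
  model: "gpt-4o-mini",
  temperature: 0.3, // ← value validated in a Narev experiment before touching production
  messages: [{ role: "user", content: "Summarize this support ticket: ..." }],
});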

Why Test Before Deploying to Portkey?

Without Narev: Risky Approach

  1. "Should we try Claude instead of GPT-4?"
  2. Deploy directly to Portkey production
  3. Hope quality doesn't drop
  4. Wait days/weeks for enough data
  5. Quality issues surface → rollback
  6. Lost time + degraded user experience 💸

With Narev: Data-Driven Approach

  1. "Should we try Claude instead of GPT-4?"
  2. Test in Narev with production-like prompts
  3. Get results in minutes with statistical confidence
  4. Deploy winner to Portkey ✅
  5. Monitor with confidence
  6. Realize savings immediately 💰

Portkey Features Narev Helps You Optimize

1. Model Selection

Portkey gives you: Access to all major LLM providers
Narev tells you: Which model actually works best for your use case

2. Load Balancing

Portkey gives you: Automatic load balancing across providers
Narev tells you: Which models to include in your load balancer for optimal cost-quality balance
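
For example, if an experiment shows two models are both above your quality bar, you can split traffic between them by weight. The sketch below uses Portkey's config-based routing (a strategy plus weighted targets); treat the exact field names and the weights as something to verify against Portkey's config documentation for your SDK version.

import Portkey from 'portkey-ai';

// Weighted split between two Narev-validated models (weights are illustrative).
const portkey = new Portkey({
  apiKey: process.env.PORTKEY_API_KEY,
  config: {
    strategy: { mode: 'loadbalance' },
    targets: [
      { virtual_key: process.env.OPENAI_VIRTUAL_KEY, weight: 0.7 },    // gpt-4o-mini
      { virtual_key: process.env.ANTHROPIC_VIRTUAL_KEY, weight: 0.3 }, // claude-3-5-haiku
    ],
  },
});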

3. Fallback Configuration

Portkey gives you: Automatic fallback routing when primary models fail
Narev tells you: Which fallback models maintain quality without breaking budget
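
A fallback chain can be expressed the same way. The config object below follows Portkey's fallback strategy format; the ordering (cheapest validated model first, tested backup second) is what Narev results would inform.

// Portkey config object: attach it to the client or save it in the Portkey
// dashboard. Target order reflects the Narev-validated ranking.
const fallbackConfig = {
  strategy: { mode: 'fallback' },
  targets: [
    { virtual_key: process.env.OPENAI_VIRTUAL_KEY },    // primary: gpt-4o-mini
    { virtual_key: process.env.ANTHROPIC_VIRTUAL_KEY }, // backup: claude-3-5-haiku
  ],
};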

4. Caching Strategy

Portkey gives you: Semantic caching to reduce costs
Narev tells you: How much you can save with cheaper models + caching combined
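
Combining the two is often where the biggest savings show up: route to the cheaper validated model and let Portkey's cache absorb repeated prompts. A sketch, assuming cache settings (mode and max_age) live in the same config object; check your Portkey plan and config reference for the exact fields.

// Cheaper validated model plus semantic caching. The cache fields are
// assumptions to verify against Portkey's config reference.
const cachedConfig = {
  virtual_key: process.env.OPENAI_VIRTUAL_KEY, // gpt-4o-mini
  cache: { mode: 'semantic', max_age: 3600 },  // cache hits cost next to nothing
};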

5. Virtual Keys Management

Portkey gives you: Centralized API key management
Narev tells you: Which provider keys to prioritize based on actual performance

Common Portkey + Narev Use Cases

🎯 Model Migration

Test whether switching from GPT-4 to Claude-3.5 or GPT-4o-mini maintains quality for your specific prompts before updating Portkey config

⚡ Latency Optimization

Compare models to find the fastest option, then configure Portkey's load balancer to prioritize low-latency providers

💰 Cost Reduction

Systematically test cheaper alternatives to expensive defaults and validate they meet your quality bar before deploying

🔧 Fallback Strategy

Test which fallback models maintain quality when primary models fail, optimizing Portkey's reliability features

Pricing: Narev + Portkey

Portkey pricing: Free tier available, paid plans based on usage
Narev pricing: Free for experimentation, no fees on top of your model costs

Combined value: Test $1 worth of prompts in Narev to validate a configuration that saves $10,000/month in Portkey production costs.

Getting Started

Step 1: Sign Up for Narev

Sign up - no credit card required.

Step 2: Export Data from Portkey

Export your prompts and traces from Portkey's observability dashboard to create your first experiment.

Step 3: Run Your First Test

Compare your current model against 2-3 alternatives. Results in minutes.

Step 4: Deploy Winners

Update your Portkey configuration with confidence based on real data.

Start Optimizing Your Portkey Costs Today

Stop guessing which models to use. Start testing systematically.