Integrate Narev with Weights & Biases Weave for LLM Cost Optimization

Import production traces from W&B Weave into Narev to test and validate model optimizations. Reduce LLM costs by up to 99% using real production data through systematic A/B testing.

Weave shows you what's happening. Narev shows you what to change. W&B Weave captures every LLM interaction in production, giving you visibility into costs, latency, and performance. Narev uses those exact traces to test optimizations before you deploy them.

The Problem with Observability Alone

Weights & Biases Weave is an excellent LLM observability platform—it gives you complete visibility into your production LLM usage. You can see exactly:

  • Which prompts are most expensive
  • Where latency bottlenecks occur
  • Which models you're using and how often
  • Total costs broken down by endpoint, user, or feature
  • Detailed performance metrics and token usage

But observability alone doesn't solve the problem. Seeing the problem isn't the same as fixing it.

When Weave shows you're spending $10,000/month on GPT-4, you're left wondering:

  • Can I switch to a cheaper model without breaking quality?
  • Which of the 400+ available models would work for my specific use case?
  • Will GPT-4o Mini handle my prompts as well as GPT-4?
  • Should I adjust my prompts or change models?

The result? Teams have full observability but still overspend by 10-100x because they lack a systematic way to test alternatives.

How Narev + W&B Weave Work Together

Narev and W&B Weave are the perfect pairing for LLM optimization:

| Tool | Purpose | What It Tells You |
|------|---------|-------------------|
| W&B Weave | Monitor production LLM usage | "You're spending $10K/month on GPT-4" |
| Narev | Test alternatives systematically | "Switch to GPT-4o Mini and save $9K/month" |

The workflow:

  1. Monitor production with W&B Weave to identify optimization opportunities
  2. Import traces from Weave into Narev
  3. Test alternative models, prompts, and parameters with A/B experiments
  4. Deploy validated optimizations to production with confidence
  5. Verify improvements in Weave and repeat

Integration Guide

Step 1: Export Production Traces from W&B Weave

Narev integrates directly with W&B Weave to import your production traces. These traces become the test dataset for your experiments—ensuring you're testing against real-world usage patterns.

To connect W&B Weave:

  1. In Narev, go to Import Traces

  2. Select W&B Weave as your provider

  3. Enter your W&B Weave project credentials:

    • Entity: Your W&B entity (username or team name)
    • Project Name: Your Weave project identifier
    • API Key: Your W&B API key (available at wandb.ai/authorize)
  4. Select your date range (default: last 7 days)

  5. Click Save Project to import traces

Import traces from W&B Weave interface

Narev will import your prompts, model configurations, and usage patterns to create realistic test scenarios.
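Conceptually, each imported trace becomes one test case. As an illustration only, assuming traces arrive as simple records (the field names `op_name`, `inputs`, `model`, and `usage` below are hypothetical, not the actual Weave or Narev schema), the conversion might look like:

```python
# Sketch: turn imported Weave trace records into a replayable test set.
# The record shape here is an illustrative assumption, not a real schema.

def traces_to_test_cases(traces):
    """Keep only the fields needed to replay each call against another model."""
    cases = []
    for t in traces:
        cases.append({
            "prompt": t["inputs"]["message"],
            "model": t["model"],
            "prompt_tokens": t["usage"]["prompt_tokens"],
        })
    return cases

production_traces = [
    {"op_name": "generate_response",
     "inputs": {"message": "Where is my order?"},
     "model": "claude-3-5-haiku-20241022",
     "usage": {"prompt_tokens": 12, "completion_tokens": 48}},
]

test_cases = traces_to_test_cases(production_traces)
print(test_cases[0]["model"])  # the production model these prompts ran on
```

The point is that nothing synthetic is involved: the prompts, models, and token counts all come straight from production.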

Step 2: Identify Optimization Opportunities

Use W&B Weave to spot areas where optimization would have the biggest impact:

💰 High-Cost Operations

Which operations or chains consume the most tokens? These are prime candidates for model switching.

⚡ Latency Bottlenecks

Where are users waiting? Test faster models to improve response times.

📊 High-Volume Calls

Which operations run most frequently? Small optimizations here yield big savings.
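Finding these candidates boils down to grouping spend by operation. A minimal sketch, assuming you have per-call cost figures exported from Weave (the operation names and dollar amounts are made up):

```python
from collections import defaultdict

# Sketch: rank operations by total spend to pick optimization targets.
# Each call is an (operation_name, cost_usd) pair; values are illustrative.
calls = [
    ("summarize_ticket", 0.012),
    ("generate_response", 0.034),
    ("generate_response", 0.031),
    ("classify_intent", 0.002),
]

totals = defaultdict(float)
for op, cost in calls:
    totals[op] += cost

ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
for op, total in ranked:
    print(f"{op}: ${total:.3f}")
# generate_response tops the list, so it is the first candidate to test.
```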

Step 3: Create Experiments with Real Production Data

Let's say Weave shows you're spending heavily on a customer support feature running Claude 3.5 Haiku. Import those traces into Narev and test alternatives:

Create an experiment comparing:

Variant A (Current)

  • Model: claude-3-5-haiku-20241022 (your production model from Weave traces)
  • Avg cost: $35.85/1M requests
  • Avg latency: 713.4ms
  • Quality: 60%

Variant B (Test)

  • Model: gpt-4o-mini (alternative to test)
  • Projected cost: $18.36/1M requests (49% cheaper)
  • Latency and quality: to be measured

Narev will run both variants on your actual production prompts from Weave and measure:

  • Cost savings in dollars and percentage
  • Latency differences (time to first token, total time)
  • Quality metrics (accuracy, completeness, formatting)
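In code form, such an experiment is essentially two configurations run over the same prompts. Narev handles this for you; the structure below is a purely illustrative sketch:

```python
# Sketch of an A/B experiment definition: same prompts, two model configs.
# The dict layout and dataset name are illustrative, not Narev's actual API.
experiment = {
    "name": "support-bot-model-swap",
    "dataset": "weave-import-last-7-days",  # traces imported in Step 1
    "variants": {
        "A": {"model": "claude-3-5-haiku-20241022"},  # current production
        "B": {"model": "gpt-4o-mini"},                # candidate
    },
    "metrics": ["cost", "latency", "quality"],
}

# Every prompt in the dataset is sent to both variants, so the comparison
# is paired: differences come from the model, not from the inputs.
print(sorted(experiment["variants"]))
```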

Step 4: Analyze Results with Statistical Confidence

Narev provides clear, data-backed answers:

Variant comparison showing cost, quality, and latency metrics

Example results:

  • GPT-4o Mini costs 49% less ($18.36 vs $35.85 per 1M requests)
  • Quality improved by 33% (80% vs 60%)
  • Latency improved by 13% (623.4ms vs 713.4ms)

Projected savings: Based on your Weave volume data, switching to GPT-4o Mini reduces costs by nearly 50% while improving both quality and latency.
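These percentages follow directly from the raw figures in the example above, and are easy to double-check:

```python
# Recompute the example deltas from the raw figures above.
cost_a, cost_b = 35.85, 18.36           # $ per 1M requests
quality_a, quality_b = 0.60, 0.80
latency_a, latency_b = 713.4, 623.4     # ms

cost_saving = (cost_a - cost_b) / cost_a            # ≈ 0.488 → "49% less"
quality_gain = (quality_b - quality_a) / quality_a  # ≈ 0.333 → "33% better"
latency_gain = (latency_a - latency_b) / latency_a  # ≈ 0.126 → "13% faster"

print(f"cost: -{cost_saving:.0%}, quality: +{quality_gain:.0%}, "
      f"latency: -{latency_gain:.0%}")
```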

Step 5: Deploy and Monitor

With validated results, confidently deploy your optimization:

# Before: Current model from Weave traces
import weave
from anthropic import Anthropic

weave.init("your-entity/your-project")  # your Weave project
client = Anthropic()

@weave.op()
def generate_response(message: str) -> str:
    response = client.messages.create(
        model="claude-3-5-haiku-20241022",  # ← Old model
        messages=[{"role": "user", "content": message}],
        max_tokens=1024
    )
    return response.content[0].text

# After: Switch to validated alternative
from openai import OpenAI

client = OpenAI()

@weave.op()
def generate_response(message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # ← Tested winner
        messages=[{"role": "user", "content": message}],
        max_tokens=1024
    )
    return response.choices[0].message.content

Monitor the impact in W&B Weave:

  • Cost reduction appears immediately in your Weave dashboards
  • Track quality through user feedback and error rates
  • Compare before/after metrics to validate experiment predictions

Step 6: Continuous Optimization

Use this workflow continuously:

  1. Weekly: Review Weave for new optimization opportunities
  2. Test: Import the highest-cost traces into Narev
  3. Validate: Run experiments on new models or prompt variations
  4. Deploy: Roll out proven optimizations
  5. Repeat: As new models launch or usage patterns change

Why Import from W&B Weave?

✅ Test with Real Data

Your Weave traces represent actual production usage. Testing on real prompts ensures results translate to production.

✅ Realistic Volume Projections

Weave shows request volume. Narev multiplies per-request savings by actual volume for accurate ROI estimates.
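That projection is simple arithmetic once you know the volume. A hedged sketch (the monthly request count here is a made-up assumption; in practice it comes from your Weave dashboards):

```python
# Sketch: project monthly savings from per-request costs and Weave volume.
cost_current = 35.85 / 1_000_000    # $ per request (from the example above)
cost_candidate = 18.36 / 1_000_000
monthly_requests = 5_000_000        # hypothetical volume read from Weave

monthly_savings = (cost_current - cost_candidate) * monthly_requests
print(f"projected savings: ${monthly_savings:.2f}/month")
```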

✅ Representative Edge Cases

Production traces include the weird prompts, long conversations, and edge cases synthetic tests miss.

✅ Zero Setup Time

If you're already using Weave, your test data is ready. No need to create synthetic datasets.

The W&B Weave → Narev → Production Loop

Without Narev: Risky Guesswork

  1. Weave shows high GPT-4 costs
  2. "Maybe a cheaper model would work?"
  3. Deploy to production and hope
  4. Wait weeks for statistically significant data
  5. Quality issues surface → rollback
  6. Lost time + user complaints 💸

With Narev: Data-Driven Confidence

  1. Weave shows high GPT-4 costs
  2. Import traces to Narev
  3. Test alternatives on actual production prompts
  4. Get results in 10 minutes with confidence
  5. Deploy winner ✅
  6. Verify savings in Weave 💰

Common W&B Weave + Narev Use Cases

🎯 Model Migration

Weave shows you're using expensive models. Narev tests which operations can safely switch to GPT-4o Mini for better performance and lower costs.

⚡ Latency Optimization

Weave identifies slow operations. Narev tests faster models while ensuring quality doesn't drop.

💰 Cost Attribution

Weave breaks down costs by operation. Narev optimizes each operation independently based on its specific traces.

🔧 Chain Optimization

Weave shows expensive chains or agents. Narev A/B tests different models or prompt configurations on real data.

Getting Started

Step 1: Set Up W&B Weave (if not already)

If you're not using Weave yet, sign up for free and add the Weave SDK to your application for observability.

Step 2: Sign Up for Narev

Sign up for Narev - no credit card required.

Step 3: Connect Your W&B Weave Project

Import your traces using your Weave credentials (entity, project name, API key). Results available immediately.

Step 4: Run Your First Experiment

Compare your current model from Weave against 2-3 cheaper alternatives. Get results in minutes.

Step 5: Deploy and Verify

Update your production code with the winning configuration. Watch savings appear in your Weave dashboard.

Start Optimizing Today

Stop wondering if you can reduce costs. Start testing systematically with your real production data.