Integrate Narev with Langfuse for LLM Cost Optimization
Import production traces from Langfuse into Narev to test and validate model optimizations. Reduce LLM costs by up to 99% by A/B testing alternatives against real production data.
Langfuse shows you what's happening. Narev shows you what to change. Langfuse captures every LLM interaction in production, giving you visibility into costs, latency, and user patterns. Narev uses those exact traces to test optimizations before you deploy them.
The Problem with Observability Alone
Langfuse is an excellent LLM observability platform—it gives you complete visibility into your production LLM usage. You can see exactly:
- Which prompts are most expensive
- Where latency bottlenecks occur
- Which models you're using and how often
- Total costs broken down by endpoint, user, or feature
But observability alone doesn't solve the problem. Seeing the problem isn't the same as fixing it.
When Langfuse shows you're spending $10,000/month on GPT-4, you're left wondering:
- Can I switch to a cheaper model without breaking quality?
- Which of the 400+ available models would work for my specific use case?
- Will GPT-4o Mini handle my prompts as well as GPT-4?
- Should I adjust my prompts or change models?
The result? Teams have full observability but still overspend by 10-100x because they lack a systematic way to test alternatives.
How Narev + Langfuse Work Together
Narev and Langfuse are the perfect pairing for LLM optimization:
| Tool | Purpose | What It Tells You |
|---|---|---|
| Langfuse | Monitor production LLM usage | "You're spending $10K/month on GPT-4" |
| Narev | Test alternatives systematically | "Switch to GPT-4o Mini and save $9K/month" |
The workflow:
1. Monitor production with Langfuse to identify optimization opportunities
2. Import traces from Langfuse into Narev
3. Test alternative models, prompts, and parameters with A/B experiments
4. Deploy validated optimizations to production with confidence
5. Verify improvements in Langfuse and repeat
Integration Guide
Step 1: Export Production Traces from Langfuse
Narev integrates directly with Langfuse to import your production traces. These traces become the test dataset for your experiments—ensuring you're testing against real-world usage patterns.
To connect Langfuse:
1. In Narev, go to Import Traces
2. Select Langfuse as your provider
3. Enter your Langfuse project credentials:
   - Project Name: Your Langfuse project identifier
   - Secret Key: Your Langfuse secret key (sk-lf-...)
   - Public Key: Your Langfuse public key (pk-lf-...)
   - Host: Your Langfuse instance URL (default: https://cloud.langfuse.com)
4. Select your date range (default: last 7 days)
5. Click Save Project to import traces
Narev will import your prompts, model configurations, and usage patterns to create realistic test scenarios.
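If you want to sanity-check what will be imported before connecting, the same data is reachable through Langfuse's public traces API using the keys above. A minimal Node/TypeScript sketch, assuming the hosted cloud URL (self-hosted instances swap in their own host) and an illustrative date filter:

```typescript
// Minimal sketch: listing recent traces via the Langfuse public API.
// Auth is HTTP Basic: public key as username, secret key as password.
const auth = Buffer.from(
  `${process.env.LANGFUSE_PUBLIC_KEY}:${process.env.LANGFUSE_SECRET_KEY}`,
).toString('base64');

const res = await fetch(
  'https://cloud.langfuse.com/api/public/traces?limit=50&fromTimestamp=2025-01-01T00:00:00Z',
  { headers: { Authorization: `Basic ${auth}` } },
);

const { data } = await res.json(); // array of trace objects (prompts, costs, latencies)
console.log(`Fetched ${data.length} traces`);
```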
Step 2: Identify Optimization Opportunities
Use Langfuse to spot areas where optimization would have the biggest impact:
💰 High-Cost Endpoints
Which features or endpoints consume the most tokens? These are prime candidates for model switching.
⚡ Latency Bottlenecks
Where are users waiting? Test faster models to improve response times.
📊 High-Volume Prompts
Which prompts run most frequently? Small optimizations here yield big savings.
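If you prefer to script this triage from exported traces, the ranking is a simple aggregation. A sketch in TypeScript (the Trace shape here is a simplified assumption, not Langfuse's full schema):

```typescript
// Sketch: rank endpoints by total cost to find optimization candidates.
// The Trace shape is a simplified assumption, not the full Langfuse schema.
interface Trace {
  name: string;      // endpoint or feature name
  totalCost: number; // cost in USD for this trace
}

function topCostEndpoints(traces: Trace[], n = 5): [string, number][] {
  const byName = new Map<string, number>();
  for (const t of traces) {
    byName.set(t.name, (byName.get(t.name) ?? 0) + t.totalCost);
  }
  // Sort descending by accumulated cost and keep the top n.
  return [...byName.entries()].sort((a, b) => b[1] - a[1]).slice(0, n);
}
```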
Step 3: Create Experiments with Real Production Data
Let's say Langfuse shows you're spending heavily on a customer support feature running on Claude 3.5 Haiku. Import those traces into Narev and test alternatives:
Create an experiment comparing:
- Variant A (Current): claude-3-5-haiku-20241022
- Variant B (Test): gpt-4o-mini
Narev will run both variants on your actual production prompts from Langfuse and measure:
- Cost savings in dollars and percentage
- Latency differences (time to first token, total time)
- Quality metrics (accuracy, completeness, formatting)
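Conceptually, the experiment is a replay of your imported prompts against each variant. A rough sketch of that loop using the same AI SDK as the rest of this guide (Narev runs this, plus the cost math and statistics, for you; `importedPrompts` and `record` are hypothetical stand-ins, and the `usage` field names assume AI SDK v4):

```typescript
// Rough sketch of an A/B replay over real production prompts.
// `importedPrompts` and `record` are hypothetical stand-ins.
import { anthropic } from '@ai-sdk/anthropic';
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

declare const importedPrompts: string[]; // prompts pulled from Langfuse traces
declare function record(variant: string, result: object): void; // hypothetical sink

const variants = {
  A: anthropic('claude-3-5-haiku-20241022'), // current
  B: openai('gpt-4o-mini'),                  // candidate
};

for (const prompt of importedPrompts) {
  for (const [label, model] of Object.entries(variants)) {
    const start = Date.now();
    const { text, usage } = await generateText({ model, prompt });
    record(label, {
      latencyMs: Date.now() - start,
      totalTokens: usage.totalTokens, // token counts feed the cost comparison
      text,                           // output kept for quality scoring
    });
  }
}
```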
Step 4: Analyze Results with Statistical Confidence
Narev provides clear, data-backed answers:
Example results:
- ✅ GPT-4o Mini costs 49% less ($18.36 vs $35.85 per 1M requests)
- ✅ Quality improved by 33% (80% vs 60%)
- ✅ Latency improved by 13% (623.4ms vs 713.4ms)
Projected savings: Based on your Langfuse volume data, switching to GPT-4o Mini reduces costs by nearly 50% while improving both quality and latency.
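The projection itself is simple arithmetic: the per-million-request cost difference times your request volume from Langfuse. A sketch with the numbers above and an assumed volume of 12M requests/month:

```typescript
// Sketch: projecting monthly savings from the experiment's per-request costs.
// The cost figures come from the example results above; the monthly volume
// is an assumption you'd replace with your Langfuse usage data.
const costPerMillion = { current: 35.85, candidate: 18.36 };
const monthlyRequests = 12_000_000;

const monthlyCost = (perMillion: number) =>
  (perMillion * monthlyRequests) / 1_000_000;

const savings =
  monthlyCost(costPerMillion.current) - monthlyCost(costPerMillion.candidate);
const pct = (savings / monthlyCost(costPerMillion.current)) * 100;

console.log(`Projected savings: $${savings.toFixed(2)}/month (${pct.toFixed(0)}%)`);
// → Projected savings: $209.88/month (49%)
```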
Step 5: Deploy and Monitor
With validated results, confidently deploy your optimization:
```typescript
// Before: Current model from Langfuse traces
import { anthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const result = await generateText({
  model: anthropic('claude-3-5-haiku-20241022'), // ← Old model
  prompt: userMessage,
});
```

```typescript
// After: Switch to validated alternative
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const result = await generateText({
  model: openai('gpt-4o-mini'), // ← Tested winner
  prompt: userMessage,
});
```
Monitor the impact in Langfuse:
- Cost reduction appears immediately in your Langfuse dashboards
- Track quality through user feedback and error rates
- Compare before/after metrics to validate experiment predictions
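One practical way to make the before/after comparison easy is to tag traces with the model and rollout during deployment, so the two cohorts are filterable in Langfuse. A sketch following the classic Langfuse JS SDK (SDK versions differ; the trace name and tag values are illustrative):

```typescript
// Sketch: tagging traces during rollout so before/after cohorts are easy
// to filter in Langfuse dashboards.
import { Langfuse } from 'langfuse';

const langfuse = new Langfuse(); // reads LANGFUSE_* environment variables

const trace = langfuse.trace({
  name: 'support-reply',
  tags: ['model:gpt-4o-mini', 'rollout:haiku-to-4o-mini'],
});
```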
Step 6: Continuous Optimization
Use this workflow continuously:
- Weekly: Review Langfuse for new optimization opportunities
- Test: Import the highest-cost traces into Narev
- Validate: Run experiments on new models or prompt variations
- Deploy: Roll out proven optimizations
- Repeat: As new models launch or usage patterns change
Why Import from Langfuse?
✅ Test with Real Data
Your Langfuse traces represent actual production usage. Testing on real prompts ensures results translate to production.
✅ Realistic Volume Projections
Langfuse shows request volume. Narev multiplies per-request savings by actual volume for accurate ROI estimates.
✅ Representative Edge Cases
Production traces include the weird prompts, long conversations, and edge cases synthetic tests miss.
✅ Zero Setup Time
If you're already using Langfuse, your test data is ready. No need to create synthetic datasets.
The Langfuse → Narev → Production Loop
Without Narev: Risky Guesswork
- Langfuse shows high GPT-4 costs
- "Maybe Claude would be cheaper?"
- Deploy to production and hope
- Wait weeks for statistically significant data
- Quality issues surface → rollback
- Lost time + user complaints 💸
With Narev: Data-Driven Confidence
- Langfuse shows high GPT-4 costs
- Import traces to Narev
- Test Claude on actual production prompts
- Get results in 10 minutes with confidence
- Deploy winner ✅
- Verify savings in Langfuse 💰
Common Langfuse + Narev Use Cases
🎯 Model Migration
Langfuse shows you're using expensive models. Narev tests which endpoints can safely switch to GPT-4o Mini for better performance and lower costs.
⚡ Latency Optimization
Langfuse identifies slow endpoints. Narev tests faster models while ensuring quality doesn't drop.
💰 Cost Attribution
Langfuse breaks down costs by feature. Narev optimizes each feature independently based on its specific traces.
🔧 Prompt Optimization
Langfuse shows expensive prompts. Narev A/B tests shorter prompts or prompt engineering techniques on real data.
Getting Started
Step 1: Set Up Langfuse (if not already)
If you're not using Langfuse yet, sign up for free and add their SDKs to your application for observability.
Step 2: Sign Up for Narev
Sign up for Narev - no credit card required.
Step 3: Connect Your Langfuse Project
Import your traces using your Langfuse credentials. Results available immediately.
Step 4: Run Your First Experiment
Compare your current model from Langfuse against 2-3 cheaper alternatives. Get results in minutes.
Step 5: Deploy and Verify
Update your production code with the winning configuration. Watch savings appear in your Langfuse dashboard.
Start Optimizing Today
Stop wondering if you can reduce costs. Start testing systematically with your real production data.
Next Steps:
- Read the 3-Step FinOps Framework for AI
- Learn how to reduce costs by 99% by switching models
- See how to reduce costs by 24% through prompt optimization
- Explore the OpenRouter + Narev integration for model routing