Documentation Index

Fetch the complete documentation index at: https://narev.ai/docs/llms.txt

Use this file to discover all available pages before exploring further.
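As a sketch of that workflow, the snippet below fetches the index and pulls out the pages it links to. It assumes `llms.txt` uses standard markdown links of the form `[title](url)`; the parsing helper is illustrative, not part of Narev's API.

```python
# Fetch Narev's llms.txt documentation index and list the linked pages.
# Assumes the index uses standard markdown links: [title](url).
import re
from urllib.request import urlopen

INDEX_URL = "https://narev.ai/docs/llms.txt"

def extract_links(text: str) -> list[tuple[str, str]]:
    """Return (title, url) pairs for every markdown link in the text."""
    return re.findall(r"\[([^\]]+)\]\(([^)]+)\)", text)

if __name__ == "__main__":
    with urlopen(INDEX_URL) as resp:
        for title, url in extract_links(resp.read().decode("utf-8")):
            print(f"{title}: {url}")
```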

Narev is a router and experimentation layer that integrates with most providers, gateways, and tracing platforms. If you don't see an integration you need, contact support.

Available integrations

Providers

  • OpenAI - GPT-4 and other OpenAI models

Gateways

  • OpenRouter - Universal LLM gateway with load balancing
  • LiteLLM - Unified interface for 100+ LLMs
  • Portkey - AI gateway with routing and fallbacks
  • Helicone Gateway - AI gateway with routing and fallbacks
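The gateways above all expose an OpenAI-compatible `/chat/completions` endpoint, so switching between them is largely a base-URL change. The sketch below builds such a request with the standard library; the gateway URLs, model name, and placeholder API key are illustrative assumptions, not values from Narev's docs.

```python
# Sketch: build an OpenAI-style chat completion request for any
# OpenAI-compatible gateway. Base URLs below are illustrative assumptions.
import json
from urllib.request import Request, urlopen

GATEWAYS = {
    "openrouter": "https://openrouter.ai/api/v1",
    "litellm": "http://localhost:4000",  # self-hosted LiteLLM proxy (assumed)
}

def chat_request(base_url: str, api_key: str, model: str, prompt: str) -> Request:
    """Build an OpenAI-compatible chat completion request for a gateway."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

if __name__ == "__main__":
    # Hypothetical usage; substitute a real key and model.
    req = chat_request(GATEWAYS["openrouter"], "YOUR_API_KEY", "openai/gpt-4", "Hello")
    with urlopen(req) as resp:
        print(json.load(resp))
```

Because the request shape is identical across gateways, only `base_url` (and possibly the model identifier) changes when you swap providers.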

Observability

  • LLM: Helicone, Langfuse, W&B
  • FinOps: Vantage, CloudZero, Finout

Narev is the missing experimentation layer

A common LLM stack has three layers:

| Layer | Tools | Purpose |
| --- | --- | --- |
| Provider | OpenAI, Claude, Groq | Generates responses |
| Experimentation | Narev | Helps find optimal configuration |
| Gateway | OpenRouter, LiteLLM, Portkey | Provides common interface |
| Observability (LLM) | Helicone, Langfuse, W&B | Shows LLM history |
| Observability (FinOps) | Vantage, CloudZero, Finout | Tracks costs and spend |

How Narev complements existing tools

| Category | How Narev works together | What Narev adds |
| --- | --- | --- |
| Provider (OpenAI, Anthropic, Groq) | Tests providers side-by-side on your prompts | Identifies which provider delivers the best cost/quality/latency |
| Gateway (OpenRouter, LiteLLM, Portkey) | Tests routing strategies and fallback configurations | Determines optimal routing logic and failover paths |
| Observability: LLM (Helicone, Langfuse, W&B) | Imports production traces and runs what-if experiments | Tests configurations before and during production |
| Observability: FinOps (Vantage, CloudZero, Finout) | Connects cost data to quality-aware optimization tests | Turns spend tracking into actionable cost reduction |