Narev provides the infrastructure, middleware, and control plane to accurately measure and monetize AI usage across your entire stack. Whether you are a developer calculating token costs on the fly, a DevOps engineer standardizing cloud infrastructure, or a founder syncing dynamic AI prices to your billing provider, you are in the right place.

Documentation Index
Fetch the complete documentation index at: https://narev.ai/docs/llms.txt
Use this file to discover all available pages before exploring further.
Choose your path
Select the product tier you are integrating to get started.

Narev Cloud
The control plane. Sync AI model prices dynamically, benchmark costs, and integrate natively with Stripe, Lago, and Polar.sh.
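As a rough sketch of what a price sync can look like: the snippet below pulls model prices from a hypothetical Narev Cloud endpoint (`GET /v1/prices` and the `ModelPrice` shape are assumptions; consult the API reference for the real contract) and mirrors each one as a metered Stripe price using the standard stripe-node Prices API.

```ts
import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

// Hypothetical shape of a Narev price-feed entry; the real
// endpoint and field names may differ.
interface ModelPrice {
  model: string;            // e.g. "gpt-4o"
  inputPerMTokUsd: number;  // USD per 1M input tokens
  outputPerMTokUsd: number; // USD per 1M output tokens
}

async function syncPricesToStripe(): Promise<void> {
  // Assumed endpoint; see the Narev Cloud docs for the actual one.
  const res = await fetch("https://api.narev.ai/v1/prices");
  const prices: ModelPrice[] = await res.json();

  for (const p of prices) {
    // One metered Stripe price per model. unit_amount_decimal is in
    // cents, so convert USD per 1M tokens -> cents per token.
    await stripe.prices.create({
      currency: "usd",
      billing_scheme: "per_unit",
      unit_amount_decimal: ((p.inputPerMTokUsd * 100) / 1_000_000).toFixed(10),
      recurring: { interval: "month", usage_type: "metered" },
      product_data: { name: `${p.model} input tokens` },
    });
  }
}
```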
Narev SDK
The app layer. Drop-in Vercel AI SDK middleware to intercept LLM calls and calculate precise token usage on the fly.
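"Drop-in" means wrapping your existing model with the AI SDK's `wrapLanguageModel` helper. A minimal sketch, assuming AI SDK v4 and a hypothetical `narevMiddleware` export from an assumed `@narev/sdk` package:

```ts
import { openai } from "@ai-sdk/openai";
import { generateText, wrapLanguageModel } from "ai";
// Hypothetical package and export name; check the SDK docs.
import { narevMiddleware } from "@narev/sdk";

// Wrap the model once; every call through it is now metered.
const model = wrapLanguageModel({
  model: openai("gpt-4o"),
  middleware: narevMiddleware(),
});

const { text } = await generateText({ model, prompt: "Hello" });
```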
Narev Self-Hosted
The infrastructure layer. A Docker agent that standardizes your raw compute and GPU bills into the FOCUS format.
How the ecosystem fits together
You can use any Narev product independently, but they’re designed to work together as a seamless, end-to-end FinOps pipeline.

Standardize Infra Costs (Self-Hosted)
Deploy the Narev Self-Hosted Docker agent to standardize your underlying cloud costs into the FinOps Open Cost and Usage Specification (FOCUS). This gives you a clear baseline for your Cost of Goods Sold (COGS).
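To make "standardize into FOCUS" concrete, here is an illustrative TypeScript view of a normalized cost record using a handful of well-known FOCUS 1.0 column names. This is a subset for orientation only; the agent's actual output schema may differ.

```ts
// A few FOCUS 1.0 columns; the full specification defines many more.
interface FocusRecord {
  BilledCost: number;        // cost as invoiced for the charge period
  EffectiveCost: number;     // amortized cost after discounts/commitments
  BillingCurrency: string;   // e.g. "USD"
  ChargePeriodStart: string; // ISO 8601 timestamp
  ChargePeriodEnd: string;   // ISO 8601 timestamp
  ProviderName: string;      // e.g. "AWS"
  ServiceName: string;       // e.g. "Amazon EC2"
  ServiceCategory: string;   // e.g. "Compute"
  ResourceId?: string;       // provider-specific resource identifier
  RegionId?: string;         // e.g. "us-east-1"
}
```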
Track App Usage (SDK)
Integrate the Narev SDK into your app. As your users generate AI completions, the middleware tracks the exact token counts and calculates costs instantly.
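Under the hood, middleware like this intercepts each completion, reads the token counts the provider reports, and converts them to dollars. The sketch below is not Narev's actual implementation: it assumes the Vercel AI SDK v4 `LanguageModelV1Middleware` interface and a hypothetical static pricing table standing in for Narev Cloud's dynamic price feed.

```ts
import type { LanguageModelV1Middleware } from "ai";

// Hypothetical pricing in USD per 1M tokens; in practice this would
// come from Narev Cloud's dynamic price feed.
const PRICING = { input: 2.5, output: 10 };

export const usageTrackingMiddleware: LanguageModelV1Middleware = {
  wrapGenerate: async ({ doGenerate }) => {
    const result = await doGenerate();
    const { promptTokens, completionTokens } = result.usage;

    // Convert token counts to dollars for this single completion.
    const costUsd =
      (promptTokens * PRICING.input + completionTokens * PRICING.output) /
      1_000_000;

    // Report wherever your metering lives: a log, a queue, a billing API.
    console.log({ promptTokens, completionTokens, costUsd });

    return result;
  },
};
```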