Router API Reference

OpenAI-compatible endpoint for intelligent request routing to A/B tests.

The Router API routes each request to the appropriate A/B test using the filters and rules you configure in the Narev UI, evaluating message content, metadata, and custom rules.

New to the Router API? Routing rules are configured in the Narev dashboard; responses come from the production variant of the matched A/B test.

Endpoint

POST /api/router/{router_id}/v1/chat/completions

Authentication

Include your Narev API key in the Authorization header:

Authorization: Bearer YOUR_API_KEY
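If you are not using the OpenAI SDK, set the same header on a plain HTTP request. A minimal sketch using the Python requests library, with the placeholder router ID and key shown above:

import requests

# Call the router endpoint directly; the Authorization header carries the API key.
url = "https://narev.ai/api/router/{router_id}/v1/chat/completions"
headers = {"Authorization": "Bearer YOUR_API_KEY"}
payload = {"messages": [{"role": "user", "content": "What is the capital of France?"}]}

response = requests.post(url, json=payload, headers=headers)
print(response.json()["choices"][0]["message"]["content"])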

Setup Example

from openai import OpenAI
 
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://narev.ai/api/router/{router_id}/v1"
)

Request Parameters

Required Parameters

Parameter    Type     Description
messages     array    Array of message objects with role and content

Optional Parameters

Parameter    Type       Description
stream       boolean    Whether to stream the response. Default: false
metadata     object     Custom metadata for routing decisions and tracking
model        string     Ignored; the model is determined by the routed A/B test's production variant

Unlike the Applications API, the Router API ignores model, temperature, top_p, max_tokens, and other generation parameters. These are determined by the routed A/B test's production variant configuration.
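For illustration, the request below passes generation parameters anyway (the OpenAI SDK still requires a model argument); the values have no effect, and the response reports the model the production variant actually used. A minimal sketch, assuming the client from the setup example:

# Generation parameters are accepted for OpenAI compatibility but have no effect;
# the routed A/B test's production variant supplies the real values.
response = client.chat.completions.create(
    model="ignored",      # the SDK requires a value; the router discards it
    temperature=0.9,      # ignored
    max_tokens=50,        # ignored
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.model)  # the model actually used by the production variant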

Metadata Fields

Field            Type    Description
custom fields    any     Any fields used for routing filters and request tracking

Response Format

Non-Streaming Response

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "openai:gpt-4",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Paris is the capital of France."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 7,
    "total_tokens": 20
  }
}

The model field indicates which model was used by the routed A/B test's production variant.

Streaming Response

Streaming responses use the server-sent events (SSE) format; each chunk is prefixed with data: and the stream ends with [DONE]:

data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"openai:gpt-4","choices":[{"index":0,"delta":{"content":"Paris"},"finish_reason":null}]}

data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"openai:gpt-4","choices":[{"index":0,"delta":{"content":" is"},"finish_reason":null}]}

data: [DONE]
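If you consume the stream without the OpenAI SDK, parse each line by stripping the data: prefix and stopping at [DONE]. A minimal sketch using the Python requests library, with the same placeholder URL and key as above:

import json
import requests

url = "https://narev.ai/api/router/{router_id}/v1/chat/completions"
headers = {"Authorization": "Bearer YOUR_API_KEY"}
payload = {
    "messages": [{"role": "user", "content": "Tell me a story."}],
    "stream": True,
}

# Read the SSE stream line by line, strip the "data: " prefix, and stop at [DONE].
with requests.post(url, json=payload, headers=headers, stream=True) as resp:
    for line in resp.iter_lines():
        if not line:
            continue
        data = line.decode("utf-8").removeprefix("data: ")
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        print(delta.get("content") or "", end="")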

Routing Logic

For detailed information on routing filters, best practices, and advanced patterns, see the Routing with the API guide.

Error Responses

All errors return a JSON object with an error field:

{
  "error": {
    "message": "Error description",
    "code": "error_code"
  }
}
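With the OpenAI SDK these errors surface as exceptions rather than raw JSON. A minimal sketch of handling them, assuming the client from the setup example (exception names are those of the OpenAI Python SDK; the router-specific codes appear in the error body):

import openai

try:
    response = client.chat.completions.create(
        model="ignored",  # required by the SDK, ignored by the router
        messages=[{"role": "user", "content": "What is the capital of France?"}],
    )
except openai.AuthenticationError as exc:  # 401 invalid_api_key
    print("Check your Narev API key:", exc)
except openai.APIStatusError as exc:       # other 4xx/5xx router errors
    print("Router error:", exc.status_code, exc.response.json().get("error"))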

Request Examples

Basic Request

response = client.chat.completions.create(
    model="ignored",  # required by the OpenAI SDK; the Router API ignores the value
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)

With System Prompt

response = client.chat.completions.create(
    model="ignored",  # required by the SDK, ignored by the router
    messages=[
        {"role": "system", "content": "You are a helpful geography expert."},
        {"role": "user", "content": "What is the capital of France?"}
    ]
)

With Routing Metadata

response = client.chat.completions.create(
    model="ignored",  # required by the SDK, ignored by the router
    messages=[
        {"role": "user", "content": "Analyze this code"}
    ],
    extra_body={
        "metadata": {
            "user_tier": "premium",
            "task_type": "code_review",
            "complexity": "high"
        }
    }
)

Streaming

stream = client.chat.completions.create(
    model="ignored",  # required by the SDK, ignored by the router
    messages=[
        {"role": "user", "content": "Tell me a story."}
    ],
    stream=True
)
 
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

HTTP Status Codes

Status    Code                      Description
400       bad_request               Invalid request format or parameters
400       no_filters_configured     Router has no filters configured
400       no_production_variant     Matched A/B test has no production variant
401       invalid_api_key           Invalid or missing API key
402       insufficient_credits      Insufficient credits to complete request
404       router_not_found          Router ID not found
500       internal_error            Internal server error

Additional Resources