Step 1: Get Your API Key

You can get your Auto API key from your dashboard.

Step 2: Proxy your requests through Auto

You’ll need to make some simple changes to your LLM call to integrate with Auto.
import OpenAI from "openai";

const openai = new OpenAI({
  // #1: Switch to the Auto API key
  apiKey: process.env.AUTO_API_KEY, 
  // #2: Add the Auto base URL
  baseURL: "https://api.auto.venki.dev/api/v1", 
  // #3: Add the OpenAI API key - optional, see Bringing Your Own Keys for more details
  defaultHeaders: {
    "X-OpenAI-Api-Key": process.env.OPENAI_API_KEY, 
  },
});

const chatResponse = await openai.chat.completions.create(
  {
    model: "gpt-4o",
    messages: ...,
  },
  {
    // #4: Add a prompt ID
    headers: {
      "X-Auto-Prompt-Id": "summarize-webpage", 
    },
  }
);

Choosing a Prompt ID

Auto aggregates results at the prompt ID level and picks the best model for each prompt ID, so choose a unique, descriptive string for each distinct use case in your codebase. If you're unsure whether two use cases should share a prompt ID, err on the side of creating a new one.
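As a sketch of this convention (the header name matches the example above; the helper and the second prompt ID are illustrative, not part of the Auto API), you might centralize prompt IDs in a small helper so each use case passes its own ID:

```typescript
// Hypothetical helper: builds the per-call request options that tag a
// request with an Auto prompt ID. Only "X-Auto-Prompt-Id" comes from the
// docs above; everything else here is an illustrative naming pattern.
function autoOptions(promptId: string): { headers: Record<string, string> } {
  return { headers: { "X-Auto-Prompt-Id": promptId } };
}

// Distinct use cases get distinct IDs, even if their prompts look similar:
const summarizeOpts = autoOptions("summarize-webpage");
const classifyOpts = autoOptions("classify-support-ticket");

// e.g. await openai.chat.completions.create({ ... }, summarizeOpts);
```

Keeping the IDs in one place makes it easy to see every use case your codebase reports to Auto.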

Step 3: You’re All Set! 🎉

You should start seeing logs immediately, and leaderboards in a few days.