SDKs & Libraries
DeployAI is OpenAI-compatible. Use any OpenAI client library — just change the base URL.
No custom SDK required. Since DeployAI implements the OpenAI API format, you can use the official OpenAI SDKs, the Vercel AI SDK, or any OpenAI-compatible library. Just point it at https://api.deployai.dev/v1.
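Because the surface is the standard OpenAI API, endpoints other than chat completions should work with the same client. A minimal sketch, assuming DeployAI also mirrors the standard /v1/models listing endpoint, that prints the models available to your key:
from openai import OpenAI
import os
client = OpenAI(
    base_url="https://api.deployai.dev/v1",
    api_key=os.environ["DEPLOYAI_API_KEY"],
)
# Assumes DeployAI mirrors the standard /v1/models listing endpoint.
for model in client.models.list():
    print(model.id)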
Client Libraries
TypeScript / JavaScript
Package: openai
$ npm install openai
import OpenAI from "openai";
const client = new OpenAI({
  baseURL: "https://api.deployai.dev/v1",
  apiKey: process.env.DEPLOYAI_API_KEY,
});
const completion = await client.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(completion.choices[0].message.content);
Python
Package: openai
$ pip install openai
from openai import OpenAI
import os
client = OpenAI(
    base_url="https://api.deployai.dev/v1",
    api_key=os.environ["DEPLOYAI_API_KEY"],
)
completion = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)
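Streaming uses the standard stream parameter from the OpenAI API. A minimal sketch, reusing the client from the example above and assuming DeployAI forwards the flag to the underlying provider:
stream = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)
# Each chunk carries an incremental delta rather than the full message.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)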
Vercel AI SDK
Package: @ai-sdk/openai
$ npm install ai @ai-sdk/openai
import { streamText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
const deployai = createOpenAI({
  baseURL: "https://api.deployai.dev/v1",
  apiKey: process.env.DEPLOYAI_API_KEY,
});
const result = streamText({
  model: deployai("anthropic/claude-3.5-sonnet"),
  prompt: "Explain quantum computing simply",
});
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
cURL
curl https://api.deployai.dev/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DEPLOYAI_API_KEY" \
  -d '{
    "model": "openai/gpt-4o",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
Framework Integrations
DeployAI works seamlessly with popular AI frameworks.
LangChain
Use DeployAI as a drop-in OpenAI replacement in LangChain pipelines.
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
    base_url="https://api.deployai.dev/v1",
    api_key="sk-your-key",
    model="anthropic/claude-3.5-sonnet",
)
response = llm.invoke("What is the meaning of life?")
print(response.content)
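The same llm object composes into LangChain Expression Language (LCEL) chains like any other chat model. A small sketch reusing the llm above, with an illustrative prompt template:
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise assistant."),
    ("user", "Summarize in one sentence: {text}"),
])
# Piping the prompt into the model builds a runnable chain.
chain = prompt | llm
result = chain.invoke({"text": "DeployAI routes OpenAI-compatible requests."})
print(result.content)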
LlamaIndex
Connect DeployAI to LlamaIndex for RAG and document Q&A.
from llama_index.llms.openai import OpenAI
llm = OpenAI(
    api_base="https://api.deployai.dev/v1",
    api_key="sk-your-key",
    model="openai/gpt-4o",
)
response = llm.complete("Summarize this document...")
print(response)
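Chat-style calls work the same way. A short sketch reusing the llm defined above, with illustrative message contents:
from llama_index.core.llms import ChatMessage
messages = [
    ChatMessage(role="system", content="Answer in one sentence."),
    ChatMessage(role="user", content="What is retrieval-augmented generation?"),
]
# chat() returns a ChatResponse; printing it yields the assistant text.
print(llm.chat(messages))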
Bring Your Own Key (BYOK)
Already have API keys from providers like OpenAI or Anthropic? You can use them directly through DeployAI for centralized routing, logging, and failover, without paying a markup on tokens.
Configure BYOK from your dashboard settings.