Python
Use the official OpenAI Python SDK with Conduit.im — just change two lines of configuration.
Installation
Conduit.im is fully compatible with the official OpenAI Python SDK. Install it with pip:
pip install openai
Configuration
Point the SDK at the Conduit.im API by setting the base_url and using your Conduit.im API key:
import os
from openai import OpenAI
client = OpenAI(
    api_key=os.environ["CONDUIT_API_KEY"],
    base_url="https://api.conduit.im/v1",
)
Note: If you have existing code that uses the OpenAI SDK, these two parameters (api_key and base_url) are the only changes needed to switch to Conduit.im.
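One caveat with the snippet above: os.environ["CONDUIT_API_KEY"] raises a bare KeyError if the variable is unset. If you prefer a friendlier failure, here is a small sketch (require_env is our name, not part of the SDK):

```python
import os

def require_env(name):
    """Return a required environment variable, or fail with a hint
    instead of a bare KeyError."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set; export your Conduit.im API key before running."
        )
    return value

# client = OpenAI(api_key=require_env("CONDUIT_API_KEY"),
#                 base_url="https://api.conduit.im/v1")
```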
Chat Completions
Send a chat completion request just like you would with the OpenAI API:
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
)
print(response.choices[0].message.content)
Streaming
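Each streamed chunk mirrors a chat completion, but choices[0].delta carries only the new text: the first chunk typically holds just the role, the final one may have an empty delta, and some gateways emit chunks with no choices at all. That is why the loop below guards with `or ""`. A defensive helper (delta_text is our name, not an SDK API) capturing the same logic:

```python
def delta_text(chunk):
    """Pull the incremental text out of one streamed chunk, returning
    "" for role-only chunks, empty deltas, and chunks with no choices."""
    if not chunk.choices:
        return ""
    return chunk.choices[0].delta.content or ""
```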
Set stream=True and iterate over the response chunks:
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Write a short poem about APIs."}],
    stream=True,
)
for chunk in stream:
    token = chunk.choices[0].delta.content or ""
    print(token, end="", flush=True)
print()  # trailing newline
Async Support
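Where async really pays off is fanning several requests out at once. Given an AsyncOpenAI client configured as shown in this section, a sketch using asyncio.gather (ask and ask_all are our names, not SDK APIs):

```python
import asyncio

async def ask(client, prompt, model="gpt-4"):
    """Send one chat completion and return the reply text."""
    response = await client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

async def ask_all(client, prompts):
    """Run the prompts concurrently; results come back in prompt order."""
    return await asyncio.gather(*(ask(client, p) for p in prompts))

# replies = asyncio.run(ask_all(client, ["Hello!", "Write a haiku."]))
```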
The SDK provides an AsyncOpenAI client for use with async/await:
import asyncio
import os
from openai import AsyncOpenAI
client = AsyncOpenAI(
    api_key=os.environ["CONDUIT_API_KEY"],
    base_url="https://api.conduit.im/v1",
)

async def main():
    response = await client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)

    # Async streaming
    stream = await client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Write a haiku."}],
        stream=True,
    )
    async for chunk in stream:
        token = chunk.choices[0].delta.content or ""
        print(token, end="", flush=True)
    print()

asyncio.run(main())
Error Handling
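Rate-limit errors in particular are usually worth retrying automatically rather than surfacing. A minimal backoff sketch (both function names are ours, not SDK APIs) that you can wire up to any of the exceptions covered below:

```python
import random
import time

def backoff_delay(attempt, base=1.0, cap=30.0):
    """Exponential backoff with full jitter: a random fraction of
    min(cap, base * 2**attempt) seconds."""
    return random.random() * min(cap, base * 2 ** attempt)

def with_retries(call, retriable, max_attempts=5, base=1.0):
    """Run call(); on a retriable exception, back off and try again.
    Re-raises once max_attempts is exhausted."""
    for attempt in range(max_attempts):
        try:
            return call()
        except retriable:
            if attempt == max_attempts - 1:
                raise
            time.sleep(backoff_delay(attempt, base=base))

# Example wiring (RateLimitError comes from the openai package):
# response = with_retries(
#     lambda: client.chat.completions.create(
#         model="gpt-4",
#         messages=[{"role": "user", "content": "Hello!"}],
#     ),
#     retriable=RateLimitError,
# )
```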
The SDK raises typed exceptions that you can catch and handle individually:
from openai import OpenAI, APIError, AuthenticationError, RateLimitError

try:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)
except AuthenticationError:
    print("Invalid API key. Check your CONDUIT_API_KEY.")
except RateLimitError:
    print("Rate limited. Retry after a short delay.")
except APIError as e:
    # APIError covers connection failures too, which carry no HTTP
    # status code, so print the message rather than e.status_code.
    print(f"API error: {e.message}")
Using Requests Directly
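Everything the SDK does is plain HTTP underneath, streaming included: with "stream": true in the request body, the endpoint returns server-sent events whose data: lines each carry one JSON chunk. A parser sketch (sse_token is our name; we assume Conduit.im preserves the standard OpenAI-style SSE wire format):

```python
import json

def sse_token(line):
    """Extract the delta text from one 'data:' line of an OpenAI-style
    SSE stream; returns None for comments, blank lines, and [DONE]."""
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):]
    if payload == "[DONE]":
        return None
    chunk = json.loads(payload)
    choices = chunk.get("choices") or []
    if not choices:
        return None
    return choices[0]["delta"].get("content")

# Usage with requests (headers as in the non-streaming example below):
# with requests.post(url, headers=headers,
#                    json={**body, "stream": True}, stream=True) as r:
#     for raw in r.iter_lines(decode_unicode=True):
#         token = sse_token(raw or "")
#         if token:
#             print(token, end="", flush=True)
```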
If you prefer not to use the SDK, you can call the API directly with the requests library:
import os
import requests
response = requests.post(
    "https://api.conduit.im/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {os.environ['CONDUIT_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
response.raise_for_status()  # surface HTTP errors before parsing the body
data = response.json()
print(data["choices"][0]["message"]["content"])
Next Steps
You're ready to build with Conduit.im and Python. Explore further: