Before you begin

Make sure you have:
  • Python 3.9 or higher installed
  • The Dedalus SDK installed (pip install dedalus_labs)
  • Your API key from the Dedalus Labs dashboard
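To make the key available to the SDK, you can export it in your shell before running any of the examples below (replace the placeholder with your real key):

```shell
# Make the key available to every process started from this shell:
export DEDALUS_API_KEY="your-api-key-here"
```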

Your first API call

Let’s create a simple chat completion to get you started:
1. Import and initialize the client

import os
from dedalus_labs import Dedalus

client = Dedalus(
    api_key=os.environ.get("DEDALUS_API_KEY"),  # This is the default and can be omitted
)
2. Create a chat completion

chat_completion = client.chat.completions.create(
    model="openai/gpt-5-nano",
    messages=[
        {
            "role": "system",
            "content": "You are Stephen Dedalus. Respond in morose Joycean malaise.",
        },
        {
            "role": "user",
            "content": "Hello, how are you today?",
        },
    ],
)
print(chat_completion.id)
The SDK automatically reads the DEDALUS_API_KEY environment variable, so you can omit the api_key parameter if it’s set.

Async usage

For async applications, use AsyncDedalus instead:
import os
import asyncio
from dedalus_labs import AsyncDedalus

client = AsyncDedalus(
    api_key=os.environ.get("DEDALUS_API_KEY"),
)

async def main() -> None:
    chat_completion = await client.chat.completions.create(
        model="openai/gpt-5-nano",
        messages=[
            {
                "role": "system",
                "content": "You are Stephen Dedalus. Respond in morose Joycean malaise.",
            },
            {
                "role": "user",
                "content": "Hello, how are you today?",
            },
        ],
    )
    print(chat_completion.id)

asyncio.run(main())
Functionality between the synchronous and asynchronous clients is identical. Simply use await with each API call.

Streaming responses

The SDK supports streaming for real-time responses using Server-Sent Events (SSE):
from dedalus_labs import Dedalus

client = Dedalus()

stream = client.chat.completions.create(
    model="openai/gpt-5-nano",
    stream=True,
    messages=[
        {
            "role": "system",
            "content": "You are Stephen Dedalus. Respond in morose Joycean malaise.",
        },
        {
            "role": "user",
            "content": "What do you think of artificial intelligence?",
        },
    ],
)
for chat_completion in stream:
    print(chat_completion.id)
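If the stream yields OpenAI-style chunks whose incremental text lives at choices[0].delta.content (an assumption about this SDK's chunk shape, modeled on the OpenAI wire format), the full reply can be reassembled as it arrives; the stub chunks below stand in for real stream output:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class _Delta:
    content: Optional[str]

@dataclass
class _Choice:
    delta: _Delta

@dataclass
class _Chunk:
    choices: list

def collect_text(stream) -> str:
    # Concatenate the incremental text carried by each chunk, skipping
    # chunks whose delta has no content (e.g. role-only or final chunks).
    parts = []
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            parts.append(chunk.choices[0].delta.content)
    return "".join(parts)

# Stub chunks standing in for real SSE output:
stub = [
    _Chunk([_Choice(_Delta("Hel"))]),
    _Chunk([_Choice(_Delta("lo"))]),
    _Chunk([_Choice(_Delta(None))]),
]
print(collect_text(stub))  # -> Hello
```

The same loop body works in the real `for chat_completion in stream:` loop above, with the live stream in place of the stubs.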

Using aiohttp for better async performance

For improved concurrency with async operations, you can use aiohttp as the HTTP backend:
1. Install aiohttp support

pip install "dedalus_labs[aiohttp]"
2. Configure the client

import os
import asyncio
from dedalus_labs import AsyncDedalus, DefaultAioHttpClient

async def main() -> None:
    async with AsyncDedalus(
        api_key=os.environ.get("DEDALUS_API_KEY"),
        http_client=DefaultAioHttpClient(),
    ) as client:
        chat_completion = await client.chat.completions.create(
            model="openai/gpt-5-nano",
            messages=[
                {
                    "role": "system",
                    "content": "You are Stephen Dedalus. Respond in morose Joycean malaise.",
                },
                {
                    "role": "user",
                    "content": "Hello, how are you today?",
                },
            ],
        )
        print(chat_completion.id)

asyncio.run(main())

Working with nested parameters

Request parameters that are objects are typed as dictionaries using TypedDict:
from dedalus_labs import Dedalus

client = Dedalus()

chat_completion = client.chat.completions.create(
    model="openai/gpt-5",
    messages=[{"role": "user", "content": "Say hello."}],
    audio={
        "format": "wav",
        "voice": "string",  # placeholder; use a voice supported by the model
    },
)
print(chat_completion.audio)

File uploads

For endpoints that accept file uploads, you can pass files as bytes, PathLike instances, or tuples:
from pathlib import Path
from dedalus_labs import Dedalus

client = Dedalus()

client.audio.transcriptions.create(
    file=Path("/path/to/file"),
    model="model",
)
The async client uses the same interface and will read files asynchronously when you pass a PathLike instance.
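The bytes and tuple forms can be constructed directly. The tuple layout shown here follows the common httpx-style (filename, contents, content type) convention, which is an assumption for this SDK:

```python
from pathlib import Path

# Three input shapes for the `file` argument:
as_path = Path("/path/to/file")    # PathLike: the SDK reads the file for you
as_bytes = b"fake audio bytes"     # raw bytes: sent as-is, no filename attached
as_tuple = ("speech.wav", b"fake audio bytes", "audio/wav")  # filename, contents, content type
```

Any of these could then be passed as `file=...` in the call above; the tuple form is useful when the data is already in memory and you want to control the filename and content type.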

Error handling

The SDK provides comprehensive error handling with specific error types:
import dedalus_labs
from dedalus_labs import Dedalus

client = Dedalus()

try:
    client.chat.completions.create(
        model="openai/gpt-5-nano",
        messages=[
            {
                "role": "system",
                "content": "You are Stephen Dedalus. Respond in morose Joycean malaise.",
            },
            {
                "role": "user",
                "content": "Hello, how are you today?",
            },
        ],
    )
except dedalus_labs.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except dedalus_labs.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except dedalus_labs.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)

Error types

Status Code   Error Type
400           BadRequestError
401           AuthenticationError
403           PermissionDeniedError
404           NotFoundError
422           UnprocessableEntityError
429           RateLimitError
>=500         InternalServerError
N/A           APIConnectionError
Certain errors are automatically retried 2 times by default with exponential backoff. This includes connection errors, 408 Request Timeout, 409 Conflict, 429 Rate Limit, and 5xx errors.
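The exact retry timings aren't documented here, but the shape of an exponential backoff schedule can be sketched in a few lines. This is illustrative only; the base delay and cap are assumptions, not the SDK's actual values:

```python
def backoff_delays(retries: int, base: float = 0.5, cap: float = 8.0) -> list:
    # Delay before each retry doubles, up to a cap: base, 2*base, 4*base, ...
    return [min(cap, base * (2 ** attempt)) for attempt in range(retries)]

print(backoff_delays(2))  # -> [0.5, 1.0]
```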

Using types

The SDK provides full type safety with TypedDicts for requests and Pydantic models for responses:
from dedalus_labs import Dedalus

client = Dedalus()

chat_completion = client.chat.completions.create(
    model="openai/gpt-5-nano",
    messages=[{"role": "user", "content": "Hello!"}],
)

# Serialize to JSON
json_str = chat_completion.to_json()

# Convert to dictionary
data_dict = chat_completion.to_dict()
To catch type errors earlier in VS Code, set python.analysis.typeCheckingMode to basic.

Next steps

Now that you’ve made your first API call, explore more features:

  • API Reference: browse the complete API reference
  • Error Handling: learn about error handling and retries
  • Advanced Usage: explore timeouts, custom headers, and more
  • GitHub: view the source code on GitHub
