The Chat Completions endpoint allows you to generate text responses using the distributed language models in the Fortytwo network. It supports both standard responses and streaming via Server-Sent Events (SSE).
Create Chat Completion
Generate a response to a conversation using a specified swarm model.
Endpoint
POST https://api.fortytwo.network/v1/chat/completions
Request Body
model (string, required): ID of the model to use (from /v1/models).
messages (array, required): List of messages in the conversation.
stream (boolean, optional): Enable Server-Sent Events streaming.
Each message in the messages array should have:
role (string, required): One of system | user | assistant.
content (string, required): The content of the message.
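Putting the fields above together, a minimal request body can be sketched as a plain Python dict (using the preview model shown elsewhere on this page):

```python
import json

# Minimal chat completion payload using the fields described above.
payload = {
    "model": "fortytwo-preview",  # a model ID from /v1/models
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the Fortytwo network?"},
    ],
    # "stream": True,  # uncomment to receive the response as SSE chunks
}

print(json.dumps(payload, indent=2))
```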
Response Modes
Standard Response Mode
Streaming Mode (SSE)
In standard mode, the API returns a complete response once generation is finished.
Example Request
curl https://api.fortytwo.network/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_FORTYTWO_API_KEY" \
  -d '{
    "model": "fortytwo-preview",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "What is the Fortytwo network?"
      }
    ]
  }'
Example Response
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677858242,
  "model": "fortytwo-preview",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The Fortytwo network is a decentralized AI protocol that leverages swarm intelligence..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 50,
    "total_tokens": 75
  }
}
Response Fields
id: Unique identifier for the completion.
created: Unix timestamp of creation.
model: Model used for generation.
choices: List of completion choices.
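The fields above can be read straight off the example response; a short sketch using the JSON shown earlier:

```python
# The example response body from above, as a Python dict.
completion = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "created": 1677858242,
    "model": "fortytwo-preview",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "The Fortytwo network is a decentralized AI protocol...",
            },
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 25, "completion_tokens": 50, "total_tokens": 75},
}

# The generated text lives under choices[0].message.content;
# token accounting lives under usage.
answer = completion["choices"][0]["message"]["content"]
tokens = completion["usage"]["total_tokens"]
print(completion["id"], tokens)  # chatcmpl-abc123 75
```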
OpenAI SDK Integration
The Fortytwo API is fully compatible with the OpenAI Python and Node.js SDKs:
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_FORTYTWO_API_KEY",
    base_url="https://api.fortytwo.network/v1"
)

# Standard completion
response = client.chat.completions.create(
    model="fortytwo-preview",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)
print(response.choices[0].message.content)

# Streaming completion
stream = client.chat.completions.create(
    model="fortytwo-preview",
    messages=[
        {"role": "user", "content": "Tell me a story"}
    ],
    stream=True
)
for chunk in stream:
    # Some chunks (e.g. the final one) may carry no choices or an empty delta.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end='')
Preview Rate Limiting
Chat completion requests are subject to rate limits while in Preview. See ‘Preview Limits & Quotas’ for more details.
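A common way to stay within rate limits is to retry with exponential backoff when a request is rejected. A minimal sketch with a generic helper; the flaky_request stand-in and FakeRateLimitError below are hypothetical, mimicking transient rate-limit failures:

```python
import time

def with_backoff(call, retries=3, base_delay=1.0, retryable=(Exception,)):
    """Retry `call` with exponential backoff on retryable errors."""
    for attempt in range(retries):
        try:
            return call()
        except retryable:
            if attempt == retries - 1:
                raise  # out of retries; surface the error
            time.sleep(base_delay * (2 ** attempt))

# Demo with a stand-in that fails twice before succeeding.
class FakeRateLimitError(Exception):
    pass

attempts = {"n": 0}

def flaky_request():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise FakeRateLimitError("429 Too Many Requests")
    return "ok"

result = with_backoff(flaky_request, retries=5, base_delay=0.01,
                      retryable=(FakeRateLimitError,))
print(result, attempts["n"])  # ok 3
```

With the OpenAI Python SDK, you could wrap the create call in a lambda and pass retryable=(openai.RateLimitError,).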
Error Responses
Common error responses for chat completions can be found in ‘Errors’.
Next Steps