
Swfte Connect is a unified AI gateway that sits between your application and every major AI provider. One API key, one SDK, one billing dashboard -- and you get automatic failover, cost tracking, and model routing across OpenAI, Anthropic, Google, Mistral, and more.

This guide walks you through setup in under five minutes.


Step 1: Create Your API Key

Log in to connect.swfte.com and navigate to the dashboard. Click New API Key in the header, or scroll to the API Keys section.

When creating a key, you can scope it to specific endpoints:

  • All Access -- Full access to every endpoint (chat completions, agents, embeddings, images)
  • Chat Completions -- Restricted to /v2/gateway/chat/completions
  • Agents -- Access to agent-related endpoints only

Give your key a descriptive name like production-backend or dev-testing so you can track usage later.

Important: Copy your API key immediately after creation. It is only shown once. Store it securely -- treat it like a password.
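Rather than pasting the key into source code, load it from the environment. A minimal sketch, assuming you export the key under a variable named SWFTE_API_KEY (the variable name is our choice here, not a Swfte convention):

```python
import os

def load_swfte_key(env=os.environ):
    """Read the Swfte Connect key from the environment instead of source code."""
    key = env.get("SWFTE_API_KEY")
    if not key:
        raise RuntimeError("SWFTE_API_KEY is not set")
    if not key.startswith("sk-swfte-"):
        raise RuntimeError("SWFTE_API_KEY does not look like a Swfte key")
    return key
```

Pass the result to the client constructor (e.g. SwfteClient(api_key=load_swfte_key())) so the key never lands in version control.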


Step 2: Install the SDK

# Python
pip install swfte-sdk

Step 3: Make Your First Request

The Swfte SDK follows the same interface conventions as OpenAI's SDK, so migration is straightforward. The key difference is the model identifier format: provider:model-name.

from swfte import SwfteClient

client = SwfteClient(api_key="sk-swfte-...")

response = client.chat.completions.create(
    model="openai:gpt-5",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in one paragraph."}
    ],
    max_tokens=256,
    temperature=0.7
)

print(response.choices[0].message.content)

Understanding Model Identifiers

Swfte Connect uses a provider:model format to route requests. This gives you explicit control over which provider handles each request:

Identifier                   Provider    Model
openai:gpt-5                 OpenAI      GPT-5
anthropic:claude-sonnet-4    Anthropic   Claude Sonnet 4
google:gemini-2.5-pro        Google      Gemini 2.5 Pro
mistral:mistral-large        Mistral     Mistral Large
xai:grok-3                   xAI         Grok 3

You can also use just the model name (e.g., gpt-5) and Swfte will route to the appropriate provider automatically.
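On the client side, the identifier format is simple enough to parse yourself, for example for logging or validation. A small sketch (the helper name is ours, not part of the SDK):

```python
def split_model_id(model_id):
    """Split 'provider:model-name' into (provider, model).

    A bare model name (no colon) returns provider=None, meaning the
    gateway is left to route the request automatically.
    """
    provider, sep, name = model_id.partition(":")
    if not sep:
        return None, model_id
    return provider, name
```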


Step 4: Try the SDK Playground

Before writing integration code, use the built-in SDK Playground on your dashboard. It supports Python, JavaScript, and Java, lets you edit model parameters inline, and shows the full response including latency and token usage.

Click Run to execute the request directly against the gateway. The response appears in the Response tab with timing metadata.


Step 5: Monitor Your Usage

Once requests are flowing, the dashboard updates in real time:

  • Total Tokens -- Aggregate token consumption across all providers
  • Responses & Completions -- Request count over time
  • Total Costs -- Spend tracking with per-request granularity
  • Credit Balance -- Remaining credits with low-balance alerts

Navigate to Insights for deeper analytics: per-model cost breakdowns, latency distributions (p50/p90/p99), provider intelligence, and usage forecasting.
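If you also log requests on your own side, the same per-model breakdown is easy to compute locally. A sketch assuming each log record is a dict with model and total_tokens keys (the record shape is our assumption for illustration, not a dashboard export format):

```python
from collections import defaultdict

def tokens_by_model(records):
    """Sum total token usage per model across request log records."""
    totals = defaultdict(int)
    for rec in records:
        totals[rec["model"]] += rec["total_tokens"]
    return dict(totals)
```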


What's Next

Now that you have a working integration, explore the rest of the Swfte Connect documentation for guides on advanced features.


Common Issues

"Invalid API key" error: Make sure you copied the full key including the sk-swfte- prefix. Keys are workspace-scoped, so ensure you are using a key from the correct workspace.

"Model not found" error: Check the model identifier format. Use provider:model-name (e.g., openai:gpt-5, not just gpt-5). Browse available models at connect.swfte.com/connections.

High latency on first request: The first request to a cold provider may take longer. Subsequent requests benefit from connection pooling. For latency-critical applications, consider using provider routing with latency-based selection.
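For availability-sensitive paths, you can also fail over client-side by trying an ordered list of provider-qualified models. A minimal sketch, assuming provider failures surface as exceptions from the SDK's create call (error types vary by SDK, so a real implementation should catch something narrower than Exception):

```python
def complete_with_failover(create, models, **kwargs):
    """Try each model in order; return the first successful response.

    `create` is a callable such as client.chat.completions.create.
    """
    last_err = None
    for model in models:
        try:
            return create(model=model, **kwargs)
        except Exception as err:  # narrow this to the SDK's error types
            last_err = err
    raise RuntimeError(f"all models failed: {models}") from last_err
```

Called as complete_with_failover(client.chat.completions.create, ["openai:gpt-5", "anthropic:claude-sonnet-4"], messages=...), it keeps the fallback order explicit in your code.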

