# Model Providers
Every agent needs a model. All providers implement the same `ModelProvider` protocol, so you can swap one for another without changing any agent code.
## Overview
The SDK ships with two providers, both safe to use in production apps:
| Provider | Auth | Where it runs |
|---|---|---|
| AWS Bedrock | Cognito / IAM (temporary, scoped) | Cloud |
| MLX (local) | None | On-device, Apple Silicon |
Providers that require embedding a permanent API key (Anthropic, OpenAI, Gemini) are intentionally not included. A permanent secret embedded in a binary is a security risk for shipped iOS and macOS apps: the key can be extracted and used to run up your bill or access your account. Bedrock uses temporary Cognito credentials; MLX runs fully on-device.
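Because both providers conform to `ModelProvider`, switching between them is a one-line change. A quick illustration using only the constructors shown later on this page:

```swift
import StrandsAgents

// Cloud provider: temporary, scoped Cognito/IAM credentials.
let cloud = try BedrockProvider(config: BedrockConfig(
    modelId: "us.anthropic.claude-sonnet-4-20250514-v1:0"
))

// Local provider: no network, no credentials.
let local = MLXProvider(modelId: "mlx-community/Qwen3-8B-4bit")

// The agent code is identical either way.
let agent = Agent(model: cloud)   // or Agent(model: local)
```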
## AWS Bedrock
Uses the AWS Bedrock ConverseStream API. Supports Claude, Titan, Llama, Mistral, and any model available in your AWS region.
```swift
import StrandsAgents

let provider = try BedrockProvider(config: BedrockConfig(
    modelId: "us.anthropic.claude-sonnet-4-20250514-v1:0",
    region: "us-east-1" // optional, defaults to AWS_DEFAULT_REGION
))
```
To authenticate with Amazon Cognito, configure Amplify and sign in before constructing the provider:

```swift
import Amplify
import AWSCognitoAuthPlugin

try Amplify.add(plugin: AWSCognitoAuthPlugin())
try Amplify.configure()
try await Amplify.Auth.signIn(username: email, password: password)

// BedrockProvider picks up Cognito credentials automatically
let provider = try BedrockProvider(config: BedrockConfig(
    modelId: "us.anthropic.claude-sonnet-4-20250514-v1:0"
))
```
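On app launch the user may already have a valid session, so you can check before prompting for credentials. `fetchAuthSession()` and `isSignedIn` are standard Amplify Auth API; the surrounding flow is a sketch:

```swift
import Amplify
import StrandsAgents

// Reuse an existing Cognito session if one is cached.
let session = try await Amplify.Auth.fetchAuthSession()
if session.isSignedIn {
    let provider = try BedrockProvider(config: BedrockConfig(
        modelId: "us.anthropic.claude-sonnet-4-20250514-v1:0"
    ))
    // ... hand the provider to an Agent
} else {
    // No session: prompt the user to sign in first.
}
```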
### Common model IDs
| Model | ID |
|---|---|
| Claude Sonnet 4 | `us.anthropic.claude-sonnet-4-20250514-v1:0` |
| Claude Haiku 3.5 | `us.anthropic.claude-haiku-3-5-20241022-v1:0` |
| Llama 3.3 70B | `us.meta.llama3-3-70b-instruct-v1:0` |
## MLX (Local)
Runs a quantized language model entirely on Apple Silicon. No network, no credentials, no cost. Models download from HuggingFace and cache locally on first run. See the Local Inference page for recommended models and hybrid routing.
```swift
import StrandsAgents

let provider = MLXProvider(modelId: "mlx-community/Qwen3-8B-4bit")
let agent = Agent(model: provider)
```
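A common hybrid pattern routes requests to the cloud when available and falls back to the on-device model otherwise. This routing rule and the `networkAvailable` flag are illustrations only; see the Local Inference page for the recommended approach:

```swift
import StrandsAgents

// Sketch of hybrid routing: Bedrock when online, MLX offline.
// The caller supplies `networkAvailable` (e.g. from NWPathMonitor);
// this function is an illustration, not SDK API.
func routedProvider(networkAvailable: Bool) throws -> any ModelProvider {
    if networkAvailable {
        return try BedrockProvider(config: BedrockConfig(
            modelId: "us.anthropic.claude-sonnet-4-20250514-v1:0"
        ))
    }
    return MLXProvider(modelId: "mlx-community/Qwen3-8B-4bit")
}

let agent = Agent(model: try routedProvider(networkAvailable: false))
```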