Managing Providers
Rephlo allows you to connect to multiple AI providers simultaneously. You might have an OpenAI account for general tasks, an Anthropic account for long documents, and a local Ollama model for private data.
Two Ways to Access AI
Rephlo offers two approaches for connecting to AI services:
1. Rephlo Gateway (Recommended for New Users)
The Rephlo Gateway is the easiest way to get started. Instead of managing your own API keys, you access AI models through your Rephlo account subscription.
Benefits:
- No API key management - Just log in with your Rephlo account
- Unified billing - One subscription covers multiple AI providers
- Pre-configured models - Access to popular models without setup
- Automatic updates - New models added automatically
To use Rephlo Gateway:
- Go to Account menu in the top navigation
- Click Login and sign in with your Rephlo account
- Once logged in, Gateway models become available automatically
2. BYOK (Bring Your Own Key)
For users who prefer direct control or have existing API subscriptions, Rephlo supports connecting directly to AI providers using your own API keys.
Benefits:
- Direct vendor relationship - Pay the provider directly
- Full control - Access all models and features from the vendor
- Custom endpoints - Connect to private or enterprise deployments
Supported Providers (BYOK)
Rephlo supports the following AI providers:
| Provider | Vision | Streaming | Notes |
|---|---|---|---|
| OpenAI | Yes | Yes | GPT-5.1, o3, etc. |
| Anthropic | Yes | Yes | Claude Sonnet 4.5, Opus 4.5, Haiku 4.5 |
| Google AI | Yes | Yes | Gemini models |
| Grok (xAI) | Yes | Yes | Grok models |
| Groq | No | Yes | Fast inference for open models |
| OpenRouter | Varies | Yes | Aggregator for multiple providers |
| Ollama | Varies | Yes | Local/offline models |
| OpenAI-Compatible | Varies | Yes | Azure OpenAI, vLLM, LocalAI, LiteLLM, etc. |
Adding a Provider
- Navigate to Settings > Providers.
- Click "Add Provider".
- Select Type: Choose the vendor from the dropdown.
- Enter Details:
- Name: A friendly alias (e.g., "My GPT-5").
- API Key: Your secret key from the vendor's dashboard.
- Base URL (for OpenAI-Compatible/Ollama): The endpoint URL for the service.
- Model Name (for OpenAI-Compatible): The specific model to use (required).
- Test: Click "Test Connection" to ensure the key works.
- Save.
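If you want to verify a key outside Rephlo before adding it, a rough terminal equivalent of the Test Connection step looks like the sketch below. It uses OpenAI's public models endpoint as an example; the key value shown is a placeholder.

```shell
# Sketch: check an OpenAI API key from a terminal before adding it in Rephlo.
# OPENAI_API_KEY is a placeholder -- export your real key first.
OPENAI_API_KEY="${OPENAI_API_KEY:-sk-placeholder}"

# A working key returns HTTP 200 from the models endpoint; 401 means the key is bad.
CHECK_CMD="curl -s -o /dev/null -w %{http_code} https://api.openai.com/v1/models -H \"Authorization: Bearer $OPENAI_API_KEY\""
echo "$CHECK_CMD"
# Run it with: eval "$CHECK_CMD"   (expect 200; 401 indicates an invalid key)
```

Other vendors expose similar list-models endpoints; consult each provider's API documentation for the exact path.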
Configuring OpenAI-Compatible Providers
The OpenAI-Compatible provider type allows you to connect to any service that implements the OpenAI API specification. This includes:
- Azure OpenAI: Enterprise-grade OpenAI model hosting with compliance guarantees
- vLLM: High-performance inference server for open models
- LocalAI: Self-hosted OpenAI-compatible API
- LiteLLM: Unified proxy for multiple LLM providers
To configure:
- Select OpenAI-Compatible as the provider type
- Enter your API Key from the service
- Enter the Base URL (e.g., https://your-resource.openai.azure.com/ for Azure)
- Enter the Model Name exactly as required by the service
Note: The OpenAI-Compatible provider uses a distinct plug icon to differentiate it from the native OpenAI provider.
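All of these services accept the same request shape, which is why one provider type covers them. A minimal sketch of the kind of call involved, using placeholder values for the base URL, key, and model name (Azure differs slightly in path and auth header, so check the vendor docs):

```shell
# Placeholders -- substitute your own deployment's values.
BASE_URL="https://your-resource.openai.azure.com"   # or e.g. http://localhost:8000 for vLLM
API_KEY="sk-placeholder"
MODEL="my-deployment-name"

# OpenAI-compatible servers generally expose POST <base>/v1/chat/completions.
BODY="{\"model\": \"$MODEL\", \"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}]}"
echo "POST $BASE_URL/v1/chat/completions"
echo "$BODY"
# To actually send it (requires a live server and valid key):
# curl -s "$BASE_URL/v1/chat/completions" \
#   -H "Authorization: Bearer $API_KEY" -H "Content-Type: application/json" -d "$BODY"
```

If this request succeeds from your terminal, the same Base URL and Model Name should work in Rephlo.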
Provider Interface Features
The Providers page offers several features to help manage your connections:
- Search: Use the search bar to quickly find providers by name
- View Toggle: Switch between Card View (visual tiles) and List View (compact table)
- Status Indicators: See at a glance which providers are active and working
Getting API Keys (BYOK)
When using BYOK, you are responsible for the API usage costs directly with the vendor.
- OpenAI: Get keys at platform.openai.com
- Anthropic: Get keys at console.anthropic.com
- Google AI: Get keys at aistudio.google.com
- Grok (xAI): Get keys at console.x.ai
- Groq: Get keys at console.groq.com
- OpenRouter: Get keys at openrouter.ai
Configuring Local Models (Ollama)
To use Rephlo completely offline:
- Install Ollama.
- Pull a model in your terminal: ollama pull llama3
- In Rephlo, add an Ollama provider.
- Endpoint: http://localhost:11434 (default).
- Model: Type the name exactly as it appears in Ollama (e.g., llama3:latest).
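Before adding the provider in Rephlo, you can confirm the Ollama service is reachable and see the exact model names it exposes. This sketch assumes the default endpoint; adjust the URL if you changed OLLAMA_HOST.

```shell
# Assumes Ollama's default local endpoint.
OLLAMA_URL="http://localhost:11434"

# GET /api/tags lists the locally pulled models and their exact names.
if curl -sf "$OLLAMA_URL/api/tags" >/dev/null 2>&1; then
  STATUS="running"
  echo "Ollama is running; pulled models:"
  curl -sf "$OLLAMA_URL/api/tags"
else
  STATUS="unreachable"
  echo "Ollama is not reachable at $OLLAMA_URL -- start it with 'ollama serve'"
fi
```

Copy the model name from this listing verbatim (including the tag, e.g. llama3:latest) into Rephlo's Model field.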
Setting the Active Provider
You can designate an Active Provider that serves as the default for all commands.
- This provider is used automatically unless a command specifically overrides it.
- To change it, click the "Activate" button next to your preferred provider in the list.
- The currently active provider is highlighted in the interface.
Provider Capabilities
Different providers offer different capabilities:
- Vision: Can analyze images (screenshots, photos, diagrams)
- Streaming: Responses appear word-by-word in real-time
- Function Calling: Advanced features for structured outputs
When creating commands, you can choose providers based on the capabilities you need. For example, use a Vision-enabled provider for image analysis commands.
Troubleshooting Providers
- 401 Unauthorized: Your API Key is likely incorrect. Re-copy it.
- 429 Too Many Requests: You have hit the rate limit or run out of credits with the vendor.
- Connection Refused (Ollama): Ensure the Ollama background service is actually running.
- Model Not Found: Check that the model ID is spelled correctly and available on your account.
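When triaging these errors from a terminal, the HTTP status code usually points at the cause. A small illustrative sketch (the status code value here is just an example):

```shell
# Illustrative: map an HTTP status code to a likely cause.
# In practice, obtain the code with something like:
#   curl -s -o /dev/null -w '%{http_code}' <provider endpoint> -H 'Authorization: Bearer <key>'
HTTP_CODE=401   # example value

case "$HTTP_CODE" in
  401) DIAGNOSIS="Unauthorized: re-copy the API key from the vendor dashboard" ;;
  404) DIAGNOSIS="Model not found: check the model ID spelling and account access" ;;
  429) DIAGNOSIS="Rate limited or out of credits with the vendor" ;;
  000) DIAGNOSIS="Connection refused: is the service (e.g. Ollama) actually running?" ;;
  *)   DIAGNOSIS="Unexpected status $HTTP_CODE: check the vendor's status page" ;;
esac
echo "$DIAGNOSIS"
```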
Next Feature: Learn the various ways to Execute Commands.