# Configuration

Configure Subtide for your needs.
## Extension Configuration

Click the Subtide extension icon to open the configuration popup.
### Basic Settings

| Setting | Description |
|---|---|
| Operation Mode | Select Tier 1-4 based on your needs |
| Backend URL | URL of your backend server (default: http://localhost:5001) |
| Target Language | Language to translate subtitles into |
### API Settings (Tier 1 & 2)

| Setting | Description |
|---|---|
| API Provider | OpenAI, OpenRouter, or Custom Endpoint |
| API Key | Your API key for the selected provider |
| Model | LLM model to use for translation |
| API URL | Custom API endpoint (for custom providers) |
### Display Settings

| Setting | Description |
|---|---|
| Subtitle Size | Small, Medium, Large, or XL |
| Dual Mode | Show both original and translated text |
## Backend Environment Variables

Configure the backend using environment variables or a .env file.
### Core Settings

| Variable | Description | Default |
|---|---|---|
| PORT | Server port | 5001 |
| GUNICORN_WORKERS | Number of worker processes | 2 |
| GUNICORN_TIMEOUT | Request timeout in seconds | 300 |
| CORS_ORIGINS | Allowed CORS origins | * |
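For example, a minimal .env using the core settings above (values are illustrative):

```
PORT=5001
GUNICORN_WORKERS=2
GUNICORN_TIMEOUT=300
# lock CORS down to your deployment origin instead of the permissive default
CORS_ORIGINS=https://your-server.com
```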
### API Configuration (Tier 3/4)

| Variable | Description | Default |
|---|---|---|
| SERVER_API_KEY | API key for LLM provider | — |
| SERVER_API_URL | LLM API endpoint | — |
| SERVER_MODEL | LLM model name | — |
### Whisper Configuration

| Variable | Description | Default |
|---|---|---|
| WHISPER_MODEL | Model size: tiny, base, small, medium, large-v3, large-v3-turbo | base |
| WHISPER_BACKEND | Backend: mlx, faster, openai | Auto-detected |
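For example, to pin both the model and backend instead of relying on auto-detection (binary name taken from the examples below):

```
WHISPER_BACKEND=faster WHISPER_MODEL=small ./subtide-backend-macos
```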
## Example Configurations

### Tier 1: Basic YouTube Translation

Extension:
```
Operation Mode: Tier 1 (Standard)
Backend URL: http://localhost:5001
API Provider: OpenAI
API Key: sk-...
Model: gpt-4o-mini
Target Language: Spanish
```

Backend:
```
./subtide-backend-macos
```

### Tier 2: Whisper Transcription
Extension:
```
Operation Mode: Tier 2 (Enhanced)
Backend URL: http://localhost:5001
API Provider: OpenAI
API Key: sk-...
Model: gpt-4o
Target Language: Japanese
```

Backend:
```
WHISPER_MODEL=large-v3-turbo WHISPER_BACKEND=mlx ./subtide-backend-macos
```

### Tier 3: Shared Server
Extension:
```
Operation Mode: Tier 3 (Managed)
Backend URL: https://your-server.com
Target Language: German
```

Backend:
```
SERVER_API_KEY=sk-xxx \
SERVER_API_URL=https://api.openai.com/v1 \
SERVER_MODEL=gpt-4o \
docker-compose up subtide-tier3
```
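For orientation, a minimal sketch of what the subtide-tier3 service might look like in docker-compose.yml. The service name comes from the command above; the build context, port mapping, and variable pass-through are assumptions:

```
services:
  subtide-tier3:
    build: .              # assumption: image built from the repo root
    ports:
      - "5001:5001"       # default PORT documented above
    environment:
      - SERVER_API_KEY    # passed through from the shell environment
      - SERVER_API_URL
      - SERVER_MODEL
```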
### Tier 4: Streaming Mode

Extension:
```
Operation Mode: Tier 4 (Stream)
Backend URL: https://your-server.com
Target Language: French
```

Backend:
```
SERVER_API_KEY=sk-xxx \
SERVER_API_URL=https://api.openai.com/v1 \
SERVER_MODEL=gpt-4o \
WHISPER_MODEL=large-v3-turbo \
docker-compose up subtide-tier4
```

### Local LLM with LM Studio
Extension:
```
Operation Mode: Tier 2 (Enhanced)
Backend URL: http://localhost:5001
API Provider: Custom Endpoint
API URL: http://localhost:1234/v1
API Key: lm-studio
Model: local-model
Target Language: Korean
```
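Backend: assuming the same local setup as the Tier 2 example (this pairing is an assumption):

```
# assumption: run the local backend exactly as in the Tier 2 example
WHISPER_MODEL=large-v3-turbo ./subtide-backend-macos
```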
## Whisper Model Selection

Choose a model based on your hardware and quality needs:
| Model | VRAM/RAM | Speed | Quality | Best For |
|---|---|---|---|---|
| tiny | ~1 GB | Fastest | Basic | Testing |
| base | ~1 GB | Fast | Good | General use |
| small | ~2 GB | Medium | Better | Better accuracy |
| medium | ~5 GB | Slow | Great | High quality |
| large-v3 | ~10 GB | Slowest | Best | Maximum quality |
| large-v3-turbo | ~6 GB | Fast | Excellent | Recommended |
Tip: Use large-v3-turbo for the best balance of speed and quality.
## Next Steps

- YouTube Guide - Using Subtide with YouTube
- Backend Overview - Backend deployment options