Configuration

Configure Subtide for your needs.


Click the Subtide extension icon to open the configuration popup.

| Setting | Description |
| --- | --- |
| Operation Mode | Select Tier 1-4 based on your needs |
| Backend URL | URL of your backend server (default: `http://localhost:5001`) |
| Target Language | Language to translate subtitles into |

| Setting | Description |
| --- | --- |
| API Provider | OpenAI, OpenRouter, or Custom Endpoint |
| API Key | Your API key for the selected provider |
| Model | LLM model to use for translation |
| API URL | Custom API endpoint (for custom providers) |

| Setting | Description |
| --- | --- |
| Subtitle Size | Small, Medium, Large, or XL |
| Dual Mode | Show both original and translated text |

Configure the backend using environment variables or a `.env` file.

| Variable | Description | Default |
| --- | --- | --- |
| `PORT` | Server port | `5001` |
| `GUNICORN_WORKERS` | Number of worker processes | `2` |
| `GUNICORN_TIMEOUT` | Request timeout in seconds | `300` |
| `CORS_ORIGINS` | Allowed CORS origins | `*` |

| Variable | Description | Default |
| --- | --- | --- |
| `SERVER_API_KEY` | API key for LLM provider | (none) |
| `SERVER_API_URL` | LLM API endpoint | (none) |
| `SERVER_MODEL` | LLM model name | (none) |

| Variable | Description | Default |
| --- | --- | --- |
| `WHISPER_MODEL` | Model size: `tiny`, `base`, `small`, `medium`, `large-v3`, `large-v3-turbo` | `base` |
| `WHISPER_BACKEND` | Backend: `mlx`, `faster`, `openai` | Auto-detected |
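Putting the variables above together, a minimal `.env` for a self-hosted backend might look like the following sketch (the key and URL values are placeholders; include only the sections your tier needs):

```sh
# Server
PORT=5001
GUNICORN_WORKERS=2
GUNICORN_TIMEOUT=300
CORS_ORIGINS=*

# LLM provider (Tier 3/4)
SERVER_API_KEY=sk-xxx
SERVER_API_URL=https://api.openai.com/v1
SERVER_MODEL=gpt-4o

# Whisper transcription (Tier 4)
WHISPER_MODEL=large-v3-turbo
WHISPER_BACKEND=mlx
```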

Extension:

- Operation Mode: Tier 1 (Standard)
- Backend URL: `http://localhost:5001`
- API Provider: OpenAI
- API Key: `sk-...`
- Model: `gpt-4o-mini`
- Target Language: Spanish

Backend:

```sh
./subtide-backend-macos
```

Extension:

- Operation Mode: Tier 2 (Enhanced)
- Backend URL: `http://localhost:5001`
- API Provider: OpenAI
- API Key: `sk-...`
- Model: `gpt-4o`
- Target Language: Japanese

Backend:

```sh
WHISPER_MODEL=large-v3-turbo WHISPER_BACKEND=mlx ./subtide-backend-macos
```

Extension:

- Operation Mode: Tier 3 (Managed)
- Backend URL: `https://your-server.com`
- Target Language: German

Backend:

```sh
SERVER_API_KEY=sk-xxx \
SERVER_API_URL=https://api.openai.com/v1 \
SERVER_MODEL=gpt-4o \
docker-compose up subtide-tier3
```

Extension:

- Operation Mode: Tier 4 (Stream)
- Backend URL: `https://your-server.com`
- Target Language: French

Backend:

```sh
SERVER_API_KEY=sk-xxx \
SERVER_API_URL=https://api.openai.com/v1 \
SERVER_MODEL=gpt-4o \
WHISPER_MODEL=large-v3-turbo \
docker-compose up subtide-tier4
```

Extension:

- Operation Mode: Tier 2 (Enhanced)
- Backend URL: `http://localhost:5001`
- API Provider: Custom Endpoint
- API URL: `http://localhost:1234/v1`
- API Key: `lm-studio`
- Model: `local-model`
- Target Language: Korean

Choose a model based on your hardware and quality needs:

| Model | VRAM/RAM | Speed | Quality | Best For |
| --- | --- | --- | --- | --- |
| `tiny` | ~1 GB | Fastest | Basic | Testing |
| `base` | ~1 GB | Fast | Good | General use |
| `small` | ~2 GB | Medium | Better | Better accuracy |
| `medium` | ~5 GB | Slow | Great | High quality |
| `large-v3` | ~10 GB | Slowest | Best | Maximum quality |
| `large-v3-turbo` | ~6 GB | Fast | Excellent | Recommended |

Tip: Use `large-v3-turbo` for the best balance of speed and quality.
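If you script your backend startup, the memory guidance in the table above can be encoded as a small helper that picks the largest model that fits. This is an illustrative sketch, not part of Subtide; the function name and GB thresholds (taken from the table) are the author's assumptions:

```sh
# Pick a Whisper model from available memory in GB, using the
# approximate footprints listed in the table above.
pick_whisper_model() {
  avail_gb=$1
  if   [ "$avail_gb" -ge 6 ]; then echo large-v3-turbo
  elif [ "$avail_gb" -ge 5 ]; then echo medium
  elif [ "$avail_gb" -ge 2 ]; then echo small
  else                              echo base
  fi
}

# Example: a machine with ~6 GB free can run the recommended model.
WHISPER_MODEL=$(pick_whisper_model 6)
echo "$WHISPER_MODEL"   # large-v3-turbo
```

You could then launch the backend with `WHISPER_MODEL="$WHISPER_MODEL" ./subtide-backend-macos`, falling back to smaller models on constrained hardware.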