
API Configuration

Environment variables and configuration options for the Subtide backend.


Configure the backend using environment variables or a .env file in the backend directory.


PORT

Server listening port.

Default: 5001
Example: PORT=8080

GUNICORN_WORKERS

Number of Gunicorn worker processes.

Default: 2
Recommendation: (CPU cores × 2) + 1
Example: GUNICORN_WORKERS=4
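
A quick way to apply the (CPU cores × 2) + 1 recommendation is to derive the value from the host's core count; this sketch assumes nproc is available (Linux), with the macOS equivalent shown as a comment:

# Recommended workers: (CPU cores × 2) + 1
export GUNICORN_WORKERS=$(( $(nproc) * 2 + 1 ))
# macOS: export GUNICORN_WORKERS=$(( $(sysctl -n hw.ncpu) * 2 + 1 ))
echo "GUNICORN_WORKERS=$GUNICORN_WORKERS"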

GUNICORN_TIMEOUT

Request timeout in seconds.

Default: 300
Note: Increase for large videos.
Example: GUNICORN_TIMEOUT=600

CORS_ORIGINS

Allowed CORS origins.

Default: * (all origins)
Example: CORS_ORIGINS=https://youtube.com,https://www.youtube.com
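
If you need to confirm that a restricted CORS_ORIGINS value behaves as expected, a preflight request shows which headers the server returns; the request path here is only a placeholder, substitute an actual API route:

curl -si -X OPTIONS http://localhost:5001/ \
  -H "Origin: https://youtube.com" \
  -H "Access-Control-Request-Method: POST" | grep -i access-control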

WHISPER_MODEL

Whisper model size for transcription.

Default: base
Options: tiny, base, small, medium, large-v3, large-v3-turbo
Example: WHISPER_MODEL=large-v3-turbo

Model Comparison:

| Model          | VRAM   | Speed   | Quality   |
| -------------- | ------ | ------- | --------- |
| tiny           | ~1 GB  | Fastest | Basic     |
| base           | ~1 GB  | Fast    | Good      |
| small          | ~2 GB  | Medium  | Better    |
| medium         | ~5 GB  | Slow    | Great     |
| large-v3       | ~10 GB | Slowest | Best      |
| large-v3-turbo | ~6 GB  | Fast    | Excellent |

WHISPER_BACKEND

Whisper implementation to use.

Default: Auto-detected
Options: mlx, faster, openai
Example: WHISPER_BACKEND=mlx

Backend Selection:

| Backend | Hardware      | Performance          |
| ------- | ------------- | -------------------- |
| mlx     | Apple Silicon | Best for M1/M2/M3/M4 |
| faster  | NVIDIA GPU    | Best for CUDA        |
| openai  | CPU           | Fallback option      |
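
Auto-detection can only choose from implementations that are actually installed. A quick way to see which ones are importable, assuming the backend uses the standard PyPI packages mlx-whisper, faster-whisper, and openai-whisper:

python -c "import mlx_whisper" 2>/dev/null && echo "mlx available"
python -c "import faster_whisper" 2>/dev/null && echo "faster available"
python -c "import whisper" 2>/dev/null && echo "openai available"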

SERVER_API_KEY

LLM API key stored on the server.

Default: None
Required: Tier 3 and Tier 4
Example: SERVER_API_KEY=sk-xxx

SERVER_API_URL

LLM API endpoint.

Default: None
Example: SERVER_API_URL=https://api.openai.com/v1

Common Endpoints:

| Provider   | URL                          |
| ---------- | ---------------------------- |
| OpenAI     | https://api.openai.com/v1    |
| OpenRouter | https://openrouter.ai/api/v1 |
| LM Studio  | http://localhost:1234/v1     |
| Ollama     | http://localhost:11434/v1    |
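
Whichever provider you point at, the URL should expose an OpenAI-compatible API, so listing models is a simple connectivity check (local servers such as LM Studio and Ollama generally accept any key):

curl -s "$SERVER_API_URL/models" \
  -H "Authorization: Bearer $SERVER_API_KEY" | head -c 300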

SERVER_MODEL

Default LLM model name.

Default: None
Example: SERVER_MODEL=gpt-4o

Example Configurations

Minimal configuration:
PORT=5001
WHISPER_MODEL=base
CORS_ORIGINS=*

Production configuration:
PORT=5001
GUNICORN_WORKERS=4
GUNICORN_TIMEOUT=600
WHISPER_MODEL=large-v3-turbo
WHISPER_BACKEND=faster
CORS_ORIGINS=https://youtube.com,https://www.youtube.com

Full configuration with a server-side LLM key:
PORT=5001
GUNICORN_WORKERS=4
GUNICORN_TIMEOUT=600
WHISPER_MODEL=large-v3-turbo
WHISPER_BACKEND=faster
SERVER_API_KEY=sk-xxx
SERVER_API_URL=https://api.openai.com/v1
SERVER_MODEL=gpt-4o
CORS_ORIGINS=https://youtube.com,https://www.youtube.com

Apple Silicon (MLX backend):
WHISPER_MODEL=large-v3-turbo
WHISPER_BACKEND=mlx

NVIDIA GPU (CUDA backend):
WHISPER_MODEL=large-v3
WHISPER_BACKEND=faster

Create a .env file in the backend directory:

# backend/.env
PORT=5001
WHISPER_MODEL=large-v3-turbo
WHISPER_BACKEND=mlx
CORS_ORIGINS=*

The backend automatically loads this file on startup.
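
Variables can also be passed inline for a one-off run, using the same binary shown in the validation example below:

PORT=8080 WHISPER_MODEL=base ./subtide-backend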


Pass environment variables to Docker:

docker run -d \
  -p 5001:5001 \
  -e WHISPER_MODEL=large-v3-turbo \
  -e WHISPER_BACKEND=faster \
  ghcr.io/rennerdo30/subtide-backend:latest

Or use an env file:

docker run -d \
  -p 5001:5001 \
  --env-file .env \
  ghcr.io/rennerdo30/subtide-backend:latest
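
If you use the faster (CUDA) backend inside Docker, the container also needs access to the GPU. A sketch using Docker's --gpus flag, assuming the host has the NVIDIA Container Toolkit installed and the image includes CUDA support:

docker run -d \
  --gpus all \
  -p 5001:5001 \
  --env-file .env \
  ghcr.io/rennerdo30/subtide-backend:latest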

The backend validates configuration on startup. Check logs for warnings:

./subtide-backend 2>&1 | grep -i warning
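
For Docker deployments, the equivalent check runs against the container's logs (the container name here is only an example; use yours):

docker logs subtide-backend 2>&1 | grep -i warning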