
Backend Overview

Understanding the Subtide backend server.


The Subtide backend is a Flask-based REST API that handles:

  1. Video Processing - Downloading and extracting audio
  2. Transcription - Converting speech to text using Whisper
  3. Translation - Translating text using LLM APIs
  4. Streaming - Real-time translation for Tier 4

```
┌─────────────────────────────────────────────────────────────┐
│                       Subtide Backend                       │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌──────────┐    ┌──────────────┐    ┌────────────────┐     │
│  │  Flask   │───▶│   Whisper    │───▶│  Translation   │     │
│  │   API    │    │   Service    │    │    Service     │     │
│  └──────────┘    └──────────────┘    └────────────────┘     │
│       │                 │                    │              │
│       │                 ▼                    ▼              │
│       │          ┌──────────────┐    ┌────────────────┐     │
│       │          │    MLX /     │    │   OpenAI /     │     │
│       │          │    Faster    │    │  OpenRouter    │     │
│       │          └──────────────┘    └────────────────┘     │
│       │                                                     │
│       ▼                                                     │
│  ┌────────────────────────────────────────────────────┐     │
│  │                  YouTube Service                   │     │
│  │        (yt-dlp for video/audio extraction)         │     │
│  └────────────────────────────────────────────────────┘     │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```
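The request flow through these services can be sketched with stubbed stand-ins. All function names and return shapes below are illustrative, not the actual Subtide code:

```python
from typing import Any

# Hypothetical stand-ins for the real services; each stage feeds the
# next, mirroring the architecture diagram above.
def download_audio(video_url: str) -> bytes:
    """YouTube service: yt-dlp would extract the audio track here."""
    return b"\x00\x01"  # placeholder audio bytes

def transcribe(audio: bytes) -> list[dict[str, Any]]:
    """Whisper service: speech-to-text with timestamped segments."""
    return [{"start": 0.0, "end": 2.5, "text": "Hello world"}]

def translate(segments: list[dict[str, Any]], target_lang: str) -> list[dict[str, Any]]:
    """Translation service: an LLM API would translate each segment."""
    return [{**s, "translation": f"[{target_lang}] {s['text']}"} for s in segments]

def process_video(video_url: str, target_lang: str) -> list[dict[str, Any]]:
    """End-to-end pipeline: download -> transcribe -> translate."""
    audio = download_audio(video_url)
    return translate(transcribe(audio), target_lang)

print(process_video("https://www.youtube.com/watch?v=example", "es"))
```

Keeping the stages as separate services is what lets each one swap its implementation (MLX vs. faster-whisper, OpenAI vs. OpenRouter) without touching the others.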

There are several ways to run the backend:

| Option | Best For | Setup Difficulty |
| --- | --- | --- |
| Binary | Personal use, quick start | Easy |
| Python Source | Development, customization | Medium |
| Docker | Production, teams | Medium |
| RunPod | GPU acceleration, cloud | Medium |

Binary:

```sh
# Download from releases, then make it executable and run it
chmod +x subtide-backend-macos
./subtide-backend-macos
```

Docker:

```sh
cd backend
docker-compose up subtide-tier2
```

Python Source:

```sh
cd backend
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
./run.sh
```

Subtide supports multiple Whisper implementations:

MLX Whisper, optimized for M1/M2/M3/M4 Macs:

```sh
WHISPER_BACKEND=mlx ./subtide-backend
```

  • Uses unified memory efficiently
  • No separate GPU memory needed
  • Best for macOS users

Faster-Whisper, CUDA-accelerated for NVIDIA GPUs:

```sh
WHISPER_BACKEND=faster ./subtide-backend
```

  • Requires the CUDA toolkit
  • Significant speedup on supported GPUs
  • Best for Linux/Windows with NVIDIA

OpenAI Whisper, the original implementation:

```sh
WHISPER_BACKEND=openai ./subtide-backend
```

  • CPU-based (slower)
  • Works everywhere
  • Fallback option
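When `WHISPER_BACKEND` is left at its default (`Auto`), the server picks an implementation for the host. A minimal sketch of how that detection might work, assuming heuristics based on the platform and an `nvidia-smi` lookup (the real selection logic may differ, e.g. by probing installed packages):

```python
import platform
import shutil

def pick_whisper_backend(override: str = "auto") -> str:
    """Resolve which Whisper implementation to use.

    The heuristics below are illustrative assumptions, not Subtide's
    actual detection code.
    """
    if override.lower() != "auto":
        return override.lower()  # explicit WHISPER_BACKEND wins
    if platform.system() == "Darwin" and platform.machine() == "arm64":
        return "mlx"     # Apple Silicon: unified memory, use MLX
    if shutil.which("nvidia-smi"):
        return "faster"  # NVIDIA GPU visible: CUDA faster-whisper
    return "openai"      # CPU fallback that works everywhere

print(pick_whisper_backend("mlx"))  # prints: mlx
```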

| Model | Size | Speed | Quality |
| --- | --- | --- | --- |
| tiny | ~39 MB | Fastest | Basic |
| base | ~74 MB | Fast | Good |
| small | ~244 MB | Medium | Better |
| medium | ~769 MB | Slow | Great |
| large-v3 | ~1.5 GB | Slowest | Best |
| large-v3-turbo | ~800 MB | Fast | Excellent |

Recommended: large-v3-turbo for best speed/quality ratio.


Apple Silicon (unified memory):

| Memory | Headroom | Recommended Models |
| --- | --- | --- |
| 8 GB | Limited | tiny, base |
| 16 GB | Good | small, base |
| 32 GB | Excellent | large-v3 |
| 64 GB+ | Optimal | Any model |

NVIDIA GPUs:

| GPU | VRAM | Recommended Model |
| --- | --- | --- |
| RTX 3060 | 12 GB | medium |
| RTX 3090/4080 | 16-24 GB | large-v3 |
| RTX 4090 | 24 GB | Any model |
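The tables above can be collapsed into a small lookup helper. The thresholds come from the tables; the function name and the fallback below 12 GB of VRAM are assumptions, not part of Subtide itself:

```python
def recommend_model(memory_gb: float, is_vram: bool = False) -> str:
    """Map available memory to a Whisper model size.

    is_vram=True uses the NVIDIA table; otherwise the Apple
    unified-memory table applies.
    """
    if is_vram:  # NVIDIA table
        if memory_gb >= 16:
            return "large-v3"
        if memory_gb >= 12:
            return "medium"
        return "small"  # assumption: below the table's documented range
    # Apple unified-memory table
    if memory_gb >= 32:
        return "large-v3"
    if memory_gb >= 16:
        return "small"
    if memory_gb >= 8:
        return "base"
    return "tiny"

print(recommend_model(12, is_vram=True))  # prints: medium
```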

Core configuration:

| Variable | Description | Default |
| --- | --- | --- |
| `PORT` | Server port | `5001` |
| `WHISPER_MODEL` | Model size | `base` |
| `WHISPER_BACKEND` | Backend type | Auto |
| `CORS_ORIGINS` | Allowed origins | `*` |

See Configuration for full list.
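Internally, settings like these are typically read from the environment with the defaults above. A minimal sketch (`load_config` is a hypothetical name, not Subtide's actual config module):

```python
import os

def load_config(env=os.environ) -> dict:
    """Read the core settings with the documented defaults."""
    return {
        "port": int(env.get("PORT", "5001")),
        "whisper_model": env.get("WHISPER_MODEL", "base"),
        "whisper_backend": env.get("WHISPER_BACKEND", "auto"),
        # CORS_ORIGINS may hold a comma-separated list of origins
        "cors_origins": env.get("CORS_ORIGINS", "*").split(","),
    }

print(load_config({}))
# prints: {'port': 5001, 'whisper_model': 'base', 'whisper_backend': 'auto', 'cors_origins': ['*']}
```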


Verify the backend is running:

```sh
curl http://localhost:5001/health
```

Expected response:

```json
{"status": "healthy"}
```
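A launch script or client can poll this endpoint until the backend reports healthy. The sketch below spins up a throwaway stand-in server so it runs self-contained; in real use you would point `wait_for_health` (a hypothetical helper, not part of Subtide) at `http://localhost:5001/health`:

```python
import json
import threading
import time
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def wait_for_health(url: str, timeout: float = 30.0, interval: float = 0.5) -> bool:
    """Poll url until it returns {"status": "healthy"} or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if json.load(resp).get("status") == "healthy":
                    return True
        except (urllib.error.URLError, OSError, ValueError):
            pass  # not up yet (or not JSON); retry
        time.sleep(interval)
    return False

# Throwaway stand-in for the backend so the example is self-contained.
class _HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/health":
            self.send_error(404)
            return
        body = json.dumps({"status": "healthy"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), _HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
print(wait_for_health(f"http://127.0.0.1:{port}/health"))  # prints: True
server.shutdown()
```

Polling with a deadline rather than a single request matters at startup, since loading a large Whisper model can delay the first healthy response.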