A modern, self-hosted podcast aggregator (podcatcher) web application built with Django, HTMX, and Tailwind CSS.
- Podcast Discovery: Browse and search thousands of podcasts with full-text search
- Smart Feed Updates: Intelligent RSS feed parsing with conditional requests and dynamic scheduling
- Episode Playback: In-browser audio player with playback position tracking
- Personalized Recommendations: Algorithmic podcast suggestions derived from your subscriptions
- Bookmarks & History: Save favorite episodes and track listening progress
- Email Notifications: Optional digests of new episodes from subscribed podcasts
- Progressive Web App: Installable as a mobile/desktop app
- OAuth Support: Sign in with GitHub or Google
- Backend: Django 6.0 (Python 3.14)
- Frontend: HTMX, AlpineJS, Tailwind CSS
- Database: PostgreSQL 18 with full-text search
- Cache: Redis 8
- RSS Parsing: lxml (XPath) with Pydantic validation
- Image Processing: Pillow with WebP compression
- Python 3.14 (install via `uv python install 3.14.x` if needed)
- uv - Fast Python package installer
- just - Command runner (optional but recommended)
- Docker & Docker Compose (required for development services)
1. Clone the repository

   ```bash
   git clone https://github.com/yourusername/radiofeed-app.git
   cd radiofeed-app
   ```

2. Start development services (do this first!)

   ```bash
   just start
   ```

   This starts Docker containers via `docker-compose.yml` for:

   - PostgreSQL 18 (port 5432)
   - Redis 8 (port 6379)
   - Mailpit for email testing (UI: http://localhost:8025)

   These services must be running before proceeding with the next steps.
3. Create environment file

   ```bash
   cp .env.example .env
   ```

   The `.env.example` file contains required development settings (e.g., `DEBUG=true`, `USE_DEBUG_TOOLBAR=true`) that are configured to work with the Docker Compose services.
4. Install dependencies

   ```bash
   just install
   ```

   This will:

   - Install Python dependencies with uv
   - Set up pre-commit hooks
   - Download NLTK data for text processing
5. Run database migrations

   ```bash
   just dj migrate
   ```

6. Create a superuser

   ```bash
   just dj createsuperuser
   ```

7. Start the development server

   ```bash
   just serve
   ```

   The app will be available at http://localhost:8000
The .env.example file contains default development settings configured to work with the docker-compose.yml services. Simply copy it to .env as shown in step 3 above.
If you need to customize settings (e.g., using local PostgreSQL/Redis instead of Docker, or adding OAuth credentials):
Edit your `.env` file to customize:

- `DATABASE_URL` - PostgreSQL connection string (default works with Docker Compose)
- `REDIS_URL` - Redis connection string (default works with Docker Compose)
- `SECRET_KEY` - Django secret key (auto-generated in development)
- `DEBUG` - Enable debug mode (set to `true` for development)
- `USE_DEBUG_TOOLBAR` - Enable Django Debug Toolbar (set to `true` for development)
- OAuth credentials for GitHub/Google login (optional)
```bash
just                  # List all available commands
just serve            # Run dev server + Tailwind compiler
just test             # Run test suite
just test -k pattern  # Run specific tests
just tw               # Run tests in watch mode
just lint             # Run code formatters and linters
just typecheck        # Run type checker (basedpyright)
just dj <command>     # Run Django management commands
```

```
radiofeed-app/
├── radiofeed/ # Main Django project
│ ├── episodes/ # Episode models, views, audio playback
│ ├── podcasts/ # Podcast models, feed parsing, recommendations
│ ├── users/ # User accounts and preferences
│ └── parsers/ # RSS/Atom feed parsing pipeline
├── config/ # Django settings and URL configuration
├── templates/ # Django templates (HTMX-enhanced)
├── static/ # Static assets (CSS, JS, images)
├── ansible/ # Deployment playbooks for K3s
└── justfile # Development command shortcuts
```
For detailed architecture documentation, see CLAUDE.md.
The project maintains 100% code coverage:
```bash
just test                     # Run all tests with coverage
just test radiofeed/podcasts  # Test specific module
just tw                       # Watch mode - auto-run on changes
```

Tests use pytest with:

- factory_boy for test data generation
- pytest-django for Django integration
- pytest-mock for mocking
- pytest-xdist for parallel execution
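For illustration, a test in this style might look like the sketch below (`PodcastFactory` here is hypothetical; check the project's actual factories):

```python
import factory
import pytest

from radiofeed.podcasts.models import Podcast


class PodcastFactory(factory.django.DjangoModelFactory):
    """Hypothetical factory for illustration; the project's real factories may differ."""

    class Meta:
        model = Podcast

    rss = factory.Sequence(lambda n: f"https://example.com/feed-{n}.xml")


@pytest.mark.django_db
def test_podcast_has_rss_url():
    # pytest-django provides the django_db marker; factory_boy builds the row.
    podcast = PodcastFactory()
    assert podcast.rss.startswith("https://example.com/")
```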
Pre-commit hooks run automatically on commit:
```bash
just precommit run --all-files  # Run manually
just precommit update           # Update hook versions
```

Hooks include:
- Ruff - Fast Python linting and formatting
- basedpyright - Type checking
- djhtml/djlint - Django template formatting
- rustywind - Tailwind class sorting
- gitleaks - Secret detection
- commitlint - Conventional commit messages
The app doesn't include a web UI for adding podcasts yet. Use the Django admin or management commands:
```bash
# Via admin
just dj createsuperuser
# Then visit http://localhost:8000/admin/

# Via shell
just dj shell
>>> from radiofeed.podcasts.models import Podcast
>>> Podcast.objects.create(rss="https://example.com/feed.xml")
```

Podcast feeds are parsed via management commands (typically run via cron):

```bash
just dj parse_feeds --limit 360  # Parse up to 360 scheduled podcasts
just dj send_notifications      # Email new episode notifications
just dj create_recommendations  # Generate podcast recommendations
```

The parser features:
- Conditional HTTP requests (ETag/If-Modified-Since)
- Dynamic scheduling based on episode frequency
- Exponential backoff on errors
- Bulk episode upserts for efficiency
- Support for iTunes, Google Play, and Podcast Index namespaces
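For a feel of the conditional-request logic, here is a minimal sketch (illustrative only, using httpx as the HTTP client; the real pipeline in `radiofeed/parsers/` is more involved):

```python
import httpx


def fetch_feed(url: str, etag: str | None = None, modified: str | None = None) -> bytes | None:
    """Fetch a feed, skipping the download when the server reports no change."""
    headers = {}
    if etag:
        headers["If-None-Match"] = etag
    if modified:
        headers["If-Modified-Since"] = modified

    response = httpx.get(url, headers=headers, follow_redirects=True)
    if response.status_code == httpx.codes.NOT_MODIFIED:
        return None  # 304: feed unchanged since the last fetch, nothing to parse
    response.raise_for_status()
    return response.content
```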
The frontend uses HTMX for dynamic updates without JavaScript frameworks. Custom middleware handles:
- Out-of-band message injection
- Redirect conversion to HX-Location headers
- Cache control for HTMX requests
- Session-based audio player state
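A stripped-down sketch of the redirect-conversion step (header names follow the HTMX spec; this is not the project's actual middleware):

```python
from django.http import HttpResponse


class HtmxRedirectMiddleware:
    """Turn server-side redirects into HX-Location responses for HTMX requests.

    Sketch only: the real middleware also handles messages, caching, and player state.
    """

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        # HTMX sends "HX-Request: true"; a 200 response carrying HX-Location
        # makes the client navigate without a full page reload.
        if request.headers.get("HX-Request") == "true" and 300 <= response.status_code < 400:
            redirect_to = response["Location"]
            response = HttpResponse()
            response["HX-Location"] = redirect_to
        return response
```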
Uses PostgreSQL's native full-text search instead of external search engines:
- SearchVector fields with GIN indexes
- SearchRank for relevance scoring
- Supports podcast and episode search across multiple fields
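A query in that style looks roughly like this sketch (the `title`/`description` field names and weights are assumptions; the app stores precomputed `SearchVectorField` columns behind GIN indexes rather than building vectors per query):

```python
from django.contrib.postgres.search import SearchQuery, SearchRank, SearchVector

from radiofeed.podcasts.models import Podcast


def search_podcasts(term: str):
    """Rank podcasts by full-text relevance (illustrative field names)."""
    vector = SearchVector("title", weight="A") + SearchVector("description", weight="B")
    query = SearchQuery(term)
    return (
        Podcast.objects.annotate(rank=SearchRank(vector, query))
        .filter(rank__gt=0)
        .order_by("-rank")
    )
```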
Instead of Celery, uses Python's ThreadPoolExecutor:
- Simpler deployment (no separate workers)
- Database-safe threading with connection cleanup
- Batch processing (500 items/batch)
- Suitable for ~360 podcasts/run
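The pattern, in outline (a sketch with a placeholder `parse_one`, not the project's actual task runner):

```python
from concurrent.futures import ThreadPoolExecutor

from django.db import close_old_connections


def parse_one(podcast_id: int) -> None:
    try:
        ...  # fetch and parse the feed for this podcast
    finally:
        # Each worker thread opens its own DB connection; close stale ones
        # so they don't leak between batches.
        close_old_connections()


def parse_feeds(podcast_ids: list[int], max_workers: int = 8) -> None:
    # Threads (rather than separate worker processes) suffice for I/O-bound feed fetching.
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        list(executor.map(parse_one, podcast_ids))
```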
Dynamic update frequency based on podcast activity:
- Analyzes episode publication intervals
- Ranges from 1 hour (active) to 3 days (inactive)
- Prioritizes by: new podcasts, subscriber count, promoted status
- Incremental backoff if no new episodes
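One way such an interval could be derived (illustrative; the real scheduler also factors in subscribers and promoted status as listed above):

```python
from datetime import datetime, timedelta

MIN_INTERVAL = timedelta(hours=1)  # very active podcasts
MAX_INTERVAL = timedelta(days=3)   # inactive podcasts


def update_interval(pub_dates: list[datetime]) -> timedelta:
    """Clamp the median gap between recent episodes to the 1h-3d window."""
    if len(pub_dates) < 2:
        return MAX_INTERVAL  # not enough history; check infrequently
    dates = sorted(pub_dates, reverse=True)
    gaps = sorted(a - b for a, b in zip(dates, dates[1:]))
    median_gap = gaps[len(gaps) // 2]
    return min(max(median_gap, MIN_INTERVAL), MAX_INTERVAL)
```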
See CLAUDE.md for complete architecture documentation.
Radiofeed can be deployed in several ways, from simple Docker containers to production-ready K3s clusters on Hetzner Cloud.
For production deployments, we provide Terraform configuration to provision infrastructure on Hetzner Cloud:
Infrastructure:
- K3s control plane + load balancer (Traefik)
- Dedicated database server (PostgreSQL + Redis)
- Job runner for cron tasks
- Multiple web application servers
- Private network for secure internal communication
- Persistent volume for PostgreSQL
Quick Start:
1. Provision infrastructure:

   ```bash
   cd terraform/hetzner
   cp terraform.tfvars.example terraform.tfvars
   # Edit terraform.tfvars with your Hetzner API token and SSH key
   terraform init
   terraform apply
   ```

2. Configure Cloudflare CDN + SSL:

   ```bash
   cd ../cloudflare
   cp terraform.tfvars.example terraform.tfvars
   # Edit terraform.tfvars with Cloudflare API token and server IP
   terraform init
   terraform apply
   # Download origin certificates from Cloudflare Dashboard
   # Save to ansible/certs/cloudflare.pem and ansible/certs/cloudflare.key
   ```

3. Generate Ansible inventory:

   ```bash
   cd ../hetzner
   terraform output -raw ansible_inventory > ../../ansible/hosts.yml
   ```

4. Deploy with Ansible:

   ```bash
   cd ../../
   just apb site
   ```
See terraform/hetzner/README.md and terraform/cloudflare/README.md for complete setup instructions.
Cloudflare is used for CDN, caching, and SSL/TLS termination. The Terraform configuration sets up:
- CDN: Caching for static assets (`/static/*`) and media files (`/media/*`)
- SSL/TLS: Full SSL mode with origin certificates
- Security: Firewall rules, DDoS protection, security headers
- Performance: HTTP/3, Brotli compression, early hints
Requirements:
- Cloudflare account (free tier is sufficient)
- Domain added to Cloudflare
- Nameservers updated at your DNS provider (e.g., Namecheap)
- Cloudflare origin certificates saved to `ansible/certs/`
Origin Certificates:
The Ansible deployment requires Cloudflare origin certificates:
1. Go to Cloudflare Dashboard → SSL/TLS → Origin Server
2. Click "Create Certificate"
3. Save the certificate as `ansible/certs/cloudflare.pem`
4. Save the private key as `ansible/certs/cloudflare.key`
These certificates are used for secure communication between Cloudflare and your origin server.
See terraform/cloudflare/README.md for detailed setup instructions.
Required for production:
```bash
ALLOWED_HOSTS=radiofeed.app
DATABASE_URL=postgresql://user:pass@host:5432/radiofeed
REDIS_URL=redis://host:6379/0
SECRET_KEY=<generate-with-manage-py-generate_secret_key>
ADMIN_URL=<random-path>/  # e.g., "my-secret-admin-path/"
ADMINS=admin@radiofeed.app
SENTRY_URL=https://...@sentry.io/...  # Optional
```

Optional (email via Mailgun):

```bash
EMAIL_HOST=mg.radiofeed.app
MAILGUN_API_KEY=<mailgun-api-key>
```

Optional (security headers):

```bash
USE_HSTS=true  # If load balancer doesn't set HSTS headers
```

Build and run with Docker:
```bash
docker build -t radiofeed .
docker run -p 8000:8000 \
  -e DATABASE_URL="postgresql://..." \
  -e REDIS_URL="redis://..." \
  -e SECRET_KEY="..." \
  radiofeed
```

Ansible playbooks are provided in the ansible/ directory for K3s deployment. These work with the Terraform-provisioned infrastructure or manually created servers.
Using with Terraform (recommended):

```bash
# After terraform apply
cd terraform/hetzner
terraform output -raw ansible_inventory > ../../ansible/hosts.yml
cd ../../
just apb site
```

Manual deployment:

```bash
cd ansible
cp hosts.yml.example hosts.yml
# Edit hosts.yml with your server IPs
ansible-playbook -i hosts.yml site.yml
```

This sets up:
- K3s cluster with control plane and agent nodes
- PostgreSQL with persistent volume
- Redis for caching
- Django application with multiple replicas
- Traefik load balancer with SSL/TLS
- Automated cron jobs for feed parsing
See ansible/README.md for detailed deployment instructions.
1. Generate secret key:

   ```bash
   python manage.py generate_secret_key
   ```

2. Run migrations:

   ```bash
   python manage.py migrate
   ```

3. Create superuser:

   ```bash
   python manage.py createsuperuser
   ```

4. Configure Site in admin:

   - Visit `/admin/` (or your custom `ADMIN_URL`)
   - Update the default Site with your domain

5. Set up cron jobs for feed parsing:

   ```
   */15 * * * * /path/to/manage.py parse_feeds --limit 360
   0 9 * * *    /path/to/manage.py send_notifications
   0 3 * * 0    /path/to/manage.py create_recommendations
   ```
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes following the code style (Ruff will enforce this)
- Write tests to maintain 100% coverage
- Commit using conventional commits (`feat:`, `fix:`, `docs:`, etc.)
- Push to your fork and submit a pull request
Pre-commit hooks will run automatically to ensure code quality.
MIT License - see LICENSE file for details.
- Issues: GitHub Issues
- Security: See SECURITY.md for reporting vulnerabilities
- RSS feed parsing inspired by various podcast aggregators
- Uses excellent open-source libraries (Django, HTMX, PostgreSQL)
- Thanks to all contributors!