HostiServer
2026-02-23 14:53:00

What is WSGI: An Easy Guide to Python Web Projects

⏱️ Reading time: ~8 minutes | 📅 Updated: February 23, 2026

Why Server Configuration Is Critical for Python Applications

Django and Flask are among the most popular Python web frameworks. Instagram, Pinterest, Spotify, and Dropbox are built on Django; Netflix, LinkedIn, and Reddit use Flask for microservices and APIs. But even perfectly written code will perform poorly on a badly configured server.

The wrong number of workers, missing connection pooling, running as root, DEBUG=True in production — these typical mistakes turn a fast application into a slow and vulnerable one. We've seen projects where response time dropped from 1.2 seconds to 150ms simply after the database was configured properly.

In this guide, we'll cover production-ready setup: from choosing between Gunicorn and uWSGI to systemd services with auto-restart, from connection pooling to monitoring. All recommendations are based on real experience deploying hundreds of Python projects.

Django vs Flask: When to Choose What

Django — a "batteries included" framework with built-in admin, ORM, authentication, forms, middleware, migration system. Ideal for large projects: eCommerce, CRM, SaaS platforms, content management systems, social networks. More code "out of the box", fewer decisions to make yourself, faster start for typical tasks.

Flask — a minimalist microframework. Gives full freedom in choosing components: ORM (SQLAlchemy), forms (WTForms), authentication (Flask-Login), admin (Flask-Admin). Suitable for APIs, microservices, small applications, prototypes, Machine Learning services. Easier to learn, but requires more manual work for large projects.

Both frameworks are production-ready and used in large companies. The difference is in approach: Django favors convention over configuration (the "Django Way"), while Flask is a microframework that leaves the choices to you.

Gunicorn vs uWSGI: What to Choose

Both are WSGI servers for running Python applications. They act as intermediaries between the web server (Nginx) and your code (Django/Flask). But in practice, we choose Gunicorn in 90% of cases.
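The WSGI interface itself is tiny: an application is just a callable that receives the request environment and a start_response function. A minimal sketch (the module name hello.py is our example, not from the article):

```python
# hello.py -- a minimal WSGI application (PEP 3333): the exact callable
# interface that Gunicorn and uWSGI invoke on behalf of Nginx per request.
def application(environ, start_response):
    body = b"Hello from WSGI"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]  # an iterable of bytes
```

Gunicorn serves this with gunicorn hello:application — the same shape as myproject.wsgi:application for Django or app:app for Flask.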

Why Gunicorn

  • Simpler to configure — fewer parameters, clearer documentation, faster start
  • Standard for Docker — most tutorials, Dockerfile examples, and best practices are written for it
  • "Cleaner" code — easier to debug problems, less "magic"
  • Sufficient functionality — for 90% of projects, nothing more is needed
  • Active community — quick answers to questions, regular updates

When uWSGI

uWSGI is more powerful: has its own binary protocol (uwsgi), built-in caching, emperor mode for managing multiple applications, out-of-the-box WebSocket support. But configuration is too cumbersome for typical projects — dozens of parameters, complex syntax. Choose uWSGI only if:

  • You need emperor mode for managing many applications on one server
  • You use specific features: built-in caching, spooler for queues
  • You already have experience with uWSGI

Number of Workers

The classic formula (2 × CPU cores) + 1 is a good starting point, but not an axiom. The right number depends on the type of work the application does:

  • CPU-bound: ≈ number of cores — image processing, heavy computations; workers compete for CPU
  • I/O-bound: (2–4) × cores — many DB queries, external APIs; workers spend most of their time waiting
  • Mixed: (2 × cores) + 1 — typical web application; a balance of the two
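These rules of thumb can be turned into a small helper for a deployment script; a stdlib-only sketch (the task-type labels are our own):

```python
import multiprocessing

def gunicorn_workers(task_type: str = "mixed") -> int:
    """Suggested Gunicorn worker count; a starting point, not an axiom."""
    cores = multiprocessing.cpu_count()
    if task_type == "cpu":
        return cores            # CPU-bound: workers compete for CPU anyway
    if task_type == "io":
        return 3 * cores        # I/O-bound: 2-4x cores, workers mostly wait
    return 2 * cores + 1        # Mixed: the classic (2 x cores) + 1

print(gunicorn_workers())       # e.g. 9 on a 4-core server
```

Whatever the formula says, always verify under real load.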

Async Workers (gevent, eventlet)

For I/O-bound applications with many concurrent requests, you can use async workers:

# Installation
pip install gunicorn gevent
# Running with gevent workers
gunicorn myproject.wsgi:application \
    --worker-class gevent \
    --workers 4 \
    --worker-connections 1000 \
    --bind unix:/run/gunicorn.sock

One gevent worker can handle thousands of concurrent connections thanks to cooperative multitasking. But there's a nuance: all code must be "gevent-friendly" — blocking operations will block the entire worker.

Full Gunicorn Configuration Example

# /etc/gunicorn/gunicorn.conf.py
# Number of workers for 4-core server
workers = 9  # (2×4)+1
# Bind to Unix socket (faster than TCP)
bind = 'unix:/run/gunicorn.sock'
# Timeouts
timeout = 30  # If longer — move to Celery
graceful_timeout = 30  # Time to finish current request
keepalive = 5  # Keep-alive connections with Nginx
# Restart workers to prevent memory leaks
max_requests = 1000
max_requests_jitter = 100  # Random deviation
# Logging
accesslog = '/var/log/gunicorn/access.log'
errorlog = '/var/log/gunicorn/error.log'
loglevel = 'warning'
# Process
daemon = False  # Systemd manages the process
user = 'www-data'
group = 'www-data'

Important parameters:

  • timeout = 30 — if a request takes longer than 30 seconds, move that work to a queue (Celery) instead of raising the timeout. A longer timeout only hides the problem
  • graceful_timeout = 30 — time to finish the current request before reloading the worker
  • max_requests = 1000 — restarting the worker after 1000 requests prevents accumulation of memory leaks from poorly written libraries

Production Checklist for Django

Before going to production, be sure to check these settings:

settings.py

# CRITICAL: disable debug mode
DEBUG = False
# Clear list of allowed domains
ALLOWED_HOSTS = ['example.com', 'www.example.com']
# If Django is behind Nginx/proxy
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
# Security
SECURE_SSL_REDIRECT = True
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
SECURE_HSTS_SECONDS = 31536000
SECURE_HSTS_INCLUDE_SUBDOMAINS = True

⚠️ DEBUG = False is critical! With DEBUG enabled, on errors Django shows full traceback with code, file paths, environment variables. This is a direct security threat.

Secrets and Environment Variables

Never commit secrets to Git. Use:

  • Environment Variables via .env file (django-environ or python-dotenv)
  • HashiCorp Vault or AWS Secrets Manager for large projects
# .env file (DO NOT commit to Git!)
SECRET_KEY=your-super-secret-key
DATABASE_URL=postgres://user:pass@localhost/dbname
DEBUG=False
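In settings.py these variables are then read from the environment. A stdlib-only sketch (django-environ and python-dotenv wrap the same idea; the helper name env_settings is ours):

```python
import os

def env_settings(environ=os.environ):
    """Collect security-critical Django settings from environment variables."""
    return {
        # Fail fast at startup if the secret is missing
        "SECRET_KEY": environ["SECRET_KEY"],
        # Only the explicit string "True" enables debug mode
        "DEBUG": environ.get("DEBUG", "False") == "True",
        "ALLOWED_HOSTS": environ.get("ALLOWED_HOSTS", "").split(","),
    }
```

In settings.py you would unpack this into SECRET_KEY, DEBUG, and ALLOWED_HOSTS; systemd's EnvironmentFile= is one way to get the .env values into the process environment.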

Middleware

Required middleware for production:

  • SecurityMiddleware — redirects to HTTPS, protection from XSS/Clickjacking
  • WhiteNoise — serving static files without a separate web server (if no CDN)
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'whitenoise.middleware.WhiteNoiseMiddleware',  # After SecurityMiddleware
    # ... other middleware
]
# WhiteNoise configuration (Django < 4.2; on 4.2+ configure it via the STORAGES setting)
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'

Systemd: Auto-start and Monitoring

Systemd ensures automatic application startup on server boot and restart on failures.

Typical Service File

Create file /etc/systemd/system/gunicorn.service:

[Unit]
Description=Gunicorn instance to serve myproject
After=network.target
[Service]
User=www-data
Group=www-data
WorkingDirectory=/var/www/myproject
# Path to gunicorn in virtualenv
ExecStart=/var/www/myproject/venv/bin/gunicorn \
    --workers 3 \
    --bind unix:/run/gunicorn.sock \
    myproject.wsgi:application
# Auto-restart on failure
Restart=always
RestartSec=5
# Environment variables
EnvironmentFile=/var/www/myproject/.env
[Install]
WantedBy=multi-user.target

Service Management

# Reload systemd configuration
sudo systemctl daemon-reload
# Enable auto-start
sudo systemctl enable gunicorn
# Start service
sudo systemctl start gunicorn
# Check status
sudo systemctl status gunicorn
# View logs
sudo journalctl -u gunicorn -f

The Restart=always parameter together with RestartSec=5 ensures the service automatically comes back up 5 seconds after any failure.

Nginx as Reverse Proxy

Nginx accepts requests from clients and passes them to Gunicorn. Also serves static files and provides SSL encryption.

Configuration for Django

# /etc/nginx/sites-available/myproject
upstream gunicorn {
    server unix:/run/gunicorn.sock fail_timeout=0;
}
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$server_name$request_uri;
}
server {
    listen 443 ssl http2;
    server_name example.com www.example.com;
    
    # SSL certificates (Let's Encrypt)
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    
    # Static files
    location /static/ {
        alias /var/www/myproject/staticfiles/;
        expires 30d;
        add_header Cache-Control "public, immutable";
    }
    
    # Media files (uploads)
    location /media/ {
        alias /var/www/myproject/media/;
        expires 7d;
    }
    
    # Proxy to Gunicorn
    location / {
        proxy_pass http://gunicorn;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        proxy_connect_timeout 30s;
        proxy_read_timeout 30s;
    }
}
# Activate configuration
sudo ln -s /etc/nginx/sites-available/myproject /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx

Configuration for Flask

For Flask, Nginx configuration is almost identical. The main difference is in the Gunicorn command:

# Django
gunicorn myproject.wsgi:application --workers 3 --bind unix:/run/gunicorn.sock
# Flask
gunicorn app:app --workers 3 --bind unix:/run/gunicorn.sock
# Flask with factory pattern (create_app)
gunicorn "app:create_app()" --workers 3 --bind unix:/run/gunicorn.sock

Here app:app means: the module app (file app.py) and the variable app inside it — the Flask instance.

PostgreSQL vs MySQL

For Django, we recommend PostgreSQL 95% of the time. Reasons:

  • Best support in Django — JSONB fields, full-text search, ArrayField, data type handling
  • Reliability — better handling of concurrent transactions, MVCC
  • Extensibility — PostGIS for geodata, TimescaleDB for time-series
  • Industry standard — most Python projects use Postgres

We use MySQL only if it's a client requirement or legacy code.

Connection Pooling

By default, Django opens a new database connection for every request. With high traffic this becomes a serious bottleneck — the database can't keep up with the number of concurrent connections.

At Django level (basic solution):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'mypassword',
        'HOST': 'localhost',
        'CONN_MAX_AGE': 60,  # Keep connection for 60 seconds
        'CONN_HEALTH_CHECKS': True,  # Django 4.1+
    }
}

CONN_MAX_AGE is usually set to 60-300 seconds. This allows reusing connections between requests.

For high traffic — PgBouncer:

If CONN_MAX_AGE isn't enough or you see "too many connections" errors, set up PgBouncer between Django and the database. It maintains a connection pool and distributes them among workers.

# Installing PgBouncer
sudo apt install pgbouncer
# Configuration /etc/pgbouncer/pgbouncer.ini
[databases]
mydb = host=localhost dbname=mydb
[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
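On the Django side, point the application at PgBouncer instead of Postgres directly. A sketch: with pool_mode = transaction, server-side cursors must be disabled, and CONN_MAX_AGE can go back to 0 because PgBouncer now owns the pooling:

```python
# settings.py -- connect through PgBouncer (listening on 127.0.0.1:6432)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'mypassword',
        'HOST': '127.0.0.1',
        'PORT': '6432',                       # PgBouncer, not Postgres
        'CONN_MAX_AGE': 0,                    # PgBouncer owns the pool now
        'DISABLE_SERVER_SIDE_CURSORS': True,  # required with transaction pooling
    }
}
```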

💡 Real case: A Flask project where each request created a new DB connection. Result: 504 Gateway Timeout under load, response time ~1.2 seconds. After implementing PgBouncer (connection pooling), response time dropped to 150ms — an 8x improvement.

Docker in Production

We definitely recommend it. Docker ensures identical development and production environments — "works on my machine" stops being a problem. Deployment becomes predictable and repeatable.

Choosing Base Image

We recommend python:3.12-slim:

  • alpine — often creates problems with compiling C libraries (psycopg2, numpy, Pillow, cryptography). Build can take much longer due to compilation with musl libc
  • slim — ideal balance between size (~150MB) and stability. Uses glibc, all libraries install without problems
  • full image — only if you need specific system packages (ffmpeg, imagemagick, etc.)

Dockerfile Example

FROM python:3.12-slim
# Environment variables for Python
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# System dependencies
RUN apt-get update && apt-get install -y \
    libpq-dev \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
# Copy requirements first for layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Then application code
COPY . .
# Create unprivileged user
RUN useradd -m appuser && chown -R appuser:appuser /app
USER appuser
# Collect static
RUN python manage.py collectstatic --noinput
# Start
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "3", "myproject.wsgi:application"]

Docker Compose for Production

version: '3.8'
services:
  web:
    build: .
    restart: always
    env_file: .env
    depends_on:
      - db
      - redis
    networks:
      - backend
    
  db:
    image: postgres:16
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    networks:
      - backend
  
  redis:
    image: redis:7-alpine
    restart: always
    networks:
      - backend
  nginx:
    image: nginx:alpine
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./staticfiles:/var/www/static:ro
      - ./certbot/conf:/etc/letsencrypt:ro
    depends_on:
      - web
    networks:
      - backend
networks:
  backend:
volumes:
  postgres_data:

Important Docker Practices

  • Multi-stage builds — to reduce final image size
  • Health checks — for automatic restart of unhealthy containers
  • .dockerignore — exclude .git, __pycache__, .env, venv
  • Don't store data in container — use volumes for DB and media files

Celery for Background Tasks

If a Django request takes longer than 30 seconds, that's a signal to move the task to Celery. A Gunicorn worker is blocked for the duration of the request, and with long tasks the pool of free workers runs out quickly.

Typical candidates for Celery:

  • Sending email (especially mass mailings)
  • Image processing (resize, watermark, optimization)
  • Report generation (PDF, Excel)
  • External API calls (payment systems, delivery)
  • Data import/export (CSV, XML)
  • Video/audio conversion
  • CRM/ERP synchronization

Basic Setup

# settings.py
CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Europe/Kyiv'
# Limits to prevent overload
CELERY_WORKER_PREFETCH_MULTIPLIER = 1
CELERY_TASK_ACKS_LATE = True

Task Example

# tasks.py
from celery import shared_task
from django.core.mail import send_mail
@shared_task
def send_welcome_email(user_id):
    from users.models import User
    user = User.objects.get(id=user_id)
    send_mail(
        'Welcome!',
        'Thank you for registering.',
        'noreply@example.com',
        [user.email],
    )
# Call in view
send_welcome_email.delay(user.id)  # Asynchronously

Systemd Service for Celery Worker

# /etc/systemd/system/celery.service
[Unit]
Description=Celery Worker
After=network.target
[Service]
User=www-data
Group=www-data
WorkingDirectory=/var/www/myproject
EnvironmentFile=/var/www/myproject/.env
ExecStart=/var/www/myproject/venv/bin/celery \
    -A myproject worker \
    --loglevel=info \
    --concurrency=4
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target

For periodic tasks (cron-like), add Celery Beat as a separate service with celery -A myproject beat.
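A sketch of what that Beat schedule can look like in settings.py (the task paths here are hypothetical examples):

```python
# settings.py -- periodic tasks for Celery Beat
from celery.schedules import crontab

CELERY_BEAT_SCHEDULE = {
    'cleanup-expired-sessions': {
        'task': 'myproject.tasks.cleanup_sessions',  # hypothetical task
        'schedule': crontab(hour=3, minute=0),       # daily at 03:00
    },
    'sync-crm': {
        'task': 'myproject.tasks.sync_crm',          # hypothetical task
        'schedule': crontab(minute='*/15'),          # every 15 minutes
    },
}
```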

Monitoring and Logging

Production without monitoring is flying blind. When something crashes, you find out from users, not from alerts.

Logging in Django

# settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '{levelname} {asctime} {module} {message}',
            'style': '{',
        },
    },
    'handlers': {
        'file': {
            'level': 'WARNING',
            'class': 'logging.FileHandler',
            'filename': '/var/log/django/app.log',
            'formatter': 'verbose',
        },
        'console': {
            'class': 'logging.StreamHandler',
        },
    },
    'root': {
        'handlers': ['file', 'console'],
        'level': 'INFO',
    },
}

Sentry for Error Tracking

Sentry automatically collects exceptions from production and sends alerts. An invaluable debugging tool.

# Installation
pip install sentry-sdk
# settings.py
import sentry_sdk
sentry_sdk.init(
    dsn="https://xxx@sentry.io/xxx",
    traces_sample_rate=0.1,  # 10% of transactions for performance monitoring
    environment="production",
)

Health Checks

Endpoint for checking application status — useful for load balancers and monitoring:

# Django urls.py
from django.http import JsonResponse
def health_check(request):
    # Check DB
    from django.db import connection
    try:
        connection.ensure_connection()
    except Exception as e:
        return JsonResponse({'status': 'error', 'db': str(e)}, status=500)
    
    return JsonResponse({'status': 'ok'})
urlpatterns = [
    path('health/', health_check),
    # ...
]

Metrics for Prometheus/Grafana

For serious monitoring, use django-prometheus or statsd:

  • Requests per second
  • Response time (p50, p95, p99)
  • Error count
  • Worker memory usage
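The percentile figures above can be computed from raw latencies with nothing but the stdlib; a sketch (in practice django-prometheus or your APM computes these, not a script):

```python
import statistics

def latency_percentiles(samples_ms):
    """Return p50/p95/p99 from raw request latencies in milliseconds."""
    # quantiles(n=100) returns the 99 cut points p1..p99
    cuts = statistics.quantiles(samples_ms, n=100)
    return {"p50": cuts[49], "p95": cuts[94], "p99": cuts[98]}

# 1000 requests: mostly fast, with a slow tail that only p99 reveals
samples = [20] * 900 + [200] * 90 + [1500] * 10
print(latency_percentiles(samples))
```

Averages hide the tail: the mean here is roughly 50 ms, while p99 is well over a second — which is exactly why dashboards track percentiles instead.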

Common Mistakes

❌ Running Django as root

Never run the application as root. Create a separate user (www-data or django) with minimal permissions. If the application is compromised, the attacker gets only limited permissions, not full control over the server.

❌ DEBUG = True in production

On errors, Django shows all code, file paths, variable values, SQL queries. This is a direct security threat — the attacker sees the internal structure of the application. Always check DEBUG before deploying.

❌ Missing logging

When something crashes in production and logs are empty — debugging becomes a nightmare. Configure LOGGING in settings.py, log errors to file or centralized system. Use Sentry for real-time exception tracking.

❌ Media files in code folder

Storing uploads in the code folder makes horizontal scaling impossible and complicates Docker deployment. Use S3-compatible storage (AWS S3, MinIO, DigitalOcean Spaces) or a separate volume.

❌ Secrets in code

SECRET_KEY, DB passwords, and API keys in settings.py or committed to Git — a classic mistake. Once a secret has been committed, treat it as compromised even if you rewrite history, and rotate it. Use environment variables from day one of the project.

❌ Missing migrations in production

Running python manage.py migrate in production without testing risks data loss or downtime. Always test migrations on a staging environment and back up the database before migrating.

🐍 Ready to Launch Your Python Project?

VPS to start or Dedicated for high loads — solutions that scale with you.

💻 Cloud (VPS) Hosting

  • From $19.95/mo — Start small, scale instantly
  • KVM virtualization — Guaranteed resources without overselling
  • Instant upgrades — No downtime
  • NVMe storage — Fast performance
  • 24/7 support — Response under 10 min

🖥️ Dedicated Servers

  • From $200/mo — Modern configurations
  • Custom configurations — Intel or AMD, latest models
  • Multiple locations — EU + USA
  • 99.9% uptime — Reliability
  • DDoS protection — Included
  • Free migration — We'll help
  • Private Cloud — Proxmox, VMware, OpenStack

💬 Not sure which option you need?
💬 Contact us — we'll help with everything!

Frequently Asked Questions

Gunicorn or uWSGI for production?

For 90% of projects, we recommend Gunicorn — it's simpler to configure, the de facto standard in Docker tutorials, and well documented. Choose uWSGI only if you need its specific features: the uwsgi binary protocol, built-in caching, or emperor mode for managing multiple applications.

PostgreSQL or MySQL for Django?

PostgreSQL 95% of the time. Django has the best support for Postgres: JSONB fields for storing JSON without separate tables, full-text search without Elasticsearch, ArrayField, HStoreField. MySQL only if it's a client requirement or migrating an existing project.

How many workers for Gunicorn?

Starting formula: (2 × CPU cores) + 1. For a 4-core server, that's 9 workers. For I/O-bound tasks (many DB queries), you can increase. For CPU-bound (image processing) — stay closer to the number of cores. Always test under real load.

Which Docker image to choose?

python:3.12-slim — optimal balance between size (~150MB) and stability. Alpine often creates problems with compiling C libraries due to musl libc — psycopg2, numpy, Pillow may not build or take hours to build.

What to do with 504 Gateway Timeout?

Usually the problem is long queries to DB or external services. Check PostgreSQL slow query log, implement connection pooling (PgBouncer), move heavy tasks to Celery, use Redis for caching frequent queries.

How to safely update dependencies?

Don't update everything at once. Use pip-audit to check vulnerabilities, update one package at a time, test on staging. For critical security patches — update immediately after testing.

How to scale Django/Flask application?

Horizontally: add more servers behind a load balancer. Vertically: increase server resources. Mandatory: move sessions to Redis, media files to S3, database to a separate server. Use CDN for static files.
