HostiServer
2026-01-23 13:35:00
Nginx vs Apache in 2026: Comparison, Configurations, How to Choose
Nginx vs Apache in 2026: What's the Real Difference?
If you're reading yet another "Nginx vs Apache" article expecting a definitive answer — there won't be one. But there will be something more useful: real production data, specific configs, and clear selection criteria.
Let's start with a fact: 70% of Hostiserver clients use Nginx (including configurations with Nginx as Reverse Proxy). Pure Apache is primarily found on legacy projects and specific Shared hosting. The trend in recent years is clear — Nginx + PHP-FPM is displacing Apache even where it was once the standard.
But this doesn't mean Apache is dead. In some scenarios, it's still irreplaceable — especially where dynamic configuration via .htaccess or specific modules is needed. That's why it's important to understand the strengths of both servers.
This guide will help you understand when to use what — without marketing fluff and with concrete examples from real projects.
Who Uses What in Practice
According to our statistics, project distribution looks like this:
Apache is chosen for:
- Legacy projects with many individual .htaccess modifications
- Bitrix, older Magento versions, and other systems critically dependent on .htaccess
- Shared hosting where user isolation is needed
Nginx is chosen for:
- High-load media portals
- APIs for mobile applications
- Modern SPAs on React/Vue/Angular
- WordPress and WooCommerce
- eCommerce platforms (PrestaShop, modern Magento)
Architecture: Why Nginx Is Faster
The difference between Nginx and Apache isn't about "code quality" or "modernity." The difference is in the architectural approach to connection handling. And this difference has practical consequences you'll feel under load.
Apache: One Process — One Connection
Apache was born in an era when servers handled dozens of concurrent users. Its classic model (prefork) works simply: one process per connection. It's reliable, easy to debug, but scales terribly.
Imagine: 1000 concurrent connections = 1000 processes. Each process with mod_php weighs 50-100 MB. Do the math: 50-100 GB of RAM just for the web server. During traffic spikes (or DDoS), the server crashes because it exhausts the MaxRequestWorkers limit.
This isn't a theoretical problem — it's #1 on our clients' Apache problem list: MaxClients/MaxRequestWorkers limit exhaustion. The site simply "dies" during bot floods or sudden traffic increases.
ℹ️ MPM Event: In 2026, Apache has the MPM Event module, which works more efficiently — one thread can serve multiple keep-alive connections. This significantly improved the situation, but architecturally Apache still falls short of Nginx. MPM Event is a "patch" on old architecture, not a new architecture.
Nginx: One Process — Thousands of Connections
Nginx is built on event-driven architecture. One worker process handles thousands of connections simultaneously through non-blocking I/O. No context switching between processes, no overhead for creating new threads.
Result: Nginx with 4 worker processes (matching CPU cores) can serve as much traffic as Apache with hundreds of processes — while consuming 10x less RAM.
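The event model is visible directly in the config. A minimal sketch of the relevant directives (values are illustrative, not a universal recommendation):

```nginx
# nginx.conf — event-model tuning (illustrative values)
worker_processes auto;          # one worker per CPU core
worker_rlimit_nofile 65535;     # raise the open-file limit for busy servers

events {
    worker_connections 8192;    # connections per worker, not per server
    multi_accept on;            # accept as many new connections as possible at once
}

# Theoretical capacity = worker_processes x worker_connections
# (4 cores x 8192 = ~32k concurrent connections)
```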
This explains why under sudden load (DDoS or viral traffic) Apache crashes much faster. Nginx usually holds — it might return 504 Gateway Timeout if the backend (PHP) can't keep up, but the web server itself continues working and accepting connections.
| Parameter | Apache (prefork) | Apache (event) | Nginx |
|---|---|---|---|
| Processing Model | Process per connection | Thread per connection | Event-driven |
| RAM for 1000 connections | ~50-100 GB | ~5-10 GB | ~50-100 MB |
| Behavior under DDoS | Crashes quickly | Holds better | Stable |
| Static content | Slow | Medium | Very fast |
| Dynamic configuration (.htaccess) | Full support | Full support | Not supported |
The C10k Problem
In the 2000s, the so-called "C10k problem" emerged — how to serve 10,000 concurrent connections. Apache with its "process per connection" model couldn't do this efficiently. Nginx was created specifically to solve this problem.
In 2026, Apache has improved thanks to MPM Event, but it still consumes more memory per new connection compared to Nginx. If your project expects significant loads — Nginx remains the better choice.
Real Performance: What Tests Show
Synthetic benchmarks are one thing, real production is another. Here's what we see on client projects.
WordPress: Nginx Is 15-30% Faster
Under identical conditions (same server, same PHP-FPM version) Nginx + PHP-FPM shows better TTFB (Time to First Byte) by 15-30%. With parallel requests, the difference is even greater — Apache starts "choking" earlier.
For WooCommerce with large catalogs, it's critical to configure fastcgi_buffer_size so large product pages don't create temporary files on disk. This can give an additional 10-20% speed boost on pages with many products.
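What that tuning can look like, as a sketch. The buffer sizes below are assumptions to adjust against your actual page sizes — the "buffered to a temporary file" warnings in the Nginx error log tell you when buffers are too small:

```nginx
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php8.3-fpm.sock;
    # Keep large WooCommerce product pages in memory
    # instead of spilling them to temporary files on disk
    fastcgi_buffer_size 32k;
    fastcgi_buffers 16 32k;
    fastcgi_busy_buffers_size 64k;
}
```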
Static Content: Nginx Is Unmatched
Serving images, CSS, JS — this is what Nginx was built for. It does this with minimal CPU load thanks to sendfile() and zero-copy data transfer. Apache loses even with mod_cache.
Recommended settings for static content in Nginx:
- Maximum expires for static files
- Disable logging for static files (reduces I/O)
- sendfile on — use the sendfile system call
- tcp_nopush on — TCP packet optimization
Under Load: The Difference Is Critical
During sudden traffic floods (DDoS, viral content, the Reddit hug of death), this architectural gap becomes visible in practice: Apache exhausts its worker slots and stops accepting connections, while Nginx keeps serving, at worst degrading to 504 errors when the PHP backend falls behind.
💡 Success Story: A media resource migrated from Apache to Nginx + FastCGI Cache. Result: reduced servers from 4 to 1 under the same load. Infrastructure savings — over $500/month. Caching anonymous users allowed handling traffic spikes that previously "killed" the entire infrastructure.
Typical Performance Problems
Top 3 Apache Problems:
- MaxClients/MaxRequestWorkers limit exhaustion (site "dies" during bot floods)
- RAM overconsumption through mod_php (each process 50-100 MB)
- Complex .htaccess conflicts that are hard to debug
Top 3 Nginx Problems:
- fastcgi_params configuration errors (502 Bad Gateway)
- Permission issues with PHP-FPM sockets
- Incorrect redirect configuration (redirect loops)
Recommended Nginx Configurations
Theory is good, but you need working configs. Here's what we use in production.
WordPress + FastCGI Cache
This configuration can handle thousands of requests per second. Caching anonymous users is a must-have for any WordPress site. Logged-in users and POST requests automatically bypass the cache.
# Define cache zone in http block
fastcgi_cache_path /var/cache/nginx/wordpress
levels=1:2
keys_zone=WP_CACHE:100m
inactive=60m;
server {
listen 443 quic reuseport; # HTTP/3
listen 443 ssl;
http2 on;
server_name example.com;
# SSL settings
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_protocols TLSv1.3;
# HTTP/3 announcement
add_header Alt-Svc 'h3=":443"; ma=86400';
root /var/www/wordpress;
index index.php;
# Cache bypass logic
set $skip_cache 0;
# Don't cache POST requests
if ($request_method = POST) {
set $skip_cache 1;
}
# Don't cache requests with query string
if ($query_string != "") {
set $skip_cache 1;
}
# Don't cache logged-in users
if ($http_cookie ~* "comment_author|wordpress_logged_in|wp-postpass") {
set $skip_cache 1;
}
# Static files with long caching
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2)$ {
expires 1y;
add_header Cache-Control "public, immutable";
access_log off;
}
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
include fastcgi_params;
fastcgi_pass unix:/run/php/php8.3-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
# Caching
fastcgi_cache WP_CACHE;
fastcgi_cache_valid 200 301 302 1h;
fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;
# Debug header (shows HIT/MISS)
add_header X-FastCGI-Cache $upstream_cache_status;
}
}
Reverse Proxy for Node.js / Python / Go
Ideal for modern applications on NestJS, FastAPI, Django, Gin. Includes WebSocket support and real IP forwarding.
upstream backend {
server 127.0.0.1:3000;
server 127.0.0.1:3001 backup;
keepalive 32;
}
server {
listen 443 ssl;
http2 on;
server_name api.example.com;
ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
location / {
proxy_pass http://backend;
proxy_http_version 1.1;
# WebSocket support
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# Real IP forwarding
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Timeouts
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# Buffering
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
}
# Health check endpoint
location /health {
proxy_pass http://backend;
access_log off;
}
}
Static File Optimization
# Global settings in http block
sendfile on;
tcp_nopush on;
tcp_nodelay on;
# Gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_types text/plain text/css application/json application/javascript
text/xml application/xml application/xml+rss text/javascript
image/svg+xml;
# Static files
location ~* \.(jpg|jpeg|png|gif|ico|css|js|svg|woff|woff2|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";
access_log off;
log_not_found off;
}
Recommended Apache Configurations
If you need Apache — use it correctly. The main rule: only MPM Event + PHP-FPM. mod_php is an anachronism that belongs in 2010.
MPM Event — The 2026 Gold Standard
MPM Event is closest to the Nginx model and works efficiently with keep-alive connections. Here's the optimal configuration:
# /etc/apache2/mods-enabled/mpm_event.conf
<IfModule mpm_event_module>
# Number of processes at startup
StartServers 3
# Minimum spare threads
MinSpareThreads 25
# Maximum spare threads
MaxSpareThreads 75
# Maximum threads per process
ThreadLimit 64
ThreadsPerChild 25
# Maximum concurrent requests
# Calculate: RAM / ~50MB per worker
MaxRequestWorkers 400
# 0 = processes don't restart
MaxConnectionsPerChild 0
</IfModule>
⚠️ Important: MaxRequestWorkers value must be calculated for your RAM. Formula: (Available RAM - RAM for system and database) / ~50MB. If set too high — the server will start swapping and everything gets worse.
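The formula is easy to script. A hypothetical sizing for a 16 GB server that reserves ~4 GB for the OS and database — the per-worker figure is an assumption; measure your own average with ps or smem:

```shell
# Hypothetical sizing: adjust all three values to your server
TOTAL_MB=16384      # total RAM
RESERVED_MB=4096    # OS + database
PER_WORKER_MB=50    # measured average per Apache worker
awk -v t="$TOTAL_MB" -v r="$RESERVED_MB" -v w="$PER_WORKER_MB" \
    'BEGIN { printf "MaxRequestWorkers %d\n", (t - r) / w }'
# prints: MaxRequestWorkers 245
```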
Connecting PHP-FPM
Forget about mod_php. PHP-FPM allows Apache processes to stay lightweight because PHP runs in separate processes.
# Enable required modules
# a2enmod proxy_fcgi setenvif
# a2enconf php8.3-fpm
<VirtualHost *:443>
ServerName example.com
DocumentRoot /var/www/html
# SSL
SSLEngine on
SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
# PHP via FPM
<FilesMatch "\.php$">
SetHandler "proxy:unix:/run/php/php8.3-fpm.sock|fcgi://localhost"
</FilesMatch>
<Directory /var/www/html>
AllowOverride All
Require all granted
</Directory>
# Prevent PHP execution in uploads
# (Directory sections cannot be nested — this must be a separate block)
<Directory /var/www/html/wp-content/uploads>
<FilesMatch "\.php$">
Require all denied
</FilesMatch>
</Directory>
# Compression
<IfModule mod_deflate.c>
AddOutputFilterByType DEFLATE text/html text/plain text/css
AddOutputFilterByType DEFLATE application/javascript application/json
</IfModule>
# Static file caching
<IfModule mod_expires.c>
ExpiresActive On
ExpiresByType image/jpg "access plus 1 year"
ExpiresByType image/jpeg "access plus 1 year"
ExpiresByType image/png "access plus 1 year"
ExpiresByType image/gif "access plus 1 year"
ExpiresByType text/css "access plus 1 month"
ExpiresByType application/javascript "access plus 1 month"
</IfModule>
</VirtualHost>
Why Not mod_php?
mod_php embeds the PHP interpreter in every Apache process. This means:
- Each process weighs 50-100 MB instead of 5-10 MB
- Even when Apache serves an image — the process still holds PHP in memory
- With 100 concurrent connections — 5-10 GB RAM just for Apache
With PHP-FPM, Apache processes stay lightweight, and PHP runs separately — only when needed.
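The PHP-FPM side has its own sizing knobs. A sketch of a pool config — the values are illustrative, and the same RAM-based math as for MaxRequestWorkers applies:

```ini
; /etc/php/8.3/fpm/pool.d/www.conf (illustrative values)
[www]
user = www-data
group = www-data
listen = /run/php/php8.3-fpm.sock
pm = dynamic
pm.max_children = 50        ; cap = available RAM / avg PHP process size
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 15
pm.max_requests = 500       ; recycle workers to contain memory leaks
```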
HTTP/2, HTTP/3, and QUIC
HTTP/2: The Default Standard
In 2026, HTTP/2 is the baseline; the modern web is impossible without it. What HTTP/2 provides:
- Multiplexing — multiple requests over one connection
- Header compression — HPACK algorithm reduces overhead
- Server Push — defined in the spec, though deprecated in practice (major browsers have dropped support)
- Prioritization — important resources load first
Both servers support HTTP/2, but Nginx does it more efficiently thanks to its architecture.
HTTP/3 (QUIC): The Future Is Here
HTTP/3 is based on the QUIC protocol (UDP instead of TCP). In 2026, this is already stable technology, not an experiment.
HTTP/3 Advantages:
- Faster connection establishment — 0-RTT possible on repeat visits
- Better performance with packet loss — no head-of-line blocking
- Built-in encryption — TLS 1.3 integrated into the protocol
- Connection migration — when switching networks (WiFi → LTE) the connection doesn't break
In Nginx, HTTP/3 is supported through the ngx_http_v3_module and is production-ready.
⚠️ Gotcha: HTTP/3 requires open UDP port 443. Some corporate firewalls still block UDP, so always configure fallback to HTTP/2. This happens automatically via the Alt-Svc header — the browser first connects via HTTP/2, receives the header, and tries HTTP/3.
# Nginx with HTTP/3
server {
# HTTP/3 on QUIC
listen 443 quic reuseport;
# Fallback to HTTP/2
listen 443 ssl;
http2 on;
# HTTP/3 requires TLS 1.3; TLS 1.2 is kept only for the HTTP/2 fallback
ssl_protocols TLSv1.2 TLSv1.3;
# HTTP/3 support announcement
add_header Alt-Svc 'h3=":443"; ma=86400';
# ECDSA certificate (faster than RSA)
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}
SSL/TLS Recommendations 2026
- TLS 1.3 — the only recommended version (TLS 1.2 for legacy)
- ECDSA certificates — faster than RSA, use P-256
- OCSP Stapling — reduces handshake time
- Session Tickets — for faster session resumption
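The list above as a sketch in Nginx terms (certificate paths and resolver addresses are placeholders):

```nginx
ssl_protocols TLSv1.2 TLSv1.3;          # drop TLSv1.2 once legacy clients are gone
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;      # ECDSA cert
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

# OCSP Stapling
ssl_stapling on;
ssl_stapling_verify on;
resolver 1.1.1.1 8.8.8.8 valid=300s;

# Session resumption
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
ssl_session_tickets on;
```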
Security: Common Vulnerabilities and Hardening
Common Configuration Mistakes
Most breaches aren't due to web server code vulnerabilities, but incorrect configuration. Here's what we see regularly:
Apache:
- Options +Indexes — allows viewing directory file lists; attackers can find backup files, configs, temporary files
- Outdated CGI scripts with known vulnerabilities
- Exposed .env, .git, backup files (.sql, .bak)
- Outdated modules with CVEs
Nginx:
- Incorrect location ~ \.php$ — can execute PHP code in uploaded files (e.g., image.php.jpg)
- Missing access denial for hidden files (.git, .env)
- Exposed server version (server_tokens on) — makes CVE searching easier
🚨 Real Example: Breaches through exposed .env files are one of the most common problems. The .env file contains database passwords, API keys, secrets. If it's accessible via web — it's complete compromise.
Security Headers
Minimum security header set for 2026:
# Nginx Security Headers
# Prevents MIME-type sniffing
add_header X-Content-Type-Options "nosniff" always;
# Clickjacking protection
add_header X-Frame-Options "SAMEORIGIN" always;
# HTTPS only (enable after testing!)
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
# XSS protection (for older browsers)
add_header X-XSS-Protection "1; mode=block" always;
# Referrer Policy
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
# Content Security Policy (customize for your site!)
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline';" always;
ℹ️ CSP: Content-Security-Policy is the most complex but most important header. It prevents XSS attacks but requires careful configuration for each specific site. Start with Content-Security-Policy-Report-Only for testing.
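A hedged starting point for that testing phase — the report endpoint here is a placeholder you would implement yourself or point at a collection service:

```nginx
# Report CSP violations without blocking anything yet
add_header Content-Security-Policy-Report-Only
    "default-src 'self'; script-src 'self'; report-uri /csp-report" always;
```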
Nginx Hardening
# Hide server version
server_tokens off;
# Deny access to hidden files
location ~ /\. {
deny all;
access_log off;
log_not_found off;
}
# Deny access to sensitive files
location ~* \.(env|git|sql|bak|old|backup|log|ini|conf)$ {
deny all;
}
# Limit HTTP methods
location / {
limit_except GET HEAD POST {
deny all;
}
}
# Slowloris protection
client_body_timeout 10s;
client_header_timeout 10s;
keepalive_timeout 65s;
send_timeout 10s;
# Request size limits
client_max_body_size 10m;
client_body_buffer_size 128k;
Rate Limiting
For Layer 7 DDoS protection, Nginx has built-in tools — and they work much better than Apache mod_evasive.
# Define rate limiting zones
limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;
limit_req_zone $binary_remote_addr zone=api:10m rate=100r/s;
limit_conn_zone $binary_remote_addr zone=conn:10m;
# Apply to login
location = /wp-login.php {
limit_req zone=login burst=3 nodelay;
limit_conn conn 5;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass unix:/run/php/php8.3-fpm.sock;
}
# Apply to API
location /api/ {
limit_req zone=api burst=50 nodelay;
proxy_pass http://backend;
}
Apache: mod_evasive works worse — for serious protection, it's better to use Cloudflare, Fail2Ban, or iptables/ipset based on log analysis.
What to Choose: Decision Matrix 2026
Instead of abstract advice — a concrete selection table. Find your scenario:
| Scenario | Recommendation | Why |
|---|---|---|
| Landing / SPA | Nginx | Fast static, minimal CPU |
| WordPress | Nginx + FastCGI Cache | Caching, +15-30% speed |
| Legacy: Bitrix, Magento | Apache MPM Event | Needs .htaccess |
| API servers | Nginx Reverse Proxy | SSL, buffers, WebSockets |
| High-load / DDoS | Nginx | Handles thousands of connections |
| Shared hosting | CloudLinux + Apache | Isolation, .htaccess |
| Kubernetes | Nginx Ingress | K8s standard |
| Local dev | Docker + Nginx | Same as production |
| Microservices | Nginx / Envoy | Mesh, load balancing |
When Does Nginx in Front of Apache Make Sense?
If you need .htaccess for developers but want to protect the server from slow connections (Slowloris attacks) — put Nginx as Reverse Proxy in front of Apache. Nginx buffers slow clients and passes only ready requests to Apache.
This configuration also allows:
- Caching static on Nginx (faster)
- SSL termination on Nginx (less load on Apache)
- Rate limiting on Nginx (more effective)
- Keeping .htaccess for specific rules
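A minimal sketch of that setup, assuming Apache has been moved to 127.0.0.1:8080 (the port and paths are illustrative):

```nginx
server {
    listen 443 ssl;              # SSL terminated on Nginx
    server_name example.com;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Static served directly by Nginx
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff2?)$ {
        root /var/www/html;
        expires 1y;
        access_log off;
    }

    # Everything else goes to Apache, which still honors .htaccess
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```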
🚨 "Double Caching" Mistake: If you put Nginx in front of Apache — don't enable caching on both levels. This leads to content update problems and debugging nightmares. Cache only on Nginx.
Common Mistakes: What We See at Clients
These mistakes come up regularly in consulting — don't repeat them:
- ❌ Mistake #1: Apache "Out of Habit"
-
What happens: Client installs Apache on a new project simply because "always did it that way" or "that's what the tutorial said."
Problem: For modern SPA, API, or WordPress without complex .htaccess rules, Apache just gets in the way — consumes more resources with no benefits.
Solution: Before choosing a web server, ask yourself: "Do I really need .htaccess?" If no — Nginx. If unsure — also Nginx, because converting nginx.conf to .htaccess is easier than vice versa.
- ❌ Mistake #2: mod_php Instead of PHP-FPM
-
What happens: Apache with mod_php — each process weighs 50-100 MB, even when serving a static image.
Problem: With 100 concurrent connections — 5-10 GB RAM just for Apache. At 200 — already 10-20 GB. Server quickly runs out of memory.
Solution: PHP-FPM only. Always. mod_php is an anachronism that belongs in 2010. Setup takes 5 minutes, resource savings are dramatic.
- ❌ Mistake #3: MaxRequestWorkers Exhaustion
-
What happens: Site "dies" during traffic spikes, logs show "server reached MaxRequestWorkers setting".
Problem: Apache can't accept new connections because all slots are taken. Even legitimate users can't access the site.
Solution:
- Check if you're using MPM Event (not prefork)
- Check if you're using PHP-FPM (not mod_php)
- Calculate MaxRequestWorkers for your RAM
- Or put Nginx in front of Apache as a buffer
- Or migrate to Nginx completely
- ❌ Mistake #4: 502 Bad Gateway in Nginx
-
What happens: Nginx returns 502, even though PHP-FPM seems to be running.
Causes:
- Wrong socket path in fastcgi_pass
- Socket permission issues (www-data vs nginx)
- PHP-FPM overloaded or crashed
- Incorrect fastcgi_params
Diagnostics:
# Check socket
ls -la /run/php/php8.3-fpm.sock
# Check PHP-FPM status
systemctl status php8.3-fpm
# Check Nginx logs
tail -f /var/log/nginx/error.log
# Check PHP-FPM logs
tail -f /var/log/php8.3-fpm.log
- ❌ Mistake #5: Redirect Loops
-
What happens: Browser shows "ERR_TOO_MANY_REDIRECTS".
Cause: Nginx redirects HTTP→HTTPS, and behind it a load balancer or Cloudflare also redirects — creating an infinite loop.
Solution: Check X-Forwarded-Proto header before redirecting:
# Correct redirect behind load balancer
if ($http_x_forwarded_proto = "http") {
return 301 https://$server_name$request_uri;
}

# OR use map (defined in the http block)
map $http_x_forwarded_proto $redirect_https {
default 0;
http 1;
}
server {
if ($redirect_https) {
return 301 https://$server_name$request_uri;
}
}
- ❌ Mistake #6: Double Caching
-
What happens: Nginx in front of Apache, caching enabled on both levels.
Problem: Content is cached twice, hard to understand where data comes from, invalidation problems, debugging becomes a nightmare.
Solution: Cache only at one level — preferably Nginx (it's more efficient). Disable mod_cache on Apache completely.
Monitoring and Diagnostics
A configured web server is half the battle. You also need to see what's happening with it.
- ➤ Key Nginx Metrics
-
What to monitor first:
- Active connections — how many connections are currently active
- Requests per second (RPS) — server load
- Response codes — especially 4xx and 5xx
- Request time — how long request processing takes
- Upstream response time — how long we wait for backend response
We recommend nginx-vts-exporter + Grafana — this provides detailed statistics per virtual host.
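The raw counters behind those metrics come from the built-in ngx_http_stub_status_module (exporters scrape either this endpoint or the VTS module's). A minimal sketch:

```nginx
# Expose basic counters on a private endpoint
location = /nginx_status {
    stub_status;
    allow 127.0.0.1;     # monitoring host only
    deny all;
    access_log off;
}
```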
- ➤ JSON Log Format
-
We recommend JSON format for logs. This allows instant indexing in ELK (Elasticsearch, Logstash, Kibana) or Grafana Loki.
# Nginx JSON log format
log_format json_combined escape=json '{'
'"time_local":"$time_local",'
'"remote_addr":"$remote_addr",'
'"request":"$request",'
'"status":$status,'
'"body_bytes_sent":$body_bytes_sent,'
'"request_time":$request_time,'
'"upstream_response_time":"$upstream_response_time",'
'"http_referer":"$http_referer",'
'"http_user_agent":"$http_user_agent"'
'}';

access_log /var/log/nginx/access.json json_combined;
- ➤ GoAccess — Quick Log Analysis
-
For quick console analysis, we recommend GoAccess — it draws reports right in the terminal in real time:
# Installation
apt install goaccess
# Real-time in terminal
goaccess /var/log/nginx/access.log -c
# HTML report
goaccess /var/log/nginx/access.log -o report.html --log-format=COMBINED
- ➤ Health Checks
-
For load balancing and monitoring, you need health check endpoints. Active health checks in Nginx Open Source are only available through third-party modules or external scripts; Nginx Plus supports them natively.
The simplest option — monitor 200 response code on /health page:
# Backend should return 200 OK on /health
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
Containers and Kubernetes
In 2026, containerization is the standard for production. Here are best practices for web servers in containers.
- ➤ Docker Best Practices
-
Key rules for web server containerization:
- One process per container — Nginx separate, PHP-FPM separate
- Alpine images — nginx:alpine weighs ~20MB instead of ~140MB
- Logs to stdout/stderr — don't write to files, Docker will collect them
- Non-root user — run Nginx as unprivileged user
# Dockerfile for Nginx
FROM nginx:alpine
# Copy config
COPY nginx.conf /etc/nginx/nginx.conf
COPY conf.d/ /etc/nginx/conf.d/
# Non-root user: ports below 1024 are unavailable without extra capabilities,
# so listen on 8080/8443 (or use the nginx-unprivileged image)
RUN chown -R nginx:nginx /var/cache/nginx
USER nginx
EXPOSE 8080 8443
CMD ["nginx", "-g", "daemon off;"]
- ➤ Kubernetes Ingress Controller
-
Nginx Ingress Controller is the de facto standard in the K8s world. It provides:
- SSL termination
- Load balancing between pods
- Path-based and host-based routing
- Rate limiting
- Custom error pages
# Example Ingress resource
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/limit-rps: "100"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - example.com
      secretName: example-tls
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
🚀 Powerful Servers for Your Project
Hostiserver offers VPS and Dedicated servers with pre-configured Nginx or Apache — depending on your needs.
🖥️ What You Get:
- Optimized web server configuration
- PHP 8.3 + PHP-FPM with tuning
- Free SSL certificates (Let's Encrypt)
- HTTP/3 support
- Configuration for your project
- 24/7 technical support
💬 Not sure what to choose?
💬 Contact us — we'll help with configuration!
Frequently Asked Questions
- Can I use Nginx and Apache together?
-
Yes, it's a popular configuration. Nginx works as Reverse Proxy (accepts connections, serves static, buffers slow clients), while Apache handles dynamic content via .htaccess.
This makes sense if your project critically depends on .htaccess (e.g., Bitrix), but you want to protect against Slowloris and optimize static file delivery.
Important: don't enable caching on both levels — cache only on Nginx.
- Which web server is better for WordPress?
-
Nginx + PHP-FPM + FastCGI Cache — definitely. 15-30% faster in tests, more stable under load.
Apache makes sense only if you use plugins that critically depend on .htaccess (some security plugins, specific redirects). But such cases are becoming fewer — most plugins have adapted to Nginx.
For WooCommerce with large catalogs, Nginx is mandatory — configure fastcgi_buffer_size for large product pages.
- How to migrate from Apache to Nginx?
-
The main pain point is rewriting .htaccess rules to Nginx format. Automatic converters work poorly, most rules have to be rewritten manually.
Migration Plan:
- Collect all .htaccess files from the project
- Analyze rules (RewriteRule, RedirectMatch, etc.)
- Write equivalent rules for nginx.conf
- Deploy staging server with Nginx
- Test all URLs, redirects, forms
- Check SEO redirects (301) separately
- Switch DNS or load balancer
Typical time: from a few hours (simple site) to several days (complex project with many .htaccess rules).
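For step 3, a hypothetical before/after pair shows the flavor of the work (the URLs and rules are made up for illustration):

```nginx
# .htaccess original:
#   RewriteRule ^blog/([0-9]+)/?$ /index.php?post=$1 [L]
#   Redirect 301 /old-page /new-page

# nginx.conf equivalent:
rewrite ^/blog/([0-9]+)/?$ /index.php?post=$1 last;

location = /old-page {
    return 301 /new-page;
}
```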
- What is HTTP/3 and do I need it?
-
HTTP/3 is a new version of the HTTP protocol based on QUIC (UDP instead of TCP). In 2026, it's already stable technology.
Advantages:
- Faster connection establishment (0-RTT)
- Better performance with packet loss
- Connection migration when switching networks
When needed: If your audience is mobile users or people with unstable internet (poor WiFi, LTE with interruptions) — HTTP/3 will provide noticeable improvement.
Caveat: Requires open UDP port 443. Some corporate networks block UDP — always configure fallback to HTTP/2.
- Why does my Apache crash under load?
-
Most likely cause — MaxRequestWorkers limit exhaustion. When all slots are taken, Apache can't accept new connections.
What to check:
# Search in logs
grep "MaxRequestWorkers" /var/log/apache2/error.log
# Current load
apachectl status
# RAM consumption
ps aux | grep apache | awk '{sum+=$6} END {print sum/1024 " MB"}'
# Process count
ps aux | grep apache | wc -l

Solution:
- Switch to MPM Event + PHP-FPM (if not already)
- Calculate MaxRequestWorkers for your RAM
- Put Nginx in front of Apache as a buffer
- Or migrate to Nginx completely
- How to set up web server monitoring?
-
Minimum metric set:
- Active connections
- Requests per second (RPS)
- Response codes (4xx, 5xx)
- Response time / Upstream time
- CPU and RAM usage
Recommended tools:
- nginx-vts-exporter + Grafana — detailed per-host statistics
- GoAccess — quick log analysis in terminal
- ELK Stack / Grafana Loki — centralized logging
- Prometheus + Node Exporter — system metrics
We recommend JSON format for logs — this allows instant indexing without parsing.
- What automation is needed for SSL certificates?
-
In 2026, the standard is Certbot (Let's Encrypt) with automatic renewal and config reload.
# Installation
apt install certbot python3-certbot-nginx
# Get certificate for Nginx
certbot --nginx -d example.com -d www.example.com
# Test automatic renewal (cron job added automatically)
certbot renew --dry-run

Certbot automatically renews certificates 30 days before expiration and reloads Nginx/Apache.