HostiServer
2026-03-23 12:40:00
Node.js on VPS: Complete Guide from Setup to Production in 2026
Why Node.js on a VPS — and why not shared hosting
The problem is that most guides on the internet end at node app.js — and that's it, as if the application is production-ready. In reality, there's a huge gap between "ran it on localhost" and "the app runs stably under load": a process manager, reverse proxy, SSL, firewall, monitoring, memory leak protection.
In this guide — the complete path from a clean Ubuntu server to a production-ready Node.js setup. Every step with real configurations and explanations of why exactly this way and not another. All material is based on recommendations from Hostiserver engineers who work with Node.js client projects every day.
Step 1: Preparing the VPS
Before installing Node.js — basic security setup. This takes 10 minutes but will save you from problems later. A fresh Ubuntu VPS without these steps is like an apartment with the door wide open: sooner or later someone will walk in.
Regarding VPS configuration choice: for a small API or bot, 1 vCPU and 2 GB RAM is sufficient. For a mid-sized project with PM2 Cluster Mode, Redis, and Nginx — we recommend at least 2 vCPU and 4 GB. An NVMe disk is essential if you use Redis or have intensive logging — the difference from HDD in random read/write is enormous.
System update
The first thing we do on any new server — update packages. An outdated OpenSSL or kernel version means known vulnerabilities that are exploited by automated scanners:
sudo apt update && sudo apt upgrade -y
Creating a separate user
Never run Node.js as root. Create a separate user with limited privileges:
sudo adduser nodeapp
sudo usermod -aG sudo nodeapp
su - nodeapp
Firewall setup (UFW)
Ubuntu has a built-in UFW (Uncomplicated Firewall), but it's disabled by default. Enable it and open only three ports — SSH for management, 80 for HTTP, and 443 for HTTPS:
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
After activation, check the status with sudo ufw status — only these three rules should be listed. Everything else is blocked automatically.
⚠️ Important: Do not open port 3000 (or any other application port) externally. Nginx will proxy traffic from 80/443 to your app. Direct access to the Node.js port is a security hole.
SSH keys instead of passwords — mandatory. Password authentication is one of the most common causes of server breaches via brute-force. If you haven't set up SSH keys yet — do it first, even before installing Node.js. Generate a key pair on your local machine (ssh-keygen -t ed25519), copy the public key to the server (ssh-copy-id nodeapp@your-server-ip), and disable password authentication in /etc/ssh/sshd_config.
Step 2: Installing Node.js via NVM
Forget apt install nodejs — this command will give you an outdated version from the Ubuntu repository and create conflicts if the server runs more than one project. The issue isn't just the version: updating via apt can break global npm packages, and rolling back without removing everything is difficult.
The proper way is NVM (Node Version Manager). It installs into the user's home directory, doesn't require sudo, and allows instant switching between versions for different projects. One VPS — two projects — one on Node 20, another on Node 22 — zero conflicts.
Installing NVM
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
source ~/.bashrc
Installing Node.js LTS
nvm install 22
nvm use 22
nvm alias default 22
In 2026, the gold standard is LTS 22.x. This version provides the balance between new features (updated V8 engine, native Fetch API) and production stability.
Verification
node -v # v22.x.x
npm -v # 10.x.x
ℹ️ Alternative: Fast Node Manager (fnm) — a faster alternative to NVM, written in Rust. Both options work for production. The key point — don't use the system package manager.
Step 3: Deploying the application
Server is ready, Node.js is installed — time to deploy the actual app. The simplest method is via Git. If you don't have a repository yet — now is the time to create one, because deploying via FTP or scp individual files in 2026 is a path to version chaos.
cd /home/nodeapp
git clone https://github.com/your-user/your-app.git
cd your-app
npm install --omit=dev
The --omit=dev flag (the modern replacement for the deprecated --production flag) skips devDependencies: testing frameworks, linters, and build tools aren't needed on a production server. This saves space and reduces the number of potential vulnerabilities.
If you have multiple projects on the VPS — each in its own directory with its own ecosystem.config.js for PM2 and a separate Nginx config. For example: /home/nodeapp/api on port 3000 and /home/nodeapp/admin on port 3001, each behind its own Nginx server block.
Environment file (.env)
Never store passwords and keys in code. Create an .env file:
NODE_ENV=production
PORT=3000
DB_HOST=localhost
DB_PASSWORD=your_secure_password
SESSION_SECRET=random_string_here
🚨 Warning: Add .env to .gitignore. If this file ends up in a Git repository — consider all passwords compromised.
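In the application, these values are read from process.env. On Node.js 20.6+ the file can be loaded natively with node --env-file=.env app.js, so the dotenv package is optional. Below is a minimal loader that fails fast when a secret is missing — a sketch, not a prescribed API; the variable names follow the sample .env above:

```javascript
// config.js — reads settings from process.env and fails fast on missing ones.
// Load the .env file natively on Node 20.6+: node --env-file=.env app.js
function loadConfig(env = process.env) {
  const required = ['DB_PASSWORD', 'SESSION_SECRET'];
  const missing = required.filter((key) => !env[key]);
  if (missing.length > 0) {
    // Crash at startup, not at 3 AM when the first query needs the password
    throw new Error(`Missing required env vars: ${missing.join(', ')}`);
  }
  return {
    port: Number(env.PORT) || 3000,
    dbHost: env.DB_HOST || 'localhost',
    dbPassword: env.DB_PASSWORD,
    sessionSecret: env.SESSION_SECRET,
  };
}

module.exports = { loadConfig };
```

Failing fast at startup is deliberate: a missing secret should stop the deploy immediately instead of surfacing later as a cryptic runtime error.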
Test run
node app.js
If the app starts without errors and responds to curl http://localhost:3000 — move on to PM2.
Step 4: PM2 for production
Running Node.js via node app.js in production is like driving without a seatbelt. The process will crash on the first unhandled error, and nobody will restart it.
PM2 is the de facto standard for managing Node.js processes. It restarts the app on crash, distributes load across cores, and survives server reboots. Without PM2, any unhandled exception will kill your process — and only an administrator can restore it manually, if they notice the problem.
Installation
npm install -g pm2
ecosystem.config.js
Instead of launching from the command line — create a configuration file:
module.exports = {
  apps: [{
    name: 'my-app',
    script: './app.js',
    instances: 'max',
    exec_mode: 'cluster',
    max_memory_restart: '1G',
    exp_backoff_restart_delay: 100,
    env_production: {
      NODE_ENV: 'production',
      PORT: 3000
    }
  }]
};
What matters here:
- instances: 'max' — PM2 will automatically create one process per CPU core. On a 4-core VPS — 4 processes, each handling requests in parallel.
- exec_mode: 'cluster' — mandatory. Without cluster mode, Node.js uses only one core, the rest sit idle.
- max_memory_restart: '1G' — protection against memory leaks. If a process consumes more than 1 GB — PM2 quietly restarts it without downtime (other processes continue working).
- exp_backoff_restart_delay — if the app crashes in a loop, the delay between restarts increases gradually. Without this, PM2 could create an infinite restart cycle.
Launch and autostart
pm2 start ecosystem.config.js --env production
pm2 save
pm2 startup
Now the app will survive server reboots and start automatically.
Useful PM2 commands
pm2 list # List processes
pm2 logs my-app # Real-time logs
pm2 monit # CPU/RAM monitoring
pm2 reload my-app # Zero-downtime restart
Step 5: Nginx as reverse proxy + SSL
Node.js should not listen on port 80 or 443 directly. This is one of the most common mistakes — "why do I need Nginx if Node.js can handle HTTP itself?" The answer: Nginx sits in front of Node.js and handles tasks that Node.js does poorly or not at all.
- SSL termination: Nginx handles encryption far more efficiently.
- Protection from Slowloris and other slow-connection attacks: each such connection ties up a socket and memory in Node.js, while Nginx absorbs them cheaply.
- Static files: Nginx serves them directly from disk without involving Node.js.
- Load balancing: if PM2 launched 4 workers, Nginx distributes requests among them.
Installing Nginx and Certbot
Nginx is the most popular reverse proxy for Node.js. Certbot issues free SSL certificates from Let's Encrypt; each certificate is valid for 90 days, and Certbot sets up automatic renewal before expiry:
sudo apt install nginx certbot python3-certbot-nginx -y
sudo certbot --nginx -d example.com
Certbot will automatically add SSL lines to the Nginx configuration. You can verify that auto-renewal works with sudo certbot renew --dry-run.
Nginx configuration
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;

        # WebSocket support
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_cache_bypass $http_upgrade;

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }
}
💡 Tip: Pay attention to the proxy_set_header Upgrade and Connection 'upgrade' lines — without them, WebSocket connections (chats, notifications, real-time dashboards) won't work through Nginx.
Check and restart
sudo nginx -t
sudo systemctl reload nginx
Don't forget to also add a block for HTTP → HTTPS redirect:
server {
    listen 80;
    server_name example.com;
    return 301 https://$server_name$request_uri;
}
Now your app is accessible via HTTPS, and port 3000 is blocked by the firewall from external access. Replace example.com with your actual domain in all configs.
If your app also serves static files (images, CSS, JavaScript for the frontend) in addition to the API — add a separate location block so Nginx serves them directly without loading Node.js:
location /static/ {
    alias /home/nodeapp/your-app/public/;
    expires 30d;
    add_header Cache-Control "public, immutable";
}
This significantly reduces the load on Node.js processes — Nginx serves static content from disk tens of times more efficiently.
Node.js security in production
Firewall and SSH keys are the baseline. For Node.js, there are additional attack vectors that are often overlooked. The npm ecosystem consists of thousands of dependencies, and each one is a potential entry point. And default HTTP headers reveal information about your stack that helps attackers.
Helmet.js
A set of middleware that sets secure HTTP headers. Without Helmet, your Express application responds with default headers that expose your tech stack (e.g., X-Powered-By: Express) and don't protect against basic attacks. Helmet closes a dozen common vulnerabilities with a single line of code — XSS, clickjacking, MIME sniffing:
const helmet = require('helmet');
app.use(helmet());
Rate Limiting
Limits the number of requests from a single IP address within a time period. Without rate limiting, an automated script can send thousands of requests per second — brute-force passwords, scrape content, or simply overwhelm your API with excessive load:
const rateLimit = require('express-rate-limit');
app.use(rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100 // maximum 100 requests
}));
For authorization endpoints, set a separate, stricter limit — for example, 5 attempts per 15 minutes. For more robust protection — add rate limiting at the Nginx level as well using the limit_req directive. Two layers of restrictions are more reliable than one: Nginx rejects excess requests before they even reach Node.js.
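To make the mechanics concrete, here is a dependency-free sketch of the fixed-window counting that express-rate-limit performs under the hood. The real package adds response headers, pluggable stores, and cleanup; the names here are illustrative:

```javascript
// Minimal fixed-window rate limiter, for illustration only;
// in production, express-rate-limit (shown above) is the usual choice.
function createRateLimiter({ windowMs, max, now = Date.now }) {
  const hits = new Map(); // ip -> { count, windowStart }
  return function isAllowed(ip) {
    const t = now();
    const entry = hits.get(ip);
    if (!entry || t - entry.windowStart >= windowMs) {
      hits.set(ip, { count: 1, windowStart: t }); // new window for this IP
      return true;
    }
    entry.count += 1;
    return entry.count <= max;
  };
}

// Stricter limit for login attempts: 5 per 15 minutes per IP.
const loginAllowed = createRateLimiter({ windowMs: 15 * 60 * 1000, max: 5 });
```

The injectable now() clock is there so the window logic can be tested without waiting 15 real minutes.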
npm audit
Regularly check for vulnerabilities in dependencies:
npm audit
npm audit fix
Make this part of your CI/CD or at least run it before every deploy. The average Node.js project has hundreds of transitive dependencies — libraries that pull in other libraries, which pull in yet more. A single vulnerable library somewhere on the third level of nesting can compromise the entire server. In recent years, there have been several high-profile incidents with npm packages stealing environment variables — meaning database passwords and API keys.
⚠️ Reminder: A Node.js process should never run as root. A separate user with limited privileges is not a recommendation — it's a requirement for production.
Performance optimization and monitoring
The app is running, Nginx is proxying traffic, PM2 is watching over processes — the basic setup is complete. Now it's time to make everything work not just correctly, but fast. Three tools that give the biggest performance boost: Redis for caching, Gzip for response compression, and proper monitoring to see where bottlenecks exist.
Redis for caching
If your app frequently hits the database with identical queries — Redis changes the picture dramatically. In the eCommerce API case from the introduction, implementing Redis caching for heavy SQL queries to the product catalog was one of three factors that reduced response time from 850 to 120 ms.
Installation on Ubuntu:
sudo apt install redis-server -y
sudo systemctl enable redis-server
The principle is simple: before querying the database — check if the result is in Redis. If yes — return from cache (microseconds instead of milliseconds). If not — make the query, save the result to cache with a TTL:
const Redis = require('ioredis');
const redis = new Redis(); // connects to localhost:6379 by default

async function getProducts(categoryId) {
  const cacheKey = `products:${categoryId}`;

  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);

  // Cache miss: query the database (`db` is your SQL client)
  const products = await db.query(
    'SELECT * FROM products WHERE category_id = ?',
    [categoryId]
  );

  // Store for 5 minutes ('EX' sets the TTL in seconds)
  await redis.set(cacheKey, JSON.stringify(products), 'EX', 300);
  return products;
}
TTL (Time To Live) depends on the data type: product catalog — 5–15 minutes, configurations — 1 hour, user sessions — up to 24 hours. The golden rule: cache what's read often and changes rarely.
One nuance that's often forgotten — cache invalidation. When an administrator updates a product price, the old cache must be deleted. The simplest approach — when data changes, delete the corresponding key: await redis.del('products:' + categoryId). The next request will go directly to the database and refresh the cache with fresh data.
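The invalidation step above can be sketched as a small helper. Here db and cache are assumed interfaces (cache mirrors ioredis's del), and updateProductPrice is an illustrative name, not part of any library:

```javascript
// Invalidation sketch: when a product changes, delete the cached list for
// its category so the next read repopulates the cache from the database.
// `db` and `cache` are assumed interfaces; `cache` mirrors ioredis's `del`.
function productCacheKey(categoryId) {
  return `products:${categoryId}`;
}

async function updateProductPrice(db, cache, productId, categoryId, newPrice) {
  await db.query(
    'UPDATE products SET price = ? WHERE id = ?',
    [newPrice, productId]
  );
  // Drop the stale entry; getProducts() will rebuild it on the next request.
  await cache.del(productCacheKey(categoryId));
}
```

Keeping the key format in one function (productCacheKey) matters: if the read path and the invalidation path build keys independently, a single typo leaves stale data in the cache forever.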
Gzip compression via Nginx
If your API returns JSON — and most Node.js APIs do exactly that — Gzip compression reduces traffic volume by 5–10x. This is especially noticeable on mobile connections and with large data arrays. Add to the http section of the main Nginx config (/etc/nginx/nginx.conf):
gzip on;
gzip_types application/json text/plain application/javascript;
gzip_min_length 1000;
The gzip_min_length parameter means Nginx won't compress responses smaller than 1000 bytes — for small responses, compression overhead exceeds the benefit. Clients won't notice the difference — browsers and HTTP libraries automatically decompress Gzip.
Zero-downtime update deployment
When you need to update code in production — don't stop the entire service. PM2 in cluster mode supports the reload command, which restarts processes one by one. While one worker restarts with the new code, others continue handling requests:
cd /home/nodeapp/your-app
git pull origin main
npm install --omit=dev
pm2 reload my-app
Users won't notice a thing. For larger projects, it's worth setting up a CI/CD pipeline via GitHub Actions or GitLab CI — then deployment happens automatically on push to the main branch. Basic workflow: push → automatic npm audit → npm test → SSH to VPS → git pull → npm install → pm2 reload. The entire cycle takes 30–60 seconds.
Monitoring: Prometheus + Grafana
The app is running, but is it running well? Without monitoring, you'll learn about problems only when clients start complaining — or when PM2 restarts the process for the fifth time in an hour. Event Loop Lag gradually increases, memory leaks, database connections aren't being closed — all invisible without observability tools.
The best stack for production Node.js monitoring is Prometheus + Grafana. Prometheus collects metrics from your app through a dedicated endpoint (use the prom-client package), and Grafana visualizes them in dashboards with alerts to email or Slack. Key metrics:
| Metric | What it shows | When to raise alarm |
|---|---|---|
| Event Loop Lag | Event processing delay — the most important metric for Node.js | > 100 ms |
| Process CPU | CPU load per worker | > 80% consistently |
| Heap Memory Used | Memory consumption (look for a growth trend — that's a memory leak) | Constant growth without drops |
| Active Handles | Open connections, files, timers | Sudden increase |
ℹ️ Budget alternative: If Prometheus + Grafana is too much for your project, PM2 Plus (the PM2 team's paid monitoring service) provides basic monitoring with a web dashboard without additional setup.
Production checklist: verify before launch
Everything is configured, the app is running — but before letting real users in, go through this list. Every skipped item is a potential incident at 3 AM. Better to spend 15 minutes checking now than an hour recovering later.
💡 Node.js production checklist:
☐ Node.js installed via NVM (not apt), LTS version 22.x
☐ App launched via PM2 in cluster mode, not via node app.js
☐ PM2 configured for autostart (pm2 save + pm2 startup)
☐ Nginx reverse proxy in front of Node.js with SSL via Let's Encrypt
☐ App port (3000) closed in firewall, only 80, 443, SSH open
☐ Process runs under a separate user, not root
☐ Passwords, keys, and tokens in .env file, .env in .gitignore
☐ Helmet.js enabled for secure HTTP headers
☐ Rate limiting configured (at the app and/or Nginx level)
☐ npm audit shows no critical vulnerabilities
☐ Logging works — pm2 logs shows what's happening
☐ Monitoring is set up — you know when something goes wrong
If all items are checked, your Node.js app is ready for production load. If not, close the gaps now rather than during an outage.
5 mistakes that break Node.js in production
These mistakes repeat from project to project. Each looks trivial during development — but in production under load, they turn into serious problems with downtime, data leaks, or performance degradation.
| Mistake | Consequences | Solution |
|---|---|---|
| Running as root | A vulnerability in the app = full access to the server | Separate user with limited privileges |
| node app.js without PM2 | One error — app is dead, nobody will restart it | PM2 with cluster mode and auto-restart |
| Node.js listening on port 80 directly | No SSL, no Slowloris protection, no load balancing | Nginx reverse proxy in front of Node.js |
| Passwords in code or Git | Anyone with repository access sees credentials | .env file + .gitignore |
| Installing via apt install | Outdated version, conflicts between projects, difficult updates | NVM or fnm for version management |
All these mistakes share one thing: they're invisible during development and testing. Problems start only when the app hits real load or becomes a target for automated scanners — and that's a matter of days, not months.
🚀 Ready to launch Node.js on a reliable VPS?
Your application's performance starts with the right server. NVMe disks, guaranteed resources, support that understands Node.js.
💻 Cloud (VPS) Hosting
- From $19.95/mo — Start small, scale instantly
- KVM virtualization — Guaranteed resources, no overselling
- NVMe storage — Fast performance for Node.js
- Root access — Full control: NVM, PM2, Nginx — all yours
- 24/7 support — <10 min response
🖥️ Dedicated Servers
- From $200/mo — For high-load Node.js APIs
- Custom configurations — Intel or AMD, up to 128 cores
- Multiple locations — EU + USA
- 99.9% uptime — SLA guarantee
- DDoS protection — Included
- Free migration — We'll help move your project
💬 Not sure which option you need?
💬 Contact us and we'll help you figure it out!
Frequently Asked Questions
- Which Node.js version to choose for production in 2026?
LTS 22.x — the gold standard. This version receives security patches until April 2027 and supports all modern features: updated V8 engine, native Fetch API, improved ESM. Install via NVM, not via apt.
- How much RAM does Node.js need on a VPS?
Minimum — 2 GB for a small app with PM2 and Nginx. For an API with Redis caching and multiple workers — at least 4 GB. A real eCommerce API runs stably on 4 GB with 4 PM2 processes.
- PM2 or systemd — which is better for Node.js?
PM2 — for Node.js it's more convenient. Cluster mode with load balancing, zero-downtime reload, built-in monitoring, ecosystem.config.js — all out of the box. Systemd works for simple services, but for Node.js, PM2 gives you more control.
- Is Nginx mandatory if Node.js can listen on port 80 itself?
Yes, it is. Nginx handles SSL termination, compression, static files, and protects against slow connections (Slowloris). Node.js on port 80 without Nginx is production without protection.
- How to update Node.js in production without downtime?
Via NVM: install the new version (nvm install 22.x), check compatibility, then run pm2 reload all. PM2 in cluster mode will restart processes one by one, so not a single request will be lost. After switching versions, also run pm2 update so the PM2 daemon itself picks up the new Node.js binary.
- Is Redis needed for every Node.js project?
No, not for every one. Redis provides maximum benefit when the app frequently makes identical database queries — product catalogs, category lists, configurations. For a simple landing page or an API with unique queries on every call, Redis won't make a noticeable difference.