⏱️ Reading time: ~10 minutes | 📅 Published: December 23, 2025
A slow website is not a technical issue. Not "something to fix someday." It's a hole through which money leaks — every day, while you're reading this text.
A customer clicks on your ad. You paid for that click. The site loads for one second, two, three... On the fourth, they close the tab. Money gone, customer gone, and you never even knew about it.
And this isn't an isolated case — it's a pattern. Every extra second of loading time cuts off part of your audience. Google pushes you down in search results. Conversions drop. And you think the problem is with your ads, your product, your pricing — anything but that white screen your customers are staring at.
Good news: this can be fixed. But first, you need a diagnosis.
Before optimizing anything, you need to measure. "I feel like the site is slow" isn't a diagnosis. You need concrete numbers.
Google PageSpeed Insights — free and most important. Shows a score from 0 to 100 for mobile and desktop versions separately. If you see the red zone (0-49) — there are serious problems. Orange (50-89) — room for improvement. Green (90-100) — you're good.
GTmetrix — more detailed report. Shows a loading "waterfall": what loads, in what order, how long each element takes. Useful for finding specific bottlenecks.
WebPageTest — for deep analysis. Allows testing from different locations and devices. If your audience is in Europe but your server is in the USA — you'll see it here.
Google uses Mobile-First Indexing. This means it looks at the mobile version of your site for ranking, even if the user is searching from a computer.
For example: PageSpeed shows 85 points for desktop and 35 for mobile. The site owner only checked desktop and thought everything was fine. Meanwhile, Google was seeing a slow mobile site the whole time.
Always check both versions. If you have to choose a priority — mobile is more important.
And another thing: don't test your site on your computer with fast internet. Chrome DevTools lets you simulate slow 3G — that's how many mobile users actually see your site.
In 2021, Google introduced Core Web Vitals as an official ranking factor. These are three specific metrics that measure real user experience. Not abstract "speed," but concrete things.
LCP (Largest Contentful Paint). What it is: the time it takes for the largest visible element on the screen to load. Usually this is the main image, video, or a large block of text.
Target value: under 2.5 seconds — good. 2.5-4 seconds — needs improvement. Over 4 seconds — poor.
Typical cause of problems: heavy hero image without optimization, slow server, render-blocking resources.
INP (Interaction to Next Paint). What it is: the time from a user's click or tap to a visual response from the site. It replaced the old FID metric in March 2024.
Target value: under 200 ms — good. 200-500 ms — needs improvement. Over 500 ms — poor.
Typical cause of problems: heavy JavaScript, especially third-party scripts (analytics, chatbots, advertising pixels). User clicks a button — and nothing happens for half a second.
CLS (Cumulative Layout Shift). What it is: how much content "jumps" during loading. You know that feeling when you want to click a button, and it suddenly shifts down because an ad loaded?
Target value: under 0.1 — good. 0.1-0.25 — needs improvement. Over 0.25 — poor.
Typical cause of problems: images without specified dimensions, fonts loading with delay, dynamic content (ads, banners).
| Metric | Good | Needs Work | Poor |
|---|---|---|---|
| LCP | ≤ 2.5s | 2.5-4s | > 4s |
| INP | ≤ 200ms | 200-500ms | > 500ms |
| CLS | ≤ 0.1 | 0.1-0.25 | > 0.25 |
Heavy images are the most common cause of slow websites. And the easiest to fix.
Typical scenario: a designer uploaded a hero image at 4000×3000 pixels, 5 MB. On the site it displays at 1200×800. Those 5 MB still download completely. Multiply by 10-15 images per page — and you get 50+ MB that the user has to download.
Compress images. Tools like TinyPNG or Squoosh reduce size by 60-80% without noticeable quality loss. It's free and takes a minute.
Use modern formats. WebP is 25-35% smaller than JPEG at the same quality. AVIF is even smaller, but not supported by all browsers. WordPress, Shopify, and most modern CMSs can automatically convert to WebP.
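If you manage markup by hand rather than through a CMS, the standard way to serve modern formats with a fallback is the picture element — the browser takes the first format it supports. A minimal sketch (file names like photo.avif are placeholders for your own assets):

```html
<!-- The browser picks AVIF if supported, otherwise WebP, otherwise JPEG -->
<picture>
  <source srcset="photo.avif" type="image/avif">
  <source srcset="photo.webp" type="image/webp">
  <img src="photo.jpg" width="800" height="600" alt="Image description">
</picture>
```

Older browsers that don't understand picture at all simply render the inner img tag, so nothing breaks.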
Specify dimensions. Always add width and height attributes to the <img> tag. This prevents content "jumping" (CLS) during loading.
```html
<!-- ❌ Bad -->
<img src="photo.jpg">

<!-- ✅ Good -->
<img src="photo.webp" width="800" height="600" alt="Image description">
```
Enable lazy loading. Images below the fold don't need to load immediately. The loading="lazy" attribute delays their loading until the user scrolls to them.
```html
<img src="photo.webp" loading="lazy" width="800" height="600" alt="Description">
```
Responsive images. Different screen sizes need different image sizes. Why should a mobile phone download a 2000-pixel-wide image?
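In HTML this is handled by the srcset and sizes attributes: you list several sizes of the same image, and the browser downloads the smallest one that fits the layout. A sketch (the -400/-800/-1600 file names are hypothetical — generate whatever sizes your layout needs):

```html
<!-- The browser picks the smallest file that fits the current viewport -->
<img src="photo-800.webp"
     srcset="photo-400.webp 400w, photo-800.webp 800w, photo-1600.webp 1600w"
     sizes="(max-width: 600px) 100vw, 800px"
     width="800" height="600" alt="Image description">
```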
The quickest way to get results — just compress the images on your homepage through TinyPNG. 10 minutes of work can speed up your site by 30-50%.
Modern websites are overloaded with code. A typical WordPress theme drags along 20+ CSS files and just as many JavaScript files. Most of this code isn't used on any given page.
This is the sneakiest problem. The browser won't start displaying the page until it downloads and processes CSS and JS located in the <head>. The user sees a white screen and waits.
PageSpeed often shows the warning "Eliminate render-blocking resources." Here's what to do about it:
For JavaScript: add the defer or async attribute. This allows the browser to download the script in parallel without blocking rendering.
```html
<!-- ❌ Blocks rendering -->
<script src="analytics.js"></script>

<!-- ✅ Doesn't block -->
<script src="analytics.js" defer></script>
```
For CSS: extract critical CSS (styles for the first screen) inline into the <head>, and load the rest asynchronously.
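A common way to do this is the preload trick: inline the first-screen styles, then load the full stylesheet with rel="preload" and flip it to a stylesheet once it arrives. A sketch (styles.css is a placeholder; the inlined rules depend on your page):

```html
<head>
  <!-- Critical styles for the first screen, inlined directly -->
  <style>
    header { margin: 0; /* ...only the rules needed before first paint... */ }
  </style>
  <!-- The rest of the CSS loads without blocking rendering -->
  <link rel="preload" href="styles.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <!-- Fallback for users with JavaScript disabled -->
  <noscript><link rel="stylesheet" href="styles.css"></noscript>
</head>
```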
Minification removes whitespace, comments, and shortens variable names in code. The file becomes unreadable for humans, but the browser doesn't care. Size decreases by 10-30%.
Most CMSs have plugins for automatic minification. For WordPress — Autoptimize, WP Rocket, LiteSpeed Cache.
Chrome DevTools (Coverage tab) shows how much code on the page is actually used. Often it's 20-30% of what's loaded. The rest is dead weight.
The fix depends on the situation: removing unused plugins and theme features, loading scripts only on the pages that actually need them, or splitting large bundles so each page gets only its own code.
But be careful: aggressive minification and removal of "unnecessary" code can break the site. Always test on a staging environment before production.
Every file on the page is a separate HTTP request to the server. CSS, JavaScript, images, fonts, icons... On a typical site there can be 100+. Each request has latency, and they add up.
Here's a typical setup on a marketing website: Google Tag Manager, an analytics script, a chat widget, and several advertising pixels.
Each of these scripts pulls in more scripts. GTM can load dozens of additional tags. And they all execute in the user's browser, consuming resources.
Result: INP metric flies into the red zone. User clicks — and the site "thinks" for half a second because JavaScript is busy with analytics.
What to do: audit your third-party scripts, remove the ones nobody actually uses, and load the rest with defer or async so they don't block interaction.
Google Fonts are convenient, but each font is a request to an external server. And if you've connected 3 fonts with 4 weights each — that's 12 requests.
Solution: limit yourself to 1-2 fonts and only the weights you actually use, self-host the files where possible, and set font-display: swap so text is visible before the font loads.

Ten small CSS files are better combined into one. This reduces the number of requests. But with HTTP/2 this is no longer as critical — the protocol allows loading many files in parallel.
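Self-hosting a font with font-display: swap looks like this — one request to your own server, and text stays readable while the font downloads. A sketch (the font name and file path are placeholders):

```html
<style>
  @font-face {
    font-family: "Inter";
    /* Hypothetical path — put the .woff2 file on your own server */
    src: url("/fonts/inter-400.woff2") format("woff2");
    font-weight: 400;
    /* Show fallback text immediately, swap in the font when it arrives */
    font-display: swap;
  }
  body { font-family: "Inter", sans-serif; }
</style>
```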
Without caching, the browser downloads all files fresh on every visit. The site logo, CSS, JavaScript — everything loads again and again, even though it hasn't changed.
The server can tell the browser: "This file won't change for a year, save it locally." On the next visit, the file loads from cache instantly.
Configured through HTTP headers Cache-Control and Expires. For static files (CSS, JS, images), caching for a year is recommended.
```nginx
# Example for Nginx
location ~* \.(css|js|jpg|jpeg|png|gif|ico|webp|woff2)$ {
    expires 1y;
    add_header Cache-Control "public, immutable";
}
```
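If your hosting runs Apache rather than Nginx, the equivalent goes into .htaccess — a sketch, assuming mod_expires is enabled on the server:

```apache
# Equivalent for Apache (.htaccess), requires mod_expires
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType image/webp "access plus 1 year"
  ExpiresByType image/jpeg "access plus 1 year"
  ExpiresByType text/css "access plus 1 year"
  ExpiresByType application/javascript "access plus 1 year"
</IfModule>
```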
⚠️ For beginners: If you're not sure where server configuration files are located (nginx.conf, .htaccess) — better contact your hosting support. Incorrect editing can bring down the site.
Dynamic pages (PHP, Python, Node.js) are generated on each request. The server executes code, makes database queries, assembles HTML. This takes time.
Server cache stores ready HTML. The next user gets a saved copy without code execution.
Tools: for WordPress — WP Rocket, W3 Total Cache, or LiteSpeed Cache; at the server level — Varnish or Nginx's built-in FastCGI cache.
For most content sites (blogs, catalogs, landing pages), full page caching is the most effective method. The page is generated once, then served as static HTML.
But remember: caching doesn't work for pages with personalized content (shopping cart, user account). Those need a different approach — fragment caching or AJAX loading of dynamic parts.
You can perfectly optimize the frontend, but if the server takes 2 seconds to respond — the site will still be slow.
TTFB (Time to First Byte) is the time from request to the first byte of the response. It shows how fast the server reacts. Ideal — under 200 ms. If over 600 ms — there's a problem.
High TTFB can mean: an overloaded shared server, an outdated PHP version, slow database queries, or missing server-side caching.
Text files (HTML, CSS, JS, JSON) can be compressed before sending. Gzip reduces size by 70-80%. Brotli is even more effective for text.
Check if compression is enabled. In Chrome DevTools (Network → select file → Headers) look for Content-Encoding: gzip or br.
```nginx
# Enabling Gzip in Nginx
gzip on;
gzip_types text/plain text/css application/json application/javascript text/xml;
gzip_min_length 1000;
```
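Brotli can be enabled the same way — a sketch, assuming the ngx_brotli module is installed (it is not built into stock Nginx):

```nginx
# Enabling Brotli in Nginx (requires the ngx_brotli module)
brotli on;
brotli_types text/plain text/css application/json application/javascript text/xml;
brotli_comp_level 6;
```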
The performance difference between PHP versions is dramatic. PHP 8.3 can be twice as fast as PHP 7.4 on the same code.
| PHP Version | Status | Recommendation |
|---|---|---|
| PHP 8.4 / 8.3 | ✅ Active support | Recommended |
| PHP 8.2 | ⚠️ Security only | Minimum version |
| PHP 8.1 and below | ❌ End of Life | Update urgently |
Slow database queries are a common cause of high TTFB. Especially on sites with lots of content or complex filters.
Basic steps: add indexes to columns used in frequent queries, enable the slow query log to find the worst offenders, and cache the results of heavy queries.
Cheap shared hosting or VPS means hundreds of sites on one server. If a neighboring site/client "eats up" resources — everyone suffers.
A dedicated server provides guaranteed resources. The speed difference can be 2-3x.
🔴 Red flag: If TTFB is consistently above 1 second — no frontend optimization will save you. The problem is on the server side, and it needs to be solved there.
CDN (Content Delivery Network) is a network of servers around the world that store copies of your content. Users get files from the nearest server, not your main one.
From our experience: if your audience is in one country and the server is nearby — CDN will give 10-15% improvement through static caching. The real effect (50%+) is felt by projects with a global audience. But even for local sites, CDN is useful as DDoS protection and an additional caching layer.
Quick self-diagnosis. Go through the list and mark what's already done:

- images compressed and served in WebP
- lazy loading enabled for below-the-fold images
- width and height specified on images
- scripts load with defer or async
- CSS and JavaScript minified
- browser caching configured (Cache-Control)
- server-side page caching enabled
- Gzip or Brotli compression on
- PHP 8.2 or newer
- TTFB under 600 ms
If more than 5 points aren't done — there's significant room for improvement.
If you're having optimization issues — get in touch. Our team of specialists will help you figure it out: we'll conduct an audit, find bottlenecks, configure the server.
What page load time is considered good? Google recommends LCP under 2.5 seconds. In practice: if the page fully loads in 3 seconds — that's good. In 1-2 seconds — excellent. Over 5 seconds — a problem users will notice.
Is installing a caching plugin enough? It's a good first step, but not a cure-all. A caching plugin will help with server load, but won't fix heavy images, render-blocking scripts, or a slow database. You need a comprehensive approach.
Why is the mobile version slower? Mobile devices have weaker processors and often slower internet. JavaScript that executes quickly on desktop can "hang" a mobile browser. Always test the mobile version separately.
Does site speed affect SEO? Yes, directly. Core Web Vitals have been an official Google ranking factor since 2021. Slow sites get lower positions. Additionally, a slow site has a higher bounce rate, which also negatively impacts SEO.
How often should you check site speed? We recommend monthly via PageSpeed Insights. Mandatory — after CMS updates, installing new plugins, or site changes. Speed can degrade gradually, and regular monitoring helps catch it early.
How much faster is a VPS than shared hosting? Depends on specific providers, but on average a VPS is 2-3x faster. The main advantage is guaranteed resources. On shared hosting, your site competes with hundreds of others for CPU and RAM.