Why Performance Audits Matter
Website performance directly impacts three things: search rankings, user experience, and conversion rates. Google uses Core Web Vitals as a ranking signal. Users abandon pages that take more than about 3 seconds to load. And industry studies have repeatedly put the cost of every 100ms of added latency at roughly 1% of e-commerce revenue.
A structured performance audit identifies exactly where your site is slow, why it's slow, and what to fix first.
Before You Start
Tools You'll Need
| Tool | Purpose | Cost |
|---|---|---|
| Google PageSpeed Insights | CWV scores + field data | Free |
| Chrome DevTools (Performance tab) | Detailed waterfall analysis | Free |
| Lighthouse (Chrome built-in) | Comprehensive audit scoring | Free |
| WebPageTest | Multi-location, multi-device testing | Free tier |
| Google Search Console | CWV report with real-user data | Free |
| Chrome UX Report (CrUX) | Real-user performance data | Free |
| webpack-bundle-analyzer / source-map-explorer | JavaScript bundle analysis | Free |
Pages to Audit
Don't audit every page. Start with these categories:
- Homepage — Highest traffic, first impression
- Top landing pages — Highest organic traffic (check GSC)
- Conversion pages — Pricing, signup, checkout
- Template representatives — One page per layout template
- Worst performers — Flagged in CWV report
Phase 1: Performance Baseline
Step 1: Collect Lab Data
Run Lighthouse on each target page in an incognito window so browser extensions don't skew results. Note that INP requires real user interactions and can't be measured in the lab; Lighthouse reports Total Blocking Time (TBT) as its lab proxy. For each page, record:
| Page | LCP | TBT | CLS | TTFB | FCP | Speed Index | Score |
|---|---|---|---|---|---|---|---|
| Homepage | | | | | | | |
| /pricing | | | | | | | |
| /blog/top-post | | | | | | | |
Run each page 3 times and use the median score. Network conditions and CPU load cause variance.
Step 2: Collect Field Data
Lab data shows potential; field data shows reality. Pull real-user metrics from:
- Google Search Console → Core Web Vitals report — Shows pass/fail status by page group
- PageSpeed Insights → "Field Data" section — p75 real-user CWV metrics over the trailing 28 days
- Chrome UX Report — Origin-level and URL-level p75 metrics
Critical thresholds (75th percentile):
| Metric | Good | Needs Improvement | Poor |
|---|---|---|---|
| LCP | ≤2.5s | ≤4.0s | >4.0s |
| INP | ≤200ms | ≤500ms | >500ms |
| CLS | ≤0.1 | ≤0.25 | >0.25 |
Step 3: Benchmark Competitors
Test 3–5 competitor pages targeting the same keywords. This contextualizes your numbers—if competitors load in 1.5s and you load in 3.5s, the gap is a competitive disadvantage.
Phase 2: Core Web Vitals Deep Dive
Largest Contentful Paint (LCP)
LCP measures when the largest visible element finishes rendering. Common culprits:
Slow server response (high TTFB):
- Unoptimized database queries
- No CDN or CDN misconfiguration
- Server-side rendering bottlenecks
Render-blocking resources:
- Large CSS files loaded synchronously
- JavaScript blocking the main thread before LCP element renders
- Font files delaying text paint
Large LCP element:
- Uncompressed hero images
- Missing responsive image srcset
- No image preload for above-fold content
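Where the LCP element is a hero image, a preload hint plus a priority hint usually helps. A minimal sketch — file paths and sizes are placeholders:

```html
<head>
  <!-- Start fetching the hero image before the parser reaches the <img> -->
  <link rel="preload" as="image" href="/images/hero.webp"
        imagesrcset="/images/hero-800.webp 800w, /images/hero-1600.webp 1600w"
        imagesizes="100vw">
</head>
<body>
  <!-- fetchpriority="high" tells the browser to prioritize this request -->
  <img src="/images/hero.webp"
       srcset="/images/hero-800.webp 800w, /images/hero-1600.webp 1600w"
       sizes="100vw" width="1600" height="900"
       fetchpriority="high" alt="Hero illustration">
</body>
```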
Diagnostic steps:
- Open DevTools → Performance tab → Record page load
- Find the LCP marker in the timeline
- Identify what element triggered LCP (usually hero image or heading)
- Trace the waterfall to find what delayed it
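To confirm which element triggered LCP (step 3) without eyeballing the timeline, you can log LCP candidates from the console. A small sketch using the standard PerformanceObserver API:

```html
<script>
  // Each entry is an LCP candidate; the last one logged is the final LCP element.
  new PerformanceObserver((entryList) => {
    for (const entry of entryList.getEntries()) {
      console.log('LCP candidate at', Math.round(entry.startTime), 'ms:', entry.element);
    }
  }).observe({ type: 'largest-contentful-paint', buffered: true });
</script>
```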
Interaction to Next Paint (INP)
INP measures responsiveness—how quickly the page responds to user interactions. Common culprits:
- Long JavaScript tasks blocking the main thread
- Heavy event handlers (click, scroll, input)
- Excessive DOM size causing slow layout recalculations
- Third-party scripts competing for main thread time
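To find the interactions dragging INP down, the Event Timing API reports any interaction slower than a duration threshold. A diagnostic sketch — the 200ms threshold mirrors the "good" INP boundary:

```html
<script>
  // Log interactions slower than 200ms, along with the element that received them.
  new PerformanceObserver((entryList) => {
    for (const entry of entryList.getEntries()) {
      console.log(entry.name, 'took', Math.round(entry.duration), 'ms on', entry.target);
    }
  }).observe({ type: 'event', durationThreshold: 200, buffered: true });
</script>
```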
Cumulative Layout Shift (CLS)
CLS measures visual stability—unexpected movement of page elements. Common culprits:
- Images and iframes without explicit dimensions
- Dynamically injected content above existing content
- Web fonts causing FOUT/FOIT layout shifts
- Ads and embeds without reserved space
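Most CLS sources are fixed in markup by reserving space up front. A sketch — class names and dimensions are illustrative:

```html
<!-- Explicit width/height lets the browser reserve the box before the image loads -->
<img src="/images/card.jpg" width="400" height="300" alt="Product card">

<!-- Reserve a fixed slot for an ad or embed so late injection cannot push content down -->
<div class="ad-slot" style="min-height: 250px">
  <!-- ad script injects here -->
</div>
```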
Phase 3: Resource Loading Audit
JavaScript Analysis
- Open DevTools → Network tab → Filter by JS
- Record total JS transferred and parsed
- Identify the largest bundles
- Check for unused JS: Coverage tab (Ctrl+Shift+P → "Coverage")
Key questions:
- Is any JavaScript render-blocking? (loaded in `<head>` without `defer` or `async`)
- How much JS is unused on initial load? (Coverage tab shows percentage)
- Are vendor libraries duplicated across bundles?
- Could any libraries be replaced with lighter alternatives?
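For the render-blocking question above, the script-loading attributes compare like this (paths are placeholders):

```html
<!-- Blocks HTML parsing until downloaded and executed: avoid for non-critical JS -->
<script src="/js/app.js"></script>

<!-- defer: downloads in parallel, runs in document order after parsing finishes -->
<script src="/js/app.js" defer></script>

<!-- async: runs as soon as it arrives; only safe for independent scripts -->
<script src="/js/analytics.js" async></script>

<!-- module scripts are deferred by default -->
<script type="module" src="/js/main.js"></script>
```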
CSS Analysis
- Filter Network tab by CSS
- Check total CSS transferred
- Use Coverage tab to find unused CSS
- Identify render-blocking stylesheets
Common fixes:
- Inline critical CSS, async-load the rest
- Remove unused CSS with PurgeCSS or similar
- Split CSS by route for code-split loading
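The first fix — inline critical CSS and async-load the rest — commonly uses the `media` swap trick. A sketch, with the stylesheet path and inlined rules as placeholders:

```html
<head>
  <style>
    /* Above-the-fold rules inlined so first paint needs no CSS network request */
    body { margin: 0; font-family: system-ui, sans-serif; }
    .hero { min-height: 60vh; }
  </style>
  <!-- media="print" keeps the request from blocking render;
       onload flips it to a normal stylesheet once it arrives -->
  <link rel="stylesheet" href="/css/main.css" media="print" onload="this.media='all'">
  <noscript><link rel="stylesheet" href="/css/main.css"></noscript>
</head>
```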
Font Loading
Web fonts cause invisible or fallback text (FOIT/FOUT) and add to page weight.
Audit checklist:
- Using `font-display: swap` or `optional`?
- Fonts preloaded with `<link rel="preload">`?
- Subset to needed character sets only?
- Using modern formats (WOFF2)?
- Self-hosted vs third-party CDN?
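Several of these checklist items combine into a few lines of markup. A sketch — the font path and unicode range are examples:

```html
<head>
  <!-- Preload starts the font request early; crossorigin is required
       for font preloads even on the same origin -->
  <link rel="preload" as="font" type="font/woff2" href="/fonts/body.woff2" crossorigin>
  <style>
    @font-face {
      font-family: "BodyFont";
      src: url("/fonts/body.woff2") format("woff2");
      /* swap shows fallback text immediately instead of invisible text (FOIT) */
      font-display: swap;
      /* unicode-range gates the download: this face is fetched only if the page
         uses these codepoints (pair it with an actually subset font file) */
      unicode-range: U+0000-00FF;
    }
  </style>
</head>
```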
Image Audit
Images are usually the heaviest resource type, often accounting for half or more of total page weight.
| Check | Tool | Target |
|---|---|---|
| Format | Lighthouse | WebP or AVIF for all raster images |
| Sizing | DevTools | Serve at display size, not larger |
| Compression | PageSpeed Insights | Quality 75–85 for photos |
| Lazy loading | Source inspection | All below-fold images |
| Dimensions | HTML inspection | Width/height on all `<img>` tags |
| Preload | `<head>` inspection | Hero/LCP images preloaded |
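For the format and lazy-loading rows, a below-the-fold image might look like this (file names are placeholders):

```html
<!-- Serve AVIF where supported, fall back to WebP, then JPEG.
     loading="lazy" defers the fetch until the image nears the viewport. -->
<picture>
  <source type="image/avif" srcset="/images/product.avif">
  <source type="image/webp" srcset="/images/product.webp">
  <img src="/images/product.jpg" width="800" height="600"
       loading="lazy" decoding="async" alt="Product photo">
</picture>
```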
Phase 4: Third-Party Script Audit
Third-party scripts (analytics, ads, chat, social) often cause the largest performance regressions.
Measuring Impact
- Run Lighthouse with all third-party scripts
- Block all third-party scripts (DevTools → Network → Block request domain)
- Run Lighthouse again
- The difference is your third-party performance cost
Common Offenders
| Category | Typical Impact | Mitigation |
|---|---|---|
| Chat widgets | 200–500ms LCP, 100–300KB | Lazy-load on scroll or click |
| Analytics (multiple) | 100–300ms, 50–150KB | Consolidate to one provider |
| A/B testing | 200–800ms TTFB (blocking) | Move to edge, reduce blocking |
| Social embeds | 500ms+, 200KB+ each | Use static screenshots with links |
| Ad scripts | 300ms+, highly variable | Lazy-load below fold |
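The chat-widget mitigation is usually a facade: render a lightweight placeholder and load the real embed on first user intent. A sketch — the vendor URL and placeholder id are assumptions:

```html
<div id="chat-placeholder">Chat with us</div>
<script>
  let chatLoaded = false;
  function loadChatWidget() {
    if (chatLoaded) return; // guard against both triggers firing
    chatLoaded = true;
    const s = document.createElement('script');
    s.src = 'https://chat.example.com/widget.js'; // your vendor's embed script
    s.async = true;
    document.body.appendChild(s);
  }
  // Load on first scroll or on a click of the placeholder, whichever comes first
  window.addEventListener('scroll', loadChatWidget, { once: true });
  document.getElementById('chat-placeholder')
    .addEventListener('click', loadChatWidget, { once: true });
</script>
```

The same pattern covers social embeds: show a static screenshot and swap in the live embed on click.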
Phase 5: Mobile Performance Testing
Test on real mobile conditions, not just desktop with a narrow viewport.
Throttling Profiles
| Profile | CPU | Network | Represents |
|---|---|---|---|
| Fast mobile | 4x slowdown | Fast 4G (12Mbps, 50ms RTT) | Good mobile experience |
| Average mobile | 4x slowdown | Regular 4G (4Mbps, 170ms RTT) | Typical user |
| Slow mobile | 6x slowdown | Slow 3G (400Kbps, 400ms RTT) | Worst case |
Mobile-Specific Checks
- Touch target sizes (minimum 48x48px)
- Viewport configuration
- No horizontal scroll
- Font sizes readable without zoom
- Critical content visible without JavaScript
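The viewport and touch-target checks map to a few lines of markup and CSS (the selector is illustrative):

```html
<!-- width=device-width avoids the desktop fallback viewport;
     leaving zoom enabled is also an accessibility requirement -->
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
  /* Keep tap targets at the 48x48px minimum */
  nav a {
    display: inline-flex;
    align-items: center;
    justify-content: center;
    min-width: 48px;
    min-height: 48px;
  }
</style>
```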
Phase 6: Prioritized Fix List
After completing the audit, organize findings by impact and effort:
Priority Matrix
| Priority | Criteria | Examples |
|---|---|---|
| P0 — Critical | Failing CWV, high impact, quick fix | Image optimization, preload LCP element |
| P1 — High | Significant impact, moderate effort | Code splitting, defer non-critical JS |
| P2 — Medium | Measurable impact, larger effort | Migrate to SSG, implement image CDN |
| P3 — Low | Minor impact or high effort | Full framework migration, edge rendering |
Fix Documentation Template
For each finding, document:
- Issue: [What's wrong]
- Impact: [Which metric, by how much]
- Current: [Current measurement]
- Target: [Goal measurement]
- Fix: [Specific technical action]
- Effort: [Hours estimate]
- Priority: [P0/P1/P2/P3]
Post-Audit: Monitoring
After implementing fixes, set up ongoing monitoring:
- Google Search Console CWV report — Weekly review
- Real-user monitoring (RUM) — Track field metrics continuously
- Performance budgets — Set CI/CD gates for bundle size and Lighthouse scores
- Regression alerts — Notify when metrics degrade beyond thresholds
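For RUM, Google's open-source web-vitals library reports the same field metrics CrUX collects. A minimal sketch, assuming a `/rum` collection endpoint on your backend:

```html
<script type="module">
  import { onLCP, onINP, onCLS } from 'https://unpkg.com/web-vitals@4?module';

  function send(metric) {
    // sendBeacon survives page unload, unlike a regular fetch
    navigator.sendBeacon('/rum', JSON.stringify({
      name: metric.name,      // "LCP" | "INP" | "CLS"
      value: metric.value,    // ms for LCP/INP, unitless for CLS
      rating: metric.rating,  // "good" | "needs-improvement" | "poor"
    }));
  }
  onLCP(send);
  onINP(send);
  onCLS(send);
</script>
```

Aggregated at p75, these numbers should roughly track the field data you see in Search Console.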
Schedule re-audits quarterly, or immediately after major site changes (redesigns, new features, infrastructure changes).
Your Performance Audit Checklist
Baseline (Phase 1):
- Lighthouse scores recorded for all target pages
- Field data collected from CrUX/GSC
- Competitor benchmarks captured
Core Web Vitals (Phase 2):
- LCP bottleneck identified for each page
- INP issues diagnosed
- CLS sources catalogued
Resources (Phase 3):
- JS bundle sizes and unused code documented
- CSS delivery strategy evaluated
- Font loading optimized
- Image audit complete
Third-Party (Phase 4):
- Third-party impact measured
- Mitigation strategies for top offenders
Mobile (Phase 5):
- Throttled testing complete
- Mobile-specific issues documented
Action Plan (Phase 6):
- All findings prioritized (P0–P3)
- Fix documentation complete
- Monitoring established
FAQs
How often should I run a performance audit?
Full audits quarterly. Lightweight checks (Lighthouse scores on key pages) monthly. Always run an audit before and after major site changes.
What Lighthouse score should I target?
Aim for 90+ on mobile. Scores of 50–89 indicate significant optimization opportunities. Scores below 50 need urgent attention; they usually coincide with failing Core Web Vitals in the field, which is the signal Google actually uses for ranking.
Should I prioritize lab data or field data?
Field data (real users) is the ground truth and what Google uses for rankings. Lab data is useful for diagnosing specific issues and testing fixes before deployment.
Can performance improvements actually improve rankings?
Yes, but the effect size depends on your starting point. Sites with poor CWV that fix to "good" often see measurable ranking improvements. Sites already in the "good" range see diminishing returns from further speed optimization.