
Performance Audit Template

Run a comprehensive website performance audit covering Core Web Vitals, page speed, and rendering optimization. Step-by-step framework with actionable fixes.

Time to Complete: 3–5 hours
Word Count: 2,500–4,000
Sections: 8
Difficulty: Advanced

Best Used For

Technical SEO Audits

Systematic performance evaluation of existing websites to identify and prioritize speed improvements

Pre-Launch Checks

Performance validation before launching a new site or major redesign

Core Web Vitals Remediation

Targeted audit when Google Search Console flags CWV issues affecting rankings

Competitive Benchmarking

Compare your site's performance metrics against top-ranking competitors

Template Structure

1. Performance Baseline

Capture current metrics across Core Web Vitals, Lighthouse scores, and real-user data.

Example: Run Lighthouse on 5 representative pages; record LCP, INP (which replaced FID as a Core Web Vital), CLS, TTFB, FCP, and Speed Index.

2. Core Web Vitals Analysis

Deep-dive into each CWV metric with root cause identification.

Example: LCP of 3.2s caused by an unoptimized hero image (1.8MB PNG). Target: under 2.5s via WebP conversion and preload.

3. Resource Loading Audit

Evaluate CSS, JavaScript, fonts, and image delivery.

Example: Found 4 render-blocking scripts totaling 280KB. Recommendation: defer non-critical JS, inline critical CSS.

4. Rendering Performance

Assess the impact of server-side vs client-side rendering on perceived speed.

Example: SPA routes are fully client-rendered, delaying FCP to 2.8s. Migrate content pages to SSG/SSR.

5. Image Optimization

Audit image formats, sizing, lazy loading, and compression.

Example: 47 images lack width/height attributes, causing CLS; 12 above-fold images are missing preload hints.

6. Third-Party Script Impact

Measure the performance cost of analytics, ads, chat widgets, and tracking scripts.

Example: The Intercom widget adds 340ms to LCP and 120KB to the bundle. Consider lazy-loading it after first interaction.

7. Mobile Performance

Evaluate performance on mobile networks and devices.

Example: Under 4G throttling, TTI increases from 3.1s to 8.4s due to an unminified JS bundle.

8. Prioritized Fix List

Rank recommendations by impact and implementation effort.

Example: P0: convert hero images to WebP (est. −1.2s LCP). P1: defer non-critical JS (est. −0.8s TTI).


Why Performance Audits Matter

Website performance directly impacts three things: search rankings, user experience, and conversion rates. Google uses Core Web Vitals as a ranking signal. Users abandon pages that take more than 3 seconds to load. And every 100ms of added latency costs e-commerce sites roughly 1% in revenue.

A structured performance audit identifies exactly where your site is slow, why it's slow, and what to fix first.

Before You Start

Tools You'll Need

| Tool | Purpose | Cost |
|---|---|---|
| Google PageSpeed Insights | CWV scores + field data | Free |
| Chrome DevTools (Performance tab) | Detailed waterfall analysis | Free |
| Lighthouse (Chrome built-in) | Comprehensive audit scoring | Free |
| WebPageTest | Multi-location, multi-device testing | Free tier |
| Google Search Console | CWV report with real-user data | Free |
| Chrome UX Report (CrUX) | Real-user performance data | Free |
| BundleAnalyzer / Source Map Explorer | JavaScript bundle analysis | Free |

Pages to Audit

Don't audit every page. Start with these categories:

  1. Homepage — Highest traffic, first impression
  2. Top landing pages — Highest organic traffic (check GSC)
  3. Conversion pages — Pricing, signup, checkout
  4. Template representatives — One page per layout template
  5. Worst performers — Flagged in CWV report

Phase 1: Performance Baseline

Step 1: Collect Lab Data

Run Lighthouse on each target page in an incognito window. Record:

| Page | LCP | INP/FID | CLS | TTFB | FCP | Speed Index | Score |
|---|---|---|---|---|---|---|---|
| Homepage | | | | | | | |
| /pricing | | | | | | | |
| /blog/top-post | | | | | | | |

Run each page 3 times and use the median score. Network conditions and CPU load cause variance.
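The median step is easy to script. A minimal sketch (the run values below are illustrative, not real measurements):

```python
from statistics import median

def median_metrics(runs):
    """Collapse repeated Lighthouse runs into per-metric medians.

    `runs` is a list of dicts, one per run, e.g. {"LCP": 2.9, "CLS": 0.12}.
    """
    metrics = runs[0].keys()
    return {m: median(r[m] for r in runs) for m in metrics}

# Three hypothetical runs of the homepage (LCP/TTFB in seconds):
runs = [
    {"LCP": 3.1, "CLS": 0.12, "TTFB": 0.62},
    {"LCP": 2.8, "CLS": 0.10, "TTFB": 0.58},
    {"LCP": 3.4, "CLS": 0.15, "TTFB": 0.71},
]
print(median_metrics(runs))  # {'LCP': 3.1, 'CLS': 0.12, 'TTFB': 0.62}
```

Recording medians per metric (rather than picking one "median run") smooths out single-run outliers caused by network or CPU variance.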

Step 2: Collect Field Data

Lab data shows potential; field data shows reality. Pull real-user metrics from:

  • Google Search Console → Core Web Vitals report — Shows pass/fail status by page group
  • PageSpeed Insights → "Field Data" section — 28-day real-user CWV averages
  • Chrome UX Report — Origin-level and URL-level p75 metrics

Critical thresholds (75th percentile):

| Metric | Good | Needs Improvement | Poor |
|---|---|---|---|
| LCP | ≤2.5s | ≤4.0s | >4.0s |
| INP | ≤200ms | ≤500ms | >500ms |
| CLS | ≤0.1 | ≤0.25 | >0.25 |
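If you are pulling p75 field values into a spreadsheet or script, the thresholds above reduce to a simple classifier. A sketch:

```python
# CWV thresholds at p75 (LCP in seconds, INP in milliseconds, CLS unitless).
THRESHOLDS = {
    "LCP": (2.5, 4.0),
    "INP": (200, 500),
    "CLS": (0.1, 0.25),
}

def classify(metric, value):
    """Return Good / Needs Improvement / Poor for a p75 field value."""
    good, needs_improvement = THRESHOLDS[metric]
    if value <= good:
        return "Good"
    if value <= needs_improvement:
        return "Needs Improvement"
    return "Poor"

print(classify("LCP", 3.2))   # Needs Improvement
print(classify("CLS", 0.31))  # Poor
```

Keep the units consistent with whatever source you pull from; CrUX reports LCP in milliseconds, so convert before classifying.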

Step 3: Benchmark Competitors

Test 3–5 competitor pages targeting the same keywords. This contextualizes your numbers—if competitors load in 1.5s and you load in 3.5s, the gap is a competitive disadvantage.

Phase 2: Core Web Vitals Deep Dive

Largest Contentful Paint (LCP)

LCP measures when the largest visible element finishes rendering. Common culprits:

Slow server response (high TTFB):

  • Unoptimized database queries
  • No CDN or CDN misconfiguration
  • Server-side rendering bottlenecks

Render-blocking resources:

  • Large CSS files loaded synchronously
  • JavaScript blocking the main thread before LCP element renders
  • Font files delaying text paint

Large LCP element:

  • Uncompressed hero images
  • Missing responsive image srcset
  • No image preload for above-fold content

Diagnostic steps:

  1. Open DevTools → Performance tab → Record page load
  2. Find the LCP marker in the timeline
  3. Identify what element triggered LCP (usually hero image or heading)
  4. Trace the waterfall to find what delayed it
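Google's LCP guidance breaks the metric into four phases: TTFB, resource load delay, resource load time, and element render delay. Summing a page's phase timings shows which phase to attack first. A sketch with illustrative numbers:

```python
def lcp_breakdown(ttfb, load_delay, load_time, render_delay):
    """Decompose LCP into its four standard phases and flag the dominant one.

    All values in milliseconds; the sample numbers below are illustrative.
    """
    phases = {
        "TTFB": ttfb,
        "Resource load delay": load_delay,
        "Resource load time": load_time,
        "Element render delay": render_delay,
    }
    total = sum(phases.values())
    dominant = max(phases, key=phases.get)
    return total, dominant

total, dominant = lcp_breakdown(ttfb=600, load_delay=900, load_time=1200, render_delay=500)
print(f"LCP ~ {total}ms, dominated by: {dominant}")
# LCP ~ 3200ms, dominated by: Resource load time
```

If resource load time dominates, the fix is compression/format work; if load delay dominates, preloading and discovery order; if TTFB dominates, the server or CDN.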

Interaction to Next Paint (INP)

INP measures responsiveness—how quickly the page responds to user interactions. Common culprits:

  • Long JavaScript tasks blocking the main thread
  • Heavy event handlers (click, scroll, input)
  • Excessive DOM size causing slow layout recalculations
  • Third-party scripts competing for main thread time

Cumulative Layout Shift (CLS)

CLS measures visual stability—unexpected movement of page elements. Common culprits:

  • Images and iframes without explicit dimensions
  • Dynamically injected content above existing content
  • Web fonts causing FOUT/FOIT layout shifts
  • Ads and embeds without reserved space

Phase 3: Resource Loading Audit

JavaScript Analysis

  1. Open DevTools → Network tab → Filter by JS
  2. Record total JS transferred and parsed
  3. Identify the largest bundles
  4. Check for unused JS: Coverage tab (Ctrl+Shift+P → "Coverage")

Key questions:

  • Is any JavaScript render-blocking? (loaded in <head> without defer or async)
  • How much JS is unused on initial load? (Coverage tab shows percentage)
  • Are vendor libraries duplicated across bundles?
  • Could any libraries be replaced with lighter alternatives?
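The Coverage tab lets you export its results as JSON, which you can summarize offline. A sketch, assuming the export's entry shape (`url`, `ranges` with byte offsets, and the file `text`) matches what current Chrome produces:

```python
import json

def unused_js_report(coverage_json):
    """Summarize a DevTools Coverage export: percent of each file's bytes unused."""
    report = {}
    for entry in json.loads(coverage_json):
        total = len(entry["text"])
        used = sum(r["end"] - r["start"] for r in entry["ranges"])
        report[entry["url"]] = round(100 * (1 - used / total), 1)
    return report

# A toy export: a 1000-byte bundle with only 380 bytes exercised on load.
sample = json.dumps([
    {"url": "https://example.com/app.js",
     "ranges": [{"start": 0, "end": 380}],
     "text": "x" * 1000},
])
print(unused_js_report(sample))  # {'https://example.com/app.js': 62.0}
```

Files with a high unused percentage on initial load are the first candidates for code splitting or deferral.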

CSS Analysis

  1. Filter Network tab by CSS
  2. Check total CSS transferred
  3. Use Coverage tab to find unused CSS
  4. Identify render-blocking stylesheets

Common fixes:

  • Inline critical CSS, async-load the rest
  • Remove unused CSS with PurgeCSS or similar
  • Split CSS by route for code-split loading

Font Loading

Web fonts cause invisible or fallback text (FOIT/FOUT) and add to page weight.

Audit checklist:

  • Using font-display: swap or optional?
  • Fonts preloaded with <link rel="preload">?
  • Subset to needed character sets only?
  • Using modern formats (WOFF2)?
  • Self-hosted vs third-party CDN?

Image Audit

Images typically account for 50–70% of page weight.

| Check | Tool | Target |
|---|---|---|
| Format | Lighthouse | WebP or AVIF for all raster images |
| Sizing | DevTools | Serve at display size, not larger |
| Compression | PageSpeed Insights | Quality 75–85 for photos |
| Lazy loading | Source inspection | All below-fold images |
| Dimensions | HTML inspection | Width/height on all <img> tags |
| Preload | <head> inspection | Hero/LCP images preloaded |
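The dimensions check can be partly automated. A regex-based sketch for quick audits (a real HTML parser is more robust against unusual markup):

```python
import re

def imgs_missing_dimensions(html):
    """Return <img> tags lacking explicit width/height (a common CLS source)."""
    return [tag for tag in re.findall(r"<img\b[^>]*>", html, re.IGNORECASE)
            if "width=" not in tag or "height=" not in tag]

# Hypothetical page fragment:
html = '<img src="hero.webp" width="1200" height="600"><img src="footer.png" alt="logo">'
print(imgs_missing_dimensions(html))  # ['<img src="footer.png" alt="logo">']
```

Run it against the rendered DOM (saved via DevTools), not just the source HTML, since client-side rendering often injects images.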

Phase 4: Third-Party Script Audit

Third-party scripts (analytics, ads, chat, social) often cause the largest performance regressions.

Measuring Impact

  1. Run Lighthouse with all third-party scripts
  2. Block all third-party scripts (DevTools → Network → Block request domain)
  3. Run Lighthouse again
  4. The difference is your third-party performance cost
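The before/after delta is just a per-metric subtraction; keeping it in one place makes the cost easy to report. A sketch with illustrative numbers:

```python
def third_party_cost(with_scripts, without_scripts):
    """Difference between the two Lighthouse runs = third-party performance cost.

    Both arguments map metric name -> value; units must match per metric.
    """
    return {m: round(with_scripts[m] - without_scripts[m], 2)
            for m in with_scripts}

# Hypothetical measurements (seconds / milliseconds / kilobytes):
with_tp = {"LCP_s": 3.4, "TBT_ms": 620, "transfer_KB": 1480}
without_tp = {"LCP_s": 2.6, "TBT_ms": 180, "transfer_KB": 990}
print(third_party_cost(with_tp, without_tp))
# {'LCP_s': 0.8, 'TBT_ms': 440, 'transfer_KB': 490}
```

Repeat per blocked domain (rather than all at once) to attribute the cost to individual vendors.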

Common Offenders

| Category | Typical Impact | Mitigation |
|---|---|---|
| Chat widgets | 200–500ms LCP, 100–300KB | Lazy-load on scroll or click |
| Analytics (multiple) | 100–300ms, 50–150KB | Consolidate to one provider |
| A/B testing | 200–800ms TTFB (blocking) | Move to edge, reduce blocking |
| Social embeds | 500ms+, 200KB+ each | Use static screenshots with links |
| Ad scripts | 300ms+, highly variable | Lazy-load below fold |

Phase 5: Mobile Performance Testing

Test on real mobile conditions, not just desktop with a narrow viewport.

Throttling Profiles

| Profile | CPU | Network | Represents |
|---|---|---|---|
| Fast mobile | 4x slowdown | Fast 4G (12Mbps, 50ms RTT) | Good mobile experience |
| Average mobile | 4x slowdown | Regular 4G (4Mbps, 170ms RTT) | Typical user |
| Slow mobile | 6x slowdown | Slow 3G (400Kbps, 400ms RTT) | Worst case |

Mobile-Specific Checks

  • Touch target sizes (minimum 48x48px)
  • Viewport configuration
  • No horizontal scroll
  • Font sizes readable without zoom
  • Critical content visible without JavaScript

Phase 6: Prioritized Fix List

After completing the audit, organize findings by impact and effort:

Priority Matrix

| Priority | Criteria | Examples |
|---|---|---|
| P0 — Critical | Failing CWV, high impact, quick fix | Image optimization, preload LCP element |
| P1 — High | Significant impact, moderate effort | Code splitting, defer non-critical JS |
| P2 — Medium | Measurable impact, larger effort | Migrate to SSG, implement image CDN |
| P3 — Low | Minor impact or high effort | Full framework migration, edge rendering |
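The matrix can be encoded so a findings list sorts itself. A hypothetical scoring sketch, mapping 1–3 impact and effort ratings to the P0–P3 buckets above (the cutoffs are one reasonable choice, not the only one):

```python
def priority(impact, effort):
    """Map 1-3 impact/effort ratings (3 = high) to a P0-P3 bucket."""
    if impact >= 3 and effort <= 1:
        return "P0"  # high impact, quick fix
    if impact >= 2 and effort <= 2:
        return "P1"  # significant impact, moderate effort
    if impact >= 2:
        return "P2"  # measurable impact, larger effort
    return "P3"      # minor impact or high effort

# Hypothetical findings: (name, impact, effort)
findings = [
    ("Convert hero images to WebP", 3, 1),
    ("Defer non-critical JS", 2, 2),
    ("Migrate content pages to SSG", 3, 3),
    ("Tidy unused CSS variables", 1, 1),
]
for name, impact, effort in sorted(findings, key=lambda f: priority(f[1], f[2])):
    print(priority(impact, effort), name)
```

Sorting by the bucket label works here because P0 < P1 < P2 < P3 lexicographically.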

Fix Documentation Template

For each finding, document:

Issue: [What's wrong]
Impact: [Which metric, by how much]
Current: [Current measurement]
Target: [Goal measurement]
Fix: [Specific technical action]
Effort: [Hours estimate]
Priority: [P0/P1/P2/P3]

Post-Audit: Monitoring

After implementing fixes, set up ongoing monitoring:

  • Google Search Console CWV report — Weekly review
  • Real-user monitoring (RUM) — Track field metrics continuously
  • Performance budgets — Set CI/CD gates for bundle size and Lighthouse scores
  • Regression alerts — Notify when metrics degrade beyond thresholds
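A budget gate can be a short script in CI. A sketch, with hypothetical budget values you would tune to your own baseline:

```python
# Hypothetical budgets: bundle size and LCP are ceilings,
# the Lighthouse score is a floor.
BUDGETS = {"bundle_KB": 300, "lcp_s": 2.5, "lighthouse_score": 90}

def check_budgets(measured):
    """Return a list of budget violations (empty means the gate passes)."""
    violations = []
    for key, budget in BUDGETS.items():
        value = measured[key]
        over = value < budget if key == "lighthouse_score" else value > budget
        if over:
            violations.append(f"{key}: {value} vs budget {budget}")
    return violations

violations = check_budgets({"bundle_KB": 340, "lcp_s": 2.4, "lighthouse_score": 92})
if violations:
    print("\n".join(violations))  # in CI, also exit non-zero to fail the job
```

Wire this to whatever produces the measurements (a Lighthouse CI run, a bundler stats file) and fail the build when the list is non-empty.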

Schedule re-audits quarterly, or immediately after major site changes (redesigns, new features, infrastructure changes).

Your Performance Audit Checklist

Baseline (Phase 1):

  • Lighthouse scores recorded for all target pages
  • Field data collected from CrUX/GSC
  • Competitor benchmarks captured

Core Web Vitals (Phase 2):

  • LCP bottleneck identified for each page
  • INP issues diagnosed
  • CLS sources catalogued

Resources (Phase 3):

  • JS bundle sizes and unused code documented
  • CSS delivery strategy evaluated
  • Font loading optimized
  • Image audit complete

Third-Party (Phase 4):

  • Third-party impact measured
  • Mitigation strategies for top offenders

Mobile (Phase 5):

  • Throttled testing complete
  • Mobile-specific issues documented

Action Plan (Phase 6):

  • All findings prioritized (P0–P3)
  • Fix documentation complete
  • Monitoring established

FAQs

How often should I run a performance audit?

Full audits quarterly. Lightweight checks (Lighthouse scores on key pages) monthly. Always run an audit before and after major site changes.

What Lighthouse score should I target?

Aim for 90+ on mobile. Scores of 50–89 indicate significant optimization opportunities. Scores below 50 require urgent attention: they usually coincide with failing Core Web Vitals in field data, which Google does use as a ranking signal (the Lighthouse score itself is not one).

Should I prioritize lab data or field data?

Field data (real users) is the ground truth and what Google uses for rankings. Lab data is useful for diagnosing specific issues and testing fixes before deployment.

Can performance improvements actually improve rankings?

Yes, but the effect size depends on your starting point. Sites with poor CWV that fix to "good" often see measurable ranking improvements. Sites already in the "good" range see diminishing returns from further speed optimization.

Generate Content with This Template

Rankwise uses this template structure automatically. Create AI-optimized content in minutes instead of hours.

Try Rankwise Free