
FMP Deprecation: Why First Meaningful Paint Was Retired

Learn why First Meaningful Paint was deprecated, what replaced it, and how the shift to Largest Contentful Paint changed web performance measurement.

Rankwise Team · Updated Mar 12, 2026 · 5 min read

First Meaningful Paint (FMP) was a web performance metric that measured when a page's primary content became visible. Google deprecated it in Lighthouse 6.0 (May 2020) and replaced it with Largest Contentful Paint (LCP). This article explains what went wrong with FMP and why LCP is a better fit.


What FMP tried to do

FMP aimed to capture the moment users perceived a page as "loaded enough" — not when the first pixel appeared (that is First Contentful Paint, or FCP), but when the meaningful content showed up. Think: the hero image on a landing page, the article text on a blog post, or the product photo on an e-commerce listing.

The algorithm tracked layout changes during page load, identified the biggest visual shift after First Contentful Paint, and reported that timestamp as FMP.

The idea was sound. The execution was not.


Why FMP failed

1. "Meaningful" is subjective

There is no universal definition of what content is most meaningful on a page. FMP used heuristics to guess, and those heuristics disagreed across implementations.

Two measurement tools could produce different FMP values for the exact same page load because they weighed layout changes differently. This made FMP unreliable for benchmarking or A/B testing.

2. Results varied across browsers

Chrome's FMP implementation differed from what Lighthouse computed in its simulated throttling mode. Even within Chrome, results changed between versions as the heuristic was tuned.

3. Edge cases were common

FMP broke in predictable ways:

  • Single-page apps: Layout changes during client-side rendering confused the algorithm
  • Ad-heavy pages: Ads triggered large layout shifts that FMP sometimes mistook for primary content
  • Progressive loading: Pages that loaded content in stages produced unstable FMP values
  • Skeleton screens: The skeleton itself could register as the "meaningful" paint

4. Poor correlation with user perception

Google's research found that FMP did not reliably match when users actually perceived a page as useful. In user studies, participants reported different "meaningful" moments than what FMP detected.


What replaced FMP

Largest Contentful Paint (LCP) replaced FMP as the primary loading metric. Instead of trying to detect "meaningful" content, LCP measures the render time of the largest image or text block visible in the viewport.

This works better because:

  • Objective criteria: "largest element" is measurable without heuristics
  • Consistent results: all tools agree on what the largest element is
  • Strong UX correlation: the largest element is usually what users care about most
  • Clear optimization target: you know exactly which element to speed up
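Because LCP is defined by objective criteria, it can be observed directly with the standard PerformanceObserver API. A minimal sketch (the function name `observeLCP` and callback shape are illustrative; the guard makes it a no-op outside supporting browsers):

```javascript
// Sketch: watch Largest Contentful Paint candidates in the browser.
// Returns the observer, or null where LCP observation is unavailable.
function observeLCP(onReport) {
  if (typeof PerformanceObserver === 'undefined' ||
      !PerformanceObserver.supportedEntryTypes.includes('largest-contentful-paint')) {
    return null; // e.g. Node, or a browser without LCP support
  }
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // Each new entry is a larger candidate that replaces the previous one;
      // the final LCP is the last entry reported before user input.
      onReport(entry.startTime, entry.element);
    }
  });
  // buffered: true delivers candidates that rendered before observation began.
  observer.observe({ type: 'largest-contentful-paint', buffered: true });
  return observer;
}
```

Note that the browser keeps emitting larger candidates as the page loads, so the last reported value is the one that counts.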

LCP thresholds (Core Web Vitals)

Rating            | Time
Good              | Under 2.5 seconds
Needs improvement | 2.5 – 4.0 seconds
Poor              | Over 4.0 seconds
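These thresholds are simple to encode. A small helper (the name `rateLCP` is illustrative) that maps a millisecond value to the Core Web Vitals ratings above:

```javascript
// Map an LCP value in milliseconds to its Core Web Vitals rating.
function rateLCP(ms) {
  if (ms <= 2500) return 'good';
  if (ms <= 4000) return 'needs-improvement';
  return 'poor';
}
```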

The broader metric evolution

FMP's deprecation was part of a larger shift toward user-centric, reproducible performance metrics:

Era                     | Key metrics                 | Approach
Early (pre-2018)        | Load time, DOMContentLoaded | Technical events, not user-facing
FMP era (2018–2020)     | FCP, FMP, TTI               | Attempted user-centric, but relied on heuristics
Core Web Vitals (2020+) | LCP, FID/INP, CLS           | Objective, reproducible, correlated with UX research

The lesson: metrics need to be both meaningful to users and reproducible by tools. FMP achieved the first goal partially but failed at the second.


What to do if you still reference FMP

In documentation or reports

Replace FMP references with LCP. Update any performance budgets that set FMP thresholds.

In monitoring dashboards

Remove FMP panels. Modern browsers no longer report it. Add LCP tracking using the web-vitals library or CrUX data.
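A sketch of the reporting side, assuming the metric object shape the web-vitals library passes to its callbacks ({ name, value, rating }); the /vitals endpoint is a placeholder for your own analytics collector:

```javascript
// Serialize a web-vitals metric for beaconing.
// LCP values are high-resolution timestamps, so round to whole milliseconds.
function buildVitalsPayload(metric) {
  return JSON.stringify({
    name: metric.name,
    value: Math.round(metric.value),
    rating: metric.rating,
  });
}

// Send the metric to a (placeholder) collector endpoint.
// sendBeacon survives page unload; outside the browser this just returns
// the payload, which also makes the function easy to test.
function reportMetric(metric) {
  const body = buildVitalsPayload(metric);
  if (typeof navigator !== 'undefined' && navigator.sendBeacon) {
    navigator.sendBeacon('/vitals', body);
  }
  return body;
}
```

In a browser you would wire this up with the library's callback, e.g. `onLCP(reportMetric)` from web-vitals v3+.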

In historical data

Keep FMP data for historical context, but do not compare it directly with LCP values. They measure different things. Establish new LCP baselines and track trends from there.

In CI pipelines

Remove FMP assertions from Lighthouse CI configs. Lighthouse 10+ does not compute FMP and will error on assertions that reference it.
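For reference, a Lighthouse CI assertion targeting LCP instead of FMP might look like this (a minimal lighthouserc sketch; the 2500 ms budget mirrors the "good" threshold above):

```json
{
  "ci": {
    "assert": {
      "assertions": {
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }]
      }
    }
  }
}
```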


Lessons from FMP's failure

FMP's story offers useful principles for choosing any metric:

  1. Objectivity matters more than cleverness. A simpler metric that everyone agrees on beats a sophisticated one that varies by implementation.
  2. Validate with real users. FMP was designed theoretically. LCP was validated against perception studies.
  3. Optimize for the metric you can control. With LCP, you know the target element. With FMP, you were optimizing blind.
  4. Metrics should be actionable. If a metric goes up, you need to know what to fix. LCP points to a specific element. FMP pointed to a heuristic.

Frequently Asked Questions

Is FMP data still available anywhere?

Not from new measurements. Chrome removed FMP from its developer tooling, and Lighthouse stopped computing it in version 6.0. Historical data in tools like WebPageTest or older Lighthouse reports may still show FMP, but no new data is generated.

Does LCP have any weaknesses?

LCP can be unintuitive on pages where the largest element is not the most important one (for example, a decorative background image). In these cases, supplement LCP with Element Timing API measurements on the elements you consider critical.
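The Element Timing API mentioned above works by annotating the elements you care about with an `elementtiming` attribute, e.g. `<img src="hero.jpg" elementtiming="hero-image">` (the identifier "hero-image" is illustrative). A guarded sketch of the observer side:

```javascript
// Sketch: receive render timings for elements marked with elementtiming="...".
// Returns the observer, or null where the 'element' entry type is unavailable.
function observeElementTiming(onEntry) {
  if (typeof PerformanceObserver === 'undefined' ||
      !PerformanceObserver.supportedEntryTypes.includes('element')) {
    return null; // e.g. Node, or a browser without Element Timing support
  }
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // entry.identifier is the elementtiming attribute value;
      // renderTime is preferred, loadTime is the fallback for images.
      onEntry(entry.identifier, entry.renderTime || entry.loadTime);
    }
  });
  observer.observe({ type: 'element', buffered: true });
  return observer;
}
```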

Should I care about FCP now that FMP is gone?

Yes. First Contentful Paint (FCP) is still a valid metric that measures when any content first appears. It is useful as a leading indicator — if FCP is slow, LCP will be slow too. But FCP alone does not capture whether the primary content loaded.

How does this affect SEO?

LCP is one of the three Core Web Vitals that Google uses as a ranking signal. Good LCP (under 2.5s at the 75th percentile) contributes positively to page experience signals. FMP was never a direct ranking factor.
