Every web page goes through a rendering process before it becomes visible. For static HTML pages, this process is straightforward. For JavaScript-powered pages, it's a multi-stage pipeline that browsers, search engines, and AI crawlers handle very differently. Understanding these differences is the foundation of JavaScript SEO.
How browsers render pages
When a user visits a page, the browser follows this sequence:
1. Fetch HTML – Download the initial HTML document
2. Parse HTML – Build the DOM (Document Object Model) tree
3. Fetch CSS – Download and parse stylesheets, build the CSSOM
4. Fetch JavaScript – Download script files
5. Execute JavaScript – Run scripts that may modify the DOM
6. Build the render tree – Combine DOM and CSSOM into the render tree
7. Layout – Calculate element positions and sizes
8. Paint – Draw pixels to the screen
For a server-rendered page, the DOM is complete after step 2. The browser can start rendering immediately.
For a client-rendered SPA, the DOM after step 2 is nearly empty (just a `<div id="root">` mount point). The actual content doesn't appear until step 5, when JavaScript executes and populates the DOM.
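The difference is easy to see by extracting the visible text from each kind of response. Here is a minimal sketch using Python's built-in `html.parser`; the two sample documents are invented for illustration:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the visible text a crawler can read from raw HTML,
    ignoring the contents of script and style tags."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def visible_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)

# What a crawler receives from a client-rendered SPA: an empty shell.
csr_shell = '<html><body><div id="root"></div><script src="bundle.js"></script></body></html>'

# What it receives from a server-rendered page: the content itself.
ssr_page = '<html><body><div id="root"><h1>Product name</h1><p>Description.</p></div></body></html>'

print(visible_text(csr_shell))  # -> "" (nothing to index)
print(visible_text(ssr_page))   # -> "Product name Description."
```

Without executing `bundle.js`, the CSR shell yields no text at all, which is exactly the view a non-rendering crawler has of the page.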
How Googlebot renders JavaScript
Google uses a two-phase process:
Phase 1: Crawling (immediate)
Googlebot fetches the initial HTML and extracts:
- Links (for discovering other pages)
- Meta tags (title, description, robots)
- Canonical URLs
- Structured data
If the HTML already contains content (SSR/SSG), Google can index the page immediately.
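Phase 1 amounts to a plain HTML parse with no script execution. The kind of extraction involved can be sketched with Python's built-in `html.parser` (the sample page is invented for illustration):

```python
from html.parser import HTMLParser

class CrawlExtractor(HTMLParser):
    """Pulls out what phase-1 crawling can see in raw HTML:
    links, the title, meta tags, and the canonical URL."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.meta = {}
        self.canonical = None
        self.title = ""
        self._in_title = False
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and a.get("href"):
            self.links.append(a["href"])
        elif tag == "link" and a.get("rel") == "canonical":
            self.canonical = a.get("href")
        elif tag == "meta" and a.get("name"):
            self.meta[a["name"]] = a.get("content", "")
        elif tag == "title":
            self._in_title = True
    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False
    def handle_data(self, data):
        if self._in_title:
            self.title += data

html_doc = """<html><head>
<title>Blue Widget</title>
<meta name="description" content="A very blue widget.">
<link rel="canonical" href="https://example.com/widget">
</head><body><a href="/other-widget">Related</a></body></html>"""

crawl = CrawlExtractor()
crawl.feed(html_doc)
print(crawl.title, crawl.canonical, crawl.links)
```

Anything a `<script>` would have injected later simply never reaches this pass.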
Phase 2: Rendering (deferred)
For JavaScript-dependent pages, Google queues the page for rendering:
- Google launches a headless Chromium instance
- Loads the page and executes JavaScript
- Waits for the DOM to stabilize
- Extracts the rendered content
- Updates the index with the rendered version
Key limitation: Rendering is not instant. Pages sit in a queue that can take hours or days to process. During this gap, the page is either not indexed or indexed with incomplete content.
| Aspect | Phase 1 (Crawl) | Phase 2 (Render) |
|---|---|---|
| Timing | Immediate | Hours to days later |
| What's processed | Initial HTML only | JavaScript-rendered DOM |
| Content available | Server-rendered content | All content including JS-rendered |
| Resource cost | Low | High (Chromium instance per page) |
How AI crawlers handle JavaScript
AI search crawlers — GPTBot (ChatGPT), ClaudeBot (Claude), PerplexityBot (Perplexity) — do not execute JavaScript. They fetch the HTML response and process it as-is.
This means:
- Client-rendered content is invisible to AI search
- Meta tags injected by JavaScript are not seen
- Dynamic routes that rely on JS routing are not discovered
- AI search citations require server-rendered HTML
If your content only exists after JavaScript runs, it cannot be cited by AI answer engines.
Rendering strategies compared
Client-Side Rendering (CSR)
JavaScript runs in the browser and builds the page from scratch.
```text
Server sends: <div id="root"></div> + bundle.js
Browser: downloads JS → executes → renders content
Crawler sees: empty HTML (initially)
```
SEO impact: Content invisible until rendering queue processes the page. AI crawlers never see content.
Server-Side Rendering (SSR)
The server executes JavaScript and sends complete HTML.
```text
Server: executes React/Vue → generates HTML → sends to browser
Browser: displays HTML immediately → hydrates with JS
Crawler sees: full content in initial HTML
```
SEO impact: Content immediately indexable. All crawlers can process it.
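Stripped of any framework, SSR is simply "the server builds the final HTML string before responding." A toy sketch of that idea (the product data and template are invented; a real React app would use `renderToString` or a framework's render pipeline):

```python
# Server-side rendering in miniature: the server turns data into complete
# HTML, so the very first byte of the response already contains content.

def render_product_page(product: dict) -> str:
    return (
        "<html><head>"
        f"<title>{product['name']}</title>"
        f'<meta name="description" content="{product["summary"]}">'
        "</head><body>"
        f"<h1>{product['name']}</h1><p>{product['summary']}</p>"
        "</body></html>"
    )

html = render_product_page({"name": "Blue Widget", "summary": "A very blue widget."})
print("<h1>Blue Widget</h1>" in html)  # -> True: crawlers see content without running JS
```

The browser then hydrates this markup with JavaScript to make it interactive, but crawlers never need that step.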
Static Site Generation (SSG)
Pages are pre-rendered at build time and served as static HTML.
```text
Build step: executes React/Vue → generates HTML files
CDN: serves pre-built HTML
Browser: displays HTML → hydrates with JS
```
Crawler sees: full content in initial HTML
SEO impact: Best performance (CDN-served), full content available. Suitable for content that doesn't change on every request.
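SSG is the same rendering idea moved to build time: every page is rendered once and written to disk, and the CDN later serves those files unchanged. A sketch (the page data and output directory are invented for illustration):

```python
import pathlib
import tempfile

# SSG in miniature: render every page once at build time, write static files.
pages = {
    "index": "<h1>Home</h1>",
    "about": "<h1>About</h1>",
}

def build_site(out_dir: pathlib.Path) -> list[pathlib.Path]:
    written = []
    for slug, body in pages.items():
        path = out_dir / f"{slug}.html"
        path.write_text(f"<html><body>{body}</body></html>")
        written.append(path)
    return written

out = pathlib.Path(tempfile.mkdtemp())
files = build_site(out)
print([f.name for f in files])  # -> ['index.html', 'about.html']
```

No per-request rendering cost remains; the trade-off is that content is frozen until the next build.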
Incremental Static Regeneration (ISR)
Pages are statically generated but regenerate on demand.
```text
First request: serves cached static HTML
Background: regenerates page after revalidation period
Next request: serves updated HTML
```
SEO impact: Combines SSG performance with content freshness. Good for large sites where full rebuilds are impractical.
What Googlebot can and cannot render
Can render
- React, Vue, Angular applications
- Standard DOM manipulation
- Most CSS features
- Fetch/XHR API calls to public endpoints
- Web Components (with polyfills)
Cannot render reliably
- Content behind authentication (login walls)
- Content that requires user interaction (click to load)
- Infinite scroll without pagination
- Content loaded by IntersectionObserver (sometimes)
- Content dependent on service workers
Never processes
- localStorage/sessionStorage data
- IndexedDB
- WebSocket connections
- Browser notifications
- Geolocation API
The rendering gap in practice
Consider a React e-commerce site with 10,000 product pages:
| Timeline | Server-Rendered | Client-Rendered |
|---|---|---|
| Day 1 | Google indexes all 10K pages | Google crawls all 10K pages (HTML is empty) |
| Day 2-5 | Pages appear in search results | Pages queued for rendering |
| Day 5-14 | Full index, rich results eligible | ~7K pages rendered and indexed |
| Day 30 | Stable indexation | ~9K pages rendered, some still missing |
The client-rendered site loses 2-4 weeks of indexation time and may never achieve 100% coverage.
How to verify your rendering
Browser: View Source vs. Inspect
- View Source (Ctrl+U) shows the initial HTML that crawlers receive
- Inspect Element (DevTools) shows the live DOM after JavaScript execution
If content appears in Inspect but not View Source, crawlers may not see it.
Google URL Inspection Tool
In Google Search Console, enter a URL and click "Test Live URL." Google shows:
- The rendered HTML as Googlebot sees it
- A screenshot of the rendered page
- Any JavaScript errors encountered
curl test
```bash
# See what crawlers receive (no JavaScript)
curl -s https://yoursite.com/page | grep "<h1"
```
If the `<h1>` tag appears (the pattern `<h1` also matches headings with attributes), the content is server-rendered. If it doesn't, the page relies on client-side JavaScript.
Choosing the right strategy
| Scenario | Recommended Strategy |
|---|---|
| Blog, docs, marketing pages | SSG |
| E-commerce product pages | SSG or ISR |
| User dashboards (authenticated) | CSR (doesn't need indexing) |
| Search results pages | SSR |
| Real-time data pages | SSR with caching |
| Large content sites (50K+ pages) | ISR |
The general rule: if a page should appear in search results, serve its content in the initial HTML. Use CSR only for authenticated sections that don't need search visibility.
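The table above can be collapsed into a small decision helper. The rules and the 50K-page threshold below are this article's heuristics, not any framework's API:

```python
# A toy decision helper encoding the strategy table above.

def choose_strategy(needs_indexing: bool, authenticated: bool,
                    realtime: bool, page_count: int) -> str:
    if authenticated or not needs_indexing:
        return "CSR"  # private pages never need crawler-visible HTML
    if realtime:
        return "SSR"  # content changes on every request
    if page_count > 50_000:
        return "ISR"  # full rebuilds are impractical at this scale
    return "SSG"      # default for stable, public content

print(choose_strategy(True, False, False, 200))      # blog -> SSG
print(choose_strategy(True, False, True, 200))       # live search results -> SSR
print(choose_strategy(False, True, False, 10))       # dashboard -> CSR
print(choose_strategy(True, False, False, 100_000))  # huge catalog -> ISR
```

The ordering of the checks encodes the general rule: search visibility is decided first, freshness and scale only after that.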
Frequently Asked Questions
Does Googlebot use the latest version of Chrome?
Google updates the rendering engine to the latest stable Chromium version. However, the rendering environment has no GPU acceleration, limited memory, and cannot handle all browser APIs (localStorage, notifications, etc.).
How much JavaScript can Googlebot handle?
There's no official limit, but Googlebot has a rendering budget. Complex SPAs with large bundles, many API calls, and heavy DOM manipulation are more likely to time out. Keep JavaScript execution lean for pages that need indexing.
Do I need to worry about Bingbot rendering?
Bingbot has limited JavaScript rendering capabilities compared to Googlebot. If Bing traffic matters to you, server-side rendering is even more important.
Can I use Suspense and lazy loading with SSR?
Yes. React 18+ supports streaming SSR with Suspense. Lazy-loaded components can have server-rendered fallbacks that provide content for crawlers while the interactive version loads for users.
Is dynamic rendering (serving different content to bots) considered cloaking?
Google explicitly says dynamic rendering is not cloaking, as long as the rendered content matches what users see. However, they recommend SSR/SSG as the preferred long-term solution over dynamic rendering.
Related Resources
- JavaScript SEO — Core JavaScript SEO concepts
- JavaScript SEO Best Practices — Technical checklist
- SPA SEO Optimization — SEO for single-page applications
- React SEO Guide — React-specific SEO patterns