JavaScript powers modern web applications, but it creates a fundamental tension with search engines: crawlers need HTML to index content, while SPAs generate HTML in the browser. This guide covers every technical requirement for making JavaScript-rendered content visible to Google, Bing, and AI search platforms.
The rendering gap
When Googlebot requests a page, it receives the initial HTML response. For server-rendered pages, this HTML contains the full content. For client-rendered SPAs, it contains an empty shell and a <script> tag.
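For illustration, the initial response from a client-rendered SPA often looks like this (file names hypothetical) — a mount point and a bundle reference, with no indexable content:

```html
<!-- What the crawler receives before any JavaScript runs -->
<html>
  <head><title>My App</title></head>
  <body>
    <div id="root"></div>
    <script src="/static/main.js"></script>
  </body>
</html>
```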
Google's rendering pipeline processes JavaScript, but with significant limitations:
| Factor | Server-Rendered | Client-Rendered |
|---|---|---|
| Initial HTML | Complete content | Empty shell |
| Indexing speed | Immediate | Delayed (hours to days) |
| Rendering reliability | 100% | ~90% (timeouts, errors) |
| AI crawler support | Full | None (no JS execution) |
| Resource cost | Server CPU | Google's rendering budget |
The gap between what users see and what crawlers see determines your indexation rate. Closing this gap is the goal of JavaScript SEO.
Best practice 1: Choose the right rendering strategy
Static Site Generation (SSG)
Best for content that changes infrequently (blog posts, docs, landing pages):
// Next.js SSG
export async function generateStaticParams() {
  const posts = await getAllPosts()
  return posts.map(post => ({ slug: post.slug }))
}

export default async function Post({ params }) {
  const post = await getPost(params.slug)
  return <Article post={post} />
}
When to use: Content pages, marketing sites, documentation.
Server-Side Rendering (SSR)
Best for dynamic content that needs to be fresh on every request:
// Next.js SSR (dynamic rendering)
export const dynamic = "force-dynamic"

export default async function Dashboard({ params }) {
  const data = await fetchDashboardData(params.id)
  return <DashboardView data={data} />
}
When to use: Dashboards with public data, search results pages, real-time content.
Hybrid approach (recommended)
Most applications benefit from mixing strategies:
- SSG for content pages (blog, glossary, guides)
- SSR for dynamic public pages (search results, filtered catalogs)
- CSR for authenticated sections (user dashboards, settings)
Search engines don't need to index authenticated pages, so CSR is fine for those.
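In the Next.js App Router, this mix can be declared per route segment with the `dynamic` route config. A minimal sketch, assuming a typical directory layout (file paths hypothetical):

```javascript
// app/blog/[slug]/page.js — SSG: prebuilt at deploy time
export const dynamic = "force-static"

// app/search/page.js — SSR: rendered fresh on every request
// export const dynamic = "force-dynamic"

// app/dashboard/page.js — CSR: a client component behind auth
// "use client"
```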
Best practice 2: Unique meta tags per page
Every indexable page needs its own title, meta description, and canonical URL in the initial HTML response.
Framework-specific implementation
// Next.js App Router
export async function generateMetadata({ params }) {
  const page = await getPage(params.slug)
  return {
    title: page.title,
    description: page.metaDescription,
    alternates: { canonical: page.canonicalUrl },
    openGraph: {
      title: page.title,
      description: page.metaDescription,
      url: page.canonicalUrl
    }
  }
}
Verify in HTML source
Check that meta tags appear in the initial HTML (view-source, not DevTools):
curl -s https://yoursite.com/page | grep -E '<title>|<meta name="description"'
If the tags only appear after JavaScript execution, they may be missed or delayed by crawlers.
Best practice 3: Internal links must be crawlable
Search engines discover pages by following links. In SPAs, links must render as standard <a> elements with href attributes.
Correct patterns
// React Router — renders crawlable <a> tag
<Link to="/products/widget">Widget</Link>
// Next.js — renders crawlable <a> tag
<Link href="/products/widget">Widget</Link>
// Standard HTML — always crawlable
<a href="/products/widget">Widget</a>
Incorrect patterns
// No href — invisible to crawlers
<div onClick={() => router.push("/products/widget")}>Widget</div>
// JavaScript-only navigation — invisible to crawlers
<button onClick={() => navigate("/page")}>Go to page</button>
// Fragment URLs — treated as same page
<a href="#/products/widget">Widget</a>
Verify link crawlability
# Extract all links from a page's HTML source
curl -s https://yoursite.com/ | grep -oP 'href="[^"]*"' | sort -u
Every important internal page should appear in the link list.
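The same check can run in Node without curl. A rough sketch using a regex (adequate for a smoke test, not a substitute for a real HTML parser; the sample markup is hypothetical):

```javascript
// Extract unique href values from raw HTML.
function extractHrefs(html) {
  const hrefs = new Set()
  for (const match of html.matchAll(/href="([^"]*)"/g)) {
    hrefs.add(match[1])
  }
  return [...hrefs].sort()
}

// Only the real <a href> is discoverable; the onClick div is invisible.
const sample =
  '<a href="/products/widget">Widget</a>' +
  '<div onclick="go()">Hidden page</div>'
console.log(extractHrefs(sample)) // → [ '/products/widget' ]
```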
Best practice 4: Generate XML sitemaps
SPAs with dynamic routes need a programmatically generated sitemap:
// Next.js sitemap.ts
export default async function sitemap() {
  const pages = await getAllPages()
  return pages.map(page => ({
    url: `https://yoursite.com${page.url}`,
    lastModified: page.updatedAt,
    changeFrequency: "weekly",
    priority: page.priority
  }))
}
Submit the sitemap in Google Search Console and reference it in robots.txt:
Sitemap: https://yoursite.com/sitemap.xml
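In Next.js, robots.txt can be generated the same way as the sitemap, via an app-level file convention. A sketch of app/robots.js (domain hypothetical):

```javascript
// app/robots.js — Next.js serves this at /robots.txt
export default function robots() {
  return {
    rules: [{ userAgent: "*", allow: "/" }],
    sitemap: "https://yoursite.com/sitemap.xml",
  }
}
```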
Best practice 5: Implement structured data
Add JSON-LD structured data in the HTML <head>, not injected by client-side JavaScript:
// Server-rendered JSON-LD
export default function ArticlePage({ article }) {
  const schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    headline: article.title,
    datePublished: article.publishedAt,
    dateModified: article.updatedAt,
    author: { "@type": "Person", name: article.author }
  }
  return (
    <>
      <script
        type="application/ld+json"
        dangerouslySetInnerHTML={{ __html: JSON.stringify(schema) }}
      />
      <article>{/* content */}</article>
    </>
  )
}
Best practice 6: Handle JavaScript errors gracefully
If your JavaScript throws an error during rendering, the page may show empty content to crawlers.
Error boundaries (React)
class SEOErrorBoundary extends React.Component {
  state = { hasError: false }

  static getDerivedStateFromError() {
    return { hasError: true }
  }

  render() {
    if (this.state.hasError) {
      // Return meaningful HTML that crawlers can index
      return (
        <div>
          <h1>Content temporarily unavailable</h1>
        </div>
      )
    }
    return this.props.children
  }
}
SSR error handling
If server-side rendering fails, return a meaningful HTTP status code (500) rather than a 200 with empty content. Empty 200 responses tell Google "this page exists but has no content" — Google may index the empty page.
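That decision can be isolated in a small helper. A minimal sketch (ssrStatus is a hypothetical name, not a framework API):

```javascript
// Map an SSR render result to an HTTP status.
// Never ship an empty or failed render as a 200 — crawlers will index it.
function ssrStatus(html, renderError) {
  if (renderError) return 500 // render threw: signal "retry later"
  if (!html || html.trim() === "") return 500 // empty shell: same problem
  return 200
}

console.log(ssrStatus("<h1>Post title</h1>", null)) // → 200
console.log(ssrStatus("", null)) // → 500
```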
Best practice 7: Optimize for AI crawlers
AI search crawlers (GPTBot, ClaudeBot, PerplexityBot) do not execute JavaScript. They rely on the initial HTML response.
Ensure content is in the HTML
# Test what AI crawlers see
curl -s -H "User-Agent: GPTBot" https://yoursite.com/page | head -100
The response should contain your page content, not just a JavaScript bundle reference.
Don't block AI crawlers
Check your robots.txt:
# Allow AI crawlers
User-agent: GPTBot
Allow: /
User-agent: ClaudeBot
Allow: /
User-agent: PerplexityBot
Allow: /
Blocking AI crawlers prevents your content from being cited in AI-generated answers.
Testing checklist
Manual testing
- View page source (Ctrl+U) — not DevTools Elements. This shows what crawlers receive.
- Google URL Inspection — in Search Console, enter a URL to see how Google renders it.
- Google Cache — search cache:yoursite.com/page to see Google's stored version.
- curl test — curl -s https://yoursite.com/page | grep "expected content".
- Mobile-Friendly Test — shows rendered HTML as Googlebot sees it.
Automated testing
Add these checks to your CI pipeline:
# Verify SSR output contains key content
curl -s $PREVIEW_URL/blog/my-post | grep -q "<h1>" || exit 1
# Verify sitemap is accessible
curl -s $PREVIEW_URL/sitemap.xml | grep -q "<urlset" || exit 1
# Verify meta tags in HTML source
curl -s $PREVIEW_URL/blog/my-post | grep -q '<meta name="description"' || exit 1
Common JavaScript SEO mistakes
| Mistake | Impact | Fix |
|---|---|---|
| Client-only meta tags | Crawlers see default/empty metadata | Use SSR/SSG for meta tag generation |
| onClick navigation | Pages not discovered by crawlers | Use <a href> links |
| Hash-based routing | All routes treated as one URL | Switch to history-based routing |
| Blocking AI crawlers | Content invisible to ChatGPT, Perplexity | Allow GPTBot, ClaudeBot in robots.txt |
| No sitemap | Crawlers miss dynamic routes | Generate sitemap programmatically |
| Empty error states | Google indexes blank pages | Return proper HTTP error codes |
Frequently Asked Questions
Does Google fully support JavaScript rendering?
Google renders JavaScript using a recent version of Chromium. It handles React, Vue, and Angular well, but rendering is queued (not real-time), may time out on complex apps, and consumes crawl budget. Server-side rendering is more reliable.
How long does Google take to render JavaScript pages?
Google separates crawling and rendering into two phases. Crawling happens quickly, but rendering may be delayed by hours or days depending on the page's priority and Google's rendering queue capacity.
Do I need SSR for every page?
No. Authenticated pages (dashboards, settings) don't need to be indexed, so CSR is fine. Focus SSR/SSG on public pages you want in search results.
Can I use dynamic rendering (serving pre-rendered HTML to bots only)?
Google considers dynamic rendering an acceptable workaround for sites that can't implement SSR. However, they recommend SSR/SSG as the long-term solution. Dynamic rendering adds maintenance overhead and can diverge from what users see.
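The usual shape of dynamic rendering is a user-agent check in middleware that routes bots to pre-rendered HTML. A rough sketch of the detection step (bot list illustrative, not exhaustive):

```javascript
// Decide whether a request should receive pre-rendered HTML.
const BOT_PATTERNS = [/googlebot/i, /bingbot/i, /gptbot/i, /claudebot/i, /perplexitybot/i]

function isBot(userAgent) {
  return BOT_PATTERNS.some((re) => re.test(userAgent || ""))
}

console.log(isBot("Mozilla/5.0 (compatible; Googlebot/2.1)")) // → true
console.log(isBot("Mozilla/5.0 (Windows NT 10.0) Chrome/120")) // → false
```

Because this path diverges from what users receive, it needs its own monitoring; a stale pre-render cache is a common source of the divergence the answer above warns about.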
Related Resources
- JavaScript SEO — Core concepts and rendering challenges
- SPA SEO Optimization — Use case for single-page apps
- Next.js SEO Setup Template — Ready-to-use configuration
- JavaScript Rendering Explained — How browsers and bots render JS
- React SEO Guide — React-specific SEO patterns