
JavaScript SEO Best Practices: Ensure Search Engines Index Your JS Content

A technical checklist for making JavaScript-rendered content crawlable and indexable. Covers rendering strategies, meta tags, internal linking, and testing methods.

Rankwise Team·Updated Apr 13, 2026·5 min read

JavaScript powers modern web applications, but it creates a fundamental tension with search engines: crawlers need HTML to index content, while SPAs generate HTML in the browser. This guide covers every technical requirement for making JavaScript-rendered content visible to Google, Bing, and AI search platforms.


The rendering gap

When Googlebot requests a page, it receives the initial HTML response. For server-rendered pages, this HTML contains the full content. For client-rendered SPAs, it contains an empty shell and a <script> tag.

Google's rendering pipeline processes JavaScript, but with significant limitations:

| Factor | Server-Rendered | Client-Rendered |
| --- | --- | --- |
| Initial HTML | Complete content | Empty shell |
| Indexing speed | Immediate | Delayed (hours to days) |
| Rendering reliability | 100% | ~90% (timeouts, errors) |
| AI crawler support | Full | None (no JS execution) |
| Resource cost | Server CPU | Google's rendering budget |

The gap between what users see and what crawlers see determines your indexation rate. Closing this gap is the goal of JavaScript SEO.


Best practice 1: Choose the right rendering strategy

Static Site Generation (SSG)

Best for content that changes infrequently (blog posts, docs, landing pages):

// Next.js SSG
export async function generateStaticParams() {
  const posts = await getAllPosts()
  return posts.map(post => ({ slug: post.slug }))
}

export default async function Post({ params }) {
  const post = await getPost(params.slug)
  return <Article post={post} />
}

When to use: Content pages, marketing sites, documentation.

Server-Side Rendering (SSR)

Best for dynamic content that needs to be fresh on every request:

// Next.js SSR (dynamic rendering)
export const dynamic = "force-dynamic"

export default async function Dashboard({ params }) {
  const data = await fetchDashboardData(params.id)
  return <DashboardView data={data} />
}

When to use: Dashboards with public data, search results pages, real-time content.

Most applications benefit from mixing strategies:

  • SSG for content pages (blog, glossary, guides)
  • SSR for dynamic public pages (search results, filtered catalogs)
  • CSR for authenticated sections (user dashboards, settings)

Search engines don't need to index authenticated pages, so CSR is fine for those.
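In Next.js App Router, for example, the mix can be declared per route segment. A sketch only — the route paths are illustrative:

```javascript
// app/blog/[slug]/page.js — SSG: prerendered at build time when
// generateStaticParams is exported (as in the example above)

// app/search/page.js — SSR: opt the segment into per-request rendering
export const dynamic = "force-dynamic"

// app/dashboard/page.js — CSR: mark the component tree client-only
// by putting "use client" on the first line of the file
```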


Best practice 2: Unique meta tags per page

Every indexable page needs its own title, description, and canonical in the initial HTML response.

Framework-specific implementation

// Next.js App Router
export async function generateMetadata({ params }) {
  const page = await getPage(params.slug)
  return {
    title: page.title,
    description: page.metaDescription,
    alternates: { canonical: page.canonicalUrl },
    openGraph: {
      title: page.title,
      description: page.metaDescription,
      url: page.canonicalUrl
    }
  }
}

Verify in HTML source

Check that meta tags appear in the initial HTML (view-source, not DevTools):

curl -s https://yoursite.com/page | grep -E '<title>|<meta name="description"'

If the tags only appear after JavaScript execution, they may be missed or delayed by crawlers.
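The same smoke test can be scripted. A rough sketch — `extractMeta` is a made-up helper, and a regex is only adequate for a quick check (use a real HTML parser for anything stricter):

```javascript
// Rough check that title and meta description exist in the raw HTML
// response, i.e. before any JavaScript runs.
function extractMeta(html) {
  const title = html.match(/<title>([^<]*)<\/title>/i)
  const description = html.match(
    /<meta\s+name="description"\s+content="([^"]*)"/i
  )
  return {
    title: title ? title[1] : null,
    description: description ? description[1] : null
  }
}

const html = '<head><title>Widget Guide</title>' +
  '<meta name="description" content="How widgets work"></head>'
console.log(extractMeta(html))
// { title: 'Widget Guide', description: 'How widgets work' }
```

A `null` in the result for a page you expect to rank is the signal that metadata is being injected client-side.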


Best practice 3: Use crawlable internal links

Search engines discover pages by following links. In SPAs, links must render as standard <a> elements with href attributes.

Correct patterns

// React Router — renders crawlable <a> tag
<Link to="/products/widget">Widget</Link>

// Next.js — renders crawlable <a> tag
<Link href="/products/widget">Widget</Link>

// Standard HTML — always crawlable
<a href="/products/widget">Widget</a>

Incorrect patterns

// No href — invisible to crawlers
<div onClick={() => router.push("/products/widget")}>Widget</div>

// JavaScript-only navigation — invisible to crawlers
<button onClick={() => navigate("/page")}>Go to page</button>

// Fragment URLs — treated as same page
<a href="#/products/widget">Widget</a>

Verify with curl:

# Extract all links from a page's HTML source
curl -s https://yoursite.com/ | grep -oP 'href="[^"]*"' | sort -u

Every important internal page should appear in the link list.
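The audit can also be scripted. A sketch whose flagging rules mirror the incorrect patterns above — `auditHrefs` is a made-up name and the regex approach is deliberately rough:

```javascript
// Split href values into crawlable links and links crawlers
// cannot follow: fragment-only URLs and javascript: pseudo-links.
function auditHrefs(html) {
  const hrefs = [...html.matchAll(/href="([^"]*)"/g)].map(m => m[1])
  return {
    crawlable: hrefs.filter(h => h.startsWith("/") || h.startsWith("http")),
    flagged: hrefs.filter(h => h.startsWith("#") || h.startsWith("javascript:"))
  }
}

const sample = '<a href="/products/widget">W</a>' +
  '<a href="#/products/widget">W</a>' +
  '<a href="javascript:void(0)">W</a>'
console.log(auditHrefs(sample))
```

Note that onClick-only navigation produces no href at all, so it will simply be absent from both lists — compare the crawlable list against your sitemap to catch those.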


Best practice 4: Generate XML sitemaps

SPAs with dynamic routes need a programmatically generated sitemap:

// Next.js sitemap.ts
export default async function sitemap() {
  const pages = await getAllPages()
  return pages.map(page => ({
    url: `https://yoursite.com${page.url}`,
    lastModified: page.updatedAt,
    changeFrequency: "weekly",
    priority: page.priority
  }))
}

Submit the sitemap in Google Search Console and reference it in robots.txt:

Sitemap: https://yoursite.com/sitemap.xml
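Two constraints from the sitemap protocol are worth checking programmatically: each sitemap file is capped at 50,000 URLs, and every entry must be an absolute URL. A sketch — `validateSitemapEntries` is an illustrative helper, not a framework API:

```javascript
// Sanity-check sitemap entries before publishing.
function validateSitemapEntries(entries) {
  const errors = []
  if (entries.length > 50000) {
    // The sitemap protocol caps each file at 50,000 URLs;
    // beyond that, split into multiple files under a sitemap index.
    errors.push("too many URLs for one sitemap file")
  }
  for (const e of entries) {
    if (!/^https?:\/\//.test(e.url)) {
      errors.push(`not an absolute URL: ${e.url}`)
    }
  }
  return errors
}

console.log(validateSitemapEntries([
  { url: "https://yoursite.com/blog/post" },
  { url: "/blog/relative-path" } // flagged
]))
// [ 'not an absolute URL: /blog/relative-path' ]
```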

Best practice 5: Implement structured data

Add JSON-LD structured data in the HTML <head>, not injected by client-side JavaScript:

// Server-rendered JSON-LD
export default function ArticlePage({ article }) {
  const schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    headline: article.title,
    datePublished: article.publishedAt,
    dateModified: article.updatedAt,
    author: { "@type": "Person", name: article.author }
  }

  return (
    <>
      <script
        type="application/ld+json"
        dangerouslySetInnerHTML={{ __html: JSON.stringify(schema) }}
      />
      <article>{/* content */}</article>
    </>
  )
}

Best practice 6: Handle JavaScript errors gracefully

If your JavaScript throws an error during rendering, the page may show empty content to crawlers.

Error boundaries (React)

class SEOErrorBoundary extends React.Component {
  state = { hasError: false }

  static getDerivedStateFromError() {
    return { hasError: true }
  }

  render() {
    if (this.state.hasError) {
      // Return meaningful HTML that crawlers can index
      return (
        <div>
          <h1>Content temporarily unavailable</h1>
        </div>
      )
    }
    return this.props.children
  }
}

SSR error handling

If server-side rendering fails, return a meaningful HTTP status code (500) rather than a 200 with empty content. Empty 200 responses tell Google "this page exists but has no content" — Google may index the empty page.
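One way to enforce this at the server boundary, sketched as a framework-agnostic wrapper (`renderWithFallback` is a made-up name; adapt it to your server's API):

```javascript
// Wrap the SSR render call so failures surface as a 500,
// never as a 200 with an empty shell that crawlers might index.
function renderWithFallback(renderFn) {
  try {
    return { status: 200, body: renderFn() }
  } catch (err) {
    // Crawlers treat 5xx as transient and retry later.
    return { status: 500, body: "<h1>Temporary server error</h1>" }
  }
}
```

Googlebot treats 5xx responses as temporary and retries, whereas a blank 200 can be indexed as-is.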


Best practice 7: Optimize for AI crawlers

AI search crawlers (GPTBot, ClaudeBot, PerplexityBot) do not execute JavaScript. They rely on the initial HTML response.

Ensure content is in the HTML

# Test what AI crawlers see
curl -s -H "User-Agent: GPTBot" https://yoursite.com/page | head -100

The response should contain your page content, not just a JavaScript bundle reference.

Don't block AI crawlers

Check your robots.txt:

# Allow AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

Blocking AI crawlers prevents your content from being cited in AI-generated answers.


Testing checklist

Manual testing

  1. View page source (Ctrl+U) — not DevTools Elements. This shows what crawlers receive.
  2. Google URL Inspection — in Search Console, enter a URL to see how Google renders it.
  3. Rich Results Test — public tool that shows the rendered HTML as Googlebot sees it (Google has retired the cache: operator and the standalone Mobile-Friendly Test).
  4. curl test — curl -s https://yoursite.com/page | grep "expected content".

Automated testing

Add these checks to your CI pipeline:

# Verify SSR output contains key content
curl -s $PREVIEW_URL/blog/my-post | grep -q "<h1>" || exit 1

# Verify sitemap is accessible
curl -s $PREVIEW_URL/sitemap.xml | grep -q "<urlset" || exit 1

# Verify meta tags in HTML source
curl -s $PREVIEW_URL/blog/my-post | grep -q '<meta name="description"' || exit 1

Common JavaScript SEO mistakes

| Mistake | Impact | Fix |
| --- | --- | --- |
| Client-only meta tags | Crawlers see default/empty metadata | Use SSR/SSG for meta tag generation |
| onClick navigation | Pages not discovered by crawlers | Use <a href> links |
| Hash-based routing | All routes treated as one URL | Switch to history-based routing |
| Blocking AI crawlers | Content invisible to ChatGPT, Perplexity | Allow GPTBot, ClaudeBot in robots.txt |
| No sitemap | Crawlers miss dynamic routes | Generate sitemap programmatically |
| Empty error states | Google indexes blank pages | Return proper HTTP error codes |

Frequently Asked Questions

Does Google fully support JavaScript rendering?

Google renders JavaScript using a recent version of Chromium. It handles React, Vue, and Angular well, but rendering is queued (not real-time), may time out on complex apps, and consumes crawl budget. Server-side rendering is more reliable.

How long does Google take to render JavaScript pages?

Google separates crawling and rendering into two phases. Crawling happens quickly, but rendering may be delayed by hours or days depending on the page's priority and Google's rendering queue capacity.

Do I need SSR for every page?

No. Authenticated pages (dashboards, settings) don't need to be indexed, so CSR is fine. Focus SSR/SSG on public pages you want in search results.

Can I use dynamic rendering (serving pre-rendered HTML to bots only)?

Google considers dynamic rendering an acceptable workaround for sites that can't implement SSR. However, they recommend SSR/SSG as the long-term solution. Dynamic rendering adds maintenance overhead and can diverge from what users see.
