How to Find Out Why a Page Is Not Indexed
Step-by-step diagnostic guide using Google Search Console to identify and fix unindexed pages. No agency needed.
The Problem: Pages That Disappear Into the Void
You ship a page. You wait. Google doesn't index it. No traffic. No visibility. Nothing.
This kills founders. You've built something real, but if Google can't find it—or won't index it—you're invisible.
The brutal truth: most unindexed pages are diagnostic problems, not mysteries. Google tells you exactly why. You just need to know where to look and what the signals mean.
This guide walks you through the fastest way to diagnose why a page isn't indexed using Google Search Console (GSC) alone. No paid tools. No guessing. Just facts.
Prerequisites: What You Need Before You Start
Before you diagnose anything, make sure you have the basics in place:
- Google Search Console access: You need GSC set up and your domain verified. If you haven't done this yet, follow the 10-minute setup guide first. This is non-negotiable.
- The URL you're checking: Have the exact page URL ready. Copy-paste it from your browser. Don't guess at the format.
- Recent publication: The page should be live and accessible (not behind a login, not returning a 404). If it's brand new, wait 24–48 hours before diagnosing. Google needs time to crawl.
- At least one backlink or sitemap submission: Google needs a reason to crawl your page. Either submit it via your sitemap or request indexing directly in GSC.
If you skip these, you'll waste time chasing ghosts.
Step 1: Verify the Page Is Actually Not Indexed
Before you diagnose, confirm the page really isn't indexed. This sounds obvious, but many founders assume non-indexing when the page simply hasn't been crawled yet.
Check Using the Site Operator
Open Google Search and type:
site:yourdomain.com/exact-page-url
Replace yourdomain.com with your actual domain and /exact-page-url with the exact path of the page you're checking.
If the page appears in results, it's indexed. Stop here—you don't have an indexing problem.
If nothing appears, move to the next step.
Check Using GSC URL Inspection
This is the most reliable method. Open Google Search Console, select your property, and paste the exact URL into the URL Inspection tool at the top.
GSC will return one of these statuses:
- URL is on Google: The page is indexed. You're done.
- URL is not on Google: The page has been crawled but not indexed, or it hasn't been crawled at all.
- Coverage issue: There's a specific problem blocking indexing.
The URL Inspection tool is your diagnostic starting point. Learn how to use it effectively if you're unfamiliar.
If GSC says "URL is not on Google," proceed to Step 2.
Step 2: Check the Coverage Report for Errors and Warnings
Now that you've confirmed the page isn't indexed, go to the Coverage report in Google Search Console.
The Coverage report shows four categories:
- Error: Pages Google crawled but couldn't process (usually 4xx or 5xx errors).
- Warning: Pages crawled but not indexed due to specific issues (like noindex tags or duplicate content).
- Valid: Pages successfully indexed.
- Excluded: Pages you or Google intentionally excluded from indexing.
Look for your URL in the Error or Warning sections. If it appears, GSC will tell you exactly why the page isn't indexed.
Common Coverage issues include:
- Submitted URL marked 'noindex': You have a noindex tag on the page. This is intentional blocking. If you want it indexed, remove the noindex tag and resubmit.
- Submitted URL not found (404): Google crawled the page and got a 404 error. The page is returning "not found" status. Check your server logs and fix the response code.
- Submitted URL has a redirect: The page redirects elsewhere. If this is intentional, Google will index the destination instead. If it's accidental, fix the redirect.
- Submitted URL blocked by robots.txt: Your robots.txt file is blocking Google from crawling this page. Check your robots.txt and remove the block if it's unintentional.
- Submitted URL blocked by Disallow in robots.txt: Same as above. Your robots.txt is actively preventing crawl.
- Submitted URL returns soft 404: Google crawled the page but thinks it's a 404 (blank page, minimal content, or error-like structure). Fix the page content or HTTP status code.
- Crawled – currently not indexed: Google crawled the page but chose not to index it. This is the hardest diagnosis. See Step 3.
If you see a specific error in Coverage, fix it and resubmit. If your page isn't listed in Coverage at all, move to Step 3.
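If you want to triage several of the statuses above yourself, fetching the URL and mapping the response to the likely Coverage issue is a quick sanity check. Here's a minimal Python sketch; `classify_response` is a hypothetical helper I'm naming for illustration, not part of any Google API:

```python
def classify_response(status: int, headers: dict) -> str:
    """Map an HTTP status and response headers to the likely Coverage issue."""
    # A noindex sent via the X-Robots-Tag response header blocks
    # indexing just like a meta robots tag in the HTML does.
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        return "noindex (X-Robots-Tag header)"
    if status == 404:
        return "not found (404)"
    if status in (301, 302, 307, 308):
        return "redirect - Google will index the destination instead"
    if 500 <= status < 600:
        return "server error (5xx)"
    if status == 200:
        return "ok - check the page source for noindex and canonical tags next"
    return f"unexpected status {status}"
```

Pair this with a fetch of the live URL (for example via `urllib.request`) and you can scan a whole list of pages in one pass instead of inspecting them one by one in GSC.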
Step 3: Diagnose "Crawled – Currently Not Indexed"
This is the most common non-indexing scenario. Google crawled your page, but it decided not to index it.
Why? Usually one of these reasons:
Low Content Quality or Thin Content
Google crawled the page and found insufficient value. This happens when:
- The page has fewer than ~300 words of unique content.
- The page is mostly duplicate content from other pages.
- The page offers no unique perspective or information.
- The page is auto-generated without meaningful differentiation.
Fix: Expand the page with original, valuable content. Aim for at least 1,000 words if it's a core page. Make sure it answers a specific question or solves a real problem. Check this guide on common indexing issues for more context on content-quality blocks.
Poor Internal Linking
The page isn't linked from anywhere on your site. Google crawls via links. If your page is orphaned (no internal links pointing to it), Google may crawl it once but deprioritize indexing.
Fix: Add internal links from your homepage, navigation, or related pages. Use descriptive anchor text. This signals to Google that the page is important.
Crawl Budget Issues
Your site has high crawl demand relative to crawl budget. Google may crawl some pages but not index all of them if the site is huge or has many duplicate URLs.
Fix: This is rare for small sites. If you have thousands of pages, focus on blocking low-value URLs via robots.txt or using the noindex directive strategically.
Canonical Tag Issues
The page has a canonical tag pointing to a different URL. Google respects canonicals and indexes the target instead.
Fix: Check the page source for a canonical tag. If it's pointing elsewhere unintentionally, remove it. If the page should be canonical to itself, use a self-referential canonical or remove the tag entirely.
Noindex Tag (Accidental)
You or a plugin added a noindex tag to the page by mistake. This is a direct instruction to Google: "Don't index this."
Fix: Check the page source (right-click → Inspect → search for "noindex"). Remove the meta tag and resubmit.
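If you'd rather check for accidental noindex tags programmatically (handy when auditing many pages at once), the Python standard library is enough. A minimal sketch; `has_noindex` is a name I made up for illustration:

```python
from html.parser import HTMLParser

class NoindexFinder(HTMLParser):
    """Scan HTML for a meta robots/googlebot tag containing noindex."""

    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        name = (a.get("name") or "").lower()
        content = (a.get("content") or "").lower()
        # Both <meta name="robots"> and <meta name="googlebot"> can block indexing.
        if name in ("robots", "googlebot") and "noindex" in content:
            self.noindex = True

def has_noindex(html: str) -> bool:
    finder = NoindexFinder()
    finder.feed(html)
    return finder.noindex
```

Feed it the raw HTML of any page you're worried about; a `True` result means a crawler-blocking directive is present.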
JavaScript Rendering Issues
The page relies on JavaScript to render its content. Google defers JavaScript rendering, and if rendering fails or times out, it may see a mostly blank page.
Fix: Test the page in GSC's URL Inspection tool. It shows you what Google sees after rendering. If content is missing, you need server-side rendering or a different approach. Read more on common indexing issues here.
Server Issues or Slow Response
Google tried to crawl the page but hit server errors, timeouts, or extremely slow response times. Repeated failures make Google back off and crawl less, so the page stays unindexed.
Fix: Check your server logs for 5xx errors or timeout events. Optimize page load speed. Use Lighthouse to identify performance bottlenecks.
Step 4: Use URL Inspection to See What Google Actually Sees
Go back to the URL Inspection tool in GSC. Click the Coverage tab within the inspection results.
This shows you:
- Last crawl date: When Google last visited the page.
- Crawl status: Whether the crawl succeeded or failed.
- Indexing status: Why the page isn't indexed (if applicable).
- Rendering: A screenshot of what Google sees after rendering JavaScript.
- Page resources: Images, CSS, JavaScript files Google tried to load.
Click Test live URL to see the page as Google crawls it right now. This often reveals rendering issues or missing content that blocks indexing.
If Google sees a blank page or error message, your page isn't ready for indexing. Fix the rendering or content, then resubmit.
Step 5: Request Indexing (If You've Fixed the Problem)
Once you've identified and fixed the issue, request indexing:
- Go to GSC URL Inspection.
- Paste the URL.
- Click Request Indexing.
Google will re-crawl and re-evaluate the page. This usually takes 24–48 hours, sometimes longer.
Don't spam the request button. One request per page per fix is enough. Learn more about when and how to request indexing.
Step 6: Check for Robots.txt or Sitemap Issues
If your page still isn't indexed after fixing the obvious problems, check these files:
Robots.txt Blocks
Open yourdomain.com/robots.txt in your browser. Look for:
Disallow: /path-to-your-page
Or:
Disallow: /
If your page path is listed in a Disallow rule, Google can't crawl it. Remove the rule and redeploy.
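You can test a robots.txt rule against a specific URL before redeploying anything, using Python's built-in parser. A small sketch; `googlebot_allowed` is an illustrative wrapper name:

```python
from urllib.robotparser import RobotFileParser

def googlebot_allowed(robots_txt: str, url: str) -> bool:
    """Return True if the given robots.txt text permits Googlebot to crawl url."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())  # parse the file contents directly
    return rp.can_fetch("Googlebot", url)
```

This lets you verify a fix locally: paste your current robots.txt contents in, check the blocked URL, adjust the rules, and check again before you push.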
Missing From Sitemap
If the page is new, submit your sitemap to GSC. New pages should be in your sitemap so Google knows to crawl them.
Older pages might not need sitemap inclusion if they're well-linked internally, but it doesn't hurt.
Sitemap Errors
Go to GSC Sitemaps report. If your sitemap shows errors, Google can't read it properly. Fix the XML syntax and resubmit. Here's a step-by-step guide to submitting sitemaps correctly.
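Before resubmitting, it's worth confirming the missing page is actually listed in your sitemap. A standard-library sketch that extracts every `<loc>` entry so you can grep for the URL; `sitemap_urls` is a hypothetical helper name:

```python
import xml.etree.ElementTree as ET

# The standard sitemap namespace every valid sitemap declares.
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(xml_text: str) -> list:
    """Return every <loc> URL from a sitemap document, whitespace-trimmed."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.iter(SITEMAP_NS + "loc")]
```

If `ET.fromstring` raises a parse error on your real sitemap, that by itself explains a GSC Sitemaps error: Google can't read malformed XML either.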
Step 7: Check for Canonical or Redirect Chains
Go to the page in your browser. Open Developer Tools (F12) and check the page source:
- Search for <link rel="canonical" in the source.
- If it exists, note the URL it points to.
- If it points to a different page, that's why yours isn't indexed. Google indexes the canonical target instead.
- If the canonical is self-referential (points to itself), it's fine.
- If there's no canonical, that's also fine.
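The same canonical check can be scripted when you have many pages to audit. A minimal sketch using the standard-library HTML parser; `get_canonical` is an illustrative name, and it assumes a simple single-valued `rel="canonical"` attribute:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Record the href of the first <link rel="canonical"> tag seen."""

    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and (a.get("rel") or "").lower() == "canonical":
            self.canonical = a.get("href")

def get_canonical(html: str):
    finder = CanonicalFinder()
    finder.feed(html)
    return finder.canonical  # None means no canonical tag, which is fine
```

Compare the returned URL against the page's own URL: a mismatch you didn't intend is exactly the accidental-canonical problem described above.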
Also check for redirect chains:
- Visit the page URL.
- Note if the browser redirects to a different URL.
- If it does, follow the chain. Multiple redirects waste crawl budget and confuse Google.
Fix: Remove unnecessary canonicals or redirects. Keep redirect chains to one hop maximum.
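The hop-counting logic is easy to sanity-check offline if you keep your redirect rules in a simple source-to-target mapping. This sketch (`follow_chain` is hypothetical, and the dict-based input is a stand-in for however your redirects are actually configured) follows the chain and raises on loops:

```python
def follow_chain(redirects: dict, start: str, max_hops: int = 10) -> list:
    """Follow a url -> target redirect map from start; return the full chain."""
    chain = [start]
    while chain[-1] in redirects:
        nxt = redirects[chain[-1]]
        # A loop or a very long chain means Google will likely give up.
        if nxt in chain or len(chain) > max_hops:
            raise ValueError("redirect loop or too many hops: " + " -> ".join(chain + [nxt]))
        chain.append(nxt)
    return chain
```

If `len(chain) - 1` is greater than 1, you have more than one hop: collapse the intermediate redirects so the original URL points straight at the final destination.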
Step 8: Verify HTTPS and Mixed Content
Google prefers HTTPS. An HTTP-only page can still be indexed, but mixed content (an HTTPS page loading HTTP resources) triggers browser warnings and undermines how Google evaluates the page.
Check:
- Is the page served over HTTPS? (Look at the URL bar—it should show a lock icon.)
- Are all resources (images, scripts, stylesheets) also HTTPS?
- If you see warnings in the browser console about "mixed content," fix them.
Fix: Set up HTTPS properly and redirect all HTTP traffic to HTTPS. This is foundational for SEO.
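Scanning a page for insecure resource URLs is also scriptable. A sketch with the standard-library parser; `find_mixed_content` is a made-up name, and the list of resource attributes it checks is a simplification (a full audit would also cover CSS `url()` references and inline styles):

```python
from html.parser import HTMLParser

# (tag, attribute) pairs that load subresources and can cause mixed content.
RESOURCE_ATTRS = {("img", "src"), ("script", "src"), ("iframe", "src"),
                  ("link", "href"), ("source", "src"),
                  ("audio", "src"), ("video", "src")}

class MixedContentFinder(HTMLParser):
    """Collect resource URLs that are loaded over plain HTTP."""

    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        for key, value in attrs:
            if (tag, key) in RESOURCE_ATTRS and value and value.startswith("http://"):
                self.insecure.append(value)

def find_mixed_content(html: str) -> list:
    finder = MixedContentFinder()
    finder.feed(html)
    return finder.insecure
```

An empty list means no obvious mixed content in the markup; anything returned is a URL you should switch to HTTPS.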
Step 9: Monitor and Recheck
After you've fixed the issue:
- Request indexing via GSC.
- Wait 48 hours.
- Check the URL Inspection tool again. Has Google re-crawled?
- Check the Coverage report. Is the page now in the "Valid" section?
- Run the site: operator search again. Does the page now appear in Google Search?
If the page is still not indexed after 7 days, you may have missed something. Go back through the steps and check for:
- Accidental noindex tags added by plugins or CMS.
- Server-side redirects you forgot about.
- Content that's too thin or duplicated elsewhere.
- Crawl errors in GSC that you overlooked.
Common Mistakes Founders Make (And How to Avoid Them)
Mistake 1: Not Waiting Long Enough
Indexing takes time. New pages can take 1–2 weeks to appear in Google. If your page is less than 48 hours old, stop worrying and wait.
Fix: Only diagnose indexing issues for pages that have been live for at least 48 hours and have been submitted to GSC or linked from your homepage.
Mistake 2: Confusing "Crawled" With "Indexed"
Google crawled the page ≠ Google indexed the page. Crawling is just visiting. Indexing is adding to the search index.
Many pages are crawled but not indexed. This is normal if the page is low-quality, duplicate, or blocked intentionally.
Fix: Use the exact GSC terminology. If GSC says "Crawled – currently not indexed," the page was visited but not added to the index.
Mistake 3: Ignoring the Coverage Report
The Coverage report is your diagnostic goldmine. Most founders never look at it.
Fix: Check Coverage weekly. It tells you every indexing problem on your site. Fix errors first, then warnings.
Mistake 4: Adding Noindex by Accident
Plugins, staging environments, and CMS defaults sometimes add noindex tags without telling you.
Fix: Check the page source for <meta name="robots" content="noindex"> or <meta name="googlebot" content="noindex">. If it's there unintentionally, remove it.
Mistake 5: Not Submitting Your Sitemap
Google doesn't automatically know about your pages. You need to tell it via sitemap or internal links.
Fix: Submit your sitemap to GSC. Do this on day one, not month six.
Mistake 6: Creating Orphan Pages
Pages with no internal links are "orphaned." Google may crawl them once but won't prioritize them for indexing.
Fix: Link every page from at least one other page on your site. Ideally, link from your homepage or main navigation.
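If you have a crawl of your site (from Screaming Frog, a sitemap crawl, or your own script), orphan detection reduces to a set difference over the link graph. A sketch under that assumption; `orphan_pages` and the tuple-based link format are illustrative:

```python
def orphan_pages(pages, links, roots=("/",)):
    """Return pages that no internal link points to.

    pages: iterable of all known page paths on the site.
    links: iterable of (source, target) internal link pairs.
    roots: entry points (like the homepage) that are expected to
           have no inbound internal links and shouldn't be flagged.
    """
    linked = {target for _source, target in links}
    return sorted(p for p in pages if p not in linked and p not in roots)
```

Every path this returns is a page Google has to discover without help from your own site structure: link to it from somewhere relevant.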
Pro Tips for Faster Indexing
Use IndexNow for Instant Notification
IndexNow pings Bing and Yandex whenever you publish a new page. Google doesn't participate in IndexNow, so it won't speed up Google indexing directly, but it gets your content discovered on the other engines immediately and costs you nothing.
Setup time: 10 minutes. Worth it.
Request Indexing Strategically
GSC has daily limits on indexing requests. Use them for your most important pages only. Learn when to request indexing and when to skip it.
Monitor Crawl Stats
Go to GSC Settings → Crawl stats. This shows you how much crawl budget Google is using on your site.
If Google isn't crawling your site much, you may have:
- Too many errors (fix them).
- Too many redirects (simplify).
- Too much duplicate content (use canonicals).
Improving crawl stats directly improves indexing speed.
Build Internal Links Strategically
Pages linked from your homepage are crawled and indexed faster. Use your homepage and main navigation to link to your most important pages.
Keep Content Fresh
Google favors fresh content. If you update a page, submit it for re-indexing. This signals that the page is worth re-crawling.
Troubleshooting Specific Scenarios
Scenario: "Discovered – currently not indexed"
This means Google found the page (via sitemap or backlinks) but hasn't crawled it yet.
Fix: This is temporary. Wait 48 hours. If the page is still not crawled, check if it's blocked by robots.txt or if your site has serious crawl issues.
Scenario: Page Indexed Yesterday, Missing Today
The page was indexed, then removed. This happens when:
- You added a noindex tag.
- The page now returns a 404 or 5xx error.
- The page is now behind a login.
- Google re-evaluated the content and decided it's not valuable enough.
Fix: Check the Coverage report for the reason. If the page is returning an error, fix the server issue. If it's noindex, remove it. If the content was deindexed due to quality, improve the content and resubmit.
Scenario: Thousands of Pages Not Indexed
If most of your site isn't indexed, you likely have a site-wide issue:
- Robots.txt is blocking everything.
- All pages have noindex tags.
- Your site has serious crawl errors.
- Your site is new and Google hasn't crawled it much yet.
Fix: Check robots.txt first. Then check the Coverage report for site-wide errors. Fix the top errors and resubmit your sitemap.
Summary: The Diagnostic Flowchart
- Verify non-indexing: Use site: operator and GSC URL Inspection. Confirm the page really isn't indexed.
- Check Coverage: Look for specific errors or warnings. If you see one, fix it and resubmit.
- Check "Crawled – currently not indexed": Diagnose quality, linking, or rendering issues.
- Check URL Inspection rendering: See what Google actually sees. Fix blank pages or missing content.
- Request indexing: Once fixed, submit the URL for re-crawl.
- Check robots.txt and sitemap: Ensure the page isn't blocked and is in your sitemap.
- Verify HTTPS and canonicals: Check for redirect chains and mixed content.
- Monitor and recheck: Wait 48 hours, then verify the page is now indexed.
Most non-indexing issues are fixable in under 30 minutes using GSC alone. You don't need paid tools. You need to know where to look and what the signals mean.
Getting Help: When to Use Tools Beyond GSC
GSC tells you 90% of what you need. But if you're stuck:
- Ahrefs, Semrush, or Moz: These tools show you pages across your site that aren't indexed, making it easier to spot patterns.
- Lighthouse: Tests page performance and rendering. Useful if you suspect JavaScript issues.
- Screaming Frog: Crawls your site and shows you robots.txt blocks, canonicals, and redirects all in one place.
But honestly? Start with GSC. It's free and it's usually enough.
One More Thing: Prevention Is Easier Than Diagnosis
The best way to avoid indexing problems is to prevent them:
- Set up GSC on day one of your project.
- Submit your sitemap immediately.
- Link every page from at least one other page.
- Never add noindex tags unless you specifically don't want a page indexed.
- Keep robots.txt clean and simple.
- Monitor your Coverage report weekly.
- Test new pages in GSC URL Inspection before shipping.
If you do this, you'll rarely have indexing problems. And when you do, you'll diagnose them in minutes using GSC.
The Bottom Line
Pages don't get indexed because of specific, fixable problems. Google tells you what they are in the Coverage report and URL Inspection tool.
You don't need an agency. You don't need expensive tools. You need to know where to look and what to fix.
Follow this guide, and you'll find and fix indexing issues faster than any consultant can. Ship faster. Stay visible.