Coverage Issues in Google Search Console: A Plain-English Guide
Decode Coverage Issues in Google Search Console. 30-minute fixes for errors, warnings, and excluded pages. Plain-English guide for founders.
Understanding Coverage Issues Before You Fix Them
Google Search Console's Coverage report is where your SEO visibility goes to die—or thrive. It's the single most important signal that Google can actually find and index your pages. Ignore it, and you're shipping content into the void. Pages that don't show as Valid in the Coverage report aren't ranking. They're not even in Google's index.
This guide decodes the seven most common Coverage issues you'll see, gives you the exact reason each one happens, and walks you through a 30-minute fix for each. No agency jargon. No fluff. Just the brutal mechanics of why Google can't index your pages and how to fix it.
The Coverage report lives in Google Search Console under "Indexing > Coverage" (newer versions of Search Console label it "Indexing > Pages"). It shows you four categories: Valid (indexed and good), Valid with warnings (indexed but flagged), Excluded (Google found them but chose not to index), and Error (Google couldn't index them). Most founders only look at this when traffic tanks. By then, the damage is done.
Prerequisites: What You Need Before You Start
Before you dive into fixing Coverage issues, make sure you have the right tools and access.
You'll need:
- Access to Google Search Console for your domain (owner or editor role)
- Access to your site's backend or hosting (to edit robots.txt, add redirects, or remove pages)
- A text editor (even Notepad works)
- 30 minutes per issue type
- A way to test changes (staging environment is ideal, but not required)
Optional but helpful:
- Access to Seoable's domain audit to get a baseline of your crawl health and indexing status across all pages
- A browser with developer tools (Chrome, Firefox, Safari all have them built in)
- A spreadsheet to track which pages have which issues
If you don't have Google Search Console set up yet, you'll need to add your property and verify ownership. Google's official Page indexing report documentation walks through the setup process, but it's straightforward: sign in with your Google account, add your domain, and verify ownership through DNS, HTML file, or other methods.
One critical note: Coverage issues don't appear instantly. Google needs time to crawl your site and attempt to index pages. If you've just launched or made changes, wait 24-48 hours before expecting the Coverage report to show accurate data.
Issue #1: "Crawled – Currently Not Indexed"
Why This Happens
Google found your page, crawled it, but decided not to index it. This is the most common Coverage warning and the most misunderstood. It doesn't mean your page is broken or bad. It means Google decided the page isn't valuable enough to include in its index right now. Reasons include:
- Low-quality or thin content. Pages with fewer than 300 words, duplicate content, or pages that don't answer a user's search intent get deprioritized.
- Crawl budget constraints. If your site has thousands of pages, Google crawls the most important ones first. Less important pages wait.
- Too many parameters or duplicate URLs. If you have 50 versions of the same page with different sorting parameters, Google indexes one and crawls the rest without indexing.
- Redirect chains. If Page A redirects to Page B, which redirects to Page C, Google might crawl all three but only index one.
- Noindex tags. If you accidentally added <meta name="robots" content="noindex"> to a page, Google crawls it but doesn't index it.
The 30-Minute Fix
Step 1: Identify which pages are affected (5 minutes).
In Google Search Console, go to Indexing > Coverage. Click the "Crawled – currently not indexed" tab. Export the list or note the top 10-20 pages. Open each one in your browser.
Step 2: Check for noindex tags (5 minutes).
Right-click the page, select "View Page Source," and search for noindex. If you find <meta name="robots" content="noindex"> or <meta name="googlebot" content="noindex">, that's your culprit. Delete it. Also check your HTTP headers: if there's an X-Robots-Tag: noindex header, remove that too.
Save the file, deploy to production, and move to Step 3.
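If you want to sweep a batch of URLs instead of viewing source one at a time, a short script can flag both forms of noindex. This is a minimal sketch, assuming Python with the requests library installed; the URLs are hypothetical placeholders you'd swap for pages from your exported list:

import requests

urls = [
    "https://yoursite.com/blog/my-post",
    "https://yoursite.com/pricing",
]

for url in urls:
    resp = requests.get(url, timeout=10)
    # Check the HTTP response header for a server-level noindex.
    header_noindex = "noindex" in resp.headers.get("X-Robots-Tag", "").lower()
    # Crude string check for a meta noindex tag; confirm hits in the page source.
    meta_noindex = 'content="noindex' in resp.text.lower()
    print(f"{url} | X-Robots-Tag noindex: {header_noindex} | meta noindex: {meta_noindex}")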
Step 3: Check content quality and length (5 minutes).
Read the page. Does it answer a specific search query? Is it at least 300 words? If it's a landing page or navigation page, 300 words might be fine. If it's a blog post or resource, aim for 800+ words. If your page is thin, add more substantive content. Use Conductor Academy's guide on Index Coverage to understand what Google considers "quality" content.
Step 4: Check for duplicate content (5 minutes).
Use Google Search Console's URL Inspection tool. Paste the URL that's not indexed. Look for the "Coverage" section. If Google says "Excluded," check why. If it says "Crawled – currently not indexed," look at the "Linked from" section. Are there other pages linking to this URL with different parameters (e.g., ?utm_source=email or ?sort=price)? If yes, add a canonical tag to point all versions to the primary URL.
Add this to your page's <head> section:
<link rel="canonical" href="https://yoursite.com/primary-page-url">
Step 5: Test and resubmit (10 minutes).
After making changes, use the URL Inspection tool in Google Search Console. Paste the URL, click "Test live URL," and wait for Google to recrawl it. This usually takes 30 seconds to 2 minutes. If Google finds the page now and says "Indexable," click "Request indexing." If it still says "Not indexable," check the error message. Common errors include:
- Blocked by robots.txt. Check your robots.txt file and make sure you're not blocking the page.
- Blocked by noindex. Double-check you removed the noindex tag.
- Redirect error. If the page redirects, make sure the redirect chain is no more than 2 hops.
If you're still stuck, this Centori guide to the Coverage Report has detailed troubleshooting for each error type.
Pro Tip: Crawl Budget and Priority
If you have thousands of pages and many are "Crawled – currently not indexed," you might have a crawl budget problem. Google allocates crawl budget based on your site's importance and server response speed. To improve crawl budget:
- Remove low-value pages. If you have 10,000 product variations but only 100 sell, delete or noindex the rest.
- Speed up your site. Faster sites get higher crawl budgets. Run a speed audit and fix the biggest bottlenecks.
- Use internal linking strategically. Link to important pages from your homepage and high-authority pages. Google crawls linked pages more often.
For a deeper dive into how crawl budget works, check out Seoable's crawlability primer for founders, which explains robots.txt, crawl budget, and rendering in plain terms.
Issue #2: "Excluded by 'noindex' Tag"
Why This Happens
You explicitly told Google not to index this page. This is intentional 90% of the time—you don't want Google indexing your login page, thank-you page, or draft blog posts. But sometimes you forget you added the noindex tag, or someone on your team added it without telling you.
The 30-Minute Fix
Step 1: Understand why the page is noindexed (5 minutes).
Open the page in your browser. Is it a page that should be indexed? Examples:
- Should NOT be indexed: Login pages, sign-up pages, thank-you pages, checkout pages, password reset pages, admin pages, duplicate content pages.
- Should be indexed: Blog posts, product pages, category pages, resource pages, landing pages.
If it's a page that should NOT be indexed, you're done. Leave the noindex tag. If it should be indexed, move to Step 2.
Step 2: Remove the noindex tag (5 minutes).
Edit the page's HTML. Search for <meta name="robots" content="noindex"> or <meta name="googlebot" content="noindex">. Delete these lines. Also check for <meta name="robots" content="noindex, follow"> or similar variants.
If you're using a CMS like WordPress, Webflow, or Shopify, there's usually a checkbox or dropdown for indexing settings. Look for "SEO" or "Visibility" settings and make sure "Index this page" or "Allow indexing" is checked.
For WordPress, the setting is usually in the Yoast SEO or Rank Math plugin. For Webflow, it's in the page settings under "SEO." For Shopify, it's in the page or product settings under "Search engine listing."
Step 3: Check HTTP headers (5 minutes).
Sometimes noindex is set at the server level, not in the HTML. In Google Search Console, go to the URL Inspection tool, paste the URL, and look for the "Coverage" section. If it says "Excluded by 'noindex' tag," click "Test live URL" and look at the "HTTP headers" section. If there's an X-Robots-Tag: noindex header, you need to remove it from your server configuration.
If you're on a managed hosting platform (Vercel, Netlify, AWS), this is usually in your deployment settings or environment variables. If you're on shared hosting or a VPS, contact your hosting provider or check your server's configuration files (usually .htaccess for Apache or nginx.conf for Nginx).
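For reference, a server-level noindex header is typically set with directives like these (a sketch assuming Apache's mod_headers or a standard Nginx server block; deleting the line removes the header):

# Apache (.htaccess or virtual host config, requires mod_headers)
Header set X-Robots-Tag "noindex"

# Nginx (inside a server or location block in nginx.conf)
add_header X-Robots-Tag "noindex";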
Step 4: Deploy and test (5 minutes).
After removing the noindex tag, deploy your changes to production. Wait 1-2 minutes, then use the URL Inspection tool again. Click "Test live URL" and wait for Google to recrawl. It should now say "Indexable."
Step 5: Request indexing (5 minutes).
Once Google confirms the page is indexable, click "Request indexing" in the URL Inspection tool. Google will add it to the crawl queue and index it within 24-72 hours.
Pro Tip: Bulk Remove Noindex Tags
If you have dozens of pages with noindex tags you want to remove, don't do it one by one. Export the list from Google Search Console, then use a find-and-replace tool to remove the tags from your entire site. Most code editors (VS Code, Sublime, etc.) support regex find-and-replace. Search for <meta name="robots" content="noindex[^>]*> and replace with nothing.
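If your pages are static HTML files, a short script can do the sweep in one pass. A minimal sketch in Python, assuming your site's HTML lives in a local ./site folder (a hypothetical path; point it at your actual build output):

import re
from pathlib import Path

# Matches robots and googlebot noindex meta tags, with any trailing attributes.
NOINDEX = re.compile(r'<meta name="(?:robots|googlebot)" content="noindex[^>]*>\s*', re.IGNORECASE)

for path in Path("site").rglob("*.html"):
    html = path.read_text(encoding="utf-8")
    cleaned = NOINDEX.sub("", html)
    if cleaned != html:
        path.write_text(cleaned, encoding="utf-8")
        print(f"Removed noindex from {path}")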
For a detailed explanation of how to audit and fix technical SEO issues like this across your entire site, read Seoable's guide to hidden SEO pitfalls in auto-generated sites, which covers noindex tags, canonical issues, and indexing problems in detail.
Issue #3: "Excluded by 'nofollow' Tag"
Why This Happens
This is a common misconception: nofollow doesn't prevent indexing. It only prevents Google from following links on the page. However, if ALL links on a page have nofollow, and the page has no internal links pointing to it, Google might not crawl it. If a page isn't crawled, it can't be indexed.
More commonly, you see "Excluded by 'nofollow'" in the Coverage report when:
- You added nofollow to the entire page. This is rare but possible with <meta name="robots" content="nofollow">.
- You're confusing nofollow with noindex. They're different. Nofollow means "don't follow links on this page." Noindex means "don't index this page."
- Google is showing you a warning about link equity. If a page has only nofollow links, Google might deprioritize it.
The 30-Minute Fix
Step 1: Verify the issue (5 minutes).
Use the URL Inspection tool in Google Search Console. Paste the URL and check the error message. Does it say "Excluded by 'nofollow'"? If yes, click "Test live URL" and look at the "Crawlability" section. Google should still be able to crawl the page.
Step 2: Remove nofollow from the page-level meta tag (5 minutes).
If the issue is a page-level <meta name="robots" content="nofollow"> tag, remove it. This is the only scenario where nofollow actually prevents indexing.
Search your page's HTML for nofollow. If you find <meta name="robots" content="nofollow">, delete it.
Step 3: Check internal linking (5 minutes).
If the page doesn't have nofollow at the page level, the issue is likely that Google can't find the page through internal links. Check your site's navigation, footer, and sidebar. Does at least one page link to this page without a nofollow tag? If not, add an internal link from your homepage or a high-authority page.
Step 4: Deploy and resubmit (5 minutes).
After making changes, use the URL Inspection tool to test the live URL. Click "Request indexing" once Google confirms it's indexable.
Step 5: Monitor (5 minutes).
Check back in Google Search Console after 48 hours. The page should move from "Excluded by 'nofollow'" to "Valid" or "Valid with warnings." If it doesn't, the issue might be something else. Check for noindex tags, robots.txt blocks, or redirect issues.
Pro Tip: Nofollow vs. Noindex
They're not the same. Here's the difference:
- Noindex: "Don't index this page." Google crawls it but doesn't add it to the index. The page won't appear in search results.
- Nofollow: "Don't follow links on this page." Google crawls the page and indexes it, but doesn't follow links on it to crawl other pages. The page can still rank.
If you want to prevent a page from ranking, use noindex. If you want to prevent Google from crawling links on a page (e.g., user-generated content or affiliate links), use nofollow.
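For illustration, here's what each looks like in the HTML (hypothetical snippets):

<!-- Noindex: crawl this page, but keep it out of search results -->
<meta name="robots" content="noindex">

<!-- Page-level nofollow: index this page, but don't follow any of its links -->
<meta name="robots" content="nofollow">

<!-- Link-level nofollow: don't pass equity through this single link -->
<a href="https://example.com/affiliate-offer" rel="nofollow">See the offer</a>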
Issue #4: "Excluded by 'Canonical' Tag"
Why This Happens
You told Google that another page is the "canonical" (primary) version of this page. Google listened and indexed the canonical page instead. This is intentional most of the time—you use canonical tags to consolidate duplicate pages. But sometimes you set the wrong canonical, or you set a canonical to a page that doesn't exist.
The 30-Minute Fix
Step 1: Identify the canonical target (5 minutes).
Use the URL Inspection tool. Paste the URL that's excluded. Look for the "Canonical" section. It should show you which page Google considers the primary version. Open that page in your browser. Is it the right page? Is it indexed?
If the canonical points to a non-existent page, a redirect, or a page that's also excluded, you have a problem.
Step 2: Verify the canonical is correct (5 minutes).
Ask yourself: Is the canonical page the one I want to rank? If yes, move to Step 3. If no, you need to change the canonical tag.
Edit the page's HTML. Find the <link rel="canonical" href="https://..."> tag. Change the href to the correct URL. Make sure:
- The canonical URL is absolute (starts with https://), not relative.
- The canonical URL has no parameters (no ?utm_source=email or ?sort=price).
- The canonical URL doesn't redirect to another page.
- The canonical URL is indexable (no noindex tag, not blocked by robots.txt).
Step 3: Check the canonical target is indexed (5 minutes).
Use the URL Inspection tool to check the canonical page. Is it indexed? If it says "Valid," you're good. If it says "Excluded," you need to fix that page first before the current page can be indexed.
Step 4: Deploy and test (5 minutes).
After updating the canonical, deploy your changes. Wait 1-2 minutes, then use the URL Inspection tool to test the live URL. Click "Request indexing."
Step 5: Monitor the canonical page (5 minutes).
After 48 hours, check Google Search Console. The current page should move from "Excluded by 'Canonical'" to "Excluded by 'User-declared canonical'," which means Google found your canonical tag and is honoring it. The canonical page should remain "Valid."
Pro Tip: Self-Referential Canonicals
If a page's canonical points to itself (e.g., https://yoursite.com/page has a canonical tag pointing to https://yoursite.com/page), that's fine. It's called a self-referential canonical and it's a best practice. It tells Google, "This is the primary version of this URL." Use self-referential canonicals on all pages to prevent URL variations (www vs. non-www, http vs. https) from being treated as duplicates.
Issue #5: "Excluded by 'User-Declared Canonical'"
Why This Happens
This is similar to the previous issue, but it's a status, not an error. Google found your canonical tag and is honoring it. The page is excluded from the index because you told Google to index a different page instead. This is usually intentional.
However, if you see thousands of pages with this status, you might have an over-canonicalization problem. You might be canonicalizing too many pages to a single page, which wastes crawl budget.
The 30-Minute Fix
Step 1: Decide if this is intentional (5 minutes).
Ask yourself: Do I want this page to rank? If no, leave the canonical as is. If yes, remove the canonical tag and make the page self-referential.
Step 2: Remove or update the canonical (5 minutes).
If you want the page to rank, edit its HTML. Find the <link rel="canonical" href="https://..."> tag. Replace it with a self-referential canonical:
<link rel="canonical" href="https://yoursite.com/this-pages-url">
Step 3: Check for parameter variations (5 minutes).
If you have multiple versions of the same page with different parameters (e.g., ?color=red, ?color=blue), add canonicals to all of them pointing to the primary version. This consolidates ranking signals.
Step 4: Deploy and monitor (5 minutes).
After making changes, deploy to production. Wait 48 hours, then check Google Search Console. The page should move from "Excluded by 'User-declared canonical'" to "Valid."
Step 5: Check the canonical target (5 minutes).
If you removed the canonical, make sure the page is now indexed. If you updated it, make sure the new canonical target is indexed.
Pro Tip: Canonical Chains
Never create a canonical chain. Page A's canonical shouldn't point to Page B if Page B's canonical points to Page C. Google will follow the chain, but it wastes crawl budget. Always have canonicals point directly to the primary page.
Issue #6: "Excluded by 'Robots.txt'"
Why This Happens
Your robots.txt file is blocking Google from crawling the page. This is usually intentional—you don't want Google crawling your admin panel, API endpoints, or staging environment. But sometimes you accidentally block a page you want to rank.
The 30-Minute Fix
Step 1: Locate your robots.txt file (5 minutes).
It should be at https://yoursite.com/robots.txt. Open it in your browser or text editor. You should see something like:
User-agent: *
Disallow: /admin/
Disallow: /staging/
Disallow: /*.pdf
This tells all bots (User-agent: *) not to crawl pages in /admin/, /staging/, or any PDF files.
Step 2: Check if your page is blocked (5 minutes).
Does your robots.txt contain a rule that matches your page's URL? For example, if your page is /blog/my-post, and your robots.txt says Disallow: /blog/, then your page is blocked.
If your page is blocked, remove or modify the rule. If the rule is too broad, use a more specific path. For example, instead of Disallow: /admin/, use Disallow: /admin/settings/ if you only want to block the settings page.
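If you'd rather test URLs against your robots.txt programmatically, Python's standard library includes a parser. A minimal sketch (the domain and paths are placeholders):

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://yoursite.com/robots.txt")
rp.read()  # fetches and parses the live robots.txt file

for url in ["https://yoursite.com/blog/my-post", "https://yoursite.com/admin/settings/"]:
    verdict = "crawlable" if rp.can_fetch("Googlebot", url) else "BLOCKED"
    print(f"{url}: {verdict}")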
Step 3: Use robots.txt syntax correctly (5 minutes).
Robots.txt has specific syntax. Here are the basics:
- Disallow: Blocks crawling. Disallow: /admin/ blocks all pages in the admin folder.
- Allow: Allows crawling even if a broader Disallow rule applies. With Disallow: /admin/ in place, Allow: /admin/public/ keeps that subfolder crawlable.
- User-agent: Specifies which bot the rule applies to. User-agent: googlebot applies only to Google's bot.
- Crawl-delay: Tells bots how long to wait between requests. Google ignores this directive, and it's usually not needed.
Example:
User-agent: *
Disallow: /admin/
Disallow: /staging/
Allow: /admin/public/
User-agent: googlebot
Crawl-delay: 1
Step 4: Deploy and test (5 minutes).
After updating robots.txt, deploy your changes. Use the URL Inspection tool in Google Search Console. Paste your page's URL and click "Test live URL." Google should now say the page is crawlable.
Step 5: Request indexing (5 minutes).
Once Google confirms the page is crawlable, click "Request indexing." Google will add it to the crawl queue.
Pro Tip: Test Your robots.txt
Google Search Console has a built-in robots.txt report (under Settings). It shows whether Google can fetch your robots.txt file and flags syntax problems, and the URL Inspection tool will tell you whether a specific URL is blocked by robots.txt. This is faster than manually checking your robots.txt file.
Issue #7: "Excluded by 'Redirect'"
Why This Happens
Your page redirects to another page. Google crawls the original page, follows the redirect, and indexes the destination page instead. This is intentional when you're consolidating pages or moving content. But if the redirect chain is too long, Google might not follow it.
The 30-Minute Fix
Step 1: Identify the redirect chain (5 minutes).
Use the URL Inspection tool. Paste the URL. Look for the "Redirect" section. It should show you where the page redirects to. Is the destination page indexed? If yes, you're probably fine. If no, move to Step 2.
Step 2: Check redirect depth (5 minutes).
A redirect chain is when Page A redirects to Page B, which redirects to Page C. Google prefers direct redirects (Page A to Page B). If your chain is more than 2 hops, shorten it.
To check your redirect chain, use a browser extension like "Redirect Path" or "Redirect Trace." Install it, then visit the page. It will show you all the redirects.
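You can also trace the chain with a short script instead of an extension. A minimal sketch, assuming Python with the requests library (the starting URL is a placeholder):

import requests
from urllib.parse import urljoin

def redirect_chain(url, max_hops=10):
    hops = []
    while len(hops) < max_hops:
        # Fetch without following redirects so each hop is visible.
        resp = requests.get(url, allow_redirects=False, timeout=10)
        if resp.status_code not in (301, 302, 303, 307, 308):
            break
        target = urljoin(url, resp.headers["Location"])  # resolve relative Location headers
        hops.append((resp.status_code, url, target))
        url = target
    return hops

for status, src, dst in redirect_chain("https://yoursite.com/old-page"):
    print(f"{status}: {src} -> {dst}")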
Step 3: Fix the redirect (5 minutes).
If the chain is too long, update your redirects to point directly to the final destination. If Page A redirects to Page B, and Page B redirects to Page C, change Page A's redirect to point directly to Page C.
In your server configuration (.htaccess for Apache, nginx.conf for Nginx) or your CMS, find the redirect rule and update the destination.
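For example, collapsing an A to B to C chain into single hops might look like this (a sketch with hypothetical paths, assuming Apache's mod_alias or a standard Nginx config):

# Apache (.htaccess): both old URLs now jump straight to the final page
Redirect 301 /page-a /page-c
Redirect 301 /page-b /page-c

# Nginx equivalent (inside the server block)
location = /page-a { return 301 /page-c; }
location = /page-b { return 301 /page-c; }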
Step 4: Check the destination page (5 minutes).
Make sure the destination page is indexable. It should have no noindex tag, not be blocked by robots.txt, and have a self-referential canonical.
Step 5: Deploy and monitor (5 minutes).
After updating redirects, deploy your changes. Wait 48 hours, then check Google Search Console. The original page should either move to "Valid" (if you removed the redirect) or stay "Excluded by 'Redirect'" (if you kept the redirect). The destination page should be "Valid."
Pro Tip: HTTP Status Codes Matter
Use 301 (permanent) redirects for pages you're moving permanently. Use 302 (temporary) redirects for pages you're moving temporarily. Google treats 301s as permanent and will eventually stop crawling the old page. Google will keep crawling 302s.
For most cases, use 301.
Bulk Fixes: When You Have Hundreds of Coverage Issues
If you have hundreds or thousands of Coverage issues, fixing them one by one will take weeks. Here's how to prioritize and bulk-fix.
Step 1: Export and Categorize (10 minutes)
In Google Search Console, go to Indexing > Coverage. Click each issue type and export the list. You should have separate lists for:
- Crawled – currently not indexed
- Excluded by 'noindex' tag
- Excluded by 'nofollow' tag
- Excluded by 'Canonical' tag
- Excluded by 'Robots.txt'
- Excluded by 'Redirect'
Count how many pages are in each category. Focus on the largest categories first.
Step 2: Identify Patterns (10 minutes)
Look at the URLs in each category. Do they all have something in common? For example:
- All URLs in /admin/? Update robots.txt to block the entire folder.
- All URLs with ?utm_source=? Add a canonical tag to consolidate parameters.
- All URLs with /staging/? Delete the staging environment or noindex it entirely.
- All product pages with low sales? Delete or noindex them to improve crawl budget.
Patterns are your friend. One fix can resolve hundreds of issues.
Step 3: Bulk Apply Fixes (20 minutes)
Once you've identified patterns, apply fixes in bulk:
- Bulk noindex: If you're excluding 500 pages, apply noindex across the whole section through your CMS or a shared page template instead of editing each page individually. (Note: robots.txt blocks crawling, not indexing; it can't apply a noindex.)
- Bulk canonical: If you have 200 product variations, add a canonical tag to all of them pointing to the primary product. Use a template or script to automate this.
- Bulk delete: If you have 1,000 low-value pages, delete them. It's faster than noindexing each one.
- Bulk redirect: If you're consolidating pages, use a redirect rule that matches a pattern (e.g., all /old-blog/* redirects to /new-blog/*); see the sketch after this list.
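That pattern rule might look like this (a sketch assuming Apache's mod_alias or Nginx; the paths mirror the example above):

# Apache (.htaccess): redirect every old blog URL to its new home
RedirectMatch 301 ^/old-blog/(.*)$ /new-blog/$1

# Nginx equivalent
rewrite ^/old-blog/(.*)$ /new-blog/$1 permanent;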
For a detailed approach to this kind of bulk audit and optimization, check out Seoable's guide to using Claude Opus 4.7's 1M context window to audit an entire site at once.
Step 4: Resubmit in Bulk (10 minutes)
After making bulk changes, use Google Search Console's "Request indexing" feature for your most important pages. Requests go through the URL Inspection tool one URL at a time, and Google caps how many you can submit per day, so prioritize: your homepage, top 10 blog posts, and top products. For everything else, submit an updated XML sitemap and let Google recrawl on its own schedule.
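One low-effort way to hand Google the rest of the list is a fresh sitemap. A minimal sketch in Python that writes one from a priority URL list (the URLs are placeholders; upload the file to your site root and submit it under Indexing > Sitemaps in Search Console):

from xml.sax.saxutils import escape

urls = [
    "https://yoursite.com/",
    "https://yoursite.com/blog/top-post",
    "https://yoursite.com/pricing",
]

# Build one <url> entry per page, XML-escaping each location.
entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in urls)
sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    f"{entries}\n"
    "</urlset>\n"
)

with open("sitemap-priority.xml", "w", encoding="utf-8") as f:
    f.write(sitemap)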
Monitoring Coverage Issues: A Monthly Routine
Once you've fixed your Coverage issues, you need to prevent new ones from appearing. Check your Coverage report monthly.
The 10-Minute Monthly Check
- Open Google Search Console. Go to Indexing > Coverage.
- Check the numbers. Compare this month's "Valid" count to last month's. Is it growing or shrinking? Growing is good. Shrinking is bad.
- Look for new errors. Scroll through the error categories. Are there new issues you didn't see last month? If yes, investigate.
- Spot-check a few pages. Click on a few URLs in the "Crawled – currently not indexed" list and audit them. Are they thin content? Do they have noindex tags?
- Check your robots.txt. Did anyone accidentally block important pages?
- Review your redirects. Are any redirect chains longer than 2 hops?
If everything looks good, you're done for the month. If you spot issues, fix them immediately using the 30-minute fix guides above.
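To speed up the spot-checks, a small script can run the same tests across your key pages each month. A minimal sketch, assuming Python with the requests library (the page list is a placeholder):

import requests

PAGES = [
    "https://yoursite.com/",
    "https://yoursite.com/pricing",
]

for url in PAGES:
    resp = requests.get(url, timeout=10)
    problems = []
    if resp.status_code != 200:
        problems.append(f"status {resp.status_code}")
    if "noindex" in resp.headers.get("X-Robots-Tag", "").lower():
        problems.append("X-Robots-Tag noindex")
    if 'content="noindex' in resp.text.lower():
        problems.append("meta noindex")
    print(f"{url}: {', '.join(problems) if problems else 'looks indexable'}")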
For a more comprehensive monthly checklist, see Seoable's 10-minute SEO review for founders, which covers Coverage issues, ranking changes, and content decay.
Common Mistakes That Create Coverage Issues
Here are the mistakes that create Coverage issues in the first place. Avoid them.
Mistake #1: Noindexing Too Much
Founders often noindex pages thinking it will improve SEO. It doesn't. Noindex tells Google not to index a page. If you noindex 50% of your site, you're telling Google to ignore half your content. Only noindex pages that shouldn't rank: login pages, thank-you pages, checkout pages, duplicate content.
Mistake #2: Overly Broad Robots.txt Rules
If you write Disallow: / in robots.txt, you're blocking your entire site. If you write Disallow: /p, you're blocking /products/, /pages/, /pricing/, and anything else starting with /p. Be specific. Use exact paths.
Mistake #3: Redirect Chains
Page A redirects to Page B, which redirects to Page C. Google follows the chain, but it's inefficient. Always redirect directly to the final destination.
Mistake #4: Canonical Chains
Page A's canonical points to Page B, which has a canonical pointing to Page C. Google will eventually figure it out, but it's inefficient. Always have canonicals point directly to the primary page.
Mistake #5: Canonical to Non-Existent Pages
If your canonical points to a page that doesn't exist or returns a 404, Google can't index anything. Always canonical to a real, indexable page.
Mistake #6: Mixing Noindex and Canonical
If a page has both noindex and a canonical tag, Google will honor the noindex. The canonical is ignored. Don't mix them.
Mistake #7: Ignoring Coverage Issues
Most founders check Google Search Console once a quarter, if at all. By then, hundreds of pages are unindexed. Check monthly. Fix issues immediately.
When to Bring in Help
Most Coverage issues are fixable in 30 minutes. But if you have thousands of pages, a complex site architecture, or issues you can't diagnose, you might need help.
Signs you need an expert:
- More than 10,000 pages. Bulk fixes are complex. You need someone who understands your site architecture.
- Millions of indexing issues. This suggests a fundamental problem (site-wide noindex, robots.txt block, redirect loop). An expert can diagnose faster.
- You've tried the fixes and they didn't work. There might be a server-level issue or a CMS-specific problem.
- You don't have access to your site's backend. You can't edit robots.txt or redirects without backend access.
If you're a founder without an SEO background, Seoable's one-time domain audit and keyword roadmap includes a crawl health analysis and specific recommendations for fixing Coverage issues. It's a $99 one-time fee and takes 60 seconds to generate.
Alternatively, if you want a comprehensive audit of your entire site's technical SEO, Seoable's guide to auditing Shopify stores in 30 minutes has a step-by-step process you can adapt to any platform.
Summary: The Coverage Issues Playbook
Here's what you need to remember:
Coverage issues are preventing your pages from ranking. Google Search Console's Coverage report shows you exactly which pages are indexed and which aren't. Ignore it and you're invisible.
Most Coverage issues have a 30-minute fix. The seven most common issues are:
- Crawled – currently not indexed: Add content, remove noindex, check for duplicates.
- Excluded by 'noindex' tag: Remove the noindex tag.
- Excluded by 'nofollow' tag: Remove page-level nofollow, add internal links.
- Excluded by 'Canonical' tag: Fix the canonical to point to the right page.
- Excluded by 'User-declared canonical': Remove or update the canonical.
- Excluded by 'Robots.txt': Update robots.txt to allow crawling.
- Excluded by 'Redirect': Shorten redirect chains, ensure destination is indexable.
Patterns solve bulk issues. If 500 pages are excluded, they probably share a pattern. Fix the pattern, not the pages.
Prevention beats fixing. Check your Coverage report monthly. Catch issues early.
When in doubt, use Google Search Console's URL Inspection tool. It tells you exactly why a page isn't indexed and what to do about it.
Start with your biggest Coverage issues. Fix them this week. Then set up a monthly check. In 30 days, you'll have more pages indexed, more organic visibility, and more traffic.
Ship or stay invisible. Coverage issues are the difference between the two.
Additional Resources
For deeper dives into specific Coverage issues and advanced troubleshooting, these resources are invaluable:
Google's official Page indexing report documentation explains every status and how to interpret them. Conductor Academy's comprehensive guide to Index Coverage breaks down each issue type with real examples. Centori's ultimate guide to the Coverage Report includes troubleshooting trees for each error. Decoding's detailed post on Google Search Console errors covers fixes for the most common issues like "Crawled – currently not indexed" and canonical problems. Artlogic's support article on Coverage issues walks through the Coverage page and resolution steps. Lofty's explanation of Index Coverage Issues helps site owners monitor and maintain search presence. This YouTube tutorial on fixing Coverage issues shows step-by-step URL inspection and validation.
For broader SEO context, Seoable's guide to the difference between indexing and ranking explains why Coverage matters before you optimize for rankings. Seoable's SEO triage guide for founders shows which SEO tasks move the needle. And if you're building a site from scratch, Seoable's Week 1 SEO guide for founders includes Coverage audits as a day-one deliverable.