Reading the Google Search Console Coverage Report
Decode Google Search Console Coverage statuses. Learn which errors to fix first, which to ignore, and get indexed faster.
What You're Actually Looking At
Google Search Console's Coverage Report tells you which pages Google found, which it indexed, and why it rejected the rest. It's the difference between shipping content and shipping visible content.
Most founders never open it. They assume Google indexes everything automatically. They're wrong. The Coverage Report is where you discover that 40% of your site isn't indexed, your pagination is broken, or your canonicals are pointing to the wrong place.
This guide cuts through the noise. We'll walk through the Coverage Report step-by-step, decode what each status means, and show you which errors to fix immediately and which ones to ignore entirely.
Prerequisites: Get Your Ducks in a Row First
Before you dive into the Coverage Report, you need three things:
1. Google Search Console access. If you haven't set it up yet, start with the 10-minute setup guide. You'll need to verify your domain using DNS, HTML file, meta tag, or Google Analytics. Verifying your domain in Google Search Console covers every method.
2. A submitted sitemap. The Coverage Report pulls data from your sitemap first. If you haven't submitted one, the report will only show pages Google discovered through links. Submit your first sitemap to get a complete picture.
3. Patience. Google needs 24-48 hours to crawl and index new pages. If you just launched, wait before panicking about coverage.
You should also have GA4 and Google Tag Manager set up so you can tie indexing problems to actual traffic loss. But that's optional for reading the Coverage Report itself.
Navigating to the Coverage Report
The Coverage Report lives in Search Console's left sidebar under "Index."
Here's the path:
- Open Google Search Console
- Select your property (if you have multiple)
- Click Index in the left menu
- Click Coverage
You'll see a graph with four colored bars:
- Valid (green): Pages Google found and indexed successfully
- Valid with warnings (yellow): Pages indexed but with issues that might affect visibility
- Excluded (blue): Pages Google found but intentionally didn't index (usually the right call)
- Error (red): Pages Google couldn't index due to problems you need to fix
Below the graph, you'll see a table with individual pages grouped by status. This is where the real work happens.
Understanding Coverage Statuses: The Breakdown
Google uses specific status codes to explain why each page landed where it did. Knowing the difference between them is critical—because some require action and others don't.
Valid (Indexed and Serving)
This is what you want. The page is indexed and live in Google's search results.
No action needed. Move on.
Valid with Warnings
These pages are indexed, but Google flagged something that might hurt their ranking or visibility. Common warnings include:
- Submitted URL marked as noindex: Your sitemap includes a page with noindex in its meta tags. Remove the noindex tag or remove the page from your sitemap.
- Crawled – currently not indexed: Google crawled the page but hasn't indexed it yet. This happens with new pages. Wait 48 hours, then check again.
- Indexed, though blocked by robots.txt: The page is indexed, but you're blocking it in robots.txt. This is usually a mistake. Either remove the block or remove the page from your sitemap.
- Duplicate without user-selected canonical: You have duplicate pages without telling Google which one to index. Add a canonical tag pointing to the preferred version.
Action required: Fix warnings. They signal wasted crawl budget and lost ranking potential.
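For reference, the two tags behind most of these warnings live in a page's <head>. A minimal sketch with placeholder URLs:

```html
<!-- "Submitted URL marked as noindex": this tag tells Google not to
     index the page. Remove it, or remove the page from your sitemap. -->
<meta name="robots" content="noindex">

<!-- "Duplicate without user-selected canonical": this tag tells Google
     which version of a duplicated page to index. -->
<link rel="canonical" href="https://example.com/preferred-page">
```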
Excluded
Google found these pages but chose not to index them. This is often correct—you don't want to index paginated results, duplicate content, or internal search pages.
Common exclusion reasons:
- Noindex tag: You explicitly told Google not to index the page. This is intentional and good.
- Duplicate without user-selected canonical: Google found duplicates and picked one to index. If the wrong page was chosen, add a canonical tag.
- Blocked by robots.txt: You blocked the page in robots.txt. This is usually intentional (e.g., /admin, /search).
- Blocked by page robots meta tag: The page has noindex or none in its robots meta tag. Intentional. Leave it.
- Alternate page with proper canonical tag: This is a duplicate that has a canonical tag pointing elsewhere. Correct behavior.
- Soft 404: The page returns a success (200) status code but the content looks like an error page or is empty, so Google treats it as a 404 and won't index it. If the page should exist, fix its content; if it shouldn't, return a real 404.
- Blocked by robots.txt (though not by the site owner): Rare. Usually means a plugin or framework is blocking crawling unintentionally.
Action required: Check soft 404s. If those pages should exist, fix the underlying issue. For everything else, exclusions are usually correct.
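For comparison, here's what an intentional robots.txt block looks like for the /admin and /search examples above. A minimal sketch; adjust the paths to your own site:

```
# robots.txt at the site root. Pages under these paths will show as
# "Blocked by robots.txt" in the Coverage Report. That's expected.
User-agent: *
Disallow: /admin/
Disallow: /search
```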
Error
These pages should be indexed but aren't. Google tried to crawl them and failed. These require immediate attention.
Common error statuses:
- Server error (5xx): The page returned a 500, 502, 503, or similar error. Your server is broken or overloaded. Fix the server issue immediately.
- Not found (404): The page returned a 404 error. Either the page doesn't exist (delete it from your sitemap) or it's misconfigured (fix the URL or redirect).
- Unauthorized (401): The page requires authentication. Google can't index it without credentials. Either remove authentication or remove the page from your sitemap.
- Forbidden (403): The page is blocked from crawling. Check your robots.txt and server configuration.
- Redirect error: The page redirects in a chain or to a broken page. Simplify redirects to point directly to the final destination.
- Blocked by robots.txt: You're blocking the page in robots.txt but included it in your sitemap. Remove the block or the sitemap entry.
- Crawled – currently not indexed: Google crawled the page but hasn't indexed it yet. Wait 48 hours. If it persists, the page might have duplicate content or low-quality signals.
Action required: Fix all errors. These are preventing indexing.
Step 1: Look at the Graph First
Before you drill into individual pages, scan the graph at the top of the Coverage Report.
You're looking for two things:
1. Trend over time. Is the green (Valid) section growing or shrinking? A declining Valid count means pages are getting de-indexed or broken. A growing Valid count means your content is getting indexed.
2. The ratio of Valid to Error. If you have 1,000 indexed pages and 10 errors, that's fine. If you have 1,000 indexed pages and 500 errors, you have a systemic problem. Errors are the red flag.
If the graph looks healthy (mostly green, minimal red), you might not need to do anything right now. Skip ahead to Step 7 and keep monitoring.
If the graph shows declining Valid pages or growing Error counts, move to Step 2.
Step 2: Filter by Status to Find Problem Pages
The Coverage Report table below the graph shows individual pages. By default, it displays all statuses mixed together.
Filter by status to focus on what matters:
- Click the Status dropdown at the top of the table
- Select Error to see only pages Google couldn't index
- Note the count. If you have fewer than 10 errors, you can probably fix them manually. If you have 100+, you have a structural problem (bad sitemap, server issues, misconfigured robots.txt).
Export the error list to a spreadsheet:
- Click the Download button (looks like a down arrow) at the bottom right
- Select CSV to export all error URLs
- Open the CSV in Google Sheets or Excel
Now you have a working list of broken pages. You'll come back to this.
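If you'd rather pre-triage the export before inspecting URLs one by one, a short script can fetch each URL and group the results by status code. A minimal sketch in Python, assuming the export has a column named "URL" (check your CSV's actual header) and that the requests package is installed:

```python
import csv
from collections import Counter

import requests

def triage(csv_path: str) -> None:
    """Fetch each exported URL and tally results by HTTP status code."""
    statuses = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            url = row["URL"]  # assumption: the export's URL column is named "URL"
            try:
                # allow_redirects=False so 301/302 responses show up as-is
                resp = requests.get(url, timeout=10, allow_redirects=False)
                statuses[resp.status_code] += 1
                print(f"{resp.status_code}  {url}")
            except requests.RequestException as err:
                statuses["failed"] += 1
                print(f"FAIL  {url}  ({err})")
    print("\nStatus breakdown:", dict(statuses))

triage("coverage-errors.csv")  # hypothetical filename for your export
```

The status breakdown at the end is the pattern you're looking for in Step 3: mostly 404s, mostly 5xx, or mostly redirects.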
Repeat the filter for Valid with warnings. These are indexed but at risk. Fix them second.
Ignore Excluded pages unless you see a large count of soft 404s (those suggest broken pages).
Step 3: Diagnose Errors Using the URL Inspection Tool
For each error in your list, use Google Search Console's URL Inspection Tool to diagnose the exact problem.
Here's the process:
- Copy the first error URL from your CSV
- Paste it into the URL Inspection search bar at the top of Search Console
- Wait for the report to load (usually 10-30 seconds)
- Look at the Coverage section. It will show the exact error code (404, 500, robots.txt, etc.)
- Click View crawled page to see what Google actually received
- Compare it to what you expect. If the page looks broken or empty, that's your problem.
Repeat for the next 5-10 errors. You'll start seeing patterns. Maybe all errors are 404s (broken URLs). Maybe they're all 500s (server issue). Maybe they're all blocked by robots.txt (misconfigured).
Once you identify the pattern, you can fix them in bulk instead of one by one.
Step 4: Fix Errors in Priority Order
Not all errors are equally urgent. Fix them in this order:
Priority 1: Server Errors (5xx)
If your server is returning 500 errors, Google can't index anything. Fix your server first.
Action: Check your server logs. Is the site overloaded? Is a plugin broken? Is the database down? Fix the underlying issue. If you're on a shared hosting plan, upgrade or switch hosts.
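If you have access to your server's access log, a quick scan shows how often requests are failing with 5xx. A rough sketch in Python, assuming a standard common/combined log format where the status code is the ninth space-separated field; the log path is a placeholder:

```python
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # assumption: adjust to your server

counts = Counter()
with open(LOG_PATH) as log:
    for line in log:
        parts = line.split()
        # In common/combined log format, parts[8] is the HTTP status code
        if len(parts) > 8 and parts[8].startswith("5"):
            counts[parts[8]] += 1

print(counts if counts else "No 5xx responses found")
```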
Priority 2: 404 Errors on Important Pages
If a high-traffic page returns a 404, you're losing visibility immediately.
Action: Check if the page still exists. If yes, fix the URL or add a redirect. If no, remove it from your sitemap. Use Google Search Console's request indexing feature to re-crawl the page after you fix it.
Priority 3: robots.txt Blocks
If you're blocking pages in robots.txt but including them in your sitemap, Google gets confused.
Action: Either remove the robots.txt block or remove the page from your sitemap. Don't do both—decide whether the page should be indexed.
Priority 4: Redirect Chains
If a page redirects to another page, which redirects to another, Google loses ranking power and crawl budget.
Action: Simplify redirects. Page A should redirect directly to the final destination, not to Page B which redirects to Page C.
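To see whether a URL is redirecting in a chain, you can trace each hop. A minimal sketch using Python's requests library; the example URL is a placeholder:

```python
import requests

def trace_redirects(url: str) -> None:
    """Follow a URL's redirects and print every intermediate hop."""
    resp = requests.get(url, timeout=10, allow_redirects=True)
    for hop in resp.history:  # one entry per intermediate redirect
        print(f"{hop.status_code}  {hop.url}")
    print(f"{resp.status_code}  {resp.url}  (final)")
    if len(resp.history) > 1:
        print(f"Chain detected: point the first URL straight at {resp.url}")

trace_redirects("https://example.com/old-page")  # hypothetical URL
```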
Priority 5: Crawled But Not Indexed
If Google crawled the page but didn't index it, the page might have low-quality signals, duplicate content, or thin content.
Action: Check for duplicate content. Check for noindex tags you forgot about. Check the page quality—does it have enough unique content? If it's a new page, wait 48 hours. If it's old and still not indexed, the page probably shouldn't be indexed (low-quality, duplicate, or thin).
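One quick way to confirm exact duplicates among suspect URLs is to hash the response bodies. A rough sketch; it only catches byte-identical pages, and the URLs are placeholders:

```python
import hashlib
from collections import defaultdict

import requests

# Hypothetical URLs you suspect are duplicates
urls = [
    "https://example.com/page-a",
    "https://example.com/page-b",
]

by_hash = defaultdict(list)
for url in urls:
    body = requests.get(url, timeout=10).text
    by_hash[hashlib.sha256(body.encode()).hexdigest()].append(url)

for group in by_hash.values():
    if len(group) > 1:
        print("Byte-identical pages:", group)
```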
Step 5: Fix Valid with Warnings
Once errors are handled, fix warnings. These pages are indexed but at risk.
Duplicate Without Canonical
You have multiple versions of the same page (e.g., example.com/product and example.com/product?utm_source=email). Google doesn't know which one to rank.
Action: Add a canonical tag to all duplicate pages pointing to the preferred version. Place it in the <head> section:
<link rel="canonical" href="https://example.com/product">
If you're using a CMS (WordPress, Webflow, etc.), the canonical tag is usually automatic. Check your SEO plugin settings.
Noindex in Sitemap
Your sitemap includes a page with noindex in its meta tags. This is contradictory—you're telling Google to index it (via sitemap) and not to index it (via noindex tag).
Action: Remove the page from your sitemap. The noindex tag is what you want—it's intentional. Learn more about robots, sitemaps, and canonicals.
Indexed But Blocked by robots.txt
Google indexed the page before you blocked it in robots.txt. The page is still in the index but won't be crawled again.
Action: Decide whether the page should be indexed. If yes, remove the robots.txt block. If no, remove the robots.txt block and add a noindex tag instead; Google can't see a noindex tag on a page it isn't allowed to crawl, so the block alone won't remove the page from the index.
Step 6: Validate Your Fixes
After you fix errors, Google won't automatically re-crawl the pages. You need to request indexing.
- Go back to the URL Inspection Tool
- Paste the fixed URL
- Click Request indexing at the top right
- Google will re-crawl the page within 24-48 hours
You can request indexing for up to 200 URLs per day. If you have 500 errors, you'll need to spread requests across multiple days.
Alternatively, resubmit your sitemap. Google will re-crawl all pages in the sitemap within a few days.
Step 7: Monitor the Coverage Report Weekly
The Coverage Report changes constantly. New pages get indexed. Old pages get de-indexed. Errors appear and disappear.
Set a calendar reminder to check the Coverage Report weekly (or at least monthly). Look for:
- New errors: Did you introduce a bug? Did a plugin break something?
- Declining Valid count: Are pages being de-indexed? Why?
- Growing Excluded count: Are you accidentally blocking pages?
If you're tracking this for your quarterly SEO review, follow the founder's repeatable process to stay organized.
Common Mistakes Founders Make (And How to Avoid Them)
Mistake 1: Panicking About Excluded Pages
Excluded pages are usually correct. Google found them but decided not to index them—usually because they're duplicates, low-quality, or blocked intentionally.
Don't try to force Google to index excluded pages. If they should be indexed, add a canonical tag or fix the quality issue. If they shouldn't be indexed, leave them alone.
Mistake 2: Ignoring the Coverage Report Until It's Too Late
Errors accumulate. A misconfigured robots.txt might block 100 pages. A server issue might prevent indexing for weeks. By the time you notice, you've lost months of organic traffic.
Check the Coverage Report monthly. Catch problems early.
Mistake 3: Submitting Broken Pages to Your Sitemap
Your sitemap should only include pages that should be indexed. Don't include:
- 404 pages
- Redirects
- Noindex pages
- Pagination (like /page/2, /page/3)
- Internal search results
- Duplicate content
A clean sitemap makes the Coverage Report easier to read and helps Google crawl more efficiently.
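For reference, a clean sitemap is just a list of final, indexable URLs. A minimal sketch following the standard sitemap protocol, with placeholder URLs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Final, canonical, indexable URLs only: no redirects, no 404s,
       no noindex pages, no /page/2-style pagination. -->
  <url>
    <loc>https://example.com/</loc>
  </url>
  <url>
    <loc>https://example.com/pricing</loc>
  </url>
</urlset>
```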
Mistake 4: Not Fixing Canonical Issues
Duplicate pages without canonicals waste crawl budget and dilute ranking power. Google has to guess which version to rank.
Add canonicals. It takes 10 minutes and fixes the problem permanently.
Mistake 5: Requesting Indexing for Everything
You can only request indexing for 200 URLs per day. Don't waste quota on pages that don't matter (old blog posts, low-traffic pages, duplicates).
Focus on new pages and fixed errors. Learn when to actually use the indexing request feature.
Reading the Coverage Report for Different Site Types
E-commerce Sites
E-commerce sites often have massive Coverage Reports (thousands of product pages). Focus on:
- Errors on top-selling products: Fix these first. They drive revenue.
- Excluded pages: Check for soft 404s on product pages. If a product page returns a 404, that's an error, not an exclusion.
- Duplicate product pages: Add canonicals to filter pages (size, color, etc.) pointing to the main product page.
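For the filter-page canonicals in that last point, the tag on each filtered variant points back at the main product URL. A sketch with placeholder URLs:

```html
<!-- In the <head> of https://example.com/products/widget?color=blue -->
<link rel="canonical" href="https://example.com/products/widget">
```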
Use the Coverage Report's filtering to focus on specific URL patterns (e.g., /products/). Don't try to fix all 10,000 pages at once.
SaaS and Web Apps
SaaS sites often have authentication-gated pages. Google can't index them. This is correct—don't try to force indexing.
Focus on:
- Public pages: Blog, pricing, landing pages. These should all be Valid.
- Authentication pages: These will show as 401 (Unauthorized). This is expected. Don't try to fix it.
- Sitemap quality: Only include publicly indexable pages in your sitemap.
Blogs and Content Sites
Content sites live or die by indexing. Every published post should be Valid.
Focus on:
- New posts: Are they indexed within 48 hours? If not, check for noindex tags or robots.txt blocks.
- Old posts: Are they staying indexed? A declining Valid count means older content is being de-indexed (usually due to quality issues).
- Pagination: Don't include /page/2, /page/3, etc. in your sitemap. (Google retired rel="next"/rel="prev" as an indexing signal, so keeping pagination out of your sitemap is the main lever here.)
Connecting Coverage to Actual Traffic
Indexing is a prerequisite for organic traffic, but it's not the whole story. A page can be indexed and still get zero clicks.
Connect Coverage data to your actual performance:
- Check the Google Search Console Performance Report to see which indexed pages are actually getting clicks
- Look for indexed pages with zero impressions (they're indexed but not ranking)
- Look for indexed pages with high impressions but low CTR (they're ranking but not compelling)
- Use GA4 reports to see which indexed pages drive actual conversions
This tells you which indexing problems are actually costing you traffic.
When to Ignore the Coverage Report
Not every Coverage Report issue requires action. Here's when to ignore it:
- Excluded pages with no soft 404s: Exclusions are usually correct. Don't force indexing.
- Errors on old, low-traffic pages: If a page gets zero traffic and no backlinks, fixing the error won't help. Delete the page instead.
- Warnings on new pages: "Crawled but not indexed" is normal for new pages. Wait 48 hours before worrying.
- Massive excluded counts on paginated content: If you have 5,000 paginated pages excluded, that's fine. You shouldn't index pagination anyway.
Focus on errors that affect high-traffic pages or new content. Ignore everything else.
The 30-Minute Coverage Audit
If you're short on time, here's the minimum:
- Open the Coverage Report (2 minutes)
- Look at the graph: Is Valid declining? Are Errors growing? (2 minutes)
- Filter by Error (1 minute)
- Check the error count: Is it under 50? (1 minute)
- Export errors to CSV (1 minute)
- Use URL Inspection on the top 5 errors (10 minutes)
- Identify the pattern: Are they all 404s? All server errors? All robots.txt blocks? (2 minutes)
- Fix the root cause: Update robots.txt, fix the server, remove broken URLs from sitemap (8 minutes)
- Request indexing for fixed pages (2 minutes)
That's it. You've fixed the biggest problems in 30 minutes.
Key Takeaways
The Coverage Report is your window into indexing health. It tells you which pages Google found, which it indexed, and why it rejected the rest.
Here's what to remember:
- Valid = Good. These pages are indexed. Leave them alone.
- Valid with Warnings = Fix. These pages are indexed but at risk. Add canonicals, remove noindex tags, simplify redirects.
- Excluded = Usually Correct. Google chose not to index them for good reasons. Only fix soft 404s.
- Error = Urgent. These pages should be indexed but aren't. Fix server errors, 404s, and robots.txt blocks immediately.
- Check monthly. The Coverage Report changes constantly. Catch problems early.
- Focus on high-traffic pages first. Don't waste time fixing errors on pages that get zero traffic.
- Use the URL Inspection Tool to diagnose. It tells you exactly why each page failed.
- Request indexing after fixing. Google won't automatically re-crawl fixed pages.
The Coverage Report isn't flashy, but it's foundational. Most founders ignore it until their organic traffic crashes. Don't be that founder. Check it monthly. Fix errors. Stay indexed.
If you want to go deeper, explore the plain-English guide to coverage issues for a 30-minute deep dive into specific error types and fixes. And if you're setting up Search Console from scratch, the 10-minute setup guide will get you there in one sitting.
Ship fast. Index faster. Get visible.