Noindex Checker
Paste your page's HTML source to detect noindex, nofollow, noarchive, and other robots directives in meta tags and headers.
About Noindex Checker - Meta Robots and Noindex Directive Checker Online
Search engines follow robots directives to decide whether to index a page and follow its links. These directives can appear in two places: as a <meta name="robots"> tag in the HTML head, or as an X-Robots-Tag HTTP response header. Both are supported by Google and Bing; a noindex in either location tells search engines not to include the page in their index. This tool checks both sources: paste your HTML source for the meta tag check, or paste your HTTP response headers for the X-Robots-Tag check.
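For reference, the two equivalent forms look roughly like this (the values are illustrative examples, not output from this tool):

```ts
// Form 1: robots meta tag placed in the HTML head of the page.
const robotsMetaTag = '<meta name="robots" content="noindex, nofollow">';

// Form 2: X-Robots-Tag HTTP response header. This form also works for
// non-HTML responses such as PDFs and images.
const xRobotsTagHeader = "X-Robots-Tag: noindex, nofollow";
```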
Unintended noindex directives are a significant cause of pages not appearing in search results after launch or migration. A WordPress staging environment with a "Discourage search engines" setting left enabled after going live, a CMS that adds noindex to paginated URLs by default, a developer adding noindex to a template for testing and forgetting to remove it: these are all common scenarios where pages that should be indexed are blocked without anyone realizing. Checking noindex status is one of the first steps when a page that should rank isn't appearing in search results.
How to Use Noindex Checker
- Choose the check type using the HTML Source or HTTP Headers tab above.
- For HTML Source: press Ctrl+U (Cmd+Option+U in Chrome on a Mac) or right-click the page and choose "View Page Source" to open the page source in a new tab, then select all and copy the full HTML.
- For HTTP Headers: open Chrome DevTools (F12), click the Network tab, reload the page, click the first document request, click the Headers tab, and copy the Response Headers section.
- Paste the copied content into the appropriate text area and click Check Noindex.
- Review the results: the checker reports which robots directives are present and whether the page is blocked from indexing, link-following, caching, or snippet generation.
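The check itself is straightforward string and DOM parsing. As a rough illustration (a minimal sketch, not this tool's actual implementation; the function and constant names are hypothetical), pulling the directives out of pasted HTML or headers looks something like this:

```ts
// Minimal sketch of a robots-directive check; assumes a browser environment
// (DOMParser is available). Not the tool's actual implementation.
const DIRECTIVES = ["noindex", "nofollow", "noarchive", "nosnippet", "none"];

// Extract recognized directives from a pasted HTML source.
function directivesFromHtml(html: string): string[] {
  const doc = new DOMParser().parseFromString(html, "text/html");
  const content =
    doc.querySelector('meta[name="robots" i]')?.getAttribute("content") ?? "";
  return content
    .toLowerCase()
    .split(",")
    .map((part) => part.trim())
    .filter((part) => DIRECTIVES.includes(part));
}

// Extract recognized directives from pasted HTTP response headers.
function directivesFromHeaders(rawHeaders: string): string[] {
  const headerLine = rawHeaders
    .split(/\r?\n/)
    .find((line) => line.toLowerCase().startsWith("x-robots-tag:"));
  if (!headerLine) return [];
  return headerLine
    .slice(headerLine.indexOf(":") + 1)
    .toLowerCase()
    .split(",")
    .map((part) => part.trim())
    .filter((part) => DIRECTIVES.includes(part));
}
```

A real checker also has to handle crawler-specific variants, such as a meta tag named googlebot or an X-Robots-Tag value prefixed with a user agent, which this sketch ignores.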
Robots Directives Explained
Robots directives are cumulative: a page can have multiple directives applied simultaneously, each controlling a different search engine behavior.
- noindex: The most critical directive: tells search engines not to add this page to their search index. A page with noindex will eventually disappear from search results even if it was previously indexed. Note that "eventually" can mean days or weeks; removing a noindex directive and waiting for search engines to re-crawl and re-index takes time.
- nofollow: Tells search engines not to follow any links on the page. This doesn't prevent the page itself from being indexed; it prevents link equity from flowing to any linked pages. A page can be indexed but have its links treated as if they don't exist from a ranking perspective.
- noarchive: Tells search engines not to store a cached copy of the page. The page can still be indexed and appear in search results, but search engines won't offer a cached version. Useful for pages with frequently changing content or for pages where you don't want a cached version publicly accessible.
- nosnippet: Tells search engines not to show a text snippet in search results. The page can still be indexed and appear in results, but only the URL and title will show; no descriptive text. Also prevents extended features like featured snippets and "People also ask" answers.
- none: Equivalent to noindex, nofollow combined; the shortest way to apply both directives simultaneously.
- X-Robots-Tag (HTTP header): The same directives, but applied at the HTTP response level rather than in the HTML. This is the only way to apply robots directives to non-HTML files like PDFs and images, since those have no HTML head section. When present on an HTML page, the header has the same effect as the meta tag: if either says noindex, the page is treated as noindex.
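To make the cumulative behavior concrete, here is a sketch of how directives found in the meta tag and the header could be merged; the function name and shape are hypothetical, not this tool's code:

```ts
// Merge directives found in the meta tag and the X-Robots-Tag header.
// "none" expands to noindex + nofollow, and a noindex from either source
// is enough for the page to be treated as blocked from indexing.
function effectiveDirectives(
  metaDirectives: string[],
  headerDirectives: string[],
): Set<string> {
  const effective = new Set<string>();
  for (const directive of [...metaDirectives, ...headerDirectives]) {
    if (directive === "none") {
      effective.add("noindex");
      effective.add("nofollow");
    } else {
      effective.add(directive);
    }
  }
  return effective;
}

// Example: the meta tag says "noarchive" and the header says "none".
// effectiveDirectives(["noarchive"], ["none"])
//   -> Set { "noarchive", "noindex", "nofollow" }  (page is blocked from indexing)
```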
Tips for Diagnosing Noindex Issues
When a page isn't indexing as expected, checking the right sources in the right order speeds up diagnosis.
- Check both HTML and headers: A page can have noindex set in either location. If the HTML source shows no noindex meta tag but the page still isn't indexing, check the HTTP response headers for an X-Robots-Tag. Some frameworks and CDN configurations inject noindex at the header level without touching the HTML. A quick script-based way to read the header is sketched after this list.
- Check robots.txt separately: This tool checks meta robots and X-Robots-Tag directives. A robots.txt Disallow rule is separate: it blocks crawling, not indexing. A page blocked by robots.txt can still be indexed from links, but a page with noindex won't be. Use a robots.txt checker for that separate check.
- After removing noindex, wait for recrawl: Removing a noindex directive doesn't instantly add the page to the search index. Search engines need to recrawl the page, process the updated directive, and then add the page to the index, which can take days. Submit the URL through Google Search Console (URL Inspection → Request Indexing) to prompt a faster recrawl rather than waiting for the normal crawl schedule.
- Check template-level directives in CMSes: In WordPress, the "Discourage search engines" setting in Settings → Reading adds noindex to all pages. Check this setting after migration or when a wholesale indexing issue is suspected. Individual page noindex settings in SEO plugins can coexist with this setting; clearing the site-wide setting doesn't affect per-page settings.
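For the header check in the first tip above, you can also read a live URL's X-Robots-Tag from a small script instead of copying headers out of DevTools. A minimal sketch, assuming Node 18+ or any runtime with the fetch API (in a browser, cross-origin requests are limited by CORS, so run it server-side or from a CLI):

```ts
// Fetch only the response headers of a URL and report its X-Robots-Tag, if any.
// Some servers answer HEAD requests differently from GET, so treat this as a
// quick first check rather than a definitive one.
async function readXRobotsTag(url: string): Promise<string | null> {
  const response = await fetch(url, { method: "HEAD", redirect: "follow" });
  return response.headers.get("x-robots-tag"); // null if the header is absent
}

readXRobotsTag("https://example.com/some-page").then((value) => {
  console.log(value ?? "No X-Robots-Tag header found");
});
```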
Why Use a Noindex Checker Online
Checking robots directives in page source manually means searching through potentially thousands of lines of HTML for a specific meta tag. A noindex in an X-Robots-Tag response header is invisible in the HTML source altogether; you'd need to check DevTools separately. This tool handles both checks with a paste-and-click workflow and presents the results clearly, making it faster than manual inspection and accessible without browser extensions or CLI tools.
SEO auditors checking pages after a CMS migration benefit from a quick check that confirms the noindex configuration didn't carry over unexpectedly. Developers deploying new pages benefit from confirming the indexing configuration before launch. Content teams investigating why a recently published page hasn't appeared in search results benefit from a tool that surfaces the most common indexing blocker in seconds.
Frequently Asked Questions about Noindex Checker
In your browser, press Ctrl+U (Cmd+Option+U in Chrome on a Mac), or right-click the page and select "View Page Source." This opens a new tab with the full HTML source of the page. Select all (Ctrl+A or Cmd+A) and copy. Paste it into the HTML Source tab above. Note that "View Page Source" shows the original HTML served by the server, while the browser DevTools Elements panel shows the live DOM, which may differ after JavaScript execution; for robots meta tags, you want the original HTML source.
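If you want to compare the served source against the live DOM, a one-off check you can run in the browser console looks roughly like this (a sketch, not part of this tool):

```ts
// Read the robots meta tag from the live DOM. This reflects anything injected
// by JavaScript after the page loaded, which is why it can differ from what
// "View Page Source" shows.
const liveRobots = document
  .querySelector('meta[name="robots" i]')
  ?.getAttribute("content");
console.log(liveRobots ?? "No robots meta tag in the live DOM");
```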
A robots.txt Disallow rule tells crawlers not to visit the URL. A noindex directive (meta tag or header) tells crawlers they can visit the URL but should not include it in the index. A page blocked by robots.txt can still be indexed from links on other pages; Google may show it in results with no snippet because it can see the URL in links but can't read the page. A noindex page that allows crawling is the correct way to prevent indexing while allowing the page to be accessible.
Yes, and it's redundant; either one alone is sufficient to tell search engines not to index the page. Having both doesn't cause a problem, but finding noindex in both places when you only expected it in one location is often a sign of a misconfiguration worth investigating (e.g., a CDN layer and the CMS both adding noindex independently, which could cause issues if you remove one but not the other).
No. nofollow tells search engines not to follow links on the page; it affects how link equity flows out of the page, not the page's own ranking signals. A nofollow page can still rank highly in search results based on its own authority and content. The links on a nofollow page simply won't pass ranking signals to the pages they link to, which may matter if you were relying on those links for SEO.
Yes, completely free. No account, no sign-up, and no usage limits. The checker runs entirely in your browser using JavaScript β nothing you paste is sent to any server. You can check as many pages as you need.