Checks if robots.txt exists and is properly configured
Rule ID: crawl/robots-txt
Category: Crawlability
Scope: Site-wide
Severity: error
Weight: 8/10

Solution

robots.txt tells search engines which parts of your site they may crawl. Place it at the root of your domain (example.com/robots.txt) and include your sitemap URL. Avoid blocking resources (CSS, JS, images) that search engines need to render your pages, and never use "Disallow: /" unless you intend to block all crawling. Use Google Search Console to test your robots.txt.
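As a quick local check before testing in Google Search Console, Python's standard library can parse a robots.txt file and answer fetch queries. This is a minimal sketch; the sample rules and example.com URLs are hypothetical, and site_maps() requires Python 3.8+.

```python
# Sketch: validate robots.txt rules locally with the stdlib parser.
# The sample file below is hypothetical, not a recommended template.
from urllib.robotparser import RobotFileParser

SAMPLE_ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/
Allow: /

Sitemap: https://example.com/sitemap.xml
"""

parser = RobotFileParser()
# parse() accepts an iterable of lines, so no network fetch is needed.
parser.parse(SAMPLE_ROBOTS_TXT.splitlines())

# Crawlers may fetch the homepage, but not the admin area.
print(parser.can_fetch("Googlebot", "https://example.com/"))         # True
print(parser.can_fetch("Googlebot", "https://example.com/admin/x"))  # False

# site_maps() returns the Sitemap URLs declared in the file.
print(parser.site_maps())  # ['https://example.com/sitemap.xml']
```

The same parser can point at a live site via set_url("https://example.com/robots.txt") followed by read(), which fetches and parses the deployed file.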

Enable / Disable

Disable this rule

squirrel.toml
[rules]
disable = ["crawl/robots-txt"]

Disable all Crawlability rules

squirrel.toml
[rules]
disable = ["crawl/*"]

Enable only this rule

squirrel.toml
[rules]
enable = ["crawl/robots-txt"]
disable = ["*"]