The audit command crawls a website and runs SEO rules to generate a report.

Usage

squirrel audit <url> [options]

Arguments

| Argument | Description |
|----------|-------------|
| `url` | The URL to audit (required) |

Options

| Option | Alias | Description | Default |
|--------|-------|-------------|---------|
| `--max-pages` | `-m` | Maximum pages to crawl | varies by coverage mode |
| `--coverage` | `-C` | Coverage mode: `quick`, `surface`, `full` | `surface` |
| `--format` | `-f` | Output format: `console`, `text`, `json`, `html`, `markdown`, `xml`, `llm` | `console` |
| `--output` | `-o` | Output file path | auto |
| `--refresh` | `-r` | Ignore cache, fetch all pages fresh | `false` |
| `--resume` | | Resume interrupted crawl for this domain | `false` |
| `--verbose` | `-v` | Verbose output | `false` |
| `--debug` | | Enable debug logging | `false` |
| `--trace` | | Enable performance tracing | `false` |
| `--project-name` | `-n` | Project name (overrides config and prompts) | auto |
| `--publish` | `-p` | Publish report to reports.squirrelscan.com | `false` |
| `--visibility` | | Visibility: `public`, `unlisted`, `private` | `public` |

Coverage Modes

| Mode | Default Pages | Description |
|------|---------------|-------------|
| `quick` | 25 | Fast scan: seed URL + sitemaps only, no link discovery |
| `surface` | 100 | Smart sampling: one page per URL pattern (default) |
| `full` | 500 | Comprehensive: crawl everything up to the limit |

For SARIF format (IDE integration), use the report command after running an audit:
squirrel audit https://example.com
squirrel report <audit-id> --format sarif
See the report command docs for details.

Examples

Basic Audit

squirrel audit https://example.com

Quick Health Check

squirrel audit https://example.com -C quick

Full Comprehensive Audit

squirrel audit https://example.com -C full

Crawl More Pages

squirrel audit https://example.com -m 200

Export to JSON

squirrel audit https://example.com -f json -o report.json

Generate HTML Report

squirrel audit https://example.com -f html -o report.html

Fresh Crawl (Ignore Cache)

squirrel audit https://example.com --refresh

Audit and Publish

squirrel audit https://example.com --publish
Publish with unlisted visibility:
squirrel audit https://example.com --publish --visibility unlisted
Publishing requires authentication. Run squirrel auth login first.

Verbose Output

squirrel audit https://example.com -v

Output Formats

Console (default)

Human-readable output with colored issue severity:
squirrelscan v0.1.0
========================================
Auditing: https://example.com
Max pages: 500

Crawled 12 pages

ISSUES

[high] Missing meta description
  → /about
  → /contact

[medium] Image missing alt text
  → /images/hero.png on /

[low] Non-HTTPS link
  → http://oldsite.com on /links

JSON

Machine-readable JSON for CI/CD pipelines and AI processing:
squirrel audit https://example.com -f json -o report.json
{
  "url": "https://example.com",
  "crawledAt": "2025-01-08T00:00:00Z",
  "pages": [...],
  "issues": [...],
  "stats": {
    "totalPages": 12,
    "issueCount": { "high": 2, "medium": 3, "low": 5 }
  }
}
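In CI pipelines, the JSON report can be gated programmatically. A minimal Python sketch, assuming the field names shown in the sample above (`stats.issueCount`); treat them as an illustration, not a stable API:

```python
import json

# Count high-severity issues in a SquirrelScan JSON report.
# Field names follow the sample report above; adjust if your schema differs.
def high_issue_count(report: dict) -> int:
    return report.get("stats", {}).get("issueCount", {}).get("high", 0)

sample = json.loads("""
{
  "url": "https://example.com",
  "stats": { "totalPages": 12, "issueCount": { "high": 2, "medium": 3, "low": 5 } }
}
""")

print(high_issue_count(sample))  # prints 2
```

A CI job could then exit nonzero when the count is positive, e.g. `sys.exit(1 if high_issue_count(report) else 0)`.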

HTML

Visual report you can open in a browser:
squirrel audit https://example.com -f html -o report.html
open report.html

Crawl Behavior

The audit command manages crawl sessions intelligently:
| Scenario | Behavior |
|----------|----------|
| First run | Creates a new crawl |
| Re-run (completed) | New crawl; uses cache for 304s |
| Re-run (interrupted) | Resumes from where it left off |
| Config changed | New crawl with fresh scope |

Changing scope-affecting config (include, exclude, allow_query_params, drop_query_prefixes) triggers a fresh crawl.

Caching

SquirrelScan caches page content locally. On subsequent audits:
  • 304 Not Modified: Uses cached content (fast)
  • 200 OK: Fetches fresh content (slower)
Use --refresh to bypass the cache entirely.
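The revalidation decision above can be sketched as a plain function. This is a hypothetical illustration of the HTTP semantics involved, not SquirrelScan's internal code:

```python
from typing import Optional

# Hypothetical sketch of the cache decision described above:
# a 304 means the cached body is still valid and is reused,
# a 200 means fresh content replaces whatever was cached.
def resolve_body(status: int, cached: Optional[str], fresh: Optional[str]) -> str:
    if status == 304:  # Not Modified: server confirms the cache is current
        if cached is None:
            raise ValueError("304 received with no cached copy")
        return cached
    if status == 200:  # OK: fresh content supersedes the cache
        return fresh or ""
    raise ValueError(f"unexpected status {status}")
```

Passing `--refresh` corresponds to skipping the conditional request entirely and always fetching fresh content.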

Exit Codes

| Code | Meaning |
|------|---------|
| `0` | Success |
| `1` | Error (invalid URL, crawl failed, etc.) |
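These codes make the command easy to wrap in CI. A minimal Python wrapper, sketched under the assumption that `squirrel` is on `PATH` (the helper itself is generic, so any command works):

```python
import subprocess

# Return True when the given command exits 0, False otherwise.
# Per the table above, SquirrelScan exits 0 on success and 1 on error.
def audit_passed(cmd: list[str]) -> bool:
    return subprocess.run(cmd).returncode == 0

# In a CI job you might call, for example:
#   audit_passed(["squirrel", "audit", "https://example.com"])
```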

Configuration

The audit command respects settings from squirrel.toml:
[crawler]
max_pages = 100
delay_ms = 200
timeout_ms = 30000
include = ["/blog/*"]
exclude = ["/admin/*"]

[rules]
enable = ["*"]
disable = ["ai/*"]
See Configuration for all options.

Using with AI

Pipe JSON output to your LLM:
squirrel audit https://example.com -f json | claude "analyze this SEO report"
Or use with Claude Code’s MCP integration for interactive auditing.