squirrelscan is built for autonomous AI workflows. This guide shows you how to integrate squirrelscan with AI coding agents to audit websites and implement fixes automatically.

Three Ways to Use squirrelscan

CLI for Humans

Run audits directly from your terminal with human-readable output.

Pipe to Agent

Pipe audit reports to Claude or other AI assistants using squirrel report --format llm.

Skill Integration

Install the squirrelscan skill so agents can run audits autonomously.

Install the Skill

The squirrelscan skill enables AI agents to run audits, analyze results, and implement fixes without manual intervention.

Installation

npx skills install squirrelscan/skills
This installs the audit-website skill for:
  • Claude Code - Desktop and CLI
  • Cursor - AI-first code editor
  • Any agent supporting Claude Code skills
The skill is a thin wrapper that calls the squirrelscan CLI. Install both the CLI and skill for full functionality.
For best results, use the skill in plan mode if your agent supports it. This lets the agent analyze all issues and create a comprehensive implementation plan before making changes to your codebase.

Verify Installation

After installing, verify the skill is available:
npx skills list
You should see audit-website in the output.

Using with Claude Code

Basic Audit Workflow

The easiest way to run an audit is with a slash command:
/audit-website
This triggers the skill directly. Claude will detect your project’s website (from config, environment, or code) and run an audit. You can also specify a URL explicitly:
/audit-website https://example.com
Or use natural language:
Use the audit-website skill to audit squirrelscan.com
Claude will:
  1. Run the audit using squirrelscan CLI
  2. Parse the results
  3. Summarize issues by severity
  4. Suggest next steps
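Under the hood, these steps map onto plain CLI calls. The sketch below is a rough shell equivalent of what the skill drives, using only commands documented on this page; the skill's exact invocation may differ. It is printed as a dry run so you can review the commands first.

```shell
# Rough CLI equivalent of the skill's audit-then-report flow.
# Remove the echoes to execute for real.
url="https://example.com"
echo "squirrel audit $url"                      # 1. run the audit
echo "squirrel report <audit-id> --format llm"  # 2-4. export results for analysis
```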

Example Prompts

Audit and summarize issues:
/audit-website example.com and summarize the top 5 most critical issues
Audit with specific focus:
/audit-website mysite.com focusing on accessibility and performance issues
Audit and fix all issues:
/audit-website this site and fix all errors and warnings
Audit local development site:
/audit-website http://localhost:3000 and create a prioritized fix list

Plan Mode for Comprehensive Fixes

For larger fix efforts, use Claude’s plan mode to create an implementation strategy:
1. Trigger plan mode

Ask Claude to enter plan mode before starting work:
Enter plan mode. Use the audit-website skill to audit example.com,
then create a comprehensive plan to fix all high and medium severity issues.
2. Review the plan

Claude will:
  • Run the audit
  • Analyze all issues
  • Group fixes by category
  • Create an ordered implementation plan
  • Identify dependencies between fixes
Review and approve the plan.
3. Execute the plan

Once approved, Claude will implement fixes systematically, checking off completed items.

Using Subagents for Parallel Fixes

For complex sites with many issues, prompt Claude to use subagents:
Use the audit-website skill to audit example.com.
Then spawn subagents to fix issues in parallel:
- Subagent 1: Fix all accessibility issues
- Subagent 2: Fix all SEO meta tag issues
- Subagent 3: Fix all performance issues
This parallelizes work across independent issue categories.

Piping to Claude (Alternative Method)

If you prefer not to use skills, pipe audit output directly to Claude:

Using Report Formats

Pipe audit results directly to Claude in LLM-optimized format:
squirrel audit example.com --format llm | claude
Or run separately and export later:
# Run audit (stores in database)
squirrel audit example.com

# Export and pipe to Claude
squirrel report <audit-id> --format llm | claude
LLM Format Benefits:
  • Compact structured XML/text hybrid (40% smaller than verbose XML)
  • Token-optimized for API costs and context limits
  • Includes actionable fix suggestions
  • Works with any LLM (Claude, GPT, etc.)

Example Workflows

# Audit and ask Claude to prioritize fixes
AUDIT_ID=$(squirrel audit example.com | tail -1)
squirrel report $AUDIT_ID --format llm | claude "Prioritize these issues and create a fix plan"

# Audit and implement high-severity fixes
squirrel report <audit-id> --format llm | claude "Fix all high-severity issues"

# Audit and explain issues to non-technical stakeholder
squirrel report <audit-id> --format text | claude "Explain these issues in simple terms"

Regression Diffs for Agents

Use diff reports to let agents focus on regressions and improvements:
# Compare current audit against a baseline ID
squirrel report --diff a7b3c2d1 --format llm | claude "Summarize regressions and suggest fixes"

# Compare latest domain report against a baseline domain
squirrel report --regression-since example.com --format llm | claude "List regressions only and propose fixes"
Diff mode supports the console, text, json, llm, and markdown formats; html and xml are not supported in diff mode.

Other Formats for AI

Format     Flag                 Best For
llm        --format llm         Compact structured XML for AI agents (40% smaller, token-optimized)
text       --format text        Plain text output for simple piping
json       --format json        Custom AI processing scripts
markdown   --format markdown    AI agents that prefer markdown
xml        --format xml         Verbose structured XML for enterprise integration
Note: All formats except xml work with both squirrel audit --format and squirrel report --format.
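To compare formats side by side, you can export the same stored audit in each AI-friendly format. A small sketch (substitute a real audit ID; printed as a dry run so the commands can be reviewed before execution):

```shell
# Export one stored audit in each report format that supports --format.
audit_id="<audit-id>"
for fmt in llm text json markdown; do
  echo "squirrel report $audit_id --format $fmt > report.$fmt"
done
```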

Using with Other AI Coding Assistants

Cursor

Cursor supports Claude Code skills natively:
  1. Install the skill:
    npx skills install squirrelscan/skills
    
  2. Run with slash command:
    /audit-website
    
  3. Or use composer mode for multi-file fixes:
    /audit-website then fix all issues across the codebase
    

Windsurf / Aider / Other Agents

For agents without skill support, use piping:
# Windsurf (Cascade)
squirrel report <audit-id> --format llm | windsurf

# Aider
squirrel report <audit-id> --format llm | aider

# Generic LLM API - save to file then send
squirrel report <audit-id> --format llm > audit.xml
# Then send audit.xml content to your LLM
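The last pattern generalizes to any HTTP LLM API. The sketch below uses a placeholder endpoint and payload (not a real service or API shape); it only demonstrates loading the exported report and preparing it for a request.

```shell
# Generic pattern: load the exported report, then hand it to any LLM API.
# $LLM_ENDPOINT and the payload shape are placeholders, not a real service.
report_file="audit.xml"
printf '%s\n' '<report>sample</report>' > "$report_file"  # stand-in for the squirrel report export above
payload=$(cat "$report_file")
echo "would POST ${#payload} bytes from $report_file to \$LLM_ENDPOINT"
```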

Advanced Agent Patterns

Pre-Deploy Audits

In your deployment workflow:
Before I deploy, use audit-website skill to audit
http://localhost:3000 and ensure there are no high-severity
issues introduced since the last deployment.
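In scripted pipelines the same gate can be enforced without an agent. A minimal sketch, assuming the JSON report exposes an issues array with severity fields (the field names and inline sample payload are hypothetical; in practice, feed it the output of squirrel report with --format json):

```shell
# Fail the deploy step when the report contains high-severity issues.
# The inline JSON is a hypothetical stand-in for a real report export.
report='{"issues":[{"severity":"high"},{"severity":"low"}]}'
high_count=$(printf '%s' "$report" | grep -o '"severity":"high"' | wc -l | tr -d ' ')
if [ "$high_count" -gt 0 ]; then
  echo "Blocking deploy: $high_count high-severity issue(s)"
else
  echo "No high-severity issues; safe to deploy"
fi
```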

Automated Regression Detection

After making changes:
I just updated the homepage. Use audit-website skill to audit
the site and verify I didn't introduce any SEO or accessibility
regressions.
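The same check works from a shell script, using the --regression-since flag described earlier. A sketch that degrades gracefully when the CLI is not installed:

```shell
# Re-audit after a change and diff against the domain's previous baseline.
domain="example.com"
cmd="squirrel report --regression-since $domain --format llm"
if command -v squirrel >/dev/null 2>&1; then
  $cmd | claude "List any new SEO or accessibility regressions"
else
  echo "squirrel CLI not found; would run: $cmd"
fi
```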

Configuration for Agents

Project-Scoped Config

Create squirrel.toml in your project so agents use consistent settings:
squirrel.toml
[crawler]
max_pages = 50
respect_robots = true

[rules]
disable = ["content/word-count", "content/reading-level"]
Now when agents run audits, they’ll use these settings automatically. See Configuration Reference for all options.

Limiting Crawl Scope

For large sites, configure agents to audit specific sections:
squirrel.toml
[crawler]
max_pages = 20
include = ["/blog/*"]
exclude = ["/admin/*", "/api/*"]

Skill vs Piping: Which to Use?

Use the skill when:
  • Working in Claude Code, Cursor, or skill-compatible editors
  • You want agents to discover and use squirrelscan autonomously
  • Building multi-step workflows where the agent decides when to audit
  • The agent needs to run audits as part of a larger task
Use piping when:
  • Working with agents that don’t support skills
  • You want explicit control over when audits run
  • Integrating into shell scripts or automation
  • Using squirrelscan with non-coding LLMs

Troubleshooting

Skill not found

Verify installation:
npx skills list | grep audit-website
Reinstall if missing:
npx skills install squirrelscan/skills --force

Agent can’t run audits

Ensure squirrelscan CLI is installed:
squirrel --version
The skill requires the CLI to be in PATH.
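A quick preflight you can run (or have the agent run) before invoking the skill:

```shell
# Confirm the CLI the skill wraps is reachable on PATH before auditing.
if command -v squirrel >/dev/null 2>&1; then
  msg=$(squirrel --version)
else
  msg="squirrel not found in PATH; install the squirrelscan CLI first"
fi
echo "$msg"
```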

Piping produces no output

Check the export format:
squirrel report <audit-id> --format llm
Ensure you’re using the report command with --format llm for LLM-optimized output.

Next Steps