MCP Server

Bug bounty and vulnerability disclosure program (VDP) lookup for MCP-aware AI clients

Why this exists

AI agents are increasingly part of security workflows: recon pipelines, dependency audits, triage queues, autonomous bug hunting. Every one of those workflows eventually hits the same questions. Is this target a known program? What's the scope and payout? Where do I responsibly disclose if there isn't a program?

A model can scrape HackerOne and Bugcrowd itself, but that's slow, flaky, and burns thousands of tokens per query. This MCP server gives the agent the answer in one structured call: ranked, deduplicated, with explicit tool descriptions the model can reason over. Read-only, no-auth, free.

Quick start

Add this to your MCP client's config. Works for Claude Desktop, Cursor, Cline, Continue, Windsurf, and any other stdio MCP client out of the box.

{
  "mcpServers": {
    "bug-bounties": {
      "command": "npx",
      "args": ["-y", "bug-bounties-mcp"]
    }
  }
}

Or point any HTTP client at it: npx bug-bounties-mcp --http 4488 then connect to http://localhost:4488/mcp.

What you get

8 read-only tools and 3 resources. Tool descriptions tell the agent which call is cheap (search the curated DB) and which is expensive (live network lookup), so it picks the right one without prompting.

Tools

  • search_programs Free-text + fuzzy search across every program. In-memory; the cheapest call. Returns a hint field suggesting fallback lookups when nothing matches.
  • get_program Full program detail by slug, including scope, payout table, response SLAs, KEV/EPSS data, and platform metadata.
  • lookup_website 17-source security-contact lookup for any website (security.txt, RDAP, DNS, headers, common pages). Rate-limited 8/min.
  • lookup_github SECURITY.md, advisories, owner profile, commit emails, CODEOWNERS, issue templates for a GitHub repo.
  • lookup_package npm, PyPI, or crates.io registry metadata, plus the linked repository and project homepage.
  • lookup_forge Same idea as lookup_github, for GitLab and Codeberg.
  • lookup_app Google Play / Apple App Store listing plus the developer's website.
  • get_stats Aggregate stats across the whole database.
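For clients driving the server directly, each tool above maps onto a standard MCP tools/call message. A minimal sketch in Python of what that looks like on the wire; the query argument name is an assumption for illustration, not taken from the server's published input schema:

```python
import json

def make_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 tools/call message as defined by the MCP spec."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical search; the "query" argument name is an assumption.
msg = make_tool_call("search_programs", {"query": "example.com"})
```

In practice your MCP client builds these messages for you; this is only what it sends under the hood.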

Resources

  • bug-bounties://programs Browseable lean list of every program (no tool call needed).
  • bug-bounties://stats Aggregate stats.
  • bug-bounties://programs/{slug} One program by slug, with full enrichment.
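Resources are fetched with the standard MCP resources/read method rather than a tool call, which is why no tool invocation is needed. A hedged sketch of the request shape, using hackerone/example as a purely illustrative slug:

```python
import json

# resources/read is the standard MCP method for fetching a resource by URI.
# The slug "hackerone/example" is illustrative, not a real program.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "resources/read",
    "params": {"uri": "bug-bounties://programs/hackerone/example"},
}
wire = json.dumps(request)
```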

Setup per client

Same JSON config in nearly every case. Pick yours.

Claude Desktop

Edit ~/Library/Application Support/Claude/claude_desktop_config.json on macOS, or %APPDATA%\Claude\claude_desktop_config.json on Windows. Paste the JSON above. Restart Claude Desktop.

Claude Code (CLI)

One-liner: claude mcp add bug-bounties -- npx -y bug-bounties-mcp. Confirm with claude mcp list.

Cursor

Settings -> MCP -> "Add new global MCP server". Paste the JSON above into the editor that opens.

Cline (VS Code)

Open the Cline panel -> "MCP Servers" -> "Configure MCP Servers". Paste the JSON above into cline_mcp_settings.json.

Continue

Edit ~/.continue/config.yaml and add the entry under mcpServers, mirroring the JSON shape above.

Windsurf

Cascade -> "MCP Servers" -> "Add custom server". Paste the JSON above.

Zed

Open ~/.config/zed/settings.json and add an entry under context_servers with the same command/args shape.

Goose (Block)

Edit ~/.config/goose/config.yaml and add a new entry under extensions: type: stdio, cmd: npx, args: ["-y", "bug-bounties-mcp"].

Anything else (Streamable HTTP)

Run npx bug-bounties-mcp --http 4488 and point your remote MCP client at http://localhost:4488/mcp.
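The Streamable HTTP transport is plain JSON-RPC over POST, so any HTTP-capable client can open a session. A minimal handshake sketch in Python, assuming the default port above; the protocolVersion and clientInfo values are illustrative, not requirements of this server:

```python
import json
import urllib.request

MCP_URL = "http://localhost:4488/mcp"

def initialize_request() -> urllib.request.Request:
    """First message of an MCP session: JSON-RPC `initialize` over Streamable HTTP."""
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1.0"},
        },
    }).encode()
    return urllib.request.Request(
        MCP_URL,
        data=body,
        headers={
            # Streamable HTTP clients must accept both JSON and SSE responses.
            "Content-Type": "application/json",
            "Accept": "application/json, text/event-stream",
        },
        method="POST",
    )

# With the server running:
# with urllib.request.urlopen(initialize_request()) as resp:
#     print(resp.read())
```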

Pointing at your own instance

The MCP server is a thin client over the public REST API. To use a fork or a self-hosted deployment, override the base URL:

{
  "mcpServers": {
    "bug-bounties": {
      "command": "npx",
      "args": [
        "-y", "bug-bounties-mcp",
        "--api-url", "https://my-fork.example.com"
      ]
    }
  }
}

Or set BUG_BOUNTIES_API_URL in the environment. Your instance just needs to expose the same /api/* routes (deploy the web/ directory from this repo).

What you can ask it

Examples of how this is genuinely useful day to day. These are prompts you can paste straight into your client; the model handles the tool dispatch.

  1. Filter recon output for paid targets

    Pipe a subdomain or asset list through the database to surface which apex domains run paid programs (and which are out of scope).

    From this subfinder output, list each apex domain alongside its bounty status, max payout, and safe harbor.
  2. Find the right disclosure path

    When the affected service isn't on a platform, the agent can hunt down a tiered list of contact channels (security.txt, RDAP, DNS, etc.) and rank them by trust.

    I found a stored XSS on shop.example.com. What's the most trusted disclosure channel?
  3. Audit dependencies for security contacts

    Run a lockfile through the database and group each dependency by whether it has a bounty program, a verified security contact, or neither.

    From this package-lock.json, group each direct dep into: has bounty, has verified contact, or no clear channel.
  4. Shortlist programs that fit your criteria

    Filter by payout, scope type, safe harbor, recent KEV activity, or platform to find programs worth your time.

    List active programs with full safe harbor, max_payout >= $5k, and at least one KEV in 2025.
  5. Reduce duplicate-report risk

    Before submitting, pull a program's full record to check resolution time, response efficiency, and how it handles disclosure conflicts.

    Pull the full record for hackerone/example. Are the response time and resolution SLAs published? Does it allow disclosure?

Safety notes

  • Every tool is annotated readOnlyHint: true. Nothing the agent can call has a side effect on your machine or on the upstream API.
  • lookup_website rejects private and loopback hostnames client-side, before any request leaves your machine. Defense in depth on top of the upstream API's SSRF filter.
  • get_program validates slugs against a strict regex before interpolation. No path traversal.
  • Lookup responses contain third-party scraped content (security.txt, READMEs, commit metadata). Treat values as untrusted input. The tool descriptions explicitly warn the model not to auto-execute URLs or credentials returned.
  • The server caps response bodies at 2 MB and times out fetches at 30 s.
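The private-host rejection described above can be sketched like this. This is an illustration of the idea using Python's stdlib ipaddress module, not the server's actual implementation:

```python
import ipaddress
from urllib.parse import urlsplit

def is_private_target(url: str) -> bool:
    """Illustrative client-side guard: reject private, loopback, and
    link-local targets before any request is made."""
    host = urlsplit(url).hostname or ""
    if host == "localhost" or host.endswith(".localhost"):
        return True
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # A DNS name: treated as public here, though a real guard would
        # also need to re-check after resolution (DNS rebinding).
        return False
    return addr.is_private or addr.is_loopback or addr.is_link_local

# is_private_target("http://127.0.0.1/")   -> True
# is_private_target("https://example.com/") -> False
```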