Artifex1/auditor-addon


Auditor Addon Logo

The LLM Multi Tool for Code Auditing

License: Apache 2.0 · Zig · Claude Code · Gemini CLI · Cursor · Windsurf · Codex

Skills and a CLI for code estimation, security auditing, and professional report writing. Works with any AI coding environment.

🎯 Skills

Skills are structured workflows that guide the AI through multi-step processes. Each skill contains detailed instructions, phases, and best practices for specific tasks.

  • 🛡️ security-auditor: Interactive security auditing with Map & Probe methodology. Capabilities: Map (structural inventory) → Checklist (optional, standard-specific) → Probe (per-path vulnerability analysis)
  • 🔍 threat-modeling: Systematic threat enumeration before code-level auditing. Capabilities: Analyze → Diagram → Attackers → Assets → Threats (STRIDE) → Report
  • 📊 estimator: Project scoping and effort estimation. Capabilities: Full scope (Discovery, Explore, Metrics, Report) or Diff scope (Discovery, Review, Report)
  • 🧠 design-challenger: Challenge overcomplicated designs. Capabilities: Propose simplifications with explicit trade-offs
  • 📝 scribe: Report writing and finding generation. Capabilities: Professional issue descriptions, report introductions
  • 🔬 sast-pipeline: Run the SAiST static analysis pipeline. Capabilities: Init scan → Resolve gaps → Run rules (shipped + custom)
  • ✏️ rule-authoring: Author SAiST detection rules. Capabilities: Scope, deep, and map rule types with testing patterns

How Skills Work

Skills provide complete workflows that the AI follows autonomously. When invoked, the AI loads the skill's protocol and executes it step-by-step, using the available tools as needed. Each skill can be invoked through its respective slash command (e.g., /security-auditor, /estimator).

Note

Model Performance: Skills perform differently across AI models. Depending on your needs, you may want to adjust the model for optimal results:

  • Speed: Lighter models (e.g., Claude Haiku, Gemini Flash) execute faster but may miss subtle issues
  • Depth: More capable models (e.g., Claude Sonnet/Opus, Gemini Pro) provide deeper analysis and better edge-case detection
  • Thoroughness: Higher-tier models tend to be more comprehensive in their exploration and validation
  • Verbosity: More capable models tend to reason more concisely, producing less verbose intermediate output

Experiment with different models to find the right balance for your use case.


🧰 CLI Tools

The aud CLI provides structured code analysis through tree-sitter AST parsing. All commands support glob patterns for analyzing multiple files at once (e.g., "src/**/*.sol"). Skills invoke these commands automatically as part of their workflows. Output uses TOON by default; pass --json for JSON.
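For example, a typical invocation combines a glob with the output switch (the paths below are hypothetical; the commands and the --json flag are the ones documented in this section):

```shell
# Hypothetical project paths; the commands, glob arguments, and --json
# flag are as documented in this section.
aud peek "src/**/*.sol"            # default TOON output
aud metrics "src/**/*.sol" --json  # same analysis, JSON output
```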

👀 aud peek

Extracts function and method signatures from source files without reading full implementations. The estimator skill uses peek to quickly understand a codebase's API surface: what functions exist, their parameters, visibility, and modifiers. This is ideal for initial exploration and for building a mental map of unfamiliar code.

πŸ“ aud metrics

Calculates code metrics:

  • Normalized Lines of Code (nLOC): Total lines minus blank lines, comment-only lines, and multi-line constructs normalized to single lines (e.g., a function signature spanning 3 lines counts as 1).
  • Comment Density: Percentage of lines that contain or consist of comments, indicating documentation coverage.
  • Cognitive Complexity: Measures control flow complexity by counting branches (if, for, while, etc.) weighted by nesting depth. Deeply nested logic scores higher than flat code.
  • Estimated Hours: Review time estimate based on nLOC and a per-language base rate.
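The nesting-depth weighting can be illustrated with a minimal sketch. This is a toy approximation over indentation-based code, not aud's actual tree-sitter implementation:

```python
# Toy approximation of cognitive complexity: aud derives nesting from the
# tree-sitter AST; here indentation (assumed 4-space) stands in for depth.
BRANCH_KEYWORDS = {"if", "elif", "else:", "for", "while"}

def cognitive_complexity(lines, indent_width=4):
    score = 0
    for line in lines:
        stripped = line.lstrip()
        if not stripped:
            continue
        depth = (len(line) - len(stripped)) // indent_width
        if stripped.split()[0] in BRANCH_KEYWORDS:
            score += 1 + depth  # each branch costs 1 plus its nesting depth
    return score

flat = ["if a:", "    x = 1", "if b:", "    y = 2"]
nested = ["if a:", "    if b:", "        if c:", "            x = 1"]
print(cognitive_complexity(flat))    # 2: two top-level branches
print(cognitive_complexity(nested))  # 6: deep nesting scores higher
```

The example shows the key property: the flat version and the nested version contain the same number of branches, but nesting inflates the score.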

The estimator skill uses this command to estimate how long a security audit will take.

📈 aud diff-metrics

Metrics restricted to lines changed between two git refs. Shells out to git diff -U0 -M to extract added/removed line ranges per file, then parses each changed file with tree-sitter and computes the same metrics as aud metrics, restricted to added lines for nloc_added and complexity_added, and to removed lines for nloc_removed.

Complexity follows SPEC-CLI §2.2 semantics: a new branch node adds 1 + branching_ancestors (including pre-existing ancestors). Non-branch added lines contribute zero complexity.
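The rule above can be sketched in a few lines. The node records here are hypothetical flat dictionaries, not aud's actual data model:

```python
# Sketch of the diff-complexity rule: only *added* branch nodes contribute,
# each costing 1 plus its number of branching ancestors (pre-existing
# ancestors included). Node records below are hypothetical.
def complexity_added(nodes):
    return sum(
        1 + node["branching_ancestors"]
        for node in nodes
        if node["is_branch"] and node["is_added"]
    )

nodes = [
    # new `if` nested under one pre-existing `if`: contributes 1 + 1 = 2
    {"is_branch": True, "is_added": True, "branching_ancestors": 1},
    # added non-branch line: contributes zero
    {"is_branch": False, "is_added": True, "branching_ancestors": 1},
    # pre-existing branch: not counted toward complexity_added
    {"is_branch": True, "is_added": False, "branching_ancestors": 0},
]
print(complexity_added(nodes))  # 2
```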

Each row also emits changed_functions: the names of head-tree callables whose bodies overlap ≥1 surviving added line (blank/comment/test-only changes don't list the function). Feed these straight into aud call-chains --root=<name> for reach analysis.

The estimator skill uses this command for incremental audit scoping (sizing a PR before review).

🔗 aud gaps

Builds a symbol graph (containers, callables, variables, events, modifiers, edges) from source files and outputs unresolved edge gaps: references the static pass cannot resolve (unresolved callees, interface dispatch, external libraries). Gaps are prioritized by edge kind (high/medium/low) for agent triage.

Supports --resolutions=<file> to apply a CSV of manually resolved gaps, promoting them to concrete edges.
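Priority-ordered triage can be sketched as a sort over gap records. The edge-kind-to-priority mapping below is hypothetical (aud ships its own classification), as are the record fields:

```python
# Hypothetical mapping from edge kind to triage priority (0 = highest).
# aud's actual classification may differ.
PRIORITY = {"unresolved_callee": 0, "interface_dispatch": 1, "external_library": 2}

def triage(gaps):
    # Highest-priority gaps first, so an agent resolves those before the rest.
    return sorted(gaps, key=lambda gap: PRIORITY.get(gap["kind"], 99))

gaps = [
    {"kind": "external_library", "ref": "SafeMath.add"},
    {"kind": "unresolved_callee", "ref": "callback"},
]
print([g["kind"] for g in triage(gaps)])
# ['unresolved_callee', 'external_library']
```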

⛓️ aud call-chains

Traces call chains from root functions (callables with no incoming call edges) through the full call graph, grouped by root and sorted longest-first. The security-auditor skill uses this to understand how execution flows through a system and to identify attack surfaces.

Supports --root=<name> to start from specific functions, and --max-depth=<n> to limit traversal depth.
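The core idea (roots are callables with no incoming call edges; chains are walked through the graph and reported longest-first) can be sketched as follows. This simplified version omits per-root grouping and aud's full node model:

```python
# Minimal sketch of call-chain tracing: find roots (no incoming call
# edges), walk every path through the call graph, return longest-first.
from collections import defaultdict

def call_chains(edges):
    """edges: list of (caller, callee) pairs."""
    callees = defaultdict(list)
    has_incoming = set()
    nodes = set()
    for caller, callee in edges:
        callees[caller].append(callee)
        has_incoming.add(callee)
        nodes.update((caller, callee))
    roots = sorted(nodes - has_incoming)  # no one calls these

    chains = []
    def walk(path):
        nxt = [c for c in callees[path[-1]] if c not in path]  # skip cycles
        if not nxt:
            chains.append(path)
        for callee in nxt:
            walk(path + [callee])

    for root in roots:
        walk([root])
    return sorted(chains, key=len, reverse=True)

edges = [("main", "validate"), ("main", "transfer"), ("transfer", "burn")]
print(call_chains(edges))
# [['main', 'transfer', 'burn'], ['main', 'validate']]
```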

📊 aud graph

Builds and dumps the full symbol graph: all nodes (files, containers, callables, variables, modifiers, events) and edges (contains, calls, reads, writes, has_modifier, inherits, emits, imports). Useful for inspecting the graph structure directly.

🔬 aud run (Rules Engine)

Builds the symbol graph and runs Lua-based detection rules against it. Rules are either shipped (built-in) or custom (.lua files).

  • --rule=<ID>: run specific shipped rule(s) only
  • --rule-path=<path>: run an ad-hoc rule from a .lua file
  • --rule-inline=<lua>: run an ad-hoc rule from an inline Lua string

Findings include rule metadata, confidence, location, and optional execution paths for deep rules. Supports filtering by confidence level (issue, smell, pointer).

ℹ️ aud info

Lists language config details (container types, callable types, variable types, visibility extraction, builtin filters, metrics config). Useful for understanding what the parser sees for a given language.

🌐 Supported Languages

Solidity · Rust · Go · Python · Cairo · Compact · Move · Noir · Tolk · Masm · C++ · Java · JavaScript · TypeScript · TSX · Flow

📦 Installation

Via Claude Code Plugin

# 1. Start Claude Code
claude

# 2. Go to plugins
/plugin

# 3. Navigate to Marketplaces tab
# 4. <enter> on "+ Add Marketplace"
# 5. Paste this repo's link, <enter>
# 6. Hit <space> and <i>

Via Gemini CLI Extension

gemini extensions install <repository-url>

Other AI Coding Environments (Cursor, Codex, Windsurf, etc.)

Skills can be installed using the skills CLI. This includes the aud CLI; pre-built binaries for all platforms are shipped with the auditor-addon-cli skill:

npx skills add <repository-url>

The AI can invoke aud directly via the skill path. For manual use, see the auditor-addon-cli skill's SKILL.md for instructions on adding aud to your PATH.

Building from Source

Requires Zig 0.15+.

# Clone the repository
git clone <repository-url>
cd auditor-addon

# Native build
zig build

# Run tests
zig build test

# Cross-compile all platforms (macOS/Linux/Windows × arm64/x86_64)
./scripts/build-all.sh --release

πŸ—οΈ Architecture & Design

Core Principles

  • 🧩 Modular: Clear separation between CLI, pipeline, language configs, and output
  • 🔌 Extensible: Add new languages via declarative LanguageConfig structs
  • ⚡ Fast: Single Zig binary, zero runtime dependencies, tree-sitter grammars compiled in
  • 🔬 Rules in Lua: Detection rules are authored in Lua, loaded at runtime

Technology Stack

  • Language: Zig (single binary, cross-compiles to all platforms)
  • AST Engine: Tree-sitter (grammars compiled into the binary)
  • Rules Engine: Lua (embedded via ziglua)
  • Output Format: TOON (Token-Oriented Object Notation), or JSON
  • CLI Parsing: zig-clap
