This repository contains a small, personal proof-of-concept for an AI-assisted architecture review agent.
The goal is to support lightweight, structured architecture reviews by:
- Providing a simple, repeatable architecture brief format (inputs)
- Generating review summaries focused on risks and tradeoffs
- Producing checklists and follow-up questions
- Capturing a small amount of evidence (what was reviewed, when, and why)
This is a personal R&D prototype, not a production review tool.
- Python 3.8+
- pip
```bash
pip install -r requirements.txt
python -m src.cli --brief fixtures/sample_brief.yaml
```

This prints a structured JSON review to stdout using the built-in stub reviewer.
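As a rough illustration of the output shape (the exact field names here are assumptions based on the review structure described in this document, not the tool's actual schema), a review might look like:

```json
{
  "overview": "A short narrative summary of the architecture.",
  "risks": ["Single point of failure in the message broker."],
  "questions": ["How is backpressure handled between services?"],
  "checklist": ["Confirm backup/restore procedure for the primary datastore."]
}
```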
```bash
export LLM_API_KEY="your-api-key"
export LLM_BASE_URL="https://api.openai.com/v1"  # or any OpenAI-compatible endpoint
export LLM_MODEL="gpt-4o-mini"
python -m src.cli --brief fixtures/sample_brief.yaml
```

```
Usage: python -m src.cli [OPTIONS]

Options:
  --brief PATH   Path to the architecture brief YAML file (required)
  --log PATH     Path to the evidence log file (default: logs/evidence.jsonl)
  --out PATH     Path to write the review JSON output (default: stdout)
  --verbose      Enable verbose logging to stderr
  --help         Show this message and exit
```
Write the review to a file:

```bash
python -m src.cli --brief fixtures/sample_brief.yaml --out review.json
```

Use a custom evidence log location:

```bash
python -m src.cli --brief fixtures/sample_brief.yaml --log /tmp/evidence.jsonl
```

Set a reviewer identity:

```bash
export REVIEWER_ID="alice"
python -m src.cli --brief fixtures/sample_brief.yaml
```

| Variable | Default | Description |
|---|---|---|
| `LLM_API_KEY` | (empty -- stub mode) | API key for the LLM provider |
| `LLM_BASE_URL` | `https://api.openai.com/v1` | Base URL for the OpenAI-compatible API |
| `LLM_MODEL` | `gpt-4o-mini` | Model identifier |
| `EVIDENCE_LOG_PATH` | `logs/evidence.jsonl` | Path to the JSONL evidence log |
| `REVIEWER_ID` | `anonymous` | Identity string recorded in evidence |
See config/example.env for a copyable template.
```
src/
  models.py         - Dataclasses for brief, review, and evidence
  config.py         - Environment-driven configuration
  loader.py         - Brief loader and validator
  prompt.py         - Prompt templates for the LLM
  llm_client.py     - OpenAI-compatible client with stub mode
  reviewer.py       - Review orchestration logic
  evidence.py       - JSONL evidence log writer
  cli.py            - Click CLI entrypoint
config/
  example.env       - Example environment configuration
fixtures/
  sample_brief.yaml - Sample architecture brief
logs/
  .gitkeep          - Evidence log directory (created at runtime)
tests/
  test_*.py         - pytest test suite
```
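As a minimal sketch of what the dataclasses in `src/models.py` might look like (the class and field names here are assumptions based on the brief and review structure described in this document, not the actual definitions):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Brief:
    # Inputs from the architecture brief: context, constraints,
    # components, and known risks (field names are illustrative).
    title: str
    context: str
    constraints: List[str] = field(default_factory=list)
    components: List[str] = field(default_factory=list)
    risks: List[str] = field(default_factory=list)

@dataclass
class Review:
    # Structured review output: a narrative summary plus lists of findings.
    overview: str
    risks: List[str] = field(default_factory=list)
    questions: List[str] = field(default_factory=list)
    checklist: List[str] = field(default_factory=list)

brief = Brief(title="Sample system", context="A small web service.")
print(brief.title)
```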
```bash
python -m pytest tests/ -v
```

- Initial specification (SPEC.md)
- Minimal review flow (read brief -> call model -> return structured review)
- Simple evidence log (who/what/when was reviewed)
- Basic CLI entrypoint
- Run instructions in README
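An evidence log of this kind can be sketched in a few lines. This is a hedged illustration, assuming a who/what/when record shape; the prototype's actual field names may differ:

```python
import json
import os
from datetime import datetime, timezone

def append_evidence(log_path: str, reviewer_id: str, brief_path: str) -> dict:
    """Append one who/what/when record to a JSON Lines evidence log."""
    record = {
        "reviewer": reviewer_id,    # who performed the review
        "brief": brief_path,        # what was reviewed
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when
    }
    # Create the log directory on first use, then append one JSON object per line.
    os.makedirs(os.path.dirname(log_path) or ".", exist_ok=True)
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

rec = append_evidence("logs/evidence.jsonl", "alice", "fixtures/sample_brief.yaml")
print(rec["reviewer"])
```

Appending one JSON object per line keeps the log trivially greppable and avoids rewriting the file on every review.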
- Make it easy to perform a quick, structured architecture review:
- Inputs: context, constraints, main components, key risks
- Outputs: narrative review, risk list, questions, and a simple checklist
- Encourage repeatability:
- A consistent template for the architecture brief
- A consistent structure for the AI-generated review
- Keep the implementation small enough for reviewers to understand quickly.
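Following that template, a brief might look like this (the keys are illustrative, derived from the inputs listed above; see `fixtures/sample_brief.yaml` for the actual format):

```yaml
title: Order processing service
context: >
  A small service that accepts orders over HTTP and writes them
  to a relational database.
constraints:
  - Must run on a single VM
  - Team of two engineers
components:
  - API gateway
  - Order service
  - PostgreSQL database
risks:
  - No retry strategy for failed writes
```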
- Formal architecture governance or sign-off workflows
- Full documentation generators or diagramming tools
- Deep integration with any specific cloud/provider or internal tooling
- SPEC.md -- detailed specification for this prototype
- DISCLAIMER.md -- IP and usage disclaimer
- memory/constitution.md -- constraints/instructions for IDE agents
- .specify/ and .github/prompts/ -- Spec Kit scaffolding (after init)
- src/ -- implementation
For the first working version, the agent should be able to:
- Accept a short architecture brief (from a file or HTTP request).
- Produce a structured review with:
- Overview summary
- Potential risks / tradeoffs
- Questions to ask
- Simple checklist items
- Append a record to a small evidence log indicating that a review occurred.
See SPEC.md for detailed requirements.
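The end-to-end flow above (read brief, call the model or stub, return a structured review) can be sketched roughly as follows; the function names and fields are placeholders for illustration, not the prototype's actual API:

```python
import json

def stub_review(brief: dict) -> dict:
    """Fallback reviewer used when no LLM API key is configured."""
    return {
        "overview": f"Stub review of: {brief.get('title', 'untitled')}",
        "risks": brief.get("risks", []),
        "questions": ["What are the availability requirements?"],
        "checklist": ["Brief reviewed end to end"],
    }

def run_review(brief: dict, reviewer=stub_review) -> str:
    """Produce a structured JSON review from a parsed brief."""
    review = reviewer(brief)  # call the model (or the stub)
    return json.dumps(review, indent=2)

print(run_review({"title": "Sample system", "risks": ["single node"]}))
```

Passing the reviewer as a parameter keeps the orchestration testable: tests can inject the stub while production wires in the real LLM client.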