unified-review

Stars: 201 · Forks: 38 · Updated: March 4, 2026
SKILL.md Frontmatter

name: unified-review
description: Use this skill when orchestrating multiple review types. Use when a general review is needed without knowing which specific skill applies, a full multi-domain review is desired, or integrated reporting is needed. Do not use when the specific review type is known - use bug-review, test-review, etc. DO NOT use when: architecture-only focus - use architecture-review.
category: orchestration
tags: review, orchestration, code-quality, analysis, multi-domain
tools: skill-selector, context-analyzer, report-integrator
usage_patterns: auto-detect-review, full-review, focused-review
complexity: intermediate
estimated_tokens: 400
progressive_loading: true
dependencies: pensive:shared, imbue:proof-of-work, imbue:structured-output
orchestrates: pensive:rust-review, pensive:api-review, pensive:architecture-review, pensive:bug-review, pensive:test-review, pensive:makefile-review, pensive:math-review

Unified Review Orchestration

Intelligently selects and executes appropriate review skills based on codebase analysis and context.

Quick Start

# Auto-detect and run appropriate reviews
/full-review

# Focus on specific areas
/full-review api          # API surface review
/full-review architecture # Architecture review
/full-review bugs         # Bug hunting
/full-review tests        # Test suite review
/full-review all          # Run all applicable skills

Verification: Run pytest -v to verify tests pass.

When To Use

  • Starting a full code review
  • Reviewing changes across multiple domains
  • Need intelligent selection of review skills
  • Want integrated reporting from multiple review types
  • Before merging major feature branches

When NOT To Use

  • Specific review type known - use bug-review, test-review, etc. directly
  • Architecture-only focus - use architecture-review

Review Skill Selection Matrix

| Codebase Pattern | Review Skills | Triggers |
| --- | --- | --- |
| Rust files (*.rs, Cargo.toml) | rust-review, bug-review, api-review | Rust project detected |
| API changes (openapi.yaml, routes/) | api-review, architecture-review | Public API surfaces |
| Test files (test_*.py, *_test.go) | test-review, bug-review | Test infrastructure |
| Makefile/build system | makefile-review, architecture-review | Build complexity |
| Mathematical algorithms | math-review, bug-review | Numerical computation |
| Architecture docs/ADRs | architecture-review, api-review | System design |
| General code quality | bug-review, test-review | Default review |

Workflow

1. Analyze Repository Context

  • Detect primary languages from extensions and manifests
  • Analyze git status and diffs for change scope
  • Identify project structure (monorepo, microservices, library)
  • Detect build systems, testing frameworks, documentation
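The language-detection step above can be sketched as a simple scan over file extensions and manifest names. This is an illustrative sketch, not the skill's actual implementation; the mapping tables cover only a small subset of languages.

```python
from pathlib import Path

# Illustrative mappings; a real detector would cover many more languages.
EXTENSION_LANGS = {".rs": "rust", ".py": "python", ".go": "go"}
MANIFEST_LANGS = {"Cargo.toml": "rust", "pyproject.toml": "python", "go.mod": "go"}

def detect_languages(paths):
    """Return the set of languages suggested by file extensions and manifests."""
    langs = set()
    for p in map(Path, paths):
        if p.name in MANIFEST_LANGS:
            langs.add(MANIFEST_LANGS[p.name])
        elif p.suffix in EXTENSION_LANGS:
            langs.add(EXTENSION_LANGS[p.suffix])
    return langs
```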

2. Select Review Skills

# Detection logic
if has_rust_files():
    schedule_skill("rust-review")
if has_api_changes():
    schedule_skill("api-review")
if has_test_files():
    schedule_skill("test-review")
if has_makefiles():
    schedule_skill("makefile-review")
if has_math_code():
    schedule_skill("math-review")
if has_architecture_changes():
    schedule_skill("architecture-review")
# Default
schedule_skill("bug-review")
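
The detection predicates above can be made concrete as path checks over the changed files. A minimal runnable sketch (the patterns mirror the selection matrix; only a subset of skills is shown):

```python
from pathlib import Path

def select_skills(changed_files):
    """Map changed file paths to the review skills to schedule."""
    paths = [Path(f) for f in changed_files]
    skills = []
    if any(p.suffix == ".rs" or p.name == "Cargo.toml" for p in paths):
        skills.append("rust-review")
    if any(p.name == "openapi.yaml" or "routes" in p.parts for p in paths):
        skills.append("api-review")
    if any(p.name.startswith("test_") or p.stem.endswith("_test") for p in paths):
        skills.append("test-review")
    if any(p.name == "Makefile" for p in paths):
        skills.append("makefile-review")
    skills.append("bug-review")  # default: always schedule
    return skills
```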

3. Execute Reviews

Dispatch selected skills concurrently via the Agent tool. Use this mapping to resolve skill names to agent types:

| Skill Name | Agent Type | Notes |
| --- | --- | --- |
| bug-review | pensive:code-reviewer | Covers bugs, API, tests |
| api-review | pensive:code-reviewer | Same agent, API focus |
| test-review | pensive:code-reviewer | Same agent, test focus |
| architecture-review | pensive:architecture-reviewer | ADR compliance |
| rust-review | pensive:rust-auditor | Rust-specific |
| code-refinement | pensive:code-refiner | Duplication, quality |
| math-review | general-purpose | Prompt: invoke Skill(pensive:math-review) |
| makefile-review | general-purpose | Prompt: invoke Skill(pensive:makefile-review) |
| shell-review | general-purpose | Prompt: invoke Skill(pensive:shell-review) |

Rules:

  • Never use skill names as agent types (e.g., pensive:math-review is NOT an agent)
  • When pensive:code-reviewer covers multiple domains, dispatch once with combined scope
  • For skills without dedicated agents, use general-purpose and instruct it to invoke the Skill tool
  • Maintain consistent evidence logging across all agents
  • Track progress via TodoWrite
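
The "dispatch once with combined scope" rule can be sketched by grouping selected skills by their agent type. The mapping below reproduces the table; `plan_dispatch` is an illustrative helper name, not part of the skill's API.

```python
# Skill-to-agent mapping from the table above.
SKILL_AGENTS = {
    "bug-review": "pensive:code-reviewer",
    "api-review": "pensive:code-reviewer",
    "test-review": "pensive:code-reviewer",
    "architecture-review": "pensive:architecture-reviewer",
    "rust-review": "pensive:rust-auditor",
    "code-refinement": "pensive:code-refiner",
    "math-review": "general-purpose",
    "makefile-review": "general-purpose",
    "shell-review": "general-purpose",
}

def plan_dispatch(skills):
    """Group selected skills by agent so a shared agent is dispatched once."""
    dispatch = {}
    for skill in skills:
        agent = SKILL_AGENTS[skill]
        dispatch.setdefault(agent, []).append(skill)
    return dispatch
```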

4. Integrate Findings

  • Consolidate findings across domains
  • Identify cross-domain patterns
  • Prioritize by impact and effort
  • Generate unified action plan
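
"Prioritize by impact and effort" can be sketched as a sort over consolidated findings. The `impact` and `effort` fields (1-5 scales) are hypothetical; the skill's actual finding schema may differ.

```python
def prioritize(findings):
    """Sort findings: highest impact first, lowest effort breaking ties."""
    return sorted(findings, key=lambda f: (-f["impact"], f["effort"]))
```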

Deferred capture for backlog findings: Findings that are triaged to the backlog (out-of-scope for the current review or deferred by the team) should be preserved so they are not lost between review cycles. For each finding assigned to the backlog, run:

python3 scripts/deferred_capture.py \
  --title "<finding title>" \
  --source review \
  --context "Review dimension: <dimension>. <finding description>"

The <dimension> value should match the review skill that surfaced the finding (e.g. bug-review, api-review, architecture-review). This runs automatically after the action plan is finalised, without prompting the user.
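
A loop over backlog findings could assemble the command above per finding. This is a hedged sketch: the finding fields (`title`, `dimension`, `description`) are assumed names, and execution is left commented out since it depends on the repository layout.

```python
def capture_backlog(findings, script="scripts/deferred_capture.py"):
    """Build one deferred_capture.py command per backlog finding."""
    cmds = []
    for f in findings:
        cmd = [
            "python3", script,
            "--title", f["title"],
            "--source", "review",
            "--context", f"Review dimension: {f['dimension']}. {f['description']}",
        ]
        cmds.append(cmd)
        # e.g. subprocess.run(cmd, check=True) to actually execute
    return cmds
```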

Review Modes

Auto-Detect (default)

Automatically selects skills based on codebase analysis.

Focused Mode

Run specific review domains:

  • /full-review api → api-review only
  • /full-review architecture → architecture-review only
  • /full-review bugs → bug-review only
  • /full-review tests → test-review only

Full Review Mode

Run all applicable review skills:

  • /full-review all → Execute all detected skills
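
Resolving the `/full-review` argument into one of the three modes can be sketched as a small lookup. `parse_mode` is an illustrative helper, not the command's real parser.

```python
FOCUS_MODES = {
    "api": ["api-review"],
    "architecture": ["architecture-review"],
    "bugs": ["bug-review"],
    "tests": ["test-review"],
}

def parse_mode(arg=None):
    """Resolve a /full-review argument to a mode and an explicit skill list."""
    if arg is None:
        return ("auto", None)       # auto-detect from the codebase
    if arg == "all":
        return ("all", None)        # run every applicable skill
    if arg in FOCUS_MODES:
        return ("focused", FOCUS_MODES[arg])
    raise ValueError(f"unknown review focus: {arg}")
```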

Quality Gates

Each review must:

  1. Establish proper context
  2. Execute all selected skills successfully
  3. Document findings with evidence
  4. Prioritize recommendations by impact
  5. Create action plan with owners
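
The five gates above could be checked mechanically over a completed review record. All field names here are hypothetical stand-ins for whatever structure the orchestrator actually keeps.

```python
def gates_passed(review):
    """Check the five quality gates for a completed review (fields hypothetical)."""
    return all([
        review.get("context_established", False),                     # gate 1
        all(s["status"] == "ok" for s in review.get("skills", [])),   # gate 2
        all(f.get("evidence") for f in review.get("findings", [])),   # gate 3
        all("priority" in f for f in review.get("findings", [])),     # gate 4
        bool(review.get("action_plan", {}).get("owners")),            # gate 5
    ])
```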

Deliverables

Executive Summary

  • Overall codebase health assessment
  • Critical issues requiring immediate attention
  • Review frequency recommendations

Domain-Specific Reports

  • API surface analysis and consistency
  • Architecture alignment with ADRs
  • Test coverage gaps and improvements
  • Bug analysis and security findings
  • Performance and maintainability recommendations

Integrated Action Plan

  • Prioritized remediation tasks
  • Cross-domain dependencies
  • Assigned owners and target dates
  • Follow-up review schedule

Modular Architecture

All review skills use a hub-and-spoke architecture with progressive loading:

  • pensive:shared: Common workflow, output templates, quality checklists
  • Each skill has modules/: Domain-specific details loaded on demand
  • Cross-plugin deps: imbue:proof-of-work, imbue:diff-analysis/modules/risk-assessment-framework

This reduces token usage by 50-70% for focused reviews while maintaining full capabilities.

Exit Criteria

  • All selected review skills executed
  • Findings consolidated and prioritized
  • Action plan created with ownership
  • Evidence logged per structured output format

Supporting Modules

Troubleshooting

Common Issues

If the auto-detection fails to identify the correct review skills, explicitly specify the mode (e.g., /full-review rust instead of just /full-review). If integration fails, check that TodoWrite logs are accessible and that evidence files were correctly written by the individual skills.