
code-refinement

Analyze and improve living code quality: duplication, algorithmic efficiency, clean code principles, architectural fit, anti-slop patterns, and error handling robustness. Use when improving code quality, reducing AI slop, refactoring for clarity, optimizing algorithms, or applying clean code principles.

Stars: 201 · Forks: 38 · Updated: March 4, 2026
SKILL.md Frontmatter
name: code-refinement
description: Analyze and improve living code quality: duplication, algorithmic efficiency, clean code principles, architectural fit, anti-slop patterns, and error handling robustness. Use when improving code quality, reducing AI slop, refactoring for clarity, optimizing algorithms, or applying clean code principles. Do not use when removing dead/unused code (use conserve:bloat-detector), reviewing for bugs (use pensive:bug-review), or selecting architecture paradigms (use archetypes skills). This skill actively improves living code, complementing bloat detection (dead code removal) with quality refinement (living code improvement).
version: 1.7.1
alwaysApply: false
category: code-quality
tags: refactoring, clean-code, algorithms, duplication, anti-slop, craft
tools: Read, Grep, Glob, Bash
usage_patterns: code-quality-improvement, duplication-reduction, algorithm-optimization, clean-code-enforcement
complexity: advanced
model_hint: deep
estimated_tokens: 350
progressive_loading: true
dependencies: pensive:shared, pensive:safety-critical-patterns, imbue:proof-of-work
modules: modules/duplication-analysis.md, modules/algorithm-efficiency.md, modules/clean-code-checks.md, modules/architectural-fit.md


Code Refinement Workflow

Analyze and improve living code quality across six dimensions.

Quick Start

/refine-code
/refine-code --level 2 --focus duplication
/refine-code --level 3 --report refinement-plan.md

When To Use

  • After rapid AI-assisted development sprints
  • Before major releases (quality gate)
  • When code "works but smells"
  • Refactoring existing modules for clarity
  • Reducing technical debt in living code

When NOT To Use

  • Removing dead/unused code (use conserve:bloat-detector)
  • Reviewing for bugs (use pensive:bug-review)
  • Selecting architecture paradigms (use archetypes skills)

Analysis Dimensions

| # | Dimension                 | Module               | What It Catches                                             |
|---|---------------------------|----------------------|-------------------------------------------------------------|
| 1 | Duplication & Redundancy  | duplication-analysis | Near-identical blocks, similar functions, copy-paste        |
| 2 | Algorithmic Efficiency    | algorithm-efficiency | O(n^2) where O(n) works, unnecessary iterations             |
| 3 | Clean Code Violations     | clean-code-checks    | Long methods, deep nesting, poor naming, magic values       |
| 4 | Architectural Fit         | architectural-fit    | Paradigm mismatches, coupling violations, leaky abstractions |
| 5 | Anti-Slop Patterns        | clean-code-checks    | Premature abstraction, enterprise cosplay, hollow patterns  |
| 6 | Error Handling            | clean-code-checks    | Bare excepts, swallowed errors, happy-path-only             |
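As a concrete illustration of dimension 2, a scan might flag a list-membership test inside a loop and propose a set-based rewrite. The function names below are hypothetical, not part of the skill's modules:

```python
# Hypothetical before/after pair for an Algorithmic Efficiency finding.

def shared_tags_slow(a: list[str], b: list[str]) -> list[str]:
    # O(n*m): `tag in b` rescans list b for every element of a.
    return [tag for tag in a if tag in b]

def shared_tags(a: list[str], b: list[str]) -> list[str]:
    # O(n+m): build a set once, then use O(1) membership checks.
    seen = set(b)
    return [tag for tag in a if tag in seen]

print(shared_tags(["x", "y", "z"], ["y", "z"]))
```

Both versions return the same result; only the complexity changes, which is exactly the kind of behavior-preserving refinement this dimension targets.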

Progressive Loading

Load modules based on refinement focus:

  • modules/duplication-analysis.md (~400 tokens): Duplication detection and consolidation
  • modules/algorithm-efficiency.md (~400 tokens): Complexity analysis and optimization
  • modules/clean-code-checks.md (~450 tokens): Clean code, anti-slop, error handling
  • modules/architectural-fit.md (~400 tokens): Paradigm alignment and coupling

Load all for comprehensive refinement. For focused work, load only relevant modules.

Required TodoWrite Items

  1. refine:context-established — Scope, language, framework detection
  2. refine:scan-complete — Findings across all dimensions
  3. refine:prioritized — Findings ranked by impact and effort
  4. refine:plan-generated — Concrete refactoring plan with before/after
  5. refine:evidence-captured — Evidence appendix per imbue:proof-of-work

Workflow

Step 1: Establish Context (refine:context-established)

Detect project characteristics:

# Language detection
find . -not -path "*/.venv/*" -not -path "*/__pycache__/*" \
  -not -path "*/node_modules/*" -not -path "*/.git/*" \
  \( -name "*.py" -o -name "*.ts" -o -name "*.rs" -o -name "*.go" \) \
  | head -20

# Framework detection
ls package.json pyproject.toml Cargo.toml go.mod 2>/dev/null

# Size assessment
find . -not -path "*/.venv/*" -not -path "*/__pycache__/*" \
  -not -path "*/node_modules/*" -not -path "*/.git/*" \
  \( -name "*.py" -o -name "*.ts" -o -name "*.rs" \) \
  | xargs wc -l 2>/dev/null | tail -1

Step 2: Dimensional Scan (refine:scan-complete)

Load relevant modules and execute analysis per tier level.

Step 3: Prioritize (refine:prioritized)

Rank findings by:

  • Impact: How much quality improves (HIGH/MEDIUM/LOW)
  • Effort: Lines changed, files touched (SMALL/MEDIUM/LARGE)
  • Risk: Likelihood of introducing bugs (LOW/MEDIUM/HIGH)

Priority = HIGH impact + SMALL effort + LOW risk first.
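The ranking rule above can be sketched as a simple additive score. The numeric weights here are assumptions for illustration, not part of the skill's specification:

```python
# Illustrative priority scoring; the weights are assumed, not specified.
IMPACT = {"HIGH": 3, "MEDIUM": 2, "LOW": 1}
EFFORT = {"SMALL": 3, "MEDIUM": 2, "LARGE": 1}  # smaller effort ranks higher
RISK   = {"LOW": 3, "MEDIUM": 2, "HIGH": 1}     # lower risk ranks higher

def priority(finding: dict) -> int:
    return (IMPACT[finding["impact"]]
            + EFFORT[finding["effort"]]
            + RISK[finding["risk"]])

findings = [
    {"id": "F1", "impact": "HIGH", "effort": "SMALL", "risk": "LOW"},
    {"id": "F2", "impact": "LOW",  "effort": "LARGE", "risk": "HIGH"},
]
ranked = sorted(findings, key=priority, reverse=True)
# A HIGH-impact, SMALL-effort, LOW-risk finding sorts to the front.
```

Any monotone scoring works; the point is only that impact, effort, and risk are combined so quick, safe wins surface first.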

Step 4: Generate Plan (refine:plan-generated)

For each finding, produce:

  • File path and line range
  • Current code snippet
  • Proposed improvement
  • Rationale (which principle/dimension)
  • Estimated effort
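One possible shape for a plan entry, covering the five fields above, is sketched below. The class and field names are illustrative, not a format mandated by the skill:

```python
from dataclasses import dataclass

# Hypothetical record for one refactoring-plan entry.
@dataclass
class RefactorFinding:
    path: str                # file path
    lines: tuple[int, int]   # line range (start, end)
    current: str             # current code snippet
    proposed: str            # proposed improvement
    rationale: str           # which principle/dimension is violated
    effort: str              # SMALL / MEDIUM / LARGE

finding = RefactorFinding(
    path="src/orders.py",
    lines=(42, 58),
    current="for o in orders:\n    if o.id in seen_ids: ...",
    proposed="store seen_ids in a set to avoid repeated O(n) lookups",
    rationale="Algorithmic Efficiency (dimension 2)",
    effort="SMALL",
)
```

A structured record like this keeps each finding self-contained, so the plan can be reviewed or applied one entry at a time.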

Step 5: Evidence Capture (refine:evidence-captured)

Document with imbue:proof-of-work (if available):

  • [E1], [E2] references for each finding
  • Metrics before/after where measurable
  • Principle violations cited

Fallback: If imbue is not installed, capture evidence inline in the report using the same [E1] reference format without TodoWrite integration.

Tiered Analysis

| Tier               | Time      | Scope                                                                   |
|--------------------|-----------|-------------------------------------------------------------------------|
| 1: Quick (default) | 2-5 min   | Complexity hotspots, obvious duplication, naming, magic values          |
| 2: Targeted        | 10-20 min | Algorithm analysis, full duplication scan, architectural alignment      |
| 3: Deep            | 30-60 min | All above + cross-module coupling, paradigm fitness, comprehensive plan |

Cross-Plugin Dependencies

| Dependency                        | Required? | Fallback                                            |
|-----------------------------------|-----------|-----------------------------------------------------|
| pensive:shared                    | Yes       | Core review patterns                                |
| imbue:proof-of-work               | Optional  | Inline evidence in report                           |
| conserve:code-quality-principles  | Optional  | Built-in KISS/YAGNI/SOLID checks                    |
| archetypes:architecture-paradigms | Optional  | Principle-based checks only (no paradigm detection) |

Graceful Degradation

When optional plugins are not installed, the skill degrades gracefully:

  • Without imbue: Evidence captured inline, no TodoWrite proof-of-work
  • Without conserve: Uses built-in clean code checks (subset)
  • Without archetypes: Skips paradigm-specific alignment, uses coupling/cohesion principles only