---
name: ml-experiment-tracker
description: Plan reproducible ML experiment runs with explicit parameters, metrics, and artifacts. Use before model training to standardize tracking-ready experiment definitions.
---
ML Experiment Tracker
Overview
Generate structured experiment plans that can be logged consistently in experiment tracking systems.
Workflow
- Define dataset, target task, model family, and parameter search space.
- Define metrics and acceptance thresholds before training.
- Produce a run plan with version and artifact expectations.
- Export the run plan for execution in tracking tools.
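The workflow steps above can be sketched as a minimal, tracking-ready plan structure. All field names and values here are illustrative assumptions, not a schema this skill mandates:

```python
import json

# Illustrative run plan following the workflow steps above; dataset name,
# model family, and metric values are hypothetical examples.
run_plan = {
    "dataset": {"name": "customer-churn", "version": "v3"},
    "task": "binary-classification",
    "model_family": "gradient-boosted-trees",
    "search_space": {  # explicit parameter search space
        "learning_rate": [0.01, 0.05, 0.1],
        "max_depth": [3, 5, 7],
    },
    "metrics": {  # defined before training, per the workflow
        "primary": {"name": "roc_auc", "threshold": 0.85},
        "baseline": {"name": "roc_auc", "value": 0.80},
    },
    "artifacts": ["model.pkl", "metrics.json", "confusion_matrix.png"],
}

# Export the plan as machine-readable JSON for a tracking tool.
plan_json = json.dumps(run_plan, indent=2)
print(plan_json)
```

Keeping the plan as plain data (rather than code) makes it straightforward to log in most experiment tracking systems.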
Use Bundled Resources
- Run scripts/build_experiment_plan.py to generate consistent run plans.
- Read references/tracking-guide.md for the reproducibility checklist.
Guardrails
- Keep inputs explicit and machine-readable.
- Always include metrics and baseline criteria.
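A lightweight check of these guardrails might look like the following sketch; the key names mirror a hypothetical plan structure and are not a fixed schema:

```python
def validate_plan(plan: dict) -> list[str]:
    """Return a list of guardrail violations for an experiment plan."""
    errors = []
    # Inputs must be explicit and machine-readable: require structured
    # mappings, not free-text descriptions.
    for field in ("dataset", "search_space"):
        if not isinstance(plan.get(field), dict):
            errors.append(f"{field} must be an explicit mapping")
    # Metrics and baseline criteria are always required.
    metrics = plan.get("metrics") or {}
    if "primary" not in metrics:
        errors.append("missing primary metric")
    if "baseline" not in metrics:
        errors.append("missing baseline criteria")
    return errors

# Usage: an empty list means the plan passes the guardrails.
incomplete = {"dataset": {"name": "d"}, "search_space": {}, "metrics": {}}
print(validate_plan(incomplete))
```

Running such a check before training keeps non-conforming plans out of the tracking system instead of surfacing gaps after runs complete.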