dl-transformer-finetune
Build transformer fine-tuning run plans with task settings, hyperparameters, and model-card outputs. Use for repeatable Hugging Face or PyTorch finetuning workflows.
stars: 1,933 · forks: 367 · updated: March 4, 2026
SKILL.md

---
name: dl-transformer-finetune
description: Build transformer fine-tuning run plans with task settings, hyperparameters, and model-card outputs. Use for repeatable Hugging Face or PyTorch finetuning workflows.
---
# DL Transformer Finetune

## Overview
Generate reproducible fine-tuning run plans for transformer models and downstream tasks.
## Workflow
- Define base model, task type, and dataset.
- Set training hyperparameters and evaluation cadence.
- Produce run plan plus model card skeleton.
- Export configuration-ready artifacts for training pipelines.
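The workflow above can be sketched as a small plan builder. This is a minimal illustration, not the bundled script's actual interface: the function name, field layout, and default hyperparameters are assumptions chosen to show the shape of a reproducible, export-ready run plan.

```python
# Hypothetical run-plan builder; names and defaults are illustrative only.
import json


def build_finetune_plan(base_model, task, dataset,
                        seed=42, output_dir="runs/exp-001"):
    """Assemble a reproducible fine-tuning run plan as a plain dict."""
    return {
        "base_model": base_model,      # e.g. a Hugging Face model id
        "task": task,                  # e.g. "text-classification"
        "dataset": dataset,
        "seed": seed,                  # explicit seed for reproducibility
        "output_dir": output_dir,      # explicit output directory
        "hyperparameters": {
            "learning_rate": 2e-5,
            "batch_size": 16,
            "num_epochs": 3,
            "warmup_ratio": 0.1,
        },
        # Evaluation cadence: run eval at the end of every epoch.
        "evaluation": {"strategy": "epoch", "metric": "f1"},
    }


plan = build_finetune_plan("bert-base-uncased", "text-classification", "glue/sst2")
# Sorted, indented JSON keeps the exported artifact deterministic and diff-friendly.
print(json.dumps(plan, indent=2, sort_keys=True))
```

Emitting the plan as sorted JSON means two runs of the builder with the same inputs produce byte-identical artifacts, which is what makes the plans safe to check into a training pipeline's configuration.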
## Use Bundled Resources
- Run `scripts/build_finetune_plan.py` for deterministic plan output.
- Read `references/finetune-guide.md` for hyperparameter baseline guidance.
## Guardrails
- Keep run plans reproducible with explicit seeds and output directories.
- Include evaluation and rollback criteria.
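One way to enforce these guardrails is a pre-flight check that rejects any plan missing a seed, output directory, or evaluation and rollback criteria. The field names below are assumptions for illustration, not a contract defined by this skill.

```python
# Hypothetical guardrail check: verify a run plan carries the fields
# required for reproducibility and safe rollback before training starts.
REQUIRED_FIELDS = ("seed", "output_dir", "evaluation", "rollback")


def validate_plan(plan):
    """Return the names of required guardrail fields missing from a plan."""
    return [field for field in REQUIRED_FIELDS if field not in plan]


plan = {
    "seed": 42,
    "output_dir": "runs/exp-001",
    "evaluation": {"strategy": "epoch", "metric": "f1"},
    # Rollback criterion expressed as a human-readable rule (illustrative).
    "rollback": {"criterion": "eval_f1 drops more than 2 points vs baseline"},
}

missing = validate_plan(plan)
print("ok" if not missing else f"missing: {missing}")  # prints "ok"
```

Failing fast on a missing seed or rollback rule is cheaper than discovering after a multi-hour training run that the result cannot be reproduced or reverted.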