A practical workshop covering common failure modes in LLM fine-tuning, including issues with dataset quality, learning rate schedules, overfitting, catastrophic forgetting, and evaluation methodology. Participants work through concrete examples using open-source tooling to identify and correct these problems in realistic fine-tuning scenarios.
This page was last edited on 2024-11-21.