This video continues the code walkthrough of a complete fine-tuning pipeline built with Hugging Face Transformers and PEFT (Parameter-Efficient Fine-Tuning). We also provide a Python script and Jupyter notebook discussion to illustrate tokenization and embeddings. The walkthrough covers the following steps (illustrative code sketches follow the list):
1. Using 4-bit quantization with bitsandbytes
2. Instruction-style prompt formatting
3. Fine-tuning with QLoRA (quantized LoRA) via PEFT
4. Loss evaluation and training visualization
5. Formatting and printing reports
6. Generating graphs
7. Pushing the fine-tuned LLM to the Hugging Face Hub
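To give a flavor of step 1, here is a minimal sketch of loading a base model in 4-bit with bitsandbytes. The model ID is a placeholder and the quantization settings are common QLoRA defaults, not necessarily the exact configuration used in the video.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder; substitute the base model used in the video

# NF4 quantization with bfloat16 compute and double quantization
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```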
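For step 2, instruction-style prompt formatting typically turns each record into a single training string. This is a sketch using Alpaca-style markers; the field names `instruction`, `input`, and `output` are assumptions and should be adjusted to your dataset's schema.

```python
def format_instruction(example: dict) -> str:
    """Build one training string from an instruction-style record.

    The keys 'instruction', 'input', and 'output' are assumed field names;
    change them to match the dataset actually used in the walkthrough.
    """
    if example.get("input"):
        return (
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            f"### Response:\n{example['output']}"
        )
    return (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['output']}"
    )
```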
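For step 3, attaching LoRA adapters to the quantized model with PEFT might look like the sketch below. The rank, alpha, dropout, and target modules are illustrative values, not necessarily those chosen in the video, and `model` is the 4-bit model loaded earlier.

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Prepare the 4-bit model for training (enables gradient checkpointing, casts norms, etc.)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,                                 # adapter rank (illustrative)
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],  # assumed attention projections; depends on the architecture
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # shows that only a small fraction of weights are trainable
```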
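For steps 4 and 6, once training has finished the logged losses can be plotted. This sketch assumes `trainer` is a `transformers.Trainer` (or TRL `SFTTrainer`) that has completed `trainer.train()` with an evaluation dataset configured.

```python
import matplotlib.pyplot as plt

# trainer.state.log_history holds the dicts logged during training and evaluation
history = trainer.state.log_history

train_steps = [h["step"] for h in history if "loss" in h]
train_loss = [h["loss"] for h in history if "loss" in h]
eval_steps = [h["step"] for h in history if "eval_loss" in h]
eval_loss = [h["eval_loss"] for h in history if "eval_loss" in h]

plt.plot(train_steps, train_loss, label="training loss")
plt.plot(eval_steps, eval_loss, label="validation loss")
plt.xlabel("step")
plt.ylabel("loss")
plt.legend()
plt.savefig("loss_curve.png")
```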
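Finally, for step 7, publishing the trained adapter and tokenizer to the Hugging Face Hub could look like this; the repo ID is a placeholder and a write-enabled access token is required.

```python
from huggingface_hub import login

login()  # prompts for a Hugging Face access token with write permission

repo_id = "your-username/your-finetuned-model"  # placeholder repo ID

# For a PEFT model this uploads the LoRA adapter weights and config
model.push_to_hub(repo_id)
tokenizer.push_to_hub(repo_id)
```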
Whether you're an ML engineer, data scientist, or curious developer, this tutorial equips you with practical tools and insights to fine-tune your own models efficiently and effectively.
Part 6 - Fine-Tuning LLMs Explained + Full Python Walkthrough | Hugging Face, QLoRA, PEFT | NatokHD