In this video, I fine-tune Phi-3 Mini (4-bit) on the Python Code Instructions 18K Alpaca dataset to transform it into a Python code-generating language model. I cover the full workflow—from preparing the instruction-style dataset and training the model on limited hardware to evaluating its performance on real Python coding tasks—showing how instruction tuning can significantly improve code generation even with a small, efficient LLM.
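As a minimal sketch of the dataset-preparation step described above, the snippet below formats one Alpaca-style record into a single training prompt. The field names (`instruction`, `input`, `output`) follow the common Alpaca convention for this dataset, and the helper `build_prompt` is a hypothetical name, not code taken from the notebook.

```python
# Hypothetical helper: turn one Alpaca-style record into a prompt string
# suitable for instruction tuning. Field names ("instruction", "input",
# "output") are the standard Alpaca convention and are assumed here.

def build_prompt(example: dict) -> str:
    """Format one Alpaca-style record into a single training prompt."""
    instruction = example["instruction"]
    context = example.get("input", "")
    response = example["output"]
    if context:
        return (
            "### Instruction:\n" + instruction + "\n\n"
            "### Input:\n" + context + "\n\n"
            "### Response:\n" + response
        )
    return (
        "### Instruction:\n" + instruction + "\n\n"
        "### Response:\n" + response
    )

example = {
    "instruction": "Write a Python function that returns the square of a number.",
    "input": "",
    "output": "def square(n):\n    return n * n",
}
print(build_prompt(example))
```

In the video's workflow, a function like this would be mapped over the 18K dataset records before tokenization, so the model learns the instruction-response structure it is later prompted with.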
Google Colab code:
https://colab.research.google.com/drive/151AzqoNCLdlnftNAM1VXcSb31X8mri_k?usp=sharing
Fine-Tuning Phi-3 Mini for Python Code Generation