This post explores fine-tuning Gemma LLMs to write data science code. It covers installing and importing Keras and KerasNLP, loading and configuring a Gemma model, and an introduction to LoRA and why it makes fine-tuning LLMs cheaper. The post concludes with tips for improving the fine-tuning process.
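To make the LoRA benefit concrete, here is a minimal NumPy sketch of the core idea (the shapes and scaling are illustrative, not the Gemma/KerasNLP API): instead of updating a full weight matrix W, LoRA freezes W and learns two small factors A and B, applying W + (alpha / r) · B·A, so only a tiny fraction of parameters is trainable.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank, alpha = 512, 512, 4, 8  # illustrative sizes, not Gemma's

W = rng.normal(size=(d_out, d_in))         # frozen pretrained weight
A = rng.normal(size=(rank, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, rank))                # zero-init so the update starts at 0

def lora_forward(x):
    # base path plus low-rank update, scaled by alpha / rank
    return x @ W.T + (alpha / rank) * (x @ A.T @ B.T)

x = rng.normal(size=(2, d_in))
y = lora_forward(x)                        # shape (2, 512)

full_params = W.size                       # 262144
lora_params = A.size + B.size              # 4096 -> ~1.6% of the full matrix
```

In KerasNLP the same idea is exposed as a one-liner on the loaded model (e.g. enabling LoRA with a small rank on the backbone), which is what the post's fine-tuning walkthrough relies on.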

4 min read · From medium.com
Table of contents
- Finetuning with LoRA
- Fine-tuning
- Inference
- Conclusion
- More Resources
