Learning how to fine-tune large language models is a key step toward becoming a professional AI engineer.

We just posted a course on the freeCodeCamp.org YouTube channel that offers a comprehensive guide to fine-tuning LLMs, taking you from the basics to advanced practical applications. It's taught by Tatev, an expert with over seven years of experience in data science, data engineering, and AI engineering. She is also the CEO of Lunar Tech.

Throughout the course, you'll learn the key differences between fine-tuning, pre-training, and prompt engineering, and dive into powerful methodologies such as supervised fine-tuning and reinforcement learning from human feedback (RLHF). The course also covers QLoRA, a technique that makes it possible to fine-tune massive models like Llama 70B on a home workstation.

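To give a rough sense of the idea behind QLoRA before you start the course, here is a minimal sketch (not code from the course) using Hugging Face's transformers, peft, and bitsandbytes libraries: the base model is loaded in 4-bit precision and kept frozen, and only small LoRA adapter matrices are trained. The model name and hyperparameters below are placeholders, not values taken from the course.

```python
# Minimal QLoRA sketch: 4-bit quantized base model + trainable LoRA adapters.
# Model name, target modules, and hyperparameters are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder; use any causal LM you have access to

# 4-bit NF4 quantization keeps the frozen base weights small enough for a single GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Only the low-rank adapter matrices are trained; the quantized base model stays frozen.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # which projections get adapters is a design choice
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the full parameter count
```

From here, the wrapped model can be passed to a standard training loop or a Trainer; the course goes into the details of data preparation, training, and evaluation.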
Here are the sections covered in this course:

  • What is Fine-Tuning and How is it Different?

  • Hands-on Methodologies

  • Deep Dive into Parameter Efficient Fine-Tuning

  • Exploring QLoRA: A Revolutionary Method

  • Practical Case Studies

  • Instructor Introduction

  • Course Outline (More Detail)

  • Highlight of the Course: Parameter Efficient Fine-Tuning

  • Who is this Course For?

  • Module 1: Introduction to Fine-Tuning

  • The Benefits of Fine-Tuning

  • First Part: Fine-Tuning LLMs Module

  • Fine-Tuning Allocation in LLM Life Cycle

  • Pre-trained vs Fine-Tuned Model

  • Understanding Shortcomings and Specialization

  • Fine-Tuning Impact Example: Chatbot

  • Formal Definition of Fine-Tuning

  • Fine-Tuning Examples: Doctor and Lawyer

  • Pre-Training vs. Fine-Tuning

  • Prompt Engineering vs. Fine-Tuning

  • Pros & Cons of Prompt Engineering vs. Fine-Tuning

  • Fine-Tuning Benefits & Demerits

  • Step-by-Step Fine-Tuning Process

Watch the full course on the freeCodeCamp.org YouTube channel (2-hour watch).