Learn how to work with a wide range of open large language models (LLMs) such as Gemma, Kimi, and GLM across various local and cloud-based environments.
We just posted a course on the freeCodeCamp.org YouTube channel, taught by Andrew Brown, that explores how to use coding harnesses like Claude Code and the Pi Coding Agent to build real-world agentic workflows while benchmarking model performance and hardware requirements.
The course provides a practical look at the current state of open models by running "smoke tests," such as building Flappy Bird clones, to evaluate how different models handle real-world coding tasks. You will explore the hardware requirements for local execution, including the VRAM limitations that often make cloud-hosted options more practical for large context windows.
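To see why VRAM becomes the bottleneck, a rough estimate helps: the weights alone for a model take roughly (parameter count × bytes per parameter), and the KV cache grows linearly with context length. The sketch below is a simplified back-of-the-envelope calculation, not from the course; the 27B parameter count and the layer/head numbers in the usage example are illustrative, not any specific model's configuration.

```python
def weights_vram_gb(num_params_b: float, bytes_per_param: int = 2) -> float:
    """Approximate VRAM (GiB) just to hold the model weights.

    bytes_per_param=2 assumes FP16/BF16; 4-bit quantization would halve
    it twice (bytes_per_param=0.5).
    """
    return num_params_b * 1e9 * bytes_per_param / 1024**3


def kv_cache_vram_gb(layers: int, kv_heads: int, head_dim: int,
                     context_len: int, bytes_per_value: int = 2) -> float:
    """Approximate VRAM (GiB) for the KV cache at a given context length.

    Factor of 2 covers both the K and V tensors per layer.
    """
    total_bytes = 2 * layers * kv_heads * head_dim * context_len * bytes_per_value
    return total_bytes / 1024**3


if __name__ == "__main__":
    # A hypothetical 27B-parameter model in FP16: ~50 GiB for weights alone,
    # already past a 24 GiB consumer GPU before any context is processed.
    print(f"weights: {weights_vram_gb(27):.1f} GiB")

    # A hypothetical config (62 layers, 8 KV heads, head_dim 128) at a
    # 128k-token context adds tens of GiB more just for the cache.
    print(f"kv cache @128k: {kv_cache_vram_gb(62, 8, 128, 131072):.1f} GiB")
```

Numbers like these are why the course leans on cloud-hosted endpoints for long-context work: quantization shrinks the weights, but the KV cache still scales with every token of context you keep.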
Andrew also evaluates various coding harnesses, like Claude Code and the Pi Coding Agent. By the end of the course, you will understand which models, such as Kimi 2.5 and Gemma 4, are most reliable for tool calling and structured code generation.
Watch the full course for free on the freeCodeCamp.org YouTube channel.