I'm excited to share my experience with a local, open-source alternative to Claude Code, and it's completely free! But can it really compete with the pricey plans offered by Claude Code? Let's dive in and find out.
The Promise of Free AI Coding Tools
There's been a buzz around Goose and Qwen3-coder, two AI tools that could potentially replace expensive Claude Code plans. Developed by Jack Dorsey's company, Block, Goose is an open-source agent framework, while Qwen3-coder is a coding-centric large language model. Together, they offer a promising alternative, but how well do they actually perform?
Setting Up the Trio: Goose, Ollama, and Qwen3-coder
In this first article, I'll guide you through the process of integrating these three tools. We'll start by downloading Goose and Ollama, and then we'll install the Qwen3-coder model within Ollama. It's important to get the order right; I learned the hard way that Goose needs Ollama to function!
Installing Ollama and Qwen3-coder
I recommend installing Ollama first. It's a simple process, and I prefer the app version over the command-line install. Once it's running, you'll see a chat interface with a default model, gpt-oss-20b. From there, switch to the qwen3-coder:30b model, which has about 30 billion parameters. Note that the download is large, around 17GB, so make sure you have sufficient storage.
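If you prefer the terminal to the Ollama app, the same steps can be done from the command line. This is a sketch based on Ollama's standard CLI; the model tag assumes the qwen3-coder:30b listing in Ollama's model library:

```shell
# Pull the Qwen3-coder model (a large download, roughly 17GB)
ollama pull qwen3-coder:30b

# Confirm it's installed and check its size on disk
ollama list

# Optional: run it directly to verify the model loads
ollama run qwen3-coder:30b
```

Either route leaves you in the same place: the model stored locally and ready for Ollama to serve.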
One of the key advantages of this setup is that your AI runs locally on your machine, keeping your data secure and private.
Making Ollama Visible
To let other applications talk to Ollama, you need to expose it to your network. I let Ollama store its models in the default .ollama directory; keep in mind that the leading dot makes the directory hidden, so those multi-gigabyte model files end up buried out of sight. I also set my context length to 32K tokens, which should be sufficient for most tasks.
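In the app version these are toggles in Ollama's settings; on the command line, the same network exposure and context length can be set with environment variables. Treat this as a sketch, since exact variable support depends on your Ollama version:

```shell
# Bind the Ollama server to all interfaces so other apps (like Goose) can reach it
export OLLAMA_HOST=0.0.0.0:11434

# Set the default context window to 32K tokens
export OLLAMA_CONTEXT_LENGTH=32768

# Restart the server so the settings take effect
ollama serve

# And to see how much disk the hidden model directory is actually using:
du -sh ~/.ollama
```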
Installing Goose
Next, we install Goose. It comes in desktop and CLI versions, so choose the one that suits your system. When you launch Goose, you'll see a welcome screen with various configuration options. Since we're aiming for a free setup, go to the Other Providers section and click Go to Provider Settings.
Here, you'll find a long list of agent tools and LLMs. Scroll down, find Ollama, and hit Configure. This step is crucial as it sets up the connection between Goose and Ollama.
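If you'd rather skip the desktop app, Goose also ships a CLI that walks through the same provider setup interactively. The install script path below is my understanding of the current one in Block's goose repository, so verify it there before running:

```shell
# Install the Goose CLI (macOS/Linux; check the block/goose repo for the current script)
curl -fsSL https://github.com/block/goose/releases/download/stable/download_cli.sh | bash

# Walk through provider setup interactively; choose Ollama when prompted
goose configure
```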
Configuring the Connection
You'll be prompted to choose a model. Again, select qwen3-coder:30b. Once done, hit Select Model, and you've successfully installed and configured a local coding agent!
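For CLI users, the same provider and model choice can be expressed non-interactively. The environment variable names below reflect my reading of Goose's configuration docs and may vary by version:

```shell
# Point Goose at the local Ollama model (variable names assumed; check your Goose version's docs)
export GOOSE_PROVIDER=ollama
export GOOSE_MODEL=qwen3-coder:30b

# Start a coding session against the local model
goose session
```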
Testing Goose
Now, it's time to put Goose through its paces. I used my standard test challenge: building a simple WordPress plugin. Unfortunately, Goose/Qwen3 failed on the first attempt, and even after I explained the problem, it took a few more tries to get it right.
First Impressions
I was a bit disappointed that it took five attempts for Goose to succeed in my simple test. However, it's important to note that agentic coding tools like Claude Code and Goose work directly on the source code, so repeated corrections can improve the codebase.
My colleague, Tiernan Ray, had a different experience with Ollama on his M1 Mac, finding the performance unbearable. In contrast, I'm running this setup on an M4 Max Mac Studio with 128GB of RAM, and I've noticed good overall performance. I didn't see a significant difference in turnaround time compared to cloud-based products like Claude Code and OpenAI Codex.
Final Thoughts
These are my initial impressions, and I plan to run a bigger project through this free solution to compare it with the expensive alternatives. Stay tuned for that analysis!
Have you tried running a coding-focused LLM locally with tools like Goose, Ollama, or Qwen? Share your experiences and hardware setup in the comments. I'd love to hear how it compares to cloud options like Claude or OpenAI Codex.