There’s a theory circulating online, repeating like an echo, that AI’s future will be shaped by a handful of companies – essentially, a race for AI monopoly. The idea is that soon, every company will simply rent a “brain” from some model provider like OpenAI or Anthropic, and build on top of that. AI will become an oligopoly, dominated by just a few powerful players controlling the language models and applications that drive it.
I think this is profoundly misguided. The real story of AI’s future will not be the centralized few owning all the models. Instead, the open-source movement will have more influence than most people are willing to admit.
There are a few go-to reasons people dismiss open-source AI, and they sound reasonable at first.
The first is the resource argument. Open-source AI can’t compete with the deep pockets of industry giants. Creating foundation models is expensive, and the average company will always prefer outsourcing their AI to the big players. These companies have the infrastructure and talent to scale, while the small players can’t hope to match their capabilities. Open-source AI can’t keep up.
Then, there’s safety. The argument goes that open-source AI is a ticking time bomb – a bunch of rogue researchers experimenting with intelligence in their basements without proper guardrails. It’s true that the lack of central control raises legitimate safety concerns, but I don’t think it’s a dealbreaker.
Finally, people argue that open-source AI just doesn’t perform as well. Closed-source models, they say, are more capable of sophisticated reasoning, while open models just can’t cut it.
But here’s the thing: none of these concerns are fatal. They miss the forest for the trees.
Outsourcing a business function to a third party is fine when that function isn’t core to what you do. If you’re a SaaS company and you outsource your payment processing, it’s not going to kill your business. But AI? For an AI-native business, AI is the core. You can’t afford to hand it to a third party, particularly one that might have access to your most sensitive data.
Think about it: if you’re building an AI product, your business depends on proprietary data and proprietary models. The more you outsource, the more you put your business at risk. If you don’t control the model, you don’t control your future.
Now, let’s talk about reasoning. This is what everyone’s obsessed with – AI’s ability to reason like humans. Researchers love to talk about scaling models to create artificial intelligence that can solve complex problems, and sure, that’s fascinating. But 85% of the time, users don’t need it. They just need AI to summarize, structure information, or explain things in simple terms. Open-source models already excel at these basic tasks, and they can be fine-tuned to meet most needs with enough data.
Take Llama 2, for example. It doesn’t need to match GPT-4’s reasoning ability to be useful, and it doesn’t need to run on the same hardware either. Thanks to open weights and a wave of inference optimizations, Llama 2 can be run far more cheaply than a dependency on GPT-4. With a simple tweak, an indie hacker can double the usable context length for free – the kind of modification a closed model simply doesn’t let you make. And the hardware needed to run these models? A MacBook. Imagine running capable AI on your own device. The implications for cost and security are huge.
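To make the “simple tweak” concrete: one popular technique for extending context is linear position interpolation of rotary position embeddings (RoPE), the positional scheme Llama 2 uses. Here’s a minimal sketch of the idea in plain Python – the function name and dimensions are illustrative, not taken from any library:

```python
def rope_angles(position, dim=8, base=10000.0, scale=1.0):
    """Rotary-embedding angles for one token position.

    A `scale` below 1.0 compresses positions (linear "position
    interpolation"): with scale=0.5, position 4096 produces the same
    angles the model saw at position 2048 during training, so a 2x
    longer context stays inside the rotation range the weights know.
    """
    pos = position * scale
    return [pos / (base ** (2 * i / dim)) for i in range(dim // 2)]

# Doubling the context with scale=0.5: position 4096 lands exactly
# on the angles of trained position 2048.
assert rope_angles(4096, scale=0.5) == rope_angles(2048, scale=1.0)
```

In practice this is often exposed as a one-line configuration change on an open model (a RoPE scaling factor) rather than any retraining – which is exactly the kind of knob a hosted, closed API doesn’t hand you.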
And truthfulness? Sure, open-source models might not be as accurate as their closed counterparts. But who says truthfulness is always the point? In some cases, “hallucination” in AI can be a feature, not a bug. If you’re working in creative fields – storytelling, music, design – sometimes you want AI to generate something unexpected, not just correct. Open-source models give you that flexibility, and they’re only getting better at it.
Let’s talk about image generation. Stable Diffusion XL, the leading open-source model, is almost on par with Midjourney. Sure, the ergonomics might not be as smooth, but it gives users control. You can fine-tune models, create custom outputs, and generate structured results – all without the closed-source limitations.
What matters most in AI isn’t the glossy packaging but control. Open-source models give you full control over how you use them, from the data to the infrastructure. You can tweak, optimize, and iterate on your own terms. Stable Diffusion and Llama 2 might not have the polish of proprietary models, but they give you immense flexibility and the power to build exactly what you need.
Then there’s security. Open-source models offer transparency. You own them. You can audit them, ensure they meet your standards, and mitigate risks. When you use closed models, you’re trusting someone else with your data, your business, and your future. This isn’t just a small issue – it’s critical. Open source is safer because it’s built on trust, transparency, and a community of developers actively working to improve it.
So why the preference for closed-source? It boils down to two things: convenience and hype. Using a closed model is easy. Plug-and-play. Open-source models, though, require work. They demand more expertise and more effort, but that’s the price of control. The pace of innovation is faster in the open-source world. If you’re willing to dive in and get your hands dirty, you’ll be rewarded with the ability to create products that are more flexible, more cost-effective, and more secure.
And then there’s mindshare. Right now, closed-source AI providers dominate the conversation. Open-source has a much smaller footprint in the public eye, even though it’s poised to outperform in the long run. The hype cycle has shifted the spotlight to companies like OpenAI, Pinecone, and LangChain. But open-source is on the verge of maturing, and as it does, it will surpass these closed models in flexibility, cost, and control.
So, what’s the takeaway? If you’re thinking long-term, if you’re building something that matters, open-source is the way to go. The future of AI isn’t in the hands of a few monopolies – it’s in the hands of the people who control the models, the data, and the direction of their own future. Open AI is the future. It’s just not the future that’s being sold to you in the headlines.