Open Source > Open Source
“There’s more to Michael Jordan than basketball; there’s more to basketball than basketball.”
- Michael Jordan
This is one of my favorite quotes. My AmLit professor in college introduced it on the first day of class to encourage us to look beyond, beneath, and between the written words of the stories we read, and to wrestle with meaning apart from the explicit or literal. I’m reminded of this quote often and have been returning to it recently while thinking about the value of open-source technology models and the intent of openness as artificial intelligence rapidly proliferates. Open source, at its best, is much more than a publicly available, free source code base. The traits, functions, influences, and effects that together make up that “much more” may have a distinct and essential role in shaping our AI future.
If the past three decades of internet history have taught us anything, it’s this: open ecosystems—not walled gardens—catalyze explosive growth, innovation, and global access.
In the early 2000s, the decline of AOL’s tightly controlled web experience coincided with the rise of open browsers like Firefox and, eventually, Google Chrome. This shift allowed users to explore the internet on their own terms—outside of AOL’s curated portal—and marked a turning point in internet freedom and adoption.
Fast forward a few years, and the launch of Android OS—built on the open-source Linux kernel—ignited an unprecedented surge in global mobile internet access. Its open architecture enabled manufacturers around the world to produce affordable smartphones, connecting billions and accelerating the internet’s reach far beyond the desktop.
And during the rise of cloud computing, open-source tools like Kubernetes redefined enterprise software infrastructure. Kubernetes, originally developed by Google and released as open source, became the backbone of modern cloud-native applications—powering everything from startups to Fortune 500s.
Each of these milestones illustrates a clear pattern: the internet grows, thrives, and reaches new heights when its building blocks are open. Today, as we stand at the threshold of a new AI era, the same lesson applies.
AI systems—especially large foundation models—are poised to reshape economies, governance, creativity, and daily life. But much like the early internet, their long-term societal value hinges on openness. This isn’t just a technical debate—it’s a foundational choice about who gets to shape the future. Openness in AI offers a cascade of extraordinary benefits. These include:
Transparency and Accountability: Closed models hinder scrutiny and amplify concerns about bias, discrimination, and opacity. Open models allow independent researchers, watchdogs, and everyday users to better understand how decisions are made and who is accountable for them.
Democratic Access to Knowledge: When model weights, training data, and code are open, students, educators, and developers anywhere in the world can learn, experiment, and innovate. This lowers barriers to entry, decentralizes expertise, and broadens participation. (A brief sketch of what that access looks like in practice follows this list.)
Unexpected Innovation and Public Interest Use Cases: Just as open internet protocols gave rise to Wikipedia, peer-to-peer networks, and WhatsApp, open AI models will enable “out-of-left-field” applications—from rural education tools to climate modeling software—that closed systems would never prioritize.
A More Competitive Ecosystem: Open tools help level the playing field. They enable smaller startups, nonprofits, and academic teams to build and iterate without relying on exclusive access granted by dominant players. In doing so, openness actively counters market consolidation and encourages competition.
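To make the access point concrete, here is a minimal sketch, assuming the open-source Hugging Face transformers library and GPT-2 (chosen only because it is a small, openly licensed example model), of how anyone with a laptop can download open weights, inspect the architecture, and run the model:

```python
# A minimal sketch of open-weights access, assuming the open-source
# Hugging Face "transformers" library; GPT-2 is used only because it
# is a small, openly licensed example model.
from transformers import AutoModelForCausalLM, AutoTokenizer

# The tokenizer and weights are freely downloadable; no gatekeeper.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Openness means the architecture itself is inspectable.
print(model.config)  # layer count, hidden size, vocabulary size, etc.

# And anyone can experiment with it directly.
inputs = tokenizer("Open ecosystems tend to", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same few lines work for thousands of openly licensed models on public hubs; that interchangeability is precisely the low barrier to entry described above.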
—
None of this is speculative—it’s historical precedent. Open-source software powers the web servers that host our sites, the operating systems on our phones, and the frameworks our apps are built with. Companies like Google, Meta, Amazon, and even Microsoft (once the emblem of closed systems) all rely on and contribute to open-source communities. The AI stack should follow suit.
Of course, openness isn’t a silver bullet, nor is it a given; it must be pursued thoughtfully, with safeguards against misuse, attention to deployment environments, and proper governance. But dismissing open AI models as inherently “dangerous” misses the mark. As with any transformative technology, the marginal risks of openness must be weighed against its structural benefits—including transparency, innovation, and equitable access. We don’t yet know what the most transformative AI applications of the next decade will look like. But if history is any guide, they’ll come not from behind closed doors, but from the edges—from open systems that invite experimentation, learning, and remixing.
——
The more I think about it, the more I believe that the strongest case for open-source AI isn’t just technical or economic—it’s human.
First, open-source projects thrive not only because they are accessible, but because they attract dynamic communities. These communities form in layers: core contributors, frequent users, evangelists, fans. That network of affiliation drives a kind of cultural flywheel. When the product is strong, this layered engagement doesn’t just sustain it—it propels it. The popularity and loyalty that grow around open-source tools aren’t just a byproduct of utility; they’re a function of taste, trust, and shared ownership. These affinities can be the fuel that carries a company to breakaway velocity. And in a world where pace of adoption and community momentum often predict market success, that’s an advantage no proprietary tool can easily manufacture.
Second, open source enables real-time, collective learning from mistakes. Contributors build on each other’s work, flag issues publicly, and iterate fast. This capacity for shared error correction is essential—especially in AI, where models must be not just powerful, but appropriate, accurate, and reliable. This particular open-source advantage has been a blind spot for some current closed AI systems, where mistakes are simply discarded rather than learned from; the priority is to be correct—to give the right answer, make the optimal move, or find the best outcome. Yet as the level of abstraction increases—especially with messy, real-world decisions—what is “correct” often becomes contextual, subjective, and even debatable. Amid nuance, ambiguity, and shades of grey, there is intrinsic value in being wrong: making mistakes, reflecting on them, and learning through the process. Historically, that’s been a very human advantage: our ability to grow through trial and error.
What’s most exciting about emerging multi-agent AI system architectures is their potential to engage more like collaborators — not just as oracles of correctness, but as systems that can learn, adapt, and evolve through interaction, feedback, and even failure. This shift — from optimization to exploration, from perfection to progress — could mark a fundamental turning point in how AI systems develop and how we work alongside them.
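As a purely illustrative sketch of that shift (every name below is hypothetical, standing in for real model calls), the collaborator pattern reduces to a propose/critique/revise loop in which one agent drafts, another flags problems, and feedback accumulates rather than being discarded:

```python
# A toy sketch of a multi-agent propose/critique/revise loop.
# All functions are hypothetical stand-ins for real model calls;
# the point is the shape of the interaction, not the implementation.

def propose(task: str, feedback: list[str]) -> str:
    # A drafting agent would call an LLM here, conditioned on feedback.
    return f"draft for {task!r} incorporating {len(feedback)} notes"

def critique(draft: str, round_num: int) -> list[str]:
    # A reviewer agent would evaluate the draft; this stub approves
    # after two rounds just to keep the example self-contained.
    if round_num >= 2:
        return []
    return [f"round {round_num}: tighten {draft!r}"]

def collaborate(task: str, max_rounds: int = 5) -> str:
    feedback: list[str] = []
    draft = ""
    for round_num in range(max_rounds):
        draft = propose(task, feedback)
        issues = critique(draft, round_num)
        if not issues:               # the agents converged through iteration
            return draft
        feedback.extend(issues)      # mistakes become inputs, not dead ends
    return draft

print(collaborate("summarize an open-source license"))
```

The detail worth noticing is the `feedback.extend(issues)` line: failed drafts aren’t thrown away; they’re folded into the next attempt, which is exactly the trial-and-error dynamic described above.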