What Is Open Source When Building Changes?
Questions for Open Source in the AI Era
Open source has always been defined by its output: code released under licenses that grant specific freedoms. But the practice of open source—the culture, the collaboration, the workflows—grew up around a particular mode of building. Humans typed code. Other humans reviewed it, debated it, extended it. The license was the legal wrapper, but the community was the engine.
Now AI is changing how building happens. Not just faster—different in kind.
AI changes the assumptions open source practices were built on. When code was hard to write, the writing was the bottleneck—and reviews, governance, collaboration all organized around that constraint. Now code generation is faster and more abundant, so the bottleneck moves to everything else: understanding systems, maintaining quality, building trust, sustaining communities.
The Shift in Building
I’ve been experimenting with vibe coding for a while now—building applications, prototyping ideas, pushing to see how far it can go. By vibe coding, I mean intent-driven development: I write prompts describing what I want, AI agents generate code, and I steer the results toward coherence.
This isn’t coding in the traditional sense—it’s building software by steering AI rather than writing code. I describe intent, iterate on outputs, curate results. The artifact that emerges came from a process that looks nothing like the open source workflows we’ve built norms around.
Traditional open source assumed:
Humans understand what they write
Contributions come from people who’ve engaged with the codebase
Scarcity of contribution creates natural quality gates
Reproducing a project requires enough effort to keep people connected to upstream
AI-assisted building breaks every one of those assumptions. Not catastrophically, not all at once, but persistently.
There are many pressure points worth examining. Here are four I keep coming back to, each with a path toward degradation and a path toward evolution. The degradation paths are what happens by default. The evolution paths require deliberate choice.
1. What Happens When Code Is Abundant?
Code generation is fast now, and getting faster. If this continues—and there’s no reason to think it won’t—we’ll likely see repositories filling with code faster than anyone can understand it. A single developer with AI assistance could produce what used to require a team, but without the distributed understanding that team would have built. What happens then?
The degradation path: Maintainers get overwhelmed. Plausible-looking PRs flood in, and each one requires human judgment to evaluate. The cost to contribute is minutes; the cost to review is hours. Maintainers shift from architects to janitors. The people who understand systems best leave because the work becomes unbearable.
The evolution path: Maintainers have always been stretched thin. What changes is the ratio: the gap between how fast contributions arrive and how fast they can be reviewed keeps widening. The asymmetry gets worse.
Some projects will experiment with new policies: clearer boundaries around what kinds of AI-generated contributions are welcome, higher bars for what requires human review, rate limits to slow the flood. Whether these work, we don’t know yet.
The deeper shift is recognizing that code isn’t the scarce resource anymore. The thinking that shapes what gets built—design, architecture, constraints—and the work that keeps it running—testing, maintenance, support—these matter more when code is abundant.
But this shift is hard to see from inside. Many projects will keep doing what they’ve always done until the pressure becomes unavoidable. The ones that recognize early that the game has changed might have an advantage, but that requires letting go of the idea that code is the main event, and that’s hard for communities built around code.
2. Who Understands the Code?
When I vibe code, I can build things that work without fully understanding how they work. That’s not a criticism—it’s the point. But it raises a question for new projects: if software gets built by people steering AI rather than writing code, who actually understands these systems? Established open source projects have communities of people who know the codebase deeply. New projects built with AI assistance might never develop that.
The degradation path: A generation of “black box” projects emerges. They work, but fewer people can debug or extend them. When something breaks deeply, there’s nobody who understands enough to fix it. These projects are brittle in ways that don’t show up until they fail.
The evolution path: Surprisingly, this might be the hardest pressure point to resolve. It seems simple—just understand the code—but it cuts to something deeper: trust.
Open source has always run on trust. We trust projects because we trust the people behind them—their reputations, their track records, the fact that collaboration tends to surface problems and improve quality over time. The code is open, but what really matters is that humans we can evaluate have looked at it, argued about it, improved it.
Are we ready to hand that trust over to AI agents? For many people, not yet. And maybe that’s the honest answer for now. Some projects will be built fast with AI and used by people who accept the trade-off. Others will need the slower, more legible process of human understanding to earn trust for critical uses.
The problem is when brittle projects become infrastructure others depend on. That’s where signals matter: ways to indicate “this project has human understanding behind it” versus “this was vibe coded and nobody really knows how it works.” Transparency about what a project is—and isn’t—becomes more important than pretending everything is equally robust.
There’s also a longer view. If AI keeps improving—if it actually becomes the genius coder we’ve been told it will be—then maybe deep human understanding stops mattering. The AI understands it. The AI can fix it. We’ll see. But we’re not there yet, and projects being built now will have to live with the trade-offs made now.
3. Who Sustains the Commons?
AI is starting to intermediate between developers and open source. Instead of engaging with a project’s community, you ask an AI and get an answer. Meanwhile, a new generation of builders is emerging who create software with AI tools without ever participating in open source—many don’t know these communities exist. If this trend continues, what happens to the communities themselves?
The degradation path: Fewer issues, fewer discussions, fewer shared norms. The commons hollow out. Download counts stay high while community engagement drops. Open source becomes invisible infrastructure—used by everyone, sustained by fewer and fewer.
The evolution path: If there’s an answer here, it probably starts with collective action among open source communities, foundations, and the organizations that depend on open source infrastructure.
The vibe coders aren’t a niche—they’re a growing part of how software gets built. More and more people are steering AI rather than writing code. If open source communities don’t reach these builders, they’ll build without open source values, without understanding why shared infrastructure matters, without contributing back.
But reaching them requires making the case in terms that matter to them. Open source offers vibe coders real value: a starting point that saves time, security they can’t evaluate themselves, maintenance they don’t want to do, a way for their projects to outlive their attention span. The pitch can’t be “come contribute to our community”—it has to be “here’s why connecting to upstream serves your interests.”
The hard truth: these new builders could become the next generation of open source contributors, but only if someone shows them why it’s worth it. That’s an opportunity waiting to be seized.
4. What Replaces the Effort Barrier?
Open source economics have always depended on an effort barrier. You could fork a project, but actually reproducing its functionality took work. That effort kept people connected to upstream—it was easier to contribute back than to maintain a separate fork.
AI is eroding that barrier. Point a model at a codebase or its documentation, describe what you want, and it can reproduce the functionality in an afternoon. The code gets regenerated; the license doesn’t follow. The effort that kept people connected is disappearing.
The degradation path: The connection to upstream breaks. When reproducing a project is trivial, there’s no reason to stay linked. Forks don’t just fork—they disappear into their own trajectories. Free riding, always present, becomes the norm rather than the exception. The feedback loops that improved projects—bug reports, patches, shared learning—go quiet.
Projects respond differently. Some go defensive—restrictive licenses, closed development, walls going up. Others can’t afford to care. The ecosystem fragments. The network effects that made open source work—projects building on projects, standards emerging from collaboration, sharing creating more sharing—start to weaken.
If this continues, open source becomes a documentation archive: a reference collection, but no longer a living ecosystem.
The evolution path: Honestly, we don’t know what replaces the effort barrier. The value of open source was never just “the code is free”—it was that code was inspectable, modifiable, understandable, and that recreating it from scratch was prohibitively hard. The first three don’t disappear. But the fourth is eroding.
The economics are shifting, and the new equilibrium isn’t clear yet. Open source may need to find new ways to create connection—new reasons for people to stay linked to upstream rather than spinning off. What those are, we’re still figuring out.
Directions Worth Exploring
Those are the pressure points. Now for some ideas—not solutions, but directions that seem worth exploring.
Starting the conversation with open source communities. Before anything else, we need to understand where existing communities are on this. Are they seeing these pressures? Are they interested in rethinking what community and contribution mean in the age of AI? Some may be eager to experiment. Others may not see the problem yet. The first step is outreach—not to prescribe solutions, but to learn what’s already happening and where there’s appetite for change.
Semantic Open Source. If AI-assisted building means code is an output and context is the input, then “source” needs to expand. The prompts, constraints, design documents, and examples that shape generated code become first-class artifacts—versioned, reviewed, shared alongside the code. Something like a CONTEXT.md or .ai-context file that captures generative context in a format AI tools can read and humans can govern.
This also means designing repositories to be legible to AI tools—clear architecture, explicit constraints, documentation that explains not just what the code does but why it’s structured this way and what shouldn’t change. This isn’t dumbing down—it’s making the implicit explicit.
But this requires understanding how AI agents actually work—how they preserve context, what conventions they follow, how they maintain coherence across a codebase. We need to learn from that and build those patterns into how we structure repositories. This is easier to describe than to do—we don’t have good conventions yet.
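There is no established format for any of this yet. Purely as an illustration, and for an invented project, a CONTEXT.md along these lines is one guess at what it could look like:

```markdown
# CONTEXT.md (hypothetical sketch, not an established convention)

## Purpose
A small task queue for internal tools. Optimized for operational simplicity, not throughput.

## Architecture constraints
- Single service, one database, no message broker.
- All state changes go through the functions in storage/; nothing writes to the database directly.

## What should not change
- The on-disk job format is frozen until the next major version.
- Error messages returned by the public API are part of the contract.

## Guidance for AI agents and the people steering them
- Prefer adding a plugin over modifying core/.
- Generated changes must include tests that exercise the storage/ code they touch.

## Provenance
- Prompts and design notes for major features live in design/, versioned alongside the code.
```

The specifics matter less than the principle: the context that shaped the code is written down, versioned, and reviewable, for both the humans governing the project and the AI tools that will read it next.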
Outreach to vibe coders. The case for open source isn’t being made to the people who most need to hear it. What does open source offer builders who steer AI rather than write code? A starting point that saves time. Security they can’t evaluate themselves. Maintenance they don’t want to do. A way for their projects to outlive their attention span. This isn’t about convincing them to become traditional contributors—it’s about showing them why connecting to upstream serves their interests.
Architecture for builders who don’t understand the code. What would it look like to design open source projects specifically for people who can’t read the codebase? Clear extension points. Well-defined boundaries. Examples of how to add functionality without breaking things. This flips the traditional assumption that contributors understand what they’re contributing to. Some projects might benefit from being more opinionated and constrained—easier to use correctly, harder to use wrong.
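To make that concrete, here is a minimal Python sketch of what a constrained extension point could look like. The names (Record, register_exporter) and the validation rules are invented for this example, not drawn from any particular project:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass(frozen=True)
class Record:
    """The only data a plugin ever sees; the rest of the system stays out of reach."""
    id: str
    payload: str

# Registry of exporters, keyed by output format.
_EXPORTERS: Dict[str, Callable[[Record], str]] = {}

def register_exporter(fmt: str):
    """Register a function that turns a Record into a string for one format.

    The checks run at registration time, so a contributor who never reads the
    core codebase gets an immediate, specific error instead of a silent break.
    """
    def decorator(func: Callable[[Record], str]) -> Callable[[Record], str]:
        if fmt in _EXPORTERS:
            raise ValueError(f"an exporter for {fmt!r} already exists")
        _EXPORTERS[fmt] = func
        return func
    return decorator

# What a contribution might look like: one small function behind a narrow interface.
@register_exporter("csv")
def to_csv(record: Record) -> str:
    return f"{record.id},{record.payload}"
```

The interface itself is beside the point. What matters is that the surface a contributor touches is small, validated, and hard to break, so a maintainer can evaluate the contribution without the contributor needing to understand the whole system.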
Embracing AI building processes. Rather than treating AI-generated contributions as a threat to be managed, some projects might integrate AI into their workflows. AI-assisted review. AI-generated tests for incoming PRs. AI explanations of proposed changes. This doesn’t solve the trust problem, but it might help maintainers keep pace with volume. The projects that figure out human-AI collaboration in maintenance might have an advantage.
The role of AI companies. There’s a missing voice in this conversation: the AI labs and tool builders whose products are changing how people interact with open source. They’re training on open source codebases, building tools that intermediate between developers and communities, shaping how a new generation builds software. What responsibility do they have to the ecosystem they depend on? This isn’t a question I have answers to, but it’s one that needs asking.
None of these are proven. They’re hypotheses about what might help open source adapt. The honest answer is we’re in early days, and the practices that work will emerge from experimentation, not from essays like this one.