Open source has never been about a sprawling community of contributors, despite popular imagination. Most critical software is maintained by a tiny core, often one or two people doing unpaid work that companies rely on as essential infrastructure. That arrangement worked when contributing had friction, but artificial intelligence is erasing that friction at an alarming rate.
The Problem of AI-Generated Pull Requests
Even Mitchell Hashimoto, co-founder of HashiCorp and a pillar of open source, is considering closing external pull requests to his projects. Not because he has lost faith in open source, but because he is drowning in “slop PRs” generated by large language models and AI agents. Flask creator Armin Ronacher calls the underlying phenomenon “agent psychosis”: developers addicted to the dopamine hit of agentic coding, spinning up agents that run wild through their own projects and, eventually, through everyone else’s.
These pull requests are often vibe-slop: code that feels right because it was generated by a statistical model but lacks the context, trade-offs, and historical understanding that a human maintainer brings. The situation is getting worse as tools like Claude Code can research codebases, execute commands, and submit pull requests autonomously. While this is a productivity gain for individual developers, it is a nightmare for maintainers of popular repositories. The barrier to producing plausible patches has collapsed, but the barrier to responsibly merging them has not.
Economic Asymmetry of Review
This creates a brutal asymmetry: a developer takes 60 seconds to prompt an agent to fix typos and optimize loops across a dozen files, but a maintainer needs an hour to carefully review those changes, verify edge cases, and ensure alignment with the project’s long-term vision. Multiply that by hundreds of contributors using personal LLM assistants, and careful review becomes impossible; maintainers simply walk away. The OCaml community recently rejected an AI-generated pull request containing over 13,000 lines of code, citing copyright concerns, a lack of review resources, and the long-term maintenance burden. One maintainer warned that such low-effort submissions risk bringing the entire pull-request system to a halt.
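The asymmetry can be made concrete with a back-of-envelope calculation. The per-PR figures (60 seconds to prompt, an hour to review) come from the text; the weekly PR count is an assumed round number for illustration:

```typescript
// Back-of-envelope sketch of the review asymmetry.
// Per-PR times are from the article; the PR volume is an illustrative assumption.
const promptMinutes = 1;   // ~60 seconds for a contributor to prompt an agent
const reviewMinutes = 60;  // ~1 hour for a maintainer to review the result
const weeklyPRs = 100;     // hypothetical agent-generated PRs per week

// Total hours of effort on each side of the exchange.
const contributorHours = (weeklyPRs * promptMinutes) / 60;
const maintainerHours = (weeklyPRs * reviewMinutes) / 60;

console.log(contributorHours.toFixed(1)); // → 1.7 hours of contributor effort in
console.log(maintainerHours.toFixed(1));  // → 100.0 hours of review demanded
```

Under these assumptions, under two hours of prompting generates a hundred hours of review work: a roughly 60-to-1 transfer of cost from contributor to maintainer.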
Even GitHub is feeling the pressure. The company is reportedly exploring tighter pull request controls, and even UI-level options to delete pull requests outright, because maintainers are overwhelmed by AI-generated submissions. If the host of the world’s largest code forge is considering a kill switch for pull requests, this is no longer a niche annoyance; it is a structural shift in how open source is made.
Impact on Small Libraries
Small open source projects are hit hardest. Developer Nolan Lawson noted that libraries like blob-util, a JavaScript utility for working with Blobs that racked up millions of downloads, are becoming obsolete. In the age of Claude and advanced LLMs, developers can simply ask an AI to write a utility function in seconds instead of adding a dependency; AI has made low-value utility libraries unnecessary. Something deeper is lost too: these libraries were educational tools, places where developers learned by reading others’ work. When they are replaced by ephemeral AI-generated snippets, the teaching mentality at the heart of open source vanishes. We trade understanding for instant answers.
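To see why such libraries are evaporating, consider the kind of one-line helpers a package like blob-util bundled, which are now trivially written inline. A minimal sketch, assuming a Node 18+ runtime (where Blob and Buffer are global); the function names are illustrative, not blob-util's actual API:

```typescript
// Inline replacements for the sort of Blob/base64 helpers a small
// utility library once provided. Assumes Node 18+; names are illustrative.

async function blobToBase64(blob: Blob): Promise<string> {
  // Read the Blob's bytes and base64-encode them.
  return Buffer.from(await blob.arrayBuffer()).toString("base64");
}

function base64ToBlob(b64: string, type = "application/octet-stream"): Blob {
  // Decode base64 back into bytes and wrap them in a new Blob.
  return new Blob([Buffer.from(b64, "base64")], { type });
}
```

A developer who once added a dependency for these two functions can now have an AI assistant emit them on demand, at the cost of never reading the library code they used to learn from.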
Ronacher suggests a retreat: build it yourself. If every dependency means constant churn, the logical response is to become more self-reliant: use AI to help, but keep the code inside your own walls. The irony is stark: AI reduces demand for small libraries while simultaneously flooding the libraries that remain with low-quality contributions.
The Future of Open Source
This leads to a bifurcation. On one side, massive enterprise-backed projects like Linux or Kubernetes will thrive behind sophisticated gates and AI-filtering tools. On the other, smaller open source projects run by individuals or tiny cores will stop accepting outside contributions. The era of radical openness, in which anyone could contribute, is giving way to radical curation. The future of open source may belong to the few, not the many. Open source’s community was always somewhat mythical, but AI has made the myth unsustainable. We are returning to a world where contributions from known, trusted humans carry weight, and output from whoever prompted a machine does not.
In this new world, the most successful projects will be those that are hardest to contribute to, demanding high human effort and established relationships. They will reject slop loops and agent psychosis in favor of slow, deliberate, and deeply personal development. The bazaar was a fun idea while it lasted, but it could not survive the arrival of the robots. The future of open source is smaller, quieter, and much more exclusive. That might be the only way it survives. In the end, we do not need more code; we need more care for the humans who shepherd communities and create code that endures beyond a simple prompt.
Source: InfoWorld News