AI Generated Code Isn’t Cheating: OSS Needs to Talk About It
[Header image: Cycling Art, Energy, and Locomotion (1889); George Sturdy and Solomon Young’s vehicle of amusement.]
Without an AI coding policy that promotes transparency alongside innovation, open source codebases are going to struggle.

Remember early 2025? “Vibe coding” was a meme, seen mostly as a tool for casual builders or those new to coding. The term was often used disparagingly, to imply a lack of deep technical expertise. Some very cool basic applications were being built, but AI coding assistants couldn’t reliably function in complex codebases. What a difference a year has made!

It’s now 2026, and we find ourselves living in a new reality. Some of the most influential voices in software engineering, like DHH (Ruby on Rails), Andrej Karpathy (previously OpenAI, Tesla), Tobi Lutke (Shopify), Salvatore Sanfilippo (Redis), and Mitchell Hashimoto (Ghostty, previously HashiCorp), are publicly embracing a new paradigm: completely AI-generated code controlled by human-in-the-loop prompting. It was also recently publicized that Linus Torvalds (creator of Linux and Git) is leveraging AI vibe-coding in his side projects.

AI is everywhere: if you’re a software developer, you’ve almost certainly tried at least one AI-assisted coding tool over the past year. It’s safe to assume that a large portion of developers are using AI to help them, yet we still know shockingly little about how their code was produced. This opacity is outdated, especially now that the practice is being normalized by industry leaders.

The open source community is built on foundations of transparency and collaboration, with knowledge sharing as a key component. At Mozilla.ai, we believe we must embrace and encourage the disclosure of AI usage as quickly as possible. We need to move away from asking “Should we use AI?” and towards a structure that clearly defines where we encourage AI usage and how we document it.

In our project any-llm, we’ve started to iterate on this philosophy by creating a pull request template that requests a few pieces of information whenever a PR is submitted.

Here’s a snippet of the relevant part of our pull request template:

## **Checklist**
  - [ ] I understand the code I am submitting.
  - **AI Usage:**
      - [ ] No AI was used.
      - [ ] AI was used for drafting/refactoring.
      - [ ] This is fully AI-generated.

## **AI Usage Information**
  - AI Model used:
  - AI Developer Tool used:
  - Any other info you'd like to share:

When answering questions from the reviewer, please respond yourself; do not copy/paste the reviewer’s comments into an AI system and paste its answer back. We want to discuss with you, not your AI :)
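
To make this concrete, here is how a hypothetical contributor might fill in the template for a fully AI-generated change (the model and tool named below are illustrative examples, not endorsements):

```markdown
## **Checklist**
  - [x] I understand the code I am submitting.
  - **AI Usage:**
      - [ ] No AI was used.
      - [ ] AI was used for drafting/refactoring.
      - [x] This is fully AI-generated.

## **AI Usage Information**
  - AI Model used: Claude Sonnet 4
  - AI Developer Tool used: Cursor
  - Any other info you'd like to share: Generated from the issue description; I reviewed and tested the final diff myself.
```

Because GitHub reads pull request templates from `.github/pull_request_template.md`, the checklist is pre-populated in every new PR description, making disclosure the default rather than an extra step.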

Why This Metadata Matters

Contextual Reviewing

First, we request that contributors specify their level of AI usage: was AI used to draft and make edits, or was the contribution completely AI-generated, with the contributor only directing it via plain-language prompts? Both are acceptable, but knowing which helps a reviewer decide how to approach the review. If we know the code is completely AI-generated, we can be candid with our feedback and direct the contributor towards improving their prompting or AI coding configuration to raise quality. Without this transparency, giving feedback can be difficult: a reviewer doesn’t want to offend the contributor by insinuating that their work came from a bot.

Toolchain Discovery

Second, we request information about the contributors’ AI setup: what model(s) and IDE/CLI tools were used? This is valuable metadata for crowdsourcing best practices. Maybe one model or tool works amazingly well with a certain codebase or language! Openly sharing this information lets all of us learn from each other.
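
As a sketch of what that crowdsourcing could look like, the script below tallies the models reported across recently merged PRs. It is a minimal example that assumes the GitHub `gh` CLI is installed and authenticated, that contributors filled in the template’s `AI Model used:` field, and that `OWNER/REPO` is replaced with a real repository:

```python
import json
import re
import subprocess
from collections import Counter

# Fetch the descriptions of recently merged PRs via the GitHub CLI.
# OWNER/REPO is a placeholder; point it at the repository you want to survey.
out = subprocess.run(
    ["gh", "pr", "list", "--repo", "OWNER/REPO", "--state", "merged",
     "--limit", "200", "--json", "body"],
    capture_output=True, text=True, check=True,
).stdout

# Extract the value of the "AI Model used:" line from each PR description.
pattern = re.compile(r"AI Model used:\s*(\S[^\n]*)", re.IGNORECASE)
models = Counter()
for pr in json.loads(out):
    match = pattern.search(pr.get("body") or "")
    if match:
        models[match.group(1).strip()] += 1

# Print a simple tally of which models contributors report using.
for model, count in models.most_common():
    print(f"{count:4d}  {model}")
```

Even a rough tally like this can start to surface which models and tools work well for a particular codebase.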

Keeping Review Human

Lastly, we request that any responses to comments come from the contributor themselves and not their AI tool. It is frustrating to write comments without knowing if a human is on the other side reading and responding to the feedback. The open source community is a wonderful place to learn from each other, and that learning happens best when humans talk to humans. Of course, AI can be used to help the contributor brainstorm or improve their grammar, but we think the core discussion should still happen between two humans.

We welcome community opinions and hope to see similar approaches adopted across the open source community. Let’s keep learning and developing together!