The LLVM compiler project has adopted a new policy banning code contributions submitted by AI agents without human approval, as well as AI-assisted contributions when not reviewed and understood by the contributor.
The policy is required because of the increasing number of “LLM [large language model] assisted nuisance contributions to the project,” according to the documentation update. It follows a community debate on the matter that highlighted several issues with AI-assisted code.
LLVM is among the most critical open source projects, and its decisions may influence others facing similar problems. The cURL project recently closed its bug bounty program following pressure on maintainers caused by low-quality AI submissions. Other projects to propose or adopt AI policies include Fedora Linux, Gentoo Linux, Rust, and QEMU; in most cases these are stricter than the policy LLVM has adopted.
The LLVM project’s AI policy is summarized as permitting AI assistance provided there is a human in the loop. This means not just glancing over the code, but reviewing all of it and being able to answer questions about it without referring back to the AI that generated it. In addition, contributors should label contributions that contain substantial AI-generated content. Agents that submit contributions without human approval are therefore forbidden.
There is also a ban on the use of AI tools for GitHub issues marked “good first issue.” These are commonly non-urgent issues that make suitable learning opportunities for new contributors, and using AI squanders that opportunity.
Some in the community regard the new policy as too permissive. “An overly permissive AI contribution policy betrays the duty of care we have to our users,” said one, while another said “I’m vastly in favor of changing our AI policy to just disallow it.”
The policy identifies a core issue: that use of AI “shifts effort from the implementor to the reviewer.” Maintainer time is a scarce resource, and contributions must be worth more than the time it takes to review them.
Copyright is another issue, and the policy states that AI systems raise unanswered questions around copyright. Contributors are asked to ensure that code submitted conforms to the LLVM license, which is based on Apache 2.0.
Until AI coding tools became popular, LLVM maintainers would often invest time in hand-holding new contributors whose first submissions were of low quality, since doing so provided encouragement and helped them learn. In the case of AI-assisted submissions, though, this investment is wasted, since the contributor may not even understand the code they submitted.
Another aspect of this problem is that AI influence may be hard to spot or even to define. “Is brace auto-completion banned in this state? What about Visual Studio’s IntelliCode which suggests a few lines to complete common patterns?” asked another community member, who also observed that the project already had a policy encouraging contributors to review generated code.