Fedora council approves policy allowing AI-assisted contributions
Intense debate ends in approval – but subject to full responsibility and disclosure
The Fedora Council has approved AI-assisted contributions to its Linux distribution, following intense debate and subject to strict conditions.
Contributors must take full accountability for their submissions and must disclose AI tool usage when AI generates a significant portion of their work unchanged. Minor AI assistance, such as grammar checking, does not require disclosure.
The policy was drafted in late September, following an AI survey in summer 2024, and was then discussed by the community. It also includes sections on the use of AI in project management and on Fedora as a platform for AI development. One notable clause states that "any user-facing AI assistant, especially one that sends data to a remote service, must not be enabled by default and requires explicit, informed consent" – something users of commercial operating systems would likely welcome.
Fedora operations architect Aoife Moloney posted yesterday that "the Fedora council has approved the latest version of the AI-assisted contributions policy," after addressing two initial concerns about accountability and transparency.
On accountability, the policy states that a contributor is responsible for everything they submit, whether or not it is AI-generated. The transparency requirement was strengthened to state that "you MUST disclose the use of AI tools when the significant part of the contribution is taken from a tool without changes." Words like MUST, MAY, and SHOULD in the policy are defined as in RFC 2119, the convention used for internet standards.
A contribution is not necessarily code; it could also be documentation, social media posts, design assets, and more. For that reason, the policy deliberately does not define what a contribution is.
Fedora is a free operating system sponsored by Red Hat and is the most cutting-edge of the company's Linux distributions. New versions are released at six-month intervals, and it is the basis for CentOS Stream which in turn is the basis for Red Hat Enterprise Linux. According to the post on the draft proposal, the Fedora council regards AI as a "transformative technology" and is trying to make its distribution a strong platform for AI development and use, while also guarding against misuse, breach of privacy, and low quality code.
One of the concerns expressed in the policy is that submitting "AI slop" puts too much burden on human reviewers and is therefore not an acceptable contribution. Another is that while AI tools may be used as part of a review process, "AI should not make the final determination on whether a contribution is accepted or not."
AI assistants are non-deterministic and may generate buggy or badly structured code, or code that the human who requested it does not understand. Some reports indicate that AI is eroding code quality, so concerns about the impact of AI on the quality of contributions are reasonable.
A problem for Fedora, however, as framed by community architect and council member Justin Wheeler, is that "without a policy to provide some kind of guidance, we already run the risk of abuse."
Another aspect highlighted by Wheeler is that in the absence of a policy, a contributor might be "harassed by project members for their use of AI."
If the policy is successful and contributors observe its disclosure and transparency guidelines, this will enable research into the impact of AI assistance on the project, good or bad. ®