The Rust project collected and summarized diverse perspectives from contributors and maintainers on AI usage. Several key themes emerge. AI works best as a tool that requires careful engineering; it is valuable for non-coding tasks like search and data processing but inconsistent for coding. AI-generated contributions are straining maintainer review bandwidth with low-quality PRs. Ethical concerns cover training-data provenance, power concentration, energy consumption, and copyright risks for open-source licensing. There is broad consensus that naive AI use should be discouraged, that contributors must understand their submissions, and that reviewers need empowerment to reject low-quality work. Proposed responses range from universal disclosure policies and reputation/web-of-trust systems to using AI itself for first-pass reviews and seeking funding from AI companies to support maintainers.
Table of contents
AI is a tool that one must learn to wield well
Many people find value in AI for non-coding tasks
Writing with AI however is tricky to do well
Opinions on coding with AI are… varied
The ethics of AI usage
The legality of AI usage
AI and open source
So what should we do about all this?
Meta-observations and closing thoughts