The Cloud Hypervisor project has introduced a No AI code policy.
Cloud Hypervisor started life in 2018 as a joint effort between Google, Intel, Amazon, and Red Hat – all of which wanted to share their work on virtualization components to speed their respective efforts to create virtual machine monitors and hypervisors. The participants decided that work was best undertaken in Rust, and the rust-vmm project is the result.
Intel then took that work in a slightly different direction, which led to the creation of Cloud Hypervisor, a Virtual Machine Monitor for cloud workloads. The Linux Foundation took it on in 2021, by which time Alibaba, ARM, ByteDance, and Microsoft had joined Intel as participants.
AMD, Ampere, Germany’s Cyberus Technology, and China’s Tencent Cloud have since become supporters, and the project now describes itself as “an open source Virtual Machine Monitor that runs on top of the KVM hypervisor and the Microsoft Hypervisor.” It’s mostly used by public clouds as the hypervisor in their IaaS services, and is customized to work with the hardware they buy in bulk.
The project delivered version 48 last week, complete with the new policy to “decline any contributions known to contain contents generated or derived from using Large Language Models.”
As detailed in the project’s documentation for contributors, the reasons for the ban are “… to avoid ambiguity in license compliance and optimize the use of limited project resources, especially for code review and maintenance.”
That wording suggests Cloud Hypervisor’s maintainers fear legal complications, contributions consisting of AI slop, or both.
AI coding tools are almost certainly trained on open source code, but it’s hard for developers to know whether the LLMs helping them write software also snarfed proprietary code or projects published under restrictive licenses. Cloud Hypervisor’s contributors and backers are plausible targets for copyright lawsuits, so politely declining to accept AI-generated code makes sense.
Even the project’s participants concede it’s probably a futile gesture.
In a thread debating the policy, Cyberus Technology’s Philipp Schuster expressed concern “that this policy will basically be violated starting from day 0 after being merged. We never can ensure code is not at least enhanced with/from LLM.”
In response, a contributor named Bo Chen suggested “we need a procedure to make sure the policy is explicitly acknowledged. One option is to add a pull request template that includes a mandatory checkbox, requiring contributors to affirm they have read and agree to our contribution guide.”
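For illustration, a GitHub pull request template along the lines Chen describes might look like the sketch below. The file location is GitHub’s standard one, but the wording and checkboxes are hypothetical and not the project’s actual template:

```markdown
<!-- .github/PULL_REQUEST_TEMPLATE.md (hypothetical example) -->
## Contribution checklist

<!-- GitHub renders "- [ ]" items as checkboxes in the PR description -->
- [ ] I have read and agree to the project's contribution guide.
- [ ] This pull request contains no content generated or derived from a Large Language Model.
```

A template like this can only record a contributor’s affirmation; it can’t verify it, which is the gap Schuster’s “day 0” concern points at.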
A notable inclusion in version 48 is documentation on how to run Windows 11 guests, which should help those creating cloudy desktop-as-a-service products.
Other changes that may interest include:
- Lifting the maximum number of supported vCPUs on x86_64 hosts using KVM from 254 to a whopping 8192;
- Removing support for Intel’s Software Guard Extensions (SGX);
- Adding support for inter-VM shared memory;
- Faster pausing for VMs that run on many vCPUs.
®