The code also details expectations for AI companies to respect paywalls, as well as robots.txt instructions restricting crawling, which could help confront a growing problem of AI crawlers hammering websites. It "encourages" online search giants to embrace a solution that Cloudflare is currently pushing: allowing content creators to protect copyrights by restricting AI crawling without impacting search indexing.
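That distinction between AI crawling and search indexing is already expressible in robots.txt, which applies rules per crawler user agent. A minimal illustrative file follows; GPTBot (OpenAI) and CCBot (Common Crawl) are real, publicly documented AI crawler user agents used here as examples, and the exact bots and paths a site would list will vary:

```
# Block known AI training crawlers (illustrative examples)
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Leave traditional search indexing untouched
User-agent: Googlebot
Allow: /

# Default rule for all other crawlers
User-agent: *
Allow: /
```

The catch, and part of what Cloudflare's proposal aims to address, is that robots.txt is purely advisory: crawlers that ignore it face no technical barrier, which is why enforcement and norms around compliance matter.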
Additionally, companies are asked to disclose total energy consumption for both training and inference, allowing the EU to monitor environmental impacts as companies race forward with AI innovation.
More substantially, the code's safety guidance provides for additional monitoring of other harms. It makes recommendations to detect and avoid "serious incidents" with new AI models, which could include cybersecurity breaches, disruptions of critical infrastructure, "serious harm to a person’s health (mental and/or physical)," or "a death of a person." It stipulates timelines of between five and 10 days to report serious incidents to the EU's AI Office. And it requires companies to track all events, provide an "adequate level" of cybersecurity protection, prevent jailbreaking as best they can, and justify "any failures or circumventions of systemic risk mitigations."
Ars reached out to tech companies for immediate reactions to the new rules. OpenAI, Meta, and Microsoft declined to comment. A Google spokesperson confirmed that the company is reviewing the code, which still must be approved by the European Commission and EU member states amid expected industry pushback.
"Europeans should have access to first-rate, secure AI models when they become available, and an environment that promotes innovation and investment," Google's spokesperson said. "We look forward to reviewing the code and sharing our views alongside other model providers and many others."
These rules are just one part of the AI Act, which will start taking effect in a staggered approach over the next year or more, the NYT reported. Breaching the AI Act could result in AI models being yanked off the market or fines "of as much as 7 percent of a company’s annual sales or 3 percent for the companies developing advanced AI models," Bloomberg noted.