EU launches AI code of practice to balance innovation and safety

The European Commission has launched a project to develop the first general-purpose code of practice for artificial intelligence, closely linked to the recently adopted EU Artificial Intelligence Act.
The guidelines aim to set clear ground rules for AI models such as ChatGPT and Google Gemini, particularly around transparency, copyright and managing the risks posed by these powerful systems.
At a recent online plenary session, nearly 1,000 experts from academia, industry and civil society came together to help shape the content of the Code.
The process is led by a team of 13 international experts, including one of the “godfathers” of artificial intelligence, Yoshua Bengio, who heads a group focused on technical risks. Bengio is a recipient of the Turing Award, often described as the Nobel Prize of computing, so his views carry considerable weight.
Bengio’s pessimistic outlook on the catastrophic risks that powerful artificial intelligence could pose to humanity hints at the direction his team will take.
These working groups will meet regularly to draft the Code, with a final version expected to be completed by April 2025. Once finalized, the guidelines will have significant implications for any company looking to deploy AI products in the EU.
The EU Artificial Intelligence Act establishes a strict regulatory framework for AI providers, while the Code of Practice will serve as a practical guide that companies must follow. The guidelines will address issues such as making AI systems more transparent, ensuring they comply with copyright law, and developing measures to manage the risks associated with AI.
The team drafting the guidelines will need to balance responsible, safe AI development against the risk of stifling innovation, something the EU has already been criticized for. The latest AI models and capabilities from Meta, Apple, and OpenAI have yet to be fully deployed in the EU, partly because of the bloc's already strict GDPR privacy laws.
The impact is huge. If done well, the Code could set global standards for AI safety and ethics, giving the EU a leadership role in AI regulation. But if the guidelines are too strict or unclear, they could slow down the development of AI in Europe, pushing innovators elsewhere.
While the EU would undoubtedly welcome the global adoption of its norms, this seems unlikely, as China and the US appear more pro-development than risk-averse. The veto of California’s SB 1047 AI safety bill is a good example of this different approach to AI regulation.
The EU tech sector is unlikely to lead the race toward general AI, but the EU is also unlikely to be ground zero for any potential AI catastrophe.