Meta has opted not to endorse the European Union’s (EU) new voluntary Code of Practice, which is intended to guide companies in complying with the bloc’s forthcoming AI Act.

The announcement was made by Joel Kaplan, Meta’s Chief Global Affairs Officer, in a recent LinkedIn post.

“Europe is heading down the wrong path on AI,” Kaplan wrote. “We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI (GPAI) models and Meta won’t be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which extend far beyond the scope of the AI Act.”

The EU’s Code of Practice, drafted by 13 independent experts, is designed to support organisations in aligning with the AI Act ahead of its full enforcement. Though non-binding, the Code sets expectations around transparency, copyright compliance, and responsible model development.

Under the framework, signatories commit to publishing summaries of the data used to train their AI models, ensuring compliance with EU copyright legislation, and regularly updating information on their AI tools and services. The Code also prohibits the use of pirated content in AI training datasets and requires companies to honour requests from rights holders to exclude their content.

Unlike Meta, Microsoft is open to the idea

In contrast to Meta’s stance, Microsoft appears more open to participation. Brad Smith, President of Microsoft, told Reuters, “I think it’s likely we will sign. We need to read the documents. Our goal is to find a way to be supportive, and one of the things we really welcome is the direct engagement by the AI Office with industry.”

Earlier in July 2025, a coalition of technology companies called on the European Commission to delay implementation of the AI Act, requesting a two-year postponement. However, the Commission has maintained its existing timeline.

Several firms, including OpenAI and the French AI startup Mistral, have already signed the Code of Practice.

In addition, the European Commission has recently issued formal guidelines for providers of general-purpose AI models deemed to carry “systemic risk.” These include major players such as OpenAI, Anthropic, Google, and Meta. Companies that fall within this category will be required to comply with the legislation by 2 August 2027.
