In 2021, the European Commission presented the first-ever proposal for a legal framework (regulation) on Artificial Intelligence (AI), which aims to provide a solid legal basis for the use of AI in the European Union (EU) (the “AI Proposal”). The AI Proposal focuses on ‘trustworthy AI’ and lays down a risk-based methodology to define the nature of the obligations linked to developing, importing, and using AI in the EU. This proposal is complemented by another, more recent, legislative initiative of the Commission, the proposal for an AI Liability Directive (the “AI Liability Proposal”), which aspires to provide common rules for non-contractual damage caused by AI.
The “AI Proposal”
The discussions on the AI Proposal are advancing slowly yet steadily. The Council adopted its “general approach” on 6 December 2022, while the European Parliament is currently debating key provisions of the Proposal, such as the definitions of an “AI system” and of “high-risk AI”. The rise of ChatGPT has recently given a new twist to the debate. The forthcoming negotiations (“trilogue”) between the three European institutions (European Parliament, Council and Commission) will see EU lawmakers trying to reach a final position. It can thus be expected that the EU will, by the end of 2023, have its own legislative framework on AI. However, this timeline is only provisional, since the complexity and sensitivity of some of the topics under discussion may very well lead to its extension, as we have seen happen throughout the negotiations on this text.
The “AI Liability Proposal”
On 28 September 2022, the Commission adopted two proposals to adapt liability rules to the digital age. The first is a proposal to modernise the existing Product Liability Directive to better address digital products and the circular economy. The second is the proposal for an AI Liability Directive, which aspires to alleviate victims’ burden of proof by making it easier for them to demonstrate that damage is indeed linked to an AI system. This applies to any type of victim (not only individuals but also companies, organisations, etc.). Furthermore, when damage is caused, victims will have easier access to evidence related to the functioning of the AI system. Disclosure of such information will, however, be subject to safeguards to protect sensitive corporate information, such as trade secrets.
It is currently uncertain to what extent EU legislators will proceed to examine the AI Liability Proposal in the absence of a final text of the AI Proposal, since the AI Liability Proposal contains several cross-references to the latter. These include critical notions such as those of an “AI system” and of “high-risk AI”; any discussion of the provisions of the AI Liability Proposal can therefore only be provisional without a final agreement on the exact scope of these notions.
The Chamber of Commerce invites all companies that want to provide comments in connection with these legislative initiatives to do so by contacting us at the following address: email@example.com (contact persons: Kelly Xintara and Katya Vasileva).
Following a public consultation, the Commission decided that a standalone legislative initiative was necessary in relation to non-contractual damage caused by AI systems, notably due to the specific characteristics of AI, including complexity, autonomy, and opacity (the so-called ‘black box’ effect).