By Vincent M. Kästle and Tobias Wolfenstätter

 

Artificial intelligence (AI) is one of the most groundbreaking technologies of our time. It combines fascination and uncertainty in a unique way, making any assessment of its potential and risks something of a balancing act. It is undisputed, however, that the technological advances of recent years have triggered a transformation process that seems unstoppable. The central assumption is that AI can significantly increase value creation and productivity. It has long since found its way into numerous areas of everyday life. Applications of generative AI models such as GPT or Gemini have reached a broad public and impress with their performance, while fears about the dominance of AI in the labor market, the deepening of social inequalities or even its potential threat to the existence of humankind constantly raise new questions. With the growing hype surrounding AI technologies, the question of how existing legal frameworks can or should deal with this far-reaching development is becoming increasingly important. This article highlights the current regulatory efforts in Europe and the USA.

 

Europe’s Answer: AI Act

As early as 2020, the EU announced the dawn of a digital age, characterized by efforts to create a digital framework for the constantly evolving European Economic Area. As part of these regulatory efforts, AI was to be given greater focus as a driver of technological progress.[1] The most important result of these efforts is the AI Act, which came into force in its basic form on August 1, 2024 and will not only be of importance for European actors, but also for all economic actors with interests in the European Economic Area due to the applicable market location principle (Art. 2 lit. a AI Act).[2] As a regulation, the AI Act is directly applicable in all member states of the European Union and therefore does not require separate implementation into national law. Nevertheless, the AI Act assigns specific tasks to the member states. For example, they are obliged to set up a national supervisory authority that is responsible for implementing and monitoring the regulation (Art. 70 AI Act). The majority of the governance structure is located at EU level. The European Commission has the power to issue delegated acts (Art. 97 AI Act) and guidelines (Art. 96 AI Act). Among other things, the guidelines serve to clarify the definition of AI systems (Art. 96 para. 1 lit. f AI Act). A central role will be taken on by the “AI Office”, which is not only given monitoring and enforcement powers for parts of the AI Act by the European Commission (Art. 88 para. 1 AI Act), but also contributes to the deepening of expertise and competencies in the field of AI at EU level (Art. 64 AI Act). This institution is supplemented by the “AI Board” (Art. 65 para. 1 AI Act), a body made up of one representative from each Member State (Art. 65 para. 1 sentence 1 AI Act). The AI Board supports the European Commission and the Member States in applying the provisions of the AI Act. In addition, economic and non-economic stakeholders are given the opportunity to provide expertise to both the AI Board and the European Commission as part of an “Advisory Forum” (Art. 67 para. 1 AI Act). Another supporting body is the “Scientific Panel”, which consists of experts and provides advice on the implementation of the regulation (Art. 68 para. 1 AI Act).

 

Safety-Oriented Approach

The EU developed the AI Act with the aim of creating a legal framework that is trustworthy, implements ethical standards, supports jobs, contributes to the development of competitive “AI made in Europe” and influences global standards.[3] Recital 1 makes it clear that AI, despite its high economic value, must also meet with social acceptance. It states that the development and use of AI must always be “in line with the values of the Union” in order to “promote the deployment of human-centered and trustworthy AI”. The main objective of the AI Act is thus clearly defined: between the two poles of “promoting innovation” and “safety”, which must be balanced, a clear emphasis is placed on product safety and the protection of human dignity and fundamental rights. A so-called risk-based regulatory approach is intended to create acceptance of and trust in AI and its application.[4] Drawing on product safety law, the AI Act assigns AI systems to four risk categories according to their hazard potential and subjects developers to a tiered catalog of requirements. The risk categories comprise prohibited AI systems, high-risk AI systems, limited risk AI systems and AI systems with low or no risk. In addition to this risk system, the AI Act also covers a fifth category: general purpose AI (GPAI) systems.

 

Risk-Based Categories

Prohibited AI systems as defined by Art. 5 AI Act are considered by the EU legislator to pose an unacceptable risk. The AI Act prohibits AI systems that subliminally influence people, exploit vulnerable persons or those in need of protection, or are used for private and public social scoring or for real-time biometric identification. The introduction of a social scoring system, such as the one that appears to have been partially implemented in China with the help of AI since 2015, is therefore precluded.[5]

Below this threshold, high-risk AI systems, which are subject to considerable regulation (Art. 6-49 AI Act), remain permitted. This category includes AI systems that are either considered a safety component of a regulated physical product under Annex I of the AI Act, are themselves such a regulated product, or can be assigned to one of the key critical areas under Annex III of the AI Act. Annex I is intended to cover the use of AI in products such as self-driving cars or medical devices. Annex III (see Art. 7 AI Act), which can be adapted by the Commission, provides for classification as high-risk in areas such as critical infrastructure, education and law enforcement. Providers of high-risk AI systems are subject to increased requirements aimed at protecting fundamental rights. For example, they must carry out a fundamental rights impact assessment, fulfill transparency and registration obligations and, depending on the addressee, obtain CE certification and undergo conformity assessment procedures. According to an estimate by the EU Commission, around 5-15% of the AI systems in question will be classified as high-risk.[6]

The limited risk AI systems regulated in Art. 50 AI Act are in the risk category below high-risk AI systems. Their risk classification is strongly characterized by their interaction with humans. This is the case with chatbots, for example. Providers of such systems are obliged to ensure that affected persons are clearly informed that they are interacting with an AI system (Art. 50 para. 1 AI Act). It is also necessary for content generated by AI systems – such as audio, image, video or text material – to be labeled accordingly (Art. 50 para. 2 AI Act).

The lowest risk category covers AI systems with low or no risk (Art. 95 AI Act). For these AI systems, the AI Act merely provides that codes of conduct can be developed on a voluntary basis (Art. 95 para. 3 AI Act).[7] This category includes spam filters or AI-based video games, for example.[8]

The fifth category of GPAI systems is regulated outside the four risk categories described above and concerns AI systems that are based on a general purpose AI model and are capable of serving a variety of purposes, both for direct use and for integration into other AI systems (Art. 3 No. 66 AI Act). An example of a general purpose AI model would be GPT-4, while the application ChatGPT would be the general purpose AI system based on it. The catalog of duties for GPAI is graduated according to performance, distinguishing between GPAI models without and those with systemic risk (see Art. 53 and 55 AI Act). Significantly extended documentation and reporting obligations apply once the AI model exceeds the training-compute threshold of 10²⁵ floating-point operations (FLOPs).[9] In addition, the AI Office and the AI Board are granted the authority to work towards the joint creation of codes of practice (Art. 56 AI Act).
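
To illustrate the order of magnitude behind this threshold, the following sketch (in Python) estimates a model's cumulative training compute using the common rule of thumb of roughly six FLOPs per parameter and training token and compares the result with the 10²⁵ FLOP mark. The heuristic and the example figures are illustrative assumptions and are not taken from the AI Act itself.

# Illustrative sketch: estimating cumulative training compute against the
# 10^25 FLOP presumption for GPAI models with systemic risk under the AI Act.
# The "6 * parameters * training tokens" rule of thumb is a common
# approximation for dense transformer training, not a figure from the AI Act.

AI_ACT_SYSTEMIC_RISK_FLOPS = 1e25

def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate of cumulative training compute for a dense model."""
    return 6.0 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimated compute exceeds the AI Act's 10^25 FLOP mark."""
    return estimate_training_flops(parameters, training_tokens) > AI_ACT_SYSTEMIC_RISK_FLOPS

# Hypothetical examples: a 7-billion-parameter model trained on 2 trillion tokens
# stays well below the mark, while a 500-billion-parameter model trained on
# 20 trillion tokens exceeds it.
print(presumed_systemic_risk(7e9, 2e12))     # False (~8.4e22 FLOPs)
print(presumed_systemic_risk(500e9, 20e12))  # True  (~6.0e25 FLOPs)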

 

Product Liability Directive and AI Liability Directive

The AI Act is only one – albeit a central – component of the developing AI regulation. In addition to the AI Act, civil liability for AI systems plays a decisive role. For this reason, liability for AI systems has been addressed in two separate directives: the updated Product Liability Directive[10] and the draft directive on adapting non-contractual civil liability rules to AI, the so-called AI Liability Directive (AILD).[11] The directives provide important practical facilitations regarding the burden of proof for injured parties.[12] They also contain provisions that establish presumptions against the providers and developers of AI systems, which are intended to facilitate the enforcement of claims.[13]

In product liability law, the concepts of “product” and “defect” have been expanded. Whereas previously only movable objects and electricity were covered by the scope of application, it now also extends to software and therefore in particular to AI systems (Art. 4 No. 1 Product Liability Directive). There are also new disclosure obligations and presumptions of causality (Art. 9 para. 2 lit. a, 4, 5 Product Liability Directive). For example, the Product Liability Directive stipulates that developers of AI systems can be obliged to disclose relevant evidence upon request. In addition, rebuttable presumptions in favor of the injured party regarding the defectiveness of products (including AI systems) and the causal link to the damage will be introduced.[14]

The AILD would also lead to changes in national liability law. Contrary to what the name might suggest, however, the directive does not provide for an extension of national claims. Instead, the AILD aims to ease the burden of proof for fault-based non-contractual claims for damages and thus to increase the level of protection for persons injured by AI systems.[15] The AILD is linked to the AI Act and establishes differentiated rules on the burden of proof depending on the risk category to which the AI system is attributed (Art. 2 No. 1-4 AILD). In German law, this particularly affects tort liability. Contractual claims for damages, on the other hand, are explicitly not to be regulated under EU law (Recital 61, Art. 3 para. 6 of the Sale of Goods Directive (EU) 2019/771 and Recital 73, Art. 3 para. 10 of the Digital Content and Digital Services Directive (EU) 2019/770).[16]

 

Critical Evaluation

The European Union is positioning itself as a pioneer in the regulation of AI and is pursuing the goal of assuming an international leadership role in this area. Whether this ambitious approach will serve as a safe haven for technological innovation in the long term or instead hamper the competitiveness of the European economy remains uncertain at present. The actual impact of the AI Act can probably only be conclusively assessed after 2027, once its final provisions have fully entered into force. Yet certain effects can already be observed critically. Leading AI developers have announced that, due to the strict requirements of the AI Act, they will either not release certain functions in Europe at all[17] or only with considerable delay.[18] Whether the AI Act will ultimately be perceived as a protector against elementary risks or as a brake on decisive technological developments remains to be seen; at present, the latter appears more likely.

 

US-American Approach: Patchwork

The US regulatory approach is comparable to a patchwork quilt. Instead of a uniform set of rules, there are a large number of different projects at federal and state level, some of which leave gaps in their scope of application and some of which overlap. At the federal level, the White House Executive Order on AI issued in October 2023 has made the most headlines.[19] It is not a regulation of AI in the narrower sense, as it is primarily addressed to subordinate authorities. Developers of the “most powerful AI systems” are merely required to share safety test results and other critical information with the US government.[20] However, the order does provide certain guidelines for further regulation. Its most important goal is to ensure the safety of AI systems. To this end, various authorities such as the National Institute of Standards and Technology, the Department of Homeland Security and the Department of Commerce are tasked with developing tests and standards in their respective fields of activity that AI systems will have to meet before being launched on the public market. The range of requirements is broad: the threats to be prevented range from military applications of AI and threats in the area of cybersecurity to dangers for consumers due to the misleading use of AI. With regard to the latter, it is worth mentioning the proposal that AI-generated content should be labeled with watermarks – similar to the approach in Europe. The White House Blueprint for an AI Bill of Rights, published in October 2022, furthermore focuses on civil rights.[21] It contains principles according to which transparency and data security should be guaranteed when using AI, discrimination by algorithms should be prevented and the possibility of human review should be kept open. However, the legislative process at the federal level is still at an early stage and it is not yet possible to predict with certainty the direction in which regulation will develop, as most of the proposals are not yet legally binding.

 

Developments at the State Level

This is different at the state level, where around a third of states already have at least one law that explicitly deals with AI.[22] Most of these laws relate to AI and data protection.[23] AI profiling and automated decision-making are seen as a major threat. AI profiling describes the technique of creating a profile of a person by analyzing personal data en masse, on the basis of which predictions can be made about their preferences and behavior. AI could then independently decide whether to enter into certain contracts with certain consumers. This approach is particularly susceptible to discrimination because, by drawing on data collected in the past, AI may make the decision dependent on factors other than those relevant to the decision.[24] Some states therefore provide for an opt-out mechanism that consumers can use to object to the automatic processing of their personal data for profiling purposes.[25] Other laws concern transparency in the use of AI for product and election advertising.[26] In California, for example, there is an obligation to disclose when communication takes place with AI in the form of a bot instead of a real person.[27] Far fewer laws address the structure and development of AI at a technical level; those that do, however, are far more controversial.

 

Debate on SB 1047

The debate on California’s “Safe and Secure Innovation for Frontier AI Models Act” (Senate Bill 1047) received particular attention.[28] The special significance of SB 1047 is not only due to the fact that California is generally at the forefront of the debate on the regulation of AI because more than half of the world’s top fifty AI companies are located there.[29] Rather, it is significant because SB 1047 is the first bill to contain detailed requirements for developers, combined with the potential for severe fines if they are not met. If SB 1047 were to become law, it could serve as a model for many other states. The bill, first drafted in February 2024, was passed by both the Senate and the State Assembly in August 2024 and would only have needed to be signed by Governor Newsom. However, because Newsom decided at the end of September 2024 not to sign the bill and instead vetoed it,[30] it is unlikely that the bill will ever be adopted at the current state of the debate. To do so, both chambers would have to override the veto with a two-thirds majority. The liveliness of the debate is a paradigmatic example of the different views on the right regulatory approach. SB 1047 provides for various rules to ensure the safety of AI systems, with a focus on large AI systems. It would only apply to AI models whose training has cost at least 100 million dollars and which used computing power greater than 10²⁶ integer or floating-point operations (Section 22602 SB 1047). A long catalog of requirements would have to be fulfilled before the training, i.e. at an early stage of the development of such a model (Section 22603 SB 1047). Developers must register the training with the California Attorney General in advance and prepare a report on how they plan to comply with the following requirements: safeguards against unauthorized access by third parties; the ability to shut down the AI system using a kill switch; and the establishment and maintenance of a separate security protocol that is reviewed annually and includes detailed descriptions of how the AI system will be tested and what technical security features it will have in order to avoid critical harm. Large parts of this report must be made available to the public. Once the training has been carried out, a further risk assessment based on the tests performed must be published before usage begins. In addition, developers must undergo an annual review by an independent third party to ensure compliance with these requirements. A newly established “Government Operations Agency” would in turn be tasked with monitoring the test results and supervising the third parties (Section 22602 SB 1047). Violations could be penalized with fines of 10% of the cost of training the model, rising to 30% for repeated violations (Section 22606 SB 1047). Finally, developers would also be required to provide whistleblower protection.
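
Purely for illustration, the scope test of Section 22602 SB 1047 can be expressed as a simple dual check on training cost and training compute; the model figures in the example below are hypothetical and not taken from the bill.

# Illustrative check of SB 1047's scope criteria (Section 22602): the bill
# would cover models trained with more than 10^26 integer or floating-point
# operations at a training cost of at least USD 100 million.
SB1047_FLOP_THRESHOLD = 1e26
SB1047_COST_THRESHOLD_USD = 100_000_000

def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """True only if both scope criteria of SB 1047 are met."""
    return (training_flops > SB1047_FLOP_THRESHOLD
            and training_cost_usd >= SB1047_COST_THRESHOLD_USD)

# Hypothetical example: 3e26 operations of training at a cost of USD 250 million.
print(is_covered_model(3e26, 250_000_000))  # True -> the bill's duties would apply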

 

Reasons for the Refusal

Although the good intentions behind SB 1047 were widely acknowledged, the overall proposal met with predominantly negative reactions in public debate. Criticism has been voiced from various directions. First, the threshold of costs and computing power required for the application of SB 1047 is criticized as “arbitrary”. On the one hand, all larger models are subject to SB 1047 regardless of their risk potential in the individual case. On the other hand, smaller models, which may pose just as great a danger, especially if they are used in the wrong environment, are not covered by the scope at all. Second, the provisions of SB 1047 are criticized as too imprecise. Key terms such as “critical harm” (e.g. in Section 22602 SB 1047) or “reasonable care” (e.g. in Section 22603 SB 1047) cannot yet be clearly delineated at this early stage of technological development. For example, it is still unclear what risks AI could actually pose. In his veto message, Newsom refers to the risk of developing chemical, biological or nuclear weapons, which can undoubtedly be classified as “critical”. However, the danger here would stem less from AI itself, as the relevant blueprints can be found in the depths of the internet anyway. In reality, the danger would come from the people who put these plans into action by building the infrastructure to actually manufacture such weapons. Against this, existing law already provides sufficient safeguards, meaning that SB 1047 is at best superfluous and at worst detrimental to innovation. The fear that SB 1047 would disproportionately hinder technological progress can be described as the overarching motive behind the diverse criticism.[31] The many requirements that developers of AI models have to meet are perceived as too extensive and burdensome. They may be manageable for many of the large companies on the market, but they would hit developers from the open-source community and universities particularly hard. This is because they depend on the current practice of commercial developers making their earlier models freely available to the public. Because the many requirements that SB 1047 places on an AI model would continue to apply to the original developer, with considerable liability risks, it is to be feared that developers would no longer share their models as freely as they currently do. The development of AI is particularly dependent on cooperation between players from the private sector, the open-source community and academia. Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered AI, therefore proposes the opposite approach in her criticism of SB 1047.[32] AI regulation should primarily promote the development of AI, not hinder it. The risks should be countered not by regulating the AI models themselves, but by setting clear limits only on their application. For example, it would go too far to restrict both the foundational technology behind AI-generated deepfakes and their distribution. Li instead advocates a “moonshot mentality”: AI regulation should first and foremost ensure that the USA and California remain at the forefront of technological development, since this is the only way to ensure that AI can be shaped according to one’s own requirements and needs.

 

Synopsis

The fact that SB 1047 is unlikely to come into force shows that, for the time being, the critics have gained the upper hand in the US. However, it would be too simplistic from a European perspective to divide the critics and supporters of SB 1047 into “Team Caution” and “Team Risk” and to claim that California has simply opted for risk instead of caution. The US regulatory approach must rather be seen as the result of a pragmatic cost-benefit analysis. Many estimate the costs of SB 1047 to be significantly higher than its benefits. SB 1047 is not considered bad from a legal point of view, but simply too expensive for too little benefit from a political and economic point of view. This reflects a typically US-American understanding of risk that contrasts with the European one: risks that already exist and are known today should be minimized. That is why many areas of application for AI that are considered harmful, such as influencing elections, are already regulated in the USA in a similar way to Europe. Unknown future risks, by contrast, can only be monitored and reacted to in a timely manner once they materialize. For this reason, following the rejection of SB 1047, there are still no significant requirements at the technical level of AI development in California. At least in the near future, the quip “the U.S. innovates, and the EU regulates” will therefore remain true.

 

The Authors:

Vincent M. Kästle is a doctoral candidate and research assistant at the chair of Prof. Dr. Felix Maultzsch, LL.M. (NYU) at the Goethe University Frankfurt am Main. He is currently attending an LL.M. program at the University of California, Berkeley.

Tobias Wolfenstätter is a legal trainee at the Higher Regional Court of Frankfurt am Main and a research assistant at the law firm Hogan Lovells.

Editor:

Isabel Cagala, TLB Co-Editor-in-Chief

 

 

 

[1] https://ec.europa.eu/commission/presscorner/detail/en/SPEECH_20_1655 (All hyperlinks were last accessed on January 5, 2025).

[2] Wendt/Wendt, Das neue KI-Recht/Wendt § 3 Rn. 53.

[3] https://www.bundesregierung.de/breg-de/aktuelles/ai-act-2285944.

[4] Wendt/Wendt, Das neue Recht der Künstlichen Intelligenz, § 3 AI Act Rn. 26 ff.

[5] Chibanguza/Steege, NJW 2024, 1769, 1771.

[6] https://digital-strategy.ec.europa.eu/en/library/impact-assessment-regulation-artificial-intelligence.

[7] Ammann/Pohle, CB 5/2024, 137, 137.

[8] Wendt/Wendt, Das neue KI-Recht/Wendt § 4 Rn. 38.

[9] A floating-point operation (FLOP) is an arithmetic operation performed by a computer on numbers with decimal points, https://www.techopedia.com/definition/floating-point-operation-fpo.

[10] Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products and repealing Council Directive 85/374/EEC, https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:L_202402853.

[11] Proposal for a DIRECTIVE OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on adapting non-contractual civil liability rules to AI, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52022PC0496.

[12] Lachenmann/Meyer, MMR-Aktuell 2023, 457000.

[13] Wendt/Wendt, Das neue KI-Recht/Wendt § 14 Rn. 2.

[14] Krüger/Wagner, ZfPC 2023, 124, 128.

[15] White Paper on AI: A European approach to excellence and trust, p. 18 (available at: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020DC0065).

[16] Staudenmayer, NJW 2023, 894, 897.

[17] https://www.techradar.com/computing/artificial-intelligence/openai-s-advanced-voice-is-unavailable-in-the-eu-and-now-we-might-know-why.

[18] https://www.cnbc.com/2024/06/21/apple-ai-europe-dma-macos.html.

[19] Executive Order 14110, titled Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, signed by Joe Biden on October 30, 2023.

[20] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[21] https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.

[22] https://www.legaldive.com/news/16-states-have-ai-laws-curb-profiling-BCLP-interactive-compilation-state-AI-laws/710878/.

[23] https://www.bclplaw.com/en-US/events-insights-news/us-state-by-state-artificial-intelligence-legislation-snapshot.html.

[24] Hofmann/Kalluri/Jurafsky, AI generates covertly racist decisions about people based on their dialect, Nature 2024, 633, 147.

[25] Section 1798.120 Cal. Consumer Privacy Act of 2018; Section 6-1-1306 Colorado Privacy Act.

[26] https://www.broadcastlawblog.com/2024/04/articles/11-states-now-have-laws-limiting-artificial-intelligence-deep-fakes-and-synthetic-media-in-political-advertising-looking-at-the-issues/.

[27] Cal. Bus. & Prof. Code § 17941.

[28] https://www.theverge.com/2024/9/11/24226251/california-sb-1047-ai-industry-regulation-backlash.

[29] https://www.axios.com/local/san-diego/2023/08/02/california-san-diego-ai-technology-forbes-brookings.

[30] https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Veto-Message.pdf.

[31] https://www.lawfaremedia.org/article/california-s-proposed-sb-1047-would-be-a-major-step-forward-for-ai-safety-but-there-s-still-room-for-improvement.

[32] https://fortune.com/2024/08/06/godmother-of-ai-says-californias-ai-bill-will-harm-us-ecosystem-tech-politics/.