There are two aspects to consider when it comes to AI and elections. One is the role of AI in shaping how elections are conducted and the risks it poses in promoting deception and disinformation, which is addressed in another part of Kroll’s election series, “What Have We Learned About GenAI and Elections?”
The other consideration, and the focus of this article, is how the outcome of the U.S. election and other recent elections in the UK, France and elsewhere may affect the regulation of AI development, as governments around the world struggle to balance security and AI risk management against the desire to promote innovation and unlock the potential of AI technology.
Differing Approaches to AI Regulation and the Potential Impact of Forthcoming Elections
Thus far, regulatory approaches to AI have varied as governments around the world work out how to respond to a dynamic and rapidly evolving AI landscape.
The EU Artificial Intelligence Act (AI Act), which took effect in August 2024, seeks to harmonize rules across the EU and is the first comprehensive regulatory framework to address AI specifically, taking a risk-based approach: in short, the higher the perceived risk of AI in a particular use or circumstance, the more stringent the AI Act's requirements. At the highest classification, AI is banned outright where it is deemed a clear threat to fundamental rights. The AI Act seeks to promote trustworthy AI. Of greatest relevance to businesses are its ethical guidelines, the regulations with which they must comply and the penalties for noncompliance, which can reach a maximum of EUR 35 million or 7% of global turnover.
It is not yet clear whether this year's various EU elections will lead to efforts to fundamentally alter the AI Act. Several areas of the act are still not set in stone, and numerous critics argue that it creates barriers to innovation. Some EU member states may seek to loosen restrictions around high-risk and general-purpose AI, which they view as too restrictive.
In the U.S., a lighter regulatory touch has prevailed thus far. While no comprehensive federal AI legislation has been enacted under the Biden administration, various states, including California, have adopted their own forms of AI regulation, which businesses will need to consider.
The impending U.S. election may alter things, as the Harris and Trump campaigns have expressed differing perspectives. Harris' preferred approach can be seen in the Biden-Harris October 2023 Executive Order on AI, which enshrined a number of principles for the safe, secure and trustworthy development of the technology. Trump has indicated that he favors deregulation and promoting innovation and has been reported as saying he would repeal the executive order because it is "dangerous" and "hinders innovation".1
In the UK, the previous Conservative government set out a "pro-innovation" approach in its 2023 AI Regulation White Paper, aiming to foster innovation by using existing laws and regulators to implement a framework of ethical principles rather than imposing new regulations. It remains to be seen whether the new Labour government will alter this approach. Prime Minister Keir Starmer has indicated a preference for regulation, though not as extensive as the EU's, but details are limited.
China implemented Interim Measures for the Management of Generative Artificial Intelligence Services in August 2023 “to promote healthy development of generative AI, protect national security and public interests, and protect the rights of citizens, legal entities and other organizations.” The measures reflect a regulatory approach that has evolved from industry self-regulation to national standards to specific rules.
The UAE, through its UAE National Strategy for Artificial Intelligence 2031, seeks to establish itself as a global leader and hub for AI development. Among its objectives is "optimizing AI governance and regulations" and promoting ethical use of AI through its AI Ethics Principles and Guidelines.