There are several reasons why this is the case. Most importantly, the public’s ability to decipher GenAI content has surpassed commentators’ initial expectations. Regulators, mainstream media and, especially, fact-checking outfits have been quick to recognize and call out fake videos and images. These counterweights have likely been more effective than expected in part because the production quality of GenAI images and videos has generally not been as high as anticipated, enabling rapid identification and flagging of fake content.
Of course, there have been examples in elections around the world of GenAI content intended to impact public perception and voter intentions.
In France, deepfake videos purporting to show National Rally (RN) leader Marine Le Pen and her niece Marion Maréchal of the Reconquête party spread on social media. The videos were posted by an account claiming to belong to a young niece of Le Pen, which accumulated tens of thousands of followers. The account turned out to be fake and the content synthetic, but not before significant public engagement and debate.
During the Indian election, GenAI content that targeted the integrity of electoral processes and attempted to stoke sectarian tensions among the country’s religious minorities featured prominently in the online discourse surrounding the vote.
Meanwhile, in the U.S., there have been two documented uses of GenAI to impersonate speech by Democratic candidates for president. This winter, fake robocalls in Joe Biden’s voice urged voters to skip the primary election in New Hampshire (although it is worth noting that the political consultant who implemented the scheme now faces criminal charges and a potential $6 million fine). Last year, a clip of Kamala Harris, then running for Vice President, speaking at a political rally was altered to make her words sound nonsensical.
While most attention has been devoted to the risks of GenAI video and images, Kroll’s analysis shows that the most problematic content in the EU and UK elections was deepfake audio and simple text. These types of media are generally more believable and harder to detect, as well as easier and cheaper to produce at credible quality. GenAI text content in particular is the most interoperable with other assets designed to influence elections, such as bot farms and fake media outlets. It is easily disseminated through coordinated inauthentic networks that spread misleading information far and fast, and it can be subtly changed once in flight to evade detection in ways that online platforms, fact-checkers and the public find hard to identify.
As we progress through this year’s historic wave of elections, GenAI content is manifesting more like prior types of electoral and reputational risk than as something fundamentally new. One of the most active years of electoral activity in history, combined with the novel and awesome power of GenAI, left some feeling that an informational sword of Damocles hung over societies around the world. The reality is more nuanced.
Exploiting Misinformation
Although the more doomsday scenarios posited about the impact of GenAI on elections have not yet materialized, firms should take seriously the problem of the liar’s dividend that politicians also face. As people become more aware of how content can be manipulated, or as they become “sensitized” to GenAI content, it becomes easier for dishonest individuals to cast doubt on real content by claiming it is fake.1
However, the problem cuts the other way as well. While it has always been challenging to “prove a negative” (i.e., that something did not happen), in a GenAI world this becomes even harder. If a politician accused of doing “X” is able to produce evidence showing that it is impossible that they actually did “X”, people can simply write off the evidence as fake itself if they find the original allegation credible.
In the long term, the biggest lesson for firms from the use of GenAI in this year of elections may in fact be the growing ubiquity of the beliefs that underpin the liar’s dividend, as fears of GenAI around elections continue to be trumpeted in ways that attract increased attention. Firms would be wise to revisit their approach to communication strategy and crisis management with an eye toward the liar’s dividend.
Positive Uses
Attention has been mostly focused on the negative aspects of AI in politics. However, we should not overlook the positive potential of this new technology and the creative ways political campaigns have been using it. There are lessons here for business too.
In South Korea, AI avatars have been used for campaigning, creating virtual representations of candidates to engage with voters through a different medium. This was particularly popular with younger voters who were most likely to engage with the avatars.
In India, the phenomenon of party-authorized deepfakes of popular deceased politicians was seen on multiple occasions. This use of GenAI was well received by voters and seen as a way of connecting different generations of voters.
A particularly effective use of AI in political campaigns was demonstrated by the Pakistan Tehreek-e-Insaf (PTI) party, led by jailed former Pakistani Prime Minister Imran Khan. In the aftermath of the shock success of the PTI in the 2024 election, an AI-generated victory speech by Khan was viewed as an extremely innovative use of the technology and the social media post in which it was shared accumulated over 6 million views and over 58K reposts.
Meanwhile, Taiwan drafted an ambitious and groundbreaking law that would govern the use and reliability of GenAI models and the risks associated with them. The legislation would establish labeling, disclosure and accountability mechanisms. Alongside obligations for AI companies to uphold data protection and privacy rules in model training, and enhanced requirements around content verification, individuals would be able to give carefully defined consent for virtual representations of themselves to be used by businesses in marketing and advertising campaigns. Building on the AI Risk Management Framework established by the U.S. National Institute of Standards and Technology, the proposal charts a potential path for firms in other parts of the world to replicate the effectiveness of legitimate political campaigning in their efforts to engage with their customers within a defined framework. Other countries are watching the evolution of this law with close interest, as the issues it addresses will need to be dealt with elsewhere too.
Takeaways
Overall, as we look back on the first eight months of the year of the election, we see that elections in 2024 so far look a lot like those in years past. Candidates running for office have had to deal with mudslinging from their opponents, some of which comes in the form of unfounded rumors. While GenAI may have amplified some of these rumors, such attacks during political campaigns are nothing new, and legions of campaign professionals are paid to figure out how to manage them. This is not to say that there will be no consequential GenAI moments in the future, but rather that, so far, we have not really seen them.2
The risks posed by GenAI remain, for now, best viewed through the lens of conventional risk management. GenAI may not have broken the information environment, but it has certainly complicated matters. Firms should be prepared, both in their strategy and their available toolset, to react to political developments and meet direct risks head-on. The use of GenAI will continue to grow, and its effect will continue to erode authority and trust. This will shorten the time available to respond to threats and will pressure test organizations’ ability to parse through the noise, separate truth from falsehood and establish authority on relevant issues.