In March, Google confirmed that its Gemini artificial intelligence (AI) chatbot would not answer questions about elections in many countries this year, a move many took partly as a warning and partly as an acknowledgement of AI's potential to interfere in elections.
That restriction applied to India ahead of the general elections this summer, and it stays in place for all other countries going to the polls in the remaining months of 2024, including the US. Even before the turn of the year, the looming threat of AI's effect on global elections was clear. Now, it is clearer than it has ever been.
Just this year, as generative AI tools, including text-to-image and text-to-video generators, have grown rapidly more realistic, AI interference in global political processes has moved from threat to reality. The worrying examples are already here: AI-generated robocalls in US President Joe Biden's voice seemingly attempting to dissuade voters in New Hampshire from voting in the primary; a deepfake in the Slovakian elections that appeared to help the pro-Russian candidate, as it was designed to do; and a deepfake of a newsreader alleging that then UK Prime Minister Rishi Sunak promoted a scam investment platform.
If we feel proud to have arrived at generative AI tools that deliver movie-quality video and images almost impossible to distinguish from real photographs, there is the other side of that coin too. The real question is how smart, and how willing to put in the effort, the electorate worldwide is to not only identify but also contend with the AI curveballs threat actors may deploy.
This is something Danielle Allen, James Bryant Conant University Professor at Harvard University and director of the Democratic Knowledge Project and the Allen Lab for Democracy Renovation, pointed to while speaking at a session of the GETTING-Plurality Research Workshop over the summer. She describes this as anxiety in the general public, born of misinformation that gathers speed when new-age technologies are deployed by threat actors.
In April this year, the Microsoft Threat Analysis Center (MTAC) released its findings from tracking threat actors from Russia, Iran and China, which it says are tasked with spreading disinformation through traditional as well as social media (you may assume the latter is easier than the former; you would be surprised). The MTAC report states that some form of malicious generative AI has been leveraged to create content since last summer, and it anticipates that election influence campaigns will include both deep and shallow fakes for the widest reach.
Most large tech companies now attach labels and metadata that distinguish generated content from real photos or videos. But for every OpenAI, Meta or Google, there are thousands of other generative AI tools that still have no such transparency policy.
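To make the idea concrete, here is a minimal sketch, not tied to any one vendor's implementation, of how such a disclosure label might be detected. It assumes the generator embedded the IPTC DigitalSourceType value "trainedAlgorithmicMedia" in the file's metadata, one convention used for labelling AI-generated media; tools that strip metadata, or that use other schemes such as C2PA content credentials or invisible watermarks, would defeat this check entirely.

```python
# Minimal sketch: scan a media file's raw bytes for an IPTC
# "trainedAlgorithmicMedia" disclosure embedded in its metadata.
# This only detects cooperative labelling; stripped metadata or
# other provenance schemes pass through undetected.

AI_MARKERS = (
    b"trainedAlgorithmicMedia",                # IPTC value for fully AI-generated media
    b"compositeWithTrainedAlgorithmicMedia",   # IPTC value for partially AI-generated media
)

def looks_ai_labelled(path: str) -> bool:
    """Return True if the file carries a known AI-generation metadata marker."""
    with open(path, "rb") as f:
        data = f.read()
    return any(marker in data for marker in AI_MARKERS)

if __name__ == "__main__":
    import sys
    for path in sys.argv[1:]:
        verdict = "labelled as AI-generated" if looks_ai_labelled(path) else "no AI label found"
        print(f"{path}: {verdict}")
```

The fragility of this approach is the point: because labelling is voluntary and metadata is trivially removed, absence of a label tells a viewer almost nothing.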
Specific to these three countries and their attempts to influence global elections, particularly those in the US: Russia's efforts aim to spread misinformation about the war in Ukraine and undermine US support for it. China, MTAC believes, is more focused on tapping social and political divides to widen the chasm further, with generated deepfakes and misinformation left to be shared organically on both sides of that divide for maximum impact. Iran's attempts have traditionally leaned towards interfering with cyber networks in countries including the US, and MTAC believes that may remain the case, as the country's attention is also stretched by the conflicts in the Middle East.
But if you think that generative AI content and deepfakes are the biggest cause of concern for the functioning of democracies and elections, think again.
In the US, there is a certain tech tool in play called Eagle AI (you are expected to pronounce this as Eagle Eye). Its beginnings are controversial, as is its existence. You will not find an official website, current company information, details about the methodology at work, or indeed who is in charge of this tool. It is as opaque as it gets, and that is worrying for any democracy.
It was created by Dr John W. “Rick” Richards, a resident of the state of Georgia, as a response to what he believed was voter fraud that led to Donald Trump losing the 2020 elections. Those claims have never been confirmed or verified, but have nonetheless made for a shrill political pitch since. He is believed to have downloaded Georgia's state voter rolls and alleged to have found people who should not have voted. The latest reports suggest that he has still not provided any evidence for his claim, and he now also says he no longer has the records.
Accessing voter rolls without the approval of government officials risks compromising confidential personal information that is otherwise not shared publicly.
More to the point, the tool continues to be used in the garb of weeding out ineligible voters, which immediately raises the concern that genuine voters (particularly those of a different political persuasion, or those living in areas with clear political leanings) may end up disenfranchised, because election officers will not have enough time to verify each claim or challenge of voter fraud.
Quite how far this tool goes, and the sort of impact it has, remains to be seen. Will it spawn more wannabe tools? And on the broader point of AI wedging itself into elections worldwide, have things already gone beyond anyone's control?
Vishal Mathur is the technology editor for the Hindustan Times. Tech Tonic is a weekly column that looks at the impact of personal technology on the way we live, and vice-versa. The views expressed are personal.