Generative AI may cast its vote in the 2024 elections

A few weeks ago, I came across this bizarro video uploaded to Instagram, showing a Twitter Spaces chat between Republican presidential candidate Ron DeSantis, Elon Musk, the Devil, and Adolf Hitler, among others. Then, I realized it was uploaded as a campaign video by Donald Trump’s official account.

Though it’s most likely fan-made, I found it odd that his team was tapping into this niche Gen-Z trend and brazenly platforming this type of absurdist content. Not to mention, it’s a deepfake posted by an authoritative figure.

With elections coming up next year in the US, Indonesia, and India, there’s a fear that as tensions rise, supporters will use AI to spread disinformation. Artificial intelligence democratizes and accelerates the creation of visual and audio content, making it easy for anyone to spin their own narrative.

So far, virality has been mostly limited to harmless images like the Pope in a puffer jacket or Indonesian President Jokowi singing songs. However, inklings of more sinister uses have come to light.

Last month, an AI-generated photo of an explosion at the Pentagon circulated, causing a US stock market dip. In the same month, generative AI was used in India to manipulate a photo of wrestlers who had been arrested for protesting against the government.

Cases of voice cloning for fake ransom calls are also reported to be on the rise.

In politics, the tech could be used to launch smear campaigns or fuel fear-mongering.

Image credit: Made by Tech in Asia using AI

We have already seen the influence tech can have in swaying elections, as in the Facebook-Cambridge Analytica scandal of 2016. Generative AI could be used to create microtargeted ads for voters, using personal data to churn out information that may subtly sway them.

But political or not, it’s clear this freedom of creativity will create content overload, warping perceptions. Europol predicts that up to 90% of online content could be created or edited by AI by 2026, while a 2022 survey of 16,000 respondents found that 43% could not detect a deepfake video.

Companies have made early attempts at regulating this.

Similar to Twitter’s fact-check feature, Google Images will label AI-generated content with a text disclosure. Meanwhile, TikTok has banned undisclosed synthetic media depicting realistic scenes.

Metadata is an integral component in checking if something is genuine. Adobe has developed “content credentials,” which track the authenticity of images and what changes were made to them – including AI-edited images. The cryptography-protected C2PA standard also tracks the provenance of photos.
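The core idea behind these provenance systems can be sketched in a few lines. The snippet below is a simplified illustration, not the actual C2PA format: real content credentials use public-key certificates rather than the shared HMAC key assumed here, and the field names are hypothetical. It shows how a cryptographically signed record can bind an image’s hash to its edit history, so that any later tampering with the pixels or the log is detectable.

```python
import hashlib
import hmac
import json

def make_manifest(image_bytes: bytes, edits: list, key: bytes) -> dict:
    # Hash the image so any later change to the pixels invalidates the record.
    digest = hashlib.sha256(image_bytes).hexdigest()
    payload = json.dumps({"sha256": digest, "edits": edits}, sort_keys=True)
    # Sign the payload; a real system would use a publisher's private key.
    signature = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_manifest(image_bytes: bytes, manifest: dict, key: bytes) -> bool:
    # First check the signature, so the edit log itself can't be forged.
    expected = hmac.new(key, manifest["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False
    # Then re-hash the image and compare against the signed hash.
    payload = json.loads(manifest["payload"])
    return payload["sha256"] == hashlib.sha256(image_bytes).hexdigest()

key = b"publisher-secret"           # stand-in for a real signing key
image = b"\x89PNG fake image bytes"  # stand-in for real image data
manifest = make_manifest(image, ["cropped", "AI background fill"], key)

print(verify_manifest(image, manifest, key))         # untouched image: True
print(verify_manifest(image + b"!", manifest, key))  # altered pixels: False
```

The detail that matters is that the edit list travels inside the signed payload: an attacker can’t quietly delete “AI background fill” from the history without breaking the signature.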

That said, these efforts are still in their early stages, and tracking the authenticity of video and audio content will be even harder.

Through the internet, we live in digitally imagined communities and are more connected to current events than ever. But as more of us migrate online, it’s also become increasingly polarizing – dialogue often skews partisan on sites such as Twitter and Reddit, and identity politics is a given.

With generative AI in the mix, it has become very easy to manipulate content and create high-volume, real-time disinformation echo chambers.

Just as social media has harmed mental health, it would be fair to hypothesize that generative AI could give rise to a further dissolution of reality, blurring the lines between fact and fiction.

Another concern is that because much of the underlying generative AI tech is now open source, those with bad intentions will still be able to access the tools regardless of companies limiting access. A leaked internal document from Google even theorized that open-source AI will outcompete the company as well as OpenAI.

Regardless, it’s become more critical than ever to push for better detection tools for deepfakes, harsher punishments for perpetrators, and clearer regulations from AI companies and social media sites alike to minimize damage.

This was published as part of AI Odyssey, a section on generative AI developments featured in Tech in Asia’s emerging tech newsletter.

Delivered every Tuesday via email and through the Tech in Asia website, this free newsletter breaks down the biggest stories and trends in emerging tech. If you’re not a subscriber, get access by registering here.


Shadine Taufik

Fan of all things AI and art.
