Propaganda remains a prominent feature of politics around the world, but with today's advanced technology, distinguishing reality from fake news becomes increasingly challenging. With the growth of Artificial Intelligence (AI), politicians fabricate false images to evoke a specific emotional response from viewers. Social media acts as the dominant vehicle for these videos and images, allowing misinformation to spread rapidly among users. Because feeds are generated from users' previous interactions with multitudes of posts, AI propaganda can turn social media into an echo chamber for misinformation.
In the U.S., AI-driven propaganda has evolved from simple bot tweets to ultra-realistic deepfakes, hyper-tailored messaging and networks of accounts that look human but operate mechanically. In recent years, AI quietly became a tool for shaping political reality, and not in ways that promote truth or democracy. The result: the playing field of public opinion constantly shifts, often without residents even realizing it.
“Sometimes AI propaganda can make its way onto my feed, and it really confuses me. It goes between being really obviously fake and super realistic to the point that I can’t tell if it’s real. It’s still sort of new, so I hope we can stop the AI issue before it starts to impact people’s perception of politics and shape the minds of the young,” magnet senior Lena Gray said.
Social media sites, such as X, Instagram and TikTok, harbor the majority of AI propaganda. AI bots flood comment sections, share false information on a massive scale and even argue with real users to sway opinions. While subtle, this consistent integration into day-to-day life creates a dangerous atmosphere for kids, teens and adults on social media. With the sheer volume of realistic videos, audio, text and images that AI creates, humans simply cannot keep up. Propagandists can use AI to tailor messages to specific audiences by age, region and interests, creating content that feels authentic.

When users interact with propaganda posts, the site generates an echo chamber of fake news, continuously reflecting a user's existing beliefs back onto their feed and reinforcing preconceived notions about a topic. Constant exposure to misleading information can reshape the mind to believe the lies. On the flip side, when people know that false content exists, they feel inclined to dismiss real content or question whether any message delivers the truth. That doubt itself becomes a tool of manipulation. In an analysis by the News Literacy Project, around 52% of users confused AI-generated content with factual content. As the propaganda grows more realistic, users become desensitized to AI content and struggle to determine which posts to believe.

The U.S. government and technology companies have attempted to respond, but AI continues to evolve faster than policy. Still, regulatory discussions weave their way into major conversations, especially ahead of crucial elections. Residents argue that the government and tech companies should share responsibility. At the same time, research shows that only around half of identified AI-generated political items intend to deceive viewers, but the propaganda can blur with seemingly non-political content, including dog whistles that help shape public opinion. Efforts to label AI-generated content or fact-check posts help slightly, but the sheer rate at which propaganda appears makes checking every video nearly impossible.
“I have seen little blurbs at the bottom of my feed that will say some of the videos are AI-generated, but it’s always the videos that are super obvious, so it almost feels useless. I wish that it were also available in pictures and not just videos, because I feel like the photos are so much harder to determine whether it’s real or not. I know that politicians are trying to get things done about AI, but it’s disheartening when we see President Donald Trump posting AI photos of him taking over countries,” Gray said.
Unfortunately, several viewers disregard the relevance of AI propaganda, believing that the posts do not sway public opinion or shape political beliefs. This notion proves plainly false, with statistical research supporting the idea that AI propaganda impacts a viewer's opinion. With politicians weaponizing AI, this issue could lead to long-term mistrust in government officials and hesitant voters. Civic engagement, already low in the U.S., could decline considerably, creating problems that could impact the nation's future.
The rise of AI in propaganda will not fade away any time soon. As AI tools grow stronger and faster, producing increasingly realistic voices, images and videos, the potential for misuse grows. Still, the future does not look entirely gloomy: honest journalism, media literacy, transparent technology and sound policy can limit the harm. The public's widespread belief that AI technology proves worrisome offers hope that the U.S. can keep AI tools under control. For now, researchers continue to question AI's actual impact on elections and civic engagement.
