By: Cate Eberly
Sora, OpenAI's recently released text-to-video tool, generates videos entirely from text prompts or edits specific aspects of videos that users upload.
The realism of AI-generated video is raising the risk of more misinformation and deepfakes circulating on social media.
According to Merriam-Webster, a deepfake is "an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said." Deepfakes pose a particular risk to viewers who may not have the media literacy skills to determine what is real or fake.
In the New York Times article "OpenAI's Sora App: Jaw-Dropping, for Better or Worse," authors Mike Isaac and Eli Tan wrote, "Being able to quickly and easily generate video likenesses of people could pour gasoline on disinformation, creating clips of fake events that look so real that they might spur people into real-world action."
For now, Sora stamps its generated videos with a visible watermark, helping viewers on social media platforms like TikTok recognize which videos are AI-generated.
AI video generation has also spread into politics, with political leaders producing generated content as part of their campaigns and messaging.
President Trump has shared many AI-generated images on both Truth Social and X. More recently, he shared a video depicting him in a fighter jet dumping waste on participants in the "No Kings" protests.
In the New York Times article "How President Trump Uses AI," Stuart Thompson describes the problems of using AI on a political scale: "The fake imagery attacks his political rivals, depicts him flatteringly, mocks criticism, celebrates his administration and spreads falsehoods about his agenda."
Like AI technology itself, which has advanced rapidly since 2022, the AI-generated content Trump shares has grown increasingly lifelike.
