OpenAI's video generation tool Sora 2 is holding its own near the top of app download charts, but it's also collecting a lengthy list of complaints.
Sora 2 allows anyone to create a realistic-looking video from just a few lines of text, a vast improvement on the tool's first version, which OpenAI released only in limited form back in 2024.
Users can also upload their own headshots and record a few vocal tracks, allowing them to create videos starring themselves in just about any setting or scenario.

Yet since Sora 2's launch in late September, currently only in the US and Canada, that ease of use and sense of empowerment have also proven ripe for exploitation by those with nefarious motivations.
And, in a relatively short period of time, OpenAI's video tool has caused a rising tide of anger among content creators, entertainment companies and film studios over claims of copyright violations.
Since its launch, Sora 2's user-generated videos of Dr Martin Luther King Jr saying offensive or racist things and perpetuating racist stereotypes have gone viral on social media, prompting his estate to demand changes.
And actor Bryan Cranston of Breaking Bad fame complained of having his likeness replicated without his permission.
The complaints prompted OpenAI to work to put up guardrails for Sora 2.
Perhaps fearing an avalanche of copyright-infringement lawsuits, OpenAI changed its policy. Previously, copyrighted content could be generated and used within the app unless the copyright holders opted out; after an outcry, the company now requires that copyright holders opt in before their content is used.
"We will give rightsholders more granular control over generation of characters, similar to the opt-in model for likeness but with additional controls," OpenAI chief executive Sam Altman wrote on his blog.
The speed of the changes in response to complaints has become the new normal in recent years for Silicon Valley tech companies, but the potential ramifications presented by Sora, particularly at a time when worries about fake news and disinformation are rising, have many experts concerned.

Elissa Redmiles, an assistant professor of computer science at Georgetown University's centre for digital ethics, said the current potential problems posed by products like Sora 2 being rushed onto the market are just the tip of the iceberg.
She said despite guardrails, Sora 2 and similar tools like Luma AI's Ray3 or Google's Veo still make it easier for individuals to create deepfakes, potentially allowing for someone's likeness to promote ideas and products they don't endorse.
"They haven't been as careful as we would like them to be," she said, referring to OpenAI's Sora 2, adding that although OpenAI has made changes and promised to continue tweaking policy, so far many of the proclamations have been vague.
"There's been no transparency about how they're assessing likeness and it's actually not an easy problem ... and we haven't seen any publications or public transparency benchmarks about how they're trying to do this protection."
Prof Redmiles said that, based on cases where law enforcement has shed light on the misuse of AI image-generation tools, particularly in the context of non-consensual sexual content, young women and children could be at risk of exploitation by those who might use Sora 2 for criminal purposes.
"You're probably going to have a lot of abusive material being created," she warned.
Sarah Bargal, an assistant professor of computer science at Georgetown who specialises in deep learning and computer vision, echoed Prof Redmiles's concerns.
She said that although many draw parallels between video generation tools like Sora 2 and the initial concerns surrounding photo-editing software in the mid-90s, those comparisons are ill-suited.
"This is really radically different," Prof Bargal explained, adding that photo-editing tools like Adobe Photoshop initially required a certain level of expertise, whereas Sora 2 only requires a few lines of text to make realistic videos that can fool even the most seasoned professionals.

She said that the latest crop of generative AI tools "significantly lower the bar" for creating content, often out of nothing, producing a perfect storm for misuse and confusion.
Prof Bargal did caution, however, that eventually fake video detection tools would improve and make videos created with AI much easier to identify.
For every step in the right direction, though, she said the quickening pace of AI development might create a greater number of problems before lawmakers and mental health experts can catch up.
“I am concerned by the yet-to-be matched pace in social science policies, law and other very important and connected disciplines,” she added.
Much like the early apps which gave us the social media boom, the novelty of new technology and a fear of missing out keeps propelling apps like Sora 2 to the top of various app stores, with many choosing to push aside concerns about the problems that might arise.
Prof Bargal also said the rise of Sora 2 might signal the end of a precedent established over the last few decades of media consumption, in which videos were mostly "treated as proof".
"Nowadays we are becoming more sceptical of the videos that we are seeing," she said, before adding that technology might eventually make discernment between real and fake easier. "Models that are trained to detect fake generations are improving."
Prof Bargal cautioned that social media platforms and technology companies must implement such detection tools for them to be effective. She also said that some of the burden might fall on users as well as experts to keep voicing concerns.
"I think the consistent conversation between big tech companies, social science experts and policy makers needs to be happening continuously."
In a recent interview with technology journalist Rowan Cheung, OpenAI chief Sam Altman defended the company's decision to release Sora 2 and continuously make changes on the fly.
"I think we'll learn to adapt, and we'll learn very quickly that there will be a lot of fake video of you on the internet," he said, pointing out Sora 2's use of watermarks on all of the tool's videos.
"That's just gonna happen, so getting society inoculated to that probably has some value."