OpenAI's artificial intelligence video generator, Sora, is now available in the U.S., open to anyone in the country who wants to produce video content from text prompts. The launch came on Monday and marks the company's latest step in expanding its generative AI offerings.
Sora, which OpenAI first unveiled in February, had previously been accessible only to a limited group of artists, filmmakers, and safety testers. As of Monday, OpenAI has opened the platform to the public at large, albeit with some technical glitches. Users struggled to sign up for the service throughout the day, as heavy traffic at times left the company’s website unable to accept new accounts.
Sora is a text-to-video generator, enabling users to create video clips from written descriptions. One example shared on OpenAI’s website shows how a simple prompt, “a wide, serene shot of a family of woolly mammoths in an open desert,” can produce a video of three woolly mammoths slowly walking across sand dunes. The tool supports a wide range of creative possibilities, giving users new ways to experiment with video storytelling.
“We hope this early version of Sora will enable people everywhere to explore new forms of creativity, tell their stories, and push the boundaries of what’s possible with video storytelling,” OpenAI wrote in a blog post.
OpenAI’s Expanding AI Portfolio
OpenAI, probably best known for its ubiquitous chatbot ChatGPT, has been actively expanding its portfolio of AI technologies. In addition to Sora, the company has developed a voice-cloning tool and has integrated its image-generation tool, DALL-E, into ChatGPT. Backed by Microsoft, the company has rapidly emerged as a leader in generative AI, and its valuation has soared to nearly $160 billion.
Sora is one of OpenAI’s newest creations and extends its push into new applications of artificial intelligence. Its public release, however, has drawn fresh scrutiny of how generative AI is developed and what its wider implications are.
Before the public release, OpenAI opened the tool to select testers, including tech reviewer Marques Brownlee. Brownlee’s review was mixed; he said the results “are horrifying and inspiring all at once.” He felt Sora did exceptionally well at generating landscapes and stylistic effects, but noted that the software failed to depict basic principles of physics and often produced unrealistic results. Some film directors who previewed the software also reported visual defects, leading them to question whether it was ready for a worldwide release.
OpenAI has also faced difficulties meeting regulatory standards, particularly the UK’s Online Safety Act and the EU’s Digital Services Act and General Data Protection Regulation (GDPR). These regulatory hurdles reflect ongoing debates over whether AI-generated content is ethical or even lawful.
The AI Art Scandal
OpenAI has also come under fire from a group of artists who accused the firm of “art washing” its product. The group, which calls itself the “Sora PR Puppets,” argued that OpenAI was using artists’ creativity to build a favorable narrative around the AI tool while threatening the livelihoods of human creators. In one incident, an artist created a backdoor that gave unauthorized access to the tool, prompting the company to temporarily suspend access to it.
Generative AI has drawn criticism for undermining traditional forms of art and expression. In the realm of images and video in particular, such systems are accused of enabling plagiarism and the theft of human creative work. And despite major strides, AI image and video generators such as Sora still often produce “hallucinations,” incorrect or distorted output, which undermines their reliability.
Threat Of Deepfakes And Misinformation
Misuse remains one of the biggest concerns about Sora and similar AI technology. The tools could be used to create deepfakes and other disinformation designed to mislead the public. Deepfakes have already been deployed to spread false videos of Ukrainian President Volodymyr Zelenskyy calling for a ceasefire, and videos falsely claiming that U.S. Vice President Kamala Harris made scandalous comments about diversity.
With the increasing sophistication of AI-generated media, the risks associated with its use have never been more significant. As Sora and similar tools gain popularity, the need for stronger regulations and safeguards to prevent misuse becomes even more urgent.