theregister.com

OpenAI's Sora lets ChatGPT subscribers churn out janky text-generated videos

OpenAI has put its video generation tool, Sora, into the hands of ChatGPT Plus and Pro users.

Sora will be familiar to anyone who has fed text into an AI to receive an image. Once magical, the technology has rapidly become ubiquitous. Earlier this year, OpenAI showed off Sora, which took it a step further with short videos created from text.

The reaction from some parts of the motion picture industry was as swift as it was inevitable. In February, US filmmaker Tyler Perry reportedly scrapped an $800 million expansion to a film studio after seeing the tech in action.

Things have moved on since then. OpenAI has updated Sora, calling the new version "Sora Turbo," and made it available to ChatGPT Plus and Pro users, so long as they aren't in the UK, Switzerland, or the European Economic Area.

The tool arrived behind the private preview of Google's Veo, although The Register reckoned it would probably ship at some point in December as part of the modestly titled "12 Days of OpenAI."

At its heart, Sora generates short video snippets from entered text. Users can string videos together in sequence, and it can also remix existing videos via text input or allow users to upload their own images for animation.

Videos can be rendered at resolutions from 480p to 1080p, although you'll need the $200 per month ChatGPT Pro subscription if you want the highest resolution. The top-end subscription will also allow videos of up to 20 seconds (up from the five seconds of ChatGPT Plus), and the output won't sport a watermark.

The process is not quick, and it is not difficult to imagine OpenAI's servers groaning under the pressure if the service becomes popular. Unsurprisingly, upping the resolution lengthens the wait.

The output is interesting, if slightly alarming at times, and not always in a good way. OpenAI said: "It often generates unrealistic physics and struggles with complex actions over long durations." We'd add "walking" to those "complex actions" since it is not hard to find sample videos where the AI has struggled to work out how legs are supposed to move. One YouTuber posted a disturbing video of a strolling giraffe, where the legs were... interesting.

As for where the data behind the scenes has come from, according to OpenAI, the videos are generated using publicly available data plus proprietary data from its partnerships. The company said: "We also partner to commission and create datasets fit for our needs."

It is still early days for Sora. While some of the carefully curated video snippets are impressive, spotting something generated by AI is relatively easy, particularly if limbs or physics are involved. However, considering the pace of AI development and the hype surrounding it, the tech eventually slithering into more of the creative world seems inevitable. ®
