Hollywood stars

Artificial intelligence means anyone can cast Hollywood stars in their own movies

For years, the only way to make a blockbuster featuring a Hollywood star and dazzling special effects was through a big studio. The Hollywood giants were the ones who could afford to pay celebrities millions of dollars and license the sophisticated software needed to produce elaborate, effects-laden films. All of that is about to change, and the public is getting a taste of it thanks to artificial intelligence (AI) tools like OpenAI’s DALL-E and Midjourney.

Both tools use images pulled from the internet and curated datasets like LAION to train their AI models, which can then produce similar but entirely original images from text prompts. The AI images, which range from photographic realism to imitations of famous artists’ styles, can be generated in as little as 20-30 seconds, often producing results that would take a human hours to create. Take, for example, the illustration of a “japanese mech robot detailed synthwave diagram” below.
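To give a sense of how accessible this has become, here is a minimal sketch, not the exact workflow behind the article’s illustration, of generating an image from a text prompt with the open-source Stable Diffusion model discussed below, using a recent version of Hugging Face’s diffusers library; the model ID and generation settings are illustrative assumptions.

```python
# Minimal text-to-image sketch using Stable Diffusion via Hugging Face diffusers.
# The model ID, step count and guidance scale are illustrative assumptions,
# not settings taken from the article.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # a consumer GPU with roughly 8 GB of VRAM is typically enough

image = pipe(
    "japanese mech robot detailed synthwave diagram",  # the prompt cited above
    num_inference_steps=30,  # more steps means more detail but slower generation
    guidance_scale=7.5,      # how closely the image follows the prompt
).images[0]
image.save("mech_robot.png")
```

On a recent GPU a single image takes on the order of tens of seconds, in line with the 20-30 second figure above.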

These paid tools generally block celebrities from appearing directly in their output. But users are finding workarounds with the arrival of free, open-source AI image generation software. The most popular is Stable Diffusion, produced by the UK-based startup Stability.AI. The open-source tool lets users generate original images of any celebrity found on the internet. A particularly convincing example (below) shows Bryan Cranston (Breaking Bad) donning the costume of a Game of Thrones character, placing the iconic actor in a setting many fans could only dream of.

What started as AI image generation quickly turned into experimental animations that combine Stable Diffusion with free add-on tools like Deforum, Google Colab and EbSynth. While some add-ons require tinkering with code, YouTube, Discord and Reddit are filled with easy-to-follow tutorials that make them accessible to almost anyone. These DIY animations are just the beginning. Full-fledged original AI videos generated from text prompts are already on the way.
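Under the hood, many of these DIY animation workflows boil down to running Stable Diffusion’s img2img mode over a video frame by frame and reassembling the result. Deforum and EbSynth have their own, more sophisticated pipelines; the sketch below only illustrates the basic idea, and the model ID, prompt, strength value and file layout are assumptions.

```python
# Minimal frame-by-frame stylization sketch with Stable Diffusion img2img.
# Assumes frames were already extracted (e.g. with ffmpeg) into ./frames/ as PNGs.
# Deforum and EbSynth layer interpolation and temporal consistency on top of this idea.
from pathlib import Path

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "clay creature, stop-motion style, studio lighting"  # illustrative prompt

Path("stylized").mkdir(exist_ok=True)
for frame_path in sorted(Path("frames").glob("*.png")):
    init = Image.open(frame_path).convert("RGB").resize((512, 512))
    # Re-seeding each frame keeps the style consistent and reduces flicker.
    generator = torch.Generator("cuda").manual_seed(42)
    out = pipe(
        prompt=prompt,
        image=init,        # the source frame to restyle
        strength=0.45,     # how far the output may drift from the source frame
        guidance_scale=7.5,
        generator=generator,
    ).images[0]
    out.save(Path("stylized") / frame_path.name)
```

The stylized frames can then be stitched back into a clip with a tool like ffmpeg.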

Earlier this year, Google’s Imagen Video and Meta’s Make-A-Video previewed AI tools that can generate photorealistic video from text prompts. Open-source releases of these types of AI tools are already in development – a dynamic that will likely lead to the same kind of uncensored content produced with Stable Diffusion.

If the evolution of AI video tools mirrors the rapid improvement in 2D AI image generators, audiences could see their first fully AI-generated feature film within one to two years.

Stability.AI hoped it could trust its users

The potential misuse of AI video generation tools to violate the intellectual property rights of Hollywood studios, and the rights of actors, is among the many issues likely to arise in the coming months. Soon, almost anyone using AI tools might be able to cast celebrities in unauthorized films featuring everything from violent footage to sexually explicit scenes.

Creature Test | Stable Diffusion Img2Img x EbSynth

For example, the default installation of Stable Diffusion includes safety filters to prevent the generation of explicit images, but the open-source community quickly came up with code to circumvent these safeguards. Although sexual content is not the most popular use among Stable Diffusion’s roughly 10 million users, there is at least one Reddit community with more than 6,400 members dedicated to exploring how to use Stable Diffusion to produce sexually suggestive and nude imagery.

In September, when publicly questioned about the possibility of people using Stable Diffusion in potentially problematic ways, such as in the creation of pornography, the company’s CEO, Emad Mostaque, denied responsibility.

“If people use technology to do illegal things, it’s their fault,” Mostaque tweeted. “It is the basis of liberal society.” In the same thread, Mostaque wrote that “if people use it to copy artists’ styles or infringe copyright, they’re behaving unethically and it’s literally traceable because the outputs are deterministic”.

AI-generated content is coming to Hollywood

On October 17, Stability.AI received a $101 million investment to continue developing its AI models, as well as its in-development DreamStudio product, a proprietary, paid version of its AI model that does not require the technical skills needed to install and run Stable Diffusion.

But alongside Stability.AI’s new billion-dollar valuation, there has been an apparent shift in the company’s approach to its business. Its public messaging has changed with the latest release, Stable Diffusion 1.5, which the startup says has sparked government interest in its free AI tool used by millions.

“We took a step back at Stability AI and chose not to release 1.5 as quickly as we released previous checkpoints,” wrote Stability.AI’s chief information officer, Daniel Jeffries, on October 20. “We have heard from regulators and the general public that we need to focus more on security to ensure that we take all possible measures so that people don’t use Stable Diffusion for illegal purposes or to harm people.”

This extra caution from the Stable Diffusion team may be welcomed by film and music studios (an AI music generation tool is also in the works) that could be affected by AI generation tools in the months and years to come.

Nevertheless, the next indie movie starring your favorite actor could soon be produced in the bedroom of someone using an AI video generator to create unlikely Hollywood mashups and post them online, legal ramifications be damned. At this stage, the question is not if it will happen, but when.