
OpenAI shuts down the Ghibli craze – now users turn to open source

When OpenAI released its latest image generator a few days ago, it probably didn't expect to break the internet.

But that's more or less what happened, with millions rushing to turn their pets, selfies, and favorite memes into something that looks straight out of a Studio Ghibli movie. All it takes is adding a phrase like "in Studio Ghibli style" to the prompt.

For anyone unfamiliar, Studio Ghibli is the legendary Japanese animation studio behind Spirited Away, Kiki's Delivery Service, and Princess Mononoke.

Its soft, hand-painted style and magical settings are instantly recognizable – and, as it turned out, surprisingly easy for OpenAI's new model to imitate. Social media quickly filled up with anime versions of cats, family portraits, and inside jokes.

This surprised many people. Typically, OpenAI's tools resist prompts that name a specific artist or studio, since honoring them would suggest, implicitly or explicitly, that copyrighted images were present in the training data.

For a while, though, that no longer seemed to matter. Even OpenAI CEO Sam Altman changed his profile photo to a Ghibli-style image and posted it on X:

At one point, more than a million people signed up for ChatGPT within a single hour.

Then, quietly, it stopped working for many people.

Users began to notice that prompts citing Ghibli – and even attempts to describe the style more indirectly – no longer returned the same results.

Some prompts were rejected outright. Others produced only generic art that looked nothing like the images that had gone viral the day before. Many now speculate that the model was updated and that OpenAI introduced copyright restrictions behind the scenes.

OpenAI later said that, despite the trend's rapid growth, it had restricted Ghibli-style images by adopting a "conservative approach," refusing attempts to generate images in the style of a living artist.

This kind of thing is not new – it also happened with DALL·E. A model launches with plenty of flexibility and loose guardrails, catches fire online, and is then quietly dialed back, usually in response to legal concerns or policy updates.

The original version of DALL·E could do things that were later disabled. The same seems to have happened here.

As one Reddit commenter put it:

"The problem is it always goes like this: a closed model is released that's far better than anything we have. Then the closed model gets heavily nerfed. Then open-source models arrive that come close to the nerfed version."

OpenAI's sudden retreat has pushed many users to look elsewhere, with some turning to open-source models such as Flux, developed by Black Forest Labs, a startup founded by former Stability AI researchers.

Unlike OpenAI's tools, Flux and other open-source text-to-image models apply no server-side restrictions (or at least far looser ones, limited to illegal or explicit material). As a result, they don't filter out prompts that reference a Ghibli-like style.

Of course, fewer controls don't mean open-source tools avoid the ethical questions. Models like Flux are often trained on the same kind of scraped data that fuels the debates over style, consent, and copyright.

The difference is that they aren't constrained by corporate risk management – meaning the creative freedom is wider, but so is the grey area.


