OpenAI’s new DALL-E model draws anything — but bigger, better, and faster than before

Early last year OpenAI showed off a remarkable new AI model called DALL-E (a combination of WALL-E and Dalí), capable of drawing nearly anything and in nearly any style. But the results were rarely something you’d want to hang on the wall. Now DALL-E 2 is out, and it does what its predecessor did much, much better — scarily well, in fact. But the new capabilities come with new restrictions to prevent abuse.

DALL-E was described in detail in our original post on it, but the gist is that it can take quite complex prompts, such as “A bear riding a bicycle through a mall, next to a picture of a cat stealing the Declaration of Independence.” It would gladly comply, and out of hundreds of outputs surface the ones most likely to meet the user’s standards.

DALL-E 2 does the same thing fundamentally, turning a text prompt into a surprisingly accurate image. But it has learned a few new tricks.

First, it’s just plain better at doing the original thing. The images that come out the other end of DALL-E 2 are several times larger and more detailed (1024×1024 pixels, up from the original model’s 256×256). It’s actually faster despite producing more imagery, meaning more variations can be spun out in the handful of seconds a user might be willing to wait.

“A sea otter in the style of Girl with a Pearl Earring” turns out pretty good.

Part of that improvement comes from a switch to a diffusion model, an approach to image generation that starts with pure noise and refines the image over time, repeatedly making it a little more like the image requested until there’s no noise left at all. But it’s also just a smaller and more efficient model, some of the engineers who worked on it told me.
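To make that concrete, here’s a heavily simplified sketch of what such a sampling loop looks like in Python. The noise-predicting network, step size and noise schedule are placeholders of my own, not OpenAI’s actual components; the point is just the shape of the process: start from noise, strip a little predicted noise away at each step, repeat until an image remains.

```python
import numpy as np

def predict_noise(image, t, prompt):
    """Placeholder for the trained, text-conditioned denoising network.
    A real model would estimate the noise still present in `image` at step `t`."""
    return np.zeros_like(image)

def sample(prompt, shape=(64, 64, 3), steps=200):
    x = np.random.randn(*shape)                  # start from pure Gaussian noise
    for t in reversed(range(steps)):             # walk the noise schedule backwards
        eps = predict_noise(x, t, prompt)        # estimate the remaining noise
        x = x - eps / steps                      # strip away a small fraction of it
        if t > 0:                                # DDPM-style samplers re-inject a bit
            x += 0.01 * np.random.randn(*shape)  # of fresh noise between steps
    return x                                     # ideally, an image matching the prompt
```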

Second, DALL-E 2 does what they call “inpainting,” essentially smart replacement of a given area in an image. Say you have a picture of your place, but there are some dirty dishes on the table. Simply select that area and describe what you want instead: “an empty wooden table,” or “a table without dishes on it,” whatever seems logical. In seconds, the model will show you a handful of interpretations of that prompt, and you can pick whichever looks best.

You may be familiar with something similar in Photoshop, “Content-Aware Fill.” But that tool is more for filling in a space with more of the same, like if you want to replace a bird in an otherwise clear sky and don’t want to bother with clone stamping. DALL-E 2’s capabilities are much greater: it can invent new things, such as a different kind of bird, a cloud, or, in the case of the table, a vase of flowers or a spilled bottle of ketchup. It’s not hard to imagine useful applications for this.

Notably, the model will include things like appropriate lighting and shadows, or choose correct materials, since it’s aware of the rest of the scene. I use “aware” loosely here — no one, not even its creators, knows how DALL-E represents these concepts internally, but what matters for these purposes is that the results suggest that it has some form of understanding.
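To illustrate the mechanic rather than OpenAI’s actual interface (which I only saw demoed, not under the hood): a mask marks the region to replace, every pixel outside the mask is kept as-is, and a generator fills the masked area conditioned on the prompt and the surrounding scene. The generator below is just a stand-in that returns flat gray.

```python
import numpy as np
from PIL import Image

def placeholder_generator(prompt, height, width):
    """Stand-in for the text- and scene-conditioned model; returns flat gray."""
    return np.full((height, width, 3), 128, dtype=np.uint8)

def inpaint(photo, mask, prompt, n=4):
    """Keep every pixel outside `mask` (a boolean array with the photo's height
    and width), regenerate the masked region to match `prompt`, and return
    several candidates to pick from."""
    original = np.asarray(photo.convert("RGB"))
    height, width = mask.shape
    candidates = []
    for _ in range(n):
        fill = placeholder_generator(prompt, height, width)
        merged = np.where(mask[..., None], fill, original)  # mask chooses fill vs. original
        candidates.append(Image.fromarray(merged.astype(np.uint8)))
    return candidates

# e.g. inpaint(kitchen_photo, dirty_dishes_mask, "an empty wooden table")
```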

Examples of teddy bears in an ukiyo-e style and a quaint flower shop.

The third new capability is “variations,” a name that’s accurate enough: you give the system an example image and it generates as many variations on it as you like, from very close approximations to impressionistic redos. You can even give it a second image, and it will sort of cross-pollinate them, combining the most salient aspects of each. The demo they showed me had DALL-E 2 generating street murals based on an original, and it really did capture the artist’s style for the most part, even if it was probably clear on inspection which was the original.
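One way to picture that cross-pollination trick (my own sketch, not necessarily how OpenAI implements it): encode each source image into an embedding, blend the two embeddings at various ratios, and decode each blend back into pixels. The encoder and decoder below are placeholders supplied by the caller.

```python
import numpy as np

def slerp(a, b, t):
    """Spherical interpolation between two embedding vectors."""
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    if omega < 1e-6:                 # nearly identical embeddings: nothing to blend
        return a
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

def blend_variations(image_a, image_b, encode, decode, steps=5):
    """Blend two images by interpolating their embeddings and decoding each mix.
    `encode` and `decode` stand in for whatever model maps images to and from
    the embedding space."""
    za, zb = encode(image_a), encode(image_b)
    return [decode(slerp(za, zb, t)) for t in np.linspace(0.0, 1.0, steps)]
```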

It’s hard to overstate the quality of these images compared with those of other generators I’ve seen. Although there are almost always the kinds of “tells” you’d expect from AI-generated imagery, they’re less obvious, and the rest of the image is far better than the best I’ve seen produced by other models.

Almost anything

I wrote earlier that DALL-E 2 can draw “almost anything,” though there isn’t really any technical limitation that would prevent the model from convincingly drawing anything you can come up with. But OpenAI is conscious of the risk posed by deepfakes and other misuses of AI-generated imagery and content, and so has added some restrictions to their latest model.

DALL-E 2 runs on a hosted platform for now, an invite-only test environment where developers can try it out in a controlled way. Part of that means every prompt submitted to the model is evaluated for violations of a content policy that prohibits, as they put it, “images that are not G-rated.”

That means no hate, harassment, violence, self-harm, explicit or “shocking” imagery, illegal activities, deception (e.g. fake news reports), political actors or situations, medical or disease-related imagery, or general spam. In fact, much of this won’t be possible, as violating imagery was excluded from the training set: DALL-E 2 can do a shiba inu in a beret, but it doesn’t even know what a missile strike is.
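For a sense of where that screening sits in the flow (and this is a toy blocklist, far cruder than however OpenAI actually evaluates prompts): the prompt is checked before anything is generated, and a disallowed prompt is rejected outright.

```python
# Toy prompt screen, purely illustrative: the real evaluation is surely more
# sophisticated than a keyword list. The point is the placement of the check,
# before any image is generated.
BLOCKED_TOPICS = {"violence", "harassment", "self-harm"}

def prompt_is_allowed(prompt: str) -> bool:
    lowered = prompt.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def generate_if_allowed(prompt, generate):
    """`generate` is a placeholder for the actual model call."""
    if not prompt_is_allowed(prompt):
        raise ValueError("Prompt violates the content policy")
    return generate(prompt)
```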

In addition to prompts being evaluated, the resultant imagery will all (for now) be reviewed by human inspectors. That’s obviously not scalable, but the team told me that this is part of the learning process. They’re not sure exactly how the boundaries should work, which is why they’re keeping the platform small and self-hosted for now.

In time, DALL-E 2 will likely be turned into an API that can be called like OpenAI’s other services, but the team said they want to be sure that’s wise before taking the training wheels off.

You can learn more about DALL-E 2 and test out some semi-interactive examples over at the OpenAI blog post.
