DALL-E and the Future of Art
DALL-E represents a breakthrough in the field of generative AI. Here's a quick overview of how to use it and what it means for the art world.
As we have seen in our previous article, artificial intelligence (AI) is transforming many aspects of our lives, including the world of art. With the help of machine learning algorithms and generative models (i.e., models that "learn" patterns from existing data and use them to generate new images), AI can now create works of art that are often indistinguishable from those created by human artists.
AI-generated art is produced by algorithms that analyze patterns and styles in existing works and use this information to create new and original pieces. It can take many forms, including paintings, sculptures, music, and even fashion.
One of the most fascinating aspects of this type of art is that it can produce completely unprecedented and often unexpected works. The algorithms used can generate unique patterns, colors, and textures that a human artist may not have considered. As a result, AI-generated art has the potential to push the boundaries of traditional art forms and create new experiences.
Despite the many possibilities afforded to us by this type of AI, it has also sparked debate about the role of technology in the creative process. Some argue that AI-generated art lacks the emotional depth and nuance of human-created art, while others see it as a new frontier for artistic expression.
Today we are going to talk about one of the most famous AI art generators of all, one that shares its beginnings with GPT-3. It is called DALL-E.
What Is DALL-E?
DALL-E is an AI-based image generator created by OpenAI, a research organization focused on developing advanced technologies in artificial intelligence. Launched in January 2021, DALL-E is designed to create unique images from descriptions we provide in the form of text.
The name "DALL-E" is a portmanteau of "Dalí" and "WALL-E," referring to the surrealist artist Salvador Dalí and the robot from the Pixar movie WALL-E. The project builds on OpenAI's previous work with GPT-3, a language model capable of generating human-like text.
DALL-E uses a combination of neural networks and deep learning techniques to create its images. The model was trained on a dataset of various images and text captions, allowing it to understand the relationships between different objects and concepts.
The development of DALL-E shows the potential of machine learning to create unique and imaginative visual art. The model has already been used to create a wide range of images, from landscapes, animals, and objects to even characters that could pass for protagonists in any story.
Overall, DALL-E represents a breakthrough in the field of generative AI, demonstrating the growing potential of machine learning to create authentic and imaginative visual art.
How to Use It
To use this AI, the first thing to do is to register on their website.
Once we have successfully registered, we land in the DALL-E creation lab, where we can run whatever tests we want.
As we can see, it is very similar to the playground we had for GPT-3.
The first thing that catches our attention is a set of examples of the first and most famous images created with DALL-E. The page then offers several features, such as uploading an image to edit it, letting the AI surprise us with the Surprise Me button, or describing the image we want it to generate.
To show you the potential of this AI, we are going to ask it to draw a robot in a Disney-like style, rendered as digital art. After all, what better subject than a robot when we are talking about an AI?
To start, we will click on the text box provided and write our prompt (if you have read our previous articles, you will know what I'm talking about; if not, I invite you to go through them). For this case, I have written the following:
“a cute robot, Disney style, digital art”
And the result we have obtained is as follows:
The image that most caught my attention is the second result, so we will keep it and continue testing with this robot as we progress.
Options Within the Image Generation
DALL-E has several options when generating images. Two of the most famous are Edit and Outpainting.
Outpainting
This option lets users continue the artwork of an image beyond its original edges. On their blog, there is an explanation of what it is and how it works, along with a great example of what can be done.
We are all familiar, either from art history or from the movie, with the painting entitled "Girl with a Pearl Earring" by Johannes Vermeer.
Using this technology, the artist August Kamp imagined what the painting's enlarged margins might look like and kept extending the work until achieving the following result:
Little by little, the artist told the model what surrounded the scene, such as the kitchen, the windows, and the shelves, until arriving at what we are looking at right now. On the blog, you can see a video of the process.
Now we are going to do a test with the robot we generated before. To do this, select the image in our results and click on the Edit option, which will open an editor.
Below the image there are several tools; for this case, we will pick the first one, called Select. With it, we move the blue frame to the area where we want to extend our image and tell DALL-E what to draw there. For this example, I have asked it to draw another robot next to the first, and then we click Generate.
As we can see in the result, DALL-E has extended the image to the left, where we had indicated; it has respected the background, the colors, and the light, and has drawn what we could call an older version of our first robot.
Edit
This feature consists of adding something to an image by applying a mask to the original. Masks are a widespread concept in programs such as Photoshop and are another way to create transparency in a layer. In this case, we can see an image of an empty room to which a sofa is added exactly in the area selected (marked with the number two). If you want to see more examples, on this website you can see some more.
We are going to do another test with our two robots. In the same editor, we will now select the Eraser option and erase an area of the image. I have decided to erase the area between the two robots.
And now I’m going to ask DALL-E to draw a heart between the two of them by writing in the text box the following:
“Cute robots with a metallic heart in the middle, Disney style, digital art”
The results are as follows:
How to Do It With Python
If you have read our second article about GPT-3 and its use with Python, this is going to be very easy for you. We will follow the same steps: install the openai library, import it, and add our API key to the code.
In this case, we will indicate a prompt, the number of images to generate, and their size. The result can be obtained either as base64-encoded bytes or via a URL that hosts the generated image. An example of the code would be the following:
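Since the original code listing is not reproduced here, below is a minimal sketch of what such a script might look like, using the v0.x-style Image API of the openai Python library; the API key placeholder and the prompt are illustrative:

```python
import openai

# Add our API key (illustrative placeholder; keep real keys out of source code).
openai.api_key = "YOUR_API_KEY"

# Ask DALL-E for images: a prompt, how many images to make, and their size.
response = openai.Image.create(
    prompt="a cute robot, Disney style, digital art",
    n=2,                # number of images to generate
    size="1024x1024",   # supported sizes: 256x256, 512x512, 1024x1024
)

# By default, the API returns URLs that host the generated images.
for item in response["data"]:
    print(item["url"])
```

Each URL points to a temporary location hosting one generated image, so we can download the results we like before the links expire.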
If we would like to explore the API further and learn about the other parameters DALL-E accepts, the documentation at this link will help us expand our knowledge.
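As a further illustration, the Edit feature we used in the web editor also has an API counterpart: an image-edit call that takes the original image, a mask whose transparent region marks where DALL-E may draw, and a prompt. The sketch below again assumes the v0.x-style openai library; the file names (robots.png, mask.png) are hypothetical, and it also shows requesting the result as base64 bytes instead of a URL:

```python
import base64
import openai

openai.api_key = "YOUR_API_KEY"

# Edit an existing image: the transparent region of the mask tells DALL-E
# where it is allowed to paint (here, the gap we erased between the robots).
response = openai.Image.create_edit(
    image=open("robots.png", "rb"),   # original image (square PNG under 4 MB)
    mask=open("mask.png", "rb"),      # same dimensions; transparent where DALL-E may draw
    prompt="Cute robots with a metallic heart in the middle, Disney style, digital art",
    n=1,
    size="1024x1024",
    response_format="b64_json",       # get the image bytes directly instead of a URL
)

# Decode the base64 payload and save the edited image to disk.
image_bytes = base64.b64decode(response["data"][0]["b64_json"])
with open("robots_with_heart.png", "wb") as f:
    f.write(image_bytes)
```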
How Much Does It Cost?
To test DALL-E, new accounts get a free option: 50 credits to use during the first month and 15 credits monthly after that. One credit is consumed each time we hit Generate, regardless of whether we like the result or not.
If we would like to buy credits for more image generations, 115 credits cost 15 dollars. In your profile, you can see how many credits you have left, although DALL-E will also warn you from time to time about your available credits through alerts in its editor.
Conclusion
Artificial intelligence is poised to have a significant impact on the art world, and AI-generated art is fast becoming a legitimate and growing area of artistic practice. As technology evolves, it is likely to generate new and innovative approaches to creative expression and redefine what we consider art.
One of the main benefits of AI-generated art is the ability to quickly generate works based on existing patterns and styles. This provides a powerful tool for artists and designers to explore new creative directions and push the boundaries of traditional art practice.
However, the rise of this new technology has raised concerns about the role of the artist in the creative process. Despite these concerns, the impact of AI on the art world is inevitable. The technology will continue to evolve and gradually become more accessible, and with it, we will see an increase in the use of AI-generated art in a wide range of contexts, from commercial design to fine art.
Regardless of how the relationship between AI and artists develops, it is clear that the art world is on the cusp of a major transformation, especially in the way we think about and create art.