5.3.2023

AI vs. Human Creativity: Who Will Win the Battle for Artistic Supremacy?

Unveiling the Potential Pitfalls and Possibilities of AI in Creativity and Design

Simó Tamás

Designer

The other day I shared Ivan Kramskoi's painting "Christ in the Wilderness" on my Instagram stories. I often use this platform to share paintings that I find visually appealing or interesting. However, I didn't include the name of the painting or the artist in my story. After sharing it, one of my friends asked me if it was a genuine painting or if it was generated by artificial intelligence. That question got me thinking and made me start typing about the role of AI in graphic design and how it may be embraced (or not) in the future.

Left: Christ in the Wilderness, painting by Ivan Kramskoi. Right: Christ in the Wilderness, generated by Midjourney.

A Little Retrospect

Over the last few years, the term "artificial intelligence" has gained increasing prominence in our daily lives. However, studies on intelligent machines date back to the 1950s. Since its beginning, AI research has aimed to simulate human brain functions, problem-solving, and formal logic. But is it truly intelligence or merely an imitation of it? 

In recent years, machine learning has emerged as the dominant technique in the field of AI. Rather than being programmed to complete a specific task, machines are trained through experience. By repeatedly gathering and analysing data, they make decisions based on what they have learned. 

A notable milestone in the history of AI came in 1996, when chess grandmaster Garry Kasparov famously defeated Deep Blue, IBM's chess-playing computer. At the time, Deep Blue was capable of evaluating 100 million chess positions per second. The following year, however, Deep Blue defeated Kasparov in a rematch, demonstrating the remarkable progress that had been made in the field of AI.

Grandmaster Garry Kasparov playing chess against a computer named Deep Blue.

The Board Has Been Flipped

Since Deep Blue, most of the pieces have been rearranged, so to speak. Most of us don’t even recognise to what degree AI is shaping our most trivial experiences. Just think of Google’s advanced web search engines that predict what you are looking for, the curated content on your social media that you surf through during lunch break, or how you pay for a cab ride using facial recognition. AI is there. 

Let’s take a closer look at Amazon, which has been an early adopter of machine learning and artificial intelligence technology. Over the years, the company has restructured itself to leverage these technologies in multiple ways, guided by the flywheel methodology: a concept borrowed from engineering that describes how a flywheel needs a strong initial push to start turning but can then be kept in motion with small, consistent additions of energy. Amazon applies this thinking across different parts of its business, from product recommendations and customer support to warehouse and delivery optimisation, driving its skyrocketing growth. While these technologies have made our lives more comfortable, their impact on the meaningfulness of work remains to be seen.

The tables sure turn quickly. And replaceability has become another soaring concern for many. An article from 2020 listed the jobs in which humans are least likely to be replaced by AI. As a designer, I was relieved to see creatives, artists, and writers on it. Yet here we are, two years later, with Midjourney and ChatGPT all over the public chatter.

For those who don't know, Midjourney is an artificial intelligence program that creates images from textual descriptions. For example, I gave it the prompt “banana chair”, and it generated the images below in seconds.

Pictures Midjourney generated for the text prompt “banana chair”

You can obtain any of the images generated by Midjourney in high resolution or request new ones within seconds. Essentially, Midjourney creates "artworks" in the blink of an eye, a task that would take humans hours or even days to accomplish. Other programs like DALL-E, Stable Diffusion, and Tengr.ai do similar things. Another powerful tool, Uizard, can turn UI sketches and screenshots into fully editable designs, and it will soon be able to transform text prompts into user interfaces. While this may seem daunting for designers, who might worry about losing their edge, it's important to remember that design is about more than just creating pretty things.

The question arises: is this a revolution? Are these AIs genuinely creative? It's a complex matter to explore.

The Technology Behind - Decoding Image Generation

This new technology is definitely considered a revolution, and it is shaking things up like a tornado! In the article mentioned earlier, the main argument against AI replacing creatives was that AI couldn't tap into a creative state. However, after taking Midjourney and DALL-E for a spin, my initial impression was that AI can indeed think outside the box. But where does this creativity stem from?

To create an image, these systems employ two neural networks. The first network generates an image from the prompt given by the user, while the second network compares the generated image against reference images and assigns it a score that reflects how accurate it is. If the image does not match the references closely enough, the system generates a new, more precise image. It is this second phase where most of the technology's issues arise.
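To make that generate-and-score loop a bit more concrete, here is a deliberately simplified sketch in Python. It is not how Midjourney or DALL-E actually work under the hood; the function names and the scoring logic are invented for illustration, and the only point is the idea of one component proposing an image while another grades it against references until the result is deemed close enough.

```python
# Simplified sketch of a generate-and-score loop. Illustrative only: real image
# generators use large neural networks, not random pixels, and the names below
# (generate_image, score_against_references) are hypothetical placeholders.
import random

def generate_image(prompt: str, seed: int) -> list[list[float]]:
    """Stand-in for the 'generator' network: returns a tiny random greyscale image."""
    rng = random.Random(hash((prompt, seed)))
    return [[rng.random() for _ in range(8)] for _ in range(8)]

def score_against_references(image, references) -> float:
    """Stand-in for the 'critic' network: higher score means closer to the references."""
    def distance(a, b):
        return sum(abs(pa - pb) for row_a, row_b in zip(a, b) for pa, pb in zip(row_a, row_b))
    closest = min(distance(image, ref) for ref in references)
    return 1.0 / (1.0 + closest)

def generate_until_good_enough(prompt, references, threshold=0.05, max_tries=20):
    """Keep proposing candidates until the critic's score clears the threshold."""
    best_image, best_score = None, -1.0
    for seed in range(max_tries):
        candidate = generate_image(prompt, seed)
        score = score_against_references(candidate, references)
        if score > best_score:
            best_image, best_score = candidate, score
        if score >= threshold:
            break
    return best_image, best_score

if __name__ == "__main__":
    refs = [generate_image("reference landscape", s) for s in range(3)]
    image, score = generate_until_good_enough("banana chair", refs)
    print(f"accepted a candidate with score {score:.3f}")
```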

While some AIs only have access to a fixed set of reference images, others scrape the web, pulling graphical elements from platforms like Dribbble or Behance and images from stock websites such as Shutterstock or Getty Images. This doesn't mean the machine copies a bunch of images from the internet and turns them into a collage. When generating a landscape, the AI analyses reference images from the internet and looks for patterns to follow: it notices, for example, that the upper part of landscape photos is usually blue because of the sky, and generates data accordingly. Examining this creation pattern reveals something interesting: the AI cannot simply create an image in a specific style; it reproduces every pattern it has learned from that style. If we ask it to generate a photorealistic image of a girl in the style of Hendrik Kerstens, we would not only get his iconic painting-like photographic look, but the girl herself would likely resemble the one captured by the artist.

This is why Lensa.ai, another trendy image generator used to create painted avatars from uploaded photos, produces what look like signatures in the bottom right corner of some of its images. Many painters sign their work in the bottom right corner, so the signature becomes a frequent pattern for the AI to reproduce.

Exploring the Limits 

Experiment #1 - The Patterns

Playing around with Midjourney, I ran an experiment with the following prompt: minimalist logo for a casual elegant clothing brand, navy blue colour, white background, seamless, golden ratio. I used this prompt twice, and this is what I got:

For the third attempt, I modified the prompt to the following: minimalist logo for a casual elegant clothing brand, navy blue colour, white background.

It’s quite obvious that when the AI searches for patterns, it analyses every part of the prompt. For example, "golden ratio" was interpreted in two different ways, resulting in images featuring either the well-known spiral curves or the colour gold in most of the “logos”. Interestingly, the third attempt resulted in a clothing and stationery mockup with the logo on it, possibly because many web images with a similar description present logos on mockups. The AI cannot "see" what's in the picture; it only recognises frequently used patterns. So if we flooded the internet with pictures of black circles labelled "minimalist website layout", and the AI learned from that dataset, it would generate black circles for that prompt. That basically means the current version of this technology is not capable of logical thinking; it can only create visual elements based on patterns.

Experiment #2 - The Ruler

Check the results above, generated when prompting the AI to create a modern, minimalistic website layout design for a creative branding agency. Upon closer inspection, I noticed that the AI rarely produces the layout itself as a standalone image. Instead, it's usually presented in a mockup-like form or as multiple layouts stacked together, a common pattern used on platforms like Dribbble to showcase websites in a more visually appealing manner. Additionally, the images lack clarity, and the text content appears to be a mishmash of wordings taken from various website designs.

When creating a website, designers commonly use grid systems to ensure that layouts are consistent, transparent, and adaptable across various devices. However, when examining the AI-generated layouts, it's evident that the AI isn't following any specific grid system. Instead, it combines various grid systems from its source images into a single piece, so if we inspect these layouts with a ruler, we find inconsistencies throughout the design. It appears that the AI uses different reference images for different sections of the website.
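To illustrate the kind of ruler check described above, here is a small hypothetical sketch: given the left edges of a few layout elements (the pixel values are invented for illustration, as are the grid settings), it tests whether each element snaps to an assumed 12-column grid and reports anything that drifts off it.

```python
# Hypothetical "ruler" check: do measured element positions snap to a 12-column grid?
# The grid settings and x-positions below are invented purely for illustration.

GRID_COLUMNS = 12
PAGE_WIDTH = 1200   # assumed artboard width in px
GUTTER = 24         # assumed gutter in px
TOLERANCE = 2       # how many px of drift we forgive

COLUMN_WIDTH = (PAGE_WIDTH - (GRID_COLUMNS - 1) * GUTTER) / GRID_COLUMNS

def column_edges():
    """Left edge of every column in the assumed grid."""
    return [round(i * (COLUMN_WIDTH + GUTTER)) for i in range(GRID_COLUMNS)]

def check_alignment(element_x_positions):
    """For each measured x-position, find the nearest column edge and how far off it is."""
    edges = column_edges()
    report = []
    for x in element_x_positions:
        nearest = min(edges, key=lambda edge: abs(edge - x))
        drift = abs(nearest - x)
        report.append((x, nearest, drift, drift <= TOLERANCE))
    return report

if __name__ == "__main__":
    # Made-up measurements taken "with a ruler" from an AI-generated layout
    for x, nearest, drift, on_grid in check_alignment([0, 101, 305, 422, 618]):
        status = "on grid" if on_grid else f"off grid by {drift}px"
        print(f"element at x={x}px: nearest column edge at {nearest}px, {status}")
```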

Experiment #3 - The Standards

Now let’s take a look at design standards. Design standards are fundamental building blocks that help designers convey messages in a way users can easily understand. Some of the most important design principles are visual hierarchy, contrast, balance and space, repetition, colour, and variety. However, when examining AI-generated images, it is evident that the AI applies only some of these principles, and most likely only because the designers whose work it was trained on applied them, not because of any knowledge of its own.

While the big picture seems to be in line with the design principles, there are many inconsistencies in the details, such as spacing and alignment issues. For example, as we can see in the image below, there are spacing inconsistencies between the navigation links and alignment discrepancies between the different sections. After spending some time analysing these images, one thing becomes more and more apparent: AI clearly struggles with design principles that require an eye for small details, like aligning things perfectly. That said, this technology is still in its early stages, and these errors may well be resolved in time.

Unpacking the Risks and Benefits in Creative Work 

There are plenty of articles about the dangers of AI. And this is not another attempt to paint a doomsday scenario. Instead, I would like to recommend two great movies. The first one is Her by Spike Jonze, which revolves around the topic of how AI can cause the alienation of human beings. The second one is Ex Machina by Alex Garland, about artificial intelligence's possible self-awareness and singularity. 

Moving away from the topic of AI movies, let's take a look at the photos below. Notice anything similar? 

Well, here's the thing: none of these photos are actual photographs. While the examples presented in the experiments were generated with Midjourney 4, these photorealistic images were created with Midjourney 5, the latest version. Midjourney 5 has made significant strides in creating images that look like they were taken with a camera. In particular, it is now much better at rendering skin texture and natural light, and it can even create realistic-looking human hands.

This is an excellent example of how rapidly this technology is advancing and the potential it holds for various industries. While it's clear that this program has already had an impact on the design industry, its influence could extend even further to fields like photography and modelling.

As amazing as this technology is, and as avid a user of technology as I am, I also maintain a healthy scepticism towards it. Too often, we focus only on the positive aspects of technological advancements and overlook the potential downsides.

As the saying goes, "Comfort zone is a beautiful place, but nothing ever grows there." My first concern is that, as human beings, we are becoming increasingly comfortable with the aid of these machines. And with this comfort comes the risk of losing the most valuable aspect of creative work - the process of crafting. As a designer and hobby photographer, I can attest that without this process, the work doesn't feel as fulfilling. The process of creating, being in the flow state, and being fully present in the moment is what produces unique and beautiful results. Humans need this process in order to grow, as it fuels our creativity. Without it, our ability to innovate may stagnate.

My second concern is related to copyright infringement. Most of these AIs are scraping data from the internet without the consent of the original creators. There is a document available online that lists over 1000 artists whose work has been used to train Midjourney and Stable Diffusion. These artists may have grounds to join a class action lawsuit, as these machines have not only used their work without permission but also monetised the results without giving credit to the original artists. In fact, even Getty Images, one of the world's leading photo libraries, has sued the creators of Stable Diffusion AI for scraping their content.

Images generated by Stable Diffusion with Getty Images watermarks.

In conclusion, from my perspective, the hype surrounding this technology often surpasses its actual capabilities, leading many creatives to overly rely on it.  But in fact, it is far from being able to take over our work, as it lacks logical thinking and merely collects and combines frequent patterns from the internet. AI does not (yet) consider strategic goals, user personas, user flows, or other essential elements that are necessary for a successful project. Currently, it only generates visually pleasing results. However, despite these limitations, let's also explore how we can potentially use this technology effectively.

AI is undoubtedly a powerful and valuable tool in the field of design, but we must learn to use it appropriately. Instead of relying on AI to do our job for us, we should use it to enhance the quality of our work. There are several ways we can achieve this. For example, we can train AI to compare and analyse our designs using design standards and grid systems, providing us with advice on potential improvements while still leaving the final choice to us. While AI can create stunning visuals, problem-solving remains predominantly a human skill. We can also train AI to evaluate our designs from a user experience perspective, offering recommendations based on data to create more user-friendly designs. Or how about training them to check if our work has similarities to other works on the internet, so we can minimise the risk of copyright issues? 
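As a rough illustration of that last idea, here is a minimal sketch of a similarity check built on a simple perceptual "average hash" computed with Pillow. The file paths are placeholders and the threshold is arbitrary; a real tool would need a far more robust comparison method and a proper reference database, but the principle of flagging near-duplicates before publishing is the same.

```python
# Minimal sketch: flag reference images that look suspiciously similar to our work.
# Uses a simple "average hash"; paths and threshold are illustrative placeholders.
from PIL import Image

HASH_SIZE = 8  # 8x8 thumbnail -> 64-bit hash

def average_hash(path: str) -> int:
    """Downscale to 8x8 greyscale and encode each pixel as above/below the mean brightness."""
    pixels = list(Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes (0 means the thumbnails look identical)."""
    return bin(a ^ b).count("1")

def flag_similar(candidate_path, reference_paths, threshold=10):
    """Return (path, distance) pairs for references within `threshold` bits of the candidate."""
    candidate = average_hash(candidate_path)
    matches = []
    for ref in reference_paths:
        distance = hamming_distance(candidate, average_hash(ref))
        if distance <= threshold:
            matches.append((ref, distance))
    return matches

if __name__ == "__main__":
    # Placeholder file names; replace with real image paths to try it out.
    for path, distance in flag_similar("my_new_artwork.png", ["reference_1.png", "reference_2.png"]):
        print(f"Warning: {path} looks similar (hash distance {distance})")
```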

The possibilities are truly endless, and what we can do today is embrace these tools and keep pushing boundaries to enhance our processes and the quality of our work. Should designers be afraid of the future? I think not, unless they are afraid to learn new things. Personally, I believe that contesting the merits of AI does not lead us any further. It's not a competition between human creativity and AI, but rather an opportunity to leverage the values of human ingenuity and craftsmanship while using AI as a crutch along the way in a rapidly changing technological landscape.
