Fake images generated by AI are becoming increasingly realistic, including (L-R) the Pope in a puffer, Donald Trump getting arrested and Joe Biden with a mullet. Composite: AI images via Twitter users Pop Base/Eliot Higgins/Cam Harless

Misinformation, mistakes and the Pope in a puffer: what rapidly evolving AI can – and can’t – do


Experts have sounded a warning on artificial intelligence as it becomes increasingly sophisticated and harder to detect

Generative AI – including large language models such as GPT-4, and image generators such as DALL-E, Midjourney, and Stable Diffusion – is advancing in a “storm of hype and fright”, as some commentators have observed.

Recent advances in artificial intelligence have yielded warnings that the rapidly developing technology may result in “ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control”.

That’s according to an open letter signed by more than 1,000 AI experts, researchers and backers, which calls for an immediate pause on the creation of “giant” AIs for six months so that safety protocols can be developed to mitigate their dangers.

But what is the technology currently capable of doing?

It can generate photorealistic images

Midjourney creates images from text descriptions. It has improved significantly in recent iterations, with version five capable of producing photorealistic images.

Incredible to see so much Gen AI progress in one year

Credit: https://t.co/wyQIILjPss pic.twitter.com/fJwJq492Tc

— Kris Kashtanova (@icreatelife) March 23, 2023

Midjourney v5 has pushed into photorealism, a goal which has eluded the computer graphics industry for decades (!) 🤯

Insane progression, and all that by 11 people with a shared dream.

🧵 Let's explore what these breakthrough in Generative AI mean for 3D & VFX as we know it... pic.twitter.com/GlycHcPQqA

— Bilawal Sidhu (@bilawalsidhu) March 25, 2023

These include the faked images of Trump being arrested, which were created by Eliot Higgins, founder of the Bellingcat investigative journalism network.

Making pictures of Trump getting arrested while waiting for Trump's arrest. pic.twitter.com/4D2QQfUpLZ

— Eliot Higgins (@EliotHiggins) March 20, 2023

Midjourney was also used to generate the viral image of Pope Francis in a Balenciaga puffer jacket, which has been described by web culture writer Ryan Broderick as “the first real mass-level AI misinformation case”. (The creator of the image has said he came up with the idea after taking magic mushrooms.)

AI-generated image of Pope Francis goes viral on social media. pic.twitter.com/ebfLK4F850

— Pop Base (@PopBase) March 25, 2023

Image generators have raised serious ethical concerns around artistic ownership and copyright, with evidence that some AI programs have been trained on millions of online images without permission or payment, leading to class action lawsuits.

Tools have been developed to protect artistic works from being used by AI, such as Glaze, which applies a cloaking technique that prevents an image generator from accurately replicating the style of an artwork.

It can convincingly replicate people’s voices

AI-generated voices can be trained to sound like specific people, with enough accuracy that it fooled a voice identification system used by the Australian government, a Guardian Australia investigation revealed.

How an AI voice clone fooled Centrelink – video

In Latin America, voice actors have reported losing work because they have been replaced by AI dubbing software. “An increasingly popular option for voice actors is to take up poorly paid recording gigs at AI voiceover companies, training the very technology that aims to supplant them,” a Rest of World report found.

AI voice cloning is getting shockingly good.

This video by ElevenLabs uses Leonardo DiCaprio's famous climate change speech and turns it into other cloned actors' voices.

You can even clone your own voice on their website. pic.twitter.com/L38vAvcU7Z

— Rowan Cheung (@rowancheung) March 13, 2023

It can write

GPT-4, the most powerful model released by OpenAI, can write code in a wide range of programming languages, as well as essays and books. Large language models have led to a boom in AI-written ebooks for sale on Amazon. Some media outlets, such as CNET, have reportedly used AI to write articles.

Video AI is getting a lot better

There are now text-to-video generators available, which, as their name suggests, can turn a text description into a moving image.

"Will Smith eating spaghetti" generated by Modelscope text2video

credit: u/chaindrop from r/StableDiffusion pic.twitter.com/ER3hZC0lJN

— Magus Wazir (@MagusWazir) March 28, 2023

Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators

abs: https://t.co/5xCsj4PNRj
github: https://t.co/BdSzlepGQG pic.twitter.com/XY4piH6j4v

— AK 🤗 in SF for the Open-Source AI meetup (@_akhaliq) March 24, 2023

It can turn 2D images into 3D

AI is also getting better at turning 2D still images into 3D visualisations.

Everyone will be able to be anyone soon.

MegaPortraits by SamsungLabs uses new neural architectures that produce high-quality avatars from medium-resolution videos and high-resolution images.

Deepfakes are getting scary good. pic.twitter.com/tCOljxt60H

— The Rundown AI (@TheRundownAI) March 19, 2023

3D capture is moving so fast - I scanned & animated this completely on an iPhone.

Last summer you'd need to wrangle COLMAP, Instant NGP, and FFmpeg to make NeRFs.

Now you can do it all inside Luma AI's mobile app. Capture anything and reframe infinitely in post!

Thread 🧵 pic.twitter.com/hDngpVBas6

— Bilawal Sidhu (@bilawalsidhu) March 19, 2023

After weeks of research and development I finally managed to turn AI generated images into 3d scenes, refine them in real time in a non-destructive and streamlined workflow... It's beyond camera projection since you can make entire scenes viewable in any angles.. It's not a… pic.twitter.com/4pfAF9skPZ

— Aloner One (@al_oner_one) March 23, 2023

It makes factual errors and hallucinates

AI, particularly large language models that are used for chatbots such as ChatGPT, is notorious for making factual mistakes that are easily missed because they seem reasonably convincing.

For every example of a successful use of AI chatbots, there is seemingly a counter-example of failure.

Prof Ethan Mollick at the Wharton School of the University of Pennsylvania, for example, fed one of his old papers into GPT-4 and asked it for a harsh but fair peer review, written as if by an economic sociologist. The model produced a reasonable review that hit many of the points his human reviewers had raised.

Not sure how to feel about this as an academic: I put one of my old papers into GPT-4 (broken into into 2 parts) and asked for a harsh but fair peer review from a economic sociologist.

It created a completely reasonable peer review that hit many of the points my reviewers raised pic.twitter.com/VTVwkB8ubL

— Ethan Mollick (@emollick) March 19, 2023

However, Robin Bauwens, an assistant professor at Tilburg University in the Netherlands, had an academic paper rejected by a reviewer who had likely used AI: the reviewer suggested he familiarise himself with academic papers that could not be found anywhere and appeared to have been made up.

A reviewer rejected my paper, and instead suggested me to familiarize myself with the following readings. I could not find them anywhere. After a control in GPT-2, my fears where confirmed. Those sources where 99% fake...generated by AI. https://t.co/ynx2igLObW

— Robin Bauwens (@BauwensRobin) March 27, 2023

The question of why AI generates fake academic papers relates to how large language models work: they are probabilistic models that estimate how likely each word is to follow the words before it. As Dr David Smerdon of the University of Queensland puts it: “Given the start of a sentence, it will try to guess the most likely words to come next.”

Why does chatGPT make up fake academic papers?

By now, we know that the chatbot notoriously invents fake academic references. E.g. its answer to the most cited economics paper is completely made-up (see image).

But why? And how does it make them? A THREAD (1/n) 🧵 pic.twitter.com/kyWuc915ZJ

— David Smerdon (@dsmerdon) January 27, 2023
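To make that “guess the next word” idea concrete, here is a minimal, illustrative sketch of next-word prediction using a toy bigram model. The tiny corpus and the resulting probabilities are invented for demonstration; real large language models use neural networks trained on vast amounts of text rather than simple word counts.

```python
import random
from collections import Counter, defaultdict

# A toy corpus; real models are trained on billions of words.
corpus = (
    "the most cited economics paper is a classic . "
    "the most cited paper in economics is well known . "
    "the most recent paper is not cited ."
).split()

# Count how often each word follows the previous one (a bigram model).
next_words = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev][nxt] += 1

def continue_sentence(start, length=8):
    """Repeatedly pick a likely next word, given only the previous word."""
    words = start.split()
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        # Sample in proportion to how often each word followed the last one.
        choices, counts = zip(*candidates.items())
        words.append(random.choices(choices, weights=counts)[0])
    return " ".join(words)

print(continue_sentence("the most"))
# The output looks fluent, but nothing checks whether the resulting "facts"
# or citations are real - which is why chatbots can confidently invent
# plausible-sounding but fake references.
```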

In February, Microsoft released a pre-recorded demo of Bing’s AI. As the software engineer Dmitri Brereton has pointed out, the AI was asked to generate a five-day itinerary for Mexico City; of the five suggested nightlife options it described, four were inaccurate. Brereton also found that it badly fudged the numbers when summarising the figures from a financial report.

It can create (cursed) instructions and recipes

ChatGPT has been used to write crochet patterns, resulting in hilariously cursed results.

GPT-4, the latest iteration of the AI behind the chatbot, can also provide recipe suggestions based on a photograph of the contents of your fridge. I tried this with several images from the Fridge Detective subreddit, but not once did it return any recipe suggestions containing ingredients that were actually in the fridge pictures.

It can act as an assistant to do administrative tasks

“Advances in AI will enable the creation of a personal agent,” Bill Gates wrote this week. “Think of it as a digital personal assistant: It will see your latest emails, know about the meetings you attend, read what you read, and read the things you don’t want to bother with.”

“This will both improve your work on the tasks you want to do and free you from the ones you don’t want to do.”

For years, Google Assistant’s AI has been able to make reservations at restaurants via phone calls.

OpenAI has now enabled plugins for GPT-4, allowing it to look up data on the web and to order groceries.

2/ Collaborations with major companies

Here's an example of meal planning for your weekend:

• Restaurant recommendation for Saturday (OpenTable)
• Recipe for Sunday (ChatGPT)
• Calculate calories (WolframAlpha)
• Order the ingredients (Instacart) pic.twitter.com/qz01ch8fh3

— Alex Banks (@thealexbanks) March 25, 2023
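Conceptually, a plugin works as a loop in which the model decides when to call an external service and the surrounding application performs the call, feeding the result back to the model. The sketch below illustrates that general pattern only; toy_model and search_web are invented stand-ins, not OpenAI’s actual plugin interface.

```python
import json

def toy_model(prompt: str) -> str:
    """Invented stand-in for a language model. A real model decides, from the
    prompt and each plugin's description, whether to request a tool call."""
    if "Tool result:" in prompt:
        result = prompt.split("Tool result:")[-1].strip()
        return json.dumps({"tool": None, "answer": f"Based on the lookup: {result}"})
    if "weather" in prompt.lower():
        return json.dumps({"tool": "search_web", "query": "weather in Mexico City this weekend"})
    return json.dumps({"tool": None, "answer": "I can answer that without a tool."})

def search_web(query: str) -> str:
    """Invented stand-in for a plugin, e.g. a web search or grocery-ordering API."""
    return f"(search results for: {query})"

def run(prompt: str) -> str:
    """The application loop: ask the model, run any tool it requests,
    then hand the tool's output back to the model for a final answer."""
    reply = json.loads(toy_model(prompt))
    if reply["tool"] == "search_web":
        tool_output = search_web(reply["query"])
        reply = json.loads(toy_model(f"{prompt}\nTool result: {tool_output}"))
    return reply["answer"]

print(run("What's the weather like in Mexico City this weekend?"))
```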
