It’s now impossible to tell if an image is AI or a photo. Where do photographers go from here?

Alex Baker

Alex Baker is a portrait and lifestyle driven photographer based in Valencia, Spain. She works on a range of projects from commercial to fine art and has had work featured in publications such as The Daily Mail, Conde Nast Traveller and El Mundo, and has exhibited work across Europe.

So it finally happened. No, not the zombie apocalypse, but that moment when I found it impossible to tell the difference between an AI-generated image and a real photograph. And I am fascinated and horrified in equal measure. This realisation came off the back of OpenAI’s announcement of Sora, their new video generator, which is staggeringly impressive in its realism.

While Sora is thankfully not yet available to the general public (and hopefully won’t be for the entire 2024 US election year), it will certainly put a bomb under anyone currently working in the film industry. I spent much of the weekend staring into the creative void, wondering how the next few years would look for us as photographers and filmmakers.

A (very) brief history of Generative AI

In order to understand how we’ve got to this point, we need to step into the past. Well, a year or so back, at least. Machine learning isn’t anything new; it’s been around almost as long as computers have. However, as photographers, we are interested in AI image and video generation and how those will impact the future of our livelihoods.

The scene for photographers changed rapidly in 2021 when DALL-E was launched as one of the first text-to-image generators. At first, the images seemed hilariously bad. How could this possibly be a threat, with six fingers on each hand? But then, in a matter of months, we watched DALL-E evolve into something ever more sophisticated.

Other text-to-image generators joined, such as Stable Diffusion and Midjourney, and the quality of output kept getting better. “Could this be a threat to our work?” creatives everywhere wondered.

Meanwhile, ChatGPT was launched to the general public and quickly became a household name. Video generators have since followed, with companies such as Meta and Microsoft getting in on the game.

Fake images

But these new image generators have not been without problems. We’ve seen issues with fake images going viral and deepfakes of pop stars and politicians. Even the Pope wasn’t immune. There is also the huge ongoing issue of copyright and the fact that these models have largely been trained on copyrighted material.

All of this really hit home this weekend when I saw the image that Sora created. It’s not just a video generator but can make photo-realistic stills as well. And I was honestly shaken by how real it looked.

Why is Sora so alarming?

The video examples are, of course, impressive. However, in my opinion, you can still tell that the videos are AI-generated. Strange movements and slight inconsistencies give them away, much as the extra fingers once gave away the still images.

This image (below) is nothing truly staggering. It looks like a perfectly ordinary medium-close-up photograph of a middle-aged woman in a knitted hat. Her skin has texture, and her lashes are complete with uneven mascara. There’s even a reflection and catch light in her eyes. The depth of field drops off naturally, just as it would if shot with any DSLR camera.

Only a knitting expert might be able to tell that the stitches in the hat are not real. This could easily be a photo. Except it isn’t and has never been a photograph. This woman does not exist.

What we have here is a whole new level of photorealism, and the generated video quality will most likely catch up to this in a short amount of time as well.

Impact on society

At this point, Pandora’s box has been well and truly opened when it comes to AI. The big tech companies seem obsessed with draining every last drop of joy from the creative process that they possibly can. This is not just going to hit the creative industries, either. AI will impact every single industry that currently exists, for better or for worse.

Governments and laws are lumbering dinosaurs in comparison with AI tech. As a society, we won’t be able to react quickly enough to the changes happening around us. I believe that in the next five years, we will see mass industry and employment shake-ups the likes of which we haven’t seen since the Industrial Revolution.

And back to those deepfake images. 2024 is set to be a bumper year in terms of important elections around the world. People should have very little trust in the content they are viewing online.

The big tech companies are doing very little to contain this issue. If 47 million people were able to view the Taylor Swift deepfakes before they were taken down, imagine what sort of impact this technology could potentially have on the US presidential election outcome.

Simply put, we can no longer trust the veracity of an image or a video.

Where does this leave us as photographers?

Admittedly, it does all feel a little doom and gloom, particularly when I consider that 100% of my income (writing, photography, video) could be wiped out by AI. That’s a little scary and certainly enough to make me want to cry into my cereal in the morning.

Fortunately, I have a few friends who work in tech, and they were able to coax me back down from the proverbial ledge and reassure me somewhat that this worst-case scenario of all our creative work being obliterated probably won’t come to pass. Or at least not without creating new opportunities along the way.

New tech hype curve

Bay Backner, Assistant Professor of Emerging Technologies at Berklee Valencia, reminded me of the Gartner hype cycle, which plays out with every new technological fad.

[Image: the Gartner hype curve. By Jeremykemp at English Wikipedia, CC BY-SA 3.0]

“All emerging technologies enter a similar cycle of hype in the press and social media, followed by the inevitable ‘trough’ where interest and investment wane,” she explains.

“We are currently in the AI hype cycle, whereas two years ago, we were in the metaverse hype cycle, and the year before that, it was NFTs.” On this matter, Backner is absolutely on point. Not a day goes by without some media outlet jumping on the AI bandwagon. And we are probably all fuelling this quasi-mass hysteria (myself included).

Backner goes on to explain that the cycle happens quite naturally, and usually, the really interesting point in the technology happens once all the hype has died down.

“That’s not to say that these technologies disappear with the end of the hype – quite the opposite,” she says. “We consistently see a new wave of market leaders emerge after the first bubbles burst. AI will have an enormous impact on the way we live in 3 – 5 years; it just won’t be in the ways that the current evangelists nor the doomsayers are hyping.”

Camera brands don’t seem worried

This surely is good news. Essentially, then, what Backner is saying is that the creative environment will certainly change. No one is disputing that. But think back to how things looked 30 years ago; things are very different now, and not for the worse, just different.

From a photography and filmmaking perspective, there are probably more jobs in related fields now than there were before. Perhaps this will be true of the AI revolution as well.

Certainly, the major camera brands don’t seem to be worried about people no longer buying cameras in favour of generating AI images. Canon has at least two major new mirrorless bodies due to launch this year, and Sony and Nikon won’t be resting on their laurels either.

What we are seeing, however, are these brands making major investments in other emerging technologies such as virtual reality, spatial computing, and, of course, AI incorporated into the cameras and lenses.

As an added layer of precaution, camera brands are working together to incorporate content authenticity tags within the metadata of images. This should help to establish whether an image is a real photograph or not. This will become increasingly important for photojournalism.
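To give a rough idea of how such authenticity tags can be detected, here is a minimal sketch. It assumes (as the C2PA content-credentials scheme does) that a credential manifest is embedded in a JPEG’s APP11 segments as JUMBF data labelled with the string “c2pa”; this is a naive byte-level heuristic, not a substitute for real signature verification by a proper tool.

```python
import struct


def jpeg_app11_segments(data: bytes):
    """Yield the payloads of APP11 (0xFFEB) segments in a JPEG byte stream."""
    assert data[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # reached entropy-coded image data; stop the naive scan
        marker = data[i + 1]
        if marker == 0xD9:  # EOI: end of image
            break
        # Segment length is big-endian and includes its own two bytes
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        payload = data[i + 4:i + 2 + length]
        if marker == 0xEB:  # APP11 carries JUMBF boxes (where C2PA manifests live)
            yield payload
        i += 2 + length


def has_content_credentials(data: bytes) -> bool:
    """Heuristic check: does any APP11 segment mention the 'c2pa' JUMBF label?"""
    return any(b"c2pa" in seg for seg in jpeg_app11_segments(data))
```

A real verifier would go much further, parsing the JUMBF box structure and checking the manifest’s cryptographic signature; this sketch only shows where in the file such credentials are expected to sit.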

A glimmer of hope?

So basically, we are already on the AI train, and at this point, we can’t really get off. As photographers, we must accept it and learn to move with the changes. We can keep serving our clients as best we can, and diversify and sidestep into other related fields as and when those opportunities occur.

I, for one, never expected to become interested in creating VR180 videos, and yet, here I am in middle age, enjoying that new challenge.

Sora and others like it will change the current landscape of photography and filmmaking; that is the only certainty. How we react to it and help society steer it towards the greater good is up to all of us.
