Photographs no longer have any truth value, such is the ease of manipulation. With Google Pixel AI offering a staggering level of falsehood indistinguishable from reality, will Apple be forced to follow suit? It seems too late to ban such apps.
The photos in this article, generated using consumer-level software, are terrifying. (The article contains strong language.)
No one on Earth today has ever lived in a world where photographs were not the linchpin of social consensus — for as long as any of us has been here, photographs proved something happened.
I’m talking about (Google) AI-enhanced images.
A photo, in this world, stops being a supplement to fallible human recollection and becomes instead a mirror of it.
Altered images have been a problem for years. In the early ’80s I photographed a group of employees who had won an award, and found out later that one of the team had been absent that day. Long story short: since I was shooting in black and white, I photographed the employee separately and added her to the picture before it was published.
The “proof” in those days was the negative; later, when digital photography arrived, software like Photoshop became the problem. I’d be surprised if there isn’t, or won’t soon be, a way to identify generated or altered images (a rough sketch of one such metadata check follows below).
But that doesn’t mean a lot of people will accept the “proof”.
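As an aside on the point above about identifying altered images: the sketch below (Python with the Pillow library, chosen here purely as an illustration, not anything mentioned in the thread) simply reads an image’s EXIF tags and reports which software last wrote the file. EXIF can be stripped or forged, so this catches only the naive cases and proves nothing on its own; cryptographically signed provenance schemes such as C2PA “Content Credentials” are closer to a modern equivalent of the negative. The file name example.jpg is a placeholder.

```python
# Minimal sketch (not anyone's actual workflow): read an image's EXIF
# metadata and report which software last wrote the file. EXIF can be
# stripped or forged, so absence or presence of a tag proves nothing on
# its own; this only flags the naive cases.
from PIL import Image
from PIL.ExifTags import TAGS


def exif_tags(path: str) -> dict:
    """Return the human-readable EXIF tags found in the file (may be empty)."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}


if __name__ == "__main__":
    meta = exif_tags("example.jpg")  # hypothetical file name
    print("Camera:", meta.get("Make", "?"), meta.get("Model", "?"))
    print("Last written by:", meta.get("Software", "<no Software tag>"))
```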
The big change is putting zero-effort tools one click away for everyone, and the harm that may do at a very local level (e.g. among college friends) as well as nationally.
Recent riots in England were caused entirely by falsified (and easily disprovable) information on social media. Having “photographic evidence” will enhance that effect.
There’s a hope that AI can detect AI, although this will in itself be an arms race of detection and evasion. Networks like X could easily have halted the flow of disinformation, but didn’t.
I’m extremely pessimistic about what’s to come with this. Manipulation of information has always been a problem, of course, but the magnitude of what’s available, how easy it is to spread through social media and cut-to-the-bone newsrooms, and how heavily disinformation is being used by political and corporate entities…and then factor in individuals with an axe to grind or just plain maliciousness for its own sake…
Again, not a new problem, but this isn’t headed to a good place…
It’s not just a problem of physical manipulation. I can’t remember where I heard it recently, but there have been cases where a published photo was 100% original out of camera, yet the scene was staged or the angle carefully chosen to emphasise certain aspects of an event.
Back in the caves, I bet stories were embellished a lot.
I was so scared when I saw on The Verge that a photo can be manipulated to “show” a woman abusing drugs. People can use this feature to damage the reputation of someone they really hate.
And what worries me more is that the friends and colleagues around us usually believe this so-called evidence. Before AI-generated images we could treat gossip as gossip and rumours as rumours; images make us feel there is no reason not to believe.
I can’t even trust the Internet. It seems to tell you a lot of things you don’t know, but are they true? I really doubt it…
Photography and all made images (paintings etc.) are “representations” and always have been: they can’t and don’t show objective reality. For a start, the maker has chosen what to make an image of; they’ve framed the scene to decide what to include and leave out; and there are a million ways in which they convey a subjective interpretation of the content (e.g. looking up at revered figures, making the scene bright or dark overall, etc.). Looking at images is also about how we construct their meaning: how we interpret them.
It’s the idea that photographs are objectively “true” at any level that opens up the possibility of making images deliberately to mislead, whether for propaganda or for fraud. It’s an understandable misconception: we tend to assume that the more detail we can see, the better we can understand a scene, and the more confident we can be in interpreting it and deciding how to react. Photographic levels of detail “feel” less sketchy (literally).
The latest tools make it much easier and quicker for many more people to be able to produce any image they can describe. Just like modern tech lets many more people publish their words to a large audience, communicate with people far away etc. etc.
The problem is not the tech, but that some people choose to use it to exploit, harm or seek other evil goals. The real problem is the ancient one of those who choose to hurt and abuse others, and how we defend against them. Even if we could ban all technologies that could be used for evil, people would still find other ways to be evil.
You can replace those three dots with almost anything:
audio
video
text
Sources have to be verified. Nothing on its own has truth value until it has been verified. Software-generated media makes it easier to fake things these days, but there have always been fake photos, videos and texts.
“I saw it on TV with my own eyes. It has to be true.” Yeah… No…!
When the internet became popular, there was a weird discussion in Germany about whether “the internet with all its dangers” should be permitted in schools, and whether schools should even be liable if they failed to protect their pupils against those “dangers”. The discussion - of course - was ridiculous. The principal of my former school made it clear in a paper that the internet, technology and whatever else are never dangers as long as everybody uses their competence and due diligence to verify sources before regarding anything as “truth” or “fact”. Back then, the teachers did a fantastic job teaching how to verify sources. That is, and always has been, something of utter importance.
And this is what everything is about: using the thing that sits between your shoulders and creating a sense of awareness and media competency. It is not easy. But if everybody is able to use those skills, there is nothing to fear.
“AI” may accelerate this learning process. At least, that is what I am hoping for. And yes, we may have a rough road ahead on this one. For sure.