A photo of Iran’s bombed schoolgirl graveyard went around the world. Was it real, or AI? (www.theguardian.com)
from HellsBelle@sh.itjust.works to world@lemmy.world on 17 Mar 11:41
https://sh.itjust.works/post/56946872

The cemetery of Minab, photographed as it prepares to bury more than 100 of the town’s young girls, is one of the defining images of the US-Israeli war on Iran, bluntly capturing the devastating civilian toll.

But is it real?

Ask Gemini, the AI service powered by Google, and the answer you receive is no – in fact, Gemini claims the photograph is from two years earlier and more than 2,000km (1,240 miles) away. Rather than graves for small girls killed by a missile, the image “depicts a mass burial site in Kahramanmaraş, Turkey” after the 7.8 magnitude earthquake that struck in 2023. “This specific aerial perspective became one of the most widely shared images of the disaster,” Gemini says, “illustrating the sheer scale of the loss.”

The cemetery image, it turns out, is authentic. Researchers have cross-referenced the photo of the site with satellite images that confirm its location, and it can be cross-referenced again with dozens more images taken of the same site from slightly different angles, and again with video footage – none of which experts say show signs of tampering or digital manipulation. The “factchecks” by Gemini and Grok are just one example of a tidal wave of AI-generated slop – hallucinated facts, nonsense analysis and faked images – that are engulfing coverage of the Iran war. Experts say it is wasting investigative time and risks atrocities being denied – as well as heralding alarming weaknesses as people increasingly rely on AI summaries for news and information.

#world

givesomefucks@lemmy.world on 17 Mar 11:49

> Ask Gemini, the AI service powered by Google

How about we don’t rely on AI to say if something is AI?

How about we take the time to train our actual brains to spot the difference?

Like, that’s the silver lining of DLSS 5 for me. They’re just slapping that stupid AI filter on and getting a bunch of blowback, but there was a very brief window where people kept using the same filter on normal pictures.

Not only is that shit useless, it’s functionally dangerous because it blurs the lines. It picks up all the hallmarks of AI, producing false positives that make real images look fake.

I’m hoping the DLSS 5 blowback will kill the AI upscaling of random modern images, and that people actually start using their fucking brains.

unexposedhazard@discuss.tchncs.de on 17 Mar 12:38

I mean yeah, that’s the entire point of the article. All the slop machines were wrong and actual manual research was required to find the truth.

> The cemetery image, it turns out, is authentic. Researchers have cross-referenced the photo of the site with satellite images that confirm its location, and it can be cross-referenced again with dozens more images taken of the same site from slightly different angles, and again with video footage – none of which experts say show signs of tampering or digital manipulation. The “factchecks” by Gemini and Grok are just one example of a tidal wave of AI-generated slop – hallucinated facts, nonsense analysis and faked images – that are engulfing coverage of the Iran war. Experts say it is wasting investigative time and risks atrocities being denied – as well as heralding alarming weaknesses as people increasingly rely on AI summaries for news and information.

lemmie689@lemmy.sdf.org on 17 Mar 12:00

<img alt="" src="https://lemmy.sdf.org/pictrs/image/9ccbd458-4b1c-49d9-a6ba-863eec9f397a.png">

NoForwadSlashS@piefed.social on 17 Mar 12:42

It’s incredible that the masses have so fully embraced a technology that makes them dumber, just as they did with the internet itself. It’s as if, in the 90s, we decided that we should embrace chain emails as the way forward, rather than embracing Wikipedia.

Akh@lemmy.world on 17 Mar 12:56

Yeah, I don’t trust AI for any reason.

diablexical@sh.itjust.works on 17 Mar 16:13

I just tested it; Gemini says:

> What it depicts: The photo shows an aerial view of newly dug graves at a cemetery in Minab, Iran. These graves were prepared in early March 2026 for the victims of the February 28, 2026, airstrike on the Shajareh Tayyebeh Primary School.

Ironic that an article about AI misinformation is itself misinformation. Maybe it was written by AI?

cecilkorik@lemmy.ca on 17 Mar 16:23

You can give Gemini the exact same prompt and context 100 different times and you might get 95 very similar responses and 5 wildly different responses.

I don’t understand why people think a random text generator can ever be relied on for truth. It has no concept of truth. It is a random text generator. A pretty consistent one, but still fucking random. It has no intelligence. It is not intelligent. Stop acting like it is. Its conclusions are meaningless. They do not contain actual meaning. They are random.
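The variability described above comes from sampling: an LLM picks each output token from a probability distribution, so even a heavily favored answer occasionally loses the draw. A minimal sketch of this effect, using a made-up vocabulary and made-up probabilities purely for illustration:

```python
import random

# Toy next-token sampler: the model doesn't "know" the answer, it draws
# one from a weighted distribution. Same prompt, different runs, different text.
# The vocabulary and weights below are invented for illustration only.
vocab = ["real", "fake", "unverified"]
probs = [0.95, 0.04, 0.01]  # mostly consistent, but still random

def sample_answer(rng: random.Random) -> str:
    """Draw one 'answer' token according to the weighted distribution."""
    return rng.choices(vocab, weights=probs, k=1)[0]

rng = random.Random(42)  # fixed seed so the demo is repeatable
answers = [sample_answer(rng) for _ in range(100)]

# Most of the 100 runs agree, a handful diverge; the exact split
# depends on the seed.
print({word: answers.count(word) for word in vocab})
```

This mirrors the "95 similar, 5 wildly different" observation: consistency across runs reflects skewed sampling weights, not an underlying verified fact.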

dandelion@lemmy.blahaj.zone on 17 Mar 19:42

oof, might need to work on your tech and media literacy there 🫠

JoeMontayna@lemmy.ml on 17 Mar 17:32

Does it matter? It happened, and has been widely confirmed.

dandelion@lemmy.blahaj.zone on 17 Mar 19:41

the news story is about how LLMs are feeding people false “fact-checks”, telling them the image is fake when it’s not

it’s not strictly about whether the image is real or not, which the facts are clear about

village604@adultswim.fan on 17 Mar 19:45

It absolutely matters if AI images of war are being passed off as real.

Misinformation and propaganda were already a huge issue and AI just ramped it up.

GreenKnight23@lemmy.world on 17 Mar 19:19

we asked AI about the AI generated image to check if it was AI generated

we asked AI about the image to check to see if it was AI generated

we asked AI about the image to verify claims of the context

all of these have one fatal flaw that completely misses the point.

we asked AI

<img alt="1000003242" src="https://lemmy.world/pictrs/image/f98e12d1-3b44-4841-8c53-91a6c8df8a18.jpeg">

village604@adultswim.fan on 17 Mar 19:47

Yes, that flaw is exactly what they’re pointing out. They’re saying, “we asked AI a question we knew the answer to and it was wrong.”

Leaving out the context is bad.