A harrowing image captured in Gaza, showing a severely malnourished young girl held in her mother's arms, has become the latest flashpoint in the ongoing battle over truth, technology, and the Israel-Hamas war.
The photo, taken on August 2, 2025, by AFP photojournalist Omar al-Qattaa, documents the frail, skeletal frame of nine-year-old Mariam Dawwas amid growing fears of mass famine in the besieged Palestinian enclave. Israel's blockade of the Gaza Strip has cut off critical humanitarian aid, pushing over two million residents to the brink of starvation.
But when users turned to Elon Musk's AI chatbot, Grok, on X to verify the image, the response was stunningly off the mark. Grok insisted the photo was taken in Yemen in 2018, claiming it showed Amal Hussain, a seven-year-old girl whose death from starvation made global headlines during the Yemen civil war.
That answer was not just incorrect — it was dangerously misleading.
When AI becomes a disinformation machine
Grok's faulty identification rapidly spread online, sowing confusion and weaponising doubt. French left-wing lawmaker Aymeric Caron, who shared the image in solidarity with Palestinians, was swiftly accused of spreading disinformation, even though the image was authentic and current.
"This image is real, and so is the suffering it represents," said Caron, pushing back against the accusations.
The controversy spotlights a deeply unsettling trend: as more users rely on AI tools to fact-check content, the technology's errors are not just mistakes — they're catalysts for discrediting truth.
A human tragedy, buried under algorithmic error
Mariam Dawwas, a healthy child weighing 25 kilograms before the war began in October 2023, now weighs just nine. "The only nutrition she gets is milk," her mother Modallala told AFP, "and even that's not always available."
Her image has become a symbol of Gaza's deepening humanitarian crisis. But Grok's misfire reduced her to a data point in the wrong file, an AI hallucination with real-world consequences.
Even after being challenged, Grok initially doubled down: "I do not spread fake news; I base my answers on verified sources." While the chatbot eventually acknowledged the error, it again repeated the incorrect Yemen attribution the very next day.
"Grok, is that Gaza? This photo by AFP's Omar al-Qattaa shows a skeletal girl in Gaza on August 2, 2025. But Elon Musk's Grok AI chatbot says the image was taken in Yemen nearly seven years ago."
— AFP News Agency (@AFP) August 7, 2025