A malnourished Palestinian girl receiving treatment at a hospital in Gaza City. Reuters

Incorrect Grok answers amid Gaza devastation show risks of blindly trusting AI


Cody Combs

A spike in misinformation amid the dire situation in Gaza has highlighted how imperfect artificial intelligence systems are being used to perpetuate it.

Reaction to a recent social media post from US Senator Bernie Sanders in which he shared a photo of an emaciated child in the besieged Palestinian enclave shows just how fast AI tools can spur the spread of incorrect narratives.

In the post on X, he accused Israeli Prime Minister Benjamin Netanyahu of lying by promoting the idea that there was "no starvation in Gaza".

A user asked Grok, X's AI chatbot, for more information on the origin of the images.

"These images are from 2016, showing malnourished children in a hospital in Hodeidah, Yemen, amid the civil war there ... They do not depict current events in Gaza," Grok said.

Several other users, however, verified that the images had in fact been taken recently in Gaza, but their voices were initially drowned out by hundreds who reposted Grok's incorrect answer.

Proponents of Israel's continuing strategy in Gaza used the false information from Grok to perpetuate the narrative that the humanitarian crisis in Gaza was being exaggerated.

Initially, when some users told the chatbot that it was wrong, and explained why, Grok stood firm.

"I'm committed to facts, not any agenda ... the images are from Yemen in 2016," it insisted. "Correcting their misuse in a Gaza context isn't lying – it's truth."

Later, however, after metadata and sources confirmed that the photos had been taken in Gaza, Grok apologised.
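Checks of this kind often begin with a photo's embedded EXIF metadata. The sketch below – a minimal example, assuming the third-party Pillow imaging library is installed – reads the capture-date tag from an image file. Metadata can be stripped or forged, so fact-checkers treat it as one clue among several, never proof on its own.

```python
from PIL import Image           # third-party: Pillow
from PIL.ExifTags import TAGS

def read_exif(path):
    """Return the image's EXIF metadata as a {tag_name: value} dict."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

def capture_date(path):
    """Return the EXIF DateTime string (e.g. '2025:07:23 10:00:00'), or None."""
    return read_exif(path).get("DateTime")
```

In practice, verifiers combine a check like this with reverse image searches and confirmation from the photographer or agency that distributed the image.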

Grok later apologised for falsely claiming that a photo of an emaciated child in Gaza was a photo from Yemen. (Grok)

Another recent incident involving Grok's confident wrong answers about the situation in Gaza also led to the spread of falsehoods.

Several images began circulating on social media purporting to show people in Egypt filling bottles with food and throwing them into the sea with hopes of them reaching Gaza.

While there were several videos showing similar efforts, many of the photos circulating were later determined to be fake, according to PolitiFact, a non-partisan, independent fact-checking organisation.

This is not the first time Grok's answers have come under scrutiny. Last month, the chatbot started to answer user prompts with offensive comments, including giving anti-Semitic answers to prompts and praising Adolf Hitler.

High stakes and major consequences

AI chatbot enthusiasts are quick to point out that the technology is far from perfect and that it continues to learn. Grok and other chatbots include disclaimers warning users that they can be prone to mistakes.

In the fast-paced world of social media, however, those fine-print warnings are often forgotten, while the ramifications of misinformation grow more severe – most recently with regard to the Gaza war.

Israel's campaign in Gaza – which followed the 2023 attacks by Hamas-led fighters that resulted in the deaths of about 1,200 people and the capture of 240 hostages – has killed more than 60,200 people and injured about 147,000.

The war has raged against a backdrop of rapid technological development that is sowing ample confusion.

"This chilling disconnect between reality and narratives online and in the media has increasingly become a feature of modern war," wrote Mahsa Alimardani and Sam Gregory in a recent analysis on AI and conflict for the Carnegie Endowment think tank.

The experts pointed out that while several tools can be used to verify photos and video in addition to flagging possible AI manipulation, it will take broader efforts to prevent the spread of misinformation.

Technology companies, they say, must "share the burden by embedding provenance responsibly, facilitating globally effective detection, flagging materially deceptive manipulated content, and doubling down on protecting users in high-risk regions".

AI's triumphs and continuing tribulations

Many of the recent misinformation and disinformation controversies around AI and modern conflict can be traced back to how various AI tools handle images.

Stretching back to the earliest days of AI research, particularly in the 1970s and 1980s, scientists sought to replicate the human brain – more specifically, its neural networks: webs of neurons linked by electrical signals whose connections strengthen over time, giving humans the ability to reason, remember and identify.

This research project from 2012, spearheaded by Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton, was one of the first to realise the potential for AI to help recognise images. (University of Toronto)

As computer processors have become more powerful and more economical, replicating those networks – often called "artificial neural networks" in the technology world – has become significantly easier.

The internet, with its seemingly endless photos, videos and data, has also become a way for those neural networks to be constantly trained on information.
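The "strengthening" this analogy describes can be shown with a single artificial neuron. The toy sketch below, in plain Python, uses the classic perceptron update rule: each time the neuron misclassifies an example, its weights are nudged slightly, so repeated exposure to data gradually shapes its behaviour.

```python
def train_neuron(samples, epochs=20, lr=0.1):
    """Train one artificial neuron (a weighted sum plus a threshold).
    Each sample is (inputs, label) with label 0 or 1. Every mistake
    nudges the weights -- the 'strengthening' of connections."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # -1, 0 or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    """Fire (1) if the weighted sum of inputs clears the threshold."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

Trained on the four input/output pairs of a logical AND, for instance, the neuron's weights settle within a handful of passes into values that classify every example correctly. Modern systems stack millions of such units, but the learning principle is the same.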

Some of the earliest breakthroughs in modern AI involved software that could identify images. This was demonstrated back in 2012 by Alex Krizhevsky, then a student at the University of Toronto, whose research was overseen by British-Canadian computer scientist Geoffrey Hinton, widely regarded as a godfather of AI.

"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images," the paper on deep convolutional neural networks read. "Our results show that a large, deep convolutional neural network is capable of achieving record-breaking results on a highly challenging data set." The authors added, however, that the network was far from perfect and that its performance could degrade.
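The "convolutional" part of such networks refers to a simple sliding-window operation. The minimal sketch below – plain Python rather than the GPU code the actual research relied on – computes one 2-D convolution, the building block that networks like the one in that paper stack thousands of times to pick out edges, textures and eventually whole objects.

```python
def convolve2d(image, kernel):
    """Slide a small kernel over a 2-D image (valid mode): each output
    value is the weighted sum of the patch under the kernel -- the core
    operation of a convolutional neural network."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out
```

With a two-element kernel such as `[[-1, 1]]`, the output lights up exactly where pixel values jump – which is how a network's early layers detect edges.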

AI has since improved by leaps and bounds, though there is still room for improvement.

The latest AI chatbots like OpenAI's ChatGPT and Google's Gemini have capitalised on powerful CPUs and GPUs, making it possible for just about anyone to upload an image and ask the chatbot to explain what the image is showing.

This 2012 research project from students at the University of Toronto helped pave the way for AI image recognition.

For example, some users have uploaded pictures of plants they cannot identify and asked a chatbot to name them. When it works, it is helpful; when it doesn't, it is usually harmless.

In the world of mass media, however, and more broadly the world of social media, when chatbots are wrong – such as Grok was about the Gaza photos – the consequences can have wide-reaching effects.


Updated: August 04, 2025, 6:37 AM