Ali lived in what had been a relatively untouched neighbourhood in eastern Gaza city until the night of October 12, 2024, when, without warning, an Israeli bomb struck. Shaken but unhurt, the IT technician fled with his laptop to the Sheikh Radwan neighbourhood in the north of the city to live with his aunt.
Two weeks later, shortly before midnight, the Palestinian was working on his laptop on the rooftop, in search of a stronger signal to upload files via a VPN, when he heard a drone circling overhead.
“It was closer than usual. Then seconds later I saw a red light coming down on to the rooftop right in front of me, no more than 20 metres away,” he says.
The blast threw him off his chair but he was largely unscathed. As he ran back down the stairs his aunt’s family shouted: “The strike was for you! What were you doing? Who were you communicating with? Who do you have connections with?” Ali’s uncle told him to pack his bags and leave.
He rushed to the house of a friend, an IT expert, who suggested that Ali’s activities had been analysed by artificial intelligence and that he had been flagged as a suspect “for my ‘unusual behaviour’ of working with international companies, using encryption programmes and spending long hours online”.
Amid the devastation of Gaza, Ali and many others believe there is an unseen, pervasive AI presence that is watching, listening and waiting for those on its target list to show their faces.

To survive, Ali now accesses the internet under strict security measures and in very short bursts. “Their AI systems see me as a potential threat and a target,” he says, sharing the fears of many trapped Palestinians that a machine is now determining their fate.
In another such tale, less than three minutes after two young men had entered the first floor of an apartment block, a bomb struck killing not only the pair but also Mohsen Obeid’s mother, father and three sisters.
Mohsen, 34, was devastated and baffled. His family had no links to Hamas or any other faction. “We were innocent civilians,” he later told The National.
It was only after the attack in May last year that a consoling neighbour told him that he had seen the two young men, “presumably from the resistance”, entering the house.
The Obeid family were in their second-floor flat in Al Faluja, north of Gaza city, completely unaware that the Israelis' state-of-the-art AI system had almost certainly used its immense data-harvesting tools to give the men a high “suspicion score”.
In an investigation into these Gaza deaths, The National found:
- Israel operates a 20-second decision review known as a TCT (time-constrained target) once a potential victim is picked up by the AI. These strikes are conducted on known Hamas operatives but also involve civilians
- Israel’s state-of-the-art AI system uses data harvesting to give Gazans a high “suspicion score” that sets up a battlefield hit
- Israel operates a series of AI systems that are routinely run with a level as low as 80 per cent confidence of confirming a legitimate target
- Its known AI systems are Lavender, which raids data banks to generate potential confirmation of the target as a percentage; Gospel, which identifies static military targets such as tunnels; and Where’s Daddy?, which computes when a person is in a certain place
- The target acquisition relies on facial recognition and other tools, including mapping a person’s gait and cross-checking identities
- The target set includes as many as 37,000 Palestinians – complete with their photographs, videos, telephone and social media data – profiled in the systems.
Hoovering data
That harvesting of data, as much as anything else, sealed the fate of the two men and, with them, the Obeid family. The system code-named Lavender had mined data collected over many years, said Chris Daniels, an AI specialist at Flare Bright, a machine-learning company.
“Gaza is a hugely surveilled place and Israel has taken that imagery, for however many years recording it all and feeding it into the system. They are just hoovering up all that data, visual facial recognition, phones, messages, internet, social media and because you've got two million people that is a huge amount for a machine-learning programme.”
Tal Hagin, an open-source intelligence analyst and software specialist based in Israel, believes there is an inflection point coming, if not already reached, where AI will make most battlefield decisions.

“The question is, are we at a point already where AI is taking over command, making decisions on the battlefield or are we still in the era of AI simply being an assistant, which is a huge difference.”
Certainly, in the early stages of the current Gaza conflict, the Israeli military was “eliminating different targets at a very, very increased speed” that would have required machine-generated information.
David Kovar, a cyber-AI specialist who has contracts with the US Department of Defence, said Israel had developed an enormous amount of their targeting information with AI “but they really weren't putting humans in the loop to validate whether these targets were legitimate”.
Human input
The strikes on Ali and the Obeid family, two of the many thousands that have taken place in Gaza over the past 21 months, raise serious concerns over the machine-driven mission Israel is carrying out. Does AI select the right people? Do humans have enough input? Are there any controls?
The National’s investigation has found not only questions over the accuracy of Lavender’s decision-making, but also that many civilians, like Mohsen’s family, appear to have been killed by the system and that a high collateral death toll is accepted on the basis of AI-generated information.

The Israeli army, Mohsen said, “killed my family based on information generated by artificial intelligence” and without verifying if others were present in the building.
At the other end of the attack are the Israeli military's AI "target officers" who are apparently content to go with an 80 per cent probability to confirm a target for a strike, despite the collateral consequences, said Mr Daniels, also a former British army officer with strong Israeli military and intelligence connections.
“When it's 83.5 per cent and the human in the loop goes, ‘yes’ either that’s a good enough number – or if there's a low-value target, the number might be 95 per cent – but for a high-value target it could be as low as 60 per cent.” The “tolerance for errors” within the Israeli command room, he has been told, was “immensely high”.
“There's an element of dangerous errors pretty quickly if you remove the human,” he added.
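As a purely illustrative sketch of the threshold logic Mr Daniels describes, the short Python example below shows how a confidence figure and a target's assessed value might be combined into a yes or no recommendation. The thresholds, names and structure are assumptions drawn only from the figures quoted above; they are not the Israeli military's actual rules or code.

```python
# Illustrative sketch only: a hedged reconstruction of the threshold logic
# described above. All names, numbers and structure are assumptions, not
# details of any real military system.

from dataclasses import dataclass

# Figures quoted in this article: roughly 95 per cent demanded for a
# low-value target, about 80 per cent as a routine level, and as low as
# 60 per cent for a high-value target.
CONFIDENCE_THRESHOLDS = {
    "low_value": 0.95,
    "medium_value": 0.80,
    "high_value": 0.60,
}

@dataclass
class Candidate:
    target_value: str      # "low_value", "medium_value" or "high_value"
    ai_confidence: float   # e.g. 0.835 for the 83.5 per cent figure quoted

def human_in_loop_review(candidate: Candidate) -> bool:
    """Return True if the AI confidence clears the threshold for this
    target's assessed value, the point at which, reportedly, the human
    in the loop simply says 'yes'."""
    return candidate.ai_confidence >= CONFIDENCE_THRESHOLDS[candidate.target_value]

if __name__ == "__main__":
    c = Candidate(target_value="medium_value", ai_confidence=0.835)
    print(human_in_loop_review(c))  # True: 83.5 per cent clears an 80 per cent bar
```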
The National spoke to Olivia Flasch, a lawyer who has advised the UN on the laws of armed conflict. She said: “It's prohibited to launch an attack that's expected to cause injury to civilians, that's excessive in relation to the concrete military advantage that is anticipated."
If a commander was 80 per cent sure that the target was the “mastermind of a terrorist organisation” and with him dead the war was likely to end, “that's a high military advantage”, she said. The assessment did not apply to rank-and-file fighters.
Sadly, for Mohsen and many others in Gaza, it appears the system named after the garden herb is now more redolent of death.
Killer systems
Lavender has been fed a vast mine of data, including images taken from covert surveillance, open-source intelligence and information from the justice system on Palestinians whom Israeli intelligence determined belonged to Hamas or other groups in Gaza.
The Lavender system does not necessarily generate targets but instead processes information that is generated and displayed for an intelligence officer. It is understood this then travels up a chain of command to a higher-ranking intelligence officer who will take into account civilian casualties when authorising a strike mission.
The National is also aware that there are other AI systems used by the Israelis whose codenames have not yet been disclosed – it is unclear what their capabilities are, such as in terms of precision targeting.
Insiders worry that Lavender has spawned a form of warfare where the human touch is largely absent at vital points.

Mr Kovar's information suggested that if a person was spotted above ground, moving between buildings, and AI had 80 per cent confidence this was a legitimate target, “they're going to take that shot”, he said, despite the risk of “collateral damage”.
The National has spoken to security sources and experts, and viewed open-source intelligence, to piece together how the system works, from acquiring a target to their “elimination”.
When a person with a high “suspicion score” has their face recognised and location identified by AI, machine-driven analysis goes to work. This will include studying the person’s gait, their location and, using Gospel, their expected destination, alongside a wealth of other data processed within seconds.
Mr Kovar said considerable effort had been “put into human facial and gait recognition, how people walk and move, and where they’re going”.
The system also significantly speeds up the ability to make observe, orientate, decide and act (Ooda) loop decisions, allowing for a rapid military response.
“If AI can get you through that Ooda loop, from an image to identifying who the human is, then saying, ‘OK, we're going to take the shot faster than a human can do it’ particularly [if] a human has to go to check with higher-ups, then they're going to use the AI," he said.
The assembled Lavender information goes to an operator, giving the potential confirmation of the target as a percentage (for example, 83.5 per cent) alongside their suspicion score, suggesting how senior a figure might be.
The AI system also significantly increases the speed of response in striking a target, without having to wait for authorisation from senior officers.
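To make the reported sequence easier to follow, the sketch below illustrates, in Python, the kind of pipeline described to The National: recognition signals fused into a suspicion score, then packaged for a human operator alongside a confidence percentage. Every signal, weight and field name here is an assumption for illustration only, not a disclosed detail of Lavender or Gospel.

```python
# Purely illustrative: a sketch of the reported sequence (recognition signals
# fused into a suspicion score, then passed to an operator with a confidence
# percentage). The signals, weights and scoring are assumptions.

from dataclasses import dataclass

@dataclass
class Observation:
    face_match: float        # 0-1 similarity from facial recognition
    gait_match: float        # 0-1 similarity from gait analysis
    location_flagged: bool   # e.g. heading towards a site flagged as significant

def suspicion_score(obs: Observation) -> float:
    """Fuse the observation signals into a single 0-100 'suspicion score'.
    The weighting here is invented purely to illustrate the idea of fusing
    several recognition signals into one number."""
    score = 60 * obs.face_match + 30 * obs.gait_match
    if obs.location_flagged:
        score += 10
    return min(score, 100.0)

def operator_packet(obs: Observation) -> dict:
    """Assemble what, per the reporting, reaches a human operator:
    a confirmation percentage and a suspicion score."""
    return {
        "confirmation_pct": round(100 * (0.7 * obs.face_match + 0.3 * obs.gait_match), 1),
        "suspicion_score": round(suspicion_score(obs), 1),
    }

if __name__ == "__main__":
    obs = Observation(face_match=0.9, gait_match=0.7, location_flagged=True)
    print(operator_packet(obs))  # {'confirmation_pct': 84.0, 'suspicion_score': 85.0}
```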

The Lavender system was first disclosed by the Israeli outlet +972 in April last year, with Israeli sources claiming that operators were permitted to kill 15 or 20 civilians to eliminate even low-ranking Hamas members.
The Gospel AI system first appeared on Israeli armed forces' websites in 2021, described as an algorithm-based tool that identifies static military targets such as tunnels or fighters’ homes, and can assist a rapid response if a suspect enters them. This has then been aligned to another system called Where’s Daddy?, which can compute when a person is in a certain place.
“Where’s Daddy? is used to track individuals that have been targeted by Lavender and it strikes individuals once they've entered their homes,” said Nilza Amaral, an AI specialist at the Chatham House think tank. This might explain why most of Israel's attacks on AI-identified targets take place on buildings.
Key champion
Outlandish as that might seem, Brig Yossi Sariel, head of Israel’s Unit 8200, the specialist team that introduced Lavender, wrote a book called The Human-Machine Team: How to Create Synergy Between Human & Artificial Intelligence That Will Revolutionise Our World.
In it, he describes a “target machine” which processes people’s connections and movements via social media, mobile phone tracking and household addresses.
Brig Sariel is allegedly a key driver behind the use of AI and, while the system’s precise workings remain highly secret, it is understood Lavender generates a numerical “suspicion score” that, if high enough, will mark a person as a target for elimination.
Generating that high suspicion score makes death in Gaza a near inevitability, whether the person is a member of Hamas or Palestinian Islamic Jihad, or neither.
The strikes on the fighters, particularly in the early months of the campaign, were incessant, contributing to the body count that now stands at more than 57,570, with up to 20,000 of those being combatants.

Iran nuclear origins
The evolution of this terrifyingly efficient killing machine that the Israelis have created, and which will affect future wars, can be traced back to the Iranian nuclear scientists’ assassination programme that, before Israel’s air strikes last month, culminated in the killing of Prof Mohsen Fakhrizadeh in 2020.
After that remote attack, the Israelis knew they could successfully target someone in distant Iran, so why not on their own doorstep? The success of the facial recognition in Iran drove a new advance in warfare, spawning the Lavender system.
Currency exchange killing
Ramy’s family has run a currency exchange business for 50 years, with two shops, one in Gaza city and one in Rafah.
So it was a shock when in early December 2023 their building in Gaza city took a direct hit from a drone-fired missile that fortunately failed to explode.
Confused, they had no idea why they were targeted because they believed the Israeli military would strike only if they had a specific reason.
Two weeks later, the branch reopened but within another two days it was struck again and this time the missile detonated, killing Ramy's brother Mohammed, two employees and several bystanders.
They suspended in-person services but then in April 2024 one of their data entry assistants was killed near his home by an Israeli bomb. It later transpired that the employee, who had no role in money transfers, had been affiliated to Hamas for some time.
With the two attacks on their business, Ramy was certain that Gospel and Lavender had identified the employee and tracked him to their business premises. “But the artificial intelligence didn't take into account the presence of dozens of civilian casualties that would result from targeting him in a commercial location,” he said.
“My brother died in that strike, even though he had absolutely no connection to any faction,” he added. “He was martyred simply because he happened to be next to someone, who wasn’t some high-ranking [Hamas] figure – just a regular guy with a political affiliation.”
Instances such as this raise doubts over trusting AI’s judgment on who precisely is "the enemy”. Noah Sylvia, a research analyst for emerging military technology at the Rusi think tank, concurs.
The Israeli military insists that human analysts verify every target, yet he raises the serious issue that “we don't know whether or not the (AI) models are creating the targets themselves”.
Ms Amaral agrees. “There is no requirement for checking how the machine is making those decisions to select targets,” she said. “Because it seems there are many, many people who aren't involved in military operations that have been killed”.
As many as 37,000 Palestinians – complete with their photographs, videos, telephone and social media data – have reportedly had their data entered on to the Lavender system.
“The Israelis created as many targets as they could and put them in a bank that would have tens of thousands of targets, because they were always expecting the next war with Hamas,” said Mr Sylvia.

Fusion warfare
Among the new technology introduced to Gaza is an upgraded tank, the Merkava 5 Barak, which was fitted with AI, sensors, radar and small cameras before deployment.
Inside the Barak are touch screens to input information that allows soldiers to rapidly transfer data to the AI “target bank” that is fed to an operations room at a secret location.
“These tanks are big-sensor platforms sucking in all the data,” said Mr Daniels.
In addition, there are ever-present drones over Gaza, mostly the Heron and Hermes variants, using their surveillance equipment and cameras to track people, phone calls and potentially encrypted messages.
With Israeli satellite coverage and covert observation posts, this makes the 363 square kilometres of the Gaza Strip the most surveilled land in the world. It has also allowed the Israeli military to strike targets with astonishing speed.
Mission score
Lavender’s suspicion score is important because it is understood that if the AI picks up a “high-value target”, the operators will be willing to accept significant collateral damage, that is, the deaths of non-combatants, to kill a senior commander.
“They're doing that sort of risk calculation,” said Mr Kovar. “Rightly or wrongly, they are dialling back on the required confidence interval for taking those shots and I think that's part of the reason we've seen a lot of collateral damage.”
Israeli sources have confirmed that while the target information is rapidly digested by AI, F-15s, F-16s or F-35s will be circling overhead along with armed drones.
The Lavender operator, with input from Shin Bet intelligence, will then make the final click to authorise the strike, sending a missile rapidly hurtling towards the target.
“I’ve heard that the human operators would spend about 20 seconds to confirm a target, just to double-check that they were male,” said Ms Amaral.
That 20-second decision is what the military call a TCT (time-constrained target): once the target is picked up by AI, a strike has to be made as soon as possible. While these strikes are conducted on known Hamas operatives, on whom a lot of intelligence has been collected by Lavender, it is unclear what civilian casualties Israel is prepared to accept to eliminate the person.
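The 20-second TCT window can be pictured as a simple timed review loop. The Python sketch below is an assumption-laden illustration of that idea only; the function names, polling approach and outcomes are invented for this article, not the military's workflow or code.

```python
# Illustrative only: a sketch of a 20-second "time-constrained target" (TCT)
# review window as described in the reporting. Names, structure and outcomes
# are assumptions, not any real military workflow.

import time
from typing import Callable, Optional

TCT_WINDOW_SECONDS = 20  # the reported length of the human review window

def tct_review(ai_packet: dict,
               operator_check: Callable[[dict], Optional[bool]]) -> str:
    """Give a human reviewer at most TCT_WINDOW_SECONDS to confirm or reject
    the AI-generated packet; if the window lapses, no decision is recorded."""
    deadline = time.monotonic() + TCT_WINDOW_SECONDS
    while time.monotonic() < deadline:
        decision = operator_check(ai_packet)  # e.g. the reported 20-second double-check
        if decision is not None:
            return "confirmed" if decision else "rejected"
        time.sleep(0.5)  # wait briefly before asking the reviewer again
    return "window expired"

if __name__ == "__main__":
    # A trivial stand-in reviewer that confirms immediately.
    print(tct_review({"suspicion_score": 85.0}, lambda packet: True))  # "confirmed"
```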
AI errors
Israelis’ “tolerance for errors is immensely high”, said Mr Sylvia. He said the data input did not account for “biases” in the people who created the model, with an argument that “decades of dehumanisation of Palestinians” might have influenced them.
The error factor was echoed by an Israeli source involved in AI and intelligence-gathering, whom The National interviewed on condition of anonymity. “This is war and people will always make mistakes under the stress of combat,” he said.
But suggesting that AI was taking out lots of innocent people was “fantastical” and unlikely, he argued. “Yes, Lavender is being used a lot but this has not created some dystopian future where machines are out of control,” the Israeli officer insisted.
Lavender tweets
Despite the AI programme’s secrecy, analysis by The National showed that, dating back to July last year, there had been more than 50 strikes published on the Israeli military's X account that it claimed were “intelligence based” and had used “additional intelligence”.
Many of the posts featured pictures or videos of strikes accompanied by the statement: “Following intelligence-based information indicating the presence of Hamas terrorists, the Israeli military conducted precise strikes on armed Hamas terrorists gathered at two different meeting points in southern Gaza.”
One video, from August 13 last year, shows two men carrying long-barrelled weapons, probably AK47s, walking behind a donkey cart. Seconds later, a missile strikes them, leaving the animal apparently unharmed in what is understood to have been AI-driven targeting.
Many “elimination” posts on X also show videos or pictures of Hamas members in Israel during the October 7 attacks that experts believe were also fed into the Lavender database.
In a strike on Ahmed Alsauarka, a squad commander in the Nukhba force who participated in the October 7 killings, Israeli targeting on June 20 last year is thought to have assessed his gait and facial features before sending in a bomb that Israel claimed did not harm any civilians.

Israel's response
The Israeli military told The National that humans remained firmly in control and that Lavender did not dictate strikes on humans.
“The AI tools process information, there is no tool used by the military that creates a target, the human in the chain has to create the target [for the] Israeli military,” the army said.
“All target strikes are made under international law. We have never heard of Lavender putting forward targets that have not had human approval.”
It added that the AI was not “a generative machine that creates its own rules” but “a rules-based machine” and the sources that feed it information were always humans.
“Lavender takes a defined set of sources and there are people whose job is to make sure that the sources that are feeding Lavender are precise, accurate and have human control. It then creates a recommendation for who the intelligence officer should look into.”
Data feeds
But machine-generated killings at scale are a growing concern for those who have helped build these systems. The amount of intelligence generated by surveillance in the modern world, let alone warfare, is such that it is indigestible by humans. “It would take you days to go through just a single hour’s worth of footage,” said Mr Sylvia.
While data is key to Lavender’s effectiveness, the machine can only be as good as the information it is given. It cannot be blamed if it is fed faulty data.
Questions remain over the “digital literacy” of senior commanders who do not fully understand the nuances or shortcomings of AI. Ultimately, the experts say, the AI models will reflect the people that are using them.
Mr Kovar argued that “theoretically” AI could allow a much higher degree of accuracy with more rigorous target profiling given the information known about individuals in Gaza.
But machine learning also causes some uncertainty and possibly unchecked autonomy, as it is unknown if Lavender has “self-created” people who it believes are threats.
Machine legal?
That creates a concern over the legalities of using AI for military means, an entirely new area of warfare but one that will certainly take hold given its “success” in Gaza.
Matt Mahoudi, an adviser to Amnesty International on the legal use of AI in war, says Lavender is “totally in violation of international human rights and humanitarian law” and is a system that “erodes the presumption of innocence”.
“Lavender is based on unlawfully obtained mass surveillance data,” he added. “AI systems that turn up tens of thousands of targets on the basis of arbitrary data, would make any scientist say it’s flawed and discriminatory.”
Robert Buckland, a barrister and former Conservative cabinet minister, also raised the issue that the system was “only as good as the data” inputted and had the danger of being “incomplete, historic or out of date", which would then make it “rubbish”.
But that is countered by Ms Flasch’s argument of “military advantage” that would justify killing civilians if taking out a terrorist mastermind could conclude the war.
Machines supreme
Countries are developing technology quicker than laws can keep up with, said Lord Carlisle, a barrister and former British MP. “There is a degree of urgency about this,” he said, but it usually took “critical events to make decisions happen”.
That, he agreed, raised the spectre of a Terminator scenario, a worry compounded by the fact that AI is now known to hallucinate or lie.
“I don't think we're going to end up with Terminator, but my concern is that we're going to be in a more automated battlefield and get close to that Terminator scenario,” Mr Kovar said.
This feeds into Mr Daniels’ warning that “when AI fails, it fails horribly”. This could have catastrophic consequences, as machines "don't have a conscience”.
That means compassionless AI could “keep prosecuting a war to achieve the desired effect” whereas humans “at some point go ‘yeah, that's enough suffering’,” and end the conflict.
The lightning advances that Israel has made in AI during its war on Gaza and elsewhere have raised the stakes for future wars in which humans might have little control.
Some names have been changed to protect witness identity