Many countries are increasingly experimenting with AI in military technology and national security. Israel is undoubtedly among the leaders in military AI. The technology has found military uses for both offensive and defensive purposes, in cybersecurity, and in counter-terrorism. However, Israel often uses AI in controversial ways, as the recent months of armed conflict in Gaza have shown. Artificial intelligence is deployed on a large scale to track and kill Hamas members, but this also results in many innocent civilian casualties. On the other hand, the technology helps protect Israel from attacks by other states.
Automated killing by Israel
War is undoubtedly the most intensive testing ground for modern military technologies, and Israel's attempts to deploy AI in a range of contexts can be observed throughout the current conflict. The use of AI-based military technologies is an integral part of Israel's war against Hamas. One of the most prominent systems is Gospel. Its task is to identify and designate potential targets, ranging from individuals to military equipment to buildings the terrorist group uses to plan its operations. Once the AI completes its analysis, a target recommendation is passed to the responsible officer. There is, however, one fundamental problem: the AI does not always provide accurate information, and the result is many civilian casualties. Military personnel confirm that the aim is to produce targets at a high pace, and considerations of collateral damage take a back seat. Gospel also helps compile a list of dangerous individuals, which in November of last year was said to contain some 30,000-40,000 alleged Hamas fighters. Details of how the system works have not been made public, so its reliability and impartiality cannot be verified. The trouble is that, on the basis of these lists, many people, often civilians, have been killed.
A newer system called Lavender has contributed significantly to the controversy surrounding the AI technologies Israel has deployed in the war. Tel Aviv uses it to select targets for bombing in Gaza, and in recent months it reportedly identified nearly 40,000 alleged Hamas members. The identified individuals were subsequently killed in their homes, often along with their families. Military sources speaking anonymously describe another alarming practice in the use of Lavender: during the first weeks of the war, officers were pre-authorised to strike members of the terrorist group while accepting civilian casualties. For lower-ranked operatives the permissible collateral damage was 15-20 civilians; for higher-ranked individuals it ran into the hundreds. The aim was to dramatically speed up decision-making and destroy Hamas quickly, and many civilians were killed as a result. International humanitarian law prohibits states from deliberately targeting civilian infrastructure and non-combatants, and it tolerates incidental civilian casualties only in strictly necessary cases, proportionate to a concrete military advantage.
Israel, however, denies all these accusations, stating that the system merely analyses potential targets rather than specific individuals and that all outputs are subsequently evaluated by analysts. Anonymous sources claim the exact opposite: there is no additional verification of the results, and any AI suggestion is treated as legitimate. The problem does not end there. Among those killed were people with no probable connection to Hamas. For the AI it can be enough, for example, that a person shares a WhatsApp group with a suspected terrorist or has changed phones several times in a short period; such signals alone can get a person classified as a terrorist. By some estimates, one in ten people was wrongly identified. Each person was also assigned a risk score from 1 to 100, and once that score crossed a certain threshold (which changed during the war), the individual was automatically marked for killing.
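To make the reported mechanism a little more concrete, here is a deliberately minimal sketch of how a threshold-based risk scorer of this kind could look. It is not the actual Lavender system, whose design is not public; the signal names, weights, and threshold below are invented purely for illustration of the pattern described in the reports.

```python
# Purely illustrative sketch of a threshold-based risk scorer.
# Every feature, weight, and threshold is invented; the real system's
# internals are classified and not reflected here.
from dataclasses import dataclass

@dataclass
class PersonSignals:
    shares_chat_group_with_suspect: bool   # e.g. a common WhatsApp group
    phone_changes_last_month: int          # frequent SIM/handset swaps
    visits_to_flagged_locations: int       # movement-pattern signal

def risk_score(p: PersonSignals) -> int:
    """Map weak behavioural signals to a 1-100 score (weights are invented)."""
    score = 1
    if p.shares_chat_group_with_suspect:
        score += 40
    score += min(p.phone_changes_last_month, 5) * 6
    score += min(p.visits_to_flagged_locations, 5) * 6
    return min(score, 100)

def flag_as_target(p: PersonSignals, threshold: int = 70) -> bool:
    """A shifting threshold turns a noisy score into a binary label."""
    return risk_score(p) >= threshold

# A person whose only 'signals' are a shared chat group and a new phone
# already crosses a lowered threshold - exactly the error mode critics
# describe (roughly one in ten misidentified, per the reports).
print(flag_as_target(PersonSignals(True, 2, 0), threshold=50))   # True
```

The point of the sketch is not the arithmetic but the structure: weak circumstantial signals are compressed into a single number, and a movable threshold decides who counts as a target.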
The problem with using AI in weapon systems is the lack of a legal framework regulating their deployment. Existing international conventions should in principle apply to modern technologies, but applying them is difficult in practice. Israel currently operates in a legal grey area, exploiting the fact that these systems are not yet regulated, although various efforts at the UN level aim to change the status quo.
Palestinians are under close surveillance by artificial intelligence
Israel has long used a range of technologies for the surveillance and monitoring of individuals in the occupied Palestinian territories. Among the more prominent is an AI system for identifying individuals passing through checkpoints in the West Bank. The system, called Red Wolf, scans the faces of Palestinians using high-resolution cameras, and individuals may pass through the checkpoint only after successful identification. The system compares the scanned face against photographs in the state's extensive internal database; if it does not recognise the person, Israeli soldiers photograph them manually and add the image to the database. The situation took on a completely new dimension with the Blue Wolf system, also used in the West Bank. As part of an initiative, soldiers were encouraged to photograph as many people as possible and add them to internal databases, and could earn various rewards for doing so: units that photographed the largest number of Palestinians, including vulnerable groups such as children, were given prizes such as state-paid dinners.
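For illustration, the checkpoint flow described above, in which a scanned face is matched against a database and anyone not recognised is enrolled, could be sketched roughly as follows. The embedding function, similarity metric, and threshold here are assumptions chosen for a self-contained example; nothing is publicly known about how Red Wolf is actually implemented.

```python
# Illustrative sketch of a match-or-enrol checkpoint flow.
# The embedding, metric, and threshold are toy stand-ins, not Red Wolf.
import math

SIMILARITY_THRESHOLD = 0.95  # assumed cut-off for this toy embedding

def face_embedding(image: bytes) -> list[float]:
    """Toy stand-in for a face-recognition model: hashes image bytes into a
    fixed-length vector. A real system would use a trained neural network."""
    padded = image[:16].ljust(16, b"\x00")
    return [((b * (i + 7)) % 251) / 251 for i, b in enumerate(padded)]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def checkpoint_decision(image: bytes, database: dict[str, list[float]]) -> str | None:
    """Return the matched identity, or None - in which case, per the reports,
    soldiers photograph the person and the image is added to the database."""
    probe = face_embedding(image)
    best_id, best_score = None, 0.0
    for person_id, stored in database.items():
        score = cosine_similarity(probe, stored)
        if score > best_score:
            best_id, best_score = person_id, score
    if best_score >= SIMILARITY_THRESHOLD:
        return best_id                               # gate opens after identification
    database[f"unknown_{len(database)}"] = probe     # forced enrolment of non-matches
    return None

# A face already in the database is matched; an unknown face is enrolled.
db = {"person_a": face_embedding(b"sample-face-a")}
print(checkpoint_decision(b"sample-face-a", db))     # 'person_a'
print(checkpoint_decision(b"someone-new", db))       # None, and db now holds 2 entries
```

What the sketch makes visible is the ratchet built into such a design: every failed match expands the database, so the system grows more comprehensive simply by being used.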
Until this spring, similar systems were not used on the residents of Gaza. That changed quickly because of the war: as part of a new program, Tel Aviv deployed facial recognition systems throughout the Gaza Strip. The operation is highly classified, and its existence was revealed only by anonymous sources within the military who decided to speak out. The systems are meant to help locate Hamas operatives or members of other terrorist groups. The problem is that the technology is highly error-prone and often misidentifies individuals. As a result, innocent people were matched to names on watchlists and, on the strength of this AI assessment, many were detained and imprisoned even though they had nothing to do with Hamas or other armed groups operating in Gaza. Another significant issue is the enormous lack of transparency surrounding the entire program: no one really knows how the technology works, who has access to the data, or whether it can reliably identify people. The technology's original purpose was quite different, namely to find the hostages abducted in the October Hamas attack, but this quickly spiralled, and AI now serves as a convenient tool for monitoring Palestinians.
AI helps Israel defend itself
It cannot be overlooked that AI has, in some cases, helped Israel defend against enemy attacks. Perhaps the most notable instance was the repulsion of the Iranian attack in April of this year, when Tehran launched dozens of missiles and drones in response to an Israeli strike on the Iranian embassy compound in Syria. Despite the scale of the attack, Israel averted it with the help of its allies and its own air defences, notably the Iron Dome system. Artificial intelligence is an integral part of the Iron Dome: it assesses the trajectories of incoming targets, estimates the likelihood that they will hit civilian objects, and helps guide the defensive missiles that neutralise the threats. A crucial feature of the Iron Dome is its ability to detect low-flying projectiles, something most air defence systems struggle with, which is why the Israelis can shoot down drones so effectively. These capabilities hold up, however, only against a limited number of simultaneous incoming objects; when Hamas launched thousands of rockets on October 7th, the Iron Dome could not fully thwart the attack. The Arrow 3 anti-ballistic missile system, along with the military's manipulation of GPS signals to frustrate precision strikes, also helped secure Israel's airspace during the Iranian attack.
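As a rough illustration of the kind of decision described above, the sketch below predicts where an incoming projectile will land and recommends an intercept only if the predicted impact point threatens a populated area. The physics is deliberately simplified (no drag, flat terrain) and all coordinates and zones are invented; real fire-control logic is classified and far more sophisticated.

```python
# Illustrative intercept-decision sketch: simplified ballistics, invented data.
import math
from dataclasses import dataclass

G = 9.81  # gravitational acceleration, m/s^2

@dataclass
class Track:
    x: float   # position east of the radar, m
    y: float   # position north of the radar, m
    z: float   # altitude, m
    vx: float  # velocity components, m/s
    vy: float
    vz: float

def predicted_impact(track: Track) -> tuple[float, float]:
    """Solve z + vz*t - 0.5*G*t^2 = 0 for the positive root and return the
    predicted (x, y) impact point of a simple ballistic arc."""
    t = (track.vz + math.sqrt(track.vz ** 2 + 2 * G * track.z)) / G
    return track.x + track.vx * t, track.y + track.vy * t

def should_intercept(track: Track,
                     protected_zones: list[tuple[float, float, float]]) -> bool:
    """Fire only if the predicted impact falls inside a protected (populated) circle,
    given as (centre_x, centre_y, radius); otherwise let the projectile fall."""
    ix, iy = predicted_impact(track)
    return any(math.hypot(ix - cx, iy - cy) <= r for cx, cy, r in protected_zones)

# Example: a projectile heading toward a fictional town modelled as a 2 km circle.
town = [(8_000.0, 0.0, 2_000.0)]
incoming = Track(x=0.0, y=0.0, z=1_500.0, vx=300.0, vy=0.0, vz=50.0)
print(should_intercept(incoming, town))   # True - predicted impact lies inside the circle
```

The design choice the sketch highlights is selectivity: interceptors are scarce and expensive, so a system of this type spends them only on projectiles predicted to hit something worth protecting, which is also why saturation attacks with thousands of rockets can overwhelm it.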
Artificial intelligence is becoming an essential part of many advanced military technologies, but it also brings a number of risks. Although it can help states like Israel when they are attacked, the systems' high error rates can lead to excessive automated killing of civilians. AI enables decision-making and target selection at a speed never seen before, but this comes at a cost: the technology is far from neutral, it carries biases, and innocent people bear the brunt of them. International initiatives are emerging to ban fully autonomous weapon systems and to regulate weapons that use AI, yet these efforts lag behind the rapid development of the technology, leaving states huge room for creative manoeuvring at the edge of the law. As Israel's example shows, states often cross moral lines to test new systems and gain a strategic advantage over their adversaries.