Israeli Military Used AI 'Lavender' Database to Select Bombing Targets in Gaza, Intelligence Testimonies Reveal

Tel Aviv, Israel – The Israeli military's bombing campaign in Gaza has raised questions about the use of artificial intelligence (AI) in warfare. According to intelligence sources, the military employed an AI-powered database known as Lavender, which at one stage marked some 37,000 Palestinians with suspected links to Hamas as potential targets. The revelation sheds light on an unprecedented use of AI systems in modern warfare and the ethical implications of such technology.

Lavender allowed Israeli intelligence officers to streamline target identification during the six-month war. Developed by Unit 8200, the Israel Defense Forces' elite intelligence division, the system played a central role in the conflict: by rapidly processing large volumes of data, it flagged low-ranking Hamas operatives, leading to airstrikes on their homes carried out with unguided munitions.

Intelligence officers involved in using Lavender gave candid testimony about the targeting process, describing how the system generated a database of potential targets. Its algorithm was refined over time until, according to internal checks, it reached roughly 90% accuracy in identifying individuals affiliated with Hamas; at the scale of 37,000 flagged people, that error rate still implies several thousand possible misidentifications. On that basis, Lavender was widely used to recommend human targets, alongside a second AI system known as the Gospel, which identified buildings and structures rather than individuals.

The testimonies also described relaxed targeting processes during the conflict, including pre-authorized allowances for civilian casualties in airstrikes. The IDF's permissive policy on collateral damage has drawn scrutiny amid the high death toll in Gaza, where tens of thousands of Palestinians have been reported killed. Experts in international humanitarian law expressed alarm at the acceptance of such high collateral damage ratios, particularly in strikes on lower-ranking militants.

Despite the IDF's assertion that its operations were conducted in accordance with international law, questions remain about the moral and legal justification for the bombing strategy. Some of the intelligence officers questioned the approach taken by their commanders, citing the devastating impact on civilians in Gaza. Their testimonies offer a glimpse into the challenges of deploying advanced AI systems in modern warfare and the complex ethical considerations that arise.

As debate continues over the use of AI in military operations, the officers' testimonies underscore the need for greater transparency and accountability in how such technology is developed and deployed. The implications extend beyond the battlefield, raising fundamental questions about the future of conflict and the protection of civilian lives.