The Rise of Autonomous Warfare: Lethal Autonomous Weapon Systems (LAWS)
The use of AI-powered weapons in ongoing conflicts, such as the wars between India and Pakistan, Israel and Gaza, and Ukraine and Russia
By Leena
Abstract
The rapid integration of Artificial Intelligence (AI) into contemporary warfare has reshaped how military operations are conducted. From intelligence gathering and surveillance to autonomous targeting and lethal action, AI-enabled systems now influence strategic and tactical decision-making in wholly new ways. This article analyses the strategic, legal, and moral implications of Lethal Autonomous Weapon Systems (LAWS), drawing on case studies of the Ukraine conflict, the Gaza conflict, and the Indian military’s Operation Sindoor. These conflicts illustrate the growing role of AI in processing intelligence, identifying threats in real time, and conducting autonomous precision strikes. While AI increases the pace, precision, and scale of battlefield operations, its use also raises serious concerns about accountability, algorithmic bias, proportionality, and compliance with international humanitarian law. With AI now an integral force multiplier in conflict, resolving these matters is necessary to maintain the rule of law and ethical standards in warfare.
Introduction
Artificial Intelligence (AI) has emerged as a game-changing capability in military affairs, transforming war fighting, intelligence gathering, and target selection and engagement. Military operations were hitherto based primarily on human decision-making, manual analysis of data, and personnel deployed on the ground. But with the rapid growth of machine learning, big data analysis, computer vision, and robotics, AI technologies began to complement and, in certain cases, replace human functions in combat missions. The integration of AI into military platforms is not simply a shift in technology, but in military policy and strategy.
Application of AI in warfare can be divided into four broad categories: intelligence, surveillance, and reconnaissance (ISR); command and control; autonomous systems (drones and unmanned ground vehicles); and lethal autonomous weapon systems (LAWS). These systems can examine vast quantities of data from satellites, sensors, and surveillance feeds in real time and detect threats, track movements, and forecast the moves of enemies quicker and more accurately than ever before.
Autonomous drones powered by AI are in many ways the most visible expression of this shift. These systems are capable of conducting reconnaissance missions, charting intricate environments, and launching precision attacks with minimal or no human interaction. Similarly, decision support systems powered by AI aid commanders in integrating complex battlefield intelligence into understandable action steps, support strategic planning, and speed up the decision process in high-pressure situations. AI is also being utilised in cyber warfare operations to detect, counter, or retaliate against cyber threats autonomously.
But some real problems come with military uses of AI. The largest ones are accountability, transparency, and international humanitarian law (IHL) compliance. If an autonomous weapon misidentifies a civilian as a combatant, assigning legal fault is difficult. Furthermore, the risk of algorithmic bias, system malfunction, or adversarial attacks against AI systems presents fundamental questions of reliability and control.
Conventional Warfare
Conventional warfare, traditionally defined as conflict between states employing regular, organised military forces, is a core idea in the doctrine of war and international relations. It is characterised by symmetrical combat between national forces employing conventional arms (small arms, armoured vehicles, artillery, naval fleets, and air power) under hierarchical command structures. Compared to irregular or hybrid conflict, conventional war adheres more strictly to the international laws of armed conflict, with defined combatants, frontlines, and objectives.
Conventional warfare is increasingly intersecting with cyber warfare, information operations, and AI-enabled systems, challenging traditional models of command and control. The integration of precision technologies, autonomous systems, and real-time surveillance is blurring the lines between conventional and unconventional tactics, creating what some scholars describe as “conventional-plus” or “algorithmic warfare.” Conventional warfare remains a central pillar of military conflict, both in theory and practice. While its character is evolving, particularly through technological augmentation, its strategic logic, using organised force to achieve political ends, continues to shape global security dynamics. Understanding conventional warfare is, therefore, critical not only for military professionals and policymakers but also for legal scholars and humanitarian actors tasked with regulating and mitigating the conduct of war.
Modern Warfare
Modern warfare is characterised by the intersection of advanced technologies, asymmetric warfare, and multi-domain operations that expand beyond the traditional battlefields. Contrary to the manpower-dominant, tank- and gun-centric 20th-century wars, wars in the 21st century are becoming more dependent on digital technology, unmanned systems, and information dominance. Advances in modern warfare follow technological innovations and the changing character of threats globally.
At the heart of modern war lies the confluence of new technologies such as artificial intelligence (AI), cyber, unmanned systems, hypersonic weapon systems, and space capabilities. They have revolutionised the nature of state power projection, observation capacity, and tactical advantage. New warfare is also marked by its multi-domain nature, including land, air, sea, cyber, and space. Operations are no longer confined to physical geography alone but spill over into cyberattacks against strategic infrastructure, electronic warfare, and information operations directed against public opinion and enemy cohesion. Cyber warfare is central to military strategy. The 2007 Estonian cyberattacks and the 2022 cyberattack on Ukraine’s digital infrastructure illustrate how non-kinetic assets can achieve strategic objectives without the application of physical force.
AI-Based Weapons and Emerging New Technology in Defence
The Atomic and Jet Age
The 1940s witnessed a revolutionary shift in war with the development and use of nuclear weapons during World War II. The atomic bombs dropped on Hiroshima and Nagasaki in 1945 not only ended the war but also ushered in a new era of strategic deterrence. The post-war period saw the outbreak of the nuclear arms race, particularly between the Soviet Union and the United States. This era also witnessed the widespread adoption of jet propulsion, significantly enhancing the speed and manoeuvrability of aircraft, as evidenced by the American F-86 Sabre and the Soviet MiG-15 during the Korean War. Radar technology also matured during this time, becoming the cornerstone of early warning and air defence systems. These developments precipitated a new, technology-driven model of warfare.
Cold War Technology and Space Militarisation
The 1960s and 1970s were overshadowed by the Cold War and were marked by technological rivalry and superpower ideological opposition. This period witnessed the creation of Intercontinental Ballistic Missiles (ICBMs) and Submarine-Launched Ballistic Missiles (SLBMs), which provided the core of nuclear deterrence. These systems facilitated the presence of second-strike retaliation capability, leading to the Mutually Assured Destruction (MAD) doctrine.
Satellites became increasingly militarised throughout this period for communications, weather monitoring, and surveillance. The American Corona project and the Soviet Zenit series enabled global reconnaissance and intelligence monitoring. Electronic warfare capabilities were also emerging, centred on radar jamming, signals intelligence (SIGINT), and electronic countermeasures.
Digital Warfare and Strategic Defence Initiatives
The 1980s introduced the digital revolution in military systems. Computers were integrated into command and control structures, enabling real-time coordination of forces. The U.S. invested heavily in Stealth technology, leading to the deployment of the F-117 Nighthawk. Its ability to evade radar represented a breakthrough in air superiority.
Another notable development was the rise of cruise missiles, such as the U.S. Tomahawk and Soviet Kh-55, which combined precision with long-range capabilities. Ronald Reagan’s Strategic Defence Initiative (SDI), although never fully realised, proposed the use of space-based missile defence systems, highlighting the growing interest in space as a military frontier.
Precision Warfare and Network-Centric Operations
The 1991 Gulf War demonstrated a new kind of warfare, one driven by precision-guided munitions (PGMs), real-time satellite imagery, and network-centric warfare (NCW). PGMs such as laser- and GPS-guided bombs allowed the U.S.-led coalition to strike with unprecedented accuracy, minimising civilian casualties and reducing the need for large-scale ground invasions.
The concept of NCW emphasised the integration of information across the battlefield to improve situational awareness and decision-making. The era also marked the early deployment of Unmanned Aerial Vehicles (UAVs) for surveillance, laying the groundwork for their later weaponisation.
The Rise of Drones and Cyber Warfare
The post-9/11 era saw the rapid militarisation of UAVs. Platforms like the MQ-1 Predator and MQ-9 Reaper were equipped with Hellfire missiles and deployed extensively in counterterrorism operations across the Middle East, Afghanistan, and Africa. These systems enabled remote warfare, allowing operators thousands of miles away to conduct precision strikes.
Simultaneously, cyber warfare emerged as a serious concern. The 2007 cyberattacks on Estonia and the Stuxnet virus, which disrupted Iran’s nuclear program in 2010, highlighted the potential of cyberspace as a domain of strategic warfare. Governments began investing in both offensive and defensive cyber capabilities, often blurring the line between military and civilian infrastructures.
Autonomous Weapons and Lethal Autonomous Weapon Systems (LAWS)
It is imperative to understand the basics of artificial intelligence (AI), autonomous weapons, and lethal autonomous weapon systems (LAWS) to grasp the utilisation of advanced technologies in modern warfare. These are the precepts upon which an understanding of the implications of these technologies and how they are being utilised in modern theatres of conflict must be based.
When computers or computer programs imitate human intellectual activities such as learning, reasoning, problem-solving, and decision-making, this is called artificial intelligence (AI). AI may be applied in the military to manage large sets of data, recognise trends, detect anomalies, and assist operational decision-making in real-time. Predictive analytics, computer vision, natural language processing, and machine learning algorithms constitute AI technology. They work in a broad range of military operations, from the analysis of satellite photographs and the coordination of supplies to operating unmanned vehicles and commanding the battlefield.
AI is among the primary drivers of autonomous weapons systems, making it possible for machines to function in extreme conditions without real-time human input. Such systems are typically categorised by the degree of autonomy they are granted:
- Semi-autonomous systems, where human intervention is required for major decisions, such as target selection or attack.
- Supervised autonomous systems, where human supervision is maintained and humans can intervene if necessary.
- Completely autonomous systems, capable of conducting their mission, including the use of lethal force, without human direction.
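This three-tier categorisation can be sketched as a simple model. The code below is purely illustrative (the enum and function names are invented for this example) and captures the one property that distinguishes the levels: whether a human must approve each engagement.

```python
from enum import Enum

class AutonomyLevel(Enum):
    SEMI_AUTONOMOUS = 1        # human selects or approves every target
    SUPERVISED_AUTONOMOUS = 2  # system acts; human monitors and can veto
    FULLY_AUTONOMOUS = 3       # no human in the decision loop

def requires_human_approval(level: AutonomyLevel) -> bool:
    # Only semi-autonomous systems require an affirmative human
    # decision before each engagement; supervised systems merely
    # allow intervention, and fully autonomous systems allow none.
    return level == AutonomyLevel.SEMI_AUTONOMOUS

print(requires_human_approval(AutonomyLevel.SEMI_AUTONOMOUS))   # True
print(requires_human_approval(AutonomyLevel.FULLY_AUTONOMOUS))  # False
```

The ethical debates discussed below turn precisely on where a given system sits in this taxonomy.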
Autonomous weapons involve integrating drones, loitering munitions, or unmanned ground vehicles (UGVs) utilising AI for navigation, object detection, target ordering, and engagement procedures. Ethical and operational debate intensifies when these technologies move into the lethal autonomy segment.
Lethal Autonomous Weapon Systems (LAWS) are a class of autonomous weapon designed to apply deadly force independently of human control in the critical functions of selecting and engaging targets. A LAWS can therefore be deployed to search for, identify, and strike a target without any human decision-making in the sequence. LAWS fall outside the usual parameters of conventional military norms and raise fundamental ethical, legal, and accountability problems.
Full-Spectrum Autonomy, Quantum, and Hypersonic Technologies
In the current decade, defence innovation is being driven by AI integration, swarm robotics, hypersonic weapons, and quantum technologies. Nations are investing heavily in autonomous drone swarms, AI-powered command systems, and hypersonic glide vehicles (HGVs) capable of penetrating missile defences due to their speed and manoeuvrability.
Simultaneously, quantum computing and quantum encryption are being explored for secure communications and next-generation navigation systems. The militarization of space has accelerated, with anti-satellite (ASAT) tests and plans for space-based sensors and defence platforms.
Recent Trends in the Use of AI in Warfare
Case Study I: Operation Sindoor – India’s Precision Strike in the Age of AI Warfare
In Operation Sindoor, the use of Artificial Intelligence (AI) marked a transformative shift in how India conducted precision military operations. AI was deployed across several layers of the battlefield, from intelligence gathering to real-time decision-making, targeting, and threat interception.
Operation Sindoor, launched by the Indian Armed Forces in May 2025, marked a significant advancement in integrating artificial intelligence (AI) within military operations. This operation, targeting terrorist infrastructure in Pakistan and Pakistan-occupied Jammu and Kashmir, showcased the deployment of AI-driven systems across various defence domains, enhancing precision, efficiency, and strategic effectiveness.
Akashteer – AI-Enabled Air Defence Control
The Akashteer system, an automated air defence command and control network, was deployed to manage India’s response to retaliatory aerial threats (drones, UAVs, missiles). Akashteer automatically collected data from radars, sensors, and missile batteries. It used AI to identify, classify, and prioritize aerial threats, issuing real-time alerts to air defence units. It was deployed in Operation Sindoor to network various radar stations, ground-based weapons, and surveillance units into a unified battlefield grid.
Result: Over 300 enemy drones were intercepted and neutralized during and after the strikes.
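The threat-prioritisation step that such a networked air defence system performs can be illustrated with a toy sketch: tracks from many sensors are scored and queued so that the fastest, closest, most dangerous threats are engaged first. The scoring weights and data below are invented for illustration and do not reflect Akashteer’s actual algorithms.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Track:
    priority: float                       # lower value = more urgent
    track_id: str = field(compare=False)
    kind: str = field(compare=False)      # "drone", "uav", or "missile"
    range_km: float = field(compare=False)
    speed_mps: float = field(compare=False)

def score(kind: str, range_km: float, speed_mps: float) -> float:
    # Toy scoring: faster and closer threats rank higher; the kind
    # weights are illustrative, not real air-defence doctrine.
    kind_weight = {"missile": 3.0, "uav": 2.0, "drone": 1.0}[kind]
    return -(kind_weight * speed_mps / max(range_km, 1.0))

def prioritise(detections):
    queue = []
    for tid, kind, rng, spd in detections:
        heapq.heappush(queue, Track(score(kind, rng, spd), tid, kind, rng, spd))
    return [heapq.heappop(queue).track_id for _ in range(len(queue))]

# A fast incoming missile outranks a slow, distant drone.
order = prioritise([
    ("T1", "drone",   40, 30),    # slow quadcopter, 40 km out
    ("T2", "missile", 60, 900),   # supersonic missile
    ("T3", "uav",     25, 80),    # armed UAV, close in
])
print(order)  # ['T2', 'T3', 'T1']
```

The real system's value lies in fusing many radar and sensor feeds into one such queue; the priority logic itself can remain quite simple.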
BrahMos Supersonic Missile: Offensive Precision Tool
The BrahMos missile, jointly developed by India’s DRDO and Russia’s NPOM, was used as a primary strike asset. It delivers a 200–300 kg warhead at speeds of Mach 2.8–3.0 and is capable of hitting targets with sub-meter accuracy.
In Operation Sindoor, air-launched and ground-launched BrahMos variants were used to neutralise:
- Terrorist launch pads and command centres in PoK.
- High-value communication and ammunition dumps.
- Infrastructure used for infiltration and logistics.
S-400 Triumf: Strategic Air Defence Umbrella
The S-400 Triumf, acquired from Russia, served as the backbone of India’s aerial defence during Operation Sindoor. With a detection range of 600 km and engagement capabilities of up to 400 km, the S-400 enabled Indian forces to create a defensive shield over key installations and border sectors.
During the operation, S-400 batteries were strategically deployed in northern command sectors, providing 360-degree radar coverage. Their role was twofold:
- Defensive countermeasure against any potential Pakistani Air Force retaliation.
- Airspace denial capability, deterring enemy aircraft and drones from entering sensitive Indian zones.
The psychological and tactical deterrence provided by the S-400 also allowed Indian aircraft and drones to operate with reduced threat overhead, facilitating extended reconnaissance and target validation missions.
India carried out several precise and well-planned military actions to achieve its objectives.
According to the PIB Report, the Indian Armed Forces launched coordinated and accurate missile strikes on 9 terrorist bases, 4 located in Pakistan (including Bahawalpur and Muridke) and 5 in Pakistan-occupied Kashmir (such as Muzaffarabad and Kotli). These locations were key command centres of Jaish-e-Mohammed (JeM) and Lashkar-e-Taiba (LeT), responsible for major attacks like Pulwama (2019) and Mumbai (2008).
In retaliation for Pakistani drone and missile attacks on Indian cities and military bases on May 7, 8, and 9, 2025, India deployed kamikaze drones intending to neutralise Pakistan’s air defence capabilities, including disabling Lahore’s air defence system.
India’s air defence systems successfully intercepted all incoming threats, resulting in minimal loss of life or property. In contrast, Pakistan’s HQ-9 air defence system was exposed as weak. On the night of May 9 and 10, 2025, India’s counteroffensive became a historic milestone when, for the first time, a country successfully attacked the air bases of a nuclear-armed nation.
Within just three hours, India targeted 11 military installations, including Noor Khan, Rafiqui, Murid, Sukkur, Sialkot, Pasrur, Chunian, Sargodha, Skardu, Bholari, and Jacobabad.
Satellite images before and after the strike on Shahbaz Airbase in Jacobabad clearly show the scale of destruction.
The attack targeted major ammunition depots and airbases such as Sargodha and Bholari, where F-16 and China’s JF-17 fighter jets were stationed. As a result, nearly 20% of Pakistan’s air force infrastructure was destroyed.
Turkish-Origin Drones Used By Pakistan
The Byker YIHA III is a tactical kamikaze drone (also known as a loitering munition), designed for one-way missions. Key features:
- Type: Loitering munition
- Range: Approx. 100–150 km
- Warhead: High-explosive payload (typically 5–10 kg)
- Navigation: GPS and inertial guidance
- Launch mode: Portable launcher or vehicle-mounted

The Songar is a Turkish-made armed quadcopter developed by Asisguard. It is notable for being one of the first small rotary-wing drones equipped with a functional firearm system, capable of firing in semi-automatic or burst mode. Key features:
- Type: Rotary-wing (quadcopter)
- Armament: 5.56 mm light machine gun (up to 200 rounds)
- Range: Up to 10 km (operational radius)
- Payload options: Grenade launcher variants available

The Bayraktar TB2 is one of the most prominent and widely exported medium-altitude long-endurance (MALE) drones, developed by Turkey’s Baykar Technologies. Key features:
- Type: Fixed-wing MALE UAV
- Wingspan: 12 metres
- Endurance: Over 24 hours
- Payload capacity: ~150 kg
- Weapons: Laser-guided MAM-L and MAM-C munitions
- Range: 150–300 km (line-of-sight, extended via relay stations)
Chinese Fighter Jet
The JF-17 Thunder, a lightweight multirole fighter jointly developed by Pakistan and China, serves as a cornerstone of the Pakistan Air Force’s (PAF) combat fleet. Equipped with modern avionics, including a glass cockpit and AESA radar in its Block III variant, the JF-17 is designed for air-to-air and air-to-ground missions.
In Operation Sindoor, Pakistan deployed Byker YIHA III drones, Songar armed drones, Bayraktar TB2s, and JF-17s for strikes on forward Indian bases and radar installations. However, these systems were largely ineffective against India’s integrated air defence network, which detected and intercepted them using radar and anti-drone technologies.
Over 50 individuals, including Squadron Leader Usman Yusuf and 4 airmen, were killed in the bombing of Bholari Airbase. Several Pakistani fighter jets were also destroyed.
Under Operation Sindoor, India executed precise strikes on several terrorist hubs and military facilities in Pakistan.
After Pakistani artillery and mortar attacks targeted civilian areas in the Poonch-Rajouri sector along the Line of Control, Indian forces retaliated, destroying terrorist bunkers and Pakistani army positions that were targeting civilians.
Case Study II: Russia-Ukraine War
The conflict between Russia and Ukraine has emerged as a benchmark case study for the application of Artificial Intelligence (AI) in contemporary warfare, specifically with autonomous weapons platforms. Though conventional warfare, with its tanks, artillery, and troops, remains in play, the landscape of war has been transformed by the deployment of autonomous drones, loitering munitions, and AI-based decision systems capable of real-time target identification, pursuit, and, at times, strike delivery with minimal human intervention. This war is perhaps the first major conflict in which AI-backed autonomous weapons have been used in large numbers and in various forms.
Loitering Munitions and Kamikaze Drones
One of the most visible and influential applications of AI-fuelled autonomous weapons in the conflict is the employment of loitering munitions, or “kamikaze drones.” These are sent into contested skies, where they patrol until a target is detected. Once a target is identified, typically with the aid of AI-driven visual recognition, they dive onto it and detonate on impact.
Ukraine has employed a variety of loitering munition variants, ranging from the United States-provided Switchblade 300 and Switchblade 600 systems to Phoenix Ghost, which is a locally developed platform optimised for Ukrainian theatre conditions. They can autonomously detect tanks, artillery locations, and enemy forces based on specified signatures and destroy them on their own.
Russia has also employed loitering drones, the most prominent of which are the Lancet series. The Lancet systems feature autonomous navigation and target identification features, employing onboard AI to detect and engage Ukrainian military equipment. Lancet drones are deployed from mobile launchers and have been successful at engaging howitzers, radar sites, and air defence systems.
Autonomous Navigation and Swarm Drones
Ukrainian autonomous weapons are not restricted to strike missions. Ukraine has tested drone swarming technology, a notion where numerous autonomous drones fly in concert through AI algorithms used for communicating, collision avoidance, and performing intricate group manoeuvres without human input.
Though the full swarming potential is currently in development stages, Ukraine has already showcased limited-scale swarm-like operations wherein drones with rudimentary AI systems undertake surveillance, jamming, and attack missions in tandem. Such drones can be used in GPS-denied areas, owing to AI-driven visual mapping and inertial navigation systems.
Furthermore, AI is being exploited to improve self-navigation in long-range strike UAVs. For example, Ukraine has launched low-cost, extended-range UAVs that can strike deep into Russian territory, following pre-mapped paths and adapting to terrain and real-time conditions through AI.
Semi-Autonomous Ground Robots and Turrets
On the battlefield, autonomous and semi-autonomous systems have also been used to a limited extent. Ukraine has deployed remote-controlled or semi-autonomous ground vehicles for reconnaissance, mine-clearing, and even weaponised turret systems that can detect and engage targets with AI support.
An example of this is AI-enabled machine gun platforms at checkpoints or frontlines, which employ thermal and motion sensors to detect human bodies and determine threat levels. Although these are typically overseen by humans, the targeting process is increasingly left to AI to accelerate response time.
Russia’s ZALA Lancet loitering munitions carry optical-electronic and TV guidance systems that allow them to autonomously detect and attack targets, and they have been used against a range of Ukrainian targets, including air defence systems, artillery, and ships.
Ukraine’s Delta System: Ukraine has built and rolled out Delta, a cloud-based situational awareness platform that fuses information from drones, satellites, and ground-based systems into a real-time, integrated picture of the battlefield. It aids planning, coordinates military forces, and exchanges intelligence securely.
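The core idea of such a situational awareness platform, fusing reports from many sensors into one picture per tracked object, can be sketched in a few lines. This is a hypothetical illustration of the data-fusion concept, not Delta’s actual design; all identifiers and coordinates are invented.

```python
from collections import defaultdict

def fuse(reports):
    """Merge sensor reports into one entry per object ID,
    keeping the freshest position and recording all sources."""
    fused = defaultdict(lambda: {"pos": None, "ts": -1, "sources": set()})
    for oid, src, ts, pos in reports:   # (object_id, source, timestamp, (x, y))
        entry = fused[oid]
        entry["sources"].add(src)
        if ts > entry["ts"]:            # newer report wins
            entry["ts"], entry["pos"] = ts, pos
    return dict(fused)

picture = fuse([
    ("unit-7", "drone",     100, (48.1, 37.5)),
    ("unit-7", "satellite", 140, (48.2, 37.6)),  # fresher fix for unit-7
    ("unit-9", "ground",    120, (48.0, 37.0)),
])
print(picture["unit-7"]["pos"])  # (48.2, 37.6)
```

Real systems add track correlation, uncertainty estimates, and secure distribution, but the freshest-report-wins merge above is the conceptual core.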
Case Study III: Israel-Gaza Conflict
The Israel-Gaza conflict has become one of the world’s first live labs for next-generation military artificial intelligence (AI) systems. With more warfare being conducted in the digital sphere, the Israeli Defence Forces (IDF) have led the charge in applying AI to operational strategy, especially via tools such as Lavender and The Gospel. These technologies aim to automate and streamline the target identification and attack coordination process. Though these systems have improved battlefield effectiveness, they also present deep legal and ethical concerns regarding the place of AI in contemporary warfare.
AI Targeting Systems: Lavender and The Gospel
Lavender is an AI tool allegedly created to detect potential Hamas agents through massive metadata caches, such as mobile phone activity, communication networks, and geolocation information. According to investigative reporting by +972 Magazine (Nir, Yuval, & Furer, 2024), Lavender identified up to 37,000 people as targets in a single escalation cycle. Though IDF officials characterise the system as an “assistive tool” and not a fully autonomous platform, reports indicate that human oversight was frequently minimal, sometimes as little as 20 seconds per target before an airstrike was approved.
The Gospel, another AI-driven tool, provides a complementary capability by combining surveillance feeds, intelligence inputs, and targeting data to suggest or verify high-priority strike targets, specifically infrastructure tied to militant operations. It is utilised mainly to facilitate strike planning against buildings, tunnels, or suspected weapons stockpiles.
These systems are part of a wider network of AI-powered technologies, including real-time data fusion platforms and semi-autonomous drones, which enable the IDF to launch thousands of strikes over short periods. The integration of machine learning, automated surveillance, and algorithmic targeting has added a degree of scalability and efficiency not witnessed before in conflict.
AI systems such as Lavender and The Gospel represent a paradigm shift in contemporary warfare away from human-led decision-making and towards data-led automation. Though they provide important tactical benefits, they also create deep ethical and legal issues.
Are AI-Based Weapons More Humane?
Risks of Algorithmic Bias and Overreliance on AI
As militaries more widely employ Artificial Intelligence (AI) in targeting, surveillance, and operational decision-making, fear of algorithmic bias and AI system overdependence has escalated. These dangers have been witnessed in both the Russia-Ukraine conflict and the Israel-Gaza war, where AI-powered platforms have taken prominent roles in battlefield intelligence and strike coordination.
Algorithmic Bias in Target Identification
AI for military targeting depends on large data sets and pattern-recognition software, which can encode biases or make unjustified assumptions. The Israeli Defence Forces’ (IDF) use of the Lavender system in Gaza, which reportedly identified more than 30,000 people as possible Hamas operatives, exemplifies the risks of data-driven profiling. Lavender allegedly used metadata such as phone use patterns, location history, and social relationships. In a crowded and heavily monitored setting such as Gaza, such criteria are inadequate to differentiate between combatants and civilians. Misclassification based on skewed or incomplete information is a likely route to unlawful killings and breaches of the principle of distinction under International Humanitarian Law.
The same issues recur in Ukraine, where both Russia and Ukraine have employed AI-powered facial recognition (e.g., Clearview AI and other bespoke software) and predictive targeting technology. Such tools can flag individuals based on online activity or affiliations without sufficient context. In conflict zones where non-traditional combatants blur the military-civilian distinction, the margin for error is very narrow and mistakes can be fatal.
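The base-rate problem behind such misclassification can be made concrete with simple arithmetic: when actual combatants are a small fraction of a monitored population, even a classifier that looks accurate flags far more civilians than combatants. All numbers below are illustrative assumptions, not figures from any real system.

```python
# Toy base-rate calculation for a metadata-driven classifier.
population = 2_000_000       # illustrative monitored population
combatant_rate = 0.01        # assume 1% are actual combatants
sensitivity = 0.90           # P(flagged | combatant)
false_positive_rate = 0.10   # P(flagged | civilian)

combatants = population * combatant_rate          # 20,000
civilians = population - combatants               # 1,980,000
true_positives = combatants * sensitivity         # 18,000
false_positives = civilians * false_positive_rate # 198,000

# Precision: of everyone flagged, how many are actually combatants?
precision = true_positives / (true_positives + false_positives)
print(f"civilians wrongly flagged: {false_positives:,.0f}")  # 198,000
print(f"P(combatant | flagged):    {precision:.1%}")         # 8.3%
```

Under these assumptions, over nine in ten flagged individuals are civilians, which is exactly why minimal human review of algorithmic target lists is so dangerous under the principle of distinction.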
Overreliance and Automation Bias
Increasing reliance on AI threatens to erode human judgment and control. In Israel’s AI-assisted strike processes, human review was frequently cut to as little as 20 seconds per target, according to investigative reports. Such cursory review undermines effective human control and encourages automation bias, where operators defer to machine suggestions even when they have doubts.
During the Russia-Ukraine conflict, autonomous drones and automated artillery fire systems have proven operationally beneficial but raised questions regarding decoupling human commanders from deadly decisions. For example, Ukraine’s application of AI-embedded reconnaissance drones has been praised for boosting efficiency but condemned for their potential to attack civilians in disputed areas where combatants do not wear uniforms or fight openly.
Future of AI-based weapons
Artificial Intelligence (AI) is fundamentally transforming military power worldwide. AI weapons incorporating autonomy, data handling, and adaptive decision-making are moving out of theory and into the real world. They offer increased speed, accuracy, and efficiency but pose challenging ethical and legal issues. This section examines the future of AI weapons through concrete examples and their probable effects on warfare and global security.
- Skyborg Program – United States
Overview: The U.S. Air Force’s Skyborg program aims to develop AI-enabled unmanned combat aerial vehicles (UCAVs) that can autonomously perform missions and team with manned aircraft.
Status: As of 2024, the Air Force plans to formalize Skyborg as a program of record, integrating it into the broader Collaborative Combat Aircraft (CCA) initiative.(Defense Daily)
- MQ-28A Ghost Bat – Australia
Overview: Developed by Boeing Australia, the MQ-28A Ghost Bat (formerly known as the Loyal Wingman) is an AI-powered UCAV designed to operate alongside manned aircraft, providing support in combat missions.
Status: As of October 2024, eight Block 1 vehicles have been built with over 100 hours of flight testing. Additional Block 2 airframes are under production, featuring improved capabilities.
- Sukhoi S-70 Okhotnik-B – Russia
Overview: The Sukhoi S-70 Okhotnik-B is a stealth UCAV developed by Russia, intended to operate in conjunction with the Su-57 fighter jet, utilising AI for autonomous operations.(Grey Dynamics)
Status: Recent reports indicate that Russia is preparing to deploy the Okhotnik alongside the Su-57, highlighting advancements in manned-unmanned teaming strategies.
- ALFA-S Swarm Drones – India
Overview: India’s Air Launched Flexible Asset-Swarm (ALFA-S) project focuses on developing AI-enabled swarm drones capable of performing various missions, including surveillance and electronic warfare.
Status: The Indian Air Force has unveiled plans to acquire ALFA-S drones, with development trials expected to commence later in 2024.(IDRW)
- Jiu Tian Drone Mothership – China
Overview: China’s Jiu Tian is a large UAV designed to act as a “mothership,” capable of deploying swarms of AI-powered drones for various military operations.
Status: The Jiu Tian was unveiled during the Zhuhai Airshow and is set to enhance China’s air power by enabling swarm tactics. (thesun.co.uk)
- Tempest Future Combat Air System – UK
Overview: The UK’s Tempest program aims to develop a sixth-generation fighter aircraft incorporating AI, advanced sensors, and stealth capabilities, as part of the Future Combat Air System (FCAS).
Status: The UK continues to assess platform options for Tempest’s autonomous wingman, focusing on integrating AI and collaborative combat technologies. (Flight Global)
India is developing its own Autonomous Underwater Vehicles (AUVs) for naval operations with companies such as Sagar Defence Engineering, using indigenous AI-powered platforms. These crewless robotic submarines, driven by artificial intelligence, are built to improve essential naval tasks such as mine clearance (safely locating and neutralising underwater mines), surveillance (monitoring maritime areas), and reconnaissance (collecting intelligence).
Stealthy Aerial Surveillance from Submarines: India’s Defence Research and Development Organisation (DRDO) is developing Underwater-Launched Unmanned Aerial Vehicles (ULUAVs). These drones can be launched from a submerged submarine, enabling aerial surveillance of an area while keeping the submarine hidden, which helps maintain an important stealth advantage.
Enhanced Efficiency with Coordinated Drone Swarms: These AUVs are designed to operate in swarms, with multiple vehicles exchanging information and dividing large areas between them. This coordination greatly improves the speed and coverage of tasks such as detailed seafloor mapping and wide-area coastal monitoring.
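The core idea behind swarm coverage, dividing a survey area so several vehicles work in parallel, can be shown with a deliberately simplified sketch. This is an illustrative toy only, with an invented grid and a round-robin assignment scheme; it has no relation to any actual AUV control software:

```python
# Toy sketch: partitioning a survey grid among a swarm -- illustrative only.
# N vehicles cover the area in parallel instead of one vehicle covering it alone.

def grid_waypoints(rows, cols):
    """Generate a boustrophedon ('lawnmower') path over a rows x cols grid."""
    path = []
    for r in range(rows):
        line = [(r, c) for c in range(cols)]
        path.extend(line if r % 2 == 0 else line[::-1])
    return path

def partition_among_swarm(waypoints, n_drones):
    """Round-robin assignment: drone i takes every n-th waypoint."""
    return [waypoints[i::n_drones] for i in range(n_drones)]

waypoints = grid_waypoints(4, 6)                  # 24 survey points
assignments = partition_among_swarm(waypoints, 3) # split across 3 vehicles
for i, wps in enumerate(assignments):
    print(f"drone {i}: {len(wps)} waypoints")     # 8 waypoints each
```

Each vehicle receives an equal, disjoint share of the waypoints, so total survey time shrinks roughly in proportion to swarm size, which is the efficiency gain the paragraph above describes.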
AI-Boosted Real-Time Ocean Surveillance for Threat Detection: India is integrating AI-powered sonar and radar systems into its naval defences. These systems use artificial intelligence to identify underwater threats, such as enemy submarines or unmanned vehicles, quickly and precisely, and to flag unusual or suspicious naval activity. Operation Sindoor highlighted India’s use of AI-powered underwater drones to strengthen Underwater Domain Awareness (UDA), a comprehensive picture of all activity taking place in the underwater environment.
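One common pattern behind such threat detection is anomaly flagging: contacts whose acoustic signature deviates sharply from a baseline of routine traffic are escalated for human review. The sketch below is a toy illustration with invented feature values; fielded sonar classification is vastly more complex:

```python
# Toy sketch of anomaly-based contact flagging -- illustrative only,
# with invented values; not any real naval system.
from statistics import mean, stdev

def flag_anomalies(baseline, contacts, z_threshold=3.0):
    """Flag contacts whose signature score deviates strongly from the baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [c for c in contacts if abs(c - mu) / sigma > z_threshold]

# Baseline: signature scores of routine merchant traffic (invented numbers).
baseline = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7]
# New contacts: two routine, one sharply different (e.g. an unusually quiet contact).
contacts = [10.0, 9.9, 4.2]
print(flag_anomalies(baseline, contacts))   # -> [4.2]
```

The point of the sketch is the division of labour: the algorithm filters a large stream of contacts down to the few that merit attention, which is exactly the speed advantage, and the accountability risk, discussed elsewhere in this article.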
Recommendations:
India has a strong defence partnership with Israel, yet several Israeli AI-driven military systems remain outside its inventory. Among the Israeli AI defence technologies India has not purchased:
- AI-Powered Iron Beam Laser Defence System
A directed-energy weapon that uses high-powered lasers to neutralise drones, rockets, and missiles. AI-driven target tracking and interception algorithms ensure rapid response against aerial threats. India has invested in DRDO’s laser-based air defence, but has not acquired Israel’s Iron Beam system.
- AI-Enhanced Sky Dew Aerostat Surveillance System
An AI-powered airborne radar system that provides persistent surveillance and early warning against aerial threats. Uses machine learning algorithms to detect and classify enemy aircraft and missiles. India has developed Akashteer for air defence, but has not acquired Sky Dew’s AI-driven surveillance capabilities.
- AI-Driven Carmel Armoured Combat Vehicle
An autonomous AI-assisted combat vehicle designed for urban warfare and battlefield reconnaissance. Uses AI-powered decision-making systems to optimise navigation and targeting. India has focused on UGVs (Unmanned Ground Vehicles) but has not acquired Carmel’s AI-driven combat systems.
- AI-Powered Harpy NG Loitering Munitions
Next-generation AI-driven loitering (“kamikaze”) munitions designed for electronic warfare and the destruction of radar installations, which they can autonomously identify and neutralise without human intervention. India has developed indigenous loitering munitions but has not purchased Israel’s Harpy NG.
- AI-Assisted Cyber Warfare & Electronic Defence Systems
AI-powered cyber-intelligence platforms capable of detecting and neutralising cyber threats in real time. Israel has developed AI-enhanced cyber-deception and counter-hacking technologies that India has not yet procured.
India has likewise not acquired several of Russia’s most advanced AI-driven defence systems. Key AI-powered military technologies that Russia has developed but India has not purchased:
- S-500 Prometey Air Defence System
Next-generation missile defence system capable of intercepting hypersonic missiles, satellites, and intercontinental ballistic missiles (ICBMs). Extended range of up to 600 km, surpassing the S-400 currently in India’s arsenal. Russia has offered joint production of the S-500 to India, but no formal agreement has been signed yet.
- AI-Powered Sukhoi Su-57M Fighter Jet
AI-assisted flight control, navigation, and target selection, reducing pilot workload and improving combat efficiency. Stealth capabilities and advanced radar systems make it competitive with U.S. F-22 and F-35 fighters. India operates Su-30MKI jets but has not yet acquired the AI-enhanced Su-57M.
- AI-Driven Electronic Warfare Systems
Russia has developed AI-powered electronic warfare platforms capable of jamming enemy radar, disrupting communications, and countering cyber threats. These systems are integrated into modern Russian fighter jets and missile defence networks, but India has not yet procured them.
- AI-Assisted Autonomous Combat Vehicles
Russia is developing AI-powered unmanned ground combat vehicles (UGVs) for urban warfare and reconnaissance. These autonomous tanks and robotic infantry units are designed for high-risk combat zones, but India has not yet invested in them.
- AI-Enhanced Space Warfare Systems
Russia has AI-driven satellite defence and orbital surveillance technologies to counter space-based threats. India is advancing its own space defence programs but has not yet acquired Russia’s AI-powered orbital strike platforms.
Future Outlook
AI-Powered Autonomous Underwater Combat: Looking ahead, India may field advanced AI-driven underwater combat systems able to locate, identify, and respond to threats with minimal human intervention.
Conclusion
The incorporation of Artificial Intelligence into warfare has irreversibly reshaped the contemporary battlefield. As the India-Pakistan, Russia-Ukraine, and Israel-Gaza conflicts show, AI-powered systems, from autonomous drones and loitering munitions to target-recognition algorithms such as Lavender and The Gospel, and operations such as Operation Sindoor, have become central to both offensive and defensive capability. These technologies have added speed, accuracy, and scalability to military action, enabling states to pursue tactical goals with unprecedented efficiency.
But this evolution of war comes at the cost of serious ethical, legal, and humanitarian concerns. The deployment of AI-reliant weapons raises grave questions about compliance with International Humanitarian Law (IHL), particularly its principles of distinction, proportionality, and accountability. Pattern-recognition and metadata-based systems, unlike human judgment, risk classifying civilians as combatants, an outcome that not only violates IHL but also de-legitimises the conduct of war. Over-reliance on AI likewise breeds automation bias, eroding meaningful human agency and accountability in lethal decision-making.
The Pahalgam attack was a premeditated and strategic assault with both operational and psychological objectives. It sought to destabilise the region during a politically sensitive time, provoke escalation, and reassert the presence of Pakistan-backed terrorism in the Kashmir Valley. The Indian military’s subsequent launch of Operation Sindoor, a precision counter-terror operation, was a direct response aimed at dismantling terror launch pads and sending a strong message regarding India’s evolved and technology-enabled security doctrine.
The Israeli-Palestinian conflict shows how AI can be a tactical multiplier and a humanitarian danger at the same time. The use of AI systems to generate vast target lists with minimal human oversight illustrates the risk of shifting vital ethical choices onto opaque algorithms.
References
Government of India. “DGMO Briefs Media on Operation Sindoor.” Press Information Bureau, May 10, 2025. https://www.pib.gov.in/PressReleasePage.aspx?PRID=2128133
“S-400, Akash, Electronic War: How India’s Air Defence Systems Protect Skies from Pakistan Ceasefire Violations.” Hindustan Times, May 10, 2025. https://www.hindustantimes.com/india-news/s400-akash-electronic-war-how-indias-air-defence-systems-protect-skies-from-pakistan-ceasefire-violations-101746968323741.html
“Operation Sindoor: DGMOs Show Wreckage of Chinese PL-15 Missile, Turkish Drones; Say Layered Air Defence Grid Proved Impenetrable.” The Times of India, May 11, 2025. https://timesofindia.indiatimes.com/india/operation-sindoor-dgmos-show-wreckage-of-chinese-pl-15-missile-turkish-drones-says-layered-air-defence-grid-proved-impenetrable/articleshow/121105230.cms
Government of India. “India’s Retaliatory Strikes Demonstrate Pinpoint Accuracy and Technological Superiority.” Press Information Bureau, May 12, 2025. https://www.pib.gov.in/PressReleasePage.aspx?PRID=2128746
“Explained: How Operation Sindoor Demonstrates Capabilities of Made-in-India Defence Technology.” The Indian Express, May 11, 2025.
https://indianexpress.com/article/explained/how-operation-sindoor-demonstrates-capabilities-of-made-in-india-defence-technology-10007250/.
Arkin, Ronald C. Governing Lethal Behavior in Autonomous Robots. CRC Press, 2009. https://doi.org/10.1201/9781420091113
Asaro, Peter. “On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making.” International Review of the Red Cross 94, no. 886 (2012): 687–709. https://doi.org/10.1017/S1816383112000768
Boulanin, Vincent, and Maaike Verbruggen. Mapping the Development of Autonomy in Weapon Systems. Stockholm International Peace Research Institute (SIPRI), 2017. https://www.sipri.org/sites/default/files/2017-11/siprireport_mapping_the_development_of_autonomy_in_weapon_systems_1117.pdf
Crootof, Rebecca. “The Killer Robots Are Here: Legal and Policy Implications.” Cardozo Law Review 36, no. 5 (2015): 1837–1915. https://cardozolawreview.com/killer-robots-are-here/
Human Rights Watch. Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control. August 2020. https://www.hrw.org/report/2020/08/10/stopping-killer-robots/country-positions-banning-fully-autonomous-weapons-and
United Nations Office for Disarmament Affairs (UNODA). The Legality of Autonomous Weapon Systems under International Humanitarian Law. 2023. https://www.un.org/disarmament/publications/occasionalpapers/no-38-legality-autonomous-weapons
Abraham, Yuval. “‘Lavender’: The AI Machine Directing Israel’s Bombing Spree in Gaza.” +972 Magazine, April 3, 2024. https://www.972mag.com/lavender-ai-israel-gaza/
Vincent, James. “Ukraine Is Using AI to Target Russian Troops.” The Verge, March 22, 2023. https://www.theverge.com/2023/3/22/23651640/ukraine-war-ai-targeting-palantir
Dwoskin, Elizabeth. “How AI Is Powering Ukraine’s Defense.” The Washington Post, May 10, 2023. https://www.washingtonpost.com/technology/2023/05/10/ai-ukraine-war/
Cummings, M.L. “Artificial Intelligence and the Future of Warfare.” Chatham House, January 2017. https://www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-artificial-intelligence-future-warfare-cummings-final.pdf
Scharre, Paul. Army of None: Autonomous Weapons and the Future of War. W. W. Norton, 2018. https://wwnorton.com/books/Army-of-None/
Taddeo, Mariarosaria, and Luciano Floridi. “How AI Can Be a Force for Good in Warfare.” Nature, February 2021. https://www.nature.com/articles/d41586-021-00312-2
Defence Update. “Israel’s Iron Dome Missile Defence Explained.” Updated 2023. https://defense-update.com/20230601_iron_dome.html
International Committee of the Red Cross (ICRC). Autonomous Weapon Systems: Implications of Increasing Autonomy in the Critical Functions of Weapons. 2021. https://www.icrc.org/en/document/autonomous-weapon-systems
Palantir Technologies. “Supporting Ukraine’s Defence with AI.” Palantir Press Room, 2023. https://www.palantir.com/blog/supporting-ukraines-defense/
Federation of American Scientists. “Military Applications of Artificial Intelligence: Ethical Concerns.” February 2023. https://fas.org/publication/military-ai-ethics/
Bode, Ingvild. “Meaning-Less Human Control in Weapon Systems.” Journal of Conflict and Security Law 27, no. 1 (2022): 45–67. https://doi.org/10.1093/jcsl/krac003
Kiss, Yudit. “The Israel Defense Forces and AI: Changing the Ethics of War?” Global Research, November 2023. https://www.globalresearch.ca/israel-ai-changing-ethics-of-war/5836823
AI Now Institute. Algorithmic Accountability in Armed Conflict. 2022. https://ainowinstitute.org/reports.html
NATO Communications and Information Agency. “Artificial Intelligence for Defence.” NATO, 2023. https://www.ncia.nato.int/about-us/our-work/artificial-intelligence.html
Mozur, Paul, and Josh Chin. “Facial Recognition and Warfare in Ukraine.” The New York Times, June 12, 2022. https://www.nytimes.com/2022/06/12/technology/ukraine-facial-recognition-clearview.html
Future of Life Institute. Lethal Autonomous Weapons Pledge. 2022. https://futureoflife.org/lethal-autonomous-weapons-pledge/
Elbit Systems. “SkyStriker Loitering Munition.” Technical Datasheet, 2023. https://elbitsystems.com/product/skystriker-loitering-munition/
IAI (Israel Aerospace Industries). “Harop Loitering Munition.” https://www.iai.co.il/p/harop
International Panel on the Regulation of Autonomous Weapons (iPRAW). Weapons Autonomy and Legal Accountability. December 2021. https://www.ipraw.org/wp-content/uploads/2021/12/iPRAW_Legal_Accountability_Report.pdf
Brumfiel, Geoff. “Israel Is Using an AI System to Find Targets in Gaza. Experts Say It’s Just the Start.” OPB, December 14, 2023. https://www.opb.org/article/2023/12/14/israel-ai-warfare-gaza-gospel/
“Israel’s Military Uses AI Targeting System in Warfare.” The Nota, December 14, 2023. https://thenota.com/post/2023/dec/14/israel-military-ai-targeting-system/
International Committee of the Red Cross (ICRC). “What You Need to Know About Autonomous Weapons.” https://www.icrc.org/en/document/what-you-need-know-about-autonomous-weapons
Bendett, Sam. “Roles and Implications of AI in the Russian-Ukrainian Conflict.” Russia Matters, July 20, 2023. https://www.russiamatters.org/analysis/roles-and-implications-ai-russian-ukrainian-conflict
“Weaponising Artificial Intelligence in the Russia-Ukraine War.” ResearchGate, February 2024. https://www.researchgate.net/publication/379345343_Weaponizing_Artificial_Intelligence_in_the_Russia-Ukraine_war
“U.S. Air Force Expects to Make Skyborg Program of Record in Fiscal 2024.” Defense Daily. https://www.defensedaily.com/u-s-air-force-expects-to-make-skyborg-program-of-record-in-fiscal-2024/air-force/
“Exclusive: Russia Ready to Deploy Okhotnik Stealth Drone with Su-57 Fighter Jet.” Army Recognition, 2025. https://armyrecognition.com/news/aerospace-news/2025/exclusive-russia-ready-to-deploy-okhotnik-stealth-drone-with-su-57-fighter-jet-in-new-manned-unmanned-strike-strategy
“IAF Eyes Fighter Jet-Launched ALFA-S Swarm Drones.” IDRW. https://idrw.org/iaf-eyes-fighter-jet-launched-alfa-s-swarm-drones/
“Watch Terrifying Vision of Secret Chinese Drone Mothership That Launches AI Killer Swarms.” The Sun. https://www.thesun.co.uk/news/35019763/chinese-drone-mothership-killer-swarms/
“UK Continues to Assess Platform Options for Tempest’s Autonomous Wingman.” Flight Global. https://www.flightglobal.com/defence/uk-continues-to-assess-platform-options-for-tempests-autonomous-wingman/162686.article
Financial Express. “Indian Navy to Boost Underwater Capabilities with Indigenous AUVs by Sagar Defence Engineering.” The Financial Express, March 25, 2024.
Times of India. “DRDO Plans India’s First Underwater-Launched UAV, Awards Contract to Pune Startup Sagar Defence.” The Times of India, May 14, 2024.
India Defence and Aerospace Bulletin. “Swarm Intelligence & Collaborative Unmanned Drone Systems for Maritime Surveillance.” IDAB, December 31, 2024.
Bharat Shakti. “India’s New Maritime Edge: How Operation Sindoor Redefined Underwater Drones & Deterrence.” BharatShakti.in, April 9, 2024.
Times of India. “Indrajaal Rolls Out AI-Driven Anti-Drone System to Protect Critical Infrastructure.” The Times of India, February 27, 2024.