This paper, "Whale Songs of Wars Not Yet Waged: The Demise of Natural-Born Killers through Human-Machine Teamings Yet to Come," was written by Dr. Ben Zweibelson, director of the U.S. Space Command's Strategic Innovation Group, and published in the Journal of Advanced Military Studies, a peer-reviewed academic journal of the U.S. Marine Corps University Press.
It explores the evolving relationship between humans and artificial intelligence (AI) in warfare, arguing that future battlefields may push human decision makers and operators to the sidelines or eliminate them entirely. As AI systems grow more advanced, they may develop the ability to reason independently and make decisions beyond human comprehension or control. This could transform the nature of warfare, leaving humans as increasingly inadequate handlers of weaponized capabilities that surpass them. The author warns that this transformation could occur gradually or suddenly, and that it is essential to consider the ethical, moral, and legal implications of such a future.
Main Points:
Current human-machine dynamics in security affairs position the human operator as the primary decision maker, with AI providing augmentation and support.
As AI technology advances, humans are increasingly shifted to an on-the-loop or off-the-loop role, with AI assuming more responsibility for warfare and defense decisions.
Human operators may eventually be pushed behind the loop or even out of the loop entirely, as AI systems become more sophisticated and capable.
The development of general AI, which could match or exceed human cognitive abilities in every respect, raises significant concerns about the future of warfare and the role of humans in it.
The author argues that it is crucial to engage in deep philosophical thinking about the potential consequences of AI-enabled warfare, and to consider the ethical, moral, and legal implications of such a future.
Implications:
The rise of AI in warfare could revolutionize the nature of conflict, potentially shifting it away from human-directed warfare altogether.
The development of autonomous weapon systems raises significant ethical, moral, and legal concerns, and it is essential to address these issues before such systems are deployed.
The future of warfare is uncertain, making it important to weigh how AI-enabled technologies may reshape the human role in conflict.
It is crucial to engage in ongoing research and dialogue on the ethical, moral, and legal aspects of AI in warfare, and to develop appropriate safeguards and regulations to ensure responsible use of these technologies.
Potential implications of AI-enabled technologies on the role of humans in conflict:
Reduced need for human soldiers: AI-enabled systems could potentially replace human soldiers in a variety of roles, such as combat, surveillance, and logistics. This could lead to a reduction in the number of human casualties in war.
Increased precision and lethality: AI-enabled systems could improve the precision and lethality of weapons, leading to more effective and efficient warfare. This could also reduce the risk of civilian casualties.
Faster decision-making: AI-enabled systems can process information and make decisions much faster than humans. This could give them a significant advantage in combat situations, where speed and accuracy are essential.
New types of warfare: AI-enabled technologies could enable new types of warfare that are not possible with human soldiers.
Ethical concerns: The use of AI-enabled weapons raises a number of ethical concerns, such as the potential for autonomous weapons to kill without human intervention. It is important to develop clear ethical guidelines for the use of AI in warfare.
The role of humans in future conflict:
Despite the potential for AI to revolutionize warfare, it is unlikely that humans will be completely replaced by machines in conflict. Humans will still be needed to provide oversight and control of AI systems, to make complex decisions, and to provide moral and ethical guidance.
The role of humans in future conflict is likely to change, however. Humans may be increasingly focused on tasks that require creativity, strategic thinking, and moral judgment. AI systems could complement human capabilities by providing real-time information, analyzing complex data, and automating routine tasks.
It is important to consider the potential implications of AI-enabled technologies on the role of humans in conflict, and to develop appropriate policies and regulations to ensure that these technologies are used responsibly and ethically.
The need for deep philosophical thinking in AI-enabled warfare:
The ethics of autonomous weapons: Is it ethical to use weapons that can kill without human intervention? What are the moral implications of using AI-enabled weapons to target and kill?
The responsibility for AI-enabled weapons: Who is responsible for the actions of AI-enabled weapons? Is it the developer who created the AI, the commander who ordered the use of the weapon, or the state that deployed the weapon?
The legality of AI-enabled weapons: Are AI-enabled weapons legal under international law? What are the legal implications of using AI-enabled weapons to target and kill?
The potential consequences and ethical, moral, and legal implications of AI-enabled warfare are complex and far-reaching. It is crucial to engage in deep philosophical thinking about these issues to develop appropriate policies and regulations for the use of AI in warfare.
The future of warfare is uncertain, but AI-enabled technologies will clearly play a significant role in it. Considering how these technologies may reshape the human role in conflict, and developing appropriate policies and regulations for their responsible and ethical use, remains essential.