BY TSHILIDZI MARWALA
AS ARTIFICIAL intelligence (AI) breaks new frontiers, it stands to reason that the very nature of conflict will undergo a seismic change too. We are already starting to see this manifest. Last year, for example, the Iranian nuclear scientist Mohsen Fakhrizadeh was assassinated. What was intriguing is that the killing was carried out from a computer screen at an undisclosed location more than 1,000 miles away, through an AI system: a high-tech, computerized sharpshooter fitted with AI and multiple camera eyes, operated via satellite and capable of firing 600 rounds a minute. The New York Times referred to it as a “straight-out-of-science-fiction story”.
Though it sounds somewhat fantastic, it is a development we must take seriously. The very nature of conflict has undergone a paradigm shift of its own. The United Nations (UN) stated a couple of years ago that this is a new era of conflict, as “conflicts now tend to be less deadly and often waged between domestic groups rather than states”. Yet, as we watch Russia’s invasion of Ukraine continue to unfold, we cannot dismiss the threat of greater interstate conflict in the future, particularly given the changing global dynamics. There are multiple levels to this: we could see AI weaponized, or a war over who will emerge as the first AI superpower. Alternatively, as my colleagues Eyasu Habtemariam and Monica Lagazio and I proposed in 2007, AI could be used to predict militarized interstate disputes, which implies that the picture is not as dire as it seems. The crux of the matter is that technology will determine how wars are fought, and we have to respond now. As Carayannis and Draper concluded in a journal article this year: “Artificial superintelligence emerging in a world where war is still normalised constitutes a catastrophic existential risk.”
The question then becomes: how do we mitigate the risk factors? One argument is for greater mediation. For example, states could establish a Universal Global Peace Treaty to constrain the risk of AI warfare. In 2014, the UN Convention on Certain Conventional Weapons (CCW) discussed lethal autonomous weapons systems for the first time but failed to reach consensus on the issue. In 2018, the #CyberMediation initiative was launched to examine the impact of technology on mediation, including its benefits and risks, and to create a platform for collaboration between technology companies, mediators and policymakers. Another argument is that we need adequate prediction of the possibility of conflict between states. For example, Habtemariam, Lagazio and I found that neural networks, algorithms that learn to recognize patterns in data, can be used to predict militarized interstate disputes, and that Support Vector Machines (SVMs), which classify data by finding an optimal separating boundary between classes, proved even more effective as a prediction technique. It is apparent that much more can be gained from tapping into the potential of AI.
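To make the SVM approach concrete, here is a minimal illustrative sketch of training an SVM classifier to label country pairs as “dispute” or “no dispute”. The features and data below are synthetic stand-ins (the actual studies used real dyadic variables such as democracy scores, alliance ties and capability ratios); the scikit-learn library is assumed.

```python
# Illustrative only: an SVM classifying synthetic "dyads" as
# dispute (1) vs no dispute (0). Features are random stand-ins
# for real dyadic variables used in the published studies.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 500 synthetic dyads, 4 hypothetical features each
# (e.g. joint democracy, alliance tie, capability ratio, distance).
X = rng.normal(size=(500, 4))
# Hypothetical labeling rule: disputes are likelier when the
# first two features are jointly high.
y = (X[:, 0] + X[:, 1] > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# An RBF-kernel SVM finds a maximum-margin decision boundary
# between the two classes in feature space.
clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

The appeal of the SVM here is the margin-maximizing boundary, which tends to generalize well when labeled conflict data are scarce; on real data, feature construction and class imbalance would dominate the modeling effort.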
For example, through Natural Language Processing (NLP), which analyses human language, we can follow debates on social media channels to better understand the dynamics in specific regions and the impact these could have on the international community. There also needs to be a push toward achieving the tenets set out in the Sustainable Development Goals (SDGs), as the underlying factors they address are often drivers of conflict. Better tracking systems, using satellites or drones, could give us a clearer picture of resource challenges at a state level. Moreover, there is scope to use AI at the decision-making level. As I have written extensively, machines represent “the ideal concept of intelligence” and thus should be used in decision-making, as they exemplify rationality. These are strategies we need to adopt at a global level to ensure that we center human good above all else.
The takeaway is that shifting the focus from how AI can escalate global conflicts to how AI can prevent them is central to this endeavor. To paraphrase the UN, the concern has to be with exploring these emerging technologies so that they can be deployed to de-escalate violence and increase international stability. The English author H G Wells once said, “If we don’t end war, war will end us.” As technology becomes more entrenched in our lives, this should be our attitude towards the use of AI in conflict. The aim is not to best each other with intelligent weaponry but to ensure that deadly and often senseless conflict is avoided. To lean into the words of World Economic Forum (WEF) Executive Chairman Klaus Schwab, this is how we prioritize the promise of the fourth industrial revolution (4IR) over its peril.
– The writer is the outgoing Vice-Chancellor and Principal of the University of Johannesburg and on March 1, 2023, will become Rector of the United Nations University