The unchecked advance of Artificial Intelligence (AI) could pose a serious threat to human existence, according to a new report published in BMJ Global Health.
The report, titled ‘Threats by Artificial Intelligence to human health and human existence’, was authored by Frederik Federspiel, Ruth Mitchell, Asha Asokan, Carlos Umana, and David McCoy. Outlining risks reminiscent of The Terminator film series, in which a fully sentient AI takes over the world and becomes a formidable adversary to humans, the report’s abstract said: “While Artificial Intelligence (AI) offers promising solutions in healthcare, it also poses a number of threats to human health and well-being via social, political, economic and security-related determinants of health.
“We describe three such main ways misused narrow AI serves as a threat to human health: through increasing opportunities for control and manipulation of people; enhancing and dehumanising lethal weapon capacity; and by rendering human labour increasingly obsolescent.”
The researchers also considered AI’s capacity for constant self-improvement a risk factor. “We then examine self-improving ‘Artificial General Intelligence’ (AGI) and how this could pose an existential threat to humanity itself. Finally, we discuss the critical need for effective regulation, including the prohibition of certain types and applications of AI, and echo calls for a moratorium on the development of self-improving AGI.”
The report presented three threat segments under ‘Threats to human health and well-being from the potential misuse of AI’. These segments are:
● Threats to democracy, liberty, and privacy: AI can rapidly manipulate massive data sets; enable misinformation campaigns; and increase surveillance.
● Threats to peace and safety: AI can be used to develop and deploy lethal autonomous weapon systems.
● Threats to work and livelihoods: AI can cause large-scale replacement of the human workforce through automation.
The report listed threats from AGI, including “attacking or subjugating humans”, “disrupting systems” and “using up resources”.
AI linked with polarisation and extremist views
The report summary said that AI’s ability to rapidly organise and analyse massive data sets “can be put to good use”, for example in spreading information and countering terrorism. But this capability is a double-edged sword, because it could also be misused “with grave consequences”.
The authors contended that misuse of AI power had led to “the rise in polarisation and extremist views observed in many parts of the world” and had enabled a few to create “a vast and powerful personalised marketing infrastructure capable of manipulating consumer behaviour”.
AI-powered autonomous weapons can kill humans en masse
Lethal machines with a mind of their own and a very low opinion of humans are the staple stuff of Hollywood. The report’s authors present a very plausible picture of how this could happen in real life.
The report summary said: “Weapons are autonomous in so far as they can locate, select and ‘engage’ human targets without human supervision. This dehumanisation of lethal force is said to constitute the third revolution in warfare, following the first and second revolutions of gunpowder and nuclear arms.”
These autonomous lethal weapons could be firearms and explosives attached to “small, mobile and agile devices (e.g., quadcopter drones) with the intelligence and ability to self-pilot and capable of perceiving and navigating their environment”.
The report warned that such autonomous lethal weapons could be “cheaply mass-produced” in order to “kill at an industrial scale”.
The authors wrote: “For example, it is possible for a million tiny drones equipped with explosives, visual recognition capacity and autonomous navigational ability to be contained within a regular shipping container and programmed to kill en masse without human supervision.”
Efforts to use AI for good and prevent or limit the harm
The report referred to global efforts to use AI for the benefit of humankind and to prevent — or, at least, limit — the harm it can cause. It spoke of the High-level Panel on Digital Cooperation established by the United Nations in 2020, followed by a call from the head of the UN Office of the High Commissioner for Human Rights in September 2021, urging all nations “to place a moratorium on the sale and use of AI systems until adequate safeguards are put in place to avoid the ‘negative, even catastrophic’ risks posed by them”. Some progress has been made, but the report admitted that “the UN still lacks a legally binding instrument to regulate AI and ensure accountability at the global level”.