People not only fail to understand the threat of AI; there is a huge, deeply ingrained misunderstanding of that threat, promoted by the media. It will probably prove a fatal (for humanity) misunderstanding if we do not straighten it out now and act immediately against the current misuse of AI, which could kill billions of us, or all of us.
Our thinking is understandably, inherently limited by our past experiences. Some great movies and TV shows have created a strong collective mindset about AI risk that is dangerously wrong. Our frame of reference for the AI threat is films about bad computers and robots. Most media coverage, most computer scientists and AI experts discussing the AI threat, and the public are wrongly focused on evil computers. We have all seen 2001: A Space Odyssey, Battlestar Galactica and Terminator, and they drive our thinking about the AI threat in a deeply misleading way. Taleb also notes that we are influenced more by emotions and personal experiences than by statistics. These emotionally powerful movies dominate our thinking about the AI threat.
AI experts are warning that computers will soon reach a stage where artificial general intelligence (AGI) software (human-like intelligence with the ability to learn on its own) develops some form of consciousness or self-defense instinct, and that AI computers will then employ robots and other means to kill humans. This may or may not be the biggest risk, but the popularity of the AGI bad-computers-and-robots threat undermines dealing with the current AI threat. If AI is a future threat rather than a current risk, there is little urgency to act now to stop it. This is why the misunderstanding of AI threats is dangerously, potentially fatally wrong and must be corrected.
The big, fatal problem with AI is that bad people (not bad computers) will take other technologies and means of causing tremendous harm and magnify their destructive power. This is the far more likely, sooner, and inevitable threat of AI, not evil computers and robots (not yet). Here are some examples:
AI now being used to design drugs that cure illnesses and help people will be used by North Korea, terrorists, biologists concerned about human overpopulation, and others to bioengineer a devastating virus to kill off Homo sapiens. A nation state might develop a vaccine before releasing such a virus, then quickly close its borders as the virus spreads in the targeted country.
AI will be used to develop new methods of enriching uranium or other materials to make nuclear weapons. This was illustrated in the Collapse Survivor App in a military-style training exercise, where an easy way to enrich uranium enabled Iranian-backed terrorists to detonate nuclear devices in downtown New York City, Kansas City and Los Angeles:
AI will be used to develop new poisons, perhaps one optimized for municipal water systems. A North Korean or Iranian agent could then dump a single gallon of an undetectable, AI-optimized poison into a water system, versus the tanker truck an old-fashioned poison would require.
AI can be applied to nanotechnology to develop self-replicating nanobots that consume all plant matter on Earth while building more of themselves (the “gray goo” disaster scenario).[i] Nanotechnology researchers are trying to be careful not to accidentally create or release something with this kind of disastrous outcome. Bad people, groups, and nations will use AI and nanotechnology to deliberately cause such a catastrophe.
AI-controlled drones delivering viruses, poisons, or nano devices to kill humans are part of the current threat we face. The only way to end this list is with a generic entry: AI designs some other novel means, including ones humans would not even think of, to kill billions of people efficiently and cost-effectively.
We may never even reach the AGI-level threat of bad computers and robots, because AI will likely be used by bad people to kill off humans before that happens, and could set back our economy and society so much that we never develop AGI. That is a lesson of history, and the likely outcome when you carefully analyze the threat as an intelligence officer (my profession) would. Every technology mankind has ever developed has been abused and used to kill. Fire, gunpowder, chemicals, the Internet, social media, phones, you name it: it will be misused.
A recent Collapse Survivor training simulation dealt with a scenario covering the threats of artificial intelligence and how people around the world decide that they are not simply going to let big companies develop and promote AI and allow it to kill off most of humanity. AI is the worst technology mankind has developed in terms of its ability to kill people, worse than nuclear weapons, bioengineering, or any other technology, because AI can be misused by bad people to design the most effective ways to kill with existing technologies, and to develop innovative, entirely new ways to kill for which we are completely unprepared. In this six-day simulation, anti-AI groups started a protest movement that generated widespread support, and a ban on AI was achieved. A video covering this simulation is available at:
New technologies, especially AI, have fundamentally changed our safety and likelihood of survival. We have entered the “Age of Collapse,” an era in which AI leverages existing technologies and develops new ones that will kill millions or billions of people and cause a collapse (an economy that stops operating, widespread loss of law and order) that may kill even more.[ii] When mankind suffers a severe disaster such as an H5N1 pandemic (roughly 60% lethal) or nuclear attacks, our fragile, interdependent economic system, with irresponsible government and a population increasingly dependent on long-distance water and food shipments and unable to survive without them, will fail; law and order will vanish; and most of mankind may perish.
Artificial Intelligence has raised the likelihood and severity of all existing collapse threats and will invent new ones.
AI is a major reason why the latest update of the Disaster Preparedness “Probability of Collapse Model” raised the estimated annual likelihood of a collapse to 16-57%. The likelihood of a collapse disaster is highly uncertain, but it is not low. A video on this model and estimate is available at:
Artificial intelligence makes all the man-made collapse disaster threats worse: more deadly, easier to execute, harder to detect. Every existing technology can now be made more lethal by bad people employing artificial intelligence to develop more efficient means of killing people and evading countermeasures. Long before AI and robots reach the stage of artificial general intelligence, where AI systems develop a self-preservation instinct, AI will be used by bad people to kill off billions or most of humanity.
[i] https://www.britannica.com/technology/grey-goo
[ii] Dr. Drew Miller, “The Age of Bioengineered Viral Pandemics and Collapse,” Institute for Defense Analyses, IDA Document D-5335, October 2014; https://www.ida.org/research-and-publications/publications/all/t/th/the-age-of-bioengineered-viral-pandemics-and-collapse