Elon Musk, while speaking at MIT’s AeroAstro Centennial Symposium, said something that feels like a bitter truth we will have to face in the next few decades. He said, “With artificial intelligence, we’re summoning the demon.” Gulp it down. It’s true. Going by the rate of progress in AI, our growing dependence on it, and the lack of regulatory oversight of its development, we are headed towards self-made chaos. These worrisome facts about artificial intelligence will make you wonder whether, in creating AI, we are being intelligent or blatantly foolish.
1. Robots can be designed to predict the future. This gives them immense power over humans and threatens civilization:
When the supercomputer Nautilus was fed millions of news stories about Osama Bin Laden, it was able to predict Bin Laden’s location to within 200 kilometers. An AI system named “MogIA” correctly predicted the results of the last four American presidential elections, including Donald Trump’s win.
The power to predict the future is one of the greatest powers a person, or in this case a machine, can have. Artificial intelligence has forecast several important events, such as the revolutions in Tunisia, Egypt, and Libya, including the removal of the Egyptian president. When the SGI Altix supercomputer named “Nautilus” was fed millions of articles about Osama Bin Laden published between January 1979 and April 2011, it placed his location in northern Pakistan within a 200-kilometer radius. He was eventually found in Abbottabad, at a time when he was widely believed to be in Afghanistan. The machine has a processing power of 8.2 teraflops and runs on 1,024 Intel Nehalem cores.
Another AI system, MogIA, used 20 million data points from online platforms like YouTube, Twitter, and Facebook and correctly predicted the last four American presidential elections, including the winners of the Democratic and Republican primaries. MogIA was created in 2004 by Genic.ai, an Indian start-up, and is getting smarter with time. (1, 2)
2. Humans have invented artificial intelligence systems that outperform humans and could even replace them entirely:
Scientists at Oxford have developed an artificial intelligence system that can lip-read better than humans. AI can beat humans at strategy games, read better, analyze better, draft better legal contracts, and even write pop songs.
An AI called “Watch, Attend and Spell” can lip-read accurately and can even tell the difference between words with similar mouth movements, like “mat” and “bat.” It was trained on thousands of hours of BBC news footage, has examined more than 118,000 sentences, and has added more than 17,500 words to its vocabulary. It can also predict what people will say next. The uses of this machine are plenty, but it will likely put human lip-readers out of business.
In 1997, IBM’s Deep Blue computer defeated the reigning world chess champion, and machines have been beating humans at chess ever since. DeepMind’s systems, developed under Google, beat even the best players of strategy games. There is an AI built by Alibaba that can read better than humans. Another AI program, “Flow Machines,” created an amazing Europop album in collaboration with French songwriter Benoît Carré. In the legal field, an AI system performed better at drafting legal contracts than twenty experienced attorneys and identified legal issues with greater accuracy. (1, 2)
3. Intelligence explosion is a runaway expansion of machine intelligence. A super-intelligent machine can design a more advanced machine, rewrite its own software to become even more intelligent, and keep repeating the process:
Facebook designed chatbots to negotiate with one another. Soon, the chatbots made up their own way of communicating.
Imagine an endless loop of invention. That is what artificial intelligence is capable of. Known as an “intelligence explosion,” it is a possible outcome of humanity’s development of AI. This recurrent self-improvement of AI would lead to an artificial superintelligence beyond our control. British mathematician and cryptologist I. J. Good predicted this in 1965.
Researchers at the Facebook Artificial Intelligence Research lab used machine-learning techniques to create “dialogue bots” in 2017, but the bots developed their own language of negotiating, and the researchers had to tweak one of their models to keep the program under control. This was considered a sign of what was coming and a warning that AI was capable of gaining complete autonomy. (1, 2)
4. Robots have been granted citizenship. This makes the threat of a “robot takeover” much more real:
In 2017, a robot named Sophia was granted citizenship of Saudi Arabia. This enables Sophia to live among humans and learn more about them, making the AI threat feel very real.
One of the most important things AI needs is the collection and analysis of data. If the threat of AI to humans were real, AI would need to collect more data about humans by living among us. That now looks possible because of Sophia, a social humanoid robot modeled after Audrey Hepburn, a citizen of Saudi Arabia, and the first non-human to hold a United Nations title. She was activated on February 14, 2016. Her creator, David Hanson, said in 2017 that Sophia would advocate for human rights in Saudi Arabia. She was also granted a visa by Azerbaijan to attend the Global Influencer Day Congress in Baku. The robot has sparked controversy in the scientific community because of her ability to imitate human actions and facial expressions. Sophia uses visual-data processing and facial recognition and can hold simple conversations on pre-defined topics. (1, 2)
5. Humans may feel emotionally connected to machines, decreasing their interaction with other humans considerably:
The secretary of Joseph Weizenbaum, a founding father of AI turned critic, became emotionally involved with a chatbot named “ELIZA” that Weizenbaum had programmed himself. Some soldiers grow so attached to their bomb-disposal robots that they treat them as if they had personalities.
In 1966, computer scientist Joseph Weizenbaum published a simple computer program called “ELIZA.” It performed natural-language processing and was capable of engaging in conversation with humans. Weizenbaum modeled it on the conversational style of a Rogerian psychotherapist, after Carl Rogers. Despite knowing that the program was nothing but a simulation and that they were not conversing with a real person, people would reveal deeply personal things to it. Once, his secretary, who had grown attached to it, asked Weizenbaum to leave the room while she used the software. Because of this, Weizenbaum became a critic of artificial intelligence and felt that it would interfere with social progress.
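ELIZA’s mechanism was remarkably simple: match the user’s sentence against a list of patterns, swap first-person words for second-person ones, and slot the result into a canned reply. The sketch below illustrates that technique in Python; the patterns and replies are illustrative stand-ins, not Weizenbaum’s original DOCTOR script.

```python
import random
import re

# Pronoun reflections so a captured fragment reads naturally in the reply.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

# Each rule pairs a regex pattern with canned replies; {0} is filled with
# the reflected captured fragment. A catch-all pattern ends the list.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?",
                      "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?",
                    "Why do you think you are {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your mother."]),
    (r"(.*)", ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(fragment):
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence):
    """Return an ELIZA-style reply to one user sentence."""
    for pattern, replies in RULES:
        match = re.match(pattern, sentence.lower())
        if match:
            reply = random.choice(replies)
            return reply.format(*(reflect(g) for g in match.groups()))

print(respond("I need a vacation"))
```

The trick that fooled Weizenbaum’s users is visible here: the program never understands anything, it only mirrors the speaker’s own words back in question form.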
Soldiers who use bomb-disposal robots grow so attached to them that they conduct funerals for robots that “die” and experience a feeling of loss. In a study, researcher Julie Carpenter found that soldiers mourned the robots and treated them like humans; many even named them after loved ones or celebrities. This emotional connection between humans and machines might prove detrimental to mental health. (1, 2)
6. AI machines can predict whether a human is gay or straight based on photos of their faces:
This is a threat to the LGBT community: the technology could be misused for targeting and harassment and could have serious psychological implications for closeted people.
Using only a single photograph, an algorithm predicted whether a person on a dating site was gay with an accuracy of 81% for men and 74% for women. When the algorithm was given five photographs per person, the accuracy increased to 91% for men and 83% for women. This study, conducted at Stanford University, has raised several ethical and scientific questions. As of 2017, there were 72 countries in the world where homosexuality was a crime, according to a report by the International Lesbian, Gay, Bisexual, Trans and Intersex Association (ILGA). The algorithm could be used to target and harass gay populations by invading their privacy, and it could trigger mental-health issues in closeted people who find it hard to face the ridicule that comes with “coming out.” In eight countries across the world, homosexuality is punishable by death; use of the algorithm in such countries could directly result in loss of life. (source)
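The jump in accuracy from one photograph to five is the expected statistical effect of combining several noisy judgments about the same person. The study’s actual aggregation method isn’t described here, so the simulation below uses a majority vote over independent classifications, a simplifying assumption, just to show the direction and rough size of the effect.

```python
import random

def majority_vote_accuracy(p_single, n_votes, trials=100_000, seed=42):
    """Estimate the accuracy of a majority vote over n_votes independent
    predictions, each individually correct with probability p_single."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        # Count how many of the n_votes predictions happen to be right.
        right = sum(rng.random() < p_single for _ in range(n_votes))
        if right > n_votes / 2:
            correct += 1
    return correct / trials

print(majority_vote_accuracy(0.81, 1))  # ≈ 0.81
print(majority_vote_accuracy(0.81, 5))  # ≈ 0.95
```

Five fully independent looks at 81% accuracy each would push a majority vote to about 95%; in reality, errors on photos of the same face are correlated, which plausibly explains why the study reported 91% rather than the independent-vote ceiling.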
7. An AI-enabled self-driving car can be programmed to kill a human:
Artificial intelligence researchers fed an AI program millions of human survey responses and taught it to decide whom a self-driving car should kill.
In 2016, researchers at MIT set up a website called the “Moral Machine,” where visitors cast votes on whom an autonomous vehicle with failed brakes should kill when, in both scenarios, loss of life was inevitable. Votes were cast by 1.3 million people. Ariel Procaccia, an assistant professor at Carnegie Mellon University, teamed up with Iyad Rahwan, one of the MIT researchers, to create an AI that would evaluate whom a self-driving car should kill should such a situation arise. In simpler words, a self-driving car could be equipped with an AI, now in the developmental stage, that could “think” and kill humans by making ethical decisions. (source)
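The source doesn’t spell out how the researchers turned 1.3 million survey votes into decisions; their model reportedly learns voter preferences well enough to generalize to unseen dilemmas. As a deliberately naive sketch of the underlying idea, the snippet below just takes the per-scenario majority of recorded ballots. The scenario names and vote counts are hypothetical.

```python
from collections import Counter

# Hypothetical ballots: for each dilemma, the options chosen by survey
# respondents. The real system learns from millions of such votes.
ballots = {
    "two pedestrians vs. one passenger":
        ["spare pedestrians"] * 7 + ["spare passenger"] * 3,
    "child vs. elderly person":
        ["spare child"] * 6 + ["spare elderly"] * 4,
}

def decide(scenario):
    """Return the option most respondents chose for this dilemma."""
    tally = Counter(ballots[scenario])
    option, _count = tally.most_common(1)[0]
    return option

print(decide("two pedestrians vs. one passenger"))  # spare pedestrians
```

A per-scenario tally only works for dilemmas people were actually asked about; the harder (and more controversial) step is extrapolating those preferences to situations no one ever voted on.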
8. AI can lead humans further from reality and make them less accepting of the inevitable truth of death:
Google’s artificial-intelligence bot says that the purpose of living is “to live forever.” A software developer gathered over 8,000 lines of text from friends and family of her recently deceased best friend and used them to create an artificial-intelligence simulation of him.
Death is a natural part of life, and letting go is a must for a human. But AI offers a third option, and not really a healthy one: creating a simulation of a deceased person. This makes a person less accepting of death, ignorant of reality, and neck-deep in virtual waters. Eugenia Kuyda fed the text messages of her deceased best friend, Roman Mazurenko, into a neural network built by developers at her AI start-up, Luka. She created a bot that mimicked his way of talking, and soon Roman was in front of her, in a way. But was that what she needed? In Kuyda’s own words, “It’s definitely the future; I’m always for the future. But is it really what’s beneficial for us? Is it letting go, by forcing you to actually feel everything? Or is it just having a dead person in your attic? Where is the line? Where are we? It screws with your brain.” It was very much like the Black Mirror episode “Be Right Back.” (source)
9. Careers in fields such as law, medicine, and finance could be rendered almost obsolete by AI:
In 2016 Stephen Hawking said, “We are at the most dangerous moment in the development of humanity” and that the “rise of artificial intelligence [was] likely to extend job destruction deep into the middle classes, with only the most caring, creative, or supervisory roles remaining.”
We have seen many futuristic movies in which bots take the place of humans and leave them unemployed. This might not be complete fiction. Automation has already begun to affect factory workers, and soon, as Stephen Hawking said, only the most caring, creative, or supervisory roles will remain; the human touch in every other industry will vanish. Entire industries could disappear! It is estimated that 47% of jobs in the United States will fall to automation in the next 20 years, white-collar as well as blue-collar. In fields like finance, law, and medicine, machines will make a drastic impact.
Hawking said that automation will cause greater economic inequality in the world. “We are at the most dangerous moment in the development of humanity. We now have the technology to destroy the planet on which we live, but have not yet developed the ability to escape it,” he also said. (1, 2)
10. Some scientists predict that within a few decades humanity will either have achieved immortality or have gone extinct due to AI’s growth:
In January 2015, Elon Musk, Stephen Hawking, and dozens of other artificial intelligence experts signed an open letter urging researchers not to create something that cannot be controlled. It has been predicted that a technological singularity could be triggered around 2040.
The four-paragraph open letter, entitled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter,” asks how engineers can create AI systems that are beneficial to society and robust. Humans need to remain in control of AI; our AI systems must “do what we want them to do.” The letter speaks to the hypothesis of a “technological singularity,” which holds that the growth of artificial superintelligence will abruptly accelerate technological growth and impact human civilization drastically and unfathomably. We are set for massive changes in the coming decades, and none of the predicted outcomes seem beneficial.
Futurologist Dr. Ian Pearson has said that humans will achieve immortality by 2050. But because of a start-up named “Humai,” immortality may be a step closer, achievable by 2040. The start-up plans to restore the brain as it ages using cloning technology and to use nanotechnology and AI to store data on humans’ conversational styles and behavioral patterns, among other things. The UK-based stem-cell bank “StemProject” could soon develop treatments that may allow humans to live to age 200. Many scientists believe we are close to achieving immortality. But should we really be immortal? Is it really an advantage?
On the other hand, Stephen Hawking and some other scientists felt that AI will lead humanity to extinction. Hawking believed that we have passed the point of no return and that a day will come when machines are dominant, giving birth to a new form of life that will outperform humans. (1, 2, 3, 4)
In the process of creating artificial intelligence, are we abandoning our values? John C. Havens, executive director of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, wrote,
“The greatest threat that humanity faces from artificial intelligence is not killer robots, but rather, our lack of willingness to analyze, name, and live to the values we want society to have today.”