10 Facts About Artificial Intelligence That’ll Leave You with Mixed Feelings
We are in an age of immense technological advancement. The advances are happening at such a dizzying pace that we are left with little time to process them or come to terms with what they mean for us and our future. Though artificial intelligence is driving wonderful progress, such as the recent breakthrough on the protein-folding problem, it also leaves us wondering whether our future will be dystopian. Here are some facts about artificial intelligence that’ll leave you with mixed feelings.
1 IBM’s supercomputer, Summit, can calculate in just one second what would take you six billion years to finish. Even if every single person on the planet worked at one calculation per second, it would take 305 days to do the same number of calculations.
In 2014, the United States Department of Energy contracted IBM, Nvidia, and Mellanox for $325 million to build two supercomputers, Summit and Sierra, intended to surpass China’s Sunway TaihuLight. Summit, located at Oak Ridge National Laboratory, Tennessee, is used for civilian scientific research, while Sierra, located at Lawrence Livermore National Laboratory, California, is used for nuclear weapons simulations.
Summit has 4,608 nodes with 9,216 IBM POWER9 CPUs and 27,648 Nvidia Tesla GPUs in total; each node has over 600 GB of coherent memory and 800 GB of non-volatile RAM. It is capable of 200 petaflops (that’s 200,000 trillion calculations per second). It has 185 miles of fiber-optic cables and 250 petabytes of storage, equivalent to 74 years of HD video.
During a genomic analysis, the supercomputer clocked 1.88 exaops (exa meaning one billion billion) and is expected to reach 3.3 exaops, making it the first computer ever to perform operations at the exascale. As of November 2018, Summit is the fastest supercomputer in the world, followed by Sierra and Sunway TaihuLight. (1, 2)
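The headline figures can be checked with a few lines of arithmetic. The world-population figure below is an assumption (roughly 7.7 billion, circa 2018), which is why the second result lands near, rather than exactly on, the article’s 305 days:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600
WORLD_POPULATION = 7.7e9      # assumed, circa 2018
summit_flops = 200e15         # 200 petaflops = 200,000 trillion calculations/s

# One person doing one calculation per second:
years_alone = summit_flops / SECONDS_PER_YEAR
print(f"{years_alone / 1e9:.1f} billion years")   # prints "6.3 billion years"

# Every person on Earth working in parallel at one calculation per second:
days_everyone = summit_flops / WORLD_POPULATION / (24 * 3600)
print(f"{days_everyone:.0f} days")                # prints "301 days"
```

So one second of Summit’s work is indeed about six billion years of solo human arithmetic.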
2 There is an artificial intelligence program designed by Nvidia’s engineers that can create realistic photos of people, none of whom exist in real life.
Since the blurry, vague, black-and-white faces first generated in 2014, the artificial intelligence program known as a “generative adversarial network” (GAN) has come a long way in just four years. The program uses a method called “style transfer,” which takes the characteristics of one image and blends them with those of others to create new ones. The same method is used in apps like Prisma and Facebook to convert a photo into an impressionist or cubist painting.
As shown in the image above, the characteristics of the people in the source A images are blended with those of the people in source B. The result is 15 different images of people who don’t really exist yet look real. There are still some finer details that the program gets wrong. Often the eye color differs between the two eyes, hair looks blurry or as if painted with a brush, or the ears are asymmetric. If there are text or numbers in the background, they appear illegible. (1, 2)
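The mixing step can be illustrated with a toy sketch. This is not Nvidia’s code: in a StyleGAN-like model, each layer of the generator is fed a “style” vector controlling features at one scale (coarse pose down to fine texture), and mixing means taking coarse-layer styles from one face and fine-layer styles from another. Here, plain NumPy vectors with assumed shapes stand in for the network’s activations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-layer "style" vectors for two generated faces.
n_layers, style_dim = 14, 512
styles_a = rng.normal(size=(n_layers, style_dim))  # source A
styles_b = rng.normal(size=(n_layers, style_dim))  # source B

def style_mix(a, b, crossover):
    """Take coarse styles (layers below crossover) from A, fine styles from B."""
    mixed = a.copy()
    mixed[crossover:] = b[crossover:]
    return mixed

mixed = style_mix(styles_a, styles_b, crossover=4)
# Layers 0-3 (pose, face shape) come from A; layers 4-13 (hair, color) from B.
assert np.array_equal(mixed[:4], styles_a[:4])
assert np.array_equal(mixed[4:], styles_b[4:])
```

Feeding the mixed vector through the generator is what produces a new face sharing traits of both sources.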
3 In 2017, Facebook created two AI chatbots that talk to each other, but they had to be shut down because they began making up their own language to communicate after finding English less efficient.
Artificial intelligence often operates on a reward system, with the program being rewarded for the right action and punished for the wrong one, enabling it to learn. The two artificial intelligence (AI) agents, Bob and Alice, were capable of negotiating with each other in order to reach agreements. Apparently, they found plain English inefficient to use and began using phrases that seem to be gibberish but are actually a sort of shorthand they found more efficient.
In one conversation Bob would say “I can i i everything else,” and Alice would reply “Balls have zero to me to me to me…” According to the researchers, the repetition of “i” and “to me” indicates how the AI operates. For example, when Bob said “i i can i i i everything else,” it meant “I’ll have three and you have everything else.”
According to Facebook AI researcher Dhruv Batra, in the absence of a reward, the robots just “drift off from understandable language and invent code-words for themselves. Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item.” (source)
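Batra’s example can be made concrete with a toy decoder (a hypothetical illustration, not Facebook’s code): the number of times a token is repeated encodes a quantity.

```python
def decode_shorthand(utterance, token="the", item="copies"):
    """Toy decoder for the repeated-token shorthand Batra describes:
    repeating a token n times encodes the quantity n."""
    count = utterance.split().count(token)
    return f"{count} {item}"

print(decode_shorthand("the the the the the"))  # prints "5 copies"
```

Such a code is opaque to humans but perfectly unambiguous between two agents that share it, which is why it scores well under the agents’ own reward signal.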
4 There is an artificial intelligence program that is capable of distinguishing authentic Jackson Pollock paintings from fakes with 93% accuracy.
Jackson Pollock’s iconic paintings often inspire both awe and critique, with some saying even children and monkeys could make them. They are also considered easy to fake. In 2014, an East Hampton-based painter, John Re, was arrested over 60 forgeries for which collectors had paid $1.9 million.
Lior Shamir from Lawrence Technological University, Michigan, USA, has developed software that can “characterize the low-level numerical differences between original Pollock drip paintings and drip paintings done by others attempting to mimic this signature style” through computational methods.
The software analyzes a scan of a painting and extracts 4,024 numerical image descriptors, such as the fractal patterns that form during the drip-painting motion, many of which are beyond human visual perception. The program has been shown to be 93% accurate. (source)
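The article doesn’t detail the 4,024 descriptors, but fractal features of drip paintings are classically estimated by box counting. The sketch below shows one such descriptor under that assumption: it estimates the box-counting dimension of a binary image of “paint” pixels (a filled square should score close to 2, a sparse drip pattern lower).

```python
import numpy as np

def box_count_dimension(image, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a binary 2-D image.

    Counts how many s-by-s boxes contain any 'paint' pixel, then fits
    log(count) against log(1/s); the slope is the dimension estimate.
    """
    counts = []
    h, w = image.shape
    for s in sizes:
        occupied = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if image[i:i + s, j:j + s].any():
                    occupied += 1
        counts.append(occupied)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# A completely filled canvas is 2-dimensional; the estimate should be ~2.
solid = np.ones((64, 64), dtype=bool)
print(round(box_count_dimension(solid), 2))  # prints 2.0
```

A classifier then compares descriptors like this one, computed from authentic Pollocks, against the same measurements from a suspect canvas.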
5 By tracking your eye movements, artificial intelligence can predict your personality, recognizing four of the “Big Five” personality traits.
The University of South Australia, in collaboration with the University of Stuttgart, Flinders University, and the Max Planck Institute for Informatics, used state-of-the-art machine-learning algorithms in research showing the relationship between human personality and eye movements. After tracking the eye movements of 42 participants during their everyday tasks, the researchers evaluated their personality traits through questionnaires.
They found that the algorithm successfully recognized four of the Big Five personality traits (neuroticism, extraversion, agreeableness, and conscientiousness). According to Dr. Tobias Loetscher of the University of South Australia, these results can improve human-machine interaction by helping computers and robots behave more naturally and interpret human social signals better. (source)
6 Soon, your smart devices could be equipped with AI programs that use satellite data to detect the amount of pollution generated by power plants in real time. Your power consumption can then be adjusted to lessen pollution.
The nonprofit artificial intelligence company WattTime was selected by Google’s philanthropic wing, Google.org, for a $1.7 million grant via the Google AI Impact Challenge. The idea is to use publicly available satellite images, like those from Copernicus and Landsat, along with data from a few private companies like DigitalGlobe. These images are taken at various wavelengths, including thermal infrared to detect heat, and are processed by algorithms to detect signs of emissions from power plants.
WattTime has recently launched Automated Emissions Reduction (AER), a technology that enables effective monitoring and use of emissions data by making it available to the public. AER detects when a power plant is producing the cleanest energy. It can be installed through a smart plug on smart devices such as thermostats, appliances, and electric vehicles. Using real-time grid data and machine learning, the program automatically adjusts power consumption to ensure the smallest possible carbon footprint. (source)
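The scheduling idea behind AER can be sketched in a few lines. This is not WattTime’s actual API; the forecast numbers and function names below are hypothetical, but the logic is the same: defer a flexible load (say, an EV charger) to the hour when the grid’s forecast carbon intensity is lowest.

```python
# Hypothetical hourly forecast of grid carbon intensity (gCO2 per kWh).
forecast = {
    "18:00": 510, "19:00": 480, "20:00": 350,
    "21:00": 210, "22:00": 190, "23:00": 260,
}

def cleanest_hour(forecast):
    """Pick the hour with the lowest forecast carbon intensity."""
    return min(forecast, key=forecast.get)

def should_run_now(now, forecast, threshold=250):
    """Run immediately only if the grid is already fairly clean."""
    return forecast[now] <= threshold

print(cleanest_hour(forecast))            # prints "22:00"
print(should_run_now("18:00", forecast))  # prints False
```

In the real system, the machine-learning part is producing that forecast from satellite and grid data; the smart plug merely acts on it.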
7 Facebook’s new AI program DeepFace is almost as good as humans at recognizing people in photographs. While humans are correct 97.53% of the time, DeepFace is correct 97.25% of the time, only a fraction of a percent behind us.
Tag suggestions on Facebook are so accurate because of its advanced facial-recognition technology. In 2014, researchers at Facebook published a paper about a new artificial intelligence system called “DeepFace.” Unlike other existing software, DeepFace creates three-dimensional models of the faces in photos. These models are analyzed, using 120 million different parameters, by a neural-network technique known as “deep learning.” The result is that DeepFace can detect faces even from side views with great accuracy. (source)
8 NASA sometimes uses evolutionary algorithms that mimic Darwinian evolution in order to design antennas for radio communications, especially when provided with unusual design specifications. The result is highly efficient yet strangely shaped evolved antennas.
Detecting unusual radiation patterns requires antennas that meet unusual design requirements, which are created using an evolutionary algorithm that imitates Darwinian evolution. Starting with simple antenna shapes, the program adds or modifies elements in a semi-random manner to design new shapes. The new antennas are then evaluated; the ones with good scores are selected and the bad ones discarded, just as in natural selection.
This process is repeated until a shape evolves that satisfies the criteria and outperforms the best manual designs. The first evolved antenna was made in the mid-1990s. A recent example is the evolved antenna used on Space Technology 5 (ST5), a NASA mission successfully launched in 2006 to take measurements in Earth’s magnetosphere. The LADEE spacecraft also uses evolved antennas. (source)
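The mutate-evaluate-select loop described above can be sketched as a toy program. NASA’s version encodes branching wire geometries and scores each candidate in an electromagnetic simulator; this sketch swaps the simulator for a trivial fitness function (match a target bit pattern) but keeps the same loop structure:

```python
import random

random.seed(42)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]  # stand-in for desired behavior

def fitness(design):
    """Toy score: how closely the design matches the target.
    A real antenna run would call an electromagnetic simulator here."""
    return sum(a == b for a, b in zip(design, TARGET))

def mutate(design, rate=0.1):
    """Flip each element with small probability (semi-random modification)."""
    return [1 - g if random.random() < rate else g for g in design]

# Start from a population of random "simple" designs.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for generation in range(200):
    # Evaluate and keep the better half (selection), discard the rest.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Refill the population with mutated copies of the survivors.
    population = survivors + [mutate(s) for s in survivors]

best = max(population, key=fitness)
print(fitness(best))  # typically reaches the maximum of 12 within a few dozen generations
```

Because the survivors are carried over unchanged, the best score never regresses, which mirrors why evolved antennas steadily converge on designs that outperform hand-made ones for odd specifications.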
9 According to a Google AI bot, the purpose of living is “to live forever.”
According to a paper released by Google in 2015, an advanced type of “chatbot” that learns how to talk from examples of existing dialogue began giving creative answers to philosophical questions. The existing dialogue included an IT helpdesk troubleshooting chat service and movie transcripts taken from OpenSubtitles. Though the bot seems to have performed reasonably well with troubleshooting, the conversation took a whimsical turn with the OpenSubtitles data set, as can be seen in the above image. (source)
10 Adrian Thompson, a researcher at the University of Sussex, let a computer program a chip, and the result was highly efficient even though its inner workings are almost impossible to understand.
When design specifications don’t have enough information or an existing circuit needs to modify itself to make up for flaws or changes in an operational environment, the solution is evolvable hardware. The idea was pioneered by Thompson, a researcher at the Department of Informatics, University of Sussex, England, when he created a tone discriminator with just 40 programmable logic gates.
The idea is to use an evolutionary algorithm to develop a new circuit from existing ones. One example where this is useful is when a deep-space probe encounters sudden high radiation levels and its circuit must adapt itself to retain its original function despite the drastic change in environment. (source)