Artificial intelligence (AI) refers to the broad branch of computer science focused on creating systems capable of performing tasks that normally require human intelligence. AI systems can take the form of software (e.g., algorithms), hardware (e.g., robotic arms on an assembly line), or a combination of both (e.g., semi-autonomous vehicles). While the term "artificial intelligence" may bring to mind images of sentient humanoid robots like those of popular science fiction literature and film, most AI exists as computer systems composed of algorithms and large amounts of data entered by humans. Because they simulate human intelligence, AI systems depend upon human input to acquire skills, knowledge, and reasoning. AI has enabled the automation of many tasks, from grading exams and transcribing spoken words to vacuuming carpets and driving cars.
Most people living in the United States already encounter or use AI in their daily lives and welcome its further development. In a 2017 Northeastern University/Gallup poll, 85 percent of US adults reported using at least one device, program, or service that featured AI elements, and 77 percent indicated that AI's impact on people's lives will be "mostly" or "very" positive over the next decade. Despite the generally optimistic public sentiment, many policymakers, ethicists, and activists advise caution, warning that the expansion of AI could spur massive job losses, worsen economic inequality, and create new forms of discrimination.
Early Developments in Artificial Intelligence
While the theoretical roots of AI stretch back to the methods of reasoning developed by the ancient Greek philosopher Aristotle (384–322 BCE), AI as a modern academic discipline did not begin until the mid-twentieth century. Pioneers of the field were inspired to consider the question "Can machines think?" as posed by Alan Turing in a landmark 1950 paper, "Computing Machinery and Intelligence." Turing's paper developed a hypothetical test, which he called the "imitation game," to assess whether a computer could convince a human interrogator that it, too, was human. Commonly referred to as "the Turing test," the original test continues to be modified for different criteria and used to evaluate the abilities of AI technologies.
Many AI fundamentals were developed in the field's early decades. Cognitive and computer scientist John McCarthy coined the term artificial intelligence at a small conference at Dartmouth College in 1956. The first computer program meant to mimic human problem-solving, Logic Theorist, was presented at the same conference. Logic Theorist introduced heuristics to programming languages. Heuristics are processes for identifying patterns and prioritizing data to solve problems more quickly. Machine learning, which describes systems that use incoming data to improve their performance with experience, was named by computer scientist Arthur Samuel in 1959; Samuel demonstrated the concept with a program that taught itself to play checkers. The computer program ELIZA, which changed the way humans interact with machines, followed in 1966. Regarded as the first "chatbot," ELIZA was programmed to respond to textual input as a psychotherapist might. These early achievements in developing machines that could learn, make reasoned decisions, and engage conversationally with users provided the foundation for the huge strides in twenty-first-century AI technologies.
Narrow Artificial Intelligence
AI technologies can be separated into two types. An AI technology designed to solve a certain type of problem or perform a specific set of tasks is called narrow artificial intelligence and is sometimes referred to as "weak" or "soft" AI. As of 2019, all existing AI technology falls into this category. The other type of AI, artificial general intelligence (AGI), remains theoretical. AGI, which is sometimes called human-level machine intelligence (HLMI) or "strong" AI, refers to systems that can perform as well as the most gifted humans at all intellectual tasks.
Computer scientists have made major advances in narrow AI since Arthur Samuel's checkers program, and games continue to be a testing ground for improving AI. In 2011, IBM demonstrated its progress in developing AI when one of its computers, Watson, won first place over two champion contestants on the game show Jeopardy! Video games also make extensive use of AI technology. Alphabet Inc. has invested significant resources in developing AI through several research projects, including Google DeepMind and Google Brain. Google DeepMind impressed AI researchers in 2015 when its AlphaGo program defeated a professional player at Go, a strategy board game that is considered much more complex than chess.
Smartphones and home automation systems, too, have made use of advanced chatbots capable of performing tasks far beyond the predictable responses of ELIZA. Intelligent personal assistants such as Apple's Siri and Amazon's Alexa can learn to recognize an individual voice and retrieve information from the Internet, make phone calls, order goods and services, and run other applications. Critics have expressed concern over the invasiveness of AI systems like Alexa, however, because such technologies are designed to always learn more about their users. For instance, advertising software that tracks online browsing and monitors credit card activity has proven adept at recommending products to purchase but has also raised concerns about privacy. With cloud computing enabling companies to develop an ever-expanding range of internet-connected products that can communicate with one another, consumers' concerns over the use and ownership of their data have deepened.
Narrow AI has already resulted in the loss of human life and other large-scale problems, as it is used in a wide range of complicated applications like automated weapons systems, self-driving cars, and surgical equipment. The first pedestrian fatality caused by a self-driving car, for example, occurred in Tempe, Arizona, in March 2018, and, as of September 2019, the autopilot AI for Tesla's semi-autonomous cars had contributed to four accidents involving driver fatalities. Likewise, AI technologies carry the risk of performing their assigned task too well or with incorrect information. AI algorithms used in automated high-frequency stock market trading, for instance, have been blamed for heightening economic risk. Because automated traders are programmed to monitor a broad range of internet activity and include that data in their decision-making processes, erroneous social media posts or sensationalized headlines can bring on a quick and sudden crash in stock values known as a flash crash. As algorithms are continually refined and updated, however, the extent of AI's responsibility for flash crashes or other market volatility remains a topic of debate as of 2019.
Anticipating Strong Artificial Intelligence
The sweeping advances made in narrow AI technology since the mid-twentieth century have been enabled by changes in hardware. Technologists often explain the rapid advancement in terms of Moore's Law, the observation that the number of transistors that can fit on a computer chip doubles roughly every two years, and Dennard scaling, the related observation that smaller transistors require less power to perform the same tasks. Much of AI research depends on the validity of these predictions that technology will improve steadily in performance and price. Most technologists believe that AGI will be developed in the future, though predictions on what it will entail and when it will be achieved vary widely.
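The compounding effect of Moore's Law's doubling premise can be illustrated with a short sketch. The baseline transistor count below is a hypothetical round number chosen for illustration, not data from any actual chip:

```python
def projected_transistors(base_count: int, years: int, doubling_period: float = 2.0) -> int:
    """Project a transistor count forward assuming a fixed doubling period (Moore's Law)."""
    return int(base_count * 2 ** (years / doubling_period))

# Starting from a hypothetical 1 million transistors, doubling every two
# years yields a 32-fold increase after a single decade.
print(projected_transistors(1_000_000, 10))  # 32000000
```

Because the growth is exponential rather than linear, even modest time spans produce enormous gains, which is why the slowing of Moore's Law and Dennard scaling is a significant concern for AI research.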
While some technologists believe AGI has the potential to rid the world of disease and eradicate poverty, critics warn that AGI also has the potential to exacerbate existing social and economic inequities. Public and private organizations have begun to use narrow AI to improve efficiency beyond production and distribution, representing an expansion of the types of decision-making power granted to AI systems. For example, some businesses use AI to perform the first stage of hiring. As hiring processes are especially vulnerable to racial, ethnic, and gender bias, social scientists worry that AI used for even mundane tasks like filtering applications can replicate the unconscious biases of its developers and training data.
Law enforcement agencies at all levels of government, meanwhile, have begun to use AI technologies like facial recognition software to identify and locate criminal suspects. US government research conducted on facial recognition software, however, has revealed significant racial disparities in the AI's ability to accurately verify individuals' identities. According to a 2019 report from the National Institute of Standards and Technology, facial recognition tools used by US law enforcement are five to ten times more likely to misidentify black people than white people, with black women being misidentified at significantly higher rates than black men, white women, and white men.
As more consequential decision-making power is granted to AI's algorithms, public enthusiasm for AI development has become somewhat offset by calls for regulation and oversight. According to survey results published by Oxford University's Future of Humanity Institute in 2019, 82 percent of Americans mostly or totally agreed that AI and robot technologies require careful management. Among the AI-related issues in need of governmental regulation, respondents expressed the greatest concerns over data privacy and surveillance; cybersecurity and defense; digital manipulation; autonomous weapons and vehicles; and algorithmic value alignment and hiring bias. However, the same survey indicated poor public understanding of AI terminology and of the types of industries, programs, and services already deploying AI. The study's authors contend that public awareness and education initiatives should accompany research and regulatory frameworks.
In early 2019 President Donald Trump issued an executive order regarding the development of artificial intelligence technology in the United States. The executive order directed the National Science and Technology Council's (NSTC's) Select Committee on Artificial Intelligence to coordinate an action plan on a comprehensive federal AI initiative. In June 2019 the Select Committee released an updated version of its 2016 national AI research and development plan, outlining eight strategic priorities for AI policy. Among these priorities are making long-term investments in AI research; understanding and addressing the ethical, legal, and social implications of AI; ensuring the safety and security of AI systems; and creating standards and benchmarks for evaluating AI technologies.
Meanwhile, the Growing Artificial Intelligence Through Research (GrAITR) Act was introduced in Congress in April 2019. The bill seeks to mandate the establishment of a comprehensive AI initiative run by independent federal agencies. As part of the Executive Office of the President, the NSTC is intended to advise the president on policy positions and exists outside of congressional oversight. The GrAITR Act would instead task federal agencies with creating and coordinating a National Artificial Intelligence Initiative, subjecting federal AI policy and action to congressional oversight.
Beyond AGI lies the prospect of the technological singularity, the imagined future point at which AI technology advances beyond the human capacity to control it. The singularity depends on a system achieving artificial superintelligence, or an intelligence so sophisticated that it exceeds the understanding of present-day humans. Some technologists, such as Google cofounder Larry Page, extol the potentially unimaginable benefits of such an all-encompassing change in how people live. Alternatively, other thinkers, such as theoretical physicist Stephen Hawking and investor and inventor Elon Musk, have raised concerns that artificial superintelligence could lead to human extinction.