Artificial Intelligence

Date: 2017
Encyclopedia of Emerging Industries
Publisher: Gale, part of Cengage Group
Document Type: Industry overview

NAICS CODE(S)

518210

541511

541512

541519

INDUSTRY SNAPSHOT

In the simplest terms, the artificial intelligence (AI) industry seeks to create machines that are capable of learning and intelligent thinking. It includes the development of computer-based systems that can learn from past behaviors and apply that knowledge to solving future problems. AI, which predates the computer age, draws from a variety of academic fields, including mathematics, computer science, linguistics, engineering, physiology, philosophy, and psychology. Although it did not truly emerge as a stand-alone field of study until the late 1940s, the foundation for AI was formed by logicians, philosophers, and mathematicians during the eighteenth and nineteenth centuries.

AI technology is used in such varied fields as robotics, information management, computer software, transportation, e-commerce, military defense, medicine, manufacturing, finance, security, and emergency preparedness, among others. In fact, the many different sectors in which AI could be used have led to many different estimates of the overall size of the AI market. Writing for Tech Emergence in March 2016, Daniel Faggella discussed some of the various research forecasts. Bank of America Merrill Lynch expected the market for AI systems to reach around $70 billion by 2020; BCC Research, covering “smart machines,” put the market at only $41 billion by 2024. Still another research firm, Tractica, covered enterprise AI systems and placed this sector at a mere $202.5 million in 2015. However, all analysts expected AI to grow strongly for the remainder of the decade, illustrating that regardless of the metrics employed, AI usage was increasing.

ORGANIZATION AND STRUCTURE

The AI industry is powered by a blend of small and large companies, government agencies, and academic research centers. Major research organizations within the United States include the Brown University Department of Computer Science, Carnegie Mellon University's School of Computer Science, the University of Massachusetts Experimental Knowledge Systems Laboratory, NASA's Jet Propulsion Laboratory, the Massachusetts Institute of Technology (MIT), the Stanford Research Institute's Artificial Intelligence Research Center, and the University of Southern California's Information Sciences Institute.

In addition, a large number of small and large companies fund research efforts and the development of new products and technologies. Software giants like IBM, Microsoft, and Oracle are heavily involved in the development and enhancement of business intelligence, data mining, and customer relationship management software. Large corporate enterprises often have their own research divisions devoted to advancing AI technologies.

BACKGROUND AND DEVELOPMENT

The history of artificial intelligence predates modern computers, reaching back to early systematic accounts of human reasoning. The first formalized deductive reasoning system, known as syllogistic logic, was developed in the fourth century BCE by Aristotle. In subsequent centuries, advances were made in the fields of mathematics and technology that contributed to AI, including the development of mechanical devices like clocks and the printing press. By 1642 CE, French scientist and philosopher Blaise Pascal had invented a mechanical digital calculating machine.

During the eighteenth century, attempts were made to create mechanical devices that mimicked living things. Among them was a mechanical automaton developed by Jacques de Vaucanson that was capable of playing the flute. Later, de Vaucanson created a life-sized mechanical duck that was constructed of gold-plated copper. In an excerpt from Living Dolls: A Magical History of the Quest for Mechanical Life that appeared in the February 16, 2002, issue of the Guardian, the duck was described by author Gaby Wood: “It could drink, muddle the water with its beak, quack, rise and settle back on its legs and, spectators were amazed to see, it swallowed food with a quick, realistic gulping action in its flexible neck. Vaucanson gave details of the duck's insides. Not only was the grain, once swallowed, conducted via tubes to the animal's stomach, but Vaucanson also had to install a ‘chemical laboratory’ to decompose it. It passed from there into the ‘bowels, then to the anus, where there is a sphincter which permits it to emerge’.”

Other early developments included a form of binary algebra developed by English mathematician George Boole that gave birth to the symbolic logic used in later computer technology. Around the same time, English mathematician Charles Babbage designed the Analytical Engine, a programmable mechanical calculating machine, for which Ada Byron (Lady Lovelace) developed early programs.

British mathematician Alan Turing was a computing pioneer whose interests and work contributed to the development of AI. In 1936 he wrote an article that described the Turing Machine, a hypothetical general computer. In time, this became the model for general purpose computing devices, prompting the Association for Computing Machinery to bestow an annual award in his honor. During the late 1930s, Turing defined algorithms—instruction sets used during problem solving—and envisioned how they might be applied to machines. In addition, Turing worked as a cryptanalyst during World War II, helping to design the electromechanical Bombe machines used to decipher German Enigma communications for the Allied forces. In 1950 he proposed the now famous Turing Test, arguing that if a human could not distinguish between responses from a machine and a human, the machine could be considered “intelligent.”

As a result of the efforts of other early computing pioneers, including John von Neumann, the advent of electronic computing in the early 1940s allowed the modern AI field to begin in earnest. However, the term “artificial intelligence” was not actually coined until 1956. That year, Dartmouth College mathematics professor John McCarthy hosted a conference that brought together researchers from different fields to talk about machine learning. By this time the concept was being discussed in such varied disciplines as mathematics, linguistics, physiology, engineering, psychology, and philosophy. Other key AI players, including Marvin Minsky, attended the summer conference at Dartmouth. Although researchers were able to meet and share information, the conference failed to produce any breakthrough discoveries.

A number of milestones were reached during the 1950s that set the stage for later developments, including an AI program called Logic Theorist. Created by the research team of Herbert A. Simon and Allen Newell, the program was capable of proving theorems. It served as the basis for another program the two men created called General Problem Solver, which in turn set the stage for the creation of so-called expert systems. Also known as rule-based systems, expert systems consist of one or more computer programs that focus on knowledge of a specific discipline or field (also known as a domain). The system then functions as an expert within the domain. Another noteworthy development was the creation of the List Processing (LISP) programming language, which became widely used in AI applications.
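
To make the rule-based approach concrete, the following minimal Python sketch shows how an expert system can encode domain knowledge as if-then rules and apply them to known facts through forward chaining. The rules and facts are invented for illustration only and are not drawn from any actual system.

    # Minimal sketch of a rule-based (expert) system: domain knowledge is
    # encoded as if-then rules, and an inference loop applies them to known
    # facts until no new conclusions can be drawn. These medical-style rules
    # are invented examples, not taken from any real expert system.

    RULES = [
        # (conditions that must all be true, conclusion to add)
        ({"fever", "cough"}, "possible_respiratory_infection"),
        ({"possible_respiratory_infection", "chest_pain"}, "recommend_chest_xray"),
    ]

    def forward_chain(facts, rules):
        """Repeatedly fire rules whose conditions are satisfied by the facts."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"fever", "cough", "chest_pain"}, RULES))
    # adds both 'possible_respiratory_infection' and 'recommend_chest_xray'

Commercial expert systems of the 1970s and 1980s worked on the same principle at much larger scale, with hundreds or thousands of rules elicited from human domain experts.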

The AI field benefited from government funding during the 1960s, including a $2.2 million grant to MIT from the Department of Defense's Advanced Research Projects Agency. A number of new AI programs were developed in the 1960s and 1970s, including the very first expert systems. DENDRAL was created to interpret spectrographic data for identifying the structure of organic chemical compounds. MYCIN, another early expert system, was developed at Stanford University and introduced in 1974. It was applied to the domain of medical diagnosis, as was the INTERNIST program developed at the University of Pittsburgh in 1979.

By the 1980s, the AI industry was still very much in development. However, it had established links to the corporate sector and continued to evolve along with technology in general. Of special interest to the business market were expert systems, which were in use at Boeing and General Motors, among others. One estimate placed the value of AI software and hardware sales at $425 million in 1986. By 1993 the U.S. Department of Commerce reported that the AI market was valued at $900 million. At that time some 70 to 80 percent of Fortune 500 companies were applying AI technology in various ways.

According to the Association for the Advancement of Artificial Intelligence (AAAI), important AI advances during the 1990s occurred in the areas of case-based reasoning, data mining, games, intelligent tutoring, machine learning, multi-agent planning, natural language understanding, scheduling, translation, uncertain reasoning, virtual reality, and vision. In addition to use in the private sector, AI continued to evolve within the defense market during the 1990s. Applications included missile systems used during Operation Desert Storm. Some of the more dramatic AI milestones during the late 1990s included chess champion Garry Kasparov's 1997 loss to the Deep Blue chess program, as well as the MIT Artificial Intelligence Lab's Rodney Brooks's creation of an interactive, drum-playing humanoid robot named COG in 1998. At the beginning of the twenty-first century, AI technology was being adopted at a rapid pace in computer software, medicine, defense, security, manufacturing, and other areas.

By 2004 companies like Sony marketed intelligent consumer robot toys, such as the four-legged AIBO that could learn tricks, communicate with human owners via the Internet, and recognize people with AI technology like face and voice recognition. Burlington, Massachusetts-based iRobot sold a robotic vacuum cleaner called Roomba for $200. Some other applications included business intelligence and customer relationship management, defense and domestic security, education, and finance. Technical concentrations that were growing the fastest included belief networks, neural networks, and expert systems.

Developments in artificial intelligence continued steadily to the end of the decade. For example, in June 2008, Roadrunner, a supercomputer built by IBM and housed at Los Alamos National Laboratory, became the world's first computer to achieve sustained operating speeds of one petaflop, or a quadrillion (a million billion) calculations per second. One of the new supercomputer's potential applications was to perform calculations to certify the reliability of the U.S. nuclear weapons stockpile, with no need for underground nuclear tests. In the week after Roadrunner achieved its petaflop speed, researchers tested the code known as PetaVision, which models the human vision system, and found that, for the first time, a computer was able to match human performance on certain visual tasks like distinguishing a friend from a stranger in a crowd of people or faultlessly detecting an oncoming car on a highway. According to Terry Wallace of the Los Alamos National Laboratory, “Just a week after formal introduction of the machine to the world, we are already doing computational tasks that existed only in the realm of imagination a year ago.”

By 2010 artificial intelligence had grown at such an accelerated pace that the AI field had developed its own disciplines, including machine learning, computer vision, speech recognition, and natural language understanding, which worked both independently and in concert with one another.

In 2013 the European Commission (EC) released a report in which it noted that the estimated size of the worldwide market for AI was EUR 700 million (US$900 million) that year. The EC report specifically highlighted the use of artificial intelligence technology for analyzing big data. Many organizations had collected large volumes of data that would be very time consuming for human analysts to study. With AI software, these organizations could extract insights from their big data with less overhead.

Artificial intelligence continued to gain traction in the mid-2010s within many fields, including consumer products and the newspaper industry. A successful campaign on the crowdfunding website Indiegogo funded the development of a home companion robot named Jibo that used sensor technology to recognize the emotions of household members. This highlighted the falling cost of AI technology as well as the potential for breakthroughs supported by funding from the general public, not just the major technology companies that had traditionally dominated AI research.

The newspaper industry adopted AI technology to automatically generate articles about news stories in certain fields, most notably sports and finance. These articles relied on information in fixed formats, so software could collect the numbers from a baseball game or a quarterly earnings report and add standard text strings to them to generate an article. In June 2014, the Associated Press announced that it would use technology from Automated Insights to automatically write articles about quarterly earnings reports, and it had already used technology from this company to report information about football games. While AI technology had led to fears that robots would replace human workers, the Associated Press said that the automated report writing software would free up its reporters to work on more detailed stories.
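
In practice, this kind of automated report writing amounts to filling standard text templates with fields from structured data. The short Python sketch below illustrates the idea with an invented quarterly earnings record; the field names and wording are hypothetical and do not reflect Automated Insights' actual templates.

    # Toy illustration of template-based report generation: a structured
    # record (here, invented quarterly earnings figures) is slotted into
    # standard text strings to produce a short article.

    earnings = {
        "company": "Example Corp",
        "quarter": "Q2 2016",
        "revenue_millions": 125.4,
        "prior_revenue_millions": 110.2,
    }

    def earnings_article(record):
        change = record["revenue_millions"] - record["prior_revenue_millions"]
        direction = "up" if change >= 0 else "down"
        return (
            f"{record['company']} reported revenue of "
            f"${record['revenue_millions']:.1f} million for {record['quarter']}, "
            f"{direction} ${abs(change):.1f} million from the prior quarter."
        )

    print(earnings_article(earnings))
    # Example Corp reported revenue of $125.4 million for Q2 2016,
    # up $15.2 million from the prior quarter.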

CURRENT CONDITIONS

Three stories from 2016, involving Google, Facebook, and Microsoft, showed how deeply large technology firms had invested in AI. AlphaGo, an AI system developed by DeepMind (a division of Google), made headlines in March 2016 when it defeated 17-time world champion Lee Se-dol in a five-game Go series, according to an article by Matt Burgess for Wired (U.K.) that month. AlphaGo won the series 4–1. The victory was significant because “the 3,000-year-old Chinese board game has proved notoriously hard to master for AI developers due to the sheer number of possible moves.” AlphaGo employed a 12-layer neural network to both learn from past Go games and chart a path to victory.
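
The core idea behind a move-scoring (“policy”) neural network can be shown with a toy example: the board is encoded as a vector of numbers, passed through stacked layers, and the output is a probability for each possible move. The Python sketch below uses a tiny two-layer network with random, untrained weights purely for illustration; it is not AlphaGo's actual 12-layer architecture or training method.

    # Toy move-scoring network: encode the board as numbers, push it through
    # two layers, and turn the resulting scores into move probabilities.
    # Weights are random and untrained; this only illustrates the data flow.
    import numpy as np

    rng = np.random.default_rng(0)
    BOARD_POINTS = 9 * 9              # a small 9x9 board for the example
    HIDDEN = 32

    W1 = rng.normal(scale=0.1, size=(BOARD_POINTS, HIDDEN))
    W2 = rng.normal(scale=0.1, size=(HIDDEN, BOARD_POINTS))

    def move_probabilities(board):
        """Return a probability for playing at each point on the board."""
        hidden = np.tanh(board @ W1)             # first layer
        scores = hidden @ W2                     # one score per board point
        exps = np.exp(scores - scores.max())     # softmax: scores -> probabilities
        return exps / exps.sum()

    board = np.zeros(BOARD_POINTS)
    board[40] = 1.0                              # pretend a stone sits at the center
    probs = move_probabilities(board)
    print("highest-scoring point:", int(probs.argmax()))

In a real system the weights would be learned from records of past games and from self-play rather than drawn at random.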

Alex Hern, writing for the Guardian in April 2016, reported that “Facebook is using an artificial intelligence system to automatically caption photos in an effort to increase the accessibility of its website and apps.” The project was in its early stages, and many of the descriptions were perfunctory at best. However, as with other AI systems, the goal was for the computer to improve over time until its output was largely indistinguishable from that created by a person. Due to concerns over inaccuracy, a caption was only generated when the system was at least 80 percent confident in its description.
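
The confidence threshold described above can be sketched simply: the captioning model assigns a score to each candidate tag, and only tags that meet the cutoff (80 percent, per the article) are included in the generated description. The tag names and scores in the Python sketch below are invented example values, not output from Facebook's system.

    # Keep only tags whose confidence meets the cutoff when building a caption.
    CONFIDENCE_CUTOFF = 0.80    # descriptions below this confidence are dropped

    def build_caption(tag_scores, cutoff=CONFIDENCE_CUTOFF):
        ranked = sorted(tag_scores.items(), key=lambda item: item[1], reverse=True)
        kept = [tag for tag, score in ranked if score >= cutoff]
        if not kept:
            return "No description available."
        return "Image may contain: " + ", ".join(kept)

    print(build_caption({"outdoor": 0.93, "two people": 0.88, "pizza": 0.41}))
    # Image may contain: outdoor, two people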

A less fortunate pairing of humans and AI technology could be seen in the rollout, and subsequent removal, of an AI chatbot called Tay by Microsoft. Reporting for TechCrunch in March 2016, Sarah Perez explained that Tay was designed to respond “to tweets and chats on GroupMe and Kik,” two messaging services. Like many AIs, Tay was designed to “learn” from the tweets it received and adjust its own responses accordingly. Unfortunately, many of the tweets and comments Tay received were racist or otherwise offensive, so the machine learned to respond in a like vein. Microsoft therefore shut down Tay until the program could be improved.

PIONEERS

John McCarthy. Many consider John McCarthy (1927–2011) to be the father of modern AI research. McCarthy was born on September 4, 1927, in Boston; his father was a working-class Irish immigrant and his mother a Lithuanian Jewish immigrant. Both were politically active and involved with the Communist Party during the 1930s. McCarthy subsequently developed an interest in political activism, although he rejected Marxism.

After skipping three grades in public school, McCarthy graduated from the California Institute of Technology in 1948. A doctoral degree in mathematics followed from Princeton University in 1951, where he also accepted his first teaching position. In 1953 McCarthy moved to Stanford to work as an acting assistant professor of mathematics. Another move came in 1955, when he accepted a professorship at Dartmouth.

The following year, McCarthy made a significant mark on the field of artificial intelligence by coining its name at a summer conference that he hosted to explore the concept of machine learning with researchers from other institutions and disciplines. Another milestone was reached in 1958, when McCarthy joined MIT as an associate professor and established the first research lab devoted to AI.

At MIT, McCarthy developed LISP, which became the standard language used by the AI community to develop applications. His work at MIT involved trying to give computers common sense. After moving back to Stanford University in 1962, McCarthy established an AI research lab and continued to work on AI common sense and mathematical logic.

In honor of his accomplishments, the Association for Computing Machinery presented McCarthy with the Alan Mathison Turing Award in 1971. He also received the Kyoto Prize in 1988, the National Medal of Science in 1990, and the Benjamin Franklin Medal in Computer and Cognitive Science in 2003. In addition to his academic work, McCarthy was also a former president of the AAAI.

Marvin Minsky. Another founding father of AI, Marvin Lee Minsky was born in New York on August 9, 1927, to eye surgeon Dr. Henry Minsky and Zionist activist Fannie Reiser. After attending the Bronx High School of Science and graduating from the Phillips Academy in Andover, Massachusetts, Minsky spent one year in the U.S. Navy. In 1946 he enrolled at Harvard University with the intent of earning a degree in physics. Instead, he pursued an eclectic mix of courses in subjects that included genetics, mathematics, and psychology. He became intrigued with understanding how the mind works and was exposed to the theories of behavioral psychologist B. F. Skinner. Minsky did not accept Skinner's theories and, drawing on his grasp of mathematics, developed a model of a stochastic (probabilistic) neural network in the brain. Minsky switched his major and left Harvard in 1950 with an undergraduate degree in mathematics.

Minsky then attended Princeton, where he and Dean Edmonds built an electronic learning machine called the Snarc. Using a reward system, Snarc learned how to successfully travel through a maze. Upon earning a doctorate in mathematics in 1954, Minsky worked briefly as a research associate at Tufts University before accepting a three-year junior fellowship at Harvard, where he was able to further explore theories regarding intelligence. In 1958 Minsky joined MIT, where he worked on the staff of the Lincoln Laboratory. The following year he made a significant impact on AI when he and John McCarthy established the Artificial Intelligence Project. Minsky worked as an assistant professor from 1958 to 1961 and then as an associate professor until 1963, when he became a professor of mathematics.

Several important developments in Minsky's career occurred in 1964, including becoming a professor of electrical engineering and the evolution of the project he had started in 1959 with McCarthy into MIT's Artificial Intelligence Laboratory. Minsky served as the lab's director from 1964 to 1973, and it became a place where researchers were allowed to be adventurous. It was there that some of the first automatic robots were developed, along with new computational theories. In recognition of his efforts, Minsky received the Alan Mathison Turing Award from the Association for Computing Machinery in 1969.

In 1974 Minsky was named the Donner professor of science in MIT's Department of Electrical Engineering and Computer Science. While he continued to serve as a professor in the Artificial Intelligence Laboratory, Minsky also began conducting his own AI research when the lab's work went in theoretical directions that differed from his own. He became critical of some of the lab's research as being too narrow in focus.

Minsky shared many of his thoughts about AI with the public via articles in magazines like Omni and Discover. His 1986 book, The Society of Mind, provides a mechanized and detailed theory of how the mind functions and how it might one day be duplicated. Minsky moved to MIT's Media Laboratory in 1989 and was named the Toshiba Professor of Media Arts and Sciences. In 1992 he coauthored a science fiction novel with Harry Harrison titled The Turing Option, which was based on his theory. Beyond his academic work, Minsky founded several companies, including General Turtle Inc., Logo Computer Systems Inc., and Thinking Machines Corp. Minsky passed away in January 2016.

INDUSTRY LEADERS

IBM. By the mid-2010s, IBM had moved beyond showing off its Watson supercomputer as a proof of concept and was offering business applications for the AI technology. Cloud services remained prominent in the industry because of the remote access they offered, and IBM expanded its cloud lineup to include Watson. Although the cloud version of Watson had been scaled down to some extent, the change made the underlying AI technology faster, according to Serdar Yegulalp in an article in InfoWorld on November 15, 2013. IBM made Watson available with both an app store and developer tools, which would allow other companies to use Watson to create their own AI software. This initiative allowed IBM to gain from applications created by other companies for its Watson software.

Google. The 2014 acquisition of the British firm DeepMind Technologies gave Google an even stronger position in artificial intelligence. With this move, Google recruited experts in a relatively rare specialty: deep learning. This specialty involves programs that can learn from large collections of information, such as the data Google has stockpiled through its web search business.

BIBLIOGRAPHY

“About Us.” Association for the Advancement of Artificial Intelligence, September 1, 2012. Available from www.aaai.org .

“At Your Fingertips.” Management Compass, September 18, 2012.

Burgess, Matt. “Google's DeepMind Wins Historic Go Contest 4–1.” Wired (U.K.), March 15, 2016. Available from http://www.wired.co.uk/news/archive/2016-03/15/alphagodeepmind-google-wins-lee-sedol .

Colford, Paul. “A Leap Forward in Quarterly Earnings Stories.” The Definite Source (blog), Associated Press, June 30, 2014. Available from http://blog.ap.org/2014/06/30/a-leapforward-in-quarterly-earnings-stories/ .

Constantin, Sarah. “Gesture Recognition, Mind Reading Machines, and Social Robotics.” h+, February 8, 2011.

“Creating Artificial Intelligence Based on the Real Thing.” New York Times, December 5, 2011.

Derrick, Stuart. “Revolution: Search Special Report: Smarter, More Relevant.” Marketing, September 6, 2012.

Dervojeda, Kristina, Diederik Verzijl, Fabian Nagtegaal, Mark Lengton, and Elco Rouwmaatet. “Big Data: Artificial Intelligence.” European Commission, September 2013. Available from http://ec.europa.eu/DocsRoom/documents/13411/attachments/2/translations/en/renditions/native .

“Engineers Make Artificial Skin Out of Nanowires.” Science Daily, September 13, 2010.

Faggella, Daniel. “Valuing the Artificial Intelligence Market, Graphs and Predictions for 2016 and Beyond.” Tech Emergence, March 7, 2016. Available from http://techemergence.com/valuing-the-artificial-intelligence-market2016-and-beyond/ .

Hern, Alex. “Facebook Is Using AI to Tag Your Pictures to Help Blind People.” Guardian, April 5, 2016. Available from https://www.theguardian.com/technology/2016/apr/05/facebook-ai-tag-pictures-blind-people-machine-learning .

Higgenbotham, Stacey. “Vicarious Gets $15M to Search for the Key to Artificial Intelligence.” Gigaom, August 21, 2012.

Huang, Gregory T. “Is Jibo the Next Roomba, or a Bigger Test for Consumer Robots?” Xconomy, July 29, 2014. Available from http://www.xconomy.com/boston/2014/07/29/is-jibothe-next-roomba-or-a-bigger-test-for-consumer-robots .

“Intel and Carnegie Mellon University Develop Retail Robot.” PC World, October 2012.

Klein, Theresa (as told to Flora Lichtman). “Rough Sketch: ‘We Made a Robot That Moves Like a Person.’” Popular Science, October 2012.

Lohr, Steve. “How Big Data Became So Big.” New York Times, August 11, 2012.

Lomas, Natasha. “Artificial Intelligence: 55 Years of Research Later—and Where Is AI Now?” ZDNet, February 8, 2010.

Masterson, Michele. “Siri, Meet Nina.” Speech Technology Magazine, September–October 2012.

——— “Speech Goes to School.” Speech Technology Magazine, September–October 2012.

Morgenthaler, Gary. “AI's Time Has Arrived.” BusinessWeek, September 21, 2010.

“NASA Says Mars Rover Curiosity Arm Tests Are Nearly Complete.” Entertainment Close-up, September 20, 2012.

“News 2012.” MIT Computer Science and Artificial Intelligence Laboratory, September 20, 2012. Available from www.csail.mit.edu.

“Our Research.” Face Perception and Research Laboratory, University of Texas at Dallas, September 20, 2012. Available from http://www.utdallas.edu/bbs/facelab/ .

Perez, Sarah. “Microsoft Silences Its New A.I. Bot Tay, after Twitter Users Teach It Racism [Updated].” TechCrunch, March 24, 2016. Available from http://techcrunch.com/2016/03/24/microsoft-silences-its-new-a-i-bot-tay-aftertwitter-users-teach-it-racism/ .

Regalado, Antonio. “When Machines Do Your Job: Researcher Andrew McAfee Says Advances in Computing and Artificial Intelligence Could Create a More Unequal Society.” MIT Technology Review, July 11, 2012.

——— “Is Google Cornering the Market on Deep Learning?” MIT Technology Review, January 29, 2014. Available from http://www.technologyreview.com/news/524026/is-googlecornering-the-market-on-deep-learning/ .

Robotics Industry Association. “Rethink Robotics Revolutionizes Manufacturing with Humanoid Robot.” Robotics Online, September 21, 2012. Available from www.robotics.org .

Sheriden, Cris. “Is Artificial Intelligence Taking Over the Stock Market?” Financial Sense, March 2, 2012.

“Smartest Machine on Earth.” PBS, May 2, 2012. Available from www.pbs.org .

Thompson, Clive. “Smarter Than You Think—Who Is Watson?” New York Times, June 16, 2010.

Von Buskirk, Eliot. “Virtual Musicians, Real Performances.” Wired, March 2, 2010.

Yegulalp, Serdar. “Watson as a Service: IBM Preps AI in the Cloud.” InfoWorld, November 15, 2013. Available from http://www.infoworld.com/t/cloud-computing/watsonservice-ibm-preps-ai-in-the-cloud-230901 .
