Artificial Intelligence

Citation metadata

Date: 2014
From: Encyclopedia of Business and Finance (Vol. 1, 3rd ed.)
Publisher: Gale, part of Cengage Group
Document Type: Topic overview
Pages: 5



Artificial Intelligence (AI) is the branch of computer science and engineering devoted to the creation of intelligent machines and the software to run them. This process is “artificial” because once it is programmed into the machine, it occurs without human intervention. AI is generally applied to the theory and practical application of a computer's ability to think as humans do. AI capability is designated as either strong AI or weak AI. Strong AI is a computer system that employs an active consciousness in its decision making. This is a machine that can reason, reach logical (and correct) conclusions, and solve problems independently. Critics of AI systems argue that such a machine is unrealistic and, even if it were possible, unwanted due to ethical concerns or the overt fear of “conscious machines” turning on their human creators.

Strong AI is a popular and familiar theme in movies and television shows, especially in the science-fiction genre. The soothing female voice of the ship's computer on the Starship Enterprise in Star Trek is an example of the quest for benevolent and useful intelligent machines as virtual assistants. However, other films have presented audiences with the darker side of strong AI, such as the terrifying cyborg of the Terminator series, an intelligent and unstoppable machine bent on the destruction of humankind.

Modern working applications of AI are examples of weak AI. Current AI research focuses on developing computers that use intelligent programming to automate routine human tasks. For example, many customer service call centers are automated by AI. When a recorded voice asks for a “yes” or “no” response to choose a menu item, the computer on the other end of the telephone is using weak AI to listen to the caller's response and select the appropriate action based on caller input. These computers are trained to recognize speech patterns, dialects, accents, and replacement words such as “oh”—rather than “zero”—for the number 0.
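The normalization step such a system might apply after speech recognition can be sketched as a simple lookup from recognized words to canonical menu inputs. This is an illustrative assumption about one small piece of such a system, not the actual design of any call-center product; the vocabulary here is hypothetical.

```python
# Hypothetical table mapping recognized words to canonical menu inputs,
# including replacement words such as "oh" for the number 0.
SYNONYMS = {
    "yes": "yes", "yeah": "yes", "yep": "yes",
    "no": "no", "nope": "no",
    "oh": "0", "zero": "0", "one": "1", "two": "2",
}

def interpret(utterance):
    """Map one recognized word to a canonical menu input, or None."""
    return SYNONYMS.get(utterance.strip().lower())

print(interpret("Yeah"))   # yes
print(interpret("oh"))     # 0
print(interpret("maybe"))  # None
```

Real systems also score confidence and re-prompt the caller on unrecognized input, which this sketch omits.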

Long before the development of computers, the notion that thinking was a form of computation motivated the formalization of logic as a type of rational thought. These efforts continue today. Graph theory provided the architecture for searching a solution space for a problem. Operations research, with its focus on optimization algorithms, uses graph theory to solve complex decision-making problems.
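Searching a solution space over a graph, as described above, can be sketched with breadth-first search: states are nodes, legal moves are edges, and the search expands outward from the start state until it reaches a goal. The graph and goal below are hypothetical, chosen only to keep the sketch small.

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Search a solution space (a graph of states) for a path to a goal."""
    frontier = deque([[start]])   # queue of partial paths to expand
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path           # first path found uses the fewest steps
        for neighbor in graph.get(state, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None                   # goal unreachable from start

# A small hypothetical state space:
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(breadth_first_search(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```

Optimization algorithms in operations research refine this idea with cost functions and heuristics (as in Dijkstra's algorithm or A*), but the underlying graph traversal is the same.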


AI uses syllogistic logic, which was first postulated by the Greek philosopher Aristotle (384–322 BC). This logic is based on deductive reasoning. For example, if A equals B, and B equals C, then A must also equal C. Throughout history, the nature of syllogistic logic and deductive reasoning was shaped by grammarians, mathematicians, and philosophers. When computers were developed, programming languages used similar logical patterns to support software applications. Terms such as “cybernetics” and “robotics” were used to describe collective intelligence approaches and led to the development of AI as an experimental field in the 1950s.
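The deductive pattern above (if A equals B, and B equals C, then A must also equal C) can be sketched as a program that repeatedly chains known facts until no new conclusions appear. This minimal sketch handles only transitive chaining of the stated facts; the fact names are illustrative.

```python
def deduce_equalities(facts):
    """Close a set of equality facts under transitivity:
    from (A, B) and (B, C), conclude (A, C) — syllogistic deduction."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in list(known):
            for c, d in list(known):
                if b == c and (a, d) not in known:
                    known.add((a, d))   # new conclusion deduced
                    changed = True
    return known

facts = {("A", "B"), ("B", "C")}
print(("A", "C") in deduce_equalities(facts))  # True
```

Logic programming languages such as Prolog generalize exactly this fixed-point chaining to arbitrary rules, not just equality.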

The term “artificial intelligence” was coined by John McCarthy (1927–2011) in a 1955 proposal for a research workshop held at Dartmouth College the following summer. McCarthy was the genius behind logic programming and created LISP (“LISt Processing”), the programming language long used to write AI programs. McCarthy and Marvin Minsky (1927–) opened their original AI lab at the Massachusetts Institute of Technology in 1959 to write AI decision-making software.

Allen Newell (1927–1992) and Herbert Simon (1916–2001) pioneered the first AI laboratory at Carnegie Mellon University, also in the 1950s. The best-known name in the AI community is Alan Turing (1912–1954). Turing was a mathematician, philosopher, and cryptographer and is often credited as the founder of computer science as a discipline separate from mathematics. He contributed to the debate over whether a machine could think by developing the Turing test. The Turing test uses a human judge engaged in remote conversation with two parties: another human and a machine. If the judge cannot tell which party is the human, the machine passes the test.

Originally, teletype machines were used to maintain the anonymity of the parties. In the 21st century, chat sessions are used to test the linguistic capability of AI engines. Linguistic robots called chatterbots (such as Jabberwacky) are popular computer programs designed to simulate intelligent conversation with human users.

The Defense Advanced Research Projects Agency (DARPA), which played a significant role in the birth of the Internet by funding ARPANET, also funded AI research in the early 1980s. However, when the results did not prove immediately useful for military applications, funding was cut. Since then, AI research has moved to other areas, including robotics, computer vision, and other practical engineering tasks.


One of the early milestones in AI was Newell and Simon's General Problem Solver (GPS). The program was designed to imitate human problem-solving methods. This and other developments such as Logic Theorist and the Geometry Theorem Prover generated enthusiasm for the future of AI. Simon went so far as to assert that in the near-term future, the problems that computers could solve would be coextensive with the range of problems to which the human mind has been applied.

Difficulties in achieving this objective soon began to manifest themselves. New research based on earlier successes encountered problems of intractability. A search for alternative approaches led to attempts to solve typically occurring cases in narrow areas of expertise. Some of these problems were addressed with the introduction of computers with large amounts of memory in the early 1970s. Researchers understood that enormous amounts of knowledge might be required to solve even a simple AI problem, and they began to build more knowledge into AI applications to solve more complex problems.

Edward Feigenbaum (1936–) developed the first of the expert systems, which reach conclusions by applying reasoning techniques based on sets of rules. These were the first commercially successful form of AI software. A seminal model was MYCIN, developed specifically to diagnose blood infections. Having about 450 rules, MYCIN was able to outperform many experts by quickly ruling out unlikely diagnoses and narrowing the possible choices to a manageable few. This and other expert systems research led to the first commercial expert system, R1, implemented at Digital Equipment Corporation (DEC) to help configure client orders for new mainframe and minicomputer systems. R1's implementation was estimated to save DEC about $40 million per year.
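The rule-based reasoning behind systems such as MYCIN and R1 can be sketched as forward chaining: each rule maps a set of premise facts to a conclusion, and any rule whose premises are all known fires, possibly enabling further rules. The rules and fact names below are purely illustrative, not drawn from MYCIN's actual rule base.

```python
# Hypothetical rule base: each rule is (set of premises, conclusion).
RULES = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis"}, "order_lumbar_puncture"),
    ({"cough", "fever"}, "suspect_flu"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires; conclusion becomes a fact
                changed = True
    return facts

conclusions = forward_chain({"fever", "stiff_neck"}, RULES)
print("order_lumbar_puncture" in conclusions)  # True
```

Production systems of this kind scale to hundreds of rules, as MYCIN's roughly 450 did, precisely because each rule is independent and can be added or audited on its own; real systems like MYCIN also attached certainty factors to rules, which this sketch omits.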

AI research on expert systems during the 1980s revived commercial interest in funding AI projects, and the 1990s proved to be the most exciting time for AI research and applications. In May 1997 the IBM chess-playing computer Deep Blue beat the reigning world chess champion Garry Kasparov (1963–) of Russia. Since that win, other AI systems have successfully driven a vehicle without a human occupant 131 miles across a desert and traveled 55 miles in city traffic without incident. In February 2011, in a widely televised event, IBM's question-answering system Watson beat the two best all-time Jeopardy! champions by a significant margin.

In the 21st century, the emergence of faster, more powerful computers and new funding for AI research has commercialized AI technology for medical diagnostics, logistics, data mining, and help desk and call center operations. In many cases, these programs do their work without any direct human interaction or intervention other than the human who initiates the query. AI systems are also present in more consumer-oriented products such as the Kinect gaming device from Microsoft and Siri, the voice assistant in Apple's iPhone.


While precise definitions are still the subject of debate, AI may be thought of as the branch of computer science that is concerned with the automation of intelligent behavior. The intent of AI is to develop systems that have the ability to perceive, to learn, to accomplish physical tasks, and to emulate human decision making. AI seeks to design and develop intelligent agents as well as to understand them.

AI research has proven to be the breeding ground for computer science subdisciplines such as pattern recognition, image processing, neural networks, natural language processing, and game theory. For example, optical character recognition software that transcribes handwritten characters into typed text (notably with tablet personal computers and personal digital assistants) was initially a focus of AI research.

Additionally, expert systems used in business applications owe their existence to AI. Manufacturing companies use inventory applications that track both production levels and sales to determine when and how much of specific supplies are needed to produce orders in the pipeline. Genetic algorithms are employed by financial planners to assess the best combination of investment opportunities for their clients. Other examples include data mining applications, surveillance programs, and facial recognition applications.
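The genetic-algorithm approach mentioned above can be sketched as evolving a population of candidate portfolios: each "genome" is a bit vector saying which assets to hold, the fittest candidates survive, and crossover and mutation generate new ones. The assets, returns, budget, and fitness function below are illustrative assumptions, far simpler than anything a financial planner would actually use.

```python
import random

# Hypothetical expected returns for five assets; hold at most three.
RETURNS = [0.08, 0.02, 0.12, 0.05, 0.09]
BUDGET = 3

def fitness(genome):
    """Score a portfolio: total return, or a penalty if over budget."""
    if sum(genome) > BUDGET:
        return -1.0
    return sum(r for r, bit in zip(RETURNS, genome) if bit)

def evolve(generations=50, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in RETURNS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(RETURNS))  # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(len(child))         # occasional point mutation
            child[i] ^= rng.random() < 0.1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, round(fitness(best), 2))
```

The appeal for combinatorial problems like portfolio selection is that the algorithm needs only a fitness function, not a closed-form model of the search space.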

Multiagent systems are also based on AI research. Use of these systems has been driven by the recognition that intelligence may be reflected by the collective behaviors of large numbers of very simple interacting members of a community of agents. These agents can be computers, software modules, or virtually any object that can perceive aspects of its environment and proceed in a rational way toward accomplishing a goal.


Four types of systems will have a substantial impact on applications: intelligent simulation, information-resource specialists, intelligent project coaches, and robot teams.

Intelligent simulations generate realistic simulated worlds that enable extensive, affordable training and education anytime and anywhere. Examples might be hurricane crisis management, exploration of the impacts of different economic theories, tests of products on simulated customers, and the testing of design features through simulation that would cost millions of dollars to test with an actual prototype.

Information-resource specialist systems (IRSS) will enable easy access to information related to a specific problem. For instance, a rural doctor whose patient presents with a rare condition might use IRSS to assess competing treatments or identify new ones. An educator might find relevant background materials, including information about similar courses taught elsewhere.

Intelligent project coaches (IPCs) could function as coworkers, assisting and collaborating with design or operations teams for complex systems. Such systems could recall the rationale of previous decisions and, in times of crisis, explain the methods and reasoning previously used to handle that situation. An IPC for aircraft design could enhance collaboration by keeping communication flowing among the large, distributed design staff, the program managers, the customer, and the subcontractors.

Robot teams could contribute to manufacturing by operating in a dynamic environment with minimal instrumentation, thus providing the benefits of economies of scale. They could also participate in automating sophisticated laboratory procedures that require sensing, manipulation, planning, and transport. The AI robots could work in dangerous environments with no threat to their human builders.


As mentioned in the introduction, AI is both a common topic in science fiction and an actor in our daily lives as we interact with everything from traffic lights to the help desk at our online merchant. As AI becomes more pervasive it raises ethical issues we must address and creates the specter of omnipotent technology that we no longer control.

At California's Institute for the Future, a think tank established in 1968 as a spin-off of the RAND Corporation to help organizations plan for the future, the question of robot rights is being considered in anticipation of the not-too-distant future in which robots work for, and interact with, their human creators.

Other ethical considerations include the possibility of expert systems replacing entry-level workers in customer-facing jobs. These robotic workers never tire, do not take vacations, do not unionize, and never ask for a raise. This has already happened to auto workers, assembly-line workers, and other repetitive manual-labor positions. In the future, expert systems might do your taxes, manage your retirement portfolio, or pull you over to give you a speeding ticket (actually, the ticket would go to your car, since it will probably be driving itself).

The ultimate ethical challenge is the prospect of the benevolent machine becoming malevolent and turning on its human master. This has been the subject of endless speculation in popular science-fiction novels and movies, but noted futurist Ray Kurzweil (1948–) predicts that by 2045 AI will be able to improve itself without any intervention from humans. In other words, the AI will be better, faster, smarter, and less prone to “breakdown” than the humans it serves. What are the ethics involved in how we interact with a superior machine? And why should we presume that an intelligent machine, immune to disease, fear, death, or emotion, would be in any way sympathetic to our system of morality?

Is there an ethical argument to be made that at some point we biological beings would be a burden to the continued growth of a superior machine culture and should be rendered obsolete? Have we not done the same to hundreds of species, from microbes to primates, that slowed our technological progress?


Edward Fredkin (1934–), a distinguished professor of physics at Carnegie Mellon University, posits that artificial intelligence is “the next stage in evolution.” His vision is shared by many scientists, researchers, futurists, and science-fiction devotees.

The research and commercial application of weak AI and true expert systems are progressing at a dizzying pace. At the same time, some strong AI robot systems are coming online to compete for DARPA grants and monetary prizes. Most of these are hazardous-environment or battlefield robots that are faster, stronger, more battle-damage resistant, and able to carry equipment loads that far exceed those of the strongest human soldier. In addition, these robots use laser-guided weaponry that virtually guarantees a shoot-kill sequence. They are unaffected by stress, noise, fear of personal harm, or other “miss-inducing” reactions to the “fog of war.”

As with many advancements in technology, these may begin with military applications, but it is a short leap from a robotic battlefield medic to an AI medic conducting a search and rescue mission after a natural disaster or an AI pilot flying an F-35 Advanced Stealth Fighter to one flying a commercial airliner.


A variety of disciplines have influenced the development of AI. These include philosophy (logic), mathematics (computability, algorithms), psychology (cognition), engineering (computer hardware and software), and linguistics (knowledge representation and natural-language processing). As AI continues to redefine itself, the practical application of the field will change.

AI supports national competitiveness as it depends increasingly on capacities for accessing, processing, and analyzing information. The computer systems used for such purposes must also be intelligent. Health care providers require easy access to information systems, so they can track health care delivery and identify the most effective medical treatments for their patients' conditions. Crisis management teams must be able to explore alternative courses of action and make critical decisions. Educators need systems that adapt to a student's individual needs and abilities. Businesses require flexible manufacturing and software design aids to maintain their leadership position in information technology, and to regain it in manufacturing. AI will continue to evolve toward a rational, logical machine presence that will support and enhance human endeavors.


Artificial intelligence. (2012). In S. D. Hill (Ed.), Encyclopedia of management (7th ed., pp. 17–21). Detroit, MI: Gale.

Kurzweil, R. (2005). The singularity is near: When humans transcend biology. New York, NY: Viking.


Kurzweil, R. (2012). How to create a mind: The secret of human thought revealed. New York, NY: Viking.

Lucci, S., & Kopec, D. (2013). Artificial intelligence in the 21st century: A living introduction. Dulles, VA: Mercury Learning and Information.

Rada, R. (2009). Artificial intelligence and investing. In M. Khosrow-pour (Ed.), Encyclopedia of information science and technology (2nd ed., Vol. 1, pp. 237–240). Hershey, PA: Information Science Reference.

Russell, S. J., & Norvig, P. (2014). Artificial intelligence: A modern approach (3rd ed.). Harlow, UK: Pearson Education.

Warwick, K. (2012). Artificial intelligence: The basics. New York, NY: Routledge.

Mark Jon Snyder
Lisa Gueldenzoph Snyder


Gale Document Number: GALE|CX3727500026