Robots and Social Robotics

Citation metadata

Editor: Michael Shally-Jensen
Date: 2011
Source: Encyclopedia of Contemporary American Social Issues, Vol. 4: Environment, Science, and Technology
Publisher: ABC-CLIO
Document Type: Topic overview
Pages: 9


Long an inspiration for science fiction novels and films, the prospect of direct, personal, and intimate interaction between humans and robots is the focus of contemporary debate among scientists, futurists, and the public. The term robot, derived from the Czech word robota (“forced labor”), comes from the play R.U.R., for “Rossum's Universal Robots,” written by Czech author Karel Capek in 1920. In this play, humanoid automata overthrow and exterminate human beings, but because the robots cannot reproduce themselves, they also face extinction. The play was internationally successful at the time, engaging public anxieties produced by rapid industrialization, scientific change, and the development of workplace automation.

In the play, inventor Rossum's robots are fully humanoid. Such machines are sometimes referred to as androids, or as gynoids when they have feminine characteristics. Humanoid or anthropomorphic robots represent only one kind of robot, however; robots vary both in their degree of automation and in the extent to which they are anthropomorphic. The sophisticated animatronic human figures of amusement parks represent some of the best imitations of human movement, although programming controls all of their actions. Social robotics focuses on the representation of human communication and social interaction, although no such systems to date are capable of independent locomotion, and they resemble human forms and faces only slightly. Industrial robots, the least humanlike in form and movement, are designed not to mimic the human form at all but to carry out specific manufacturing processes efficiently.

Levels of Control

The degree to which a robot is capable of autonomous or self-directed responses to its environment varies. Many if not most robotic systems are extremely limited in their responses, and their actions are completely controlled by programming. There are also robots whose actions are controlled directly by a human operator. For example, bomb-squad robots are controlled by a human operator who, using cameras and radio or other wireless connections, can direct the detailed operations of the robot to defuse a bomb. Only a handful of experimental systems have more than a very limited range of preset responses to environmental stimuli, ranging from rote conversations in social robots to simple obstacle-avoidance algorithms in mobile robots. It has been, for example, very difficult to develop a reliable robot that can walk with a human gait in all but the most controlled environments.

These different levels of control connect robotics to cybernetics, or control theory. The term cybernetics comes from the Greek kybernetes, meaning steersman or governor. There are many kinds of cybernetic systems. For example, the float in the tank of a toilet that controls water flow and the thermostat on the wall that controls temperature are simple cybernetic systems in which information about the environment (feedback) is translated into a command for the system. For floats, the feedback is of a straightforward mechanical nature. Thermostats use a very simple electrical signal to tell a furnace or air conditioner to turn on or off. Animatronics at amusement parks and complex robotic toys use information about the balance of the device and its location relative to obstacles to compute changes in position, speed, and direction. The more complex the desired behavior and the more independent the robot is supposed to be, the more information is needed, and thus the more numerous and costly the sensors for collecting data and the greater the computing power needed to calculate and control the device's possible responses to its environment.
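The thermostat described above can be sketched as a minimal feedback loop. The following is an illustrative simulation only (the controller, the room model, and all numbers are assumptions, not drawn from the source):

```python
def thermostat_step(temp, setpoint, heater_on, hysteresis=0.5):
    """Bang-bang control: feedback (the measured temperature)
    is translated into a command (furnace on or off)."""
    if temp < setpoint - hysteresis:
        return True   # too cold: turn the furnace on
    if temp > setpoint + hysteresis:
        return False  # too warm: turn it off
    return heater_on  # within the deadband: leave the state unchanged

# Simulate a room: the furnace adds heat; the room leaks heat outdoors.
temp, heater_on = 15.0, False
for _ in range(60):
    heater_on = thermostat_step(temp, setpoint=20.0, heater_on=heater_on)
    temp += (1.0 if heater_on else 0.0) - 0.1 * (temp - 10.0)  # heat in, heat lost

print(round(temp, 1))
```

Even this toy loop shows the core cybernetic pattern: the system senses its environment, compares the reading with a goal, and issues a command, so the temperature settles near the setpoint rather than drifting with the outdoor temperature.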

Debating Limits and Social Values

The cost and complexity of a robot with a broad range of responses to the environment point to the first of two controversies surrounding robotics. The first controversy surrounds the limits to automation on a theoretical level. Is there anything that cannot be done by a robot or automated system? The second set of controversies is about the desirability of robotic systems, particularly in terms of their impact on labor and economics. That is, even if we can automate something, should we? These two sets of controversies overlap in several places.

Debates about the limits to automation within the robotics and artificial intelligence communities have many dimensions. There are debates, for example, as to whether certain kinds of knowledge or action can be successfully automated. Can medical knowledge, for instance, be fully captured in automatic diagnosis systems? There are also intense technical debates as to which algorithms or programs might be successful. Simple mimicry, and closed programs that map out every possibility, are considered weak in comparison with algorithms that can generate appropriate responses in more open-ended systems, even though the closed approaches can be cost-effective and reliable substitutes. One of the continuing debates has to do with the balance between anthropomorphism and specificity. Human beings are good at a great many different tasks, so it is very difficult, and perhaps inefficient, to try to make robotic systems with that degree of generalizability. A robot that can do one very specific thing with high accuracy may be far superior and more cost-effective, if less adaptable (and less glamorous), than a generalized machine that can do many things.

The most publicly debated controversies surrounding robots and robotics concern economics and labor. Superficially, robots replace human workers. But because robots lower production costs, their implementation can also expand production and possibly increase employment. The workers displaced may not get new jobs that pay as well as the jobs taken over by automation, however, and they may also be at a point in their working lives where they cannot easily retrain for new work. Robots as labor-saving technologies do not make sense in places where there is a surplus of labor and wages are very low.

The first implementations of robots in workplaces did displace human workers and often degraded work. Work was deskilled, as knowledge and technique were coded into the machine. This deskilling model holds for some cases of automation, but it also became apparent that automatic systems do not always or necessarily deskill human labor. It is possible to adapt automation and computer systems to work settings in which they add information to work processes rather than extracting information from people and embedding it in machines. In this information systems approach, human labor is supported by data collection and robotics systems, which provide more information about and control over processes. The automation-versus-information debate has been complicated by office automation systems, which have led to debates about whether new workplace technologies centralize managerial control or decentralize decision processes in organizations.

Marx's labor theory of value helps explain the nuances of the economics of robotics implementation. In this theory, workers do not receive the full value of their efforts as wages; the surplus is extracted by owners as profit. As the size of the labor pool increases, wages are driven downward and automation becomes economically undesirable. Skilled labor is the ideal target for automation because of its higher proportional wage costs, yet complex work is the most expensive to automate. Routine labor, often perceived to be low-skill, is targeted for replacement by robotic systems, but the economic benefits of automating routine labor are ambiguous. To paraphrase Norbert Wiener, one of the fathers of modern cybernetics, anything that must compete with slave labor must accept the conditions of slave labor, and thus automation generally depresses wages within the occupational categories automated. Of course, new jobs also emerge to build and maintain the machines, and these are generally high-skill, high-wage jobs with a high degree of work autonomy. Consider the automatic grocery-store checkout system. There are usually four stations and one clerk, so having customers use the automatic system seems to save the wages of at least three checkout clerks. But the costs of design, implementation, and upkeep of these systems may be very high: the wages of one programmer may be more than those of the four clerks replaced. So it is not clear in the long term whether automatic checkout systems will save money for grocery stores or for customers.
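The checkout example can be made concrete with back-of-the-envelope arithmetic. All of the figures below are hypothetical assumptions chosen for illustration, not data from the source:

```python
# Hypothetical annual costs: do four self-checkout stations plus one
# attending clerk beat four staffed lanes?
clerk_wage = 30_000        # assumed annual wage per checkout clerk
programmer_wage = 130_000  # assumed annual wage for the maintaining programmer
station_upkeep = 5_000     # assumed annual upkeep per self-checkout station

staffed_lanes = 4 * clerk_wage
self_checkout = 1 * clerk_wage + programmer_wage + 4 * station_upkeep

print(staffed_lanes, self_checkout)  # 120000 180000
```

Under these assumed figures the automated arrangement costs more, not less: the single programmer's wage exceeds the combined wages of the clerks replaced, which is exactly the ambiguity the paragraph describes. Different wage or upkeep assumptions can flip the comparison.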


Practical Considerations

There are two continuing problems confronting the implementation of robotics and automated systems. The first is the productivity paradox: despite rapid increases in computing power (doubling approximately every 18 months) and in the sophistication of robotics, industrial productivity increases at a fairly steady 3 percent per year. This large gap between changes in technology and changes in productivity can be explained by several factors, including the time human operators need to learn new systems, the increasing costs of maintaining them, and the bottlenecks that cannot be automated but have the greatest influence on the time or costs associated with a task.
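The scale of the paradox is easy to see by compounding the two growth rates over a decade (an illustrative calculation using the figures quoted above):

```python
# Ten years of compounding: computing power doubling every 18 months
# versus productivity growing a steady 3 percent per year.
years = 10
compute_growth = 2 ** (years / 1.5)   # one doubling per 1.5 years
productivity_growth = 1.03 ** years   # 3 percent annual growth

print(round(compute_growth), round(productivity_growth, 2))  # 102 1.34
```

Roughly a hundredfold increase in computing power accompanies only about a third more productivity, which is why learning time, maintenance costs, and non-automatable bottlenecks loom so large in explanations of the gap.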

The second problem with robotics implementation is the perception of the level of skill of the tasks targeted for automation. For example, robots are seen by some groups of roboticists and engineers as somehow suited for taking care of the elderly. The work of eldercare is perceived as low-skill and easy to conduct; it is also seen as undesirable and thus a target for automation. Although the work is definitely low-paying and difficult, there may be a serious mismatch between the actual complexity of the work and the wages it commands, a mismatch that contributes to labor shortages in the field. The work of taking care of the elderly may not be as routine as outsiders perceive it to be, and it may thus be extremely difficult to automate reliably or with any measure of cost-effectiveness.

So perceptions about work, as much as economic issues, shape the implementation of robotic systems. These perceptions about the nature of work and the nature of robots play themselves out in popular media. In the 1920s, whether in Capek's R.U.R. or Fritz Lang's film Metropolis, robots on stage and screen represented sources of cultural anxiety about the rapid industrialization of work and the concentration of wealth. More recent films, such as the Terminator and Matrix series, are similarly concerned with our dependence on complex technological systems and robotics, and with the extent to which robots take on lives of their own and render human beings superfluous. These media representations magnify real problems of worker displacement and questions about human autonomy that are embodied in robotic systems.

Social Robotics

Autonomous machines that can interact with humans directly by exhibiting and perceiving social cues are called social robots. They are the materialization of futuristic visions of personable, socially interactive machines popularized by fictional characters like Star Wars’ R2-D2 and C-3PO, The Terminator's T-800, and A.I.'s David. Topics of contention in social robotics concern the capability of machines to be social, the identification of appropriate applications for socially interactive robots, their potential social and personal effects, and the ethical implications of socially interactive machines.

Since the 1960s, the primary use of robots has been for repetitive, precise, and physically demanding jobs in factories and dangerous tasks in minefields, nuclear “hot spots,” and chemical spills. In contrast, today's social robotics projects envision new roles for robots as social entities—companions and entertainers, caretakers, guides and receptionists, mediators between ourselves and the increasingly complex technologies we encounter daily, and tools for studying human social cognition and behavior. Although social robotics projects have their start in academic, corporate, and government labs, social robots are coming into closer contact with the general public. In 2003, Carnegie Mellon University (CMU) unveiled the world's first Roboceptionist, which gives visitors to the Robotics Institute information and guidance as it engages in humorous banter. Researchers in Japan have developed a number of different social robot prototypes and working models.

Human-Robot Interactions

Social robots are built with the assumption that humans can interact with machines as they do with other people. Because the basic principles of human–human interaction are not immediately obvious, roboticists have developed a variety of approaches for defining social human–robot interaction. In some cases, social roboticists use a range of individual traits to define social machines: the capacity to express and perceive emotion; the skill to engage in high-level dialogue; the aptitude to learn and recognize models held by other agents; the ability to develop social competencies, establish and maintain social relationships, and use natural social cues (gaze, gestures, etc.); and the ability to exhibit distinctive personality and character. Cynthia Breazeal describes Kismet, the first robot designed specifically for face-to-face interaction, as a “sociable robot.” By using the term sociable, Breazeal emphasizes that the robot will be pleasant, friendly, and fond of company. Such robots, though potentially agreeable assistants, cannot be fully social because they would not be capable of the range of social behavior and affective expression required in human relationships. In qualifying robot sociality, Kerstin Dautenhahn uses a more systemic view and emphasizes the relationship between the robot and the social environment. She differentiates between “socially situated” robots, which are aware of the social environment, and “socially embedded” robots, which engage with the social environment and adapt their actions to the responses they get.

Although roboticists cite technological capabilities (e.g., processor speed, the size and robustness of hardware and software components, and sensing) as the main barrier to designing socially interactive robots, social scientists, humanities scholars, and artists draw attention to the social and human elements that are necessary for social interaction. Philosophers John Searle and Daniel Dennett contest the possibility of designing intelligent and conscious machines. Psychologist Colwyn Trevarthen and sociologist Harry Collins argue that humans may interpret machines as social actors, but the machines themselves can never be truly social. Social psychologist Sherry Turkle shows how social robots act as “relational machines” that people use to project and reflect on their ideas of self and their relationships with people, the environment, and new technologies. Other social scientists argue that the foundation for human and possibly robot sociality is in the subtle and unconscious aspects of interaction, such as rhythmic synchronicity and nonverbal communication. These approaches suggest that gaining a better understanding of human sociality is an important step in designing social robots. Both social scientists and roboticists see robots as potentially useful tools for identifying the factors that induce humans to exhibit social behavior toward other humans, animals, and even artifacts.

Sidebar: The Uncanny Valley

The “uncanny valley” hypothesis, proposed by Japanese roboticist Mori Masahiro, suggests that the degree to which a robot is “humanlike” has a significant effect on how people react to the robot emotionally. According to Mori, as a robot is made more humanlike in appearance and motion, humans will have an increasingly positive emotional response to it up to a certain point. When the robot resembles a human almost completely but not quite, people will find it repulsive, creepy, and frightening—much as they do zombies and corpses. Once it becomes impossible to differentiate the robot from a human, the response becomes positive again. Although it is widely discussed and cited in the social robotics literature, the uncanny valley hypothesis has not been experimentally tested. One of the difficulties is that the main variables involved, humanlikeness and familiarity, are themselves quite complex and not easily defined.

Although it is generally agreed that a robot's appearance is an important part of its social impact, the variety of social robot shapes and sizes shows that there is little agreement on the appropriate design for a robot. David Hanson's K-bot and Hiroshi Ishiguro's Actroid and Geminoid robots resemble humans most closely, including having specially designed silicone skin and relatively smooth movements. These robots are known as androids. Along with humanoid robots, which resemble humans in shape, androids express the assumption that a close physical resemblance to humans is a prerequisite for successful social interaction. This assumption is often countered by the hypothesis that human reactions to an almost-but-not-quite-human robot would be quite negative, commonly known as the “uncanny valley” effect. In contrast, Hideki Kozima's Keepon and Michio Okada's Muu robots are designed according to minimalist principles. This approach advocates that a less deterministic appearance allows humans to attribute social characteristics more easily. Researchers often use a childlike appearance for robots when they want to decrease users’ expectations from machines and inspire people to treat them like children, exaggerating their speech and actions, which makes technical issues such as perception easier. Surprisingly, research in human–robot interaction (HRI) has shown that machines do not have to be humanlike at all to have social characteristics attributed to them. People readily attribute social characteristics to simple desktop computers and even Roomba vacuum cleaners.

Finding a Place for Robots

Roboticists claim that social robots fundamentally need to be part of a society, which would include both humans and machines. What would a future society in which humans cohabit with robots look like? Information technology entrepreneurs such as Bill Gates forecast robotics as the next step in the computing revolution, in which computers will be able to reach us in ever more intimate and human ways. Ray Kurzweil, futurist and inventor, sees technology as a way for humanity to “transcend biology,” and Hans Moravec claims that, by the year 2040, robots will be our cognitive equals—able to speak and understand speech, think creatively, and anticipate the results of their own and our actions. MIT professor Rodney Brooks views the robots of the future not as machines but as “artificial creatures” that can respond to and interact with their environments. According to Brooks, the impending “robotics revolution” will fundamentally change the way in which humans relate to machines and to each other. A concurring scenario, proposed by cognitive scientists such as Andy Clark, envisions humans naturally bonding with these new technologies and seeing them as companions rather than tools. In his famous Wired magazine article “Why the Future Doesn't Need Us,” Bill Joy counters these optimistic representations of technological advancement by recasting them as risks to humanity, which may be dominated and eventually replaced by intelligent robots.

Views echoing Joy's concerns are common in American fiction, film, and the media. This fear of robots is colloquially known as the “Frankenstein complex,” a term coined by Isaac Asimov and inspired by Mary Shelley's novel describing Dr. Frankenstein's loathing of the artificial human he created.

Robotics technologies are regularly suggested as viable solutions for social problems facing developed nations, particularly the steady increase in the elderly population and the attendant rising demand for caretaking and domestic assistance services. The Japan Robot Association (JARA) expects advanced robotic technologies to be a major market by 2025. In May 2004, Japan's Ministry of Economy, Trade and Industry (METI) made “partner robots” one of seven fields of focus in its latest industrial policy plan. Visions of a bright future for commercial robots have been put into question by difficulties in finding marketable applications. Sony's AIBO, which was credited with redefining the popular conception of robots from that of automated industrial machines to a desirable consumer product, was discontinued in 2006. Mitsubishi did not sell even one unit of its yellow humanoid Wakamaru. Honda's ASIMO has opened the New York Stock Exchange, visited the European Parliament, shaken hands with royalty, and been employed by IBM as a $160,000-per-year receptionist, but Honda has yet to find a viable application for it in society at large.


Sidebar: Robots as Cultural Critique

Artists engage in robotics to provide cultural and social critiques and to question common assumptions. Norman White's Helpless Robot upends our expectations of robots as autonomous assistants by getting humans to aid the immobile robot by moving it around. In the Feral Robotic Dogs project, Natalie Jeremijenko appropriates commercial robotic toys and turns them into tools that the public can use for activist purposes, such as exploring and contesting the environmental conditions of their neighborhoods. The Institute for Applied Autonomy's Little Brother robot uses cuteness to distribute subversive propaganda and circumvent the social conditioning that stops people from receiving such materials from humans. Simon Penny's Petit Mal and Tatsuya Matsui's robots Posy and P-Noir engage the assumptions of the robotics community itself, asking its members to question their motives and approaches in building robots that interact with humans.

Similar concerns about applications have kept NEC from marketing its personable robot PaPeRo. In the United States, social robots such as Pleo and Robosapien have been successful as high-tech toys. The most commercially successful home robotics application to date, however, is the iRobot vacuum cleaner Roomba, which had sold over 2.5 million units as of January 2008.

Ethical Concerns

Social robots bring up novel ethical challenges because both roboticists and critics envision them having profound and direct impacts, intended as well as unintended, on humans and the environment. Even with their current limited capabilities, interactions with social robots are expected to change not only our understanding but also our experience of sociality. Although social roboticists overwhelmingly focus on the potential positive influences of these machines, their emphasis on the technical challenges of making social machines can produce designs that have unanticipated consequences for their users, for individuals who perform the jobs for which the robots were designed, and for society in general. Critics have questioned the effects that interaction with machines rather than humans can have on the quality of interaction, especially for vulnerable populations such as children and the elderly. The introduction of robots into certain occupations—such as nursing, the caregiving professions in general, and teaching—is not always seen as a benefit by existing employees. People are concerned that they may have to work harder to compensate for a robot's deficiencies or that their work has been devalued and reduced to an unskilled, mechanical operation. The unemployment that resulted from factory automation raises further concerns about the effects of robots taking over service-sector jobs. The development of socially oriented robotic technologies also calls on us to consider the limitations and capabilities of our social institutions (family, friends, schools, government) and the pressures they face in supporting and caring for children and the elderly (e.g., extended work hours for both parents, dissolution of the extended family and reliance on a nuclear family model, ageism and the medicalization of the elderly).

Jennifer Croissant and Selma Sabanovic

Further Reading

Breazeal, Cynthia, Designing Sociable Robots (Intelligent Robotics and Autonomous Agents). Cambridge, MA: MIT Press, 2002.

Noble, David, Forces of Production: A Social History of Industrial Automation. New York: Oxford University Press, 1986.

Reeves, Byron, and Clifford Nass, The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. CSLI Lecture Notes. Stanford, CA: Center for the Study of Language and Information Publications, 2003.

Siciliano, Bruno, and Oussama Khatib, eds., Springer Handbook of Robotics. Berlin: Springer, 2008.

Thomas, Robert J., What Machines Can't Do: Politics and Technology in the Industrial Enterprise. Berkeley: University of California Press, 1994.

Volti, Rudi, Society and Technological Change, 6th ed. New York: Worth, 2009.

Wallach, Wendell, Moral Machines: Teaching Robots Right from Wrong. New York: Oxford University Press, 2009.

Wood, Gaby, Edison's Eve: A Magical History of the Quest for Mechanical Life. New York: Anchor Books, 2002.

Zuboff, Shoshana, In the Age of the Smart Machine: The Future of Work and Power. New York: Basic Books, 1988.

Zylinska, Joanna, ed., The Cyborg Experiments: Extensions of the Body in the Media Age. New York: Continuum, 2002.

Source Citation


Gale Document Number: GALE|CX1762600212