The idea of a robot becoming a household fixture that performs everyday service tasks is accepted by most and embraced by many, with products such as Amazon's Alexa penetrating the market with great force. Asking a robot to switch off the light or bring you a Latte Macchiato is hardly a remarkable feat, and it doesn't quite amount to Star Wars' C-3PO. Before we unleash our fantasy of a sophisticated talking and walking robot buddy, let's meet a robot with a humbler beginning than those of Hollywood: a lab with researchers swarming around to make him as intimate to human experience as a true companion could be. Unlike C-3PO, Travis is a small non-anthropomorphic robot with a slight creature-like appearance, capable of basic gestures such as nodding and swaying. At 11 inches, he is just tall enough for his head to be roughly level with that of a seated person facing him when he is placed on a desk.
Gurit Birnbaum, an associate professor at the Baruch Ivcher School of Psychology at the IDC, together with Dr. Guy Hoffman of Cornell University, set out to answer a critical question in human-robot interaction: can a robot's actions elicit an emotional response in humans that could have desirable long-term effects on well-being? They designed two studies: the first examined the influence of a robot's responsiveness on human disclosure of a negative event, and the second asked whether a robot's responsiveness during disclosure of a positive event would provide emotional support in a later, different context.
For the first study, 102 undergraduate volunteers from an Israeli university had a one-on-one session with the robot in which they were prompted to talk about a current problem or stressor in their life for up to seven minutes. The robot was programmed to respond with a stock of sentences and gestures that were minimally adjusted behind the scenes, unknown to the participants, to better fit the content of an individual's story. Pre-set phrases included "You must have gone through a very difficult time" and "I completely understand what you have been through," and the robot also displayed physical responsiveness by gently swaying back and forth to show intimacy and nodding in affirmation as the person spoke. The findings confirmed the hypothesis: a responsive robot did produce a genuine companion-like interaction with the human, as shown by participants' approach behaviors of leaning in towards the robot and their favorable reactions to its simple response cues. The results indicate that people perceive a valuable relationship with such a robot, which can act as a source of consolation in distressing times and foster a sense of trust, an element inherent to bonding in human relationships.
Building on the robot's promising supportive role in regulating human psychological states, the second study was conducted in a different context: disclosing a positive event. Participants were again videotaped while telling the robot about a recent positive dating experience. The main interest here was whether interacting with a responsive robot (versus a non-responsive one) would improve self-perception during a subsequent stressful task. After interacting with the robot, participants were instructed to spend two minutes talking to a potential romantic partner about topics such as their hobbies, positive traits, and future plans while being videotaped. Afterwards, the participants completed a self-evaluation questionnaire about their perceived mate value based on the video recording. The sense of responsiveness from the robot promoted positive self-evaluation and confidence in romantic pursuits, encouraging participants to see themselves as more appealing partners.
The robot's responsiveness proved to increase people's perception of its appealing traits and their willingness to turn to it in times of stress. Gurit Birnbaum's paper, "What Robots Can Teach Us About Intimacy: The Reassuring Effects of Robot Responsiveness to Human Disclosure," also cites the following potential in human-robot interaction: "These findings suggest that humans not only utilize responsiveness cues to ascribe social intentions to robots, but they actually adjust their behavior towards responsive robots; want to use such robots as a source of consolation; and feel better about themselves while coping with challenges after interacting with these robots" (Birnbaum et al., 2016). Her research team's findings emphasize the importance of designing assistive robots that appropriately display responsive behavior, both verbal and non-verbal, to support human psychological needs far beyond basic care.