About

 

About Abuse II: The CHI 2006 Workshop On Misuse and Abuse of Interactive Technologies

Co-Chairs: Antonella de Angeli,  Sheryl Brahnam, Peter Wallis, Alan Dix

Location: Montréal, Québec, Canada

CHI 2006 website: www.chi2006.org

Download the CFP for Abuse II as a PDF file

Abstract

The goal of this workshop is to address the darker side of HCI by examining how computers sometimes bring about the expression of negative emotions. In particular, we are interested in the phenomenon of human beings abusing computers. Such behavior can take many forms, ranging from the verbal abuse of conversational agents to physically attacking the hardware. In some cases, particularly in the case of embodied conversational agents, there are questions about how the machine should respond to verbal assaults. This workshop is also interested in understanding the psychological underpinnings of negative behavior involving computers. In this regard, we are interested in exploring how HCI factors influence human-to-human abuse in computer-mediated communication. The overarching objective of this workshop is to sketch a research agenda on the topic of the misuse and abuse of interactive technologies that will lead to design solutions capable of protecting users and restraining disinhibited behaviors.

Introduction

Current HCI research is witnessing a shift from a materialistic perspective, in which the computer is viewed as a tool for cognition, to an experiential vision, in which the computer is described as a medium for emotion. Until recently, scientific investigations into the user's emotional engagement with computing were relatively few.

Since the turn of the century, a number of CHI workshops have launched investigations into the emotional component of the user's computing experience. For example, the CHI 2002 workshop Funology: Designing Enjoyment explored how fun and enjoyment could be better integrated into computer interface design. The organizers were puzzled by the fact that making computers fun to use had failed to generate significant interest despite Carroll and Thomas's [1] call to the HCI community in 1988 for a systematic study of enjoyable computing. Current research in funology echoes Norman's [2] conclusions about aesthetics: fun matters, and fun interfaces work better.

Unfortunately, enjoyment is not something added to an emotionally neutral computing experience. The user's experiences are colored by a host of emotions, many of them negative. Negative feelings do more than tarnish the user's experience, however. As Wensveen et al. [3] noted, "In human-product communication people also express emotion (often negative); for instance, they may shove a chair, bang a printer, or slam a door. While this behavior might offer some relief, it does not enhance communication or the experience. On the contrary, if we forcefully express our negative emotions we can break the product and diminish the beauty of interaction" (p. 60).

Abuse: The darker side of human-computer interaction [4] may well have been the first workshop to explicitly address negative emotions in computing and their behavioral consequences. The papers presented in that workshop demonstrated that interface design and metaphors can inadvertently rouse more than user dissatisfaction and angry reactions: they can promote a wide range of negative behaviors that are directed not only towards the machine but also towards other people.

An example of a metaphor that encourages abuse of the interface is the human-like interface, e.g., embodied conversational agents and automated voice systems. Although human-like interfaces are intended to make interaction with the computer more natural and socially engaging, examination of interaction logs demonstrates that users are prone to verbally abusing these interfaces [5]. In terms of promoting the abuse of other people, email, message boards, and chatrooms make it easy for people to engage in cyberbullying, flaming, and the posting of sexually embarrassing comments, accusations, and revelations.

In Abuse: The darker side of human-computer interaction, it was concluded that a comprehensive understanding of the HCI factors that promote negative behaviors is necessary if we are to begin designing interfaces that enhance the user's computing experiences and encourage user collaboration with the interface and with other users.

The primary goals of Misuse and Abuse of Interactive Technologies, the second Abuse workshop, are to work out a definition of computer-mediated abuse that is relevant to HCI, to identify design factors that promote the misuse and abuse of interactive technologies, and to sketch a research agenda that will lead to design solutions capable of protecting users and restraining disinhibited behaviors.

Issues

Misuse and Abuse of Interactive Technologies intends to analyze the phenomenon of computer-mediated abuse from several perspectives and with regard to different applications. The topic is likely to be of interest to a range of research streams in HCI, including studies of computers as social actors, affective computing, and social analyses of on-line behavior. The purpose of this interdisciplinary workshop is to bring together researchers who have encountered instances of abusive behavior in HCI, who may have given some thought to why and how it happens, and who have some ideas on how pro-active, agent-based interfaces should respond. We expect to generate a debate on computer-mediated abuse, on the abuse of agents as cultural artefacts, and on the effects of such abuse on the agent's task, on believability, and on interface design in general. This discussion should provide a foundation for understanding the misuse and abuse of interactive technologies and for developing a systematic approach to designing interfaces that counter these abuses.

As software evolves from the tool metaphor to the agent metaphor, understanding the role of abuse in HCI and its effect on the task at hand becomes increasingly important. People tend to misuse and abuse tools, it is true, but no one expects a hammer (or a desktop) to respond. With the agent model, however, software can be autonomous and is expected to take responsibility for its actions. Conversational agents are a clear case of a software entity that might be expected to deal with user verbal assaults. Virtual assistants, to take a classic application, should not just provide timely information; a virtual assistant must also be a social actor and participate in the games people play. Some of these games appear to include abusive behavior.

At first glance, abusing the interface, as in the example above, may not appear to pose much of a problem: nothing that could accurately be labeled abuse, since computers are not people and thus not capable of being harmed. That the human abuse of human-like agents is not considered a serious problem is evidenced by the fact that the research literature is mostly silent on this issue. Nevertheless, the fact that abuse, or the threat of it, is part of the interaction opens important moral, ethical, and design issues. As machines begin to look and behave more like people, it is important to ask how they should behave when verbally attacked. Is it appropriate for machines to ignore aggression? If agents do not acknowledge verbal abuse, will this only serve to aggravate the situation? If potential clients are abusing virtual business representatives, then to what extent are they abusing the businesses or the social groups the human-like interfaces represent?

Another concern is the potential that socially intelligent agents, especially embodied conversational agents, have for taking advantage of customers, especially children, who innocently attribute to these characters such warm human qualities as trustworthiness [6]. It is feared that these relationship-building agents could be used as a potent means of marketeering, branding, and advertising [7], dangerous for children and adults alike (take, for instance, the virtual girlfriends offered at v-girl.com, which are designed to probe men's spending habits, ply men for demographic information, and generate income by petulantly demanding virtual presents). Socially intelligent agents have the potential to exploit our emotional needs and our propensity for suspending disbelief.

In addition to the issues and questions posed above, some of the larger questions and issues we hope to address during the workshop are the following:

  • " How does the misuse and abuse of the interface affect the user's computing experience?

  • " How do different interface metaphors (embodied conversational characters, windows, desktop) shape a propensity to misuse or abuse the interface?

  • " What design factors trigger or restrain disinhibited behaviors?

  • " How does computer-mediated abuse differ from other forms of abuse e.g., the abuse of people, symbols, flags, sacred objects, and personal property? Is it appropriate to use the term abuse in this context?

  • Abuse can be a part of our social world. It is something we avoid. How can we develop machines that learn to avoid user abuses?

As the workshop is intended to be interdisciplinary, the questions and methodologies discussed will be of interest to a broad audience, including social scientists, psychologists, computer scientists, and those involved in the game industry. To help inform our questioning, we would also welcome philosophical and critical investigations into the abuse of computing artifacts. For more information, see the CFP.
 

References

[1] Carroll, J.M. and Thomas, J.C. Fun. SIGCHI Bulletin, Vol. 19: 3 (1988) 21-24.
[2] Norman, D.A. Emotion and Design: Attractive Things Work Better. Interactions, Vol. 9: 4 (2002) 36-42.
[3] Wensveen, S., Overbeeke, K., Djajadiningrat, T., and Kyffin, S. Freedom of Fun, Freedom of Interaction. Interactions, Vol. 11: 4 (2004) 59-61.
[4] de Angeli, A., Brahnam, S., and Wallis, P. Abuse: The Darker Side of Human-Computer Interaction. INTERACT 2005, Rome, Italy (2005) 91-92.
[5] de Angeli, A. and Carpenter, R. Stupid Computer! Abuse and Social Identity. In Abuse: The Darker Side of Human-Computer Interaction (2005) 19-25.
[6] Bickmore, T. and Picard, R. Establishing and Maintaining Long-Term Human-Computer Relationships. ACM Transactions on Computer Human Interaction (ToCHI), Vol. 12: 2 (2005) 293-327.
[7] Duck, S. Talking Relationships into Being. Journal of Social and Personal Relationships, Vol. 12: 4 (1995) 535-540.
 

About Abuse I: The Interact 2005 Workshop On Abuse: The Darker Side of Human-Computer Interaction

Co-Chairs: Antonella de Angeli,  Sheryl Brahnam, Peter Wallis

Download the CFP for Abuse I as a PDF file

There seems to be something innate in the human/computer relationship that brings out the dark side of human behaviour. Anecdotal evidence suggests that of the utterances made to chat-bots or embodied conversational agents (ECAs) in public places, 20-30% are abusive. Why is that? Is it simply that a quarter of the human population are 'sick' and find abusing a machine to be in some way therapeutic? If so, it says something about human nature that is quite disturbing and in need of further study. Perhaps the phenomenon is directly caused by a new technology. In the early days of computer-mediated communication there was a tendency for people to abuse each other, but this has become far less common. Will the abuse of ECAs likewise just naturally become a thing of the past?

The Turing Test has also had a considerable influence on what is considered appropriate behaviour when talking to a computer. To what extent is abuse simply a way people test the limits of a conversational agent? Perhaps the problem is one of design: the aesthetics of everyday things is key to their success, and perhaps abuse is simply one end of a continuum in the 'aesthetics' of interactive things with human-like behaviour. The extent to which abuse occurs seems to indicate something fundamental about the way humans interact. Is abuse simply the most noticeable phenomenon connected to something fundamental in the way we humans communicate?

The purpose of this workshop is to bring together engineers, artists and scientists who have encountered this phenomenon, who might have given some thought to why and how it happens, and have some ideas on how pro-active, agent-based interfaces, typified by chat-bots or ECAs, should respond to abuse.
 

Agent-Based Interfaces

Unlike conventional software, agent-based interfaces can be expected to deal with abuse. Whereas most software is designed as a tool to be used by a human, conversational agents convey a sense of autonomous action. With the computer-as-tool metaphor, the human operator is responsible for outcomes. One cannot really blame the hammer when one hits one's thumb, and one cannot, really, blame MS-Word when one hasn't read the manual. With the agent metaphor of software, the agent is in some sense aware of its surroundings (situated) and responsible (autonomous) for its behaviour. An ECA can be expected to be pro-actively helpful and to take some responsibility for the user's experience. If the user expects this and starts abusing the system, then the system's response will be interpreted as a social act. As Reeves and Nass point out in "The Media Equation" (CSLI Publications, 1996), we often treat machines as if they were in some way human. Whereas we know we are playing make-believe when we blame a hammer for a sore thumb, we are often unaware of our anthropomorphic behaviour when dealing with computers. In some contexts, ignoring abuse simply encourages more. Are ECAs that ignore abuse simply "pushing our buttons" and encouraging more of it?
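
To make the design question concrete, the following is a minimal, purely hypothetical sketch (in Python) of an ECA turn handler that acknowledges abusive input instead of ignoring it. The keyword test, the canned replies, and the escalation counter are illustrative assumptions only; they are not a recommended policy or a system discussed at the workshop.

    # Hypothetical sketch of an ECA turn handler that acknowledges abuse
    # rather than ignoring it. The detection test, the replies, and the
    # escalation counter are assumptions for illustration only.

    class AbuseAwareAgent:
        ABUSIVE_TERMS = {"stupid", "useless", "idiot"}

        def __init__(self):
            self.strikes = 0   # how many abusive turns we have seen so far

        def respond(self, utterance):
            if any(term in utterance.lower() for term in self.ABUSIVE_TERMS):
                self.strikes += 1
                if self.strikes == 1:
                    return "That sounded harsh. Did I misunderstand your request?"
                return "I would rather keep this civil. How can I actually help?"
            return "Let me look into that for you."

    agent = AbuseAwareAgent()
    print(agent.respond("You are useless"))    # first strike: asks for clarification
    print(agent.respond("Still useless"))      # later strikes: sets a boundary
    print(agent.respond("Book me a flight"))   # ordinary input: ordinary reply

Even a toy policy like this makes the social stakes visible: whatever the agent says after an abusive turn will be read as a social act, so "no response" is itself a response.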

ECAs, by their very nature, interact with us at a social level and must play by our rules. It is hard for us as social actors to separate the machine's actions from those we see other people use every day. Consequently, until we have a better understanding of the relationship between humans and virtual agents, the commercial potential of such agents is questionable. A virtual personal assistant, for instance, cannot simply provide timely information; if that information is to be believed, the agent must present itself as a reliable source. Although we might know, in a conceptual way, that the interface does not change the quality of the data from a particular source, we cannot help but respond to the conventions used by the human agents we have grown up with. If abuse, or the threat of it, is part of those conventions, then a trusted virtual assistant will need to be able to play that game.

Similar concerns arise when using ECAs on corporate websites. To what extent does the ECA represent the organisation? If potential clients are abusing your virtual representative, then to what extent are they abusing your organisation? How can the agent change the views of clients, and to what extent is the process not about providing information but about behaving well on the social front?

Those interested in automated learning and virtual tutors must also consider the social skills relevant to the process of imparting knowledge. If the student's interface is perceived as a social actor, and the student has no respect for that agent, then to what extent can the student be expected to learn?

In the other direction, virtual characters in computer-based games might benefit from being able to generate abusive behaviour at an appropriate level. To what extent is abuse a means of social positioning and a key process in the establishment of identity and group membership? Games such as "Leisure Suit Larry" are explicitly about social relations, but the behaviour of the characters involved is, like that of the monsters in "Quake," obviously not human. A better understanding of how we humans manage our relations could be key to the next generation of products in this multi-billion dollar industry.

Abuse is prevalent and easily detected in human-computer interfaces that use conversational agents. We hope that studying this phenomenon will lead to a better understanding of human behaviour in a social context, and to human-computer interfaces that interact in a natural manner. What counts as natural, however, needs more study, and the way people abuse computers may hold a key to the next generation of pro-active computer interfaces.
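
As a rough illustration of what "easily detected" can mean in practice, the short Python sketch below scans a log of user utterances against a small keyword lexicon. The lexicon, the sample log, and the abuse_rate helper are hypothetical assumptions made for illustration; they are not the instrument or the data behind the figures cited on this site.

    # Hypothetical sketch: flag potentially abusive user utterances in a log.
    # The lexicon and the sample log are illustrative assumptions only.

    ABUSIVE_TERMS = {"stupid", "idiot", "useless", "shut up", "hate you"}

    def is_abusive(utterance):
        """Crude lexicon match; real coding schemes are far richer."""
        text = utterance.lower()
        return any(term in text for term in ABUSIVE_TERMS)

    def abuse_rate(log):
        """Fraction of utterances in the log that match the lexicon."""
        return sum(is_abusive(u) for u in log) / len(log) if log else 0.0

    sample_log = ["hello", "you are useless", "what time is it?", "stupid machine"]
    print(f"abuse rate: {abuse_rate(sample_log):.0%}")   # prints: abuse rate: 50%

A lexicon match of this kind only surfaces candidate utterances; deciding what genuinely counts as abuse, and how an interface should respond, remains the open question this workshop addresses.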


About the Web

Purpose

Agentabuse.org is intended to become a dynamic repository for workshop announcements and relevant research that explores aggressive behaviour towards agent interfaces and strategies for dealing with abuse.

Web Design

The first abuse workshop, Abuse: The darker side of HCI, was held in Rome, and the theme of this website reflects that fact. The images found on this site are manipulated photographs of some of Rome's famous "talking sculptures." These talking sculptures began to speak in the 16th century when, in the middle of the night, poets and essayists hung placards on them that mocked the ruling class. Passersby would congregate to read these satires in the morning before officials broke up the crowds and removed the offending literature.

On the homepage is the reclining statue of Silenus, known as Babuino, located in the central via del Babuino. Today he "talks" to the people of Rome via the graffiti that often crowds the wall behind him.

The watermark in the upper left hand corner of this page and on all the text pages in this website is the image of Pasquino. He stands on the square just behind piazza Navona. Today the people of Rome plaster him with poems and satires that mock the corruption and incompetence of current political leaders. 

Our chatbot on the talk page is Madama Lucrezia. She stands on the corner of Palazzetto Venezia, in piazza San Marco. She probably represents Isis as she was taken from a temple dedicated to this goddess. 

As it is not uncommon for the people of Rome to use these sculptures to express opposing views, the talking sculptures of Rome are known to argue with one another. Looking at them, it is not too hard to imagine them coming to blows: their noses are smashed and their arms broken, and time and weather and layers of plastered pages ripped off by angry officials have scratched away their faces and forms. For us the talking sculptures of Rome have come to epitomize the abused artifact.