Sunday, February 11, 2018

Folk psychology and Alexa


Paul Churchland made a large splash in the philosophy of mind and cognitive science several decades ago when he cast doubt on the categories of "folk psychology" -- the ordinary and commonsensical concepts we use to describe and understand each other's mental lives. In Paul Churchland and Patricia Churchland, On the Contrary: Critical Essays, 1987-1997, Paul Churchland writes:
"Folk psychology" denotes the prescientific, commonsense conceptual framework that all normally socialized humans deploy in order to comprehend, predict, explain, and manipulate the behavior of . humans and the higher animals. This framework includes concepts such as belief, desire, pain pleasure, love, hate, joy, fear, suspicion, memory, recognition, anger, sympathy, intention, and so forth.... Considered as a whole, it constitutes our conception of what a person is. (3)
Churchland does not doubt that we ordinary human beings make use of these concepts in everyday life, and that we could not dispense with them. But he is not convinced that they have a useful role to play in scientific psychology or cognitive science.

In our ordinary dealings with other human beings it is both important and plausible that the framework of folk psychology is approximately true. Our fellow human beings really do have beliefs, desires, fears, and other mental capacities, and these capacities are in fact the correct explanation of their behavior. How these capacities are realized in the central nervous system is largely unknown, though as materialists we are committed to the belief that there are such underlying neurological processes. But eliminative materialism doesn't have a lot of credibility, and the treatment of mental states as epiphenomena of the neurological machinery isn't convincing either.

These issues generated a great deal of discussion in the philosophy of psychology from the 1980s forward (link). But the topic seems all the more interesting now that tens of millions of people are interacting with Alexa, Siri, and the Google Assistant, and are often led to treat the voice as emanating from an intelligent (if not very intelligent) entity. I presume it is clear that Alexa and her counterparts are currently "question bots" with fairly simple algorithms underlying their capabilities. But how will we think about the AI agent when the algorithms are not simple; when the agents can sustain lengthy conversations; and when the interactions give the appearance of novelty and creativity?

It turns out that this is a topic that AI researchers have thought about quite a bit. Here is the abstract of "Understanding Socially Intelligent Agents—A Multilayered Phenomenon", a fascinating 2001 IEEE article by Persson, Laaksolahti, and Lönnqvist (link):
The ultimate purpose with socially intelligent agent (SIA) technology is not to simulate social intelligence per se, but to let an agent give an impression of social intelligence. Such user-centred SIA technology must consider the everyday knowledge and expectations by which users make sense of real, fictive, or artificial social beings. This folk-theoretical understanding of other social beings involves several, rather independent levels such as expectations on behavior, expectations on primitive psychology, models of folk-psychology, understanding of traits, social roles, and empathy. The framework presented here allows one to analyze and reconstruct users' understanding of existing and future SIAs, as well as specifying the levels SIA technology models in order to achieve an impression of social intelligence.
The emphasis here is clearly on the semblance of intelligence in interaction with the AI agent, not the construction of a genuinely intelligent system capable of intentionality and desire. Early in the article they write:
As agents get more complex, they will land in the twilight zone between mechanistic and living, between dead objects and live beings. In their understanding of the system, users will be tempted to employ an intentional stance, rather than a mechanistic one. Computer scientists may choose system designs that encourage or discourage such anthropomorphism. Irrespective of which, we need to understand how and under what conditions it works.
But the key point here is that the authors favor an approach in which the user is strongly led to apply the concepts of folk psychology to the AI agent; and yet in which the underlying mechanisms generating the AI's behavior completely invalidate the application of these concepts. (This approach brings to mind Searle's Chinese room example concerning "intelligent" behavior; link.) This is clearly the approach taken by current designs of AI agents like Siri; the design of the program emphasizes ordinary language interaction in ways that lead the user to interact with the agent as an intentional "person".

The authors directly confront the likelihood of "folk-psychology" interactions elicited in users by the behavior of AI agents:
When people are trying to understand the behaviors of others, they often use the framework of folk-psychology. Moreover, people expect others to act according to it. If a person's behavior blatantly falls out of this framework, the person would probably be judged "other" in some way, e.g., children, "crazies," "psychopaths," and "foreigners." In order for SIAs to appear socially intelligent, it is important that their behavior is understandable in terms of the folk-psychological framework. People will project these expectations on SIA technology and will try to attribute mental states and processes according to it. (354)
And the authors make reference to several AI constructs that are specifically designed to elicit a folk-psychological response from the users:
In all of these cases, the autonomous agents have some model of the world, mind, emotions, and of their present internal state. This does not mean that users automatically infer the “correct” mental state of the agent or attribute the same emotion that the system wants to convey. However, with these background models regulating the agent’s behavior the system will support and encourage the user to employ her faculty of folk-psychology reasoning onto the agent. Hopefully, the models generate consistent enough behavior to make folk-psychology a framework within which to understand and act upon the interactive characters. (355)
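To make this design strategy concrete, here is a minimal sketch of my own (not taken from the article) of a "shallow" social agent in Python. A couple of internal state variables stand in for a mood and a goal, and templated replies expose that state in folk-psychological vocabulary. Nothing in the code has beliefs or desires; the point is only to show how little machinery is needed to invite the intentional stance.

```python
# Illustrative sketch only (my example, not from Persson et al.): a "shallow AI"
# social agent whose templated replies mimic folk-psychological states.

class ShallowSocialAgent:
    """Toy agent with pseudo-mental state variables regulating its replies."""

    def __init__(self):
        self.mood = "neutral"        # stands in for an emotion
        self.goal = "help the user"  # stands in for a desire

    def perceive(self, utterance: str) -> None:
        """Update the internal state with crude keyword rules."""
        text = utterance.lower()
        if "thanks" in text:
            self.mood = "pleased"
        elif "useless" in text or "wrong" in text:
            self.mood = "apologetic"

    def respond(self, utterance: str) -> str:
        """Phrase a reply in folk-psychological vocabulary ('I think', 'I believe')."""
        self.perceive(utterance)
        openers = {
            "neutral": "I think",
            "pleased": "I'm glad you asked --",
            "apologetic": "I'm sorry about that. I believe",
        }
        return f"{openers[self.mood]} I can {self.goal}; let me look into '{utterance}'."


agent = ShallowSocialAgent()
print(agent.respond("What's the weather tomorrow?"))
print(agent.respond("That answer was useless."))
```

The replies read as if the agent were pleased or apologetic, but the "mood" is just a string updated by keyword matching -- exactly the gap between the appearance the user is invited to construct and the mechanism actually at work.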
The authors emphasize the instrumentalism of their recommended approach to SIA capacities from beginning to end:
In order to develop believable SIAs we do not have to know how beliefs, desires, and intentions actually relate to each other in the real minds of real people. If we want to create the impression of an artificial social agent driven by beliefs and desires, it is enough to draw on investigations on how people in different cultures develop and use theories of mind to understand the behaviors of others. SIAs need to model the folk-theory reasoning, not the real thing. To a shallow AI approach, a model of mind based on folk-psychology is as valid as one based on cognitive theory. (349)
This way of approaching the design of AI agents suggests that the "folk psychology" interpretation of Alexa's more capable successors will be fundamentally wrong. The agent will not be conscious, intentional, or mental; but it will behave in ways that make it almost impossible not to fall into the trap of anthropomorphism. And this in turn brings us back to Churchland and the critique of folk psychology in the human-human cases. If AI agents can be completely persuasive as mentally structured actors without actually having minds, then why are we so confident that the same is not true of our fellow human beings?

2 comments:

  1. seems to be in the air:
    https://rsbakker.wordpress.com/2018/02/11/optimally-engaged-experience/

  2. Treating machines like humans will lead us to treat humans like machines. Human and machine will meet somewhere in the middle.
