
Monday, July 5, 2021

Did the Iliad have an author?

Did the Iliad have an author? Since this is probably the best-known text from the ancient Greek world, one might find the question puzzling: of course the Iliad had an author; it was Homer. But it turns out that this answer is no longer accepted by experts in classical literature -- and hasn't been for at least ninety years. Adam Kirsch's recent piece in the New Yorker, "The Classicist Who Killed Homer," sheds light on the topic, and also raises highly interesting questions about the nature of imagination, narrative, and story-telling. Kirsch's piece is a discussion of Robert Kanigel's biography of Milman Parry, Hearing Homer's Song: The Brief Life and Big Idea of Milman Parry. Parry was a young professor of classics at Harvard in the 1930s, and his treatment of "Homer" created, according to Kanigel, a permanent change in the way that classicists conceived of the making of the Iliad and the Odyssey.

The time of Homer -- or at least, the time at which the oral poems that eventually became the Iliad and the Odyssey originated -- is perhaps five or six centuries before the time of Socrates; it was ancient history, even for the ancient Greeks. Homer is indeed discussed by Aristotle, Herodotus, and Plutarch, but with essentially no basis in historical fact. So how could modern scholars -- scholars in the nineteenth or twentieth centuries -- arrive at evidence-based conclusions about the authorship of these great works? This is the question that Parry sought to answer; and here is Kirsch's summary of Parry's considered conclusion: 

Parry’s thesis was simple but momentous: “It is my own view, as those who have read my studies on Homeric style know, that the nature of Homeric poetry can be grasped only when one has seen that it is composed in a diction which is oral, and so formulaic, and so traditional.” In other words, the Iliad and the Odyssey weren’t written by Homer, because they weren’t written at all. They were products of an oral tradition, performed by generations of anonymous Greek bards who gradually shaped them into the epics we know today. Earlier scholars had advanced this as a hypothesis, but it was Parry who demonstrated it beyond a reasonable doubt. (73)

The primary clue that Parry pursued was the most evident stylistic fact about the poems: their meter and their continual use of stylized epithets for the key actors. The epithets and the meter of the verses give the oral poet a manageable framework from which to create line after line of verse.

Rather, [the poet] had a supply of ready-made epithets in different metrical patterns that could be slotted in depending on the needs of the verse, like Tetris blocks. As Parry wrote in one of his papers, “The Homeric language is the work of the Homeric verse,” not the other way around. (75)

Most interesting is the account that Kirsch provides of Parry's method of research and argument. Biblical scholars, too, came to the conclusion that the Hebrew Bible was not the work of a single author. But their arguments were largely textual: each of the books has a distinctive style and vocabulary, and it was straightforward to argue that these texts are an amalgam of multiple earlier texts. Parry proceeded differently in his treatment of the Iliad and the Odyssey. From a textual point of view, these poems are fairly consistent over their thousands of lines. But Parry asked himself a different question: how do pre-literate communities compose and transmit their stories? He investigated this question through fieldwork in the 1930s, undertaking to observe the creation of an oral tradition in the making. He functioned as a kind of "ethno-poeticist" -- an observer and collector of oral traditions in the "spoken-word" communities of Yugoslavia.

Here is an especially interesting part of Parry's research in Yugoslavia:

Parry’s research showed that, in an oral-performance tradition, it makes no sense to speak of a poem as having an authentic, original text. He found that, when he asked a guslar to perform the same poem on consecutive days, the transcripts could be dramatically different, with lines and whole episodes appearing or disappearing. With the guslar he considered the most gifted, a man in his sixties named Avdo Međedović, Parry tried an experiment: he had Međedović listen to a tale he’d never heard before, performed by a singer from another village, and then asked him to repeat it. After one hearing, Međedović not only could retell the whole thing but made it three times longer, and, in Lord’s recollection, much better: “The ornamentation and richness accumulated, and the human touches of character imparted a depth of feeling that had been missing.” (75)

What is interesting to me in this experiment is the light it sheds on the cognitive and creative process of the oral poet him- or herself. What seems to be going on in this account is a complex act of narrative cognition: hearing the unfamiliar story, linking it to a broader context of allusions and metaphors within the ambient oral tradition, remembering the story, and retelling it with embellishments and refinements that make it more complex and more aesthetically satisfying to the listening audience. Parry seems to be observing the process of "oral poetry composition and transformation" in action, through the skilled intellectual and poetic work of the guslar Međedović. It is skillful improvisation joined with an immersion in a tradition of heroes and stories, leading to a better and even more satisfying tale. If this were Tolstoy's work, we might say that the refinement of the story is the result of a repetitive process of drafting, editing, rewriting, and enhancement; but that iterative process is plainly absent in Međedović's performance. Instead, Međedović is given the frame of the story and the key details, and -- in real time -- he weaves together an ornamented and rich version of the story "with a depth of feeling that had been missing." This is a very plausible mechanism for explaining the richness and complexity of the storylines of the Iliad and the Odyssey -- not a single Tolstoy writing an epic, but a series of more and less talented "guslars" in the pre-Athenian world rehearsing, refining, and extending the stories in a way that is astonishing in its comprehensiveness and richness by the time they were collected and recorded.

Kirsch doesn't provide this analogy, but we might say that Parry proceeded somewhat as Darwin did in his careful observation of finches and other organisms in the Galapagos, eventually supporting a powerful hypothesis about the genesis of species (natural selection based on differential reproductive success). In Parry's case, the result is an account of the multigenerational genesis of stories told by specialized story-tellers like Međedović -- or the proto-Homers who contributed to the construction of the Iliad and the Odyssey over a period of centuries.

Notice how different this process of story composition and transmission is from other kinds of familiar narratives -- novels and academic histories, for example. When David Hume attempted to tell the story of a century of English politics in The History of England, his narrative was structured by written sources, extensive notes, and a narrative plan. And it was an iterative process of editing and revision, with a conception of the whole guiding corrections throughout the narrative. When Tolstoy composed War and Peace, he too had the opportunity of revision, reconciliation, and recomposition, to ensure consistency of plot and character development. The oral poet, by contrast, does his creative work in real time: no corrections, no going back to an earlier chapter, no reminding himself or herself of the gist of the plot in earlier stages of the story. This presents the oral poet with a problem of creative cognition quite different from that facing the historian or the novelist. Memory, metaphor, fable, and humor through the unexpected all play a role in the oral poet's performance.

Is there a field of cognitive psychology that studies narrative improvisation? Interestingly enough, there is. Here is an interesting research report (link) from a group of researchers at the Georgia Institute of Technology who have studied improv theater. The group includes Brian Magerko, Waleed Manzoul, Mark Riedl, Allan Baumer, Daniel Fuller, Kurt Luther, and Celia Pearce. They describe the object of their research in these terms:

Improvisation is a relatively understudied aspect of creativity and cognition. One way of viewing improvisation is as the act of real-time dynamic problem solving [12]. One of the most recognizable manifestations of improvisational problem solving comes from the theatre arts community. Improvisational theatre – or simply improv – is a rich source of data for reaching a better understanding of improvisational problem solving and cognition [11, 32]. This is in part due to the diversity of performative activities in improv, which allows us to manipulate independent variables for purposes of experimentation, and the decoupling from real-world problems (e.g., emergency management) that are hard to control or recreate. Focusing on improv theatre, we can more specifically define improvisation as a creative act to be the “creation of an artifact and/or performance with aesthetic goals in real-time that is not completely prescribed in terms of functional and/or content constraints.” Our definition here intentionally focuses on the process of creating; improvisation is viewed as an active endeavor that is equally, or more, important than the final product. That is, how you get to an outcome is more important than the outcome.

Like the topic of skilled bodily performance discussed elsewhere here (link), there is a great deal of room for important research on the question of improvisational narrative composition. This refreshes my own conviction that many of the most ordinary parts of human life repay study with fascinating results when approached from a fresh point of view.

(I realize that I myself have had a little bit of personal experience of this kind of story-telling. Over the past six years or so I have developed a tradition with my grandchildren of an ongoing series of stories about a young French boy (Pierre) who worked with the French intelligence agency in the 1960s. Pierre has many adventures, and each story is initiated by a "seed event" that I bring to mind and then embroider with many exciting and laughable adventures. Among other things, I've learned that drama and humor must be mixed -- the boys love absurd situations and wordplay as much as they enjoy complicated and sinister plots with figures like the mysterious X and Y. Most recently on vacation we enjoyed a few new stories based on Pierre's secret visit to Dien Bien Phu and Dien Bien Phuie. In a very simple way, this is the work of a guslar! Here are a few of the stories that I've written down and recorded for the grandchildren during the pandemic; link.)


Thursday, April 1, 2021

Learning and engagement


John Dewey's Democracy and Education is over a century old. But it still seems strikingly modern, even avant-garde, when compared to many pedagogical practices currently in place in both secondary and post-secondary schools. Here is one line of thought that is especially insightful: that learning is a constructive and active process for the learner, not a matter of passive acquisition of "knowledge". Learning involves acquiring new ideas, new perspectives, and new questions for oneself. And these processes require an engagement on the part of the learner that is as active and creative as the learning done by a basketball player with a great coach. A good teacher is one who can motivate and stimulate the student to take this journey -- not one who supplies a full menu of pre-established solutions to the student.

Here is a particularly rich description of Dewey's conception of learning and the relationship between teacher and student. He formulates his thinking about the learning that children do; but I find the passage entirely applicable to university students as well.

The joy which children themselves experience is the joy of intellectual constructiveness—of creativeness, if the word may be used without misunderstanding. The educational moral I am chiefly concerned to draw is not, however, that teachers would find their own work less of a grind and strain if school conditions favored learning in the sense of discovery and not in that of storing away what others pour into them; nor that it would be possible to give even children and youth the delights of personal intellectual productiveness—true and important as are these things. It is that no thought, no idea, can possibly be conveyed as an idea from one person to another. When it is told, it is, to the one to whom it is told, another given fact, not an idea. The communication may stimulate the other person to realize the question for himself and to think out a like idea, or it may smother his intellectual interest and suppress his dawning effort at thought. But what he directly gets cannot be an idea. Only by wrestling with the conditions of the problem at first hand, seeking and finding his own way out, does he think. When the parent or teacher has provided the conditions which stimulate thinking and has taken a sympathetic attitude toward the activities of the learner by entering into a common or conjoint experience, all has been done which a second party can do to instigate learning. The rest lies with the one directly concerned. If he cannot devise his own solution (not of course in isolation, but in correspondence with the teacher and other pupils) and find his own way out he will not learn, not even if he can recite some correct answer with one hundred per cent accuracy. We can and do supply ready-made “ideas” by the thousand; we do not usually take much pains to see that the one learning engages in significant situations where his own activities generate, support, and clinch ideas—that is, perceived meanings or connections. This does not mean that the teacher is to stand off and look on; the alternative to furnishing ready-made subject matter and listening to the accuracy with which it is reproduced is not quiescence, but participation, sharing, in an activity. In such shared activity, the teacher is a learner, and the learner is, without knowing it, a teacher—and upon the whole, the less consciousness there is, on either side, of either giving or receiving instruction, the better. (chapter 12, kl 2567)

What is this process that Dewey is describing, this process of active "learning" on the part of the student? It is one in which the student is led to "engage in significant situations where his own activities generate, support, and clinch ideas"; it is a situation of active grappling with a problem that the student does not yet fully understand; it is a situation in which the student develops new cognitive tools, frameworks, and questions through the active and engaged mental struggle she has willingly undertaken. She has grown intellectually; she has the excitement of realizing that her perspective on and understanding of something important have changed and deepened. The language of gestalt psychology is suggestive here -- the sudden shift of a set of lines on paper into a representation of a smiling face, the rearrangement of one's thought processes so that a confusing set of words and ideas suddenly makes sense. It is something like what Kuhn describes as a paradigm shift, except that it is a continual process of intellectual change.

What does Dewey mean here by saying that an idea cannot be conveyed from one person to another? He does not doubt that words, sentences, and paragraphs can be shared, or that the student can incorporate those words into sentences of his or her own. But his key point is profound: knowledge and understanding require more than grasping the grammar of a sentence; the student needs an intellectual framework about the question in play, and an active, inquiring curiosity, in terms of which he or she "thinks" the idea for himself or herself. I do not understand entropy if I simply parrot the definition of the word; rather, I need a framework of ideas about gases, random motion, kinetic energy, and statistical mechanics within which I can give "entropy" a conceptual place.
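To take the entropy example one step further: the conceptual place in question is occupied by something like Boltzmann's relation,

$$ S = k_B \ln W, $$

where $W$ is the number of microstates compatible with a given macroscopic state. A student can recite this formula with perfect accuracy; but until she can think through why counting microstates should measure disorder, she has received, in Dewey's terms, "another given fact, not an idea."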

Anyone who teaches philosophy to undergraduates must be especially receptive to this challenge. The task, somehow, is to help the student make the problem her own -- to see why it is perplexing, to want to dig into it, to be eager to discover new angles on it, to see how it relates to other complicated issues. So in teaching Kant or Arendt, the goal is not to get the student to memorize the list of the antinomies of reason or the three versions of the categorical imperative, or precisely what is meant by "the banality of evil". Rather, it is to help the student to discover the problem that Kant or Arendt was grappling with, why it was important, why it is difficult, and maybe how it can be solved in a different way. 

The student needs somehow to put himself or herself into the mindset of a person on a journey of discovery, creating his or her own conceptual structures and questions about the terrain, without falling into the complacency of thinking she is simply a tourist with an excellent guide. And, after all, if there is nothing new to think about Aristotle or Nussbaum, then what is the purpose of studying them in the first place? Why would it matter to a student that she has read the Nicomachean Ethics cover to cover if she hasn't somehow been stimulated through her own efforts of imagination and discovery to think new and original thoughts?

This insight into the learning process is evident in philosophy, but surely it must be essentially the same kind of challenge in teaching literature, sociological theory, thermodynamics, or even advanced accounting. When I read Stephen Greenblatt on Shakespeare -- or when I hear him lecture on the "racial memory" of Vilnius -- I am stimulated to new thinking and to new ideas of my own; and I notice a striking lack of interest on Greenblatt's part in being an "authority". Greenblatt somehow succeeds in creating a Dewey-like learning environment, both in his writing and in his teaching.

The past year of teaching courses in a synchronous hybrid online mode, preparing lectures for asynchronous use and using Zoom meetings for class discussions, has brought this set of challenges to the top of mind for me. What kinds of "prompts", questions, topics for discussion, and asynchronous exercises can I use to help students in these courses develop the appetite for taking the intellectual journey themselves? And how can the instructor help the student see that this is an activity of imagination and thinking that she herself wants to involve herself in? How can the instructor help the student to shift perspective from "learning the content of a course about Greek ethics from the professor" to "working my way through some fascinating texts in Greek ethics, seeing some new perspectives, and getting occasional stimulating questions from my professor"? The first is the tourist's perspective, while the second is the explorer's perspective.

In a way, we might say that the role of the teacher that Dewey describes is like that performed by Socrates: posing questions -- perhaps irritating and persistent questions -- but provoking those around him to think much harder about "justice", "piety", and "good manners", and not providing a substantive doctrine of his own. Socrates was sometimes criticized for suggesting that no substantive beliefs about morality could be justified, but that was not his pedagogy. Rather, his commitment was to the idea of hard thinking without pat answers. And one would like to imagine that some of his students eventually came to develop rich, imaginative, and non-dogmatic minds that allowed them to probe new questions and create new solutions. (It is interesting to reflect that Plato was one of those students, and Aristotle was a student of Plato. I think historians of philosophy would judge that both Plato and Aristotle were highly original thinkers, but that Plato's approach was somewhat more dogmatic, while Aristotle's was more open-minded and experimental.)


Saturday, February 23, 2019

Bodily cognition


Traditional cognitive science has been largely organized around the idea of the brain as a computing device and cognitive systems as functionally organized systems of data-processing. There is an emerging alternative to this paradigm that is described as "4E Cognition," where the four "E's" refer to cognition that is embodied, embedded, enactive, and extended. For example, there is the idea that perception of a fly ball is constituted by bodily awareness of arms and legs as well as neurophysiological information processing of visual information; that a paper scratch-pad used to assist a calculation is part of the cognitive process of calculation; or that a person's reliance on her smartphone for remembering names incorporates the smartphone into the extended process of recognizing an acquaintance on the street.

The 4E-cognition approach is well represented in The Oxford Handbook of 4E Cognition, edited by Albert Newen, Leon de Bruin, and Shaun Gallagher, which provides exposure to a great deal of very interesting current research. The fundamental idea is a questioning of the "brain-centered" approach to cognition that has characterized much of the history of cognitive science and neuroscience -- what contributors refer to as the "representational and computational model of cognition" (RCC). The 4E approach rejects this paradigm for cognition.
According to proponents of 4E cognition, however, the cognitive phenomena that are studied by modern cognitive science, such as spatial navigation, action, perception, and understanding others emotions, are in some sense all dependent on the morphological, biological, and physiological details of an agent's body, an appropriately structured natural, technological, or social environment, and the agent's active and embodied interaction with this environment. (kl 257)
Here is a summary statement of the chief philosophical problems raised by the theory of "4E cognition", according to the introduction to the volume provided by Newen, de Bruin, and Gallagher:
Thus, by maintaining that cognition involves extracranial bodily processes, 4E approaches depart markedly from the RCC view that the brain is the sole basis of cognitive processes. But what precisely does it mean to say that cognition involves extracranial processes? First of all, the involvement of extracranial processes can be understood in a strong and a weak way. According to the strong reading, cognitive processes are partially constituted by extracranial processes, i.e., they are essentially based on them. By contrast, according to the weak reading, they are non-constitutionally related, i.e., only causally dependent upon extracranial processes. Furthermore, cognitive processes can count as extracranial in two ways. Extracranial processes can be bodily (involving a brain–body unit) or they can be extrabodily (involving a brain–body–environment unit).

Following this line of reasoning, we can distinguish between four different claims about embodied cognition:

a. A cognitive process is strongly embodied by bodily processes if it is partially constituted by (essentially based on) processes in the body that are not in the brain;
b. A cognitive process is strongly embodied by extrabodily processes if it is partially constituted by extrabodily processes;
c. A cognitive process is weakly embodied by bodily processes if it is not partially constituted by but only partially dependent upon extracranial processes (bodily processes outside of the brain);
d. A cognitive process is weakly embodied by extrabodily processes if it is not partially constituted by but only partially dependent upon extrabodily processes. 
The last version of the claim (d) is identical with the property of being embedded, i.e., being causally dependent on extrabodily processes in the environment of the bodily system. Furthermore, being extended is a property of a cognitive process if it is at least partially constituted by extrabodily processes (b), i.e., if it extends into essentially involved extrabodily components or tools (Stephan et al. 2014; Walter 2014). (kl 259)
These are metaphysical problems on the whole: what is the status of cognition as a thing in the world, and where does it reside -- in the brain, in the body, or in a complex embedded relationship with the environment? The distinction between "constituted by" and "causally affected by" is a metaphysically important one -- though it isn't entirely clear that it has empirical consequences.

Julian Kiverstein's contribution to the volume, "Extended cognition," appears to agree with this point about the metaphysical nature of the topic of "embedded cognition". He distinguishes between the "embedded theory" (EMT) and "extended theories" (EXT), and proposes that the disagreement between the two families of theories hangs on "what it is for a state or process to count as cognitive" (kl 549). This is on its face a conceptual or metaphysical question, not an empirical question.
I show how there is substantial agreement in both camps about how cognitive science is to proceed. Both sides agree that the best explanation of human problem-solving will often make reference to bodily actions carried out on externally located information-bearing structures. The debate is not about how to do cognitive science. It is instead, to repeat, a debate about the mark of the cognitive: the properties that make a state or process count as being of a particular cognitive kind. (kl 590)
Embedded and extended theorists therefore agree that internal cognitive processes will often not be sufficient for explaining cognitive behaviors. (kl 654)
It might be thought to be analogous to the question, "what is the global trading network?" (GTN), and the subsequent question of whether systems of knowledge production are part of the global trading network (constitutive) or merely causally relevant to the GTN (extended causal relevance). But it is difficult to see how one could argue that there is a fact of the matter about the "reality of the global trading system" or the "mark of the cognitive". These look like typical issues of conceptual demarcation, guided by pragmatic scientific concerns rather than empirical facts about the world.

Kiverstein addresses this issue throughout his chapter, but he arrives at what is for me an unsatisfactory reliance on a fundamental distinction between conceptual frameworks and metaphysical reality:
I agree with Sprevak, however, that the debate between EXT and EMT isn't about the best conceptual framework for interpreting findings in cognitive science. It is a debate in metaphysics about "what makes a state or process count as mental or non-mental" (Sprevak 2010, p. 261) (kl 654)
The central claim of this chapter has been that to resolve the debate about extended cognition we will need to come up with a mark of the cognitive. We will need to say what makes a state or process count as a state or process of a particular cognitive kind. (kl 951)
But debates in metaphysics are ultimately debates about conceptual frameworks; so the distinction is not a convincing one. And, contrary to the thrust of the second quote, it is implausible to hold that there might be a definitive answer to the question of "what makes a state count as a state of a particular cognitive kind." (Here is an earlier post on conceptual schemes and ontology; link.)

What this suggests to me is not that 4E theory is misguided in its notion that cognition is embedded, embodied, extended, and enactive. Rather, my suggestion is that the metaphysical questions about the "constitution of cognition" and "the real nature of cognition" might be put aside, in favor of detailed empirical and systematic investigation of the ways in which human cognitive processes are interwoven with extra-bodily artifacts and processes.

Also interesting in the volume is Tadeusz Wiesław Zawidzki's treatment of "mindshaping". This topic has to do with another aspect of extended cognition: the ability humans have to perceive the emotional and intentional states of other humans. Zawidzki takes on the more traditional idea of "mindreading" (not the spooky kind, just the idea that human beings are hard-wired to perceive behavioral expressions of various mental states when performed by other people). He argues instead that our ability to read other people's emotions and intentions is the result of a socially and culturally constructed set of tools that we learn. And, significantly, he argues that the ability to influence the minds of others is the crucial social-cognitive ability that underlies much that is distinctive in human history.
The mindshaping hypothesis rejects this assumption [of hardwired interpersonal cognition], and proposes an alternative. According to this alternative, our social accomplishments are not due to an individual, neurally implemented capacity to correctly represent each other’s mental states. Rather, they rely on less intellectualized and more embodied capacities to shape each other’s minds, e.g., imitation, pedagogy, and norm enforcement. We are much better mindshapers, and we spend much more of our time and energy engaged in mindshaping than any other species. Our skill at mindshaping enables us to insure that we come to have the complementary mental states required for successful, complex coordination, without requiring us to solve the intractable problem of correctly inferring the independently constituted mental states of our fellows. (chapter 39)
Here is how Zawidzki relates the mindshaping hypothesis to the 4E paradigm:
The mindshaping hypothesis is a natural ally of “4E” approaches to human social- cognition. Rather than conceptualize distinctively human social cognition as the accomplishment of computational processes implemented in the brains of individuals, involving the correct representation of mental states, the mindshaping hypothesis conceptualizes it as emerging from embodied and embedded practices of tracking and molding behavioral dispositions in situated, socio-historically and culturally specific human populations. Our socio-cognitive success depends essentially on social and hence extended facts, e.g., social models we shape each other to emulate, both concrete ones, e.g., high status individuals, and “virtual” ones, e.g., mythical ideals encoded in external symbol systems. And social cognition, according to the mindshaping hypothesis, is in a very literal sense enactive: we succeed in our socio-cognitive endeavors by cooperatively enacting roles in social structures. (chapter 39)
This is an interesting approach to the important phenomenon of interpersonal perception. And it has immediate empirical implications: are there cross-cultural differences in "mindshaping" practices? Are there differences within a given culture according to socially relevant characteristics (gender, race, class)? Is it possible to track historical changes in the skills associated with human "mindshaping" practices? Were Victorian aristocrats different in their mindshaping capacities from their counterparts a century earlier or later?

There are many instructive implications of research within the umbrella of 4E cognitive science. But perhaps the most important is the license it gives researchers to think more broadly about knowledge, perception, intention, belief, and emotion than the narrowly neurophysiological versions of cognitive science would permit. This perspective allows researchers to pay attention to the interdependencies that exist between consciousness, thought, bodily action, joint activity, social context, and artifact that are difficult to incorporate into older cognitive theories. The model of the mind as the expression of a brain-computer-information machine is perhaps one whose time has passed. (Sorry, Alan Turing!)

Saturday, March 3, 2018

Consensus and mutual understanding


Groups make decisions through processes of discussion aimed at framing a given problem, outlining the group's objectives, and arriving at a plan for how to achieve the objectives in an intelligent way. This is true at multiple levels, from neighborhood block associations to corporate executive teams to the President's cabinet meetings. However, collective decision-making through extended discussion faces more challenges than is generally recognized. Processes of collective deliberation are often haphazard, incomplete, and indeterminate.

What is collective deliberation about? It is often the case that a collaborative group or team has a generally agreed-upon set of goals -- let's say reducing the high school dropout rate in a city, improving morale on the plant floor, or deterring North Korean nuclear expansion. The group comes together to develop a strategy and a plan for achieving the goal. Comments are offered about how to think about the problem, what factors may have brought it about, and what interventions might have a positive effect. After a reasonable range of conversation the group arrives at a strategy for how to proceed.

An idealized version of group problem-solving makes this process both simple and logical. The group canvasses the primary facts available about the problem and its causes. The group recognizes that there may be multiple goods involved in the situation, so the primary objective needs to be considered in the context of the other valuable goods that are part of the same bundle of activity. The group canvasses these various goods as well. The group then canvasses the range of interventions that are feasible in the existing situation, along with the costs and benefits of each strategy. Finally, the group arrives at a consensus about which strategy is best, given everything it knows about the dynamics of the situation.

But anyone who has been part of a strategy-oriented discussion asking diverse parties to think carefully about a problem that all participants care about will realize that the process is rarely so amenable to simple logical development. Instead, almost every statement offered in the discussion is both ambiguous to some extent and factually contestable. Outcomes are sensitive to differences in the levels of assertiveness of various participants. Opinions are advanced as facts, and there is insufficient effort expended to validate the assumptions that are being made. Outcomes are also sensitive to the order and structure of the agenda for discussion. And finally, discussions need to be summarized; but there are always interpretive choices that need to be made in summarizing a complex discussion. Points need to be assigned priority and cogency; and different scribes will have different judgments about these matters.

Here is a problem of group decision-making that is rarely recognized but seems pervasive in the real world. This is the problem of recurring misunderstandings and ambiguities within the group of the various statements and observations that are made. The parties proceed on the basis of frameworks of assumptions that differ substantially from one person to the next but are never fully exposed. One person asserts that the school day should be lengthened, imagining a Japanese model of high school. Another thinks back to her own high school experience and agrees, thinking that five hours of instruction may well be more effective for learning than four hours. They agree about the statement but they are thinking of very different changes.

The bandwidth of a collective conversation about a complicated problem is simply too narrow to permit ambiguities and factual errors to be tracked down and sorted out. The conversation is invariably incomplete, and often takes shape because of entirely irrelevant factors like who speaks first or most forcefully. It is as if the space of the discussion is in two dimensions, whereas the complexity of the problem under review is in three.

The problem is exacerbated by the fact that participants sometimes have their own agendas and hobby horses that they continually re-inject into the discussion under varying pretexts. As the group fumbles towards possible consensus these fixed points coming from a few participants either need to be ruled out or incorporated -- and neither is a fully satisfactory result. If the point is ruled out some participants will believe their inputs are not respected, but if it is incorporated then the consensus has been deformed from a more balanced view of the issue.

A common solution to the problems of group deliberation mentioned here is to assign an expert facilitator or "muse" for the group who is tasked to build up a synthesis of the discussion as it proceeds. But it is evident that the synthesis is underdetermined by the discussion. Some points will be given emphasis over others, and a very different story line could have been reached that leads to different outcomes. This is the Rashomon effect applied to group discussions.

A different solution is to think of group discussion as simply an aid to a single decision maker -- a chief executive who listens to the various points of view and then arrives at her own formulation of the problem and a solution strategy. But of course this approach abandons the idea of reaching a group consensus in favor of the simpler problem of an individual reaching his or her own interpretation of the problem and possible solutions based on input from others.

This is a problem for organizations, both formal and informal, because every organization attempts to decide what to do through some kind of exploratory discussion. It is also a problem for the theory of deliberative democracy (link, link).

This suggests that there is an important problem of collective rationality that has not been addressed by either philosophy or management studies: the problem of aggregating the beliefs, perceptions, and values held by diverse members of a group into a coherent statement of the problem, its causes, and possible solutions for the issue under deliberation. We would like to be able to establish processes that lead to rational and effective solutions incorporating the available facts and judgments. Further, we would like the outcomes to be non-arbitrary -- that is, given an antecedent set of factual and normative beliefs among the participants, we would like to imagine that there is a relatively narrow band of policy solutions that will emerge as the consensus or decision. We have theories of social choice -- the aggregation of fixed preferences. And we have theories of rational decision-making and planning. But a deliberative group discussion of an important problem is substantially more complex. We need a philosophy of the meeting!
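One way to see how hard the aggregation problem is, even in the artificially simple case of fixed preferences, is the classic Condorcet cycle, in which pairwise majority voting produces no stable group ranking, so the outcome is determined by the agenda -- exactly the order-sensitivity noted above. Here is a minimal sketch; the names and the preference profile are invented for illustration:

```python
from itertools import combinations

# Three deliberators, three options; each list is a ranking, best first.
# This profile is the textbook Condorcet cycle.
preferences = {
    "Ana":   ["A", "B", "C"],
    "Boris": ["B", "C", "A"],
    "Chen":  ["C", "A", "B"],
}

def majority_winner(x, y):
    """Return the option a majority ranks higher in a pairwise vote."""
    votes_x = sum(1 for r in preferences.values() if r.index(x) < r.index(y))
    return x if votes_x > len(preferences) / 2 else y

# Pairwise majorities form a cycle: A beats B, B beats C, C beats A.
for x, y in combinations("ABC", 2):
    print(f"{x} vs {y}: majority prefers {majority_winner(x, y)}")

def run_agenda(agenda):
    """Sequential pairwise voting: the winner of each vote meets the next option."""
    winner = agenda[0]
    for challenger in agenda[1:]:
        winner = majority_winner(winner, challenger)
    return winner

# The surviving option depends entirely on the order of the agenda.
print(run_agenda(["A", "B", "C"]))  # C wins under this agenda
print(run_agenda(["B", "C", "A"]))  # A wins under this agenda
print(run_agenda(["C", "A", "B"]))  # B wins under this agenda
```

If even fixed, fully explicit preferences can leave the group outcome hostage to the order of the agenda, deliberation over half-articulated beliefs and values can only be harder.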

Monday, December 12, 2016

More on cephalopod minds


When I first posted on cephalopod intelligence a year or so ago, I assumed it would be a one-off diversion into the deep blue sea (link). But now I've read the fascinating recent book by Peter Godfrey-Smith, Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness, and it is interesting enough to justify a second deep dive. Godfrey-Smith is a philosopher, but he is also a scuba diver, and his interest in cephalopods derives from his experiences under water. This original stimulus has led to two very different lines of inquiry. What is the nature of the mental capacities of an octopus? And how did "intelligence" happen to evolve twice on earth, through such different pathways? Why is a complex nervous system an evolutionary advantage for a descendant of a clam?

Both questions are of philosophical interest. The nature of consciousness, intelligence, and reasoning has been of great concern to philosophers in the study of the philosophy of mind. The questions that arise bring forth a mixture of difficult conceptual, empirical, and theoretical issues: how does consciousness relate to behavioral capacity? Are intelligence and consciousness interchangeable? What evidence would permit us to conclude that a given species of animal has consciousness and reasoning ability?

The evolutionary question is also of interest to philosophers. The discipline of the philosophy of biology focuses much of its attention on the issues raised by evolutionary theory. Elliott Sober's work illustrates this form of philosophical thinking -- for example, The Nature of Selection: Evolutionary Theory in Philosophical Focus, Evidence and Evolution: The Logic Behind the Science. Godfrey-Smith tells an expert's story of the long evolution of mollusks, in and out of their shells, with emerging functions and organs well suited to the opportunities available in their oceanic environments. One of the evolutionary puzzles to be considered is the short lifespan of octopuses and squid -- just a few years (160). Why would the organism invest so heavily in a cognitive system that supported its life for such a short time?

A major part of the explanation that G-S favors involves the fact that octopuses are hunters, and a complex nervous system is more of an advantage for a predator than for its prey. (Wolves are more intelligent than elk, after all!) Having a nervous system that supports anticipation, planning, and problem solving turns out to be an excellent preparation for being a predator. Here is a good example of how that cognitive advantage plays out for the octopus:
David Scheel, who works mostly with the giant Pacific octopus, feeds his animals whole clams, but as his local animals in Prince William Sound do not routinely eat clams, he has to teach them about the new food source. So he partly smashes a clam and gives it to the octopus. Later, when he gives the octopus an intact clam, the octopus knows that it’s food, but does not know how to get at the meat. The octopus will try all sorts of methods, drilling the shell and chipping the edges with its beak, manipulating it in every way possible … and then eventually it learns that its sheer strength is sufficient: if it tries hard enough, it can simply pull the shell apart. (70)
Exploration, curiosity, experimentation, and play are crucial components of the kind of flexibility that organisms with big nervous systems bring to earning their living.

G-S brings up a genuinely novel aspect of the organismic value of a complex nervous system: not just problem-solving applied to the external environment, but coordination of the body itself. Intelligence evolves to handle the problem of coordinating the motions of the parts of the body.
The cephalopod body, and especially the octopus body, is a unique object with respect to these demands. When part of the molluscan “foot” differentiated into a mass of tentacles, with no joints or shell, the result was a very unwieldy organ to control. The result was also an enormously useful thing, if it could be controlled. The octopus’s loss of almost all hard parts compounded both the challenge and the opportunities. A vast range of movements became possible, but they had to be organized, had to be made coherent. Octopuses have not dealt with this challenge by imposing centralized governance on the body; rather, they have fashioned a mixture of local and central control. One might say the octopus has turned each arm into an intermediate-scale actor. But it also imposes order, top-down, on the huge and complex system that is the octopus body. (71)
In this picture, neurons first multiply because of the demands of the body, and then sometime later, an octopus wakes up with a brain that can do more. (72)
This is a genuinely novel and intriguing idea about the creation of a new organism over geological time. It is as if a plastic self-replicating and self-modifying artifact bootstrapped itself from primitive capabilities into a directed and cunning predator. Or perhaps it is a preview of the transition that artificial intelligence systems embodying adaptable learning processes and expanding linkages to the control systems of the physical world may take in the next fifty years.  

What about the evolutionary part of the story? Here is a short passage where Godfrey-Smith considers the long evolutionary period that created both vertebrates and mollusks:
The history of large brains has, very roughly, the shape of a letter Y. At the branching center of the Y is the last common ancestor of vertebrates and mollusks. From here, many paths run forward, but I single out two of them, one leading to us and one to cephalopods. What features were present at that early stage, available to be carried forward down both paths? The ancestor at the center of the Y certainly had neurons. It was probably a worm-like creature with a simple nervous system, though. It may have had simple eyes. Its neurons may have been partly bunched together at its front, but there wouldn’t have been much of a brain there. From that stage the evolution of nervous systems proceeds independently in many lines, including two that led to large brains of different design. (65)
The primary difference that G-S highlights here is the nature of the neural architecture that each line eventually favors: a central cord connecting periphery to a central brain; and a decentralized network of neurons distributed over the whole body.
Further, much of a cephalopod’s nervous system is not found within the brain at all, but spread throughout the body. In an octopus, the majority of neurons are in the arms themselves— nearly twice as many as in the central brain. The arms have their own sensors and controllers. They have not only the sense of touch, but also the capacity to sense chemicals— to smell, or taste. Each sucker on an octopus’s arm may have 10,000 neurons to handle taste and touch. Even an arm that has been surgically removed can perform various basic motions, like reaching and grasping. (67)
So what about the "alien intelligence" part of G-S's story? G-S emphasizes the fact that octopus mentality is about as alien to human experience and evolution as it could be.
Cephalopods are an island of mental complexity in the sea of invertebrate animals. Because our most recent common ancestor was so simple and lies so far back, cephalopods are an independent experiment in the evolution of large brains and complex behavior. If we can make contact with cephalopods as sentient beings, it is not because of a shared history, not because of kinship, but because evolution built minds twice over. This is probably the closest we will come to meeting an intelligent alien. (9)
This too is intriguing. G-S is right: the evolutionary story he works through here gives great encouragement to the idea that organisms in complex environments, starting from a few bits of neuronal material, can evolve along wildly different pathways, leading to cognitive capabilities and features of awareness that are dramatically different from human intelligence. Life is plastic and evolutionary time is long. The ideas of the unity of consciousness and the unified self don't have any particular primacy or uniqueness. For example:
The octopus may be in a sort of hybrid situation. For an octopus, its arms are partly self—they can be directed and used to manipulate things. But from the central brain’s perspective, they are partly non-self too, partly agents of their own. (103)
So there is nothing inherently unique about human intelligence, and no good reason to assume that all intelligent creatures would find a basis for mutual understanding and communication. Sorry, Captain Kirk, the universe is stranger than you ever imagined!

Saturday, March 12, 2016

Wendt's strong claims about quantum consciousness


Alex Wendt takes a provocative step in Quantum Mind and Social Science: Unifying Physical and Social Ontology by proposing that quantum mechanics plays a role at all levels of the human and social world (as well as in all life). And he doesn't mean this in the trivial sense that all of nature is constituted by quantum-mechanical micro-realities (or unrealities). Instead, he means that we need to treat human beings and social structures as quantum-mechanical wave functions. He wants to see whether some of the peculiarities of social (and individual) phenomena might be explained on the hypothesis that mental phenomena are deeply and actively quantum phenomena. This is a very large pill to swallow, since much considered judgment across the sciences concurs that macroscopic objects -- billiard balls, viruses, neurons -- are on a physical and temporal scale at which quantum effects have undergone "decoherence" and behave as strictly classical entities.

Wendt’s work rests upon a small but serious body of scholarship in physics, the neurosciences, and philosophy on the topics of “quantum consciousness” and “quantum biology”. An earlier post described some tangible but non-controversial progress that has been made on the biology side, where physicists and chemists have explored a possible pathway accounting for birds’ ability to sense the earth’s magnetic field directly through a chemical process that depends upon entangled electrons.

Here I’d like to probe Alex’s argument a bit more deeply by taking an inventory of the strong claims that he considers in the book. (He doesn’t endorse all these claims, but regards them as potentially true and worth investigating.)
  1. Walking wave functions: "I argue that human beings and therefore social life exhibit quantum coherence – in effect, that we are walking wave functions. I intend the argument not as an analogy or metaphor, but as a realist claim about what people really are." (3) ... "My claim is that life is a macroscopic instantiation of quantum coherence." (137) ... "Quantum consciousness theory suggests that human beings are literally walking wave functions." (154)
  2. "The central claim of this book is that all intentional phenomena are quantum mechanical." (149) ... "The basic directive of a quantum social science, its positive heuristic if you will, is to re-think human behavior through the lens of quantum theory." (32)
  3. "I argued that a very different picture emerges if we imagine ourselves under a quantum constraint with a panpsychist ontology. Quantum Man is physical but not wholly material, conscious, in superposed rather than well-defined states, subject to and also a source of non-local causation, free, purposeful, and very much alive." (207)
  4. "Quantum consciousness theory builds on these intuitions by combining two propositions: (1) the physical claim of quantum brain theory that the brain is capable of sustaining coherent quantum states (Chapter 5), and (2) the metaphysical claim of panpsychism that consciousness inheres in the very structure of matter (Chapter 6)." (92)
  5. Quantum decision theory: "[There is] growing experimental evidence that long-standing anomalies of human behavior can be predicted by “quantum decision theory.”" (4)
  6. Panpsychism: "Quantum theory actually implies a panpsychist ontology: that consciousness goes “all the way down” to the sub-atomic level. Exploiting this possibility, quantum consciousness theorists have identified mechanisms in the brain that might allow this sub-atomic proto-consciousness to be amplified to the macroscopic level." (5)
  7. Consciousness: "The hard problem, in contrast, is explaining consciousness." (15) ... "As long as the brain is assumed to be a classical system, there is no reason to think even future neuroscience will give us “the slightest idea how anything material could be conscious.”" (17) ... "Hence the central question(s) of this book: (a) how might a quantum theoretic approach explain consciousness and by extension intentional phenomena, and thereby unify physical and social ontology, and (b) what are some implications of the result for contemporary debates in social theory?" (29)
  8. The quantum brain: "Quantum brain theory hypothesizes that the brain is able to sustain quantum coherence – a wave function – at the macro, whole-organism level." (30) ... "Quantum brain theory challenges this assumption by proposing that the mind is actually a quantum computer. Classical computers are based on binary digits or “bits” with well-defined values (0 or 1), which are transformed in serial operations by a program into an output. Quantum computers in contrast are based on “qubits” that can be in superpositions of 0 and 1 at the same time and also interact non-locally, enabling every qubit to be operated on simultaneously." (95)
  9. Weak and strong quantum minds: "In parsing quantum brain theory an initial distinction should be made between two different arguments that are often discussed under this heading. What might be called the “weak” argument hypothesizes that the firing of individual neurons is affected by quantum processes, but it does not posit quantum effects at the level of the whole brain." (97)
  10. Vitalism: "Principally, because my argument is vitalist, though the issue is complicated by the variety of forms vitalism has taken historically, some of which overlap with other doctrines." (144)
  11. Will and decision: "In Chapter 6, I equated this power with an aspect of wave function collapse, viewed as a process of temporal symmetry-breaking, in which advanced action moves through Will and retarded action through Experience." (174) ... "Will controls the direction of the body's movement over time by harnessing temporal non-locality, potentially over long “distances.” As advanced action, Will projects itself into what will become the future and creates a destiny state there that, through the enforcement of correlations with what will become the past, steers us purposefully toward that end." (182)
  12. Entangled people: "It is the burden of my argument to show that despite its strong intuitive appeal, the separability assumption does not hold in social life. The burden only extends so far, since I am not going to defend the opposite assumption, that human beings are completely inseparable. This is not true even at the sub-atomic level, where entangled particles retain some individuality. Rather, what characterizes people entangled in social structures is that they are not fully separable." (208-209)
  13. Quantum semantics: "This suggests that the “ground state” of a concept may be represented as a superposition of potential meanings, with each of the latter a distinct “vector” within its wave function." (216)
  14. Social structure: "If the physical basis of the mind and language is quantum mechanical, then, given this definition, that is true of social structures as well. Which is to say, what social structures actually are, physically, are superpositions of shared mental states – social wave functions." (258) ... "A quantum social ontology suggests – as structuration theorists and critical realists alike have long argued – that agents and social structures are “mutually constitutive.” I should emphasize that this does not mean “reciprocal causation” or “co-determination,” with which “mutual constitution” is often conflated in social theory. As quantum entanglement, the relationship of agents and social structures is not a process of causal interaction over time, but a non-local, synchronic state from which both are emergent." (260) ... "First, a social wave function constitutes a different probability distribution for agents’ actions than would exist in its absence. Being entangled in a social structure makes certain practices more likely than others, which I take to involve formal causation." (264-265)
  15. The state and other structures: "The answer is that the state is a kind of hologram. This hologram is different from those created artificially by scientists in the lab, and also from the holographic projection that I argued in Chapter 11 enables us to see ordinary material objects, since in these cases there is something there visible to the naked eye." (271) ... Collective consciousness: "A quantum interpretation of extended consciousness takes us part way toward collective consciousness, but only part, because even extended consciousness is still centered in individual brains and thus solipsistic. A plausible second step therefore would be to invoke the concept of ‘We-feeling,’ which seems to get at something like ‘collective consciousness,’ and is not only widely used by philosophers of collective intentionality, but has been studied empirically by social psychologists as well." (277)
In my view the key premise here is the quantum interpretation of the brain and consciousness that Alex advocates. He wants us to consider that the operations of the brain -- the input-output relations and the intervening mechanisms -- are not "classical" but rather quantum-mechanical. This is a very strong claim. It is vastly stronger than the idea that neurons may be affected by quantum-level events (considered in an earlier post and subject to active research by people interested in how microtubules work within neurons). But Alex would not be satisfied with the idea that "neurons are quantum machines" (point 9 above); he wants to make the stronger argument that "brains are quantum computers". And stronger even than that -- he wants to claim that the brain itself is a wave function, which implies that we cannot understand its workings by understanding the workings of its (quantum) components. (I don't think that the computer engineers who are designing real quantum computers believe that the device itself is a wave function; only that the components, the qubits, behave according to quantum mathematics.) Here is his brain-holism:
Quantum brain theory hypothesizes that quantum processes at the elementary level are amplified and kept in superposition at the level of the organism, and then, through downward causation constrain what is going on deep within the brain. (95)
So the brain as a whole is in superposition, and it resolves into a definite state only in an event of wave function collapse, corresponding to perception or will. (He sometimes refers to "a decoherence-free sub-space of the brain within which quantum computational processes are performed" (95), which implies that the brain as a whole may be a classical thing encompassing "quantum sub-regions".) But whether it is the whole brain (implied by "walking wave function") or a relatively voluminous sub-region, the conjurer's move occurs here: extending known though kinky properties of very special isolated systems of micro-entities (a handful of electrons, photons, or atoms) to a description of macro-sized entities that supposedly maintain those same kinky properties.

So the "brain as wave function" theory is very implausible given current knowledge. But if this view of the brain and thought cannot be made more credible than it currently is -- both empirically and theoretically -- then Wendt's whole system falls apart: entangled individuals involved in structures and meanings, life as a quantum-vital state, and panpsychism all have no inherent credibility by themselves.

There are many eye-widening claims here -- and yet Alex is clear enough, and well-versed enough in relevant areas of research in neuroscience and philosophy of mind, to give his case some credibility. He lays out his case with calm good humor and rational care. Alex relies heavily on the fact that there are difficult unresolved problems in the philosophy of mind and the philosophy of physics (the nature of consciousness, freedom of the will, the interpretation of the quantum wave function). This gives impetus to his call for a fresh way of approaching the whole field -- in the spirit suggested by philosophers and historians of science like Kuhn and Lakatos. However, failing to reach an answer to the question, "How is freedom of the will possible?", does not warrant jumping to highly questionable assumptions about neurophysiology.

But really -- in the end this just is not a plausible theory, to my mind. I'm not ready to accept the ideas of quantum brains, quantum meanings, or quantum societies. The idea of entanglement has a specific meaning when it comes to electrons and photons; the metaphorical extension of the idea to pairs or groups of individuals seems like a stretch. I'm not persuaded that we are "walking wave functions" or that entanglement accounts for the workings of social institutions. The ideas of structures and meanings as entangled wave functions of individuals strike me as entirely speculative, depending as they do on granting the possibility that the brain itself is a single extended wave function. And that is a lot to grant.

(Here is a brief description of the engineering goals of developing a quantum computer (link):
Quantum computing differs fundamentally from classical computing, in that it is based on the generation and processing of qubits. Unlike classical bits, which can have a state of either 1 or 0, qubits allow a superposition of the 1 and 0 states (both simultaneously). Strikingly, multiple qubits can be linked in so-called 'entangled' states, in which the manipulation of a single qubit changes the entire system, even if individual qubits are physically distant. This property is the basis for quantum information processing, with the goal of building superfast quantum computers and transferring information in a completely secure way.
See the referenced research article in Science for a current advance in optical quantum computing; link.)
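
To make the qubit idea concrete, here is a minimal Python sketch -- my own illustration, not drawn from any of the sources discussed here -- of a single-qubit superposition and a two-qubit entangled (Bell) state, using nothing but elementary linear algebra:

```python
import numpy as np

# Computational basis states for a single qubit
zero = np.array([1, 0], dtype=complex)   # |0>
one  = np.array([0, 1], dtype=complex)   # |1>

# A superposition: equal amplitudes for |0> and |1>
plus = (zero + one) / np.sqrt(2)

# Two-qubit states live in the tensor (Kronecker) product space.
# The Bell state (|00> + |11>)/sqrt(2) is entangled: it cannot be
# factored into a product of two single-qubit states.
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)

# Measurement probabilities are squared amplitudes.
probs = np.abs(bell) ** 2
for label, p in zip(["00", "01", "10", "11"], probs):
    print(f"P({label}) = {p:.2f}")   # 0.50, 0.00, 0.00, 0.50
```

The last lines are the interesting ones: the mixed outcomes 01 and 10 have probability zero, which is precisely the correlation between physically separated qubits that the quoted passage describes.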

(The image above is from a research report from a team that has succeeded in creating entanglement among a record number of atoms -- 3,000. Compare that to the tens of billions of neurons in the brain, and once again the implausibility of the "walking wave function" idea becomes overwhelming. And note the extreme conditions required to create this entangled group: the atoms were cooled to 10-millionths of a degree Kelvin, trapped between two mirrors, and exposed to a single photon (link). And yet presumably decoherence occurs if the temperature rises substantially.)

Here is an interesting lecture on quantum computing by Microsoft scientist Krysta Svore, presented at the Institute for Quantum Computing at the University of Waterloo.


Quantum biology?



I have discussed several times an emerging literature on "quantum consciousness", focusing on Alex Wendt's provocative book Quantum Mind and Social Science: Unifying Physical and Social Ontology. Is it possible in theory for cognitive processes, or neuroanatomical functioning, to be affected by events at the quantum level? Are there known quantum effects within biological systems? Here is one interesting case that is currently being explored by biologists: an explanation of the ability of birds to navigate by the earth's magnetic field in terms of the chemistry of entangled electrons.

Quantum entanglement is a relation between two or more micro-particles (photons, electrons, …) in which the quantum state of each particle cannot be specified independently of the states of the others; only the joint state of the system is well defined. When a measurement is performed on the first particle of an entangled pair, quantum theory entails that the description of the second particle changes as well, so that the outcomes of measurements on the two particles are correlated.
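
As a toy illustration of this definition (my own sketch, not taken from the research discussed below), one can simulate a measurement on the first particle of an entangled pair and watch the joint state "collapse" so that the second particle's state is fixed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Bell state (|00> + |11>)/sqrt(2); amplitudes ordered |00>, |01>, |10>, |11>
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

def measure_first(state):
    """Measure particle 1; return its outcome and the collapsed joint state."""
    p0 = np.abs(state[0])**2 + np.abs(state[1])**2   # P(first particle reads 0)
    outcome = 0 if rng.random() < p0 else 1
    post = state.copy()
    if outcome == 0:
        post[2:] = 0          # project out |10> and |11>
    else:
        post[:2] = 0          # project out |00> and |01>
    return outcome, post / np.linalg.norm(post)

first, collapsed = measure_first(bell)
print(first, np.round(np.abs(collapsed)**2, 2))
# Each individual outcome is a 50/50 coin flip, but after the measurement
# the only nonzero amplitude left is |00> or |11>: the second particle's
# state is fully determined by the first particle's result.
```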

It has been hypothesized that the ability of birds to navigate by reference to the earth’s magnetic field may be explained by quantum effects involving electrons in molecules (cryptochromes) in the bird’s retina. Thorsten Ritz is a leader in this area of research. In "Magnetic Compass of Birds Is Based on a Molecule with Optimal Directional Sensitivity" he and his co-authors describe the hypothesis in these terms (link):
The radical-pair model (7,8) assumes that these properties of the avian magnetic compass—light-dependence and insensitivity to polarity—directly reflect characteristics of the primary processes of magnetoreception. It postulates a crucial role for specialized photopigments in the retina. A light-induced electron-transfer reaction creates a spin- correlated radical pair with singlet and triplet states. (3451)
(The radical-pair reaction chemistry is diagrammed in the same article, at 3452.)

Markus Tiersch and Hans Briegel address these findings in "Decoherence in the chemical compass: the role of decoherence for avian magnetoreception". They describe the hypothesized paired-electron chemistry by which birds may detect magnetic fields (link):
Certain birds, including the European robin, have the remarkable ability to orient themselves, during migration, with the help of the Earth's magnetic field [3-6]. Responsible for this 'magnetic sense' of the robin, according to one of the main hypotheses, seems to be a molecular process called the radical pair mechanism [7,8] (also, see [9,10] for reviews that include the historical development and the detailed facts leading to the hypothesis). It involves a photo-induced spatial separation of two electrons, whose spins interact with the Earth's magnetic field until they recombine and give rise to chemical products depending on their spin state upon recombination, and thereby to a different neural signal. The spin, as a genuine quantum mechanical degree of freedom, thereby controls in a non-trivial way a chemical reaction that gives rise to a macroscopic signal on the retina of the robin, which in turn influences the behaviour of the bird. When inspected from the viewpoint of decoherence, it is an intriguing interplay of the coherence (and entanglement) of the initial electron state and the environmentally induced decoherence in the radical pair mechanism that plays an essential role for the working of the magnetic compass. (4518)
So the hypothesis is that birds (and possibly other organisms) have evolved ways of exploiting "spin chemistry" to gain a signal from the presence of a magnetic field. What is spin chemistry? Here is a definition from the spin chemistry website (yes, spin chemistry has its own website!) (link):
Broadly defined, Spin Chemistry deals with the effects of electron and nuclear spins in particular, and magnetic interactions in general, on the rates and yields of chemical reactions. It is manifested as spin polarization in EPR and NMR spectra and the magnetic field dependence of chemical processes. Applications include studies of the mechanisms and kinetics of free radical and biradical reactions in solution, the energetics of photosynthetic electron transfer reactions, and various magnetokinetic effects, including possible biological effects of extremely low frequency and radiofrequency electromagnetic fields, the mechanisms by which animals can sense the Earth’s magnetic field for orientation and navigation, and the possibility of manipulating radical lifetimes so as to control the outcome of their reactions. (link)
Tiersch and Briegel go through the quantum-mathematical details of how this process might work for molecules of the kind found in birds' retinas. Here is their conclusion:
It seems that the radical pair mechanism provides an instructive example of how the behaviour of macroscopic entities, like the European robin, may indeed remain connected, in an intriguing way, to quantum processes on the molecular level. (4538)
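
To get a rough feel for the physics being invoked, here is a deliberately crude numerical toy of singlet-triplet interconversion. It is the so-called "Delta-g" picture, in which the two unpaired electrons simply precess at different rates in the field; the frequencies are arbitrary, there is no nuclear (hyperfine) spin, and nothing here is drawn from Ritz or from Tiersch and Briegel -- real cryptochrome models are far richer:

```python
import numpy as np
from scipy.linalg import expm

# Spin-z operator for a spin-1/2 particle (hbar = 1)
sz = np.array([[0.5, 0.0], [0.0, -0.5]])
I2 = np.eye(2)

# Toy radical pair: the two electrons see the magnetic field with slightly
# different effective (Larmor) frequencies, so the pair oscillates between
# singlet and triplet character.
w1, w2 = 1.00, 1.20                   # arbitrary illustrative frequencies
H = w1 * np.kron(sz, I2) + w2 * np.kron(I2, sz)

# Singlet state (|ud> - |du>)/sqrt(2); basis order |uu>, |ud>, |du>, |dd>
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

for t in np.linspace(0, 35, 8):
    psi_t = expm(-1j * H * t) @ singlet
    p_singlet = np.abs(np.vdot(singlet, psi_t)) ** 2
    print(f"t = {t:5.1f}   P(singlet) = {p_singlet:.3f}")
# Analytically P(singlet) = cos^2((w1 - w2) t / 2): the field-dependent
# singlet fraction is what biases the chemical products ("singlet yield"),
# and hence, on the hypothesis, the neural signal.
```
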
This line of thought is still unconfirmed, as Ritz and Tiersch and Briegel are all careful to emphasize. If confirmed, it would provide an affirmative answer to the question posed above -- are there biological effects of quantum-mechanical events? But even if confirmed, it doesn't seem like an enormously surprising result. It traces out a chemical reaction that proceeds differently depending on whether entangled electrons in molecules stimulated by a photon have been influenced by a magnetic field; this gives the biological system a signal about the presence of a magnetic field that does in fact depend on the quantum states of a pair of electrons. Entanglement is now well confirmed, so this line of thought isn't particularly radical. It is far less weird than the idea that quantum particles are "conscious", or that consciousness extends all the way down to the quantum level (quantum interactive dualism, as Henry Stapp calls it; link). And it is nowhere nearly as perplexing as the claim that "making up one's mind" is a form of collapse of a quantum state represented by a part of the brain.

(Of interest on this set of topics is a recent collection, Quantum physics meets the philosophy of mind, edited by Antonella Corradini and Uwe Meixner. Here is a video in which Hans Briegel discusses research on modeling quantum effects on agents: https://phaidra.univie.ac.at/detail_object/o:300666.)

Friday, December 18, 2015

Von Neumann on the brain


image: representation of a mammalian brain neural network 

After World War II John von Neumann became interested in the central nervous system as a computing organ. Ironically, more was probably known about neuroanatomy than about advanced digital computing in the 1940s; that situation has reversed, of course. Now we know a great deal about calculating, recognizing, searching, and estimating in silicon, but relatively little about how these kinds of processes work in the setting of the central nervous system. At the time of his final illness von Neumann was preparing a series of Silliman Lectures at Yale University that focused on the parallels between the digital computer and the brain; these were published posthumously as The Computer and the Brain (CB) in 1958. The topic also comes in for substantial discussion in Theory of Self-Reproducing Automata (TSRA), edited and published posthumously by Arthur Burks in 1966. It is very interesting to see how von Neumann sought to analyze this problem on the basis of the kinds of information available to him in the 1950s.

Much of CB takes the form of a rapid summary of the state of knowledge about digital computing machines that existed in the 1950s, from Turing to ENIAC. Almost all computers today possess the "von Neumann" architecture along these lines.


Alan Turing provided some of the mathematical and logical foundations of modern digital computing (link). He hypothesized a very simple computing device consisting of a tape of indefinite length, a tape drive mechanism that permitted moving the tape forwards or backwards one space, and a read-write mechanism that could read the mark in a tape location or erase and re-write the mark in that location. Here is a diagram of a Turing machine:

(Fascinatingly, here is a photo of a working model of a Turing machine; link.)


Turing's fundamental theorem is that any function that is computable at all is computable on a Turing machine; and there is a universal Turing machine that can simulate any other Turing machine. The von Neumann architecture and the computing machines that it spawned -- ENIAC and its heirs -- are implementations of a universal computing machine.
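
For readers who would like to see the scheme in working form, here is a toy Turing machine in a few lines of Python -- my own sketch, with an illustrative rule table that increments a binary number:

```python
# A minimal Turing machine: a tape, a head, and a table of transition rules.
def run_turing_machine(tape, rules, state="start", halt="halt"):
    tape = dict(enumerate(tape))     # sparse tape of indefinite length
    head = 0
    while state != halt:
        symbol = tape.get(head, "_")                 # "_" marks a blank cell
        state, write, move = rules[(state, symbol)]  # look up the rule
        tape[head] = write                           # erase and re-write
        head += 1 if move == "R" else -1             # move one space
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

# Rules: scan right to the end of the number, then add 1 with carries.
rules = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),   # 1 + 1 = 0, carry the 1
    ("carry", "0"): ("halt",  "1", "R"),   # absorb the carry
    ("carry", "_"): ("halt",  "1", "R"),   # the number was all 1s
}

print(run_turing_machine("1011", rules))   # 1011 (11) -> 1100 (12)
```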

From the time of Frege it has been understood that mathematical operations can be built up as compounds of several primitive operations -- addition, subtraction, etc.; so, for example, multiplication can be defined in terms of a sequence of additions. Programming languages and libraries of subroutines take advantage of this basic logic: new functions are defined as series of more elementary operations embodied in machine states. As von Neumann puts the point in CB:
More specifically: any computing machine that is to solve a complex mathematical problem must be “programmed” for this task. This means that the complex operation of solving that problem must be replaced by a combination of the basic operations of the machine. Frequently it means something even more subtle: approximation of that operation—to any desired (prescribed) degree—by such combinations. (5)
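
The point is easy to render in code. A minimal sketch (mine, not von Neumann's notation): multiplication defined using nothing but the machine's primitive addition operation:

```python
def add(a, b):
    return a + b          # stand-in for the machine's primitive adder

def multiply(a, b):
    """Replace the 'complex' operation a * b by b repeated additions of a
    (for non-negative integer b)."""
    total = 0
    for _ in range(b):
        total = add(total, a)
    return total

print(multiply(7, 6))     # 42
```
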
Key questions about the capacities of a computing machine, either electro-mechanical or biological, have to do with estimating its dimensionality: how much space does it occupy, how much energy does it consume, and how much time does it take to complete a given calculation? And this is where von Neumann's analysis took its origin. Von Neumann sought to arrive at realistic estimates of the size and functionality of the components of these two kinds of computing machines. The differences in scale are enormous, whether we consider speed, volume, or energy consumption. Fundamentally, neurons are more numerous by orders of magnitude (10^10 versus 10^4); slower by orders of magnitude (5 msec vs. 10^-3 msec); less energy-intensive by orders of magnitude (10^-3 ergs vs. 10^2 ergs); and computationally less precise by orders of magnitude. (Essentially he estimates that a neural circuit, either analog or digital, is capable of a precision of only about 1%.) And yet von Neumann concludes that brains accomplish computational problems faster than digital computers because of their massively parallel structure -- in spite of the comparative slowness of the individual elements of computation (neurons). This implies that the brain embodies a structurally different architecture than the sequential digital computing embodied in the von Neumann model.
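
It is worth doing the back-of-envelope arithmetic with the figures just quoted. The "operations per second" framing is my gloss, and the parallel limit is of course an idealization, but it shows why slow components need not mean a slow machine:

```python
# Von Neumann's rough figures, as quoted above
neurons, tubes = 1e10, 1e4       # component counts
t_neuron, t_tube = 5e-3, 1e-6    # seconds per operation (5 msec vs. 10^-3 msec)

serial_brain = 1 / t_neuron      # one neuron: 200 ops/sec
serial_machine = 1 / t_tube      # one vacuum tube: 1,000,000 ops/sec

# If every component can work at once (an idealized parallel limit):
parallel_brain = neurons / t_neuron     # ~2e12 ops/sec
parallel_machine = tubes / t_tube       # ~1e10 ops/sec

print(f"component speed: machine/brain = {serial_machine / serial_brain:,.0f}x")
print(f"aggregate throughput: brain/machine = {parallel_brain / parallel_machine:,.0f}x")
```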

Von Neumann takes the fundamental operator of the brain to be the neuron, and he represents the neuron as a digital device (in spite of its evident analog electrochemical properties). A neuron transmits a pulse. "The nervous pulses can clearly be viewed as (two-valued) markers.... The absence of a pulse then represents one value (say, the binary digit 0), and the presence of one represents the other (say, the binary digit 1)" (42). "The nervous system has a prima facie digital character" (44).
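
This digital reading of the neuron is easy to formalize. Here is a McCulloch-Pitts-style threshold unit -- a standard idealization in this tradition, though the weights and thresholds below are purely illustrative and not taken from von Neumann's text:

```python
# A threshold unit whose inputs and output are two-valued "pulses" (0 or 1).
def neuron(inputs, weights, threshold):
    """Fire (1) iff the weighted sum of input pulses reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Basic logical operations as single neurons, grounding the claim that
# networks of such units can implement the primitives of logic.
AND = lambda a, b: neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: neuron([a, b], [1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}")
```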

In their introduction to the second edition of CB the Churchlands summarize von Neumann's conclusion somewhat differently by emphasizing the importance of the analog features of the brain: "If the brain is a digital computer with a von Neumann architecture, it is doomed to be a computational tortoise by comparison... [But] the brain is neither a tortoise nor a dunce after all, for it was never a serial, digital machine to begin with: it is a massively parallel analog machine" (kl 397). However, it appears to me that they overstate the importance of analog neural features in von Neumann's account. Certainly vN acknowledges the analog electro-chemical features of neural activity; but I don't find him making a strong statement in this book to the effect that analog features contribute to the better-than-expected computational performance of the brain. This seems to correspond more to a view of the Churchlands than to von Neumann's analysis in the 1950s. Here is their view as expressed in "Could a Machine Think?" in Scientific American in 1990:
First, nervous systems are parallel machines, in the sense that signals are processed in millions of different pathways simultaneously. The retina, for example, presents its complex input to the brain not in chunks of eight, 16 or 32 elements, as in a desktop computer, but rather in the form of almost a million distinct signal elements arriving simultaneously at the target of the optic nerve (the lateral geniculate nucleus), there to be processed collectively, simultaneously and in one fell swoop. Second, the brain’s basic processing unit, the neuron, is comparatively simple. Furthermore, its response to incoming signals is analog, not digital, inasmuch as its output spiking frequency varies continuously with its input signals. Third, in the brain axons projecting from one neuronal population to another are often matched by axons returning from their target population. These descending or recurrent projections allow the brain to modulate the character of its sensory processing. (link, 35)
In considering the brain von Neumann reached several fundamental observations. First, the enormous neural network of the central nervous system is itself a universal computing machine. Von Neumann worked on the assumption that the CNS could be "programmed" to represent the fundamental operations of arithmetic and logic, and therefore it has all the power of a universal computational machine. But second, von Neumann believed his analysis demonstrated that its architecture is fundamentally different from the standard von Neumann architecture. This observation is the more fundamental. It derives from von Neumann's estimates of the basic speed of calculation available to neurons in comparison to vacuum tubes; a von Neumann machine with components on this time scale would take eons to complete the calculations that the brain performs routinely. And so this underlines the importance of the massively parallel computing that is accomplished by the biological neural network. Ironically, however, it has proven challenging to emulate massively parallel neural nets in digital computing environments; here is an interesting technical report by Paul Fox that identifies communication bandwidth as the primary limiting factor for such emulations (link).

(Tsutomu Miki explores some of these issues in Brainware: Bio-Inspired Architecture and Its Hardware Implementation.)

Friday, November 6, 2015

Social relations across class lines



People relate to each other on the basis of a set of moral and cognitive frameworks -- ideas about the social world and how others are expected to behave -- and on the basis of fairly specific scripts that prescribe their own behavior in given stylized circumstances. It is evident that there are important and deep differences across cultures, regions, and classes when it comes to the specifics of these frameworks and scripts. Part of what makes My Man Godfrey humorous is the mismatch of expectations brought forward by the different signals of social class presented by Godfrey. Is he a homeless man, a victim of the Depression, or an upper-class gentleman in disguise? His accent suggests the latter, whereas his dress and living conditions suggest one or another of the first two possibilities.

It is relatively rare for people in the United States to have sustained contact with individuals from substantially different socioeconomic circumstances; and when they do, the interactions are generally stylized and perfunctory. Consider churches: there is remarkably little socioeconomic diversity within churches in the United States. This is even more true of elite private and public universities (link). Take the percentage of Pell-eligible students as an indicator of socioeconomic diversity. The University of Wisconsin-Madison serves only 10% Pell-eligible students, and Yale University only 12%. According to the New York Times article providing these data, the upper margin of Pell eligibility is a family income of about $70,000; so roughly 90% of the undergraduate students in these elite universities come from families with more than $70,000 in annual income. What is the likelihood of a Yale or UW student having a serious, prolonged conversation with a person from a family below the poverty line (roughly $25,000)? It is virtually nil.

Non-elite public universities are more diverse by this measure; in 2011, 49% of the 19.7 million students in AASCU universities were Pell recipients (link). So the likelihood of cross-class conversations occurring in non-elite public universities is substantially higher than at flagships and elite private universities. But, as Elizabeth Armstrong and Laura Hamilton show in Paying for the Party: How College Maintains Inequality, even these more socioeconomically diverse public universities fall victim to institutional arrangements that serve to track students by their socioeconomic status into different life outcomes (link).

This lack of socioeconomic diversity in most fundamental institutions in the United States has many consequences. Among these is a high level of perspective-blindness when it comes to the ability of upper-income people to understand the worldview and circumstances of lower-income people. In a very blunt way, we do not understand each other. And these forms of blindness are even more opaque when they are compounded by unfamiliar racial or religious backgrounds for the two parties.

This socioeconomic separation may go some way towards explaining what otherwise appears very puzzling in our politics today -- the evident hostility to the poor that is embodied in conservative rhetoric about social policies like food assistance or access to Medicaid-subsidized health insurance. A legislator or commentator who has never had a serious conversation with a non-union construction worker supporting a family on $18.50/hour ($38,500 annually) will have a hard time understanding the meaning of a policy change that results in additional monthly expenses. But also, he or she may not be in a position to understand how prejudicial his or her way of speaking sounds to the low-income person. (I've treated this issue in an earlier post as well.)

E.P. Thompson considered some of these forms of separation and mutual incomprehension across class boundaries in eighteenth-century Britain in his excellent essay, "Patrician Society, Plebeian Culture" (link). His central theme is the transition from a paternalistic culture to a more purely economic and exploitative relationship. Patrons came to have less and less of a sense of obligation concerning the conditions of the poor within their domain. Simultaneously, men and women on the lower end of the socioeconomic spectrum came to have a more confident sense of their independence from earlier forms of subordination, sometimes in ways that alarmed the old elites. But this growing sense of independence did not, after all, threaten the relations of subordination that governed social life:
And yet one feels that "crisis" is too strong a term. If the complaint continues throughout the century that the poor were indisciplined, criminal, prone to tumult and riot, one never feels, before the French Revolution, that the rulers of England conceived that their whole social order might be endangered. The insubordination of the poor was an inconvenience; it was not a menace. The styles of politics and of architecture, the rhetoric of the gentry and their decorative arts, all seem to proclaim stability, self-confidence, a habit of managing all threats to their hegemony. (387)
The efforts that universities make to enhance the diversity and inclusiveness of their classrooms often focus on this point of social separation: how can we encourage students from different races, religions, or classes to interact with each other deeply enough to learn from each other? The need is real; the segregation of American society by race, religion, and socioeconomic status is a huge obstacle to mutual understanding and trust across groups. But all too often these efforts at teaching multicultural competence have less effect than they are designed to have. Organizations like AmeriCorps and CityYear probably have greater effect, simply because they succeed in recruiting highly diverse cohorts of young men and women who learn from each other while working on common projects (link).

Sunday, March 15, 2015

Improving skilled performance


What can ordinary experience tell us about how skilled practitioners can be brought to a high level of competence and performance?

Let's say we are interested in teaching a child to play the violin at an advanced level, beginning at the novice level. The outcome we want to influence is the child's ability to play competently. This requires developing an ensemble of capabilities, both cognitive and motor, in virtue of which the child is able to handle the bow and the instrument, interpret and perform the printed score, and give the performance appealing emotion. How does that process work?

In its essentials the violin coach works with a set of rules of thumb: demonstrate bow technique, motivate the child to emulate and practice, offer corrective feedback to which the child attempts to respond; more practice, more coaching, more performance.

This is a relatively simple process for an individual child and a coach, for several important reasons. The performance and its defects are visible. The sound from the string is squeaky? Hold the bow less tightly and concentrate on a straight stroke across the string. Further, the necessary component skills are largely independent of one another. So the coach can concentrate on the bow stroke for several lessons without worrying that the new proficiency with the bow will interfere with fingerboard technique or sight reading.

Moreover, neither child nor coach needs to be a cognitive psychologist or a physicist. It isn't necessary to have a theory of the underlying cognitive processes or the physics of the violin in order to help the pupil improve. Playing the violin capably isn't easy, but it is largely incremental and separable into progress in the several component skills.

Now consider a moderately more complex example of expertise: becoming a good writer. Here the problem for the coach is somewhat more difficult. The quality of the performance itself is no longer entirely visible. The coach has to make some interpretive judgments about the student's efforts. The skills are not so discrete, so coach and learner cannot work on them separately. And crucially, the skills are not so independent of one another. Enhancing logical narrative skills may detract from the skills associated with poetic expression or emotional directness. So a coaching strategy that focuses on skills A, B, and C may succeed in improving the components but lead to a degradation of performance overall.

Now consider a highly complex and non-transparent performance -- becoming an excellent test pilot, let us say. An excellent pilot requires a high level of excellence in a wide range of skills, including both motor skills and intellectual skills. So how should a coach approach the problem of transforming an 18-year-old into an excellent pilot? A major part of the answer is that we have now exceeded the capacity of individual coaching. The pilot needs a whole curriculum of training, in which coaching plays an occasional role. Significantly, it may be that coaching is most pertinent for pilots at the high end of expertise, to help them go from capable to exceptional. But the question of enhancing performance shifts from the techniques and rules of thumb used by individual coaches to the components and pedagogy of the flight training curriculum.

I find this an interesting topic because it focuses attention on one of the fundamental features of human capacity -- our ability to become highly skilled at various challenging tasks -- and asks how this process unfolds. What is it about the pupil's cognitive and motor potential that makes it possible for him or her to gain skill? And what features of coaching and training help to cultivate this potential?

But second, I find the topic interesting because of how distant it seems to be from a traditional university curriculum. Universities organize the learning process around formal courses and the mastery of content. A large part of skill cultivation, however, seems to require a more practical engagement with a coach or a teacher who encourages, corrects, and demonstrates.

(Here is an interesting bit of physics on why the violin is difficult; link.)