Thursday, June 25, 2020

STS and big science


A previous post noted the rapid transition in the twentieth century from small physics (Niels Bohr) to large physics (Ernest Lawrence). How should we understand the development of scientific knowledge in physics during this period of rapid growth and discovery?

One approach is through the familiar methods and narratives of the history of science -- what might be called "internal history of science". Researchers in the history of science generally approach the discipline from the point of view of discovery, intellectual debate, and the progress of scientific knowledge. David Cassidy's book Beyond Uncertainty: Heisenberg, Quantum Physics, and The Bomb is sharply focused on the scientific and intellectual debates in which Heisenberg was immersed during the development of quantum theory. His book is fundamentally a narrative of intellectual discovery. Cassidy also takes on the moral-political issue of serving a genocidal state as a scientist, but this discussion has little to do with the history of science that he offers. Peter Galison is a talented and imaginative historian of science, and he asks penetrating questions about how to explain the advent of important new scientific ideas. His treatment of Einstein's theory of relativity in Einstein's Clocks, Poincaré's Maps: Empires of Time, for example, shows how the material technology of clocks and the intellectual influences flowing through Einstein's social networks shaped his basic intuitions about space and time. But Galison too is primarily interested in telling a story about the origins of intellectual innovation.

It is of course valuable to have careful research studies of the development of science from the point of view of the intellectual context and concepts that influenced discovery. But fundamentally this approach leaves largely unexamined a difficult question: how do social, economic, and political institutions shape the direction of science?

The interdisciplinary field of science, technology, and society studies (STS) emerged in the 1970s as a sociological discipline that looked at laboratories, journals, and universities as social institutions, with their own interests, conflicts, and priorities. Hackett, Amsterdamska, Lynch, and Wajcman's Handbook of Science and Technology Studies provides a good introduction to the field. The editors explain that they consulted widely across researchers in the field, and instead of a unified and orderly "discipline" they found many cross-cutting connections and concerns.
What emerged instead is a multifaceted interest in the changing practices of knowledge production, concern with connections among science, technology, and various social institutions (the state, medicine, law, industry, and economics more generally), and urgent attention to issues of public participation, power, democracy, governance, and the evaluation of scientific knowledge, technology, and expertise. (kl 98)
The guiding idea of STS is that science is a socially situated human activity, embedded within sets of social and political relations and driven by a variety of actors with diverse interests and purposes. Rather than imagining that scientific knowledge is the pristine product of an impersonal and objective "scientific method" pursued by selfless individuals motivated solely by the search for truth, the STS field works on the premise that the institutions and actors within the modern scientific and technological system are unavoidably influenced by non-scientific interests. These include commercial interests (corporate-funded research in the pharmaceutical industry), political interests (funding agencies that embody the political agendas of the governing party), military interests (research on fields of knowledge and technological development that may have military applications), and even ideological interests (Lysenko's genetics and Soviet ideology). All of these different kinds of influence are evident in Hiltzik's account in Big Science: Ernest Lawrence and the Invention that Launched the Military-Industrial Complex of the evolution of the Berkeley Rad Lab, described in the earlier post.

In particular, individual scientists must find ways of fitting their talents, imagination, and insight into the institutions through which scientific research proceeds: universities, research laboratories, publication outlets, and sources of funding. And Hiltzik's book makes it very clear that a laboratory like the Radiation Lab that Lawrence created at the University of California-Berkeley had to be crafted and designed in a way that allowed it to secure the funds, equipment, and staff needed to carry forward the fundamental research, discovery, and experimentation that the researchers and the field of high-energy physics wished to conduct.

STS scholars sometimes sum up these complex social processes of institutions, organizations, interests, and powers leading to scientific and technological discovery as the "social construction of technology" (SCOT). And, indeed, both the course of physics and the development of the technologies associated with advanced physics research were socially constructed -- or guided, or influenced -- throughout this extended period of rapid advancement of knowledge. The investments that went into the Rad Lab did not go into other areas of potential research in physics or chemistry or biology; and of course this means that there were discoveries and advances that were delayed or denied as a result. (Here is a recent post on the topic of social influences on the development of technology; link.)

The question of how decisions are made about major investments in scientific research programs (including laboratories, training, and the cultivation of new generations of scientists) is a critically important one. In an idealized way one would hope for a process in which major multi-billion dollar and multi-decade investments in specific research programs would be made in a rational way, incorporating the best judgments and advice of experts in the relevant fields of science. One of the institutional mechanisms through which national science policy is evaluated and set is the activity of the National Academies of Sciences, Engineering, and Medicine (NASEM) and similar expert bodies (link). In physics the committees of the American Physical Society are actively engaged in assessing the present and future needs of the fundamental science of the discipline (link). And the National Science Foundation and National Institutes of Health have well-defined protocols for peer assessment of research proposals. So we might say that science investment and policy in the US have a reasonable level of expert governance. (Here is an interesting status report on declining support for young scientists in the life sciences in the 1990s from an expert committee commissioned by NASEM (link). This study illustrates the efforts made by learned societies to assess the progress of research and to recommend policies that will be needed for future scientific progress.)

But what if the institutions through which these decisions are made are decidedly non-expert and bureaucratized -- Congress or the Department of Energy, for example, in the case of high-energy physics? What if the considerations that influence decisions about future investments are importantly directed by political or economic interests (say, the economic impact of future expansion of the Fermilab on the Chicago region)? What if companies that provide the technologies underlying the super-conducting electromagnets needed for one strategy but not another are able to influence the decision in their favor? What are the implications of these forms of non-scientific influence for the future development of physics and other areas of science? (The decades-long development of the V-22 Osprey aircraft is a case in point, where pressure on members of Congress from corporations in their districts led to the continuation of the costly project long after the service branches had concluded that the aircraft no longer met their needs; link.)

Research within the STS field often addresses these kinds of issues. But so do researchers in organizational studies who would perhaps not identify themselves as part of the STS field. There is a robust tradition within sociology itself on the sociology of science. Robert Merton was a primary contributor with his book The Sociology of Science: Theoretical and Empirical Investigations (link). In organizational sociology Jason Owen-Smith's recent book Research Universities and the Public Good: Discovery for an Uncertain Future provides an insightful analysis of how research universities function as environments for scientific and technological research (link). And many other areas of research within contemporary organizational studies are relevant as well to the study of science as a socially constituted process. A good example of recent approaches in this field is Richard Scott and Gerald Davis, Organizations and Organizing: Rational, Natural and Open Systems Perspectives.

The big news for big science this week is the decision by CERN's governing body to take the first steps towards establishing a successor to the Large Hadron Collider, at an anticipated cost of 21 billion euros (link). The new device would be an electron-positron collider, with a plan to replace it later in the century with a proton-proton collider. Perhaps naively, I am predisposed to think that CERN's decision-making and priority-setting processes are more fully guided by scientific consensus than is the Department of Energy's decision-making process. However, it would be very helpful to have in-depth analysis of the workings of CERN, given the key role that it plays in the development of high-energy physics today. Here is an article in Nature reporting efforts by social-science observers like Arpita Roy, Karin Knorr Cetina, and John Krige to arrive at a more nuanced understanding of the decision-making processes at work within CERN (link).

Wednesday, June 24, 2020

The arc of justice


It has been over a month since the murder of George Floyd in Minneapolis. The horror, brutality, and relentless cruelty of George Floyd's death moves everyone who thinks about it. But George Floyd is, of course, not alone. Michael Brown was murdered by police in Ferguson, Missouri, in 2014, and Eric Garner was choked to death by New York City police in the same year. The Washington Post has created a database of police shootings since 2015 (link), which includes shootings but not other causes of death. According to the data reported there for more than 5,000 deaths recorded in 2015-2020, black individuals are 2.38 times as likely to be shot and killed by police as white individuals, and Hispanic individuals are 1.77 times as likely to be shot and killed by police as white individuals. During the past five years, persons shot and killed by police included 2,479 white individuals (13 per million), 1,298 black individuals (31 per million), 904 Hispanic individuals (23 per million), and 219 "other" individuals (4 per million). Plainly there are severe racial disparities in these data: black and brown people are much more likely to be shot by police than white people, and the arithmetic demonstrates beyond argument that black men and women are treated very differently from their white counterparts when it comes to police behavior.

Thanks to the availability of video evidence, a small number of these deaths at the hands of police have provoked widespread public outrage and protest. The Black Lives Matter movement has demanded that policing must change, and that police officers and their superiors must be held accountable for unjustified use of force. But it is evident from the Washington Post data that most cases do not gain much public recognition or concern; and even worse, nothing much has changed in the frequency of police killings in the five years since the deaths of Michael Brown and Eric Garner. There has not been a sea change in the use of deadly force against young men of color by police across the country. According to the WP data, black individuals were shot and killed by police at an average rate of more than 250 per year, and only a few of these deaths received national attention.

What change can we observe since the deaths of Michael Brown and Eric Garner? The Black Lives Matter movement has been a persistent and courageous effort to demand an end to racism and racist oppression. The public reaction to George Floyd's murder in the past month has been massive, sustained, and powerful. The persistent demonstrations that have occurred across the country -- with broad support across all racial groups -- seem to give some hope that American society is finally waking up to the deadly, crushing realities of racism in our country -- and is coming to realize that we must change. We must change our thinking, our acceptance of racial disparities, our toleration of hateful rhetoric and white supremacy, and our social and legal institutions. Is it possible that much of white America has at last emerged from centuries of psychosis and blindness on the subject of race, and is ready to demand change? Can we finally make a different America? In the words of Langston Hughes, "O, yes, I say it plain, America never was America to me, And yet I swear this oath—America will be!"

Michael Brown was killed at about the time of the 2014 annual meeting of the American Sociological Association. A small group of sociologists undertook to write a letter -- a manifesto, really -- concerning the pervasiveness and impact of racism and racial disparities in America. Sociologist Neda Maghbouleh organized colleagues in attendance to draft the letter during the ASA conference in San Francisco, and over 1,800 sociologists signed it. Nicki Lisa Cole contributed to writing the letter and summarizes its main points and recommendations here, and the text of the document can be found here. It is a powerful statement, both fact-based and normatively insistent. The whole document demands our attention, but here are two paragraphs that are especially important in today's climate of outrage about violent and unjustified use of force by police:

The relationship between African Americans and law enforcement is fraught with a long history of injustice, state violence and abuse of power. This history is compounded by a string of recent police actions that resulted in the deaths of Michael Brown (Ferguson, Mo.), Ezell Ford (Los Angeles, Calif.), Eric Garner (Staten Island, N.Y.), John Crawford (Beavercreek, Ohio), Oscar Grant (Oakland, Calif.), and the beating of Marlene Pinnock (Los Angeles, Calif.) by a California Highway Patrol officer. These events reflect a pattern of racialized policing, and will continue to occur in the absence of a national, long-term strategy that considers the role of historic social processes that have institutionalized racism within police departments and the criminal justice system more broadly.

Law enforcement’s hyper-surveillance of black and brown youth has created a climate of suspicion of people of color among police departments and within communities. The disrespect and targeting of black men and women by police departments across the nation creates an antagonistic relationship that undermines community trust and inhibits effective policing. Instead of feeling protected by police, many African Americans are intimidated and live in daily fear that their children will face abuse, arrest and death at the hands of police officers who may be acting on implicit biases or institutional policies based on stereotypes and assumptions of black criminality. Similarly, the police tactics used to intimidate protesters exercising their rights to peaceful assembly in Ferguson are rooted in the history of repression of African American protest movements and attitudes about blacks that often drive contemporary police practices.


These descriptions are not ideological, and they are not statements of political opinion. Rather, they are fact-based observations about racial disparities in our society that any honest observer would agree with. Alice Goffman's On the Run: Fugitive Life in an American City provides ethnographic documentation of many of these realities of surveillance, disrespect, and antagonism in Philadelphia (link).

Sociologists, public health experts, historians, and other social scientists have written honestly and passionately about the nature of the race regime in America. Michelle Alexander captures the thrust of much of this analysis in her outstanding book The New Jim Crow: Mass Incarceration in the Age of Colorblindness, and the phrase "the New Jim Crow" is brilliant as a description of life today for tens of millions of African-Americans. But the current moment demands more than analysis and policy recommendations -- it demands that all of America listen, and that we come to understand and feel the life experience that racism has created in our country. It seems that we need to hear a poetic voice as well as a sociological or political analysis.

One of those voices is Langston Hughes. Here are two of Langston Hughes' incredibly powerful poems from the 1930s that speak to our times, "The Kids Who Die" and "Let America Be America Again".

The Kids Who Die
1938

This is for the kids who die,
Black and white,
For kids will die certainly.
The old and rich will live on awhile,
As always,
Eating blood and gold,
Letting kids die.

Kids will die in the swamps of Mississippi
Organizing sharecroppers
Kids will die in the streets of Chicago
Organizing workers
Kids will die in the orange groves of California
Telling others to get together
Whites and Filipinos,
Negroes and Mexicans,
All kinds of kids will die
Who don't believe in lies, and bribes, and contentment
And a lousy peace.

Of course, the wise and the learned
Who pen editorials in the papers,
And the gentlemen with Dr. in front of their names
White and black,
Who make surveys and write books
Will live on weaving words to smother the kids who die,
And the sleazy courts,
And the bribe-reaching police,
And the blood-loving generals,
And the money-loving preachers
Will all raise their hands against the kids who die,
Beating them with laws and clubs and bayonets and bullets
To frighten the people—
For the kids who die are like iron in the blood of the people—
And the old and rich don't want the people
To taste the iron of the kids who die,
Don't want the people to get wise to their own power,
To believe an Angelo Herndon, or even get together

Listen, kids who die—
Maybe, now, there will be no monument for you
Except in our hearts
Maybe your bodies'll be lost in a swamp
Or a prison grave, or the potter's field,
Or the rivers where you're drowned like Leibknecht
But the day will come—
You are sure yourselves that it is coming—
When the marching feet of the masses
Will raise for you a living monument of love,
And joy, and laughter,
And black hands and white hands clasped as one,
And a song that reaches the sky—
The song of the life triumphant
Through the kids who die.

Let America be America again
1935

Let America be America again.
Let it be the dream it used to be.
Let it be the pioneer on the plain
Seeking a home where he himself is free.

(America never was America to me.)

Let America be the dream the dreamers dreamed—
Let it be that great strong land of love
Where never kings connive nor tyrants scheme
That any man be crushed by one above.

(It never was America to me.)

O, let my land be a land where Liberty
Is crowned with no false patriotic wreath,
But opportunity is real, and life is free,
Equality is in the air we breathe.

(There's never been equality for me,
Nor freedom in this "homeland of the free.")

Say, who are you that mumbles in the dark?
And who are you that draws your veil across the stars?

I am the poor white, fooled and pushed apart,
I am the Negro bearing slavery's scars.
I am the red man driven from the land,
I am the immigrant clutching the hope I seek—
And finding only the same old stupid plan
Of dog eat dog, of mighty crush the weak.

I am the young man, full of strength and hope,
Tangled in that ancient endless chain
Of profit, power, gain, of grab the land!
Of grab the gold! Of grab the ways of satisfying need!
Of work the men! Of take the pay!
Of owning everything for one's own greed!

I am the farmer, bondsman to the soil.
I am the worker sold to the machine.
I am the Negro, servant to you all.
I am the people, humble, hungry, mean—
Hungry yet today despite the dream.
Beaten yet today—O, Pioneers!
I am the man who never got ahead,
The poorest worker bartered through the years.

Yet I'm the one who dreamt our basic dream
In the Old World while still a serf of kings,
Who dreamt a dream so strong, so brave, so true,
That even yet its mighty daring sings
In every brick and stone, in every furrow turned
That's made America the land it has become.
O, I'm the man who sailed those early seas
In search of what I meant to be my home—
For I'm the one who left dark Ireland's shore,
And Poland's plain, and England's grassy lea,
And torn from Black Africa's strand I came
To build a "homeland of the free."

The free?

Who said the free? Not me?
Surely not me? The millions on relief today?
The millions shot down when we strike?
The millions who have nothing for our pay?
For all the dreams we've dreamed
And all the songs we've sung
And all the hopes we've held
And all the flags we've hung,
The millions who have nothing for our pay—
Except the dream that's almost dead today.

O, let America be America again—
The land that never has been yet—
And yet must be—the land where every man is free.
The land that's mine—the poor man's, Indian's, Negro's, ME—
Who made America,
Whose sweat and blood, whose faith and pain,
Whose hand at the foundry, whose plow in the rain,
Must bring back our mighty dream again.

Sure, call me any ugly name you choose—
The steel of freedom does not stain.
From those who live like leeches on the people's lives,
We must take back our land again,
America!

O, yes,
I say it plain,
America never was America to me,
And yet I swear this oath—
America will be!

Out of the rack and ruin of our gangster death,
The rape and rot of graft, and stealth, and lies,
We, the people, must redeem
The land, the mines, the plants, the rivers.
The mountains and the endless plain—
All, all the stretch of these great green states—
And make America again!

Big physics and small physics




When Niels Bohr traveled to Britain in 1911 to study at the Cavendish Laboratory at Cambridge, the director was J.J. Thomson and the annual budget was minimal. In 1892 the entire budget for supplies, equipment, and laboratory assistants was a little over £1400 (Dong-Won Kim, Leadership and Creativity: A History of the Cavendish Laboratory, 1871-1919 (Archimedes), p. 81). Funding derived almost entirely from a small allocation from the University (about £250) and student fees from lectures and laboratory use at the Cavendish (about £1179). Kim describes the finances of the laboratory in these terms:
Lack of funds had been a chronic problem of the Cavendish Laboratory ever since its foundation. Although Rayleigh had established a fund for the purchase of necessary apparatus, the Cavendish desperately lacked resources. In the first years of J.J.’s directorship, the University’s annual grant to the laboratory of about £250 did not increase, and it was used mainly to pay the wages of the Laboratory assistants (£214 of this amount, for example, went to salaries in 1892). To pay for the apparatus needed for demonstration classes and research, J.J. relied on student fees. 
Students ordinarily paid a fee of £1.1 to attend a lecture course and a fee of £3.3 to attend a demonstration course or to use space in the Laboratory. As the number of students taking Cavendish courses increased, so did the collected fees. In 1892, these fees totaled £1179; in 1893 the total rose a bit to £1240; and in 1894 rose again to £1409. Table 3.5 indicates that the Cavendish’s expenditures for “Apparatus, Stores, Printing, &c.” (£230 3s 6d in 1892) nearly equaled the University’s entire grant to the Cavendish (£254 7s 6d in 1892). (80)
The Cavendish Laboratory exerted great influence on the progress of physics in the early twentieth century; but it was distinctly organized around a "small science" model of research. (Here is an internal history of the Cavendish Lab; link.) The primary funding for research at the Cavendish came from the university itself, student fees, and occasional private gifts to support expansion of laboratory space, and these funds were very limited. And yet during those decades, there were plenty of brilliant physicists at work at the Cavendish Lab. Much of the future of twentieth century physics was still to be written, and Bohr and many other young physicists who made the same journey completely transformed the face of physics. And they did so in the context of "small science".

Abraham Pais's intellectual and scientific biography of Bohr, Niels Bohr's Times: In Physics, Philosophy, and Polity, provides a detailed account of Bohr's intellectual and personal development. Here is Pais's description of Bohr's arrival at the Cavendish Lab:
At the time of Bohr's arrival at the Cavendish, it was, along with the Physico-Technical Institute in Berlin, one of the world's two leading centers in experimental physics research. Thomson, its third illustrious director, successor to Maxwell and Rayleigh, had added to its distinction by his discovery of the electron, work for which he had received the Nobel Prize in 1906. (To date the Cavendish has produced 22 Nobel laureates.) In those days, 'students from all over the world looked to work with him... Though the master's suggestions were, of course, most anxiously sought and respected, it is no exaggeration to add that we were all rather afraid he might touch some of our apparatus.' Thomson himself was well aware that his interaction with experimental equipment was not always felicitous: 'I believe all the glass in the place is bewitched.' ... Bohr knew of Thomson's ideas on atomic structure, since these are mentioned in one of the latter's books which Bohr had quoted several times in his thesis. This problem was not yet uppermost in his mind, however, when he arrived in Cambridge. When asked later why he had gone there for postdoctoral research he replied: 'First of all I had made this great study of the electron theory. I considered... Cambridge as the center of physics and Thomson as a most wonderful man.' (117, 119)
On the origins of his theory of the atom:
Bohr's 1913 paper on α-particles, which he had begun in Manchester, and which had led him to the question of atomic structure, marks the transition to his great work, also of 1913, on that same problem. While still in Manchester, he had already begun an early sketch of these entirely new ideas. The first intimation of this comes from a letter, from Manchester, to Harald: 'Perhaps I have found out a little about the structure of atoms. Don't talk about it to anybody... It has grown out of a little information I got from the absorption of α-rays.' (128)
And his key theoretical innovation:
Bohr knew very well that his two quoted examples had called for the introduction of a new and as yet mysterious kind of physics, quantum physics. (It would become clear later that some oddities found in magnetic phenomena are also due to quantum effects.) Not for nothing had he written in the Rutherford memorandum that his new hypothesis 'is chosen as the only one which seems to offer a possibility of an explanation of the whole group of experimental results, which gather about and seems to confirm conceptions of the mechanismus [sic] of the radiation as the ones proposed by Planck and Einstein'. His reference in his thesis to the radiation law concerns of course Planck's law (5d). I have not yet mentioned the 'calculations of heat capacity' made by Einstein in 1906, the first occasion on which the quantum was brought to bear on matter rather than radiation. (138)
But here is the critical point: Bohr's pivotal contributions to physics derived from his exposure to the theoretical physics literature of the time, his own mathematical analysis of theoretical assumptions about the constituents of matter, and his experience in laboratories whose total investment amounted to only a few thousand pounds.

Now move forward a few decades to 1929 when Ernest Lawrence conceived of the idea of the cyclical particle accelerator, the cyclotron, and soon after founded the Radiation Lab at Berkeley. Michael Hiltzik tells this story in Big Science: Ernest Lawrence and the Invention that Launched the Military-Industrial Complex, and it is a very good case study documenting the transition from small science to big science in the United States. The story demonstrates the vertiginous rise of large equipment, large labs, large funding, and big science. And it demonstrates the deeply interwoven careers of fundamental physics and military and security priorities. Here is a short description of Ernest Lawrence:
Ernest Lawrence’s character was a perfect match for the new era he brought into being. He was a scientific impresario of a type that had seldom been seen in the staid world of academic research, a man adept at prying patronage from millionaires, philanthropic foundations, and government agencies. His amiable Midwestern personality was as much a key to his success as his scientific genius, which married an intuitive talent for engineering to an instinctive grasp of physics. He was exceptionally good-natured, rarely given to outbursts of temper and never to expressions of profanity. (“Oh, sugar!” was his harshest expletive.) Raising large sums of money often depended on positive publicity, which journalists were always happy to deliver, provided that their stories could feature fascinating personalities and intriguing scientific quests. Ernest fulfilled both requirements. By his mid-thirties, he reigned as America’s most famous native-born scientist, his celebrity validated in November 1937 by his appearance on the cover of Time over the cover line, “He creates and destroys.” Not long after that, in 1939, would come the supreme encomium for a living scientist: the Nobel Prize. (kl 118)
And here is Hiltzik's summary of the essential role that money played in the evolution of physics research in this period:
Money was abundant, but it came with strings. As the size of the grants grew, the strings tautened. During the war, the patronage of the US government naturally had been aimed toward military research and development. But even after the surrenders of Germany and Japan in 1945, the government maintained its rank as the largest single donor to American scientific institutions, and its military goals continued to dictate the efforts of academic scientists, especially in physics. World War II was followed by the Korean War, and then by the endless period of existential tension known as the Cold War. The armed services, moreover, had now become yoked to a powerful partner: industry. In the postwar period, Big Science and the “military-industrial complex” that would so unnerve President Dwight Eisenhower grew up together. The deepening incursion of industry into the academic laboratory brought pressure on scientists to be mindful of the commercial possibilities of their work. Instead of performing basic research, physicists began “spending their time searching for ways to pursue patentable ideas for economic rather than scientific reasons,” observed the historian of science Peter Galison. As a pioneer of Big Science, Ernest Lawrence would confront these pressures sooner than most of his peers, but battles over patents—not merely what was patentable but who on a Big Science team should share in the spoils—would soon become common in academia. So too would those passions that government and industry shared: for secrecy, for regimentation, for big investments to yield even bigger returns. (kl 185)

Particle accelerators became the critical tool in experimental physics. A succession of ever-more-powerful accelerators became the laboratory apparatus through which questions and theories being developed in theoretical physics could be pursued by bombarding targets with ever-higher energy particles (protons, electrons, neutrons). Instead of looking for chance encounters with high-energy cosmic rays, it was possible to use controlled processes within particle accelerators to send ever-higher energy particles into collisions with a variety of elements.
What is intriguing about Hiltzik's story is the fascinating interplay of separate factors the narrative invokes: major developments in theoretical physics (primarily in Europe), Lawrence's accidental exposure to a relevant research article, the personal qualities and ambition of Lawrence himself, the imperatives and opportunities for big physics created by atomic bomb research in the 1940s, and the institutional constraints and interests of the University of California. This is a story of the advancement of physics that illustrates a huge amount of contingency and path dependency during the 1930s through 1950s. The engineering challenges of building and maintaining a particle accelerator were substantial as well, and had those challenges not been surmounted, the instrument would have been impossible to build. (Maintaining a vacuum in a super-large canister itself proved to be a huge technical challenge.)

Physics changed dramatically between 1905 and 1945, and the balance between theoretical physics and experimental physics was one important indicator of this change. And the requirements of experimental physics went from the lab bench to the cyclotron -- from a few hundred dollars (pounds, marks, krone, euros) of investment to hundreds of millions of dollars (and now billions) in investment. This implied, fundamentally, that scientific research evolved from an individual activity taking place in university settings to an activity involving the interests of the state, big business, and the military -- in addition to the scientific expertise and imagination of the physicists.

Saturday, June 20, 2020

Guest post by Nicholas Preuth


Nicholas Preuth is a philosophy student at the University of Michigan. His primary interests fall in the philosophy of law and the philosophy of social science. Thanks, Nick, for contributing this post!

Distinguishing Meta-Social Ontology from Social Ontology

Social ontology is the study of the properties of the social world. Conventional claims about social ontology proceed by asking and answering questions such as, “what is the existential status of social entities (e.g. institutions, governments, etc.)?", “can institutions exert causal influence?”, “what is the causal relationship between micro, meso, and macro-level social entities?”, etc. Daniel Little is one of the many philosophers and sociologists who have written extensively on the topic of social ontology (see discussions here, here, and here). The types of arguments and discussions found in those blog posts represent conventional social ontology—conventional in the sense that their content constitutes the commonly agreed-upon purview of social ontology discussions.

However, in recent years, many works of social ontology have embedded a new type of claim that differs from the conventional discussions of social ontology. These new claims are methodological claims about the role and importance of ontology in conducting social scientific research. Unlike conventional claims about ontology that try to answer substantive questions about the nature of the social world, these methodological claims ask why ontology matters and what role ontology should play in the conduct of social science research. Here is an example of Brian Epstein making such a claim in his work, The Ant Trap: Rebuilding the Foundations of the Social Sciences:
Ontology has ramifications, and ontological mistakes lead to scientific mistakes. Commitments about the nature of the entities in a science—how they are composed, the entities on which they ontologically depend—are woven into the models of science.…despite Virchow’s expertise with a microscope, his commitment to cell theory led him to subdivide tissues into cells where there are none. And that led to poor theories about how anatomical features come to be, how they are changed or destroyed, and what they do. (Brian Epstein, The Ant Trap: Rebuilding the Foundations of the Social Sciences (Oxford: Oxford University Press, 2015, 40-41))
Notice how in this passage Epstein makes a claim about why ontology is important and, consequently, tacitly takes a stance on the methodological relationship between ontology and research. According to Epstein, ontology matters because ontology shapes the very way that we investigate the world. He believes that bad ontology leads researchers into scientific mistakes because ontology distorts a researcher’s ability to objectively investigate phenomena. Epstein’s unstated conclusion here—never explicitly formulated in his book, even though it is a very important underlying premise of his project—is that ontological theorizing must take methodological priority over scientific experimentation. As Epstein might sum up, we ought to think about ontology first, and then conduct research later.

Yet Epstein’s statement is not the only way of construing the relationship between ontology and research. His unstated assumption that ontological work should be done before research is a highly contested assertion. Why should we just accept that ontology should come before empirical research? Are there no other ways of thinking about the relationship between ontology and social science research? These methodological questions are better treated as separate, distinct questions rather than being embedded within the usual set of conventional questions about social ontology. There should be a conceptual distinction between the conventional claims about social ontology that actually engage with understanding the social world, and these new kinds of methodological claims about the relationship between ontology and research. If we adhere to such a distinction, then Epstein’s methodological claims do not belong to the field of social ontology: they are claims about meta-social ontology.

Meta-social ontology aims to explicitly illuminate the methodological relationship between ontological theorizing in the social sciences and the empirical practice of social science research. The field of meta-social ontology seeks to answer two crucial questions:
  1. What methodology best guides the practice of ontological theorizing?
  2. To what extent should we be existentially committed to the ontological statements we make about the social world?
Let’s spend some time examining both questions, as well as proposed answers to each question.

The first question is a clear formulation of the kind of question that Epstein wants to answer in his book. There are two typical approaches to answering this question. Epstein’s approach, that ontological theorizing must occur prior to and outside of scientific experimentation, is called apriori ontology. Apriori ontology argues that ontology can be successfully crafted through theoretical deductions and philosophical reasoning, and that it is imperative to do so because ontological mistakes lead to scientific mistakes. Here is another philosopher, John Searle, supporting the apriori social ontology position:
I believe that where the social sciences are concerned, social ontology is prior to methodology and theory. It is prior in the sense that unless you have a clear conception of the nature of the phenomena you are investigating, you are unlikely to develop the right methodology and the right theoretical apparatus for conducting the investigation. (John Searle, “Language and Social Ontology,” Theory and Society, Vol. 37:5, 2008, 443).
Searle’s formulation of apriori ontology here gives an explicit methodological priority to ontological theorizing. In other words, he believes that the correct ontology needs to be developed first before scientific experimentation, or else the experimentation will be misguided. No doubt Epstein agrees with this methodological priority, but he does not explicitly state it. Nevertheless, both Searle and Epstein are clear advocates of the apriori ontology position.

However, there is another approach to ontological theorizing that challenges apriori ontology as being too abstracted from the actual conduct of social science experimentation. This other approach is called aposteriori ontology. Aposteriori ontology rejects the efficacy of abstract ontological theorizing derived from speculative metaphysics. Instead, aposteriori ontology advocates for ontology to be continually constructed, informed, and refined by empirical social science research. Here is Little’s formulation of aposteriori ontology:
I believe that ontological theorizing is part of the extended scientific enterprise of understanding the world, and that efforts to grapple with empirical puzzles in the world are themselves helpful to refine and specifying our ontological ideas…. Ontological theories are advanced as substantive and true statements of some aspects of the social world, but they are put forward as being fundamentally a posteriori and corrigible. (D. Little, “Social Ontology De-dramatized,” Philosophy of the Social Sciences, I-11, 2020, 2-4)
Unlike apriori ontology, aposteriori ontology does not regard ontology as prior to scientific research. Instead, aposteriori ontology places scientific experimentation alongside ontological theorizing as two tools that go hand-in-hand in guiding our understanding of the social world. In sum, the apriori vs. aposteriori debate revolves around whether ontology should be seen as an independent, theoretical pursuit that determines our ability to investigate the world, or as another collaborative tool within the scientific enterprise, alongside empirical research and theory formation, that helps us advance our understanding of the nature of the social world.

The second question in the field of meta-ontology is a question of existential commitment: to what extent do we need to actually believe in the existence of the entities posited by our ontological statements about the world? This is less complicated than it sounds. Consider this example: we often talk about the notion of a “ruling class” in society, where “ruling class” is understood as a social group that wields considerable influence over a society’s political, economic, and social agenda. When we employ the term “ruling class,” do we actually mean to say that such a formation really exists in society, or is this just a helpful term that allows us to explain the occurrence of certain social phenomena while also allowing us to continue to generate more explanations of more social phenomena? This is the heart of the second issue in meta-ontology.

Similar to the apriori vs. aposteriori debate, proposed answers to this question tend to be dichotomous. The two main approaches to this question are realism and anti-realism (sometimes called pragmatism). Realism asserts that we should be existentially committed to the ontological entities that we posit. Epstein, Searle, and Little are among those who fall into this camp. Here is Epstein’s approximate formulation of realism:
What are social facts, social objects, and social phenomena—these things that the social sciences aim to model and explain?… How the social world is built is not a mystery, not magical or inscrutable or beyond us. (Epstein, The Ant Trap, 7)
As Epstein expresses here, realists believe that it is possible to discover the social world just as scientists discover the natural world. Realists maintain that their ontological statements about the world reflect social reality, meaning that the discovery and explanatory success of the “ruling class” hypothesis is like finding a new theory of the natural world.

Contrarily, anti-realists/pragmatists argue that ontology is only useful insofar as it advances scientific inquiry and enables successful inferences to a larger number of social phenomena. They do not believe that ontological statements reflect social reality, so they are not existentially committed to the truth of any particular ontology of the social world. Richard Lauer, a proponent of an anti-realist/pragmatist meta-social ontology, defines it like this:
The function of these statements is pragmatic. Such statements may open new possibilities that can further scientific aims, all without requiring a realist attitude…instead of concerning ourselves with whether there really are such [things], we may ask about the empirical merits of moving to [such] a view. (Richard Lauer, “Is Social Ontology Prior to Social Scientific Methodology,” Philosophy of the Social Sciences, Vol. 49:3, 2019, 184)
Taking the ruling class example above, an anti-realist/pragmatist like Lauer would suggest that the concept of a ruling class is useful because it allows us to generate more explanations of social phenomena, while rejecting the idea that there is such a thing as a “ruling class” that actually exists.

There is, however, some room for middle ground between realism and anti-realism. Harold Kincaid, another well-known philosopher of social science, has tried to push the realism/anti-realism debates in a more fruitful direction by asserting that a better way to answer the question is by addressing the question towards empirical research in specific, localized contexts:
I think we can go beyond blanket realism or instrumentalism if we look for more local issues and do some clarification. A first step, I would argue, is to see just when, where, and how specific social research with specific ontologies has empirical success…The notion of a ‘ruling class’ at certain times and places explains much. Does dividing those at the top differently into ruling elites also explain? That could well be the case and it could be that we can do so without contradicting the ruling class hypothesis…These are the kind of empirical issues which give ‘realism’ and ‘pluralism’ concrete implications. (Harold Kincaid, “Concrete Ontology: Comments on Lauer, Little, and Lohse,” Philosophy of the Social Sciences, I-8, 2020, 4-5)
Kincaid suggests here that a better way of arguing for the efficacy of a realist or anti-realist meta-ontology is by looking at the particular success of specific ontological statements in the social sciences and thereby determining an answer from there. Taking our ruling class example, Kincaid would suggest that we investigate the success of the ruling class hypothesis in localized contexts, and then from there evaluate our existential commitment to it based on its ability to successfully explain social phenomena and provoke new research regarding new social phenomena. This is still a clear endorsement of realism with respect to social concepts and entities. However, it pushes the conversation away from blanket realism (like Epstein) and blanket pragmatism (like Lauer). Instead, Kincaid emphasizes the interaction of empirical research on the subsequent development of our realist/anti-realist meta-ontological position towards specific social phenomena. Thus, as Kincaid sums up his position, “we need to get more concrete!” (Kincaid, 8).

So, there are many ways one can think about the methodological relationship between social ontology and social science research. If we were to categorize the philosophers discussed here, it would look like this:
  1. Apriori realism ontology (Searle, Epstein)
  2. Aposteriori realism ontology (Little, Kincaid)
  3. Anti-realist pragmatism ontology (Lauer)
In light of these discussions, it is important that works of social ontology maintain a conceptual distinction between social ontology arguments and meta-social ontology arguments. As we saw with Epstein, it can be tempting to fold meta-social ontological justifications into a new work of social ontology. However, this blurs the distinction between the field of social ontology and the field of meta-social ontology, and it obscures the fact that meta-social ontological discussions deserve a treatment in their own right. As a complex, abstract field that deals with difficult subject matter, social ontology should strive for the utmost clarity. Adding meta-social ontological considerations as a quick aside in a work on social ontology just muddies the already murky water.

Thursday, June 18, 2020

A big-data contribution to the history of philosophy


The history of philosophy is generally written by subject experts who explore and follow a tradition of thought about which figures and topics were "pivotal" and thereby created an ongoing research field. This is illustrated, for example, in Stephen Schwartz's A Brief History of Analytic Philosophy: From Russell to Rawls. Consider the history of Anglophone philosophy since 1880 as told by a standard narrative in the history of philosophy of this period. One important component was "logicism" -- the idea that the truths of mathematics can be derived from purely logical axioms using symbolic logic. Peano and Frege formulated questions about the foundations of arithmetic; Russell and Whitehead sought to carry out this program of "logicism"; and Gödel proved the impossibility of carrying out this program: any set of axioms rich enough to derive theorems of arithmetic is either incomplete or inconsistent. This narrative serves to connect the dots in this particular map of philosophical development. We might want to add details like the impact of logicism on Wittgenstein and the impact of Tractatus Logico-Philosophicus, but the map is developed by tracing contacts from one philosopher to another, identifying influences, and aggregating groups of topics and philosophers into "schools".

Brian Weatherson, a philosopher at the University of Michigan, has a different idea about how we might proceed in mapping the development of philosophy over the past century (link) (Brian Weatherson, A History of Philosophy Journals: Volume 1: Evidence from Topic Modeling, 1876-2013. Vol. 1. Published by author on Github, 2020; link). Professional philosophy in the past century has been primarily expressed in the pages of academic journals. So perhaps we can use a "big data" approach to the problem of discovering and tracking the emergence of topics and fields within philosophy by analyzing the frequency and timing of topics and concepts as they appear in academic philosophy journals.

Weatherson pursues this idea systematically. He has downloaded from JSTOR the full contents of twelve leading journals in anglophone philosophy for the period 1876-2013, producing a database of some 32,000 articles and lists of all words appearing in each article (as well as their frequencies). Using the big-data technique called "topic modeling" he has arrived at 90 topics (clusters of terms) that recur in these articles. Here is a quick description of topic modeling.
Topic modeling is a type of statistical modeling for discovering the abstract “topics” that occur in a collection of documents. Latent Dirichlet Allocation (LDA) is an example of topic model and is used to classify text in a document to a particular topic. It builds a topic per document model and words per topic model, modeled as Dirichlet distributions. (link)
Here is Weatherson's description of topic modeling:
An LDA model takes the distribution of words in articles and comes up with a probabilistic assignment of each paper to one of a number of topics. The number of topics has to be set manually, and after some experimentation it seemed that the best results came from dividing the articles up into 90 topics. And a lot of this book discusses the characteristics of these 90 topics. But to give you a more accessible sense of what the data looks like, I’ll start with a graph that groups those topics together into familiar contemporary philosophical subdisciplines, and displays their distributions in the 20th and 21st century journals. (Weatherson, introduction)
Now we are ready to do some history. Weatherson applies the algorithms of LDA topic modeling to this database of journal articles and examines the results. It is important to emphasize that this method is not guided by the intuitions or background knowledge of the researcher; rather, it algorithmically groups documents into clusters based on the frequencies of various words appearing in the documents. Weatherson also generates a short list of keywords for each topic: words of a reasonable frequency for which the probability of the word appearing in articles in the topic is significantly greater than the probability of it occurring in a random article. And he further groups the 90 topics into a dozen familiar "categories" of philosophy (History of Philosophy, Idealism, Ethics, Philosophy of Science, etc.). This exercise of assigning topics to categories requires judgment and expertise on Weatherson's part; it is not algorithmic. Likewise, the assignment of names to the 90 topics requires expertise and judgment. From the point of view of the LDA model, the topics could be given entirely meaningless names: T1, T2, ..., T90.
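Weatherson's actual pipeline is not reproduced here, but the basic workflow he describes -- a bag-of-words representation of each article, an LDA model with a chosen number of topics, and a list of high-weight words for each topic -- is easy to sketch. The toy example below uses scikit-learn; the corpus, the number of topics, and the simple "top words" rule are illustrative stand-ins, not Weatherson's data or his keyword criterion.

```python
# A minimal sketch of LDA topic modeling in the spirit of Weatherson's study.
# The "corpus" here is a toy stand-in for the ~32,000 JSTOR articles; in the
# real project each document would be the word list of one journal article.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "the axioms of arithmetic can be derived from purely logical axioms",
    "the observer measures the particle in a controlled experiment",
    "justice requires a fair distribution of social goods",
    # ... thousands more articles ...
]

# Bag-of-words representation: rows are articles, columns are word counts.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(corpus)

# Fit an LDA model. Weatherson settled on 90 topics after experimentation;
# that number only makes sense for the full corpus, so a toy corpus gets 3.
n_topics = 3
lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
doc_topic = lda.fit_transform(X)   # rows: articles; columns: P(topic | article)

# "Keywords" for each topic: here simply the highest-weight words per topic
# (a common simplification; Weatherson's keyword rule is more selective).
vocab = vectorizer.get_feature_names_out()
for k, word_weights in enumerate(lda.components_):
    top_words = [vocab[i] for i in word_weights.argsort()[::-1][:5]]
    print(f"Topic {k}: {', '.join(top_words)}")
```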

Now every article has been assigned to a topic and a category, and every topic has a set of keywords that are algorithmically determined. Weatherson then goes back and examines the frequency of each topic and category over time, presented as graphs of the frequencies of each category in the aggregate (including all twelve journals) and singly (for each journal). The graphs look like this:


We can look at these graphs as measures of the rise and fall of prevalence of various fields of philosophy research in the Anglophone academic world over the past century. Most striking is the contrast between idealism (precipitous decline since 1925) and ethics (steady increase in frequency since about the same time), but each category shows some interesting characteristics.

Now consider the disaggregation of a single topic over the twelve journals. Weatherson presents this breakdown for all ninety topics. Here is the set of graphs for the topic "Methodology of Science":


All the journals -- including Ethics and Mind -- have articles classified under the topic of "Methodology of Science". For most journals the topic declines in frequency from roughly the 1950s to 2013. Specialty journals in the philosophy of science -- BJPS and Philosophy of Science -- show a generally higher frequency of "Methodology of Science" articles, but they too reveal a decline in frequency over that period. Does this suggest that the discipline of the philosophy of science declined in the second half of the twentieth century (not the impression most philosophers would have)? Or does it rather reflect the fact that the abstract level of analysis identified by the topic of "Methodology of Science" was replaced with more specific and concrete studies of certain areas of the sciences (biology, psychology, neuroscience, social science, chemistry)?

These results permit many other kinds of questions and discoveries. For example, in chapter 7 Weatherson distills the progression of topics across decades by listing the most popular five topics in each decade:



This table too presents intriguing patterns and interesting questions for further research. For example, from the 1930s through the 1980s a topic within the general field of the philosophy of science is in the list of the top five topics: methodology of science, verification, theories and realism. These topics fall off the list in the 1990s and 2000s. What does this imply -- if anything -- about the prominence or importance of the philosophy of science within Anglophone philosophy in the last several decades? Or as another example -- idealism is the top-ranked topic from the 1890s through the 1940s, only disappearing from the list in the 1960s. This is surprising because the standard narrative would say that idealism was vanquished within philosophy in the 1930s. And another interesting example -- ordinary language. Ordinary language is a topic on the top five list for every decade, and is the most popular topic from the 1950s through the present. And yet "ordinary language philosophy" would generally be thought to have arisen in the 1940s and declined permanently in the 1960s. Finally, topics in the field of ethics are scarce in these lists; "promises and imperatives" is the only clear example from the topics listed here, and this topic appears only in the 1960s and 1970s. That seems to imply that the fields of ethics and social-political philosophy were unimportant throughout this long sweep of time -- hard to reconcile with the impetus given to substantive ethical theory and theory of justice in the 1960s and 1970s. For that matter, the original list of 90 topics identified by the topic-modeling algorithm is surprisingly sparse when it comes to topics in ethics and political philosophy: 2.16 Value, 2.25 Moral Conscience, 2.31 Social Contract Theory, 2.33 Promises and Imperatives, 2.41 War, 2.49 Virtues, 2.53 Liberal Democracy, 2.53 Duties, 2.65 Egalitarianism, 2.70 Medical Ethics and Freud, 2.83 Population Ethics, 2.90 Norms. Where is "Justice" in the corpus?
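The decade-by-decade ranking in the table above is, in effect, a simple aggregation of the model's output. Here is a sketch of that step, assuming a hypothetical table with one row per article, its publication year, and its most probable topic (the column names are mine, not Weatherson's); his actual procedure may weight by topic probabilities rather than counting modal assignments.

```python
# Sketch of the aggregation behind a "most popular topics per decade" table.
# Assumes a DataFrame `articles` with hypothetical columns:
#   year  - publication year of the article
#   topic - the topic with the highest probability for that article
import pandas as pd

articles = pd.DataFrame({
    "year":  [1895, 1902, 1938, 1941, 1963, 1968, 1995],
    "topic": ["Idealism", "Idealism", "Verification", "Methodology of Science",
              "Ordinary Language", "Promises and Imperatives", "Norms"],
})

articles["decade"] = (articles["year"] // 10) * 10

# For each decade, count articles per topic and keep the five most frequent.
top5 = (
    articles.groupby("decade")["topic"]
    .apply(lambda s: s.value_counts().head(5).index.tolist())
)
print(top5)
```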

Above I described this project as a new approach to the history of philosophy (surely applicable as well to other fields such as art history, sociology, or literary criticism). But the modeling approach Weatherson pursues is clearly not a replacement for other conceptions of intellectual history; rather, it is a highly valuable new source of data and questions for historians of philosophy to address. And in fact this is how Weatherson treats the results of the work: not as a replacement but as a supplement and a source of new puzzles for expert historians of philosophy.

(There is an interesting parallel between this use of big data and the use of Ngrams, the tool Google created to map the frequency of occurrence of various words in books over the course of several centuries. Here are several earlier posts on the use of Ngrams: link, link. Gabriel Abend made use of this tool in his research on the history of business ethics in The Moral Background: An Inquiry into the History of Business Ethics. Here is a discussion of Abend's work: link. The topic-modeling approach is substantially more sophisticated, because it identifies clusters of terms that tend to occur together rather than reducing to simple word frequencies over time. As such it is a very significant and innovative contribution to the emerging field of "digital humanities" (link).)
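To make that contrast concrete, here is a minimal sketch -- using scikit-learn's LDA implementation in Python, rather than whatever software Weatherson actually used -- of how a topic's prevalence over time can be computed from document-topic weights rather than from raw word counts. The tiny corpus, the year labels, and the choice of five topics are placeholders; Weatherson's study fits ninety topics to the full archives of twelve journals.

```python
# Minimal sketch, not Weatherson's actual pipeline: estimate topic prevalence
# by decade from LDA document-topic weights. Corpus, years, and n_components
# are placeholders for illustration only.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "idealism and the absolute in metaphysics",
    "verification theory and the meaning of scientific statements",
    "methodology of science and theory confirmation",
    "ordinary language and the analysis of promises",
    "justice, equality, and the social contract",
    "norms, duties, and the foundations of ethics",
]
years = np.array([1925, 1935, 1955, 1960, 1975, 1995])

X = CountVectorizer(stop_words="english").fit_transform(documents)
lda = LatentDirichletAllocation(n_components=5, random_state=0)
doc_topics = lda.fit_transform(X)      # each row: one document's topic weights (sum to 1)

# Averaging topic weights within a decade yields a prevalence curve for each
# topic -- the quantity plotted in the graphs discussed above.
decades = (years // 10) * 10
for d in np.unique(decades):
    print(d, np.round(doc_topics[decades == d].mean(axis=0), 2))
```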

Wednesday, June 17, 2020

ABM models for the COVID-19 pandemic


In an earlier post I mentioned that agent-based models provide a substantially different way of approaching the problem of pandemic modeling. An agent-based model is a generative simulation of a process that unfolds incrementally through the behavior of discrete agents, so modeling an epidemic with this approach is a natural application.
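To fix ideas, here is a minimal agent-based SIR sketch in Python -- an illustration of the general approach, not a reproduction of the Manzo-van de Rijt model or any other published model; the population size, contact rate, transmission probability, and infectious period are arbitrary illustrative values. Note that in this toy version every agent has the same number of daily contacts; the work discussed below turns on relaxing exactly that assumption.

```python
# Minimal agent-based SIR sketch (illustrative parameters, not a published model).
import random

N, P_TRANSMIT, CONTACTS_PER_DAY, DAYS_INFECTIOUS = 10_000, 0.05, 10, 7

state = ["S"] * N                  # S = susceptible, I = infected, R = recovered
days_infected = [0] * N
for seed in random.sample(range(N), 10):   # seed the epidemic with 10 cases
    state[seed] = "I"

for day in range(150):
    currently_infected = [i for i in range(N) if state[i] == "I"]
    for i in currently_infected:
        # each infected agent meets the same fixed number of random others today
        for j in random.sample(range(N), CONTACTS_PER_DAY):
            if state[j] == "S" and random.random() < P_TRANSMIT:
                state[j] = "I"
        days_infected[i] += 1
        if days_infected[i] >= DAYS_INFECTIOUS:
            state[i] = "R"
    if day % 30 == 0:
        print(day, state.count("S"), state.count("I"), state.count("R"))
```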

In an important recent research effort Gianluca Manzo and Arnout van de Rijt have undertaken to provide an empirically calibrated ABM model of the pandemic in France that pays attention to the properties of the social networks found in France. (The draft paper is posted on ArXiv; link; they have updated the manuscript since posting.) They note that traditional approaches to the modeling of epidemic diseases often work on the basis of average population statistics, whereas diseases in fact travel through social networks, and individuals within a society differ substantially in the number of contacts they have in a typical day or week. This implies, intuitively, that the transmission of a disease through a population should be expected to be influenced by the social networks found within that population and by the variation that exists across individuals in the number of social contacts they have in a given time period. Manzo and van de Rijt believe that this feature of disease spread through a community is crucial to consider when attempting to model the progression of the disease. More importantly, they believe that attention to contact variation across a population suggests public health strategies that might succeed in reducing the spread of a disease at lower social and public cost.

Manzo offers a general framework for this approach in "Complex Social Networks are Missing in the Dominant COVID-19 Epidemic Models," published last month in Sociologica (link). Here is the abstract for this article:
In the COVID-19 crisis, compartmental models have been largely used to predict the macroscopic dynamics of infections and deaths and to assess different non-pharmaceutical interventions aimed to contain the microscopic dynamics of person-to-person contagions. Evidence shows that the predictions of these models are affected by high levels of uncertainty. However, the link between predictions and interventions is rarely questioned and a critical scrutiny of the dependency of interventions on model assumptions is missing in public debate. In this article, I have examined the building blocks of compartmental epidemic models so influential in the current crisis. A close look suggests that these models can only lead to one type of intervention, i.e., interventions that indifferently concern large subsets of the population or even the overall population. This is because they look at virus diffusion without modelling the topology of social interactions. Therefore, they cannot assess any targeted interventions that could surgically isolate specific individuals and/or cutting particular person-to-person transmission paths. If complex social networks are seriously considered, more sophisticated interventions can be explored that apply to specific categories or sets of individuals with expected collective benefits. In the last section of the article, I sketch a research agenda to promote a new generation of network-driven epidemic models. (31)
Manzo's central concern about what he calls compartmental models (SIR models) is that "the variants of SIR models used in the current crisis context address virus diffusion without modelling the topology of social interactions realistically" (33).
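For readers who have not seen one, here is a minimal sketch of a compartmental SIR model of the sort Manzo is criticizing: a single well-mixed population tracked only through aggregate counts, with no network structure at all, so that an intervention can only act uniformly (for example, by lowering the transmission rate for everyone). The parameter values are illustrative, not drawn from the paper.

```python
# Minimal discrete-time SIR (compartmental) model: aggregate counts only,
# homogeneous mixing, no topology of social interactions. Illustrative values.
N = 1_000_000            # population size
beta, gamma = 0.3, 0.1   # transmission rate and recovery rate (per day)
S, I, R = N - 100.0, 100.0, 0.0

for day in range(201):
    new_infections = beta * S * I / N   # every individual mixes identically
    new_recoveries = gamma * I
    S -= new_infections
    I += new_infections - new_recoveries
    R += new_recoveries
    if day % 40 == 0:
        print(f"day {day:3d}:  S={S:10.0f}  I={I:10.0f}  R={R:10.0f}")
```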

Manzo offers an interesting illustration of why a generic SIR model has trouble reproducing the dynamics of an infectious-disease epidemic by comparing the situation to the problem of traffic congestion:
It is as if we pretended realistically to model car flows at a country level, and potentially associated traffic jams, without also modelling the networks of streets, routes, and freeways. Could this type of models go beyond recommendations advising everyone not to use the car or allowing only specific fractions of the population to take the route at specific times and days? I suspect they could not. One may also anticipate that many drivers would be highly dissatisfied with such generic and undifferentiated instructions. SIR models currently in use put each of us in a similar situation. The lack of route infrastructure within my fictive traffic model corresponds to the absence of the structure of social interactions with dominant SIR models. (42)
The key innovation in the models constructed by Manzo and van de Rijt is the use of detailed data on contact patterns in France. They make highly pertinent use of a study of close-range contacts that was conducted in France in 2012 and published in 2015 (Béraud et al link). This study allows estimation of the frequency of contacts among French adults and children and of the extensive variation that exists across individuals. Here is a graph illustrating the dispersion in the number of contacts across individuals in the study:

This graph demonstrates the very wide variance across individuals in their number of contacts; and this variation is in turn highly relevant to the spread of an infectious disease.
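To give a quantitative feel for what such dispersion means, here is a small sketch comparing a homogeneous (Poisson) contact distribution with an overdispersed (negative binomial) one that has the same mean. The parameters are purely illustrative and are not estimates from the COMES-F survey; the point is simply that with a heavy-tailed distribution, a small share of individuals accounts for a large share of all contacts.

```python
# Illustrative comparison of a homogeneous and an overdispersed contact
# distribution with the same mean (parameters are not from the COMES-F data).
import numpy as np

rng = np.random.default_rng(0)
mean_contacts = 8

poisson = rng.poisson(mean_contacts, size=100_000)
# negative binomial parameterized to have the same mean but a heavy tail
negbin = rng.negative_binomial(n=1.5, p=1.5 / (1.5 + mean_contacts), size=100_000)

for name, sample in [("Poisson", poisson), ("Negative binomial", negbin)]:
    top10_share = np.sort(sample)[-len(sample) // 10 :].sum() / sample.sum()
    print(f"{name}: mean={sample.mean():.1f}, var={sample.var():.1f}, "
          f"share of all contacts held by top 10% = {top10_share:.0%}")
```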

Manzo and van de Rijt make use of the data provided in this COMES-F study to calibrate their agent-based model of the diffusion of the disease empirically, and to estimate the effects of several different strategies designed to slow the spread of the disease following relaxation of extreme social-distancing measures.

The most important takeaway from this article is the strategy it suggests for managing the reopening of social interaction after the peak of the epidemic. Frequency of close contact is key to transmission, and these models show that a small number of individuals have a disproportionate effect on the spread of an infectious disease because of the high number of contacts they have. Manzo and van de Rijt ask a hypothetical question: are there strategies for managing an epidemic that work by selecting a relatively small number of individuals for immunization? (Immunization might take the form of an effective but scarce vaccine, or it might take the form of testing, isolation, and intensive contact tracing.) But how would it be possible to identify the "high contact" individuals? M&R consider two targeting strategies, along with a random baseline, and represent all three within their base model of the epidemic. Both targeted strategies show a dramatic reduction in the number of infected individuals over time.

The baseline strategy, "NO-TARGET", is one in which a certain number of individuals are chosen at random for immunization, and then the process of infection plays out. The "CONTACT-TARGET" strategy selects the same number of individuals for immunization, but through a process that makes it more likely that the selected individuals will have a higher-than-average number of contacts. The way this is done is to select a random group of individuals from the population and then ask each of them to nominate one of their contacts for immunization. It is demonstrable that this procedure arrives at a group of individuals for immunization who have higher-than-average numbers of contacts -- the so-called "friendship paradox". The third strategy, "HUB-TARGET", selects the same number of individuals for treatment from occupations that involve high levels of contact.
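Here is a minimal sketch of why the nomination procedure works -- the friendship paradox in action on an illustrative scale-free network generated with networkx, not on the French contact data or the authors' calibrated model: the nominated contacts of randomly chosen individuals have, on average, considerably more contacts than the randomly chosen individuals themselves.

```python
# Sketch of the "friendship paradox" behind the CONTACT-TARGET idea: contacts
# nominated by randomly chosen people tend to have more contacts than people
# chosen at random. The network is an illustrative scale-free graph, not the
# empirical French contact network.
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(10_000, 3, seed=0)   # heavy-tailed degree distribution

random_people = random.sample(list(G.nodes), 500)
nominated = [random.choice(list(G.neighbors(p))) for p in random_people]

avg_degree = lambda nodes: sum(G.degree(n) for n in nodes) / len(nodes)
print("average contacts, random sample:   ", round(avg_degree(random_people), 1))
print("average contacts, nominated sample:", round(avg_degree(nominated), 1))
```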

The simulation is run multiple times for each of the three treatment strategies, using four different "budgets" that determine the number of individuals to be treated in each scenario. The results are presented here, and they are dramatic. Both contact-sensitive treatment strategies result in a substantial reduction in the total number of individuals infected over the course of 50, 100, and 150 days. And this in turn translates into a substantial reduction in the number of ICU beds required under each strategy.


Here is how Manzo and van de Rijt summarize their findings:
As countries exit the Covid-19 lockdown many have limited capacity to prevent flare-ups of the coronavirus. With medical, technological, and financial resources to prevent infection of only a fraction of its population, which individuals should countries target for testing and tracking? Together, our results suggest that targeting individuals characterized by high frequencies of short-range contacts dramatically improves the effectiveness of interventions. An additional known advantage of targeting hubs with medical testing specifically is that they serve as an early-warning device that can detect impending or unfolding outbreaks (Christakis & Fowler 2010; Kitsak et al. 2010).
This conclusion is reached by moving away from the standard compartmental models that rely on random mixing assumptions toward a network-based modeling framework that can accommodate person-to-person differences in infection risks stemming from differential connectedness. The framework allows us to model rather than average out the high variability of close-contact frequencies across individuals observed in contact survey data. Simulation results show that consideration of realistic close-contact distributions with high skew strongly impacts the expected impact of targeted versus general interventions, in favor of the former.
If these simulation results are indeed descriptive of the dynamics of the spread of this disease through a population of socially connected people, then the research provides an important hint about how public health authorities could effectively manage disease spread in a post-COVID world without recourse to the complete shut-down of economic and social life that was necessary in many parts of the world in the first half of 2020.

* * *


Here is a very interesting set of simulations of the spread of infectious disease by Grant Sanderson on YouTube (link). The video is presented with truly fantastic graphics, allowing sophisticated visualization of the dynamics of the disease under different population assumptions. Sanderson doesn't explain the nature of the simulation, but it appears to be an agent-based model with parameters representing the probability of infection through proximity. It is very interesting to look at this simulation through the eyes of the Manzo-van de Rijt critique: the model ignores exactly the factor that Manzo and van de Rijt take to be crucial -- differences across agents in number of contacts and in the networks and hubs through which agents interact. This is reflected in the fact that every agent moves randomly across space and every agent has the same average probability of passing on infection to those he/she encounters.