
Friday, February 24, 2012

SSHA Call for Papers

Call for Papers: 37th Annual Meeting of the Social Science History Association

Vancouver, British Columbia, 1-4 November, 2012
Submission Deadline: 1 March 2012

"Histories of Capitalism"

CONSIDER SUBMITTING A PANEL OR SESSION TO THE MACRO-HISTORICAL DYNAMICS NETWORK.

The 2012 Program Committee seeks panel proposals that focus on Histories of Capitalism. But it also encourages, as usual, papers and panels on all aspects of social science history.


Dramatic developments in the contemporary world – including the current world economic crisis, the rapid economic growth of China, the shocking rise of income inequality in the United States, and the looming danger of climate change – argue strongly for putting the history of capitalism at the center of our agenda in social science history. These contemporary developments point to capitalism’s enduring enigma: it promises the utopian possibility of overcoming material want but creates barriers, inequalities, and dystopian disasters en route.

Features or aspects of capitalism often figure as causes or effects in studies of a wide range of topics close to the heart of social science historians: urbanization, labor struggles, cultural change, the demographic transition, gender and racial inequalities, migration, agrarian movements, or economic growth, to cite a few key examples. Yet capitalism usually figures as a context – either avowed or unavowed – of the phenomena we are attempting to grasp. Only occasionally do we reflect explicitly about the specific dynamics of capitalism as an evolving system or about how these dynamics shape possibilities for social and political action.

As the plural ‘histories’ in our theme’s title affirms, there are various kinds of histories of capitalism: macro and micro histories; Marxian, neo-classical, Weberian, Schumpeterian, Polanyian, and neo-institutionalist histories; cultural, economic, political, and social histories; histories informed by anthropology, political science, literature, geography, economics, sociology, philosophy, and of course history itself; histories of capitalism’s fundamental movements and of its manifold effects. Perhaps new histories will emerge at these meetings…

The Social Science History Association, with its rich tradition of interdisciplinary research, is an ideal forum for exploring all aspects of the history of capitalism both as an enduring intellectual problem and as a burning issue of contemporary politics and culture.

How Do I Participate in the 2012 SSHA Program?

Starting in December 2011, proposals for individual papers and complete sessions will be accepted at http://ssha.org, which provides instructions for submission. The deadline is 1 March 2012; we prefer the submission of complete sessions. If you want to organize a session, we recommend that you first contact a network representative. Network representatives – who are open to all possibilities – screen all papers and panels in their areas. (Current networks, with their representatives' e-mail and web addresses, are listed on the SSHA website.) If you are not certain which network your paper proposal best fits, just ask the representatives of the networks closest to your interests.

SSHA will continue to make competitive grants for graduate student travel, now with additional help from the Charles and Louise Tilly Fund for Social Science History, which also supports a graduate student paper prize.

SSHA President for 2011-12
William H. Sewell, Jr., University of Chicago, wsewell@uchicago.edu

Program Committee Co-Chairs for the 2012 Conference:
Tessie Liu, Northwestern University (History), t-liu@northwestern.edu
David Pedersen, University of California San Diego (Anthropology), dpedersen@ucsd.edu
Dan Slater, University of Chicago (Political Science), slater@uchicago.edu

Tuesday, November 22, 2011

New tools for digital humanities


One of the innovative papers I heard at the SSHA last week was a presentation by Harvard graduate student Ian Miller titled "Reading 500 Years of Chinese History at Once". (In the end Ian apologized for only getting to the last 188 years of the Qing Dynasty.) I won't mention the details, since Ian hasn't yet published any of this work. But it was a genuinely fascinating exploration of emerging tools in the "digital humanities," applying topic analysis to a 188-year series of Imperial memoranda. Ian's goal was to identify spikes of interest in topics such as rebels and bandits.  (Here are a couple of interesting pages on digital humanities; link, link.)

The basic insight that is leading to new research in digital humanities is the fact that vast quantities of texts are now available for quantitative analysis. Humanists typically work with texts, and up till now their approaches have largely taken the form of close readings and semantic interpretations. Now that much of the published corpus of humanity is available in digital form thanks to the Google Books project, and now that many archives are steadily moving their ephemera to digital versions as well, it is possible for humanities researchers to broaden their toolkit and look for patterns among these published and unpublished texts. Google's NGrams tool allows all of us to do some of this kind of work (link, link), but more specialized tools for statistical analysis and presentation are needed if we are to go beyond compiling changing frequencies of specific terms.
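To make the core of an ngram-style tool concrete, here is a minimal sketch in Python of relative term frequency by year. The tiny corpus is invented for illustration; a real analysis would of course run over millions of digitized books.

```python
# Sketch: relative frequency of a term by year -- the basic computation
# behind an ngram viewer. The corpus here is an invented toy example.
from collections import defaultdict

corpus = [
    (1900, "the red coat and the blue sea"),
    (1900, "a red dawn over the valley"),
    (1950, "the blue machine hummed in the blue night"),
]

totals = defaultdict(int)  # total word count per year
hits = defaultdict(int)    # occurrences of the target term per year
for year, text in corpus:
    words = text.split()
    totals[year] += len(words)
    hits[year] += words.count("blue")

# Relative frequency of "blue" in each year's slice of the corpus
freq = {y: hits[y] / totals[y] for y in sorted(totals)}
print(freq)
```

The point is only that frequency compilation is simple; the interesting methodological work begins when we move beyond it.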

Statistical techniques for discovering "topics" in documents represent a crucial step forward in this endeavor. As Nelson Goodman noted in a pre-digital time, knowing what a text is "about" requires more than simply knowing what words are included in the document in what frequencies (Problems and Projects). We might have said, at that point in the 1960s, that what we need beyond the syntax and the list of terms is "understanding", an irreplaceably human capability. But a central task for web-based search arises from exactly this issue, and a great deal of research has attempted to discover the "topics" that are central in a given document without invoking a human reader. And surprisingly enough, real progress has been made. This progress is at the heart of the digital humanities.  The fundamental problem is this: are there statistical methods that can analyze the frequency of the words included in a given document to provide a compressed analysis of the "topics" it contains?  We might then say that this compressed representation is a good approximation to what the document is "about".

A theoretical advance, and corresponding set of tools, frequently invoked in research projects in this field is "latent Dirichlet allocation" (LDA), a statistical technique for using word frequencies in a document to sort out a smaller set of topics.  David Blei, Andrew Ng, and Michael Jordan introduced the idea in 2003 (link).  (There is a detailed and technical description of the model in Wikipedia; link.)  They indicate that this method is similar to algorithms based on "latent semantic indexing".  Here is how Blei, Ng, and Jordan describe the approach in the abstract to this paper:
We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.
And here is their statement of the goal of LDA analysis:
The goal is to find short descriptions of the members of a collection that enable efficient processing of large collections while preserving the essential statistical relationships that are useful for basic tasks such as classification, novelty detection, summarization, and similarity and relevance judgments. (993)
And here is a summary assessment of the effectiveness of the LDA representation of a set of documents relative to a less compressive representation:
We see that there is little reduction in classification performance in using the LDA-based features; indeed, in almost all cases the performance is improved with the LDA features. Although these results need further substantiation, they suggest that the topic-based representation provided by LDA may be useful as a fast filtering algorithm for feature selection in text classification. (1013)
Here is a table they provide illustrating the kind of topic analysis that this statistical methodology creates:


In some ways the type of application that Ian Miller is making of these tools seems ideal.  This kind of statistical methodology can be applied to very large databases of historical texts in order to discover patterns of which the authors of those texts would have been entirely unaware.  So methods like LDA seem well designed to uncover historically shifting patterns of topic emphasis by observers and policy makers over time and space.

This is just a first cut for me on the kind of reasoning and statistical analysis that information theorists are employing to do semantic analysis of documents, and I certainly don't have a good understanding of how this works in detail.  The power of these frameworks seems very great, though, and well worth studying in greater detail by historians and humanists.

Wednesday, August 24, 2011

Small cities

A recent post on the suburbs closed with the observation that there is an important "other" social space in the United States beyond the categories of urban, rural, and suburban.  These are the small cities throughout the United States where a significant number of people come to maturity and develop their families and careers.  I speculated that perhaps there is a distinctive sociology associated with these lesser urban places.  Here I will look into this question a bit more fully.

There are about 275 cities in the US with populations of 100,000 or more (Wikipedia link).  201 of these cities are small, with populations between 100,000 and 250,000.  There are 30.3 million people living in these cities -- about 10% of the US population.  A certain number of these cities fall within the metropolitan areas of larger cities, but a significant number are at least 50 miles from a major city.

Here is a map of 200 cities with populations between 100,000 and 250,000:



And here is a map of 25 cities with population greater than 500,000 (red) and 48 cities with population between 250,000 and 500,000 (green):



Google Maps limits the number of objects that can be placed on a map to 200 items, so it isn't possible to overlay these maps using Google Maps.  Google Earth does not have this limitation, and all these points are included on the Google Earth version of the map. Here is what the overlay looks like:


And here is a map of the Metropolitan Statistical Areas in the US in 1999. Wikipedia provides an up-to-date list of the MSAs in the US (link). (Many of the small cities actually constitute an MSA of their own; so determining whether a small city is "metropolitan" really involves the question of whether the place falls within one of the top 25-50 MSAs by population.)


The group of cities I'm interested in here is a subset of the cities on the first map: those that are more than 50 miles from one of the top 25 cities on the second map.  This still leaves well over 100 cities in the United States with a couple of interesting characteristics: they are relatively small, so they can be expected to lack a number of higher-level functions and industries; and they are relatively isolated from other larger cities, so their populations are extensively dependent on the resources of the city itself for employment, social services, entertainment, consumption, education, etc.
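The distance filter described here is easy to sketch computationally. Below is a minimal Python illustration using the haversine great-circle formula, with a few hypothetical coordinates standing in for the full city lists; the actual analysis would run over the complete Wikipedia-derived spreadsheets.

```python
# Sketch: filter small cities by distance from major cities.
# The city coordinates below are illustrative sample values only.
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 3959 * 2 * asin(sqrt(a))  # Earth radius ~3959 miles

# (name, lat, lon) -- hypothetical stand-ins for the top-25 list
major_cities = [("Chicago", 41.88, -87.63), ("Denver", 39.74, -104.99)]
# -- and for the small-city list
small_cities = [("Peoria", 40.69, -89.59),
                ("Aurora", 41.76, -88.32),
                ("Pueblo", 38.25, -104.61)]

# Keep only small cities more than 50 miles from every major city
isolated = [
    name for name, lat, lon in small_cities
    if all(haversine_miles(lat, lon, mlat, mlon) > 50
           for _, mlat, mlon in major_cities)
]
print(isolated)
```

With these sample coordinates, Aurora falls inside Chicago's 50-mile radius and drops out, while Peoria and Pueblo remain as "isolated" small cities.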

So the takeaway question here is this: what is life like in Billings MT, Topeka KS, Norman OK, Pueblo CO, Springfield IL, Knoxville TN, Cary NC, Green Bay WI, Grand Rapids MI, Allentown PA, Shreveport LA, and Killeen TX?  What is it like to grow up in these places?  Where do young people go for post-secondary education?  What percentage of young people leave these places permanently in the course of their careers?  Where do the elected officials in these places come from? How are these cities doing, from the perspective of unemployment, neighborhood and business district decline, and social problems?

Further, we can ask whether there are any structural features in common that imply that these places are more similar to each other than they are to larger cities or smaller towns.  Are issues of immigration, race relations, drug use, teen pregnancy, or high school dropout rates different in these places?

Finally, we can ask whether growing up in these places gives rise to a specific mentality.  Do those of us who grew up in small cities like these -- Peoria, Rock Island, Springfield -- have a different set of values, a different way of looking at the world, or perhaps different ways of relating to people in ordinary social life?  Or are regional differences (south, midwest, Pacific Coast) more of a determinant of one's mentality?

(I've placed the lists of cities and MSAs I've used here as spreadsheets at Google Docs; link, link. Both lists come from Wikipedia entries on US Cities and Metropolitan Statistical Areas.)

Sunday, May 8, 2011

Flood courses of the Mississippi River


This fantastic map of the historical twists and turns of the Mississippi River near Cairo, Illinois, was drawn in 1944.  It is reproduced in the New York Times today (link).  In an age of digitally produced information displays, it is fascinating to see the density of historical information represented in this hand-drafted map.  It is reminiscent of the maps Edward Tufte highlights in The Visual Display of Quantitative Information.  Here is Charles Joseph Minard's 1869 map of Napoleon's invasion of Russia that Tufte made famous:


While on the subject of great maps, here is one by George Abel Schreiner in 1924, representing the structure of the world's telegraph cable system (link).


Here is a contemporary graphic representing global Internet flow:


And here is a graph of global cities connections, produced by R. Wall and B. v.d. Knaap in "Sustainability within a World City Network" (link).


What these images have in common is a very simple point: the power of graphical representation to capture complex sets of inter-related data.

Thursday, May 5, 2011

The drop-out crisis (II)

We've talked about "wicked problems" before -- problems that involve complex social processes, multiple actors, and murky causal pathways (link, link). A particularly important example of such a problem currently confronting the United States is the high school dropout crisis. The crisis is particularly intense in high-poverty areas, but it is found in all states and all parts of urban, suburban, and rural America. (Here is an earlier discussion of these issues; link.)

The consequences of this crisis are severe. More than a million students drop out of high school each year. Over 50% of these dropouts come from fewer than 20% of high schools. These young people have virtually no feasible pathways to a middle class life or a job in the 21st-century economy. This in turn means a permanent underclass of unemployed or underemployed young people, with consequences for crime rates, social service budgets, incarceration rates, and a serious productivity gap for our economy as a whole. So the problem is an enormously important one. (The Alliance for Excellent Education is a national organization devoted to tracking this issue; link. Another important resource is Building a Grad Nation (link) from the America's Promise Alliance.)

Changing this current situation requires change of behavior on the parts of many independent parties -- teenagers, parents, teachers, principals, elected officials, and foundation officers, to name only some of the most obvious participants.

There are many social actors who have an interest in this problem and a commitment to trying to resolve it. Teachers, principals, school boards; mayors and governors; non-profit organizations; foundations; universities and schools of education; citizens' groups -- there are committed and concerned actors throughout the country that are highly motivated to attempt to solve the problem.

But it is very, very hard to marshal these actors into effective attacks on the causes of this crisis. One part of the problem is strategic -- what are the interventions that can work on a large scale? How can a school system introduce changes in behavior and organization that really change the outcomes in a measurable way?

Another part of the problem is a coordination problem. How can we succeed in gaining commitment and cooperation across this range of actors, even if we have some credible strategies at hand? It often seems that every actor has a different theory of the problem, and often it is difficult to gain concerted action across diverse actors. A foundation has one strategy; a school board has a different theory; and the teachers themselves work on the basis of a different understanding of the problem as well. All are well motivated; but there is a clash of efforts.

In this context the Diplomas Now initiative is particularly encouraging. It is a national initiative to target the "dropout factories" through a clear theory of how to create turn-around schools. It is referred to as a civic Marshall Plan. It is based on careful empirical research. It has developed a clear theory of how interventions with children through the schools can affect persistence through graduation. It has mobilized a strategic group of partners -- CityYear, Communities in Schools, and Talent Development at Johns Hopkins. And it has an ambitious and effective national strategy that is already being implemented.

And the most impressive fact is that Diplomas Now is beginning to work. There are DN schools in some of the toughest urban contexts in America; these schools are showing real measurable progress; and the example is spreading to other cities and systems. Concrete evidence of these successes is highlighted by a wide range of committed leaders, academics, and corps members at the CityYear National Leadership Conference in Washington (link).

So maybe we can have some cautious optimism that our wicked problems can be solved, with sufficient commitment and persistence from a range of actors.

Monday, May 2, 2011

Causality and Explanation Conference Call for proposals



CALL FOR ABSTRACTS:
CaEitS2011: CAUSALITY AND EXPLANATION IN THE SCIENCES
19-21 September
Faculty of Arts and Philosophy, Ghent University
Blandijnberg 2, Ghent, Belgium


This is the sixth conference in the Causality in the Sciences series of conferences.  
Organizers: Phyllis McKay Illari, Federica Russo, Jon Williamson, Erik Weber, Julian Reiss


KEYNOTE SPEAKERS

Henk de Regt, Daniel Little, Michael Strevens, Mauricio Suarez and James Woodward.

INTRODUCTION

Causality and causal inference play a central role in the sciences. Explanation is one of the central goals of scientific research. And scientific explanation requires causal knowledge. At least, these are well-known tenets in present-day philosophy of science.

In this conference, we aim to bring philosophers and scientists together to discuss the relation between causality and explanation.

Even though the view that explanation requires causal knowledge is widespread, some accounts of explanation present themselves as a-causal or even as non-causal. Kitcher’s unificationism had it that causal relations are epistemically dependent on explanatory relations, not vice versa. In the mechanistic framework, interlevel explanation is said to be constitutive, not causal. Other accounts of explanation are primarily functional. What is the precise relation between causal and a- or non-causal accounts of explanation?

Relatedly, one of the close relatives of explanation is understanding. But what is the precise relation between explanation and understanding? And what is the role of causation herein?

But wait a minute. There is no consensus as to what causation is. Probabilistic, mechanistic, interventionist, and other accounts are available on the market and it is still an interesting and open question how precisely they relate to each other and how this bears upon the problem of scientific explanation.

Are causality and explanation the same across scientific disciplines? Is causality in physics the same as in psychology? Is causal discovery in biology the same as in economics? And is explanation in geology the same as in chemistry? Mathematics seems to be devoid of causation. Does that mean that it is also devoid of explanation? And is there a place for causation in technological explanation?

Our explanatory practices are partly determined by pragmatic considerations. What precisely do we want to explain, and what do we want to use our explanatory knowledge for? Do these pragmatic considerations influence our search for causal relations? Do they play a role, either implicitly or explicitly, in our algorithms for automated causal discovery (such as algorithms based on causal Bayes nets)?

We welcome contributions addressing these and other questions.

EXAMPLE QUESTIONS
  • How is causality related to explanation? Is all explanation causal?
  • Which accounts of causality best fit which accounts of explanation?
  • Do different sciences demand different notions of causality and explanation?
  • Which case studies shed most light on the uses of causality and explanation in the sciences?
TIMETABLE

15 May: deadline for submission of titles and abstracts of papers for presentation
  • 500 words
  • To be emailed to CaEitS2011@UGent.be
  • Please write "ABSTRACT SUBMISSION" in the Subject header of your mail
  • A notification of receipt will be sent shortly after 15 May
  • All abstracts will be carefully refereed
15 June: notification of acceptance of papers for presentation.
1 July: deadline for registration to attend the conference
  • Instructions for registration will be listed on www.caeits2011.ugent.be in due course
  • Please also send an email to CaEitS2011@UGent.be to say that you will be attending
    (with "REGISTRATION" in its Subject header)
19-21 September: conference

Wednesday, December 22, 2010

Ngram anomalies

Now that I've played with the Google Ngrams tool a little, I continue to think it's a powerful window into a lot of interesting questions. But I also see that some of the patterns that emerge are plainly spurious, and surely do not correspond to real changes in language, culture, or collective interest over time. It is easy to find search terms whose results plainly indicate some kind of "instrument error" -- an observation that emerges as an artifact of the method rather than a real pattern in the underlying behavior.

Fortunately it is possible to probe these areas of anomaly with the goal of figuring out what they mean. So let's see what happens when we pick out a set of common words that are not freighted with a lot of culturally specific significance. This will let us see more clearly how the instrument itself works.

Consider the color words red, green, yellow, blue, black. Let's graph the frequency of these terms in American English from 1800 to 2000. Before looking at the Ngram graph, let's consider what we would expect ex ante. Color words occur in books to designate -- color. Color terms are common words, so we might expect that they would remain fairly constant in frequency over time. So here is the null hypothesis about the frequency of common color terms: without a change in culture about color, we should expect the color words would remain roughly constant in frequency (flat curve). And the usage patterns for each term should be independent from the others. So we should expect a degree of independent random fluctuations in the frequencies of each color word, where "blue" bumps up in frequency in a given year and "red" bumps down.

(Why should we expect a degree of independence in the random fluctuations between "red" and "blue"? Because, fundamentally, there is no common mechanism that would link their behavior.)

Here are some ways in which the actual behavior of color terms might deviate from the null hypothesis.  Some colors may be more in style than others at a given time -- there may be a cultural preference for red over blue, so the frequency of red may be greater than the frequency of blue. And the frequencies may change as cultural preferences change; so blue may become more frequent than red in a later generation. More generally, literary taste may change by becoming more descriptive over time -- with more frequent use of color terms -- or more formal, with less use of color terms. So it would be possible to explain persistent differences in frequency of color terms; shifting frequencies across different color words; and even a long-term rise or decline in the whole family of color words.

So ex ante, for this group of common color words we would expect a graph of flat lines for the five terms, with uncorrelated fluctuations in each line.

Now let's look at the actual graph of these word frequencies (link).


Here we can see behavior that flatly contradicts these reasonable ex ante expectations. First, there are stretches of time in which the color words covary extremely closely, to the extent that the graphs look identical in shape. This is true, for example, in the neighborhood of 1820. This is impossible to explain as anything other than an artifact of some sort. It is impossible to believe that the frequencies of several color words would fluctuate up and down with this degree of synchrony.

Here is another aspect of the graph that is also suggestive of artifact: the long wave of rise and fall in the frequency of all the color words between 1810 and 1920. It is not impossible that "color" became more important in literary language and then declined; but that seems improbable. So this long wave coordinated behavior of the color words seems to be more likely the effect of a database anomaly than a manifestation of a real trend.

Is there any reliable information in this graph?  Yes.  There is one feature of this graph that appears to have real significance, and that is the change in the behavior of "black" after 1960. Prior to that year the term behaves pretty much like all the other color words. After that year it takes off on a very different trajectory. And this abrupt and accelerating increase in the frequency of "black" seems to have everything to do with a real social and cultural change in the 1960s and forward -- the abrupt increase in those decades in the salience of race. There is a similar divergence between the behavior of "black" and all the other color words in the 1860s; the frequency of the word increases for a few years following the American Civil War.

More tantalizingly, it may be significant that "blue" moves up from "yellow" to "green" in frequency over time.  This is one element of the graph where the terms are not correlated with each other; instead, "blue" changes its position relative to other color frequencies.

This example shows that we need to be careful about the inferences we draw from the patterns that appear from Ngram searches. We need to always ask: "Does this pattern really correspond to a fact about underlying collective linguistic behavior, or is it the result of an artifact?" More fundamentally, we need to understand the sources of the artifacts we are able to detect -- spurious correlations, inexplicable long-wave changes in frequency, and others still to be discovered. And, finally, we should seek out techniques that can be applied to the results that serve to filter out the artifacts and focus on the real variations the data contain. We need some signal processing here to separate signal from noise. The Ngram tool is powerful, but we need to use it critically and intelligently.
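One simple filtering idea along these lines can be sketched with synthetic data: divide each word's frequency by the group mean across all the color words in each year. Any "long wave" artifact shared by the whole family cancels out, while a genuine divergence, like that of "black" after 1960, survives. The data below are fabricated to mimic the pattern described above, not taken from the Ngrams corpus.

```python
# Sketch: remove a common-mode "long wave" artifact by normalizing each
# word's frequency series against the group mean per year (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1800, 2001)
wave = 1.0 + 0.5 * np.sin((years - 1810) / 35.0)  # shared artifact

# Four "ordinary" color words ride the wave with small independent noise
series = {c: wave * (1 + 0.02 * rng.standard_normal(len(years)))
          for c in ["red", "green", "yellow", "blue"]}

# "black" rides the same wave but takes off after 1960 -- the real signal
series["black"] = wave * (1 + 0.02 * rng.standard_normal(len(years)))
series["black"][years >= 1960] *= np.linspace(1, 3, (years >= 1960).sum())

# Normalize each series by the group mean in each year
group_mean = np.mean(list(series.values()), axis=0)
detrended = {c: s / group_mean for c, s in series.items()}

# Ordinary words now hover near 1.0 throughout; "black" diverges post-1960
print(round(float(detrended["red"][years == 1990][0]), 2))
print(round(float(detrended["black"][years == 1990][0]), 2))
```

This is only one crude form of "signal processing"; its assumption is that artifacts are common to the whole word family while real cultural signals are word-specific.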

Wednesday, November 3, 2010

Three years of UnderstandingSociety


Today marks the end of the third year of publication of UnderstandingSociety. This is the 481st posting, with prior posts covering a range of themes from "social ontology" to "foundations of the social sciences" to "globalization and economic development." In beginning this effort in 2007 I had envisioned something different from the kinds of blogs that were in circulation at the time -- something more like a dynamic, open-ended book manuscript than a topical series of observations. And now, approaching 500,000 words, I feel that this is exactly what the blog has become -- a dynamic web-based monograph on the philosophy of society. It is possible to navigate the document in a variety of ways -- following key words, choosing themes and "chapters", or reading chronologically. And it is also possible to download a full PDF copy of the document up through July, 2010; this will be updated in January 2011.

I find that the discipline of writing the blog has led me into ideas and debates that I would not have encountered otherwise. For example, the thread of postings on "world sociology" and the epistemologies and content of sociology in China, France, or Mexico opens up a new set of perspectives for me on the social context of the disciplines of sociology. Thanks to Gabriel Abend, Marion Fourcade, Céline Béraud and Baptiste Coulmont for a range of stimulating ideas on this subject. (These discussions can be located under the "disciplines" and "sociology" tags.)

I've also been pleased at the way that the social ontology topic has unfolded. The ideas of plasticity, heterogeneity, and contingency as fundamental features of social entities are crucial when we try to understand why social entities and processes are different from natural entities. They give a basis for understanding that the distinction between natural kinds and social kinds is a crucial one. And I don't think I would have come to the particular formulations found here without the working canvas provided by the blog. (This thread falls under the ontology theme.)

And, of course, I've been led into a number of discussion areas that I wouldn't have anticipated: Michigan's economic crisis, practical strategies of stimulating regional economic development, analysis of the schooling crisis our major cities face, and current developments in China, for example.

The writing process here is quite different from that involved in more traditional academic writing. When a philosopher starts out to write a philosophical essay for publication, he/she plans to spend many days and weeks formulating a line of argument, crafting the prose, and critically revising until it is perfect. Likewise, writing a traditional academic book involves coming up with a "story-line" of topics and key arguments, turning that into a chapter outline, and methodically drafting out the full manuscript. Creative planning, writing, and editing occupy months or years before the ideas come into public view.

Writing an academic blog has a different structure. It is a question of doing serious thinking, one idea at a time. Each post represents its own moment of thought and development, without the immediate need to fit into a larger architecture of argument. Eventually there emerges a kind of continuity and coherence out of a series of posts; but the writing process doesn't force sequence and cumulativeness. Instead, coherence begins to emerge over time through recurring threads of thinking and writing.

I've done each of these other forms of traditional academic writing -- dissertation, conference presentation, journal article, book, and book review -- and I find this current form particularly valuable and intellectually rewarding. It stimulates creativity, it leads to new insights, it permits rigor in its own way, and it leads to new discoveries within the vast literatures relevant to "understanding society" and the space of questions this domain presents.

Writing the blog is intertwined with the web in several deep ways that are worth calling out. First, of course, is the availability of a venue for publication and the possibility of gaining a world-wide readership. The web and the search engines made this kind of readership possible, and this was a unique new capability entirely absent in the world of print. But equally important for me has been the ubiquitous availability of knowledge and writing on the web, and the ease with which we can access this knowledge through search engines and other web-based tools. This means that the web-based scholar can quickly discover other materials relevant to the current topic, leading to unexpected turns in the argument. It means that the web-based scholar is not locked into the circle of his/her own study and the literatures that he/she has already mastered; rather, there is an open-ended likelihood of interaction with new and important ideas previously not part of the mix. So the task of writing is no longer that of formulating one's pre-existing ideas; it is simultaneously an act of inquiry and intellectual discovery. (For me a good example of this is the interest I've developed in the theory of assemblage and the writings of Deleuze, Delanda, and Latour. These ideas fall far outside my own analytic philosophy comfort zone, and yet they align well with my own thinking about micro-foundations and social contingency.)

I invite readers to take this opportunity to make suggestions or observations about this experiment in academic writing. Are there topics you'd like to see addressed in future postings? Do you have suggestions for how to make the presentation of the content more usable? Can you see possibilities for this kind of inquiry and writing in your own field?

Saturday, October 30, 2010

China's confidence

Traveling in China for the past two weeks has given me a different perspective on the country.  The most powerful impression I've had is one of collective national confidence; the sense that China is on the move, that the country is making rapid progress on many fronts, and that China is setting its own course.  We've known for twenty years about the unprecedented rate of economic development and growth in China since the fundamental reforms of the economy in the 1980s.  China's manufacturing capacity is also well known throughout the world.  But the story is bigger than that.  What is perhaps not so well understood outside the country is the scope and purposiveness of the development plans the country is pursuing.  

One aspect of this is the breadth of forms of capacity building that the country is investing in. The nation is making long-term investments in a range of fundamental areas aimed at providing a foundation for long-term, sustained evolution.  Transportation is one good example.  The extension of high-speed rail among China's important cities indicates a good understanding of the importance of economic integration and mobility for future innovation and growth.  But this high-speed rail system indicates something else as well: China's readiness to successfully design and build the most sophisticated engineering and technology projects on a large scale.  The high-speed train between Hangzhou and Shanghai opened last week, with a sustained speed in excess of 350 km/hour; this brings the travel time down from 78 minutes to 45 minutes over the distance of 202 kilometers.  Similar service will be completed between Beijing and Shanghai, providing 5-hour service between these key cities.  So China will soon be leading the world in high-speed rail.

Higher education is another great example.  The universities in and around Shanghai have built whole new campuses in the past ten years, reflecting a local and national commitment to improvement of the high-end talent base in the country.  Universities in Beijing, Guangdong, Hangzhou, and Suzhou are making rapid and focused plans to enhance the quality of their faculties and the effectiveness of their curricula -- especially in the areas of mathematics, science, and engineering.  My visit to the Chinese Academy of Fine Arts in Hangzhou was a great example of this dynamism.  There I saw many bright, talented students from all across China studying the fine arts, design, and multimedia on a beautiful urban campus serving 9,000 students. The student work is very good, and it gives a sense of the creative potential invested in the current generation.

A more intangible aspect of China's current confidence comes from a long series of conversations with Chinese faculty, graduate students, and undergraduate students.  There is a real pride in China's cultural heritage -- new friends in Hangzhou and Suzhou were eager to explain the meaning of ceramics, paintings, and gardens in terms of the Chinese value systems they represent.  And there is a sense of purpose and direction in many of these conversations -- as if people in their 60s and people in their 20s alike have absorbed China's history and its half-century of turbulence, and are now looking forward to consolidation and enhancement of the cultural and economic power of their country. There also appears to be a deep underlying fear of turbulence; the people we met want to see stable, continuous progress. There was sympathy for Liu Xiaobo, but not much appetite for radical changes in rights and liberties. "China needs to maintain stability."

This sense of confidence is accompanied by a lack of "Western envy."  There is very little sense that any of the people I talked with over these several weeks think that their country should emulate Europe or North America -- politically or culturally.  "China can create its own way."  Some of the students I talked to were very clear in their criticisms of the policies of their own government -- from educational access and equality to internet access -- but none expressed the notion that China should simply follow the European or North American models in these areas. (I was asked, why do corporations have so much influence on the government in the US?)  And more importantly -- many of these young people have the desire to study abroad; but they also express a very specific intention to return to China and have their lives and careers in China. And equally important, I met leading Chinese academics who have chosen to return to China from leading universities in the US.

So -- rapid, sustained economic growth; a broadly shared sense of China's distinctive values and history; successful incorporation of advanced, large-scale technology systems; the world's fastest super-computer; integrated regional and national plans for the future; and a degree of recognition of the importance of addressing China's social problems -- this is a powerful foundation for a China-centered future for this country and its 1.3 billion citizens.

Where is the place for social criticism in this picture? China faces a number of difficult social problems that will require decades to solve. Consider some of the hardest problems: Dealing with the needs of China's aging generation; providing quality healthcare to everyone; rapidly increasing incomes to China's poorest 40%; reining in the steadily rising pressures on air and water quality; reducing the prevalence of guanxi and corruption in business and daily life; and handling the challenges of rapid rural-urban transformation, to name just a few important problems. Many of these problems affect large segments of Chinese society, and their solution will require critical demands by these groups if the government is to take appropriate action. So allowing Chinese people a genuine voice in defining the problems the country needs to tackle is crucial. 

Moreover, many of the policy choices that need to be made will affect different social groups differently.  Expansion of the rail network or the power grid provides large gains for many people, but it imposes important costs on other people. And often the "losers" in these policy areas are poor people with little effective voice in the policy arena. If poor people don't have open avenues through which they can express their needs and sources of hardship, these needs will not be heard.  So for both these types of reasons, it is crucial that China move in the direction of creating greater space for dissent and the expression of fundamental concerns and interests. 

An important part of this evolution is the development of an institutionally protected investigative press. It is crucial in a modern society that the role of the news-gathering investigator be established and secured against the pressures of government. Investigations of corruption sometimes occur in the Chinese press. But there seem to be fairly clear limits to the depth and subjects that journalists can undertake. Investigators trying to establish culpability for school building collapses during the Sichuan earthquake quickly ran into government controls for going too far. And yet it is only when the spotlight falls on corruption that it can be addressed. 

So the confidence that Chinese people currently have in their future is warranted. And the path will be more direct if the Chinese political system continues to develop more institutionalized ways of allowing citizens and groups to express their concerns, desires, and criticisms. There will be a distinctively Chinese polity in the future. And it needs somehow to solve the problem of facilitating citizen voice and deliberative social problem solving. 

Monday, October 25, 2010

The global talent race

We have a lot of anxiety in the United States about the quality and effectiveness of our educational system, particularly at the elementary and secondary levels. And the anxiety is justified. A large percentage of our school-age population lives in high poverty neighborhoods, and they are served by schools that fail to allow them to make expected progress in needed academic skills, including especially reading, writing, and math. And we have high school dropout rates in many cities that exceed 25% -- leading to the creation of large cohorts of young adults who lack the basic skills necessary to do productive work in our society. So at a time when personal and social productivity depends on problem-solving, innovation, and invention, many of our young people in the US haven't developed their talents sufficiently to make these contributions.

How does this problem look from an international perspective? Other countries and regions seem to have taken more seriously the macro-role that education and talent will play in their futures, and are preparing the ground for superior outcomes on a population-wide basis. Here is one example -- Hong Kong. Though part of the People's Republic of China, Hong Kong retains a degree of autonomy in its social policies, and education is one of those areas where Hong Kong government can take special initiatives.

There is a pervasive feeling in Hong Kong that educational success is absolutely crucial. School children are strongly motivated, their families support them fully, and the city is trying to ensure that all children have access to effective schools. And there is a lot of civic focus on the quality and reach of the Hong Kong universities as well.  Business and civic leaders recognize the key role that well-educated Hong Kong graduates will play in the economic vitality of the city in the future. And university leaders are keenly interested in enhancing the quality of the undergraduate and graduate curricula.  Here is a valuable survey report by Professor Leslie N.K. Lo, director of the Hong Kong Institute of Educational Research at the Chinese University of Hong Kong (http://www.hkpri.org.hk/bulletin/8/nklo.html). The report documents the priority placed on quality of education by the authorities, even as it raises concerns about the effective equality of education in the city. Here is a report on the state of education research and reform in HK (http://www.springerlink.com/content/gt11u17672j34372/fulltext.pdf).  The report raises the possibility that Hong Kong's educational system is skewed by income and language: low-income families attending Cantonese-speaking schools may not get a comparable education to that provided to middle- and upper-income families in English-speaking schools.  But it isn't easy to find detailed educational research that would validate this point.

One very interesting data point concerning the equality of access provided by Hong Kong education can be located in the distribution of family incomes among students in Hong Kong's elite universities.  Basically the data indicate that the Hong Kong universities are reasonably representative of the full income spectrum of the city.  About half of students in the elite universities in Hong Kong come from families in the lower half of the income distribution (or in other words, the median student's family income is equal to the median family income of the city).  This compares to a markedly different picture in selective public universities in the United States, where the median student family income is at about the 85th percentile of the US distribution of family income.  In other words, students and families from the higher end of the income distribution are over-represented at universities in the United States; whereas the Hong Kong university student population is relatively evenly distributed over the full Hong Kong income distribution.  (These data are based on a summary report prepared by researchers at Hong Kong University of Science and Technology.)
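The percentile comparison above can be made concrete with a small simulation. This is a purely illustrative sketch: the income distribution and sample sizes are made up, not drawn from the HKUST or US reports. It shows how "the median student's family income sits at the Nth percentile of the city" is computed, and why an evenly drawn student body lands near the 50th percentile while a top-skewed one lands much higher.

```python
import random
from bisect import bisect_left
from statistics import median

random.seed(0)

# Hypothetical city income distribution (log-normal-ish), purely illustrative.
city_incomes = sorted(random.lognormvariate(10.5, 0.6) for _ in range(100_000))

def percentile_rank(value, sorted_population):
    """Fraction of the population earning less than `value`."""
    return bisect_left(sorted_population, value) / len(sorted_population)

# Scenario A (the Hong Kong pattern described above):
# students drawn evenly from the whole city distribution.
students_even = random.sample(city_incomes, 2_000)

# Scenario B (the US pattern described above):
# students drawn only from the top 40% of the distribution.
students_skewed = random.sample(city_incomes[60_000:], 2_000)

print(round(percentile_rank(median(students_even), city_incomes), 2))   # near 0.50
print(round(percentile_rank(median(students_skewed), city_incomes), 2)) # near 0.80
```

The point of the sketch is only the mechanics of the statistic: when admission is income-neutral, the median student tracks the city median; when the student body is drawn from the upper tail, the median student's percentile rank rises toward the middle of that tail.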

This statistical fact gives rise to a suggestive implication: that students of all income levels in Hong Kong are roughly as likely to attend Hong Kong's elite universities.  And this contrasts sharply with the situation in the United States, where attendance in elite universities is sharply skewed by family income (Equity and Excellence in American Higher Education (Thomas Jefferson Foundation Distinguished Lecture Series)).  

The issue is important, because in the world-wide race for talent cultivation, those countries that do the best job of cultivating the talents of all their citizens are surely going to do the best in the economic competition that is to come.  Countries that waste talent by denying educational opportunities to poor people or national minorities are missing an opportunity for innovation, creativity, and problem-solving that can be crucial for their success in the global environment.  And if Hong Kong, China, and other East Asian countries are actually succeeding in creating educational systems that greatly enhance equality of opportunity across income, this will be a large factor in their future success.

Sunday, October 10, 2010

Strategies of economic adaptation


Charles Sabel and Jonathan Zeitlin made a powerful case for there being alternative institutional forms through which modern economic development could have taken place in their 1985 article, "Historical Alternatives to Mass Production: Politics, Markets and Technology in Nineteenth-Century Industrialization" (link). In an important volume in 1997, World of Possibilities: Flexibility and Mass Production in Western Industrialization, they take the argument two steps further: first, that institutional variations were not merely hypothetical, but in fact had an extended history in a variety of industries well into the twentieth century; and second, that the current situation of pervasive uncertainty about our most basic economic institutions was characteristic of the earlier periods as well.  The volume represents the work of an intensive seminar in economic history sponsored by the Maison des Sciences de l'Homme.  Contributors include a broad swath of researchers in economic history across Europe (not Asia!).  Chapters take up the processes of mechanization, specialization, and mass manufacture in a variety of industries in the nineteenth and early twentieth centuries -- silk, cutlery, watch-making, metal-working, and ship-building.  (Here is part of the very good introduction to the volume provided by Sabel and Zeitlin; link.)

Sabel and Zeitlin take the view that the history of business and technology can in fact shed quite a bit of light on the economic situation we face today -- from brand new sectors (Google, Facebook, Amazon) to the abrupt decline of old industries (the US auto industry) to speculation about the next big area of business growth (biotech, alternative energy).  They highlight a couple of features of the business and economic climate in the late 1990s that seem equally applicable today -- an acute sense of economic fragility and institutional plasticity.  They argue that these features were also the hallmarks of earlier periods of economic change as well.  So they argue that we can learn a great deal for today's challenges by considering the situation of industries like glass-making or watch-making in 1880 or 1920.
The sense of fragility goes to the once commonsensical idea that progress would lead to the gradual consolidation of particular forms of economic organization, and hence to an ever more certain sense of how best to deploy technology, allocate labor and capital, and link supply of particular products to demand. Today ... it is commonsensical to believe that the way many of these things are done depends on constantly shifting background conditions whose almost insensible mutation can produce abrupt redefinitions of the appropriate way to organize economic activity.
The second experience is one of the recombinability and interpenetration of different forms of economic organization: the rigid and the flexible, the putatively archaic and the certifiably modern, the hierarchical and the market-conforming, the trusting and the mistrustful.
...
The central theme of this book is that the experience of fragility and mutability which seemed so novel and disorienting today has been, in fact, the definitive experience of the economic actors in many sectors, countries, and epochs in the history of industrial capitalism.
...
But this double perception of mutability and fragility ... has not led them to exalt catch-as-catch-can muddling through as the organizing principle of reflection and action. What we find instead is an extraordinarily judicious, well-informed and continuing debate within firms, and between them and public authorities, as to the appropriate responses to an economy whose future is uncertain, but whose boundary conditions at least in the middle term are taken to be clear.
...
Our purpose here is to show that most firms in nineteenth- and early twentieth-century Europe and the United States, neither mired in tradition nor blinded by the prospect of a radiant future, carefully weighed the choices between mass production and what we would now call flexible specialization. (2-3)
One of Sabel and Zeitlin's most basic arguments is the idea that firms are strategic and adaptive as they deal with a current set of business challenges. Rather than an inevitable logic of new technologies and their organizational needs, we see a highly adaptive and selective process in which firms pick and choose among alternatives, often mixing the choices to hedge against failure.  They consider carefully a range of possible changes on the horizon, a set of possible strategic adaptations that might be selected; and they frequently hedge their bets by investing in both the old and the new technology. "Economic agents, we found again and again in the course of the seminar's work, do not maximize so much as they strategize" (5).
During the eighteenth and early nineteenth centuries, for example, the silk merchants and weavers of Lyons carefully monitored but did not imitate the policies of design routinization, subdivision of labor and price competition pursued by their Spitalfields counterparts, preferring alternative strategies based on rapid style change, increasingly flexible machinery and the skillful exploitation of fashionable markets for high-value products. ... Much as they admired the efficiency of American methods, detailed accounts of the American system in trade journals and technical society proceedings typically emphasized that this efficiency depended on standardization of the product which was wholly incompatible with the current or expected organizations of their respective markets. (12)
In other words, specialized firms did not "resist change;" rather, they carefully assessed the full implications of one form of organization and one use of technology against another, and selected those innovations that represented the best match to their own business realities. 

An interesting case study of an alternative way of organizing production is provided in the chapter by Peer Hull Kristensen and Charles Sabel, "The small-holder economy in Denmark."  It was an example of cooperative-based agriculture and small-scale production that provided a durable alternative to private capitalist farming and manufacture:
Denmark was the exception.  There in the decades before World War I peasant small holders built a technologically innovative cooperative movement that outcompeted estate-owners and urban financiers in virtually every segment of the dairy, egg and pork products industries.  In so doing they created demand for particular kinds of capital goods that contributed to the modernization of the small-shop sector of industry as well. (345)
Alongside the agrarian republic there was another estate of small holders, the artisans and craftsmen.  Their property was the knowledge of tools, materials, and techniques which made them independent of any one market or employer. By the outbreak of the First World War, they too had built institutions -- particularly a network of technical schools -- which allowed them to defend their place in Danish society by constantly renewing it. (365)
The history of these activities in Denmark demonstrates that it was possible for voluntary producers' cooperatives to manage the provision of specialized services, marketing services, and economies of scale to farmers and artisans that we sometimes believe can only be provided by the market.  This system did not last forever -- though it proved economically durable for half a century, and it demonstrated much of the flexibility and organizational innovativeness that Sabel and Zeitlin emphasize in their introduction.
But some fifty years later, in the late 1950s, the cooperative core of this small-holder economy was coming visibly undone.  First cooperative dairies, then the cooperative slaughterhouses began to combine into larger and larger units abandoning in the process many of their original constitutional features and becoming in fact and law corporations. The corporations in turn fought with one another and the remaining cooperative for control of their respective markets. (374)
I find the contributions to this volume interesting in exactly the way predicted by Sabel and Zeitlin in the introduction: for the models they illustrate of deft navigation of uncertain economic environments by firms, cooperatives, and individuals.  The economic and business environment in the region where I live is unforgiving for a wide range of industries; for example, job shops and tool and die shops have largely disappeared in the Detroit metropolitan area.  However, there are a number of mid-sized adaptable businesses that have continued to thrive, through exactly the kinds of intelligent, forward-scanning adaptation to new opportunities described by contributors to this volume.  These businesses are in the engineering and advanced services sector, and they are innovative in two ways: they are constantly looking for new opportunities to apply existing and new technologies to new applications; and they are looking for customers in developing countries, including especially the Middle East from Lebanon to Saudi Arabia.  Energy, solar power, building control systems, urban parking systems, and aviation maintenance can be found within the portfolio; and the leaders of these companies are systematically and strategically developing the relations abroad that are necessary to secure the next wave of contracts.

It is interesting to consider whether there is a difference between economic history and business history. One might say that the former has to do with the large features of economic organization, social regulation, and logistics that constitute an economic system, whereas business history has to do with the tactical maneuvering and small-scale adaptations that individual firms undertake within the general framework of the existing economic structure.  But I think Sabel and Zeitlin's answer would be a fairly decisive one: there is no fundamental distinction between the two levels of analysis.  They frame the distinction in terms of the ideas of "epochs" and "crises"; this language distinguishes between long periods of institutional stability, and short periods of dislocation and change -- something like the theory of punctuated equilibrium.  But Sabel and Zeitlin doubt the validity of this distinction.  "The solution, we think, is to relax the distinction between periods of stability and periods of transition in the same way and for the same reasons that we relaxed the distinction between maximizing actor and constraining context" (29).  Or, in other words, when we look closely, almost every period of economic activity is also a period that mixes elements of stability with deep and unpredictable change.

Tuesday, September 21, 2010

Intangible services


Neoclassical economics presents a pretty simple theory of the equilibrium price of a manufactured good. This theory also extends to a theory of the wage for skilled and unskilled labor. We postulate supply and demand curves, and the equilibrium price is the point where supply equals demand. The supply curve is influenced by factors governing the cost of production, and therefore the level of profit created at a given production level and price, and the demand curve is influenced by subjective consumer preferences. An increase in demand for a good pushes up the price, triggering more production; as supply expands, the price settles at a new equilibrium.

Wages are affected by this calculation because labor is a factor of production, and demand for labor at a given wage is influenced by the marginal product of labor. If the marginal product is greater than the wage the employer will hire another worker, increasing demand for labor and marginally increasing the wage. The equilibrium wage is the point at which the marginal product equals the wage.
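The two equilibrium rules just described can be sketched in a few lines of code. This is a minimal illustration with made-up coefficients, not a model of any real market: linear supply and demand curves solved for the market-clearing price, and a firm hiring workers until the marginal product of the next worker no longer exceeds the wage.

```python
def equilibrium_price(a, b, c, d):
    """Demand: Qd = a - b*p; supply: Qs = c + d*p.
    Setting Qd = Qs and solving gives p* = (a - c) / (b + d)."""
    return (a - c) / (b + d)

# Hypothetical coefficients for a single good.
p_star = equilibrium_price(a=100.0, b=2.0, c=10.0, d=3.0)
print(p_star)  # 18.0 -- at this price quantity demanded (64) equals quantity supplied (64)

def workers_hired(marginal_products, wage):
    """Hire while the marginal product of the next worker exceeds the wage."""
    n = 0
    while n < len(marginal_products) and marginal_products[n] > wage:
        n += 1
    return n

# Diminishing marginal product: at a wage of 55, the firm stops after the third worker,
# since the fourth worker's marginal product (50) falls below the wage.
print(workers_hired([90, 75, 60, 50, 40], wage=55))  # 3
```

The second function is the discrete version of the marginal-product condition in the paragraph above: the equilibrium employment level is reached exactly where the marginal product crosses the wage.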

Labor is not a homogeneous substance; the marginal product is affected by skill, intensity, and experience. So we should expect different wage curves for different segments of the labor force, with the wage rate for unskilled and inexperienced workers at the lowest level.  But because specialized labor is somewhat elastic in supply (through additional training) we would expect some degree of convergence between skilled and unskilled labor rates over moderate time periods.

How does this theory apply to intangible services where the quality of the product is difficult to measure? I'm thinking of a college education; how does a consumer decide between the education offered at a private university like Rice and the lower-cost alternative at UT-Arlington? But let's think of simpler examples -- for example, architectural services, family lawyers, or studio musicians. What are the factors that influence the price a supplier can charge in the marketplace?  Why do each of these sectors embody significantly tiered price structures?

Take architectural services. There is a wide range of fees charged by architectural firms, ranging from one-person firms designing single-family homes to multi-city firms charging much higher fees. There is demand for architectural services regionally and nationally. There is good information about suppliers and rates at the national level. And the supply of services is somewhat elastic -- more students will enter architecture school when the incomes they can expect are high. So why doesn't the simple logic of supply and demand drive the price of this service toward a reasonably consistent level tied to its cost of production? Why are some elite firms able to retain a significant and permanent price premium? In other words, why don't we witness the commodification of architectural services along the lines of the auto industry, where firms compete aggressively on price?

I suppose some of the factors that stabilize this sort of multi-tier price system in services are fairly obvious. These might include brand and reputation; quality and prestige of professional service providers within the various firms; and depth and quality of referral networks.

Consider this thought experiment. RUNOFTHEMILL is an architectural firm of 30 professionals in the Rustbelt. TOPOFTHELINE is a firm of 200 professionals in San Francisco. Detailed quality assessment by the XYZ consulting firm estimates that RUNOFTHEMILL completes a wide range of midsize projects at roughly the same level of quality as TOPOFTHELINE. However, TOPOFTHELINE charges roughly twice what RUNOFTHEMILL charges for a project of comparable size. What are the mechanisms that preserve the price differential between the two firms? Why are rational business organizations willing to pay the premium to have their buildings designed by TOPOFTHELINE?

First, it is possible that TOPOFTHELINE has succeeded in positioning itself in the marketplace as a provider of superior quality. By hypothesis, this is untrue; but if potential buyers are persuaded of the quality advantage, they may choose TOPOFTHELINE over RUNOFTHEMILL in spite of the premium. This seems to be an inverted version of the "market for lemons": because the actual quality of the good is difficult to measure, the purchaser is forced to turn to other indicators as possible signals of quality. And this may lead the purchaser to pay more for the service than necessary.

Second, TOPOFTHELINE may have pursued a deliberate and successful strategy of recruitment of architects from the most respected schools in the world, whereas RUNOFTHEMILL may pay lower fees and may recruit equally capable but less prestigious professionals. Prospective clients may take the prestige of the staff as an indicator of the quality of the product, and may therefore be willing to pay the premium.  The observable prestige of the professional staff may serve as a surrogate for the inferred quality of the service.

Third, TOPOFTHELINE may have a brand that conveys significant prestige on its projects.  A company whose corporate offices are designed by TOPOFTHELINE may gain from that prestige, and the gain may justify the premium in spite of the additional cost.

Finally, it may be that the market for architectural services is highly segmented as a result of the networks of referrals that exist involving the two firms. TOPOFTHELINE exists in a network of premiere organizations, both providers and purchasers; and referrals and endorsements for TOPOFTHELINE support premium prices for its services. RUNOFTHEMILL has completed equally high-quality projects, but for a second tier of companies and consumers; so its referrals more or less automatically steer its services towards a tier of companies that are more likely to compete on price. So RUNOFTHEMILL's referrals generate lower average fees.

Several of these factors are inherently irrational grounds for accepting a price premium.  If purchasers had full information about quality and price, they would not pay a premium for the pedigrees of the professional staff, and they would not restrict their purchasing horizon to suppliers recommended by other elite firms.  Instead, they would go with the Walmart strategy: get the best product for the lowest price. So far, however, it seems that the markets for advanced and specialized services are fairly sticky when it comes to price, quality, and prestige.
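The signaling logic above can be sketched in a few lines of code. This is a minimal illustration with entirely hypothetical numbers (the quality and price values are invented for the example, not drawn from any data): both firms deliver identical quality by hypothesis, but the buyer cannot observe quality and instead forms beliefs from prestige signals, so the prestigious firm sustains a premium that an informed buyer would not pay.

```python
# Hypothetical illustration of prestige-based quality inference.
# Two firms deliver identical actual quality; the buyer chooses on
# expected surplus = believed quality - price.

def expected_surplus(believed_quality, price):
    """Buyer's expected surplus: perceived value minus fee."""
    return believed_quality - price

ACTUAL_QUALITY = 100          # identical for both firms, by hypothesis
price_top, price_run = 90, 70  # TOPOFTHELINE charges a premium

# Beliefs distorted by prestige: the buyer infers higher quality from
# the prestigious brand, although true quality is equal.
believed_top, believed_run = 120, 85

uninformed_choice = max(
    [("TOPOFTHELINE", expected_surplus(believed_top, price_top)),
     ("RUNOFTHEMILL", expected_surplus(believed_run, price_run))],
    key=lambda pair: pair[1],
)
# The prestige premium wins despite identical actual quality.

# With full information, beliefs equal actual quality and the cheaper
# firm wins -- the "Walmart strategy" of the final paragraph.
informed_choice = max(
    [("TOPOFTHELINE", expected_surplus(ACTUAL_QUALITY, price_top)),
     ("RUNOFTHEMILL", expected_surplus(ACTUAL_QUALITY, price_run))],
    key=lambda pair: pair[1],
)

print(uninformed_choice[0], informed_choice[0])
```

The point of the sketch is only that the same buyer, with the same preferences, reverses the ranking of the two firms once the prestige-driven distortion in beliefs is removed.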

Wednesday, September 1, 2010

Development economics in historical context


Hollis Chenery and T. N. Srinivasan published the Handbook of Development Economics in 1988.  It was state-of-the-art in the late 1980s.  It is interesting to look back at the Handbook twenty-two years later to see how it stands up today.

First, the contributors.  The volume is a dream-team of development thinkers from the 1970s and 1980s: Amartya Sen, Arthur Lewis, Pranab Bardhan, Joseph Stiglitz, Peter Timmer, Nancy Birdsall, Paul Streeten, and Dwight Perkins, to name only a small subset of the authors.  (There are 33 essays in volumes I and II.)  Several currently important figures are not represented -- Arturo Escobar, Jeffrey Sachs, and Dani Rodrik, for example.  Escobar's Encountering Development: The Making and Unmaking of the Third World appeared in 1994; Jeffrey Sachs's The End of Poverty: Economic Possibilities for Our Time didn't appear until 2005; and Dani Rodrik's One Economics, Many Recipes: Globalization, Institutions, and Economic Growth appeared in 2008.  So it is certainly true that the field has moved forward with the emergence of new voices and perspectives since 1988.  But it is also true that the volume represents a very deep body of knowledge about some of the dynamics and policy choices pertaining to economic development.

More important is the question of the range of perspectives on development represented in the volume.  Development thinking has tended to swing from progressive to neo-liberal over the decades.  Progressives have paid more attention to distribution, poverty, and social provisioning; whereas neo-liberals have focused on markets and "getting the prices right," with little appetite for redistribution, government subsidies, or serious efforts at poverty reduction.  Gunnar Myrdal, Amartya Sen, and Arturo Escobar represent three generations of progressive development theorists; perhaps Peter Timmer, Malcolm Gillis, and Jeffrey Williamson fall closer to the neo-liberal end of the spectrum.  I would judge that the Handbook does a pretty good job of finding the middle of the spectrum.  Chenery's own emphasis on the importance of redistribution in development (Redistribution with Growth) places him closer to the progressive end, along with Pranab Bardhan, Irma Adelman, and Lance Taylor (each of whom has a contribution in the volume).  The book pays attention to "alternative approaches" to economic development as well as poverty-related issues like health and nutrition.  The book does a good job of combining a clear vision of the goals of economic development -- improvement of human welfare -- with technical economic analysis of growth, labor markets, and trade.  And many of the authors explicitly recognize the point that development economics benefits from theoretical pluralism; the approach is not narrowly neo-classical.

Here are a few interesting observations from several contributors:

Pranab Bardhan:
Development economics as a separate branch of economics originated in a widespread perception of the limited usefulness of orthodox economics, and even though its pristine separatism has mellowed over the years it retains to this day its contrary, unruly, if somewhat flaky, image in the eyes of mainstream economics.  Standard neoclassical economics is mainly on the defensive in this terrain and a number of alternative approaches clash and contend for our attention. (40)
Joseph Stiglitz:
The central questions facing development economics are: Why is it that some countries are so much poorer than others?  What can be done to make them grow faster? Faster growth is needed if the gap in living standards is not to be widened even further. (94)
J.G. Williamson:
What explains the timing and the extent of the transition from a traditional rural to a modern urban society? Why does city growth speed up in early development and slow down in later stages? What role does migration play in the process, and do migrants make rational location decisions? Do urban labor markets serve to absorb urban immigrants quickly? Are rural emigrants driven by "push" conditions in the countryside or by "pull" conditions in the cities? Is the Third World "overurbanized"? (425)
T.P. Schultz:
The record of sustained modern growth in real per capita income cannot be accounted for by the accumulation of conventional units of physical capital or by the increased application of hours of labor per capita.  The sources of modern economic growth are sought instead in the changing quality of labor and capital, in the more comprehensive accounting of other inputs, and in change of organization, policy environment, or technology.  ... Research on various aspects of the microeconomic relationship between education and development has expanded rapidly, forging a consensus on questions for study and appropriate methodologies to address these questions. ... Studies across persons, households, farms, and firms have documented, first generally in the United States and then in many low income countries, strong empirical regularities between educational attainment of populations and their productivity and performance in both market and nonmarket (home) production activities.  (544)
Jere Behrman and Anil Deolalikar:
Health and nutrition are important as ends in themselves and often are emphasized as critical components of basic needs in developing countries.  In addition they may be channels through which productivity and distributional goals of developing societies may be pursued effectively if, as is often hypothesized, the productivity of low-income persons in work and in human-capital formation is positively affected by health and nutrition status. (633)
Several things are noteworthy in reviewing the contents and methods of the Handbook -- issues and perspectives that would now be regarded as crucial.

A phrase that does not occur in the volume is "Washington Consensus."  This concept became current in the 1990s after being introduced by John Williamson in 1990 (link).  Here is how Williamson puts his point: "The paper identifies and discusses 10 policy instruments about whose proper deployment Washington can muster a reasonable degree of consensus." He identifies ten policy goals as constituting the Washington Consensus: Fiscal Deficits, Public Expenditure Priorities, Tax Reform, Interest Rates, The Exchange Rate, Trade Policy, Foreign Direct Investment, Privatization, Deregulation, and Property Rights. It is apparent that this list is heavily tilted towards the neo-liberal end of the spectrum.  By contrast, consider the Millennium Goals adopted by the United Nations in 2000 (link): End Hunger, Universal Education, Gender Equity, Child Health, Maternal Health, Combat HIV/AIDS, Environmental Sustainability, Global Partnership.  The Millennium Goals are focused on ending world poverty, while the Washington Consensus is focused on achieving effective market institutions and trading systems globally.  The Handbook isn't a sourcebook or a polemic in support of the neo-liberal agenda; but neither is it emphatic in its treatment of poverty.

Another term that does not occur in the volume is "globalization."  There are discussions of international trade, migration, capital flows, transnational corporations, and credit markets -- important components of contemporary debates about globalization.  But the concept space involved in the idea of economic development had not yet fully highlighted the importance of global interconnectivity.

Third, the Handbook gives virtually no attention to sustainability, resource depletion, and the environment.  These are now regarded as crucial aspects of the challenge of economic development.  Taxation, trade, and governance come in for repeated treatments; but environmental sustainability is not raised as a significant issue.

Finally, the Handbook doesn't give central priority to the issues of poverty alleviation and inequality that were already becoming central for some development economists, including Amartya Sen.  Sen's central ideas of functionings, freedom, and capabilities are expressed in his opening chapter to the volume.  But the bulk of the contributions to the Handbook don't begin with poverty, but rather more specific questions about growth, modernization, trade, and population.  The conceptual shift that Sen's writings would eventually bring to the field had not yet had full effect.

It is also interesting to examine the first and second editions of an important textbook on development economics that was roughly contemporary to the Handbook.  Malcolm Gillis, Dwight Perkins, Michael Roemer, and Donald Snodgrass's Economics of Development appeared in first and second editions in 1983 and 1987, a few years before the Handbook.  There is a high degree of conceptual and organizational similarity between the two treatments of the economics of development, including topics, approaches, and models and methods.

To get a sense for how the discipline of development economics has shifted since 1988, take a look at the topics and readings included in the course on Economic Development offered by Rohini Pande and Dani Rodrik (link).  Addressing poverty is the central focus in this conceptualization of the field; there is lots of attention to the components of human wellbeing (health, education, nutrition); and the syllabus pays a good deal of attention to issues of institutions and governance within the development process.