
Friday, November 29, 2013

Relevant to what?

[image: Benoit Mandelbrot, The Fractal Geometry of Nature (cover)]
[image: Benoit Mandelbrot, The Fractal Geometry of Nature, p. 15]
An earlier post raised the question of the value of academic research and concluded that we shouldn't expect academic research to be "relevant" (link). That is a strong conclusion and needs some further dissection. Plainly research needs to be relevant to something -- it needs to be relevant to a recognized "problem" in the discipline or across disciplines; it needs somehow to be relevant to a tradition or thread of conversation within the discipline; and (as Tasia Wagner pointed out in a comment) it often needs to be relevant to a "hot" topic if the author wants to see it published. And of course academic research needs to be judged by a set of standards of rigor, method, and overall significance. It needs to be relevant to a set of standards of academic assessment. We want to be able to make comparative judgments about research contributions -- "not well argued," "derivative," "minor," as well as their opposites -- strongly argued, original, and important. That is what academic communities are for, and that is why we have confidence in peer review processes for publication and for advancement in the university.

All true.

The specific kind of relevance I was taking issue with is "practical utility" -- the demand for immediate problem-solving potential that underlies common critiques of research in the humanities and social sciences. The Proxmire "Golden Fleece" awards a generation ago caught this current exactly (link), and there is a similar current of thinking in Congress today. For example, the current effort to bar the NSF from funding research in political science seems to fall in this category (link). This is the view I want to take issue with -- the idea that abstract research in the humanities or social sciences is frivolous, pointless, and without social value.

There is a related kind of relevance that I think I would discount as well: "accessibility to a wide public." Some academic research is in fact accessible to a wide audience in its primary form. But that is not generally the case. Take the mathematics of chaos theory. It is esoteric and technical, not readily understood by non-mathematicians. (The illustration and page of text above are taken from Benoit Mandelbrot's 1983 book, The Fractal Geometry of Nature.) But the theory can be translated by gifted science writers and communicators like James Gleick into forms that significantly influence the imaginations and frameworks of non-specialists; his Chaos: Making a New Science was read by a very wide non-specialist audience. Likewise, the primary research in archeology, ethnography, and economic history that underlies our understanding of the long-term material history of our species makes for a tough read for non-specialists. But then a Jared Diamond can write a wildly popular book, Guns, Germs, and Steel: The Fates of Human Societies, that translates this research for the wider readership. Diamond is an accomplished academic. But Guns, Germs, and Steel is not a primary work of original academic research; it is a beautifully executed work of translation.

So here is the scoring system I'd like to see guiding our thinking about social investments in research in the humanities and social sciences (which probably applies in the natural sciences as well):
  • Is the problem an important one?
  • Has an appropriate methodology been pursued with rigor, evidence, and logic?
  • Is there an original or innovative discovery involved in the research product?
Significantly, these criteria will be familiar to any academic who has served as a reviewer for journal submissions, a grant proposal reviewer for a foundation, or a reviewer for a faculty tenure case.
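
To make the rubric concrete, here is a minimal sketch in Python of how such a tally might work. The 0-10 scale for each criterion and the simple sum to a 0-30 total are taken from the Rawls example that follows; the function and parameter names are illustrative assumptions, not a formal instrument.

    # A rough sketch of the three-criterion rubric described above.
    # Assumptions: each criterion is scored 0-10 and the overall score
    # is a simple sum, following the worked example in this post.
    def score_research(importance: int, rigor: int, originality: int) -> int:
        """Tally three 0-10 criterion scores into a 0-30 total."""
        scores = {"importance": importance, "rigor": rigor,
                  "originality": originality}
        for name, value in scores.items():
            if not 0 <= value <= 10:
                raise ValueError(f"{name} must be in 0-10, got {value}")
        return sum(scores.values())

    # Rawls, "Justice as Fairness" (1958), as scored below:
    print(score_research(importance=10, rigor=10, originality=10))  # 30
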
Now let's score one particular philosopher, John Rawls, for a research article that was written before he became a household word with the publication of A Theory of Justice in 1971. The article is "Justice as Fairness," and it appeared in The Philosophical Review in 1958.

  • The problem is, how should we attempt to assess the justice of basic institutions in a modern society? This problem is one of the big ones -- give it a 10.
  • The methodology is analytic philosophy of ethics, with an innovative use of economic reasoning. Most of the world of expert philosophers would say the arguments are carried off perfectly. Another 10.
  • And what about innovation? For sure. Rawls insisted on a new way of framing ethical issues, distinctly different from the metaethical and utilitarian approaches of the 1950s. Another 10.

So "Justice as Fairness" scores a perfect 30 on my metric. And yet the article probably achieved a readership of 800 people in its published form in The Philosophical Review within a year of its publication. It was technical philosophy and would have been a quick rejection in The Atlantic or the New Yorker. But in hindsight, it was very important. It laid the ground for what became the most influential and widely read book of political philosophy in the second half of the twentieth century (over 300,000 copies according to its publisher), and substantially changed the terms of debate about issues of distributive justice.

All of this suggests that we can't judge the likely impact or even the practical importance of a work at the time it is undertaken. But we can make judgments about rigor, importance, and originality, and these are the best guides we have for deciding what research to publish and support.
