MR 4: On Leiter’s Nietzsche I

Recently, I looked into Brian Leiter’s leading (leitende) opus, Nietzsche on Morality (the not-so-rare First Edition, from Routledge in 2002). I had the most laudable of motives: I didn’t want to be like Brian, who is wont to trash people (including me) without bothering to read them. After a few pages, however, I closed the book again. Here, in five posts, are the reasons. There are a lot of them; the book is a compendium of strategies for converting historical figures into analytical philosophers.

A few pages in comes a discussion of Nietzsche’s “naturalism” (pp. 3-11). The discussion is important to Leiter because Nietzsche’s “naturalism” is what he hopes to deploy against his (and, it would seem, civilization’s) main enemies, the “postmodernists.” This identification of an enemy is the first step in the Leiter Conversion: “O my analytical confrères” (it fairly shouts) “Nietzsche is your brother—he hates whom you hate!” The relevant enemy is hateful indeed: a set of vile and dangerous nincompoops who claim that no text conveys anything objective, that all we have are interpretations.

On Leiter’s view, postmodern interpretations of Nietzsche deny two things: (a) that humans have a nature and (b) that we can know facts about that nature. Foucault, the arch-postmodernist, attributes denials of (a) and (b) to Nietzsche; Leiter is out to show that Nietzsche accepts them (p. 2).

One of these claims is ontological and the other epistemological, but Leiter runs them together: anyone who accepts either is assumed to accept the other. Thus: “that the genealogical object has no ‘essence’ suggests an anachronistic affinity with postmodern skepticism about facts and objectivity” (p. 167). Why would the denial of essences “suggest” skepticism? What connection between denying essence and embracing skepticism allows this “inference”?

Leiter seems to be skating here across a conflation of two very different claims, but he is not. He is skating across no fewer than four very different questions: (1) whether we can know human nature; (2) whether we can know facts about human nature; (3) whether there is human nature; and (4) whether there are facts about human nature. The relations among these claims are tangled. There can be facts about human nature even if there is no such thing as human nature, such as the fact that it doesn’t exist. If it exists and we know that, we know at least one fact about it, and so there are some such facts; but the idea that we can know facts about human nature without knowing human nature itself is as old as Plato (the ti esti question). Conversely, if essences are known intuitively, as Plato sometimes thought, we can have an intuition of human nature without knowing facts about human nature (the intuition will then be ineffable: Symposium 211). So some of these claims are logically independent of others, and some are not.
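To keep the entailments straight, a rough formalization may help (the notation is mine, not Leiter’s or Foucault’s): write HN for “there is a human nature,” F(p) for “p is a fact about human nature,” Kp for “we can know that p,” and A for “we can know human nature itself (by acquaintance).”

```latex
% A sketch of the four claims and their (non-)entailments; notation mine.
\begin{align*}
(1)\;& A && \text{we can know human nature itself}\\
(2)\;& \exists p\,(F(p) \land Kp) && \text{we can know facts about human nature}\\
(3)\;& \mathit{HN} && \text{there is a human nature}\\
(4)\;& \exists p\,F(p) && \text{there are facts about human nature}\\[4pt]
\neg(3) &\not\Rightarrow \neg(4) && \text{nonexistence would itself be such a fact}\\
K(3) &\Rightarrow (2) \Rightarrow (4) && \text{knowing it exists yields one known fact}\\
(2) &\not\Rightarrow (1), \quad (1) \not\Rightarrow (2) && \text{the two Platonic options above}
\end{align*}
```

Nothing in this little table licenses a slide from denying (3) to skepticism about facts as such.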

I won’t go into the whole thicket, which Leiter doesn’t even appear to see. His Foucault runs the four claims together as well, denying them all. But Leiter’s evidence for the denials differs from case to case. The denial of (3) is supported by quoting “Nietzsche, Genealogy, History” on essences (p. 2). So far so good: Foucault indeed does not think there is a human nature, and, denying (4) as well, does not think there are “deep facts” about human nature, unless perhaps we count its non-existence as a “deep fact” about it. But when Leiter ties Foucault to (1) and (2), he does so with a quotation not from Foucault himself but from Dreyfus and Rabinow (p. 2). Suspicions awake: can’t he find Foucault himself saying this?

When I wrote the chapters on Foucault for my book Philosophy and Freedom (Indiana 2000), I couldn’t. What I did find was Foucault saying that his archeological project must “correctly describe the discourses it treats” (Archeology of Knowledge, p. 29) and that he himself is a “positivist” with respect to truth (op. cit., pp. 125-127; see also pp. 31, 46; for further discussion and more passages, see my treatment of Foucault and truth in Philosophy and Freedom, pp. 133-136). So Foucault thinks there are facts, and that we can know them; he just doesn’t think there are essences or “deep facts.” In the latter case, what he denies is not the factuality of “deep facts” but their depth.

Indeed, the view that we can’t know any facts would render Foucault’s entire project—in both its archeological and its genealogical phases—massively incoherent. For that project is openly polemical—as “Nietzsche, Genealogy, History” shows, Foucault thinks traditional historians are wrong. They have distorted history by interpreting it in light of overarching ideas such as “historical epochs.” If all Foucault can offer against them is one more interpretation, he cannot carry the day; everyone is right.

To be sure, Foucault denies that there is such a thing as human nature, and he also denies that there are deep facts about that nature, since he doesn’t believe in “depth.” But he certainly thinks that people exist, and that we can know facts about them (such as that they are determined by the discourses in which they participate). So if we change (2) and (4) above by substituting “people” for “human nature,” Foucault accepts them. His claim, then, is that people exist, but that what they are changes too often and too radically to be a stable and coherent “nature.” His fundamental point is ontological, not epistemological—and certainly not skeptical. (The same, by the way, applies to Derrida. If Plato did not write the Phaedrus, for example, what is the point of Derrida’s deconstruction of it?)

As I argue in Philosophy and Freedom, the “epistemologizing” of the conflict between postmodernism and modernism was a highly unfortunate move, for though it rendered refutation easy, it rendered debate impossible: how can you argue with someone who denies the possibility of truth and reference? The way is open to all sorts of insouciant chicanery, including the creation of straw men. In Leiter’s case, however, the straw has a purpose. For Leiter’s ultimate goal, as we will see, appears to have nothing to do with postmodernism.

19. Randle P. McMurphy and the Cold War Aesthetic

Eric Bennett’s recent book, Workshops of Empire: Stegner, Engle, and American Creative Writing During the Cold War (Iowa 2015), deals with the founding and funding of creative writing programs in Cold War America. What happened there turns out to be a microcosm of what happened in many other fields. American writing had previously bathed in a wide-open, let’s-give-it-a-try atmosphere in which success was largely based on personal contacts (think Thomas Wolfe—Maxwell Perkins). Creative writing programs signaled the replacement of this shambolic non-system with a well-managed meritocracy in which serious achievers could be identified and certified by higher authorities by the time they were 25 years old. The oldest and most prestigious of these credentials were (and are) bestowed by the Iowa Writers’ Workshop.

As in other fields, the certification process required government and private institutions to work together, because writing programs needed to get funding from a variety of government agencies and private foundations. And as in other fields, the funding came with a political price. The result was a Cold War aesthetic that included basic principles of both form and content.

Consider the formal maxim, “Show Don’t Tell.” Bennett argues that this was a core principle of good fiction writing at the Iowa Writers’ Workshop, and that it passed over from that prestigious height into American fiction generally. Notice that it trades, consciously or not, on the distinction between showing and telling made in Wittgenstein’s Tractatus: “creative writing” can almost be defined as writing that shows rather than tells. All the important things, for Wittgenstein, have to be shown rather than told; but showing, for the early Wittgenstein, is paradigmatically accomplished by true propositions. “Creative showing,” as we may call it, differs from Wittgenstein’s concept in that it dispenses with the truth-requirement; it shows for the sake of showing, and is the purest form of writing possible.

Though Bennett’s book does not mention Wittgenstein, his views were in the air, and this suggests that “Show Don’t Tell” was not merely a pragmatic maxim but was deeply rooted in the philosophy of the time. For all its philosophical abstraction, however, “Show Don’t Tell” has a politically dark side. First of all, it is not, Bennett argues, a truism; it is not even a universally applied maxim, for novelists and poets have often told their readers about things that cannot be shown. I would suggest that this includes, first, their own thoughts: where would Proust or Tolstoy be if they couldn’t use their authorial voices to reflect upon and evaluate their characters and their actions?

The second result is the suppression of reflection in characters, whose thoughts, in line with this particular aesthetic, have to be immediately inferable from their actions. Like all unreflective people, unreflective fictional characters simply float from incident to incident. Hence, I suggest, the subgenre of the “Creative Writing Novel” that we all know so well. When written by men, its exemplars (unnecessary to mention any by name) portray how lack of reflection passes over into inarticulateness and then into violence. Such a novel is one incident of senseless violence after another, all in the service of Showing It Like It Is. When written by women, such novels show one incident of human caring after another, all in the service of Showing It Like It Should Be. Better, of course; but still Cold War.

Most American literature, and certainly the first-rate stuff, did not fall prey to this. But it had to fight against it. Mindless violence and unreflective caring constituted the default content of Cold War fiction. Why? Because reflection, the attempt to articulate what one has just done or just been, is a prerequisite of critical thinking. No reflection, no critique; problem solved. (Philosophy had an interesting application of this principle for those who, like me, confused reflection and self-reference: Russell’s paradox made both impossible. But self-reference is an atemporal notion, and thus impossible long before you get to its logic.)

As to content, male Artists became hypermasculinized: their works stood for the “natural man”—the exaggeratedly roisterous, but free, individual battling some sort of Machine. Again, this naturalness applied both to artists themselves and to their characters (think Pollock throwing paint on a canvas, Kerouac spitting out a novel onto an improvised roll of tracing paper). Artists thus became, to their good fortune, the very kind of person whom Communism sought to repress. This image found its way into fictional characters, resulting in a type-character—call it “Randle P. McMurphy,” after its unsurpassable depiction by Ken Kesey—a cultural counterpoint to the cold, calculating rational chooser propounded by philosophers (see Scare, chapters 3 and 4).

American literature in the Cold War thus became a matter of would-be McMurphys writing about fictional McMurphys. But there is a fundamental dishonesty in this comprehensive rejection of reflection, because it takes a lot of reflection to produce a work of art. Even the most unreflective painter or poet is continually monitoring their work, making (often highly constrained) choices as they go along. So while the literary character McMurphy retains even today his freshness and vigor, the artist McMurphy turns out to be a sham.

20. Cold War Philosophy and Medical Care

Cold War philosophy holds that market thinking—rational choice procedures, sometimes augmented by game theory—constitutes the whole of rationality. Any other mental activity is either rational choice in some sort of disguise, or is irrational. Everything therefore has to be organized on market principles.
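To make the view concrete, here is a minimal sketch (in Python, with invented numbers and a hypothetical insurance choice of my own devising, not an example from Scare) of what a “rational choice procedure” amounts to formally: assign probabilities and utilities to outcomes, then pick the act with the highest expected utility. On the Cold War view, whatever a mind does either reduces to something like this or is irrational.

```python
# Rational choice as expected-utility maximization: a toy sketch.
# The acts, probabilities, and utilities below are all invented.

acts = {
    # act: list of (probability, utility) pairs over possible outcomes
    "buy_insurance":  [(0.9, 80), (0.1, 60)],    # pay premium; covered if sick
    "stay_uninsured": [(0.9, 100), (0.1, -200)], # keep premium; ruined if sick
}

def expected_utility(outcomes):
    """Sum of probability-weighted utilities for one act."""
    return sum(p * u for p, u in outcomes)

for act, outcomes in acts.items():
    print(f"{act}: EU = {expected_utility(outcomes):.1f}")

# The "rational" act is simply the argmax over expected utilities.
print("rational choice:", max(acts, key=lambda a: expected_utility(acts[a])))
```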

It is the word “everything” in that last sentence that shows we are dealing with a philosophy, and not a mere theory or ideology. As Kant pointed out more than once, you can’t base universal judgments on experience, so you have to have some sort of a priori argument for them—and that’s philosophical.

Cold War philosophy has had a particularly touchy history in the case of medicine. One thing readers of Scare’s Chapter Six may notice is the way in which public health disappeared from the writings and concerns of Raymond B. Allen around the time the Cold War began. When Allen was still running medical schools (i.e., until 1946), he believed that the great challenge in medical care was no longer treating individual illnesses—that, he thought, was pretty well in hand—but setting up social systems for the delivery of health care and, indeed, of health itself, to Americans. His argument was explicitly pragmatic: health was assumed to be a good, and the issue was how to deliver it in a systematic way.

As Scare shows, when Allen became a Cold War academic administrator, the pragmatism disappeared. Of course, he no longer had occasion to write specifically on medical issues, but in general he adopted the Cold War philosophical view that science aims at truth (or confirmation) alone. Allen’s turn was part of a broader, but tacit, cultural development in which American medicine came to focus on solving the health problems of individuals, rather than of communities (this turn can be summed up in a single word: “Flint”). This was only, as the Germans say, konsequent: it followed from the view that medicine has to be, basically, a market exchange between a (sick) consumer and medical science.

Medical care is one place where experience pretty clearly refutes Cold War philosophy: you have only to step across the Canadian border to see that single-payer systems produce better health more efficiently than the traditional American panoply of insurance plans. But the Canadian plan, like the French and the British, does not allow for market choice. They are all single-payer plans, and so appear to Cold War philosophy as irrational.

There are many reasons why market rationality does not apply very well to medicine. An obvious one is the lack of information available to the chooser: unless you are a doctor yourself, you don’t have a clue which treatments will be best for you. Most people rely on their doctors to provide this information, but this leads to a regress: how do you know your doctor is right? There are various websites for evaluating medical practitioners—but how do you know which of them to trust? And so on.

Another is that consumers of medical care are highly constrained: they passionately want the most effective possible treatment, and are loath to consider alternatives that may be less costly or less inconvenient but also less effective. Since the more effective treatments often cost more than the alternatives, they opt for the costlier ones. Cost even becomes, in their state of imperfect information, a proxy for effectiveness.
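A toy illustration of that last point (mine, with invented treatment names and numbers): an imperfectly informed chooser cannot observe effectiveness, so they rank treatments by the one signal they can see, the price.

```python
# Cost as a proxy for effectiveness under imperfect information.
# Treatment names, prices, and effectiveness figures are all invented.

treatments = {
    # name: (price_usd, true_effectiveness) -- effectiveness is hidden
    "generic_drug":  (40,   0.85),
    "brand_drug":    (400,  0.87),
    "new_procedure": (4000, 0.80),
}

# A fully informed chooser could weigh effectiveness against price.
informed = max(treatments, key=lambda t: treatments[t][1] / treatments[t][0])

# The imperfectly informed chooser treats "more expensive" as
# "more effective" and simply picks the priciest option.
cost_as_proxy = max(treatments, key=lambda t: treatments[t][0])

print("informed choice:     ", informed)       # generic_drug
print("cost-as-proxy choice:", cost_as_proxy)  # new_procedure
```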

The Republican model for health care seeks to substitute market forces for governmental action in medical insurance: privately purchased insurance should replace Obamacare, with its mandate to purchase insurance from an array of government-approved plans. (Single-payer schemes are of course out of the question: there is no alternative in a single-payer system, and so rational choice among insurance plans is impossible.)

The problem with this is that no one has formulated a credible alternative to Obamacare (except, of course, single-payer). Republicans hate the mandate, but unless healthy (and often young) people are forced to buy insurance, the pool will skew toward the sick and costs will skyrocket. They also hate the government constraints on medical insurance plans, but removing them would lead to a proliferation of junk insurance (high deductibles and many exclusions, often hidden by needlessly complex prose). And let us not forget that Obamacare was arrived at by two very different paths: Barack Obama’s in 2009, and Mitt Romney’s in 2006. Alternatives, like unicorns, will be hard to find.

But on the premises of Cold War philosophy, they have to exist, because if setting up a national medical plan is to be a rational exercise, it has to come about through a choice among alternatives. Hence a touching faith among Republicans: there is, somehow, an alternative to Obamacare’s mandate and governmental role—it just hasn’t been found yet. And hence “repeal and delay”: end Obamacare now and then wait for the alternative to show up, as it surely, surely, will.

But if an alternative to Obamacare which did away with the mandate and other government constraints were possible, one would think it would have been found by now. The Republican faith in a future alternative to Obamacare derives, not from experience, but from Cold War philosophy. And faith in a philosophy is a dubious thing.