18. Cold War Ethics and the Rejection of Identity

One problem with the ethics of Cold War philosophy concerns what I will call “disidentification” (Hegel called it Entäußerung, or “externalization”). Whatever I choose has at least one alternative, for otherwise there would be no choice. And if I identify myself with any member of my plurality of alternatives, I cannot choose any alternative to it (or them). Since alternatives are incompatible with one another (otherwise there would also be no choice), doing that would end my identity and so be suicidal, physically or morally. Therefore, any alternative in a rational decision must be something I can walk away from and still be me.

This is not an issue for rational choice theory, which was originally developed to cover cases of consumer choice and related contexts. In such cases, my identity is not at stake; no matter which brand of toothpaste I choose to buy in the drugstore, I will still be me. But when rational choice theory becomes Cold War philosophy, it applies to everything I do that can claim to be “rational,” and so to more important matters.

One inevitable result is that commitment comes to look like choice, in spite of the fact that commitments are, precisely, what you cannot walk away from. Instead of seeing myself as committed to my religion, for example, I may find myself trying to choose it in a Cold War way. But choosing a religion implies, as commitment does not, that there are other religions which I might choose—alternative religions.

But then, before I make the choice, I can have no religion at all.

Suppose, for example, I am “choosing” Catholicism. There must then be an alternative religion on the table which I might choose—say, Hinduism. If I already identify as a Catholic, however, this choice is fake: I cannot choose to become a Hindu without changing my identity. So for my choice to be real, I must put aside the religion I already have—Catholicism.  I must “disidentify” with it.

Cold War philosophy, claiming as it does to apply to everything rational, bids us to take this stance on all things. Everything about me then becomes an object of my choice, and at the limit I can have no identity other than that of being a rational chooser, i.e. an algorithmic machine who first ranks her preferences in accordance with transitivity and completeness and then opts for the highest utility. (As Rawls might say, everything concrete about me is behind the “veil of ignorance.”)
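
To see just how thin that residual identity is, here is a minimal sketch (mine, and purely illustrative; the labels and utility numbers are invented) of the "algorithmic machine" just described: given a numerical utility for each alternative, it ranks them and opts for the highest. Nothing about who is choosing figures anywhere in the inputs.

```python
# Purely illustrative sketch (not from Scare or the decision-theory literature):
# the Cold War "rational chooser" reduced to a few lines. Utilities are assumed
# to be given in advance; where they come from, and who is doing the choosing,
# never enter the computation.

def rational_choice(utilities):
    """Rank the alternatives by utility and opt for the highest-ranked one.

    `utilities` maps each alternative (a label) to a number; a numeric scale
    trivially yields a ranking that is complete and transitive.
    """
    if not utilities:
        raise ValueError("A choice requires at least one alternative.")
    ranking = sorted(utilities, key=utilities.get, reverse=True)
    return ranking[0]

# Hypothetical example: the religions and numbers are invented for illustration.
print(rational_choice({"Catholicism": 0.7, "Hinduism": 0.6, "no religion": 0.1}))
```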

When Hegel unpacks disidentification as "externalization" in the central sections of his Phenomenology of Spirit, it is an emancipatory process: it frees me from all kinds of identities imposed on me by my upbringing and social conditions. The difference is that Hegel does not present externalization as a process involving choice among alternatives. It is rather a "negation," or rejection, of various predominantly natural features as constituting my identity. In doing that, I come to see them, not as features of my identity, but as mere circumstances which I can change or abandon. (I discuss this at length in my Poetic Interaction.)

The resulting identity, however "emancipated" it may be, is a pretty thin one, and in any case the emancipation is a bit of a sham. No matter how many of my own properties I have negated, denied or ejected from my identity, others will treat me differently: they will continue to see me as having, or performing in accordance with, specific traits of gender, race, class, nationality and so forth—and it is not I who write the scripts for the various scenes I am forced to perform.

Who does? Who makes this a sham emancipation? The question is a pretty big one, but the answer is pretty commonsensical. I'll get to it soon.

 

MR 7: On Leiter’s Nietzsche IV (and Final)

My encounters with Leiter's Nietzsche had led, not to Nietzsche, but to a set of rules for turning Nietzsche into someone analytically acceptable. But what, according to Leiter, did analytical philosophers find acceptable? What sort of people are they, if we judge by Leiter's rules for pleasing them?

First of all, they are deeply concerned with their enemies (Rule # 1): Nietzsche had to be rescued from Foucault and his evil “postmodern” henchmen. Your enemies, O analytical philosophers, are Nietzsche’s enemies, and mine as well—so we are all friends!

Second, they know little about those enemies. Though some of the henchmen Leiter tilts against are real enough, Leiter’s Foucault is largely a fabrication, and anyone who has read a bit of what Foucault says about his own project should know it.

Third, they are incurious: what would it take not to notice that Leiter's "continuity with science" thesis in Chapter One is buttressed by a convoluted discussion of "continuity" but none of "science" until the next chapter, which quietly contradicts what is presupposed in this one? Leiter doesn't bother to cover his tracks.

Not only are these people incurious, but they are dismissive of any project not their own (Rule # 3); like to ape scientists (# 4); and adhere to philosophical strictures established to fight a war that ended thirty years ago (# 2).

Not a very pleasant picture—and hardly, of course, an accurate one. The analytical philosophers that I have come to know in my career no more resemble this than they resemble pterodactyls.

But there was one final twist, and one final perfidy.

The twist had to do with Foucault—the real one—and naturalism. As Scare shows, "naturalism" has a heavy history in American philosophy. During the McCarthy Era it came to be used as a euphemism for "atheism," and in that sense, Nietzsche is obviously a naturalist: he denies the existence of God. But "naturalism" also meant turning to nature rather than to culture for your explanations of human affairs: it was consciously designed to bring philosophy under the umbrella of natural science. The survival value of this during the early Cold War is evident.

Leiter, of course, knows nothing of that history. Having anti-postmodernist fish to fry, he wants "naturalism" to mean "belief in human nature." But human nature plays a small role in his subsequent discussion, because what Nietzsche actually appeals to in his explanations, according to Leiter, is not human nature but various "type-facts" about human nature: "It is type-facts, in turn, that figure in the explanation of human actions and beliefs (including beliefs about morality)"—p. 8.

There is one human nature, but there are many type-facts about it. Leiter, quoting himself, refers them to "a fixed psycho-physical constitution" (p. 8). But what is this "fixity"? Do type-facts about you last your whole life? No quote from Nietzsche in this section says so. Do they change historically? Again, Nietzsche—Leiter's Nietzsche, anyway—is silent.

Suppose that they do, and that the nature of a type-fact can be explored via the way people talk about themselves and other things. Then type-facts would do for Nietzsche very much what, for Foucault, the discourses in which a person has been "subjectivated" (Scare p. 8) do: explain the behavior of that person. Like type-facts, indeed, Foucauldian discourses are relatively stable forms which determine, and so explain, people's behavior. The main difference would appear to be that for Foucault these determining forms are not natural, but cultural. That's quite a difference—but also very far from the epistemological chasm that Leiter has dug between Nietzsche and Foucault.

At this point, I decided to stop reading. But the final perfidy awaited. Leiter's Chapter Two recounts "the profound intellectual influences on the young Nietzsche, influence that shaped his naturalism" (p. 35). Nietzsche's philosophy, we learn, arose from his youthful, enthusiastic, and uncritical reading of the Presocratics; the Sophists; Schopenhauer; and the German Materialists. Nietzsche, it appears, loved them all—presumably because of the relevant type-facts about him.

But Nietzsche was clearly not a universally uncritical reader; he was pretty critical, and early on, of the Bible, and later of manifold philosophers and historians. So why take it that a spate of uncritical reading in Nietzsche’s early years explains his views?

It helps, here, to remember that Leiter’s book is intended for young people: it is to be “student-friendly” (p. xi). This absolves Leiter of numerous tedious things we would expect to find in a book intended to convince scholars—things like careful reading of Foucault, engagement with secondary literature not in English, historically informed discussions of naturalism, and linguistically informed discussions of terms like Wissenschaft.  But it raises another question: what sort of a friend to students is our friend Leiter?

One who gives the young readers of his book the impression that enthusiastic, uncritical reading was crucial to the development of a major philosopher. One who thus, tacitly, recommends that kind of reading to students. Including, of course, the very students who read Leiter’s book! He apparently wants students to read him uncritically and enthusiastically, abjure the evil “postmodernists,” and dedicate themselves to Leiter’s caricature of analytical philosophy. And in this way Leiter will endear himself to those analysts (or would if there were any like that) and develop an uncritical following among philosophy majors. He will have a much more influential, and so agreeable, career than if he had made his arguments rigorously and presented complete discussions of the evidence.

So now I really did stop reading. Though Leiter's book has some merit as a popularization of Maudemarie Clark's account of Nietzsche's "theory" of truth—views with which I am in some, though limited, sympathy—it isn't really about Nietzsche at all. Like so much of what Leiter does, it's really about—the Life of Brian.

MR 6: On Leiter’s Nietzsche III

Sorry for the interlude—I was traveling and ill. The illness did not come from reading Brian Leiter’s Nietzsche book. I think.

Having attempted to digest the eightfold (or so) intellectual diverticulum presented by Leiter's discussion of Nietzsche's naturalism, I found I was still only on p. 11 and tried to move on. But there was much more in this section for those who want to learn how to make someone who died in Germany in 1900 look like a contemporary American analytical philosopher. And for me there were three more Leiter Rules.

First, it turns out that Nietzsche's own basic project has nothing to do with his naturalism. That project, Leiter says, is Nietzsche's philosophical attempt at "value-creation," i.e. at the "revaluation of all values." Value-creation's profound concern for human greatness (not a notable feature of naturalism) "animates all [Nietzsche's] writings" (p. 27); and the much-belabored naturalism is merely an instrument in its service (pp. 11, 26, 283). But value-creation, Leiter avers, has no "continuity" at all with Nietzsche's naturalism (p. 11). So Leiter dismisses it: "most of Nietzsche's writings are devoted, in fact, to the M-naturalistic project" (p. 11).

We now have one philosophical project (value-creation) that "animates all" of Nietzsche's writings; and another (naturalism) which, though extensively treated in those writings, is explicitly said to be merely an instrument in its service. On which do we focus? The instrument! It's as if a book on Quine's philosophy focused on his many technical writings in logic, dismissing the philosophical overview in whose service they stand. So we get Leiter Rule #3: Dismiss any aims and concerns of your guy that do not align with those of analytical philosophers—even as you openly admit how important they were to Nietzsche himself.

Dismissing value-creation is in the service of yet another Leiter Rule, # 4: Make your guy as close to a scientist as you can. We saw in an earlier post that Nietzsche's "emulation" of scientific method (which is not really an emulation, and not of a method) is phrased by Leiter in terms of Nietzsche's "continuity" with science. Leiter supports this continuity-with-science thesis with four quotes in which Nietzsche praises scientific method. The first and most important of these is not from the main text Leiter's book deals with—On the Genealogy of Morality—but from the preceding book in Nietzsche's oeuvre, Beyond Good and Evil. But in addition to the fact that Leiter had to get it from another book—not fatal of course, but grounds for worry—there are, he notes, some "striking" things about this passage.

Yes, and one striking omission from Leiter's discussion of it. The quotation is long and I won't reproduce it here; it's on p. 6 (having already been as far as p. 11, I was clearly losing ground). Suffice it that the quote discusses the "discipline" of science, and Leiter claims it shows Nietzsche's allegiance to scientific method. But what does Nietzsche mean by "science"?

Leiter then, on p. 7, supports his view of this passage with three more quotes, of which the first and most problematic is from The Antichrist § 59. (I note in passing that The Antichrist, like Beyond Good and Evil, is not On the Genealogy of Morality.) As Leiter has it, the quote says that "[S]cientific methods… one must say it ten times, are what is essential…" But alas (for Leiter!) the Colli-Montinari German text, which is the only German text referenced in his bibliography (p. 306), does not contain the word "scientific"; it refers only to "methods." Same for the Kaufmann translation, which Leiter claims to have used. Uh-oh.

Perhaps some reference to the context of science is established elsewhere in the passage, unquoted by Leiter but enough to justify (though not to excuse) the mutilation of the actual quote? No: the whole passage is about the ancient Greeks, who as far as I know had not discovered the modern scientific method (they didn’t do many experiments and didn’t have a clear conception of empirical “method”). Later in the passage, Nietzsche in fact lists the ancient “methods” that, in the twilight of Christianity, have been reconquered by moderns—and what are they? “The free gaze on reality, the cautious hand, patience, and the entire probity (Rechtschaffenheit) of knowledge.” Cognitive virtues, all of them—but why call them “scientific”? They could have come from Heidegger—and the later one at that.

The other two quotes on p. 7 do valorize "scientific method"—but again, what does Nietzsche mean by that? It is, ahem, surprising that in his discussion of Nietzsche's continuity with science, and in contrast to his elaborate discussion of the meaning of "continuity" in this context, Leiter doesn't ask what Nietzsche means by "science."

To be sure, Leiter notes—many pages later—that the English-language obsession with natural science is not conveyed by Wissenschaft, the German term (p. 36), and at one point (p. 41) he notes that Nietzsche characterizes science as "knowledge for the sake of knowledge," which is part of the German meaning, but is hardly the whole of it. Wissenschaft, in German, etymologically means "an organization of knowledge"; the Oxford Living Dictionaries define it as "The systematic pursuit of knowledge, learning, and scholarship (especially as contrasted with its application)." So in German, things like jurisprudence (Rechtswissenschaft) and the study of Judaism (Wissenschaft des Judentums) count as sciences. Neither of them, of course, makes a presupposition of determinism: both are about people who are responsible for their actions. So continuity with Wissenschaft does not guarantee naturalism in Leiter's sense.

A discussion of Wissenschaft as part of the discussion of naturalism would thus weaken Leiter's claim of Nietzsche's continuity with science, even as that claim was advanced. So Leiter discusses it later and obliquely, when he comes to Nietzsche's concern with classical philology in Chapter Two (pp. 35-38). This discussion trades upon the general view of Wissenschaft mentioned above, as it must—philology is hardly an empirical science as Anglophones understand the term (it makes virtually no use of mathematics, for example). But it does not relate Nietzsche's views on the "science" of classical philology to his views on Wissenschaft in general.

What is at stake in this is Leiter's "continuity with science" thesis. Since his discussion of that in Chapter One makes no reference to the German term, it sounds by default as if Nietzsche were continuous with science as Anglophones understand it—with natural science. In fact, wissenschaftliche method turns out for Nietzsche to mean nothing more than building up an organized body of knowledge from empirical evidence, doing so carefully and with "free gaze," i.e. with a willingness to see reality as it really is. All the quotes that Leiter adduces for Nietzsche's praise of scientific methodology are thus to be read in the context of Nietzsche's home language—and Nietzsche's continuity with science is not as contemporary as it sounds in Leiter's discussion. Indeed, it is downright vapid. And we get a fifth Leiter Rule: When the meanings of your guy's words in the original language do not coincide with the meanings of the terms by which they are conventionally translated into English, ignore them—or discuss them somewhere else.

 

MR 5: On Leiter’s Nietzsche II

Rule #1 for Leiter transformations—for turning some wild and crazy figure from the history of philosophy into someone logical and mild-mannered enough to be acceptable in today’s well-purged philosophy departments—was: claim that your guy and contemporary philosophers have a common enemy, even if you have to invent that enemy.

Now we turn to rule # 2: dress your guy up in the formal structures of Cold War philosophy. Most philosophers, when you say “formal structure,” think of logic. But, as Scare argues, logic merely structures philosophy’s surface; it has no ontological bite and is more of an intellectual veneer than a structure. The underlying formal structure really used by Cold War philosophers—the one that applies across the board, whatever else they are doing, and which unlike modus ponens and its empty ilk actually gets certain things done—is rational choice. As many scholars have noted, Cold War “rationality” just means choosing for the highest utility among alternatives ranked according to transitivity and completeness (see Scare for bibliographical details). It’s pretty hard to apply logic to Nietzsche and make him look good. But what about Cold War rationality? Leiter’s effort, though doomed, is noble.
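
For readers who have not met these formal conditions, here is a minimal sketch (mine, not drawn from Scare or from any particular decision-theory text; the brands and numbers are invented) of what they amount to: a weak preference relation over a set of alternatives counts as a ranking in this sense only if every pair of alternatives is comparable (completeness) and the comparisons never cycle (transitivity). Assigning each alternative a number satisfies both automatically.

```python
from itertools import product

# Purely illustrative sketch of the two formal conditions on a "rational"
# preference ordering. `prefers(a, b)` is a weak preference: "a is at least
# as good as b".

def is_complete(alternatives, prefers):
    """Completeness: every pair of alternatives is comparable, one way or the other."""
    return all(prefers(a, b) or prefers(b, a)
               for a, b in product(alternatives, repeat=2))

def is_transitive(alternatives, prefers):
    """Transitivity: if a is at least as good as b, and b as good as c, then a as good as c."""
    return all(not (prefers(a, b) and prefers(b, c)) or prefers(a, c)
               for a, b, c in product(alternatives, repeat=3))

# Hypothetical example: the brands and numbers are invented for illustration.
utility = {"BrandA": 2, "BrandB": 1, "BrandC": 0}
prefers = lambda a, b: utility[a] >= utility[b]
brands = list(utility)
print(is_complete(brands, prefers), is_transitive(brands, prefers))  # True True
```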

Understanding what is meant by the claim that Nietzsche was a naturalist requires, Leiter claims, telling us what sort of naturalist Nietzsche was. One way to do this, the simplest and most direct, would be to find the places where Nietzsche talks about naturalism and expound them. But Leiter doesn't do this; instead, he offers us a whole taxonomy of naturalisms—I counted eight. Nietzsche, I think, counted none; as far as I know, he never discusses the typology of naturalism. I won't go into all of Leiter's types and subtypes; the main distinction is between "M-naturalism," which holds that philosophy should be "continuous" either with the methodology of the sciences (which means emulating scientific method) or with their results (which should be "supported" by "the best science": p. 3), and "S-naturalism," which is either "substantive," holding that "the only things that exist are natural (or perhaps simply physical) things," or "semantic," holding that "suitable philosophical analysis of any concept must show it to be amenable to empirical inquiry" (p. 5). So we are up to four varieties of "naturalism."

I'll stop there. Leiter's question is: where does Nietzsche fit? On p. 5, he is both an "historical S-naturalist," in that he rejects any explanatory role for God in an account of the world, and a "speculative M-naturalist" in that he "takes over from science the idea that natural phenomena have deterministic causes." The adjectives here are cryptic, but meanings can be worked out. "Historical" just appears to mean that Nietzsche, like Hume, is now an historical figure—like Lincoln, he belongs to the ages. "Speculative" appears to mean that Nietzsche, also like Hume, develops a theory of human nature that is "modeled" on science in that it takes over from science the view that "natural phenomena have deterministic causes"; beyond that (and there is an awful lot beyond that) Nietzsche's views appear to be produced by some sort of "speculation." The determinism means that the general theory of human nature provides a basis for explaining (in words quoted, still on p. 5, from Barry Stroud) "everything in human affairs."

So much for cryptic; let us move on to problematic. Just why Nietzsche’s M-naturalism should be called “speculative” is not explained. If “speculative” means “goes beyond sensory experience,” as it used to, then the empiricism Leiter attributes to Nietzsche (e.g. on p. 71) poses problems. A more puzzling problem is the identification of determinism as a method. “Natural phenomena have deterministic causes” is not the description of a method, but a single statement which can (and does) serve science as a principle by which scientific methods get elaborated. Misidentifying it as a method is no idle tomfoolery; it makes Nietzsche look more “scientific” than he is. If I say I have adopted the “Brady method” for playing football, it sounds like I have a lot more in common with the great Patriot than if I admit that I have adopted only the principle that the aim of the offense is to get the ball across the goal line. (I will come back to Nietzsche and science.)

On p. 8, Nietzsche also turns out to be a results M-naturalist in that he draws on actual scientific results, “particularly in physiology.” Of course, lots of non-naturalists, from Aristotle to Aquinas to Alasdair MacIntyre to Charles Taylor, do the same. So why “drawing on” scientific results should make you a naturalist is unclear, as is how you can draw on them and still remain “speculative.” We certainly have here a remarkably relaxed criterion for naturalism.

That Nietzsche is a naturalist is hardly news. Leiter’s contribution, in his own eyes apparently, is to determine exactly what type of naturalist Nietzsche was.  The discussion has slightly misfired in that out of eight (or so) categories of naturalism Leiter identifies, Nietzsche lands in three. It’s a bit like ordering half the menu in a restaurant. But people do do that, if the menu is cryptic enough.

More to the point: why this involved and confusing discussion of types of naturalism in the first place? Nietzsche himself never talks that way as far as I know (and as far as Leiter documents); it is an external matrix applied to his thought. Why? Why not just say that Nietzsche's naturalism has three components: his denial of an explanatory role for supernatural entities, his recurrent use of physiological results, and his emulation of the main presupposition of science? What is the point of this complex and confusing discussion in a supposedly "student friendly" (p. xi) book?

This is where Rule # 2 comes in. Leiter’s discussion of naturalism makes wild-and-crazy Nietzsche look, not only canny, but (sort of) rational in a Cold War way. First, if you have eight (or so) classes of naturalism, and Nietzsche exhibits three of them, he is nicely (if a bit widely) boxed in: pinned, we may say, in a lepidopteral sort of way, like Max Otto during the Otto Affair (see Scare, Chapters One and Two). Second, who put Nietzsche in there? Was it Nietzsche himself, who after all is responsible for his own views? Then it might look, at first blush, as if Nietzsche arrived at his version of naturalism via a choice among eight (or so) different versions of it. It might, if you are used to Cold War philosophers operating that way.

There are of course enormous problems with claiming that Nietzsche arrived at his naturalism via a rational choice among alternative forms of it—problems we will see later—and Leiter doesn’t openly claim that he did. What he has done with this strange discussion is set things up to make that an easy inference for the student. But who knows? It may not be Nietzsche that Leiter is trying to adapt to Cold War philosophy. Maybe it is Leiter himself: maybe he wants us to think that it was he, not Nietzsche, who has laid out the alternatives and then placed Nietzsche in among them—that it is he, not Nietzsche, who is “rational” by the standards of Cold War philosophy. Or maybe Leiter just assumes, unconsciously even, that this is just the way to do good philosophy. There are by now so many articles and books that operate like this—they lay out some alternatives, consider their pros and cons, and then choose among them—that Leiter just assumes that’s the way to proceed. In any case, we readers of Leiter, young and old, are beginning to understand Nietzsche, because we know where he is on our Cold War grid.

MR 4: On Leiter’s Nietzsche I

Recently, I looked into Brian Leiter's leading (leitende) opus, Nietzsche on Morality (the not-so-rare First Edition, from Routledge in 2012). I had the most laudable of motives: I didn't want to be like Brian, who is wont to trash people (including me) without bothering to read them. After a few pages, however, I closed the book again. Here, in five posts, are the reasons. There are a lot of them; the book is a compendium of strategies for converting historical figures into analytical philosophers.

A few pages in comes a discussion of Nietzsche’s “naturalism” (pp. 3-11). It’s an important discussion to Leiter because Nietzsche’s “naturalism” is what Leiter hopes to deploy against his (and, it would seem, civilization’s) main enemies, the “postmodernists.” This identification of an enemy is the first step in the Leiter Conversion: “O my analytical confrères” (it fairly shouts) “Nietzsche is your brother—he hates whom you hate!” The relevant enemy is hateful indeed:  a set of vile and dangerous nincompoops who claim that no text conveys anything objective, that all we have are interpretations.

On Leiter’s view, postmodern interpretations of Nietzsche deny two things: that (a) humans have a nature and (b) we can know facts about that nature. Foucault, the arch-postmodernist, attributes denials of (a) and (b) to Nietzsche; Leiter is out to show he accepts them (p. 2).

One of these claims is ontological and the other epistemological, but Leiter runs them together in that anyone who accepts either is assumed to accept the other: “that the genealogical object has no ‘essence’ suggests an anachronistic affinity with postmodern skepticism about facts and objectivity” (p. 167). Why would the denial of essences “suggest” skepticism? What connection between denying essence and embracing skepticism allows this “inference”?

Leiter seems to be skating here across a conflation of two very different claims, but he is not. He is skating across no fewer than four very different questions: (1) whether we can know human nature; (2) whether we can know facts about human nature; (3) whether there is human nature; and (4) whether there are facts about human nature. The relations among these claims are tangled. There can be facts about human nature even if there is no such thing as human nature, such as the fact that it doesn't exist. If it exists and we know that, we know at least one fact about it and so there are some such facts; but the idea that we can know facts about human nature without knowing human nature itself is as old as Plato (the ti esti question). Conversely, if essences are known intuitively, as Plato sometimes thought, we can have an intuition of human nature without knowing facts about human nature (the intuition will then be ineffable: Symposium 211). So some of these claims are logically independent of others, and some are not.

I won't go into the whole thicket, which Leiter doesn't even appear to see. His Foucault runs the four claims together as well, denying them all. But Leiter's evidence for the denials differs from case to case. The denial of (3) is supported by quoting "Nietzsche, Genealogy, History" on essences (p. 2). So far so good: Foucault indeed does not think there is a human nature, and also (4) does not think there are "deep facts" about human nature, unless perhaps we count its non-existence as a "deep fact" about it. But when Leiter ties Foucault to (1) and (2), he does so with a quote, not from Foucault himself, but from Dreyfus and Rabinow (p. 2). Suspicions awake: can't he find Foucault himself saying this?

When I wrote the chapters on Foucault for my book Philosophy and Freedom (Indiana 2000) I couldn't. What I did find was Foucault saying that his archeological project must "correctly describe the discourses it treats" (Archeology of Knowledge p. 29) and that he himself is a "positivist" with respect to truth (op. cit. pp. 125-127; also see pp. 31, 46; for further discussion and more passages see my whole discussion of Foucault and truth in Philosophy and Freedom pp. 133-136). So Foucault thinks there are facts, and that we can know them; he just doesn't think there are essences or "deep facts" about them; in the latter case, he denies, not the factuality of "deep facts," but their depth.

Indeed, the view that we can’t know any facts would render Foucault’s entire project—in both its archeological and its genealogical phases—massively incoherent. For that project is openly polemical—as “Nietzsche, Genealogy, History” shows, Foucault thinks traditional historians are wrong. They have distorted history by interpreting it in light of overarching ideas such as “historical epochs.” If all Foucault can offer against them is one more interpretation, he cannot carry the day; everyone is right.

To be sure, Foucault denies that there is such a thing as human nature, and he also denies that there are deep facts about that nature, since he doesn't believe in "depth." But he certainly thinks that people exist, and that we can know facts about them (such as that they are determined by the discourses in which they participate). So if we change (2) and (4) above by substituting "people" for "human nature," Foucault accepts them. His claim, then, is that people exist, but that what they are changes too often and radically to be a stable and coherent "nature." His fundamental point is ontological, not epistemological—and certainly not skeptical. (The same, by the way, applies to Derrida. If Plato did not write the Phaedrus, for example, what is the point of Derrida's deconstruction of it?)

As I argue in Philosophy and Freedom, the "epistemologizing" of the conflict between postmodernism and modernism was a highly unfortunate move, for though it rendered refutation easy, it rendered debate impossible: how can you argue with someone who denies the possibility of truth and reference? The way is open to all sorts of insouciant chicanery, including the creation of straw men. In Leiter's case, however, the straw has a purpose. For Leiter's ultimate goal, as we will see, appears to have nothing to do with postmodernism.

19. Randle P. McMurphy and the Cold War Aesthetic

Eric Bennett's recent book, Workshops of Empire: Stegner, Engle and American Creative Writing During the Cold War (Iowa 2015) deals with the founding and funding of creative writing programs in Cold War America. What happened turns out to be a microcosm of what happened in many other fields. American writing had previously bathed in a wide-open, let's-give-it-a-try atmosphere in which success was largely based on personal contacts (think Thomas Wolfe—Maxwell Perkins). Creative writing programs signaled the replacement of this shambolic non-system with a well-managed meritocracy in which serious achievers could be identified and certified by higher authorities by the time they were 25 years old. The oldest and most prestigious of these credentials were (and are) bestowed by the Iowa Writers' Workshop.

As in other fields, the certification process required government and private institutions to work together, because writing programs needed to get funding from a variety of government agencies and private foundations. And as in other fields, the funding came with a political price. The result was a Cold War aesthetic that included basic principles of both form and content.

Consider the formal maxim, "Show Don't Tell." Bennett argues that this was a core principle of good fiction writing at the Iowa Writers' Workshop, and passed over from that prestigious height into American fiction generally. Notice that it trades, consciously or not, on the distinction between showing and telling made in Wittgenstein's Tractatus: "creative writing" can almost be defined as writing that shows rather than tells. All the important things for Wittgenstein have to be shown, rather than told, but showing for (the early) Wittgenstein is paradigmatically accomplished by true propositions. "Creative showing," as we may call it, differs from Wittgenstein's concept in that it dispenses with the truth-requirement; it shows for the sake of showing, and is the purest form of writing possible.

Though Bennett’s book does not mention Wittgenstein, his views were in the air and suggest that “Show Don’t Tell” was not merely a pragmatic maxim but was deeply rooted in the philosophy of the time. But for all its philosophical abstraction, “Show Don’t Tell” has a politically dark side. First of all it is not, Bennett argues, a truism; it is not even a universally-applied maxim, for novelists and poets have often told their readers about things that cannot be shown. I would suggest that this includes, first, their own thoughts: where would Proust or Tolstoy be if they couldn’t use their authorial voices to reflect upon and evaluate their characters and their actions?

The second result is the suppression of reflection in characters, whose thoughts, in line with this particular aesthetic, have to be immediately inferable from their actions. Like all unreflective people, unreflective fictional characters simply float from incident to incident. Hence, I suggest, the subgenre of the "Creative Writing Novel" that we all know so well. When written by men, its exemplars (unnecessary to mention any by name) portray how lack of reflection passes over into inarticulateness and then into violence. Such a novel is one incident of senseless violence after another, all in the service of Showing It Like It Is. When written by women, such novels show one incident of human caring after another, all in the service of Showing It Like It Should Be. Better, of course; but still Cold War.

Most American literature, and certainly the first-rate stuff, did not fall prey to this. But it had to fight against it. Mindless violence and unreflective caring constituted the default content of Cold War fiction. Why? Because reflection, the attempt to articulate what one has just done or just been, is a prerequisite of critical thinking. No reflection, no critique; problem solved. (Philosophy had an interesting application of this principle for those who, like me, confused reflection and self-reference: Russell’s paradox made both impossible. But self-reference is an atemporal notion and thus impossible long before you get to its logic.)

As to content, male Artists became hypermasculinized: their works stood for the "natural man"—the exaggeratedly roisterous, but free, individual battling some sort of Machine. Again, this naturalness applied both to artists themselves and to their characters (think Pollock throwing paint on a canvas, Kerouac spitting out a novel onto an improvised roll of tracing paper). Artists thus became, to their good fortune, the very kind of person whom Communism sought to repress. This image found its way into fictional characters, resulting in a type-character—call it "Randle P. McMurphy," after its unsurpassable depiction by Ken Kesey—a cultural counterpoint to the cold, calculating rational chooser propounded by philosophers (see Scare, chapters 3 and 4).

American literature in the Cold War thus became would-be McMurphys writing about fictional McMurphys. But there is a fundamental dishonesty in this comprehensive rejection of reflection, because it takes a lot of reflection to produce a work of art. Even the most unreflective painter or poet is continually monitoring their work, making (often highly constrained) choices as they go along. So while the literary character McMurphy retains even today his freshness and vigor, the artist McMurphy turns out to be a sham.

20. Cold War Philosophy and Medical Care

Cold War philosophy believes that market thinking—rational choice procedures, sometimes augmented by game theory—constitutes the whole of rationality. Any other mental activity is either rational choice in some sort of disguise, or is irrational. Everything therefore has to be organized on market principles.

It is the word “everything” in that last sentence that shows we are dealing with a philosophy, and not a mere theory or ideology. As Kant pointed out more than once, you can’t base universal judgments on experience, so you have to have some sort of a priori argument for them—and that’s philosophical.

Cold War philosophy has had a particularly touchy history in the case of medicine. One thing which readers of Scare's Chapter Six may notice is the way in which public health disappeared from the writings and concerns of Raymond B. Allen around the time the Cold War began. When Allen was still running medical schools (i.e. until 1946), he believed that the great challenge in medical care was no longer treating individual illnesses—that, he thought, was pretty well in hand—but setting up social systems for delivery of health care and, indeed, of health itself, to Americans. His argument was explicitly pragmatic: health was assumed to be a good and the issue was how to deliver it in a systematic way.

As Scare shows, when Allen became a Cold War academic administrator, the pragmatism disappeared. Of course, he no longer had occasion to write specifically on medical issues, but in general he adopted the Cold War philosophical view that science aims at truth (or confirmation) alone. Allen’s turn was part of a broader, but tacit, cultural development in which American medicine came to focus on solving the health problems of individuals, rather than of communities (this turn can be summed up in a single word: “Flint”). This was only, as the Germans say, konsequent. It followed from the view that medicine has to be, basically, a market exchange between a (sick) consumer and medical science.

Medical care is one place where experience pretty clearly refutes Cold War philosophy: you have only to step across the Canadian border to see that single-payer systems produce better health more efficiently than the traditional American panoply of insurance plans. But the Canadian plan, like the French and the British, does not allow for market choice. They are all single-payer plans, and so appear to Cold War philosophy as irrational.

There are many reasons why market rationality does not apply very well to medicine. One obvious one is the lack of information available to the chooser: unless you are a doctor yourself, you don’t have a clue which treatments will be best for you. Most people rely on their doctors to provide this information, but this leads to a regress: how do you know your doctor is right? There are various web sites for evaluating medical practitioners—but how do you know which to trust? And so on.

Another is that consumers of medical care are highly constrained: they passionately want to have the most effective possible treatment, and are loath to consider alternatives that may be less costly or inconvenient but also less effective. Since the more effective treatments often cost more than the alternatives, they opt for those. Cost even becomes, in their state of imperfect information, a proxy for effectiveness.

The Republican model for health care seeks to substitute market forces for governmental action in medical insurance: privately purchased insurance should replace Obamacare, with its mandate to purchase insurance from an array of government-approved plans. (Single-payer schemes are of course out of the question: There is no alternative in a single-payer system, and so rational choice among insurance plans is impossible.)

The problem with this is that no one has formulated a credible alternative to Obamacare (except, of course, single-payer). Republicans hate the mandate, but unless healthy (and often young) people are forced to buy insurance, the pool will be too expensive and costs will skyrocket. They also hate the government constraints on medical insurance plans, but removing them would lead to a proliferation of junk insurance (high deductibles and many exclusions, often hidden by needlessly complex prose). And let us not forget that Obamacare was arrived at by two very different paths: Barack Obama's in 2009, and Mitt Romney's in 2006. Alternatives, like unicorns, will be hard to find.

But on the premises of Cold War philosophy, they have to exist, because if setting up a national medical plan is to be a rational exercise it has to come about through a choice among alternatives. Hence, a touching faith among Republicans: there is, somehow, an alternative to Obamacare's mandate and governmental role—it just hasn't been found yet. And hence "repeal and delay": end Obamacare now and then wait for the alternative to show up, as it surely—surely—will.

But if an alternative to Obamacare which did away with the mandate and other government constraints were possible, one would think it would have been found by now. The Republican faith in a future alternative to Obamacare derives, not from experience, but from Cold War philosophy. And faith in a philosophy is a dubious thing.

21. Time, Trump, and Aristotle Part I

It should not be surprising that Trump and some of the people he has put into his cabinet appear to be narcissistic fools—narcissistic foolery is an occupational disease of billionaires and generals when they forget they're being kowtowed to by absolutely everybody, and think instead that they're being treated as friends or even told the truth. But foolishness, by definition, is not understood by fools; you need some smarts and, sometimes, a good deal of background. In order to understand the Trumpian crop fully, for example, you have to know a good bit about the central books of Aristotle's Metaphysics (VII-IX). The following is therefore a bit abstruse—but as I like to say, the most concrete struggles can require the most abstract thinking.

In the Metaphysics, Aristotle unpacks the nature of Being in terms of ousia—of form in matter. As I argued many years ago in my Metaphysics and Oppression, in natural beings form is active, and exercises a threefold domination over matter: it separates a chunk of it off from other matter (boundary); generates and/or orders everything that goes on within those boundaries (disposition); and controls the exchanges between the being thus constituted and the world outside (initiative). Being itself thus comes to exhibit a two-level structure of leader and led, oppressor and oppressed. This structure, I argued, has been basic not only to Western thought but to Western life ever since Aristotle formulated it (and also, less consciously, before). Both sovereignty and freedom, for example, tend to be conceived on its basis, and it has provided the model for many different types of social organization in the western world: families, schools, the Roman Empire, the French railway system, bourgeois households, and more.

Including corporations and armies. A military commander exercises (though in the US under civilian leadership) nearly complete control over the activities of a closely defined set of people. A corporation, too, has a set of boundaries that divide what it owns and whom it employs from what other "legal persons" own and employ; it has a CEO who (though nominally overseen by a board of directors) organizes both what happens within it and the marketing of its products, i.e. their sale to the outside world. While modern corporations and armies have many ramifications and complexities undreamed of in Aristotle's time, their basic lineaments come right out of his Metaphysics. They are, we may say in his name, beings par excellence.

Modern corporations prove this by a paradox: despite the fact that the stock market (and corporate valuations in general) does much better under Democratic presidents, CEOs today are overwhelmingly Republican. This, if you think about it, is bizarre: what CEOs (and members of boards of directors, for that matter) supposedly want, above all else, is to make money. So they should clearly prefer Democrats!

But they don't. So what is going on? Maybe they don't really want money. Listen to them: the high taxes under Democratic administrations bother them, to be sure—but what really drives them berserk is government regulation. Indeed, what actually bothers them about high taxation is often not that it thins their personal checkbooks, for the money would go first to stockholders anyway, but that paying taxes keeps them from doing certain things they want to do with the business, mainly having to do with expanding it. What business leaders really want, then, is to be able to control their own corporations as an ousiodic form controls its matter, without interference from outside or resistance from below. So money is not at the forefront of their aspirations. If it were, they would all be Democrats. They are trying to fulfill the demands of ousiodic structure, not of their stockholders.

There is, however, one very untraditional fact about the people Trump has put into place to oversee the American government, and it has to do with the modernity of their education. Traditionally—for Plato as well as for Aristotle—to be a form meant to be specific. Over and above the human form, for example, you had in a human being only the relatively undefined human matter—the physical constitution of the human being, not all that different from that of other animals. It was thus up to the form to provide and so to exemplify the characteristic features of the being of which it was the form. Translated into the panoply of ousiodic institutions and practices in the Western world, this meant that leadership status was not transferable: take the pater of one family and put him into control of another family and disaster would ensue. Same for all institutions: the leadership role was, like form itself, specific to the institution.

Modern leaders, by contrast, have been selected for leadership positions in accordance with the basic premises of Cold War philosophy. And Cold War philosophy defines leadership in terms of rational choice: to lead a group or institution is to make decisions for it (George W. Bush, when president, actually referred to himself as the "decider"). Making decisions rationally is a skill transferable from one institution to another, as we see today in the steady migrations of CEOs from company to company. The result is that the leader is no longer bound to his institution: he is free to leave and find another enterprise to lead. It was much more difficult when the skills involved in leadership were specific to the organization.

How does this apply to Trump and his lads? See § 22.

22. Trump, Time, and Aristotle Part II

 

Even if the people with whom Donald Trump has filled his cabinet are narcissistic fools (as I suggest in § 21), we must give them their due: they are narcissistic fools who have mastered leadership skills which apply far beyond any one institution. Those skills are the ones involved in making decisions according to the tenets of rational choice theory. Absolute confidence in them is basic to Trump and his world, which is wholly predicated on the idea that the skills needed to build and run a business can be transferred smoothly to everything else. This is lunacy; if Cold War philosophy had not accustomed us all to think that the skills involved in rational choice management are the only skills the rational mind has to offer, no one would accept it.

If we follow Aristotle a bit farther, we see not only that it will not work, but how it will fail. For according to Aristotle, the leadership of the people Trump is bringing to Washington will, in time, fall victim to time itself: “Time,” as he puts it, “is the enemy of ousia.” The passage of time alone destroys ousiodic structure, the kind whose leading positions the Trumpians are trained to occupy.

Why? Because time for Aristotle is not the kind of abstract and benign ticking-away that it is for Newton. It is, cryptically to be sure, the measure (arithmos) of motion (kinesis). Motion, for its part, is "the actuality of potentiality quâ potentiality." This even more cryptic phrase can be understood by contrast with what Aristotle thinks is the more basic case, the conversion of potentiality to actuality. In this sort of actualization, something that is not yet comes to be: the cut wood is now potentially a house, and when it is put together it will no longer be potentially a house, but actually one. In between those states of the world, however, there is a sequence of states in which that goal, the built house, is having effects in the world in that it is directing the movements of the people building the house. In that sense it becomes actual while remaining potential, for the house is not yet built. To focus on something as a goal is thus to "actualize it quâ potentiality."

This applies to all motion for Aristotle because all motion for him is basically goal-directed: seeds grow into plants in order to fulfill their natures, and stones fall to earth (in his view) in order to fulfill theirs. We can circumvent this extravagant teleology by noting that no individual motion can go on forever, and so each has an end point of some sort; talk of that end point as "fulfilling" some nature or other is unnecessary. We can still say that any currently existing motion is "potentially" at its end point, and only when it gets there will the motion be complete: only then will it be a motion organized enough to be measured. Its measure is time, which presupposes this sort of organized motion. Beneath such organized motions we do not find stasis for Aristotle. We find, as for Plato and Hegel, a sort of entropic pullulating.

Organized motion kills ousia because to say that the material components of an ousia are in motion is to say that they have end points of their own. Not all of these can be imposed by an organizing form. In particular, the subordinate members of a social organization never do only what the managers tell them to do; being human beings, they have all sorts of other plans, goals, and motivations as well. The pursuit of each of these moves the organization, or part of it, in ways not determined by its form. It therefore constitutes a weakening of the dispositional authority of the form: it diverts energy, we may say, away from the central directives even if it does not explicitly contest them. Time itself, the measure of motion, thus weakens ousiodic structure. Thus, no form in matter can last forever for Aristotle; time itself destroys it.

It is these weakenings, moreover, that make it necessary for form to be particular. The master of the slave Aristoxenus never tells him to have any task completed within an hour after lunch time because he knows that Aristoxenus, being elderly, falls asleep after lunch. Nor can said master rely on advisors to tell him about Aristoxenus, because maybe they, too, are elderly enough that certain things escape them. The kind of “specificity” required of the leader of a social organization is the kind provided by what Aristotle calls syzên, living together.

This kind of specificity is denied by the modern theory of corporate leadership, which incorporates the many idealizations of Cold War philosophy, some of which I mention in Chapter Four of Scare. Just as rational choice theory presupposes perfect information on the part of the chooser, so this view of leadership presupposes perfect obedience on the part of one’s subordinates.

Alas for the theory, nothing human is perfect: orders, plans and policies get lost in the diverse complexities of human goal pursuing, and that is no occasional accident. It follows from the very nature of time.

So it will be for Trump's cabinet picks: in time, their schemes will fall apart, not only as the Washington "swamp" seeks to subvert them but also, and more irresistibly, as the individuals who are supposed to realize them simply do other things, like go to sleep. For a while Trump's cabinet picks, like the billionaires and generals they used to be, may be sheltered from knowing about this by their subordinates. But civilian governance has outside scrutiny that armies and corporations don't, and news of the failures will eventually come out. At which time the supposed transferability of leadership skills will come into play—and the leaders, frustrated and humiliated, will leave.

How long will it take? How long will the Trumpians last, issuing orders that are neither obeyed nor disobeyed, and formulating strategies that don't exactly go awry but don't work as intended either? And how many lives will be destroyed in this process?

Aristotle, I'm afraid, does not tell us.

23. McCarthyism and Philosophy: Strategies of Denial

There are a few people—more than a few, actually—who would like to deny that the domestic tumult of the early Cold War caused permanent changes in American philosophy. There are many ways to do this, but it's harder than it looks. Here are a few hints.

First, you can go whole hog and claim that the years after World War II saw no significant rise in Anti-Communism in America: the McCarthy Era is a left-wing fiction. No one I know of does this, even on the Right, because it is delusional. The volume of research on the McCarthy Era is vast and growing. Plus, there are people alive today who remember it. I am one of them.

Other strategies of denial seem saner—until you think about them. They all involve admitting that McCarthyism was real but limiting its effects either in duration or in scope. On the temporal side, for example, you can say that McCarthyism didn’t last long enough to be a serious political force (it went quiet on campus around 1960). It was merely an unhappy blip, quickly rectified.

While less loony-sounding than the whole hog approach, this one also ignores salient facts.  McCarthyism originated in the Cold War and is often dated from President Truman’s speech of March 12, 1947, which awakened fear of domestic subversion to gear Americans up for our Cold War intervention in Greece. “McCarthyism” is thus not an independent phenomenon, but merely a popular (and reassuring) name for the first phase of the home front in the Cold War. Though anti-Communism did lower its volume around 1960, the Cold War persisted and this did not signify a return to normal. It just meant that no Communists were left to fight. Things have largely stayed that way: for better or worse, American radicalism is mainly concerned with identity, not class.

The denialist might also try various scope-limitations: claiming that while the domestic pressures of the early Cold War were strong and proved enduring, they spared certain institutions. But which? Again, a vast body of established fact shows that American universities were heavily attacked. Cold War defense funding still drives much of their research, and the philosophical assumptions that support that funding still drive many other fields (again, see Scare).

So how about conceding the strength and staying power of Cold War political pressures on universities, but claiming that they somehow spared philosophy departments? Believing this would require an insouciance worthy of a Brian Leiter (if anyone pursues this type of denialism it will probably be Leiter or one of his acolytes, if any are left). That is because the intersection of the Cold War and philosophy departments is where Leiter's rubber meets his road. His laudably left-wing instincts push him to recognize the damage done to American society by right-wing forces—but if philosophy departments themselves were seriously hit, Leiter's beloved departmental rankings would be conducted by a post-purge generation, and so might well be skewed.

But why would philosophy departments have been spared? Was it because they are (or are perceived to be) simply too stultifyingly trivial to be of interest to players in the political world? That may be true today. (I heard it often enough from my teachers in the '60s—now I know why.) But facts get in the way again. As Scare establishes, philosophy departments were in fact prime targets of right-wing forces in the early Cold War because of their propensity to teach atheism. (Besides, if philosophy is so stultifyingly trivial, why is anyone doing it?)

If philosophy departments were front and center and so offered no protection, denialists must turn to individual philosophers—to the really important ones who shaped the future of the discipline—and claim that they were somehow spared in spite of being in philosophy departments. But this ignores the fact that, as I show in Scare, several of those Great Men, people like Quine, Davidson and eventually David Lewis, just happened to incorporate elements of what I call Cold War philosophy, the anti-Communist ideology of the time, into their philosophy.

Is this an accident? Did they do it to gain protection? Or, having done it for what they considered to be philosophical reasons, did they find it helped their careers? The last is most likely. But it remains a fact that philosophers who in those days turned to other paradigms, such as phenomenology or class analysis, didn’t have skyrocketing careers like those guys did.

Only one strategy of denial now seems open: accept all the facts showing that the political pressures of the early Cold War affected American philosophy, and permanently, but claim that this was a good thing. Didn’t it chase all sorts of charlatanry out of the discipline?

This strategy allows one to have one’s cake and eat it too: It affords a standpoint from which to condemn difficult thinkers like Hegel, Heidegger, Derrida and Foucault without bothering to read them—and if one’s ignorance is discovered, one can always claim that one has been attacked by charlatans.

But this, alas, supposes either that the discipline of philosophy could not have cleansed itself in time, or that doing so would have taken too long. If philosophers cannot eliminate charlatanry from their own ranks, their discipline itself is very close to being fraudulent. And when did philosophers ever invest in speed? We’re still trying to figure out exactly where Plato went wrong, and it obviously takes decades to get the McCarthy Era right.

Maybe there is some way beyond these to deny the main thesis of Scare, but I don’t know what it would be. That won’t stop the denialists, of course. Facts are facts, but you can’t tell that to some people.

 

MR 3: Leiter and me

In some dank corner of the philosophical forest lies Brian Leiter, mortally wounded and yet unable to die. From his delirious lips come hacking sobs and tortured moans, hopeless cries and senseless screams. Sometimes he just babbles, as if recounting good days gone by, or seeking help. Or, most of all and always, seeking attention. None comes, but on he babbles. And every now and again, filtered through the undergrowth, you might hear something that sounds like my name.

He did it again today—Dec. 14, 2016—at his Leiter Reports blog. Like his other posts concerning me, this one betrays not the slightest evidence of his having read anything I wrote: not my 2001 Time in the Ditch (Northwestern University Press), which suggests possible political explanations of the triumph of analytical philosophy, nor my 2016 The Philosophy Scare (University of Chicago Press), which gives a much more definitive treatment of developments at one American university during that time.

(I consider a philosophical approach to have “triumphed” over another approach, by the way, when many of its adherents are not embarrassed to have no serious knowledge of that other approach. This is not necessarily a bad thing. Do we all need serious knowledge of Hermes Trismegistus? No. Do we need to reopen the issue from time to time? Yes.)

Leiter’s post consists entirely of a quotation from Charles Pigden, of Otago (New Zealand), prefaced by an assurance that Pigden is correct and followed by a dig at the brilliant Babette Babich.

Pigden is not correct. His post is, it seems, rather quickly written, to the point that his argument (I can find only one) is hard to discern. (He states, for example, that the “triumph” of analytical philosophy was a “global” phenomenon, then spends half a paragraph taking that back. Did someone steal his “delete” key?) In any case, I do not seek to give an “American explanation” for a “global” or even a “pan-Anglophonic” phenomenon (as Pigden comes to call the triumph in question), for a couple of reasons:

First, the American triumph did not occur in the Forties. As my 2016 book shows, pragmatism was viable in the United States at least through the early Fifties. Its indispensable anthology, Naturalism and the Human Spirit, often called the “Columbia Manifesto,” was published in 1944, and a sixth edition came out in 1969.

It is, perhaps, analytical philosophy’s triumph over the British Hegelians that can be dated to around 1940, but I wouldn’t know: that happened in Britain. In the US, where British Hegelians were not easily found, the main enemies were idealism (of a Roycean kind) and pragmatism. So if Pigden thinks that analytical philosophy triumphed in “pan-Anglophonia” in the Forties, he has done what he accuses me of doing: generalizing from his own national/cultural context to the rest of the world. This suspicion is furthered by the fact that his three main figures of socially-engaged analytical philosophy are Ayer, Hart, and Russell. Pigden’s real gripe, then, appears to be that I have not viewed the American story of analytical philosophy as wholly unified with or subordinate to the British one.

I haven’t. It isn’t.

Second, even if the relevant “triumph” had occurred simultaneously in Britain and the United States, there is no reason whatever to think that what brought it about must have been the same in both contexts. The ice cream bar in my freezer and the lions in front of the Chicago Art Institute both have temperatures, as I write, of about 19° Fahrenheit; should I conclude that the lions are in my freezer? No one who has taken intro logic should have to bother with this. I’ll give Leiter a pass on it, as he has been well beyond logic for a long time. Pigden should know better, though.

There may be other arguments in Pigden’s garbled and digressive prose, but I cannot find them. Of more interest (though not much more) is that what exercises Pigden is actually rather different from what infuriates Leiter. Pigden reads my work as, fundamentally, an attack on analytical philosophy, which he views as a unified historical movement. I protest. My books deal with American developments and are in no way an attack on Her Majesty’s Analysts, though as noted above the Brits are given short shrift.

Let me say it as plainly as I can: in my view, analytical philosophy has made important and lasting contributions to philosophy, and its two core values of clarity and rigor are values I try to serve with everything I write—I just define “rigor” differently than analysts do. (I would say that my definition of “rigor” is in fact more rigorous than theirs, but that topic is for another time.) If this gets me dismissed by continentals for being insufficiently “profound”—and it sometimes does—too bad.

What my book does attack is the view that historical success, in philosophy or elsewhere, automatically equals intellectual merit (Donald Trump is a currently favored counterexample). Whether or not analytical philosophy rose to triumph partly as a result of political pressures has, in my view, very little to do with whether it is good philosophy. If I believed that historical success and philosophical merit were in any important way connected, why would I have devoted so much of my life to Hegel? He has certainly had the opposite of a “triumph” in the “pan-Anglophonic” world. That I think this miserable fate is undeserved hardly means that I think Hegel is wholly right.

Similarly in reverse. I think the present degree of dominance of analytical philosophy is in part historically explainable and, also, philosophically undeserved. That doesn’t mean I think the approach has no merit whatever. No one thinks analytical philosophy is perfect as it stands (do they??), and I join with prominent analysts in my criticisms of it (see below). It’s easy and wholesale dismissals that I am against.

Which brings me to Leiter. If you look through all the bloody spume he has puked out against me over the years—go ahead, there’s not that much of it—you will not find him issuing a single detailed citation, quotation, or intelligent engagement with any of my writings (three CHOICE outstanding academic title awards, Brian—maybe you should read more!). For example, he called my 2001 book, Time in the Ditch, riddled with errors. I asked him what they were. (We were briefly on semi-cordial terms, provoked by our common hatred for George Bush—how I long for those days!—for Bush, I mean, not for cordiality with Leiter.) Leiter replied, in an amusingly insouciant email, that he had not read the book. He had heard about it from people who had.

Insouciance? This, as Norman Mailer once wrote, had all the insouciance of a drop of oil sliding down a scallion. There were two reasons why it was so amusing. First, I had double- and triple-checked everything that went into that book, so I was fairly sure that unless Leiter had actual evidence that I was wrong, my points could stand. He had admitted that he had none. Oh, the innocence!

Second, I had made it a point of method that for every single criticism of analytical philosophy I made in that book, I would cite an analytical philosopher. Time in the Ditch, therefore, merely gathers and focuses criticisms of analytical philosophy that analytical philosophers themselves were already making. Check it out.

I’ll finish with Leiter for now with one further question: how did he come to hate me so much if he has never read my stuff?

Now there’s a story! I remember it well. As so often, it has to do with his beloved ranking system for philosophy departments. A couple of decades ago I was asked about it by, I think, lingua franca, an academic newsletter of those days. What I conveyed to them was that I thought it was pretty funny: the idea that experts in their own field, with heavy demands on their teaching and research, would spend any serious time and effort ranking other departments struck me as absurd. Would anyone who believes in their own work spend more than a coffee break per year scrutinizing what was happening at other institutions? Let alone ranking them? Moreover, I thought then that philosophers are wild and crazy intellectual trailblazers, each one acutely sensible of her or his own uniqueness. Who among them would sit still to be ranked against others?

It all just struck me as—well, to quote Pigden (and Leiter), “obviously silly.” When it came out, Leiter lost it and—not for the last time—threatened to sue. Of course he never forgave me. Because with him it’s not about the philosophy; it’s about the rankings. Which means it’s about him. Why does Trump love Putin and hate Kelly? Because of what they say about him. So with Leiter, who shares with Trump the policy of making many vacuous threats of lawsuits (I expect a few by the end of the week).

I close with a word of warning to younger philosophers: I am not alone. Other philosophers, and good ones, are investigating what happened to philosophy, especially post-immigration Logical Positivism, during the Cold War. I won’t drag their names into this putrid fight, but a few Google searches should uncover at least some of them. Their results don’t usually agree fully with mine, but that is the nature of history. We all think that analytical philosophy in the United States has been seriously affected by political pressures, and we are trying to find out how and how far.

So you can neglect Leiter if you wish; his incoherent rages are already dying away into the laughter of forest creatures. But don’t neglect the rest of us and our historical work. Don’t neglect the archival work we have done, or our careful expositions of major texts, or our circumspect tracings of influences. And, of course, don’t accept what any of us says at face value, either. Because this is something really important: the fate of a great philosophical tradition during the heyday of the American Empire. Someday, historians of philosophy are going to ask about that. And they won’t turn to the likes of Leiter for answers.

 

 

 

24. Philosophy and Political Reflection

Of course social context affects philosophy! The society you live in affects how you eat, sleep, travel, marry (or don’t), and everything else you do. Why not your philosophical behavior?

But philosophers are traditionally eager to deny this, to imagine all philosophy as being done, in Peter Hylton’s words, “at a single timeless moment” (Hylton, Russell, Idealism, and the Emergence of Analytic Philosophy, p. vii). Claiming independence from social—and political—pressures is almost a defining characteristic of philosophy.

Philosophers avoid politics primarily, if not solely, by claiming exclusive allegiance to the standards of reason. If what philosophers say is required by universal standards that hold for all cultures and societies, then it can hardly respond to political or social circumstances. Reason buys cultural and political independence.

The problem is that in order for that sort of argument to work, the standards of reason themselves must already have been established: I can hardly defend a conclusion or a topic by claiming that it is what reason demands if I don’t yet know generally what reason demands. So what about our rational standards themselves? Until they have been defined, philosophy is wide open to social, political, and even familial pressures.

At which point the study of the “politics of reason,” the subtitle of both the Philosophy Scare book and this blog, becomes an important and necessary field.

So have the standards of reason been defined? Not fully, and not for all time. Even logic is turning out to be rather protean. And whether logic is coextensive with reason itself is, I think, a lot more open than it is often thought to be. (Hint: a defense of the rationality of dialectics is upcoming on this blog. And if dialectics can be rational, what can’t?)

What is there for a philosopher to do if the very starting point of philosophy, its definition of reason and its concomitant definition of reason’s goal, truth, may be affected by social and political pressures? Answer: reflect on those pressures as best you can, with whatever local tools are available to clarify what I call the “parameters” that constitute your “situation.” Only when you have identified those parameters, and determined their origins and trajectories, can you formulate what you really need: a clear and rigorous definition of reason.

Such reflection, then, is pre-rational. Is it therefore impossible? Ask Nietzsche; his account of the “ascetic ideal” in On the Genealogy of Morality is a paradigm of the genre.

25. Questions of Motive

I have tried (## 29-30) to show how The Philosophy Scare came to be from what went before, i.e. how I came to write it. There are two motivational factors for the book that I would like to underscore a bit further, because there are misapprehensions about them—and so about me.

One thing that motivates me is very traditional—truth. This may be surprising. I’m supposed to be a postmodernist (a thought that angered Habermas enough to end my career in philosophy departments), and postmodernists are supposed to have no truck with truth. But I do truck with it, and trailer as well. I honestly believe that whereas my earlier Time in the Ditch (2001) was suggestive, Scare is definitive. To be sure, “definitive” does not equal “final”; nothing in history is ever final. But I believe that on the basis of present evidence, no one can rationally deny that political pressures played a major role in the development of the UCLA philosophy department—and so, a fortiori, of other departments. For if UCLA, that crystalline bastion of logic, can be affected by politics, so can anyone.

The second factor is something that does not motivate me: contrary to myth (one discreetly bruited right here at UCLA), I do not write from a hatred for analytical philosophy. The truth is, I love the stuff. I have published on Davidson, Quine, and Wittgenstein (though admittedly the later one). My version of Hegel is more like Quine than like any normal version of Hegel. True, I think that analytical philosophy died about 1983 (to be replaced by what I call “mainstream” philosophy). And I am frankly exasperated by the refusal of so many current American philosophers to take responsibility for the political dimensions of their own history. But that exasperation is in their service, for doesn’t denialism tend to increase the power of what is denied?

So I hope that Scare will motivate philosophers to reflect on their position in history—not only on the ahistorical truths they usually seek to purvey, but on their own efforts to obtain such truths and how those efforts are historically situated. (The fact that such efforts are not usually reflected on by philosophers leaves them unknown and unrecognized. Philosophical successes are then chalked up to some mythical and complacent mystery called “natural talent.”)

As Robert Scharff has recently shown (How History Matters to Philosophy, Routledge 2016), the lack of historical reflection in recent American philosophy is not only endemic but constitutive. Even historians of philosophy often write in the present tense, as if Plato or Kant were standing before them, proffering ideas which must be evaluated as if they were first produced five seconds ago.

Which of course they must, but there is more to it: Plato and Kant are not only interlocutors, but ancestors. We are results of their thought, and their intellectual DNA operates in us in ways that are often very difficult to excavate. The same, Scare shows, goes for political creatures like Raymond B. Allen and Joe McCarthy. We are their grandchildren. Hiding this truth, not least from ourselves, has made American philosophy more political, not less (see the Introduction to my On Philosophy, Stanford 2013). This is a fate from which I, along with many others, hope to save it.

 

26. Cold War Philosophy and Education

Lots of people believe that importing business theory into the university blights the institution. I have no doubts about this. Running schools like businesses, especially the way businesses are run these days, with emphasis on the short term and “cost cutting”—i.e. firing people or cutting back their benefits—has resulted mainly in miserable teachers and ignorant students.

But the invasion of schools by Cold War philosophy is not just a matter of structuring educational institutions around the idea that the people who work and learn in them are nothing but utility maximizers. A recent story by Rebecca Klein at The Huffington Post shows how Cold War philosophy also shapes the content of the curriculum. Referring to a report from the Century Foundation, Klein writes:

The American education system has focused on “market values” over “democratic values” for the past several decades…. Rather than preparing students to be responsible members of society, the report argues, schools have chiefly taught them to compete in a global marketplace.

What does it mean to prepare someone to be a “responsible member of society?” Klein writes:

The report argues that students must learn to think critically and make informed decisions … They need to appreciate the factors critical to a functioning democracy, like civil rights.

Education for citizenship, then, is just education. As Aristotle said, in a just society a good citizen is a good human being. Education for citizenship is therefore teaching students to make correct (rational) use of the human mind: how to reason and how to get clear on facts and values.

So in orienting their curricula to market values, what American educators have abandoned is education itself. Why on earth did they do this? Aren’t educators among the first to get hit by market malfunction? When have they ever been rewarded by it? Teachers in the lower grades are at best down-market, watching their pennies disappear as they buy paper and pencils for their students. In higher education, advanced training—a Ph.D.—usually makes you distinctly un-marketable. How could people who suffer from the markets abandon the idea of education for citizenship?

But wait! There is no trade-off here once you accept that the two goals are the same—that educating the mind to function just is educating it to perform correctly in the market. And you will accept this if you accept Cold War philosophy’s view that the human mind itself operates on the principles of the market, as codified in rational choice theory (see Chapter Four of Scare). Then market rationality becomes the only rationality there is, and education’s job is to teach and instill “market values.”

Can Cold War philosophy have gained such purchase on the minds of educators that they don’t see this? Writing an article that appeared in the Chronicle of Higher Education last October suggested to me that it has. Looking at the two main families of arguments in favor of the beleaguered humanities, I realized that both of them assumed that the humanities, to be beneficial at all, had to be of direct benefit to individuals. One family of arguments claims that the humanities can indeed provide the skills necessary to go on the job market—an obvious case of a market curriculum. And the other argues that your life will be more interesting and perhaps even virtuous if you know something about the humanities—which is just an attempt to highlight a particular definition of “utility.”

As the article points out, both arguments are valid, but neither is successful: they are both out there in the “marketplace of ideas,” but resources are not flowing (back) to the humanities because of them. They have accepted, in fact, the premises that are causing the problem. Proving that market rationality tolerates humanistic education is not the same as challenging the claim of market rationality itself to be coextensive with all reason.

 

 

 

MR2: Trump’s Here; Can McCarthyism Come Back?

With the election of Donald Trump and his various appointments of hard line right wingers, people like Rebecca Schuman are wondering if the dark days of the McCarthy era might make a comeback. According to my research, it’s absolutely impossible.

Because McCarthyism never went away.

In 2005, Russell Jacoby listed some of the organizations keeping watch on professors: Campus Watch, Academic Bias, and Students for Academic Freedom were a few. The American Council of Trustees and Alumni, founded in 1995, has long scrutinized universities. It has occasionally listed individual professors to watch out for, such as some who criticized George Bush’s Iraq War. And in 2006, the Bruin Alumni Association went so far as to name little yours truly, not to the actual “Dirty Thirty” list of left-wing faculty members it claimed to expose at UCLA, but to what could be called the Dirty JV: people who expressed left-wing views, but were somehow not as obnoxious as the Thirty themselves.

So McCarthyism hasn’t gone away. But it hasn’t recently had much of a hearing. ACTA retracted its list after an outcry that it was McCarthyistic, and the Bruin Alumni Association vanished with speed. The National Association of Scholars, yet another right-wing watch group (but with more scholarly respectability), now couples general denunciations of newer (post-1968) trends in the humanities with a strangely intense commitment to fossil fuels. Most of the others have been crying in the wilderness, waiting for an opportunity.

Do they have one now?

Today’s situation shows two big differences from the McCarthy era. First, Marxism in those days was not merely a set of ideas, or even a set of ideas that (in Marx’s words) had “gripped the masses” in various countries. It was, and presented itself as, the ideology of a large and disciplined group already in control of several national governments. This, of course, was false; even then, there were plenty of Marxists who were not followers of the Moscow line. But a very powerful apparatus said otherwise: that all true Marxists followed Soviet principles.

The idea that there was a single mass movement of Marxists was thus put forth by Marxists themselves. The current chaos in the Muslim world, with for example Sunni and Shia Muslims fighting one another far more bitterly than the Brezhnevites ever fought the Maoists, does not exhibit that sort of frightening unity.

Second, Karl Marx was steeped in the Western philosophical tradition. Indeed, as he himself argued, there is no way out of Hegel except through him—and as many (including me) have argued, no way out of Kant except through Hegel. So Western philosophy either has to pass through Marx or stay with Kantian or pre-Kantian modes of philosophy (as American philosophers so often do).

Of course, the fact is that Islamic philosophy, like Marxism, is absolutely integral to the Western philosophical tradition. The great Islamic philosophers are just as much forebears of contemporary European thought as Augustine and Maimonides. If the neo-McCarthyites ever figure this out, they will have a new brush with which to tar us all.

In any case, they’re coming. In a future post, I’ll tell you what to watch for.

 

27. Von Hegel zu Trump

In a recent seminar, I was asked what Hegel would think of Donald Trump. I was stumped at the time: how could Hegel, the greatest apostle rationality ever had, even begin to make sense of a man who, having been elected through resentment of the urban elite, went to one of America’s most expensive restaurants one week later, received a standing ovation from its wealthy patrons, told them he was going to lower their taxes, and allowed himself to be filmed doing it?

But in two early essays, The Difference between Fichte’s and Schelling’s System of Philosophy (DF), and Faith and Knowledge (FK; all paginations to Cerf and Harris translations), Hegel provides a clue. Discussing Fichte, he claims that each rational being exists for Fichte in a two-fold way: as free and rational, and as mere matter to be manipulated. This dichotomy is absolute: each side of it is precisely what the other side is not, and it cannot be transgressed (DF 144). Because of the absoluteness of this dichotomy, society must be founded on one principle or the other: the individual must be either mere matter or a free being of infinite worth.

The former case, Hegel goes on, locates reason not in the individual but above her, in the state. As FK puts it, “individuality [then] finds itself under absolute tyranny” (FK 183). The individual sinks under a mass of laws and regulations, each rationally enacted for the greater good of the whole. Such a state, Hegel tells us, is a “machine” (DF 148-9). He seems to have produced a small sketch of the Soviet Union, 115 years before its birth.

The other case will be more familiar to Americans. Seeing each individual as a free being of infinite worth, society grants her the right to determine her own life, no matter how childish, ignorant, ill, or depraved she may be. The only reason for doing anything in this kind of social order is the individual’s personal arbitrary insight, and the only consideration that justifies anything is that someone chose it:

Everything depends on reckoning out a verdict on the preferability of one duty to another and choosing among these conditioned duties according to one’s best insight… In this way self-determination passes over into the contingency of insight and, with that, into unawareness of what it is that decides a contingent insight. (DF 151)

The references to preferences and choosing sound uncannily like rational choice theory, in which individuals rank their preferences and choose among them as they deem best; but here, as in Cold War philosophy, it has been elevated into the basic principle of the social order. Because each individual’s choices are absolute, such a society is incapable of uniting for a sustained attack on any social problem. Because children are individuals, and so free to determine their lives, it is unable even to educate its young, who are allowed to choose what they will learn. Its leaders, secure in their own well-paid individuality, grow sleeker and more self-satisfied as the chaos proliferates, and strut their good conscience in the slogans of the moment.

Sound like anybody? In a previous post, I argued that Donald Trump operates by the maxims of rational choice theory, elevated into the philosophy that I call Cold War philosophy. Now it seems that such philosophy provides a middle term between Hegel, the apostle of reason, and Trump, the apostle of—Trump.

28. Choices About Choice

One pillar of Cold War philosophy is the use of rational choice theory as a fundamental account of human rationality—the equivalent, in weird ways, of Kantian critique. Do I choose to repeal rational choice theory? I do. But no repeal without replacement. So with what might I choose to replace rational choice?

According to rational choice theory, the chooser first establishes a set of feasible alternatives, sequences of events that can be triggered by her action. She ranks them according to how they contribute to her overall utility, and then opts for the highest.

She is thus dissociated from the alternatives themselves. She makes them into alternatives in the first place (by evaluating their “feasibility”), and can then choose or not choose any of them. She can even choose none of them and walk away from the game altogether, because a game is—only a game. A choice is “rational,” then, in the context of a game, i.e. when you are able not to choose any or all of the alternatives.
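For readers who want the mechanism spelled out, here is a minimal sketch, in Python, of the picture of choice just described: list the feasible alternatives, rank them by utility, opt for the highest, with walking away always on the table. The alternatives, the utility numbers, and the function name are my own inventions for illustration, not anything drawn from Scare or from the rational choice literature.

# A minimal sketch of the chooser described above: she lists feasible alternatives,
# ranks them by the utility she assigns to each, and opts for the highest,
# with walking away always available as one more option.

def rational_choice(utilities, walk_away_utility=0.0):
    """Return the highest-utility option; 'walk away' is always on the table."""
    options = dict(utilities)                   # the feasible set she has already established
    options["walk away"] = walk_away_utility    # she stands outside every alternative
    return max(options, key=options.get)        # opt for the highest-ranked alternative

# A drugstore-style example, where nothing about the chooser herself is at stake:
print(rational_choice({"Crest": 0.7, "Colgate": 0.5}))   # prints: Crest

The sketch only makes visible the point already made in the text: the procedure treats the chooser as standing apart from her alternatives, any or all of which she can decline.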

Some choices are indeed game-ish. I leave the drugstore with the toothpaste I chose to buy, and the voting booth having chosen for whom to vote. But viewing choice in terms of games also has limits. Some impressive philosophers thought that it presupposed an abrogation of causality that cannot be explained without heavy metaphysics (Kant) or theology (Augustine, Descartes).  Moreover, our important decisions are sometimes not made that way.

This leads some people to say that they are not made at all. I saw a movie about Le Chambon-sur-Lignon, the French village that over the course of the Nazi occupation managed to hide more Jews than there were people in the village. An elderly couple was asked how they decided to do this, and the wife said, shockingly for me: “on n’a jamais vraiment décidé”: “we never really ‘decided.’” She went on to say that, as French Protestants, they knew what it was to be persecuted.

My Hollywood friend Bernie Gordon had more movie credits restored to his name after the blacklist ended than even Dalton Trumbo. He tried to explain why he decided not to name names, which would have saved him, and he too said there was no such decision: “It just wasn’t something I was going to do” (paraphrased).

These people did not make rational choice decisions. They did not first formulate various alternatives and then opt for one. They simply did what they had to do, once it was clear what that was. Doing that was somehow necessary for them: the old couple had to be French Protestants; Bernie Gordon had to be Bernie Gordon. They were not obliged to do so; they simply could not do otherwise. It’s not that they identified with one of the alternatives before them; they were identified as one of the alternatives.

Here then is another view of choice: that it is the recognition of necessity. The “moment of decision” is when you realize that this is what you have to do, whether there are feasible alternatives to it or not. The “decision” was made before you were aware of it.

This view of choice has an impressive philosophical pedigree: start with Hegel, Spinoza, and Aristotle. It has also attracted notice from neuroscience, especially via the Libet experiments, which are said to show that decisions are made before we are aware of them. There have been numerous attempts to rescue freedom, in the rational choice sense, from Libet’s work. The rescues are sometimes successful: I bought Crest instead of Colgate just the other day. But I do not see how Bernie Gordon could have chosen to name names and still be Bernie Gordon. Neither did he. And those decisions, the ones that bear on who we really are, are the important ones.

So I want to repeal the view that the kind of choice involved in rational choice theory is the only kind, and replace it with a pluralistic theory of different types of choices. I choose to have a choice among theories of choice.

But I would like, someday, to get rid of the rational choice model altogether. I’m worried about the metaphysical lifting it requires—and the political downfalls it brings.

MR2: Cold War Philosophy and Donald Trump

Most people I know are horrified to see elected as their President someone of the unabashed mendacity, racism, xenophobia and misogyny of Donald Trump. How could the mentality that produced him exist in this country at all, let alone be so strong?

One common answer is that America contains a realm of rural and uneducated whites, which we didn’t even know was out there. We thought the racists and misogynists were not a powerful force, merely a moribund fringe. But now it seems that the country contains two very different and very powerful mentalities. Trump represents an invasion from that other realm. Though he himself has a privileged background, his values are from the far backwoods.

But Trump’s very existence suggests that the line between these realms may be fuzzy. Even the most educated of precincts—academia itself—has more than its share of race baiters and p***y grabbers (as we philosophers know all too well). But such people, we like to think, don’t belong here. They are, like Trump, invaders from some Other and darker place.

This, alas, is false. The line between Us, the civilized academicians, and Them, the unwashed outsiders, is not only fuzzy. If you go down one level—i.e., look not at the explicit racism and misogyny but at the underlying mentality that enables them—it doesn’t exist at all.

Cold War philosophy has (so far) had two main stages. The first, treated in Philosophy Scare, saw rational choice theory imported into philosophy, elevating a highly mathematized but at bottom empirical theory of market and voting behavior into a universal philosophy applicable to the human mind itself (think early Rawls). The second stage saw the same thing happen with game theory (think David Lewis). Game theory, too, was elevated into a set of universal doctrines which could not be established empirically and so counted as philosophical: as Cold War philosophy.

In her new book, Prisoners of Reason (Cambridge 2016), S.M. Amadae asks the crucial question:

How did strategic rationality, which typically assumes consequentialism (only outcomes matter), realism (values exist prior to social relations) and hyper-individualism (“other-regarding” signifies viewing others as strategic maximizers like oneself) come to be the only approach to coherent action available to individuals throughout their lives? (p. 147)

Amadae’s question implies that game theory, when elevated into universality (into “the only approach to coherent action”), retains three of its original traits. First, if only outcomes matter then any means are acceptable—even, as Amadae shows elsewhere, promise breaking and lying. Second, the gifts of sociability—trust, fellowship, security of possession and so forth—have no value: value arises only from presocial desires (“preferences”), meaning that anything can be desired irrespective of its social consequences. Third, all human beings, in their social relations, seek only to maximize their own interest: the only reason for engaging with another human being is to persuade or coerce that person to help me achieve my interests—in other words, to dominate that person.

These three principles underlie much of American philosophy and social science today. One place they are explicitly taught, as universal and so philosophical truths, is business schools. They are also the basic principles by which Donald Trump, who went to Wharton, operates. He lies freely, as if he sees nothing wrong in it, and already seems about to break many of his campaign promises. The things he values—mainly, his own ego and p***y—shape his social relations, without being shaped by them in turn—so they preexist them. And his only interpersonal concern, as Josh Marshall has repeatedly noted at Talking Points Memo (http://talkingpointsmemo.com), is to dominate those around him.

In short, my fellow academics, on this level Donald Trump is not one of Them; he is one of Us.

29. Archive Fever

The UCLA archives were open only a couple of days a week, but I was on a mission to find the truth about Raymond B. Allen, leading academic Red hunter and UCLA’s first chancellor. I eventually got in and asked the archivist if she had any material from the 1940s and ’50s concerning the philosophy department or the UCLA chancellor’s office. She brought the department stuff first. In the first box I opened was a stack of letters about an inch thick from the winter and spring of 1947, protesting the hiring of somebody named Max Otto to the prestigious Flint professorship.

Someone had taken the trouble to walk them over from the department to the archives.

The letters were obviously from very conservative people, but they were not the kind of shrill, fact-free yelling we hear today. They were thoughtful and often moving. Otto, a prominent pragmatist, had been identified by the Los Angeles Examiner as an atheist. The letter writers were deeply concerned, even fearful, about having an atheist teaching the youth of California.

Shades, I thought, of Socrates! Some things are truly perennial.

There was a lot of other stuff in the box, including some letters the chair of the department had written in answer to the protestors. They stoutly defended both Otto and the department, and underscored the nature of academic freedom. But there were only about half a dozen of them. Why so few?

One possibility was that in those days every single letter had to be typewritten; there was no putting a basic version in your computer and then editing it for different recipients. The chair, Donald Piatt, had probably gotten tired. His secretary had doubtless gotten tireder.

But there was another possibility: maybe Piatt had just decided that the protest was so trivial that responding was not worth the effort. In that case, the letters showed how impotent right-wing protests were in 1947, in contrast to the later brunt of the McCarthy Era proper, in the early 1950’s.

This turned out to be wrong; in fact the protestors appear, as Scare recounts, to have succeeded in blocking Otto’s appointment. The protest over Otto, moreover, was merely one in a series of events that concerned the teaching of atheism in the philosophy department. These continued, as far as I now know, through the national uproar over Angela Davis, in 1968—over twenty years later.

This illustrates an important feature of archival work. The fact is, when a document or a set of them surfaces in an archive, you have no idea whatsoever—zero—what it means until you see what came before and after it. So the temporal sequence in which an expression stands determines its meaning. Put that in your logic book and smoke it!

30. A Shocking Discovery (for a philosopher)

I wrote my first book on McCarthyism and philosophy, Time in the Ditch (2001), because I thought the victims of the postwar Red Scare should be memorialized, and its ongoing effects identified. But wasn’t one book on the topic enough? How did I come to write The Philosophy Scare?

Time in the Ditch assembled such evidence as was then available concerning political pressures on philosophy during the McCarthy Era, and suggested that philosophers needed, in its words, to “break their strange silence” about what had happened. It was therefore not a definitive treatment; it aimed merely to provoke discussion. There was some of that, most notably among philosophers of science. When it died down, I got back to my real work, which is that of an historian of philosophy who takes seriously the fact that philosophy went somewhere new and different after Kant—not just to the gutted a priori of the logical positivists.

But around ten years ago, something happened. I was in my office at UCLA, enduring another lonely office hour. Office hours make it impossible for me to do serious work, because someone may always come in; but no one ever does, unless it is just before a major assignment is due. So you’re all by yourself. What to do?

I was, of course, surfing the Web. The California sun was shining in my windows, the UCLA Marching Band was practicing down in Sunset Canyon, and all was peaceful, though rather loud. Until I got to a web page about the history of UCLA. It had a few paragraphs about UCLA’s first chancellor, who took office in 1952 but is, I learned, kind of a secret: he is not commemorated anywhere on campus because he basically fled office seven years later just ahead of a football scandal. His name almost knocked me out of my chair: Raymond B. Allen.

I knew about Allen, or thought I did—I had written about him in Time in the Ditch. He was academic America’s leading Red hunter during the early Cold War. As president of the University of Washington in 1948-49, he orchestrated the firing of several Communist professors, most prominently the philosopher Herbert Phillips. A year or two after that, Allen left the University to become the director of a Cold War think tank. I had always assumed this meant that he had been hounded out of academia by professors irate at his high-handed violations of academic freedom. Not so: at his last faculty meeting at Washington, they had given him a standing ovation. He spent only about six months at the think tank before assuming the chancellorship of UCLA.

The question was too obvious to ignore: Was there any connection between Allen’s fame as a Red hunter and his becoming the first chancellor of UCLA?

My next stop was the UCLA Archives.