21. Time, Trump, and Aristotle Part I

It should not be surprising that Trump and some of the people he has put into his cabinet appear to be narcissistic fools—narcissistic foolery is an occupational disease of billionaires and generals when they forget they’re being kowtowed to by absolutely everybody, and think instead that they’re being treated as friends or even told the truth. But foolishness, by definition, is not understood by fools; you need some smarts and, sometimes, a good deal of background. In order to understand the Trumpian crop fully, for example, you have to know a good bit about the central books of Aristotle’s Metaphysics (VII–IX). The following is therefore a bit abstruse—but as I like to say, the most concrete struggles can require the most abstract thinking.

In the Metaphysics, Aristotle unpacks the nature of Being in terms of ousia—of form in matter. As I argued many years ago in my Metaphysics and Oppression, in natural beings form is active, and exercises a threefold domination over matter: it separates a chunk of it off from other matter (boundary); generates and/or orders everything that goes on within those boundaries (disposition); and controls the exchanges between the being thus constituted and the world outside (initiative). Being itself thus comes to exhibit a two-level structure of leader and led, oppressor and oppressed. This structure, I argued, has been basic not only to Western thought but to Western life ever since Aristotle formulated it (and also, less consciously, before). Both sovereignty and freedom, for example, tend to be conceived on its basis, and it has provided the model for many different types of social organization in the Western world: families, schools, the Roman Empire, the French railway system, bourgeois households, and more.

Including corporations and armies. A military commander exercises (though in the US under civilian leadership) nearly complete control over the activities of a closely defined set of people. A corporation, too, has a set of boundaries that divide what it owns and whom it employs from what other “legal persons” own and employ; it has a CEO who (though nominally overseen by a board of directors) organizes both what happens within it and the marketing of its products, i.e. their sale to the outside world. While modern corporations and armies have many ramifications and complexities undreamed of in Aristotle’s time, their basic lineaments come right out of his Metaphysics. They are, we may say in his name, beings par excellence.

Modern corporations prove this by a paradox: despite the fact that the stock market (and corporate valuations in general) does much better under Democratic presidents, CEOs today are overwhelmingly Republican. This, if you think about it, is bizarre: what CEOs (and members of boards of directors, for that matter) supposedly want, above all else, is to make money. So they should clearly prefer Democrats!

But they don’t. So what is going on? Maybe they don’t really want money. Listen to them: the high taxes under Democratic administrations bother them, to be sure—but what really drives them berserk is government regulation. Indeed, what actually bothers them about high taxation is often not that it thins their personal checkbooks (the money would go first to stockholders anyway), but that paying taxes keeps them from doing certain things they want to do with the business, mainly having to do with expanding it. What business leaders really want, then, is to be able to control their own corporations as an ousiodic form controls its matter, without interference from outside or resistance from below. So money is not at the forefront of their aspirations. If it were, they would all be Democrats. They are trying to fulfill the demands of ousiodic structure, not of their stockholders.

There is, however, one very untraditional fact about the people Trump has put into place to oversee the American government, and it has to do with the modernity of their education. Traditionally—for Plato as well as for Aristotle—to be a form meant to be specific. Over and above the human form, for example, you had in a human being only the relatively undefined human matter—the physical constitution of the human being, not all that different from that of other animals. It was thus up to the form to provide, and so to exemplify, the characteristic features of the being of which it was the form. Translated into the panoply of ousiodic institutions and practices of the Western world, this meant that leadership status was not transferable: take the pater of one family and put him in control of another family, and disaster would ensue. The same went for all institutions: the leadership role was, like form itself, specific to the institution.

Modern leaders, by contrast, have been selected for leadership positions in accordance with the basic premises of Cold War philosophy. And Cold War philosophy defines leadership in terms of rational choice: to lead a group or institution is to make decisions for it (George W. Bush, when president, actually referred to himself as the “decider”). Making decisions rationally is a skill transferable from one institution to another, as we see today in the steady migrations of CEOs from company to company. The result is that the leader is no longer bound to his institution: he is free to leave and find another enterprise to lead. Leaving was much more difficult when the skills involved in leadership were specific to the organization.

How does this apply to Trump and his lads? See § 22.

22. Time, Trump, and Aristotle Part II

Even if the people with whom Donald Trump has filled his cabinet are narcissistic fools (as I suggest in § 21), we must give them their due: they are narcissistic fools who have mastered leadership skills that apply far beyond any one institution. Those skills are the ones involved in making decisions according to the tenets of rational choice theory. Absolute confidence in them is basic to Trump and his world, which is wholly predicated on the idea that the skills needed to build and run a business can be transferred smoothly to everything else. This is lunacy; if Cold War philosophy had not accustomed us all to think that the skills involved in rational choice management are the only skills the rational mind has to offer, no one would accept it.

If we follow Aristotle a bit farther, we see not only that it will not work, but how it will fail. For according to Aristotle, the leadership of the people Trump is bringing to Washington will, in time, fall victim to time itself: “Time,” as he puts it, “is the enemy of ousia.” The passage of time alone destroys ousiodic structure, the kind whose leading positions the Trumpians are trained to occupy.

Why? Because time for Aristotle is not the kind of abstract and benign ticking-away that it is for Newton. It is, cryptically to be sure, the measure (arithmos) of motion (kinesis). Motion, for its part, is “the actuality of potentiality quâ potentiality.” This even more cryptic phrase can be understood by contrast with what Aristotle thinks is the more basic case, the conversion of potentiality to actuality. In this sort of actualization, something that is not yet comes to be: the cut wood is now potentially a house, and when it is put together it will no longer be potentially a house, but actually one. In between those states of the world, however, there is a sequence of states in which that goal, the built house, is already having effects in the world: it is directing the movements of the people building the house. In that sense it becomes actual while remaining potential, for the house is not yet built. To focus on something as a goal is thus to “actualize it quâ potentiality.”

This applies to all motion for Aristotle because all motion for him is basically goal-directed: seeds grow into plants in order to fulfill their natures, and stones fall to earth (in his view) in order to fulfill theirs. We can circumvent this extravagant teleology by noting that no individual motion can go on forever, and so each has an end point of some sort; talk of that end point as “fulfilling” some nature or other is unnecessary. We can still say that any currently existing motion is “potentially” at its end point, and only when it gets there will the motion be complete: only then will it be a motion organized enough to be measured. Its measure is time, which presupposes this sort of organized motion. Beneath such organized motions we do not find stasis for Aristotle. We find, as for Plato and Hegel, a sort of entropic pullulating.

Organized motion kills ousia because to say that the material components of an ousia are in motion is to say that they have end points of their own. Not all of these can be imposed by an organizing form. In particular, the subordinate members of a social organization never do only what the managers tell them to do; being human beings, they have all sorts of other plans, goals, and motivations as well. The pursuit of each of these moves the organization, or part of it, in ways not determined by its form. It therefore constitutes a weakening of the dispositional authority of the form: it diverts energy, we may say, away from the central directives even if it does not explicitly contest them. Time itself, the measure of motion, thus weakens ousiodic structure. Hence no form in matter can last forever for Aristotle; time itself destroys it.

It is these weakenings, moreover, that make it necessary for form to be particular. The master of the slave Aristoxenus never tells him to have any task completed within an hour after lunchtime, because he knows that Aristoxenus, being elderly, falls asleep after lunch. Nor can said master rely on advisors to tell him about Aristoxenus, because maybe they, too, are elderly enough that certain things escape them. The kind of “specificity” required of the leader of a social organization is the kind provided by what Aristotle calls syzên, living together.

This kind of specificity is denied by the modern theory of corporate leadership, which incorporates the many idealizations of Cold War philosophy, some of which I mention in Chapter Four of Scare. Just as rational choice theory presupposes perfect information on the part of the chooser, so this view of leadership presupposes perfect obedience on the part of one’s subordinates.

Alas for the theory, nothing human is perfect: orders, plans, and policies get lost in the diverse complexities of human goal-pursuing, and that is no occasional accident. It follows from the very nature of time.

So it will be for Trump’s cabinet picks: in time, their schemes will fall apart, not only as the Washington “swamp” seeks to subvert them but, more irresistibly, as the individuals who are supposed to realize them simply do other things, like go to sleep. For a while Trump’s cabinet picks, like the billionaires and generals they used to be, may be sheltered from knowing about this by their subordinates. But civilian governance has outside scrutiny that armies and corporations don’t, and news of the failures will eventually come out. At which time the supposed transferability of leadership skills will come into play—and the leaders, frustrated and humiliated, will leave.

How long will it take? How long will the Trumpians last, issuing orders that are neither obeyed nor disobeyed, and formulating strategies that don’t exactly go awry but don’t work as intended either? And how many lives will be destroyed in this process?

Aristotle, I’m afraid, does not tell us.

23. McCarthyism and Philosophy: Strategies of Denial

There are a few people—more than a few, actually—who would like to deny that the domestic tumult of the early Cold War caused permanent changes in American philosophy. There are many ways to do this, but it’s harder than it looks. Here are a few hints.

First, you can go whole hog and claim that the years after World War II saw no significant rise in anti-Communism in America: the McCarthy Era is a left-wing fiction. No one I know of does this, even on the Right, because it is delusional. The volume of research on the McCarthy Era is vast and growing. Plus, there are people alive today who remember it. I am one of them.

Other strategies of denial seem saner—until you think about them. They all involve admitting that McCarthyism was real but limiting its effects either in duration or in scope. On the temporal side, for example, you can say that McCarthyism didn’t last long enough to be a serious political force (it went quiet on campus around 1960). It was merely an unhappy blip, quickly rectified.

While less loony-sounding than the whole-hog approach, this one also ignores salient facts. McCarthyism originated in the Cold War and is often dated from President Truman’s speech of March 12, 1947, which awakened fear of domestic subversion to gear Americans up for our Cold War intervention in Greece. “McCarthyism” is thus not an independent phenomenon, but merely a popular (and reassuring) name for the first phase of the home front in the Cold War. Though anti-Communism did lower its volume around 1960, the Cold War persisted, and this did not signify a return to normal. It just meant that no Communists were left to fight. Things have largely stayed that way: for better or worse, American radicalism is mainly concerned with identity, not class.

The denialist might also try various scope-limitations: claiming that while the domestic pressures of the early Cold War were strong and proved enduring, they spared certain institutions. But which? Again, a vast body of established fact shows that American universities were heavily attacked. Cold War defense funding still drives much of their research, and the philosophical assumptions that support that funding still drive many other fields (again, see Scare).

So how about conceding the strength and staying power of Cold War political pressures on universities, but claiming that they somehow spared philosophy departments? Believing this would require an insouciance worthy of a Brian Leiter (if anyone pursues this type of denialism, it will probably be Leiter or one of his acolytes, if any are left). That is because the intersection of the Cold War and philosophy departments is where Leiter’s rubber meets his road. His laudably left-wing instincts push him to recognize the damage done to American society by right-wing forces—but if philosophy departments themselves were seriously hit, Leiter’s beloved departmental rankings would be conducted by a post-purge generation, and so might well be skewed.

But why would philosophy departments have been spared? Was it because they are (or are perceived to be) simply too stultifyingly trivial to be of interest to players in the political world? That may be true today. (I heard it often enough from my teachers in the ’60s—now I know why.) But facts get in the way again. As Scare establishes, philosophy departments were in fact prime targets of right-wing forces in the early Cold War because of their propensity to teach atheism. (Besides, if philosophy is so stultifyingly trivial, why is anyone doing it?)

If philosophy departments were front and center and so offered no protection, denialists must turn to individual philosophers—to the really important ones who shaped the future of the discipline—and claim that they were somehow spared in spite of being in philosophy departments. But this ignores the fact that, as I show in Scare, several of those Great Men, people like Quine, Davidson, and eventually David Lewis, just happened to incorporate elements of what I call Cold War philosophy, the anti-Communist ideology of the time, into their philosophy.

Is this an accident? Did they do it to gain protection? Or, having done it for what they considered to be philosophical reasons, did they find it helped their careers? The last is most likely. But it remains a fact that philosophers who in those days turned to other paradigms, such as phenomenology or class analysis, didn’t have skyrocketing careers like those guys.

Only one strategy of denial now seems open: accept all the facts showing that the political pressures of the early Cold War affected American philosophy, and permanently, but claim that this was a good thing. Didn’t it chase all sorts of charlatanry out of the discipline?

This strategy allows one to have one’s cake and eat it too: It affords a standpoint from which to condemn difficult thinkers like Hegel, Heidegger, Derrida and Foucault without bothering to read them—and if one’s ignorance is discovered, one can always claim that one has been attacked by charlatans.

But this, alas, supposes either that the discipline of philosophy could not in time have cleansed itself, or that it would have taken too long to do so. If philosophers cannot eliminate charlatanry from their own ranks, their discipline itself is very close to being fraudulent. And when did philosophers ever invest in speed? We’re still trying to figure out exactly where Plato went wrong, and it obviously takes decades to get the McCarthy Era right.

Maybe there is some way beyond these to deny the main thesis of Scare, but I don’t know what it would be. That won’t stop the denialists, of course. Facts are facts, but you can’t tell that to some people.

MR 3: Leiter and me

In some dank corner of the philosophical forest lies Brian Leiter, mortally wounded and yet unable to die. From his delirious lips come hacking sobs and tortured moans, hopeless cries and senseless screams. Sometimes he just babbles, as if recounting good days gone by, or seeking help. Or, most of all and always, seeking attention. None comes, but on he babbles. And every now and again, filtered through the undergrowth, you might hear something that sounds like my name.

He did it again today—Dec. 14, 2016—at his Leiter Reports blog. Like his other posts concerning me, this one betrays not the slightest evidence of having read anything I wrote: neither my 2001 Time in the Ditch (Northwestern University Press), which suggests possible political explanations of the triumph of analytical philosophy, nor my 2016 The Philosophy Scare (University of Chicago Press), which gives a much more definitive treatment of developments at one American university during that time.

(I consider a philosophical approach to have “triumphed” over another approach, by the way, when many of its adherents are not embarrassed to have no serious knowledge of that other approach. This is not necessarily a bad thing. Do we all need serious knowledge of Hermes Trismegistus? No. Do we need to reopen the issue from time to time? Yes.)

Leiter’s post consists entirely of a quotation from Charles Pigden, of Otago (New Zealand), prefaced by an assurance that Pigden is correct and followed by a dig at the brilliant Babette Babich.

Pigden is not correct. His post is, it seems, rather quickly written, to the point that his argument (I can find only one) is hard to discern. (He states, for example, that the “triumph” of analytical philosophy was a “global” phenomenon, then spends half a paragraph taking that back. Did someone steal his “delete” key?) In any case, I do not seek to give an “American explanation” for a “global” or even a “pan-Anglophonic” phenomenon (as Pigden comes to call the triumph in question), for a couple of reasons:

First, the American triumph did not occur in the Forties. As my 2016 book shows, pragmatism was viable in the United States at least through the early Fifties. Its indispensable anthology, Naturalism and the Human Spirit, often called the “Columbia Manifesto,” was published in 1944, and a sixth edition came out in 1969.

It is, perhaps, analytical philosophy’s triumph over the British Hegelians that can be dated to around 1940, but I wouldn’t know: that happened in Britain. In the US, where British Hegelians were not easily found, the main enemies were idealism (of a Roycean kind) and pragmatism. So if Pigden thinks that analytical philosophy triumphed in “pan-Anglophonia” in the Forties, he has done what he accuses me of doing: generalizing from his own national/cultural context to the rest of the world. This suspicion is furthered by the fact that his three main figures of socially engaged analytical philosophy are Ayer, Hart, and Russell. Pigden’s real gripe, then, appears to be that I have not viewed the American story of analytical philosophy as wholly unified with or subordinate to the British one.

I haven’t. It isn’t.

Second, even if the relevant “triumph” had occurred simultaneously in Britain and the United States, there is no reason whatever to think that what brought it about must have been the same in both contexts. The ice cream bar in my freezer and the lions in front of the Chicago Art Institute both have temperatures, as I write, of about 19° Fahrenheit; should I conclude that the lions are in my freezer? No one who has taken intro logic should have to bother with this. I’ll give Leiter a pass on it, as he has been well beyond logic for a long time. Pigden should know better, though.

There may be other arguments in Pigden’s garbled and digressive prose, but I cannot find them. Of more interest (though not much more) is that what exercises Pigden is actually rather different from what infuriates Leiter. Pigden reads my work as, fundamentally, an attack on analytical philosophy, which he views as a unified historical movement. I protest. My books deal with American developments and are in no way an attack on Her Majesty’s Analysts, though as noted above the Brits are given short shrift.

Let me say it as plainly as I can: in my view, analytical philosophy has made important and lasting contributions to philosophy, and its two core values of clarity and rigor are values I try to serve with everything I write—I just define “rigor” differently than analysts do. (I would say that my definition of “rigor” is in fact more rigorous than theirs, but that topic is for another time.) If this gets me dismissed by continentals for being insufficiently “profound”—and it sometimes does—too bad.

What my book does attack is the view that historical success, in philosophy or elsewhere, automatically equals intellectual merit (Donald Trump is a currently favored counterexample). Whether or not analytical philosophy rose to triumph partly as a result of political pressures has, in my view, very little to do with whether it is good philosophy. If I believed that historical success and philosophical merit were in any important way connected, why would I have devoted so much of my life to Hegel? He has certainly had the opposite of a “triumph” in the “pan-Anglophonic” world. That I think this miserable fate is undeserved hardly means that I think Hegel is wholly right.

Similarly in reverse. I think the present degree of dominance of analytical philosophy is in part historically explainable and, also, philosophically undeserved. That doesn’t mean I think the approach has no merit whatever. No one thinks analytical philosophy is perfect as it stands (do they??), and I join with prominent analysts in my criticisms of it (see below). It is easy, wholesale dismissals that I am against.

Which brings me to Leiter. If you look through all the bloody spume he has puked out against me over the years—go ahead, there’s not that much of it—you will not find him issuing a single detailed citation, quotation, or intelligent engagement with any of my writings (three CHOICE Outstanding Academic Title awards, Brian—maybe you should read more!). For example, he called my 2001 book, Time in the Ditch, riddled with errors. I asked him what they were. (We were briefly on semi-cordial terms, provoked by our common hatred for George Bush—how I long for those days!—for Bush, I mean, not for cordiality with Leiter.) Leiter replied, in an amusingly insouciant email, that he had not read the book. He had heard about it from people who had.

Insouciance? This, as Norman Mailer once wrote, had all the insouciance of a drop of oil sliding down a scallion. There were two reasons why it was so amusing. First, I had double- and triple-checked everything that went into that book, so I was fairly sure that unless Leiter had actual evidence that I was wrong, my points could stand. He had admitted that he had none. Oh, the innocence!

Second, I had made it a point of method that for every single criticism of analytical philosophy I made in that book, I would cite an analytical philosopher. Time in the Ditch, therefore, merely gathers and focuses criticisms of analytical philosophy that analytical philosophers themselves were already making. Check it out.

I’ll finish with Leiter for now with one further question: how did he come to hate me so much if he has never read my stuff?

Now there’s a story! I remember it well. As so often, it has to do with his beloved ranking system for philosophy departments. A couple of decades ago I was asked about it by, I think, Lingua Franca, an academic magazine of those days. What I conveyed to them was that I thought it was pretty funny: the idea that experts in their own field, with heavy demands on their teaching and research, would spend any serious time and effort ranking other departments struck me as absurd. Would anyone who believes in their own work spend more than a coffee break per year scrutinizing what was happening at other institutions? Let alone ranking them? Moreover, I thought then that philosophers are wild and crazy intellectual trailblazers, each one acutely sensible of her or his own uniqueness. Who among them would sit still to be ranked against others?

It all just struck me as—well, to quote Pigden (and Leiter)—“obviously silly.” When it came out, Leiter lost it and—not for the last time—threatened to sue. Of course he never forgave me. Because with him it’s not about the philosophy; it’s about the rankings. Which means it’s about him. Why does Trump love Putin and hate Kelly? Because of what they say about him. So with Leiter, who shares with Trump the policy of making many vacuous threats of lawsuits (I expect a few by the end of the week).

I close with a word of warning to younger philosophers: I am not alone. Other philosophers, and good ones, are investigating what happened to philosophy, especially post-immigration Logical Positivism, during the Cold War. I won’t drag their names into this putrid fight, but a few Google searches should uncover at least some of them. Their results don’t usually agree fully with mine, but that is the nature of history. We all think that analytical philosophy in the United States has been seriously affected by political pressures, and we are trying to find out how and how far.

So you can neglect Leiter if you wish; his incoherent rages are already dying away into the laughter of forest creatures. But don’t neglect the rest of us and our historical work. Don’t neglect the archival work we have done, or our careful expositions of major texts, or our circumspect tracings of influences. And, of course, don’t accept what any of us says at face value, either. Because this is something really important: the fate of a great philosophical tradition during the heyday of the American Empire. Someday, historians of philosophy are going to ask about that. And they won’t turn to the likes of Leiter for answers.

24. Philosophy and Political Reflection

Of course social context affects philosophy! The society you live in affects how you eat, sleep, travel, marry or don’t—and everything else. Why not your philosophical behavior?

But philosophers are traditionally eager to deny this, to imagine all philosophy as being done, in Peter Hylton’s words, “at a single timeless moment” (Hylton, Russell, Idealism, and the Emergence of Analytic Philosophy, p. vii). Claiming independence from social—and political—pressures is almost a defining characteristic of philosophy.

Philosophers avoid politics primarily, if not solely, by claiming exclusive allegiance to the standards of reason. If what philosophers say is required by universal standards that hold for all cultures and societies, then it can hardly respond to political or social circumstances. Reason buys cultural and political independence.

The problem is that in order for that sort of argument to work, the standards of reason themselves must already have been established: I can hardly defend a conclusion or a topic by claiming that it is what reason demands if I don’t yet know generally what reason demands. So what about our rational standards themselves? Until they have been defined, philosophy is wide open to social, political, and even familial pressures.

At which point the study of the “politics of reason,” the subtitle of both the Philosophy Scare book and this blog, becomes an important and necessary field.

So have the standards of reason been defined? Not fully, and not for all time. Even logic is turning out to be rather protean. And whether logic is coextensive with reason itself is, I think, a lot more open than it is often thought to be. (Hint: a defense of the rationality of dialectics is upcoming on this blog. And if dialectics can be rational, what can’t?)

What is there for a philosopher to do if the very starting point of philosophy, its definition of reason and its concomitant definition of reason’s goal, truth, may be affected by social and political pressures? Answer: reflect on those pressures as best you can, with whatever local tools are available to clarify what I call the “parameters” that constitute your “situation.” Only when you have identified those parameters, and determined their origins and trajectories, can you formulate what you really need: a clear and rigorous definition of reason.

Such reflection, then, is pre-rational. Is it therefore impossible? Ask Nietzsche; his account of the “ascetic ideal” in On the Genealogy of Morality is a paradigm of the genre.

25. Questions of Motive

I have tried (§§ 29–30) to show how The Philosophy Scare came to be from what went before, i.e. how I came to write it. There are two motivational factors for the book that I would like to underscore a bit further, because there are misapprehensions about them—and so about me.

One thing that motivates me is very traditional—truth. This may be surprising. I’m supposed to be a postmodernist (a thought that angered Habermas enough to end my career in philosophy departments), and postmodernists are supposed to have no truck with truth. But I do truck with it, and trailer as well. I honestly believe that whereas my earlier Time in the Ditch (2001) was suggestive, Scare is definitive. To be sure, “definitive” does not equal “final”; nothing in history is ever final. But I believe that on the basis of present evidence, no one can rationally deny that political pressures played a major role in the development of the UCLA philosophy department—and so, a fortiori, of other departments. For if UCLA, that crystalline bastion of logic, can be affected by politics, so can anyone.

The second factor is something that does not motivate me: contrary to myth (one discreetly bruited right here at UCLA), I do not write from a hatred for analytical philosophy. The truth is, I love the stuff. I have published on Davidson, Quine, and Wittgenstein (though admittedly the later one). My version of Hegel is more like Quine than like any normal version of Hegel. True, I think that analytical philosophy died about 1983 (to be replaced by what I call “mainstream” philosophy). And I am frankly exasperated by the refusal of so many current American philosophers to take responsibility for the political dimensions of their own history. But that exasperation is in their service, for doesn’t denialism tend to increase the power of what is denied?

So I hope that Scare will motivate philosophers to reflect on their position in history—not only on the ahistorical truths they usually seek to purvey, but on their own efforts to obtain such truths and how those efforts are historically situated. (The fact that such efforts are not usually reflected on by philosophers leaves them unknown and unrecognized. Philosophical successes are then chalked up to some mythical and complacent mystery called “natural talent.”)

As Robert Scharff has recently shown (How History Matters to Philosophy, Routledge 2016), the lack of historical reflection in recent American philosophy is not only endemic but constitutive. Even historians of philosophy often write in the present tense, as if Plato or Kant were standing before them, proffering ideas which must be evaluated as if they were first produced five seconds ago.

Which of course they must, but there is more to it: Plato and Kant are not only interlocutors, but ancestors. We are results of their thought, and their intellectual DNA operates in us in ways that are often very difficult to excavate. The same, Scare shows, goes for political creatures like Raymond B. Allen and Joe McCarthy. We are their grandchildren. Hiding this truth, not least from ourselves, has made American philosophy more political, not less (see the Introduction to my On Philosophy, Stanford 2013). This is a fate from which I, along with many others, hope to save it.

26. Cold War Philosophy and Education

Lots of people believe that importing business theory into the university blights the institution. I have no doubts about this. Running schools like businesses, especially the way businesses are run these days, with emphasis on the short term and “cost cutting”—i.e. firing people or cutting back their benefits—has resulted mainly in miserable teachers and ignorant students.

But the invasion of schools by Cold War philosophy is not just a matter of structuring educational institutions around the idea that the people who work and learn in them are nothing but utility maximizers. A recent story by Rebecca Klein at The Huffington Post shows how Cold War philosophy also shapes the content of the curriculum. Referring to a report from the Century Foundation, Klein writes:

The American education system has focused on “market values” over “democratic values” for the past several decades…. Rather than preparing students to be responsible members of society, the report argues, schools have chiefly taught them to compete in a global marketplace.

What does it mean to prepare someone to be a “responsible member of society”? Klein writes:

The report argues that students must learn to think critically and make informed decisions … They need to appreciate the factors critical to a functioning democracy, like civil rights.

Education for citizenship, then, is just education. As Aristotle said, in a just society a good citizen is a good human being. Education for citizenship is therefore teaching students to make correct (rational) use of the human mind: how to reason and how to get clear on facts and values.

So in orienting their curricula to market values, what American educators have abandoned is education itself. Why on earth did they do this? Aren’t educators among the first to get hit by market malfunction? When were they ever rewarded by it? Teachers in the lower grades are at best down-market, watching their pennies disappear as they buy paper and pencils for their students. In higher education, advanced training—a Ph.D.—usually makes you distinctly un-marketable. How could people who suffer from the markets abandon the idea of education for citizenship?

But wait! There is no trade-off here once you accept that the two goals are the same—that educating the mind to function just is educating it to perform correctly in the market. And you will accept this if you accept Cold War philosophy’s view that the human mind itself operates on the principles of the market, as codified in rational choice theory (see Chapter Four of Scare). Then market rationality becomes the only rationality there is, and education’s job is to teach and instill “market values.”

Can Cold War philosophy have gained such purchase on the minds of educators that they don’t see this? Writing an article that appeared in the Chronicle of Higher Education last October suggested to me that it has. Looking at the two main families of arguments in favor of the beleaguered humanities, I realized that both of them assumed that the humanities, to be beneficial at all, had to be of direct benefit to individuals. One family of arguments claims that the humanities can indeed provide the skills necessary to go on the job market—an obvious case of a market curriculum. And the other argues that your life will be more interesting and perhaps even virtuous if you know something about the humanities—which is just an attempt to highlight a particular definition of “utility.”

As the article points out, both arguments are valid, but neither is successful: they are both out there in the “marketplace of ideas,” but resources are not flowing (back) to the humanities because of them. Both arguments, in fact, accept the premises that are causing the problem. Proving that market rationality tolerates humanistic education is not the same as challenging the claim of market rationality itself to be coextensive with all reason.