Not so long ago David Denby wrote in the New Yorker that “the word ‘humanist’ has a slightly moldy sound.”1 He was spraying verbal air-freshener to clear the atmosphere for his praise of Iranian film director Abbas Kiarostami, who “redeems humanism by combining it with circulating formal play”—whatever that means. But regardless of his reason for doing so, Denby aptly expressed current sentiment about humanism among cultivated people: Humanism seems a vague, well-intentioned sort of thing but is very uncool, almost embarrassing.

This ambiguous distaste for humanism is inherited partly from the revulsion created by World War I against 19th-century high-mindedness (as with Wilfred Owen on “the old Lie: Dulce et Decorum est/Pro patria mori”), and also from the Nietzschean revolution in thought, which gave that revulsion some philosophic depth. These come together, for example, in Sartre’s famous diatribe in Nausea (the epitome of serious cool for postwar college students) against the serene humanist who, faced with headstrong, angry people who have real passions, “digests all their violence and worst excesses; he makes a white, frothy lymph of them.” There is already something of Denby’s condescending tone in Thomas Mann’s treatment of Settembrini, the fiery spokesman for liberal humanism in The Magic Mountain. He’s a good guy in the end, but a somewhat comical lightweight compared to the seductive Naphta and the charismatic Peeperkorn. If humanism functions today as a kind of well-worn security blanket for those who prefer their nihilism with a human face, it doesn’t fare well as a concept among serious scholars either. Thus Vito R. Giustiniani, the intellectual historian of the Renaissance, demonstrates that “humanism” has meant widely different things.
He lists variants from the study of “the resurgent classical culture” or even “what is now called literature” to a 19th-century (the period when the term itself was coined) “philosophy of man” as set out in Renan, Feuerbach and Marx. His melancholy account, which finds room even for John Dewey and Corliss Lamont, ends by pronouncing that “God’s curse still rests on a term which should define the very essence of God’s most perfect creature.”2 Conceptually cursed, intellectually diffuse and socially dubious, why not just jettison the term altogether, and maybe even the underlying concept of “man” itself? Derrida and Foucault were all for that.3 And why not, if its many meanings in the end mean nothing? Not so fast. Humanism, whatever its varied history and ambiguous or disputed origins, has always had a central concern, epitomized by its central question, that should matter to us: What does it mean, what is it like, to be a human being? That question, asked today, hauls in its train all the problems of self-limitation and self-transcendence that are assuming practical urgency in an age of biotechnology, advanced robotics and the specter of neo-eugenics. Are there any natural limits to what it means to be human, or are we indefinitely self-transcending, self-creating beings that we call human after the fact? Is the definition of “human” to be like Humpty Dumpty’s definition of “glory”, which “means just what I choose it to mean—neither more nor less”? Or is it perhaps to be found somewhere between the static and the infinitely malleable? If so, where? And who gets to decide? We are, in fact, debating the essence of humanism today even as the term lies in disrepute. On the one side are libertarian technophiles who talk of freedom as the ultimate end, but who clearly carry in their back pockets a flask filled with the old liberal elixirs of freedom to seek pleasure and tolerance of the pleasures of others.
On the other side are those like Leon Kass and Francis Fukuyama, and the traditionally religious, who seek support for some notion of limitation as essential to human life. Kass’s much-derided appeal to the “wisdom of repugnance” is an attempt to mine and bring to the surface some bedrock awareness of what humanity means. Both of these approaches are problematic, however. For many libertarian technophiles, liberal ends no longer rest on a liberal account of human nature, of natural rights, much less on natural law. Freedom becomes instead a quasi-Nietzschean “value”, subjective and free-floating, if only because we could in principle be free to make ourselves (or at least some of us) into beings that enjoy slavery, or that are ready to live as servo-mechanisms for the ends of others. True, the Kantian tradition has supported freedom for higher reasons than mere natural right, but only and always as conceived in relation to some telos, some intrinsic purpose tied to human nature (for Kant, human reason). Thus, the second term of libertarian technophilia undercuts the first: If there is no natural basis for rights, there can be no standards for the use of freedom. But this isn’t good news for the technophiles’ more traditionalist opponents. Where postmodernists like Foucault and Derrida, following Nietzsche, had called the concept of man into question in highly theoretical ways, technology now calls it into question in potentially very practical ways. Thus Fukuyama has heavily qualified his end-of-history thesis because human beings soon may have the technical capacity to make themselves into whatever they want: “We are on the brink of new developments in science that will, in essence, abolish what Alexandre Kojève called ‘mankind as such’.”4 Traditionalists can no longer confidently deny the possibility of certain utopian projects by pointing to the natural limits of human nature.
Indeed, it is reasonable for them to worry that their pleas to respect limits are liable to be steamrolled by applied science before the very words can find their page. So far, the libertarians seem to be winning over public opinion. A testy exchange between Kass and Harvard evolutionary psychologist Steven Pinker in Commentary illustrates part of the reason why. Pinker denies that he reduces mind to brain; mind is, however, “what the brain does.”5 But what needs to be studied, he avers, is material, is brain; nothing is gained by emotive talk of the “soul.” Kass responds with an essentially Aristotelian distinction between form and matter. At that level, Pinker tends to sound reassuringly scientific and familiar, Kass merely odd and possibly even theological. Alas, too, the “wisdom of repugnance” doesn’t cut much ice anymore. We know all too well how our “natural” repugnances (to raw fish, interracial marriage, homosexuality) have been revealed as social constructs derived, often enough, from garden-variety prejudices. Indeed, overcoming those repugnances is often seen as (and, surely, often is) a sign of enlightenment and cultural progress. What seem like our instincts may only be our (bad) habits. So why trust them? Science raises another problem for the traditionalists, one that concerns the interface between technology and pain. Medical and pharmacological science has made life so much better for our bodies that morality has come to be seen as concerning the relief of suffering more than, as in earlier ages, the transcendence of it. For Kant, human dignity meant the capacity to transcend the merely empirical by the force of moral will. Similarly, Friedrich Schiller, in his essay “The Pathetic”, sees the moral “sublime” as being depicted in the statue of Laocoön, who stoically maintains his poise even while he is torn apart by sea serpents.
But for the Euthanasia Societies of Great Britain and America, humans require relief from “the indignity of deterioration, dependence and hopeless pain.”6 The demand for a dignified death of the body, understood as a death without suffering, is, as the late Paul Ramsey demonstrated, actually predicated on an understanding of man as a mere body, which makes the true dignity of “suffering accepted” impossible. “Nothing reveals more”, Ramsey observed, “about the meaning we assign to human ‘dignity’ than the view that sudden death, death as an eruptive natural event, could be a prismatic case of death with dignity or at least one without indignity.”7 Indignity has come to mean the body’s suffering, and dignity its well-being. This is precisely the opposite of the older view, apparently Ramsey’s too, that indignity is found in the defamation of the spirit. The naturalization of death as an ideally painless “part of life” indicates that our talk of dignity is more and more really about bodies, less and less about what is essentially human. True, most of those who speak of pain and indignity separate the two, with indignity referring primarily to lack of autonomy. Autonomy sounds like something noble, higher than mere avoidance of bodily pain. But again, under the influence of science, the teleological understanding of freedom has been severely undermined. Philosophers like Ronald Dworkin or Lawrence Friedman celebrate choice for its own sake, whatever its content. Since physical suffering constrains autonomy, and since overcoming suffering through moral will is no longer any better a choice than giving in, it is consistent for these thinkers to see pain itself as an indignity. Traditionalists have a hard time today overcoming the general view that the relief of physical suffering is the highest demand of morality. (Consider the power of the testimony of a Michael J. Fox or a Christopher Reeve on the question of using embryonic stem cells for medical research.)
But at the same time, it is worth noting that, however relativistic the celebrators of autonomy, and however reductionist the libertarian technophiles, they still feel the need to dignify their positions with the use of high moral rhetoric. A disinterested observer of the dispute might therefore get the strong impression that, typically, we want it both ways. To cite Ramsey again: “‘Good death’ (euthanasia), like ‘Good grief!’, are ultimately contradictions in terms.”8 We want everything science can do for us by treating us essentially as things, because we deserve it, but we deserve it because we are not mere things. In this way, if not also in others, the question of “humanism”, the essential character of being human, remains urgent. It’s the answer we’re having a problem with. We think we know enough, since Nietzsche, to be sure that we can’t go back to a single confident and limiting account of human nature, any more than we can be satisfied with Newtonian premises in what we know to be an Einsteinian world. Yet we are plainly unhappy with the consequences of accepting the conclusion that human beings are nothing more than material flukes of distant and random cosmic events. These consequences now present themselves to us in ever more horrid, machine-borne ways: for example, the prospect of growing acephalic humanoid organ donors, from which, for utterly humane purposes, we could extract whatever we needed—say, a spare liver. The dilemma we face today about the nature and status of human beings goes back at least to the Greek origins of philosophy, where human nature was understood as having to do with the hard-to-pin-down relation of body to soul.
Contemporary versions of that dilemma, however, have their peculiar origin in the modern response to Christianity’s radical claims for soul over body, claims that dwarfed even Plato’s most extreme rhetorical assertions.9 The Renaissance humanists, in moving away from the question “What is God?” and toward the question “Who is man?”, reopened old issues in a new way, but did so in a mode of descent (however necessary and appropriate). The ensuing sine wave of alternating descent from man as divine down to man as subhuman and subsequent ascent back to man as quasi-divine has, in four discernible stages, brought us to our present incoherence. In the first stage, 16th-century humanists, who had understood the implicit program of the 15th-century return to classical letters, tried to grapple with human phenomena as they presented themselves. In the second, humanism as such seemed to disappear as the cause of humanity merged with the triumphal march of the natural sciences. Yet with its success, science also inclined the understanding of man toward what La Mettrie, the 18th-century French physician and materialist philosopher, called “Man the Machine.” So in the third episode, in the late 18th and the 19th centuries, the cause of humanity was identified with the ennobling cause of freedom—understood as a kind of dignity that implied liberation from the machine. What was called a “new humanism” eventually rode that horse, courtesy of Friedrich Nietzsche, over a cliff. The fourth episode is our own, where we cling to the shards of both the science and the freedom paradigms as we face ever more practical and concrete formulations of the human question courtesy of advances in science and technology. It remains to be seen what, if anything, a fifth episode might bring.
The First Stage: The reigning view of Renaissance humanism aligns with what we mean by “the humanities.” This view affirms that humanism was not inherently anti-Christian and secular; rather, as Werner Gundersheimer put it, it “makes it possible for us to regard as humanists men who held widely differing beliefs and interests, who differed in their political and religious commitments.”10 The attacks on scholasticism from Petrarch on aren’t to be considered as essentially philosophic; these simply form “an educational program.” In attacking the value of academic theology and philosophy, a humanist like Vergerius (1370–1444) showed how humane studies could prepare children “for a life of civic duty, pleasure in literature and in the arts, sports, and social gentility.”11 The “civic humanism” of a Coluccio Salutati (1331–1406) or a Lorenzo Valla (1406–57) flows out of this “educational program” pretty naturally: It comprised concern about the well-being of one’s people, reflections on the superiority, when it came to Italian politics, of the Romans over their Christian successors, and the pursuit of ancient virtue. One rarely finds dissent from Gundersheimer’s view that these humanists “subscribed to the conventional Christianity” of their times, but the Church wasn’t as sure of this as contemporary historians seem to be. Around 1400, Salutati had to defend liberal studies against a Dominican friar who wanted to limit the study of the ancients to learned adults. It is quite a distance from a mere “educational program” to Machiavelli telling his friend Vettori that he loved his native city more than his soul, but a straight path leads there: The Prince was written in 1513. Let us not be coy: The Church had good reason to be suspicious. The rhetorical caution of Renaissance humanists has sometimes deceived contemporary readers into accepting protestations of innocence as real. 
Thomas More’s Utopia features a vindictive friar praising the harshness of Biblical law, for instance, in a way that argues the opposite. Perhaps the Dominicans had a clearer idea of what some of the humanists were up to. Luther, who denounced them as “Epicureans”, likewise thought they were trying to show that they could make a world better than the one God had created; they were, in sum, trying to liberate man from necessity, which very nearly implied that they were also trying to make God contingent.12 And after all, some humanists got pretty explicit: “It is far better to rise from ignorance to knowledge than to hope in the future”, claimed Mutianus Rufus, and Marlowe’s fictional Machiavel says, “I count religion but a childish toy and hold there is no sin but ignorance.”13 Yet what was the emancipated study of humanity and its works to look like? To some extent, it involved a return to the pre-Christian ancients, but given the Christian and scholastic appropriation of ancient thought, that return had to be carefully mapped out to detour around Church territories. The ancient psyche, with all its indecision about such considerations as immortality, had long since been merged into the “soul” of the scholastic Christians. How to return to the original version so as to talk afresh about what is peculiarly human? We find an answer in the great, paradoxical works of the 16th century, like Erasmus’s Praise of Folly, More’s Utopia, Shakespeare’s plays and Montaigne’s Essays, for example. These writers observed the changing world before them and dared to draw apparently self-contradictory and necessarily tentative conclusions from it. They created contingency against necessity; they opened a way for irony and useful doubt. They used available interpretative structures—Shakespeare’s Neoplatonism, Montaigne’s Stoicism—that their audiences could understand, but these structures did not wholly contain the observations they claimed to order and so gave way to something new. 
A kind of consistent pattern, if not a full theory of knowledge, emerged, and it was a pattern that denied monism, the God’s-eye view of the whole. Montaigne may be the best example of this new approach. He emphasizes the multiplicity of human possibilities and a “self”—no longer the “soul”—that is fundamentally divided. “Our feelings reach out beyond us” is the title of an essay early in Book I, while, in Book III, part ii, we learn that “while I am never at home, I am never far from it.”14 In other words, unlike other animals, the human self is capable of projecting itself, through imagination and reason, into other situations real and fictional. Indeed, humans are incapable of not doing so, and therefore one never is entirely what, or where, one is. The task of human refinement, then, becomes disciplining the self in an appropriate and limited way in order to learn to be “not far” from home. This view challenges the humorless and unrealistic Stoic efforts to overcome our fear of death that both Montaigne and Shakespeare lampoon (for instance, in the Duke’s speech to Claudio in Measure for Measure). These writers undertake the task of human refinement through a series of concrete, paradoxical and often hilarious explorations of the particulars of human self-deception and rationalization. They teach us both to know the full range of what human beings are capable of, and also ultimately to know ourselves as we see those selves in others. The “humanism” of an Erasmus, a Shakespeare or a Montaigne is far more appreciative and contemplative than activist. It teaches the fundamental insolubility of the human problem, even where, as with Montaigne, it hints at concrete measures for ameliorating our physical lot. A kind of freedom, the capacity to see things from various viewpoints, is inherent in this view, and it is something new. It is neither a freedom granted by the allegorical, top-down perspective of orthodox Christianity, nor yet an independent and noble virtue. 
The more limited goal, rather, is the balance afforded by the establishment of the possible (if fragile) unity of the self from all of its natural parts. It is a balance that was made impossible by monisms that sought to overcome the need for it by fixing eternally the relations of its elements. The Second Stage: Michel Foucault makes a distinction between the undisciplined and wildly connective 16th-century style of thought and the clear, orderly, more scientific 17th-century style, which “leads from those unrefined forms of the same to the great tables of knowledge developed according to the forms of identity, of difference, and of order.”15 In fact, the impulse to identify the cause of human beings with the progress of natural science considerably precedes the shift Foucault describes. In The Prince, Machiavelli already speaks of building dams and dikes as a remedy for the floods of fortune. Machiavelli’s disciple Francis Bacon called this “forcing nature by art.” It sets man above his environment as a practical matter in the name of his lower nature, the demands of his body, while adopting sub-human nature as a means to explain and benefit man. Thus Bacon’s confident technological project in his New Atlantis, complete with Frankenstein-like experiments on the preservation of flesh, is more than echoed in Descartes’ Discourse on Method.16 But these activist and meliorist tendencies don’t become dominant until the great system-builders of the 17th century, like Spinoza or Descartes. Their deep hopefulness may owe something to traditional religion, but the expression of this hope has become secular and to some degree both confident and scornful in ways at odds with any religious sensibility (think of Voltaire’s smug mockery of Pascal’s existential anxieties17). Yet, by the middle of the 18th century, it became increasingly evident that there was something amiss in the project of human emancipation via what we now call the natural sciences.
The year 1748, the date of publication of La Mettrie’s Man the Machine, is as good a marker as any to note the moment the problem fully hit enlightened consciousness. If, to counter the deforming claims of the supernatural on human beings, and to benefit human beings practically, one accepted the account of the human being as principally a fleshy “machine”, there was a price to be paid: the disappearance of the human as such. It is not as if the founders of the project hadn’t known from the beginning what would follow. They knew from the outset that they were giving up on any possibility of unmediated or pure knowledge, and that a science based on how things appear to us and our instruments, as ordered by mathematics, could not get to essences. Therefore the self, including any intrinsic purposes it might have, would have to remain beyond reach. Yet the moral impulse that fueled the Enlightenment in the first place is hard to ground on the basis of “Man the Machine.” Machines do whatever they do and their functioning has no ethical status. It is not as if the late 18th and early 19th centuries didn’t get the point, as all those monsters and marionettes in Mary Shelley and E.T.A. Hoffmann et al. bear witness. So did villains like Schiller’s Franz Moor, who justifies his villainy by his materialism:

If a man’s birth is the product of animal compulsion or mere chance, who can call the negation of that birth any great matter? . . . Murder! an inferno of furies flutters round the word. Nature forgot to make one man more, the umbilical cord was not tied off, the father spent the wedding night with the runs—and the whole charade is over. It was something, it will be nothing; and nothing will come of nothing. Man comes from muck, splashes around for a while in muck, produces muck, and rots away in muck, till he is nothing but muck on the sole of his great-grandson’s shoe. And that’s the end of the mucky round of the human condition. So bon voyage, Brother.18
With materialism eventually came history. Without any kind of metaphysics to define us, we are merely what we do. Here in due course is where mind becomes “what the brain does.” But what the brain does is to change human life and even human consciousness. The record of that apparently arbitrary change is history in the form of a chronology without an inherent purpose or direction. The 18th-century thinkers understood the ugly, indeed nihilistic, implications of a humanity that blindly transformed itself with no ends in mind. Kant therefore urges us to “assume a plan of nature” for human history:

For what is the use of lauding and holding up for contemplation the glory and wisdom of creation in the non-rational sphere of nature, if the history of mankind, the very part of this great display of supreme wisdom which contains the purpose of all the rest, is to remain a constant reproach to everything else? Such a spectacle would force us to turn away in revulsion, and, by making us despair of ever finding any completed rational aim behind it, would reduce us to hoping for it only in some other world.19
Thus Kant tackled the problem of how to live as a human being, knowing that every effort one undertook was necessarily in vain, determined and thus “absurd.” His remedy was that mankind could rescue itself through daring acceptance of the dilemma. Basically, it went like this: All right then, we are things, but we are also the makers of ourselves through history. We are creative, and hence our own gods. We may not have empirical freedom, since we are caught up in a chain of causation. Our reason may only be a tool of our passions, rendering us examples of mere matter in motion. But that is not the human viewpoint. We do not know how we are determined until we have acted, and so might as well be free. We can choose to accept this human viewpoint and live accordingly, in what Rousseau had already called “moral freedom.” It is this definition of freedom that enables creativity, the passions that, for moderns, truly constitute the self, and ultimately the distinctive “cultures” of human society. These are the new keywords for what was already known in Germany as a “new humanism”, which spread in the late 18th and 19th centuries throughout the Western world and eventually beyond.

From the outset, however, this freedom had a somewhat paradoxical character. The practical empirical freedom promoted by science and technology is the freedom of bodies, but bodies enabled to fulfill their desires. For the heirs of Rousseau those desires are inflamed and infinitely extended by our imaginations and intellects. That freedom, enabled by technology and perverted by imagination, turned us into degraded manipulators of ourselves and others. Thus the new moral freedom, untethered to any purpose beyond its own enjoyment, became essentially negative. Only with enormous difficulty could it deny desires. Rousseau and his successors realized that even partial success for the emphasis on moral freedom would require a huge cultural project to supply the necessary discipline.
Kant’s morality of the categorical imperative is one famous example of an attempt to supply it. Rousseau’s general will, taught in The Social Contract, is another. Less famous, but more useful for our purposes, is the example found in Schiller’s Letters on Aesthetic Education. It begins from the problem he inherited from his understanding of Kant. How could men discipline their passions, thereby achieving true dignity, without succumbing to the temptation to rebel, passions being both so strong and we, in their grip, so clever at rationalizing and persuading ourselves of the justness of our desires? Schiller’s answer: The development of a high culture, based on the artistic creations of great geniuses like Goethe, could ennoble our tastes. Overcome by the moral beauty of self-sacrifice, men would actually come to want to throw themselves on the hand grenade to save their buddies. Among the most famous examples of self-denial is the airport scene in Casablanca where Bogey gives up Ingrid Bergman to fight the Nazis (even more appropriate, perhaps, is Woody Allen’s decision in Play It Again, Sam to give up Diane Keaton because of his aesthetic education at repeated showings of Casablanca). Schiller’s humanism wants morality to be no longer the rational overlord of the passions but the master passion itself. The human being who unites reason and passion in the mode of high sentiment then becomes whole and fully genuine in the process. But since it is beauty, not reason, that speaks to the passions, it is the artist who becomes the new prophet, the increasingly acknowledged legislator of the world. So it is that between Rousseau’s The New Heloise and Casablanca lies the long era of a certain kind of self-understanding by the educated bourgeoisie of the West.
One of its monuments is John Stuart Mill’s On Liberty, dedicated to the inculcation of what he (following Schiller’s friend Wilhelm von Humboldt) called “spontaneity”, defined as “the highest and most harmonious development of [man’s] powers to a complete and consistent whole.” In different ways Emerson, Thoreau, Renan, Arnold and Ruskin all labored at this same workbench. It may be summed up in Mill’s famous and possibly fatuous hope to make a new Periclean race out of the Victorian upper-commercial classes. In the end, the project succeeded just enough—in the lives of generations of high-minded bourgeois types, Kantian Germans, transcendental Bostonians, universalist Frenchmen or liberal Russians—to drive the likes of Dostoyevsky and Nietzsche crazy about their self-deceiving pomposity. Even Marx (who, with Hegel, had sought to finesse the original problem of historical relativism by claiming to have arrived at its end, when reason again could function normally) was engaged in a cultural effort similar to that of high-minded bourgeois liberals like Schiller and Kant. This time, though, not Schillerian cultural or aesthetic education, but a particular combination of History and economics was to bring about the complete identification of individual and collective and thus bring the mere ideal of Rousseau’s general will to practical reality. In his version, communism would soon move from the rule of necessity and class war into the realm of culture, of free self-creation, of the invention of new needs. That is, we could eat our material, scientific cake and have our beautiful high culture too, without all that ascetic “moral freedom” dieting the “idealists” insisted on. The Third Stage: The aspirations of the new humanism, as it turned out, bore fatal weaknesses. (The problems with Marx’s version are well known.)
After identifying the beautiful with the moral and employing the artist, now raised to the status of prophetic genius, to make that identification, those humanists who had initiated the project found that they faced a rebellion of the artists themselves. Art liberated from imitation of nature and raised to quasi-divine status could not be confined to teaching moral lessons for the edification of merchants and their children. Their refusal to play the game had fateful results. “Decadence” and “formalism” combined to produce what Tom Wolfe called the “Boho dance” wherein the bourgeoisie, educated to believe that art appreciation made them superior human beings, was boundlessly excited by anti-bourgeois art, art meant to scandalize rather than ennoble. The resultant taste for modernism eventually created a synthesis very different from the one intended. Now the excitingly cynical hipster replaced the high-minded but hopelessly bourgeois square as an object of (bourgeois) esteem. Then too, “moral freedom”, the perpetual sublimation of one’s desires, which the 19th-century project required in most of its versions, gets old fast. Portnoy was not the first to want his superego strung up by its boots. The late 19th-century cults of Wagner and war were signs that the perpetual restraint of the desires sweetened by art was becoming intolerable to the very audiences it was aimed for. The thrill-seeking that Philip Rieff sees as a kind of death wish typical of contemporary culture may well have been an urgent response to a surfeit of high-mindedness. To this, add Nietzsche, for whom “moral freedom” was a cowardly choice that concealed its own admission of nihilism. True courage meant self-assertion over against universal meaninglessness and thus the war of strong wills to power. 
Nietzsche’s thought rapidly made the old humanism look pallid, preachy and, as Sartre put it, “lymphatic.” And of course the experience of World War I severely undermined the whole culture of self-restraint and transcendence that had come before. Where was the dignity in the gruesome deaths of hundreds of thousands of young men in the trenches? Where was the noble purpose of the supposedly enlightened societies who sent them there? The Fourth Stage: Humanism had followed two opposite possibilities in defining a world appropriate for humans. The first, grounding itself solidly in the subhuman—namely, the scientific examination of human materiality—had failed because in so doing it had annihilated what was particularly human. The second, undertaken in reaction to the failure of the first, saved us by making us into free moral beings beyond anything merely “empirical.” But “freedom” came to be seen as just a weak way of saying “will to power”, and the moral world based on that kind of freedom turned out to be an illusion masking chaos. Mere nature, even moral nature, disappeared again. We were left as gods, but gods who might just as well act like devils. Once again there was no room for human beings. Since Nietzsche, then, modern humanism has mostly been a subject of waning philosophic concern and no small amount of cultural embarrassment. And yet, like many a felled tree, it keeps putting up odd little shoots. Much of what is recognizably humanist in inspiration today (anti-dogmatic, skeptical, tolerant, moderate) presents itself proudly as postmodern (meaning Nietzschean) and therefore anti-humanist.
Reacting, too, against Marxian “totalism” (which seems to many of the leftist heirs of Nietzsche to be the apotheosis of humanism), postmodernists like Lyotard and Derrida seem to have bet that the fruits of humanism can be grown hydroponically, that “man” is not needed for humaneness, that relativism properly understood will by itself make us open-minded and kind. Attaining the character of a reasonable, civilized human being had been, for Montaigne, the product of a long and careful study of the paradoxical character of human beings. For liberals like Locke it had come from following out an account of human nature that privileged survival. For later liberals, like Kant, it had come from devoting oneself to the strictures of moral or aesthetic freedom. We are now told that we can get that, plus a lot of fun and playing around (“jouissance”), on the cheap—Nietzschean relativism with all the nasty bits (like the will to power) taken out.20 Current debates about bio-engineering or about multicultural tolerance suggest that relativism plus an “anti-hegemonic” attitude produces neither a humanist morality nor much clarity about the human situation. A skeptical temperament might be propped up slightly by a relativism shorn of its will-to-power accompaniment. Yet relativism as such lies wide open to the folly of Sartrean “commitments” to Stalinism or other mad illusions. The fact that in the end both Derrida and Foucault adopted fairly traditional liberal political views seems to me to have had little to do with their thought, which could easily have led in very immoderate directions, and more to do with older and deeper loyalties.21
The attempt to explain things in a way that would place human beings at the center of the picture—and thus keep them from being racked by the extreme demands of Christianity—faced from the outset the difficulty raised by Leo Strauss: Since man is the being that either transcends or falls below himself, man cannot be his own measure.22 This is because we do not have a “nature” like a rock or a dog does. If we have one, it is a double and contradictory one, as described in Genesis. That is, man is both a created thing, like a rock or a dog, and something divine, in the image of God, that is capable of ranging freely, reflecting on the world and itself, and of changing both. Still, the apparent failure of the humanist effort to make human beings central does not seem to have affected our lingering sense of the need for such an account. Despite Denby’s embarrassment, despite the clever tactics of postmodernist misdirection, despite the technophiles’ failure to see that in cheerfully ditching nature they are also ditching their unremembered justification for their own love of liberty (namely, the doctrine of natural rights), we don’t seem to be able (or at least to want to be able) to abandon whatever it is we mean by our own humanity. We don’t want to be nihilists. Yet we don’t seem to have even the words to ask what it means to be a human being. It occurs to me that Strauss’s statement could use a supplement. Man is the being that transcends himself, but then he comes back to where he began. We can’t help doing metaphysics, seeking the God’s-eye view and ultimate answers to our questions, and thus we can’t help trying to be more than we can be. But in the process we find out not only that we aren’t really capable of certainty about the answers to our questions, but also that we aren’t satisfied with the answers we come up with, precisely because we are not the God for whom they would be suitable. 
This is what I think Montaigne means by being never at home but never far from it. As I understand him and perhaps many of his 16th-century fellow humanists, the task is not to ground in some comprehensive teaching the paradoxical character of human beings as both limited and self-transcending, both particular and universal, but rather to describe that character. Perhaps the task is also to teach us, as he records his own learning process, how to go outside oneself and come back without getting carried away in the process. A humanism that could get past the foundational/anti-foundational dialectic whose very terms each imply the other, a humanism that, like its great 16th-century representatives, would seek above all to pay attention to human phenomena and describe them with care and imaginative sympathy, still seems to me to have a future—and an important future, particularly in overcoming the abstraction of many of our current debates. For instance, if the abortion debate focused less on either the scientific or philosophic status of the fetus (that is, on what people ought rationally to think, based on sound scientific and philosophical premises), and more on what people can’t help thinking about a fetus (especially their own), its character might change for the better. Similarly, it would probably be a good thing for American politics if the urbanity of Federalist No. 1 could make a comeback. There Hamilton turns the assertion of interested motives from a weapon of righteous indignation into a confession of everyone’s motives, his own included. That takes the poison out of the charge and makes it easier to focus on the issues in question.23 Finally, such a humanism could help us think, and feel, about such vexed “multicultural” questions as the Danish Muhammad cartoons, politicized headscarves and similar matters. Montaigne understood the human sources of fanaticism and didn’t have a cow about it like the indignant Voltaire. 
Still, he mocked it and even felt a responsibility to chide it. Maybe it would be a good thing to take humanism off the museum shelf. Maybe it can stand getting dirty, getting used as a set of rough standards, a set of attitudes in contemporary cultural and political debate. We surely need some alternative to the various nihilisms and dogmatisms that crowd the stage today. What if we could again understand humanism not as an outmoded dogma (or, worse, as the pipe-smoking, sherry-drinking attitude of outmoded gentlemen) but, as it was for Montaigne and Shakespeare, as a fresh determination to look at human things on their own, often contradictory terms? I think it would become surprisingly useful at least in stating problems clearly, which is, after all, the most important step to finding reasonable solutions.
2Giustiniani, “Homo, Humanus, and the Meanings of Humanism”, Journal of the History of Ideas (April/June 1985).
3Foucault, The Order of Things (Random House, 1970), p. 387. “ . . . man is an invention of recent date. And one, perhaps nearing its end.” Derrida uses the citation as an epigraph to his own essay, “The Ends of Man.”
4Fukuyama, “Second Thoughts”, The National Interest (Summer 1999), p. 17.
5Letter to the Editor by Steven Pinker, Commentary (July/August 2007). Kass’s reply is in the same issue.
6Cited in Paul Ramsey, “The Indignity of ‘Death with Dignity’”, Hastings Center Studies (May 1974).
7Ramsey cites a Swedish survey showing the popularity of what had traditionally been thought a very bad death.
8Ramsey, “The Indignity of ‘Death with Dignity’.”
9“Be ye therefore perfect, even as your Father which is in heaven is perfect.” Matthew 5:48.
10Gundersheimer, ed., The Italian Renaissance (Prentice-Hall, 1965), p. 5.
11Gundersheimer, p. 7.
12Martin Luther, Tischreden, II (H. Böhlaus Nachfolger, 1912), p. 627.
13Respectively, Mutianus Rufus, “Longe melius esse ab ignorantia ad scientiam consurgere quam sperare futura”, from “Der Briefwechsel des Conradus Mutianus”, Geschichtsquellen der Provinz Sachsen (Halle, 1890), part I, p. 134; and Christopher Marlowe, The Jew of Malta, Richard W. Van Fossen, ed. (University of Nebraska Press, 1964), p. 8.
14Montaigne, The Complete Works, translated by Donald M. Frame (Knopf, 2003), pp. 9, 746.
15Foucault, The Order of Things, p. 71.
16Francis Bacon, The Advancement of Learning and New Atlantis (Oxford University Press, 1921), p. 266. “These caves we call the lower region, and we use them for all coagulations, indurations, refrigerations, and conservation of bodies.” Also, René Descartes, Discourse on Method, Optics, Geometry and Meteorology, tr. Paul J. Oscamp (Bobbs-Merrill, 1965), p. 50. “For they made me see that it is possible to arrive at knowledge which is very useful in this life, and that . . . we can use them in the same way for all the purposes to which they are suited, and so make ourselves the masters and possessors, as it were, of nature.”
17“As for me, when I look at Paris or London I see no reason whatever for falling into this despair that M. Pascal is talking about; I see a town that in no way resembles a desert island, but is peopled, opulent, civilized, a place where men are as happy as human nature allows.” Voltaire, Philosophical Letters, translated by Ernest Dilworth (Bobbs-Merrill, 1961), p. 124.
18The Robbers, Act IV, scene 2, from Schiller, volume 1, translated by Robert David MacDonald (Oberon Books, 2005), p. 126.
19Kant: Political Writings, Hans Reiss, ed. (Cambridge University Press, 1991), p. 53; also see Immanuel Kant, Perpetual Peace and Other Essays, translated by Ted Humphrey (Hackett, 1983), p. 86.
20I realize that this picture is complicated by the influence of Levinas’s ethics on some post-modern writers, but I do not think it is essentially changed.
21Thus Derrida, angry on behalf of his friend Paul de Man (accused of writing quisling propaganda during World War II), argued, against his own theoretical strictures about being able to know the intention of an author, that de Man’s intentions were not what his accusers were claiming.
22Strauss, Thoughts on Machiavelli (University of Chicago Press, 1984), p. 78. “. . . since man is the being that must try to transcend humanity, he must transcend humanity in the direction of the subhuman if he does not transcend it in the direction of the superhuman. Tertium, i.e., humanism, non datur.”
23The Federalist Papers (New American Library, 1961), p. 34. “This circumstance, if duly attended to, would furnish a lesson of moderation to those who are ever so thoroughly persuaded of their being in the right in any controversy.”