About a year and a half ago I argued in these pages (“Between Relativism and Fundamentalism”, September/October 2006) that two apparently contradictory attitudes toward the nexus of truth and faith—relativism and fundamentalism—are united in their origins as responses to the inherently pluralizing effects of modernity. I suggested, too, that relativism and fundamentalism feed on each other with far-reaching implications for contemporary (and future) religion, morality and politics. I specifically included politics within the ambit of my interest to emphasize that there are secular as well as religious fundamentalisms, and that both may affect moral judgments and political engagements (which are in any event already entangled by their very natures).
Sensing that relativism and fundamentalism are both on balance inimical to the benign exercise of moral reason in society, I ended by intimating a project that would aim to gain practical understanding of how to promote a principled uncertainty in which the exercise of genuine moral reason can flourish. That project’s first objective would be to explain the sources of extreme views about moral reason: either that moral reasoning is inherently impossible, as relativists presume, or that it can only be based on certainty in invariant divine command, as fundamentalists insist. Its second objective would be to distill out the essence of those historical cases in which religion managed to find a balance between the urge to moral certainty and the humility of theological doubt. I am now ready to report on the project’s progress to date.
Sources of the Extremes
Both relativism and fundamentalism are inherently modern phenomena; indeed, they are so by definition. Modernity has two distinguishing characteristics: It separates a whole host of human activity, the arts not least, from any obligation to faith; and it confronts individuals with knowledge that there are different and often antithetical ways to conceive of religious belief and practice. On the one hand, modernity creates diverse menus specifying which human behaviors are “inside” or “outside” the bounds of religious authority—a problem that obviously never arises if virtually everything falls “inside.” On the other, it is difficult, verging on impossible, to maintain an innocent and absolute certainty in one’s own faith after having acknowledged other plausible approaches.
The relativist approach to these conditions is to minimize (or eliminate) the scope of religious authority and to deny the possibility of any fixed truth. The fundamentalist approach is to maximize (or totalize) the scope of religious authority and re-assert certainty in its tenets of faith. Neither approach is historically normal. Every human society, probably since the first appearance of Homo sapiens, has had a zone of taken-for-granted beliefs and values, and a zone in which one was allowed to question. Every society, too, associated faith with family groups: One’s religion was organic to identity, the result being that a person could privately believe anything he or she wished and still not risk being expelled from a religious community. In other words, most traditional societies featured both outlets for curiosity and built-in buffers for at least some non-conformism.
The quantum leap in intercultural contacts and communication that defines the social essence of modernization has now enabled most people on the planet to objectify their own cultures. So long as one takes for granted the cognitive and normative definitions of reality established by one’s own culture, it is not possible to identify those definitions: They disappear into the seamless flow of experience. Once one becomes aware of people who think and behave in very different ways, it becomes possible to identify those definitions. A classic example of this process is afforded by Montesquieu’s satirical work Persian Letters. Two fictitious visitors from Persia write letters home about the (to them) strange customs of Parisians—monogamy, for example. There are also accounts of how the Parisians, in turn, find the Persians strange and ask: “How can one be a Persian?” The question Montesquieu really wants to ask, of course, is: “How can one be a Parisian?” Fictitious visitors from an exotic country enable what Bertolt Brecht called Verfremdung, the theatrical technique of making the familiar unfamiliar.
This capacity to look at one’s own culture through the eyes of outsiders is precisely the relativizing dynamic that undermines taken-for-grantedness itself. Once this dynamic is unleashed, the taken-for-grantedness of one’s own cognitive and normative definitions is dissolved in the rich broth of cosmopolitanism. One then becomes an outsider, in a way, even with respect to one’s own culture, consciousness and conduct.
From this point, the narrative is a familiar one. Relativization is first experienced as a great liberation; modern literature is full of individuals who leave a narrow background in the village or countryside for the big city. The world opens up; our hero, freed suddenly from old prejudices and superstitions, becomes intoxicated by the experience of freedom. Then comes the hangover: The sense of liberation gives way to anxiety with the realization that one can no longer find guidance and emotional support in the comforting certainties of tradition. One must make choices; indeed, one must fashion a whole world of one’s own.
This is asking a lot, and it often gives rise to a wish to be liberated from the burden of choice. Those whom one might call serious modernists accept the burden, acknowledge the obligations of moral choice under circumstances of inevitable uncertainty, and do the best they can. Both relativism and fundamentalism represent attempts to escape from this burden. Relativists escape by asserting that all moral choices are ultimately baseless and hence equal—either equally valid or equally banal. Fundamentalists escape by trying to restore to consciousness a taken-for-granted certainty.
Ah, but this restoration project is impossible. Once a person has objectified his or her own cognitive and moral templates, there is no going back to innocence. Metaphorically as well as literally, one can be a virgin only once. The realization that life after modernity has no reverse gear is, for many people, the source of a great angst, sometimes shading into metaphysical panic. Fundamentalism is the alleged solution to this problem, offering to supply certainty by means of a return to tradition; what it really proffers is a formula for creating certainty in the dress of tradition.
As far as its political impact goes, fundamentalism comes in both maximal and minimal versions, and in both secular and religious forms. The maximalist version requires the imposition of a new certainty on an entire society, which can only be achieved by setting up a totalitarian state that completely controls all contacts with the outside. This is a difficult and costly exercise, and, since it sets itself against the very pluralizing tendencies of modernity that give rise to it in the first place, it is doomed to fail. Soviet Communism is the pre-eminent secular version of fundamentalist maximalism; the Salafi jihad to reestablish the Caliphate is the contemporary religious example unfolding before our eyes.
In the minimalist version, the larger society is invited to go to hell in a handbasket as one builds a sub-society that closely controls the thought and behavior of its inhabitants. This mini-totalitarianism is also difficult to achieve, as the powerful forces of relativization keep pounding against the defensive walls of the sub-society. But there are both secularist and especially religious groups that come close to achieving it, sometimes for fairly long periods, on the basis not so much of a physical as of an intellectual separation between the group and the rest of society. Such groups can be quite large and, when organized effectively, very politically influential.
Both relativism and fundamentalism are bad for social order, particularly so for social order in a democracy. Relativism undermines the shared value system (what Émile Durkheim called the “collective conscience”) without which a society cannot function over time. And fundamentalism either inclines to totalitarian ambitions or, in its minimalist version, balkanizes society into mutually hostile and incommunicative camps, threatening the social peace and civility without which democracy cannot survive.
By their very nature, fundamentalisms are intolerant. Post-taken-for-granted forms of faith are not impossible, but they tend to be brittle. After all, one cannot completely repress the memory that one has chosen this or that faith. Fundamentalists strive for such repression, however, and in defense of a vulnerable faith they invariably find themselves less tolerant of non-conformist views. This sharply distinguishes all fundamentalisms from traditional religions. An individual can relax in a taken-for-granted certainty; he can tolerate those who do not share his beliefs. By contrast, non-believers are a constant threat to fundamentalists. They must be segregated, expelled, converted (forcibly or otherwise) or, in the extreme case, physically liquidated.1 Fanaticism, as cognitive dissonance theory suggests, is usually an overcompensation for self-doubt.
Whichever form fundamentalism takes, relativists—and even most normal, confused people—are liable to react to its intolerance with intolerance of their own. If fundamentalist intolerance is bad enough for a liberal democracy, a dialectic of intolerance is even worse.
It is therefore necessary to limit the growth of both relativism and fundamentalism to the extent possible, and one way to do this is to make those wounded by the process of modernization aware of a middle position between the extremes. This task has an intellectual side—how to define such a middle position, given the empirical fact that a modern society contains people with different religious and moral convictions. But to have any real effects, the establishment of a middle ground must also be a political task—one that can be described as promoting the politics of moderation. This leads to an obvious question: How have people ever sustained their faith in an admitted absence of theological certainty?
Middle Ground
The basic answer is: over time, and under constant but not overwhelming duress. In Christian history, theological certainty has been legitimated in three distinct ways: by means of an infallible church grounded in institutional authority, most impressively in Roman Catholicism; by means of an inerrant Scripture, the preferred choice of much of conservative Protestantism; and by means of an irresistible experience, in forms ranging from mystical ecstasy to the more mellow case of being “born again”, or, in the words of John Wesley, having one’s “heart strangely warmed.” Over time, each of these three possibilities has been undermined in one way or another. The evident history and sociology of ecclesiastical institutions make it hard to see them as unassailable rocks upon which to ground one’s faith. Modern Biblical scholarship makes it equally hard to believe in scriptural inerrancy. And modern psychology has fortified a posture of antecedent skepticism concerning all forms of allegedly certain religious experience.
The form of Christian religion that best survived the modernist undermining of certainty is Protestantism. As Max Weber understood, Protestantism not only encountered and survived the corrosive effects of modernity; it helped to shape the cognitive framework of modernity itself. How did it do so?
The Reformation doctrine of salvation “by faith alone” (sola fide) already contains a suggestion, namely that faith (fides) is essentially an act of trust (fiducia) undertaken in the absence of any false certainties. In essence, the key to maintaining faith amid confessed theological uncertainty is a capacity to distinguish between what is the core of the faith, and what is marginal and can therefore be subject to adjustment and compromise. If one can accept this distinction, whether it is discovered by theologians or in effect imposed by the vicissitudes of history, it is then possible to entertain an historical analysis of the scriptures and the traditions by which the faith has been transmitted. It then becomes possible, as well, to give doubt and debate legitimate places in the economy of one’s faith. And it follows that those outside one’s own community of faith may then be perceived as being something other than enemies, especially if they too evince doubt about absolute truths. Civic peace becomes possible.2
This developmental sequence amounts to a description of a moderate religious position. Such a position has always existed in Jewish, Christian and Muslim contexts, though obviously by means of different theological formulations. The crucial social consequence of such a position, no matter its specific theology, is that it will eschew any form of religious warfare, literal or virtual. It is no accident that the Enlightenment, with its promulgation of tolerance in matters of faith, came in the wake of religious wars that decimated the population of Europe and left behind a trail of terrible destruction. The historical memory of the wars of religion goes far in explaining both the general reluctance to introduce religious rhetoric into the public sphere in those parts of Europe roiled by those wars, and the absence of such reluctance in those parts spared (Iberia, for example). Religious tolerance in America had different origins, of course. There were no wars of religion, but there was the practical impossibility of giving a monopoly position to any one form of religion. On neither side of the Atlantic today is there any significant attempt to re-establish a coercive religious monopoly. The last such attempt was during the Spanish Civil War, when the Catholic Church supported the religious unity of the state propagated by the Nationalists under Franco, and, as suggested above, that this took place in Spain is no coincidence.
So religious faith is possible in the absence of theological certainty. Indeed, it may be argued that a measure of doubt is desirable, lest faith be rendered too childlike in the face of adult dilemmas. But does the same hold for morality—and what really is the relationship between religious faith and moral conduct? Doesn’t theological doubt entail moral uncertainty?
Western societies have learned, at great cost, that people with divergent religious beliefs can live together peacefully. Thus, individuals who believe, say, in the immaculate conception of Mary, the mother of Jesus, can have amicable relations with Christian and non-Christian neighbors alike who don’t share this belief. But is such tolerance possible when divergent views of morality involve matters of immediate practical import? What about those who find torture utterly unacceptable under any circumstances, and those who would make an exception in the case of dangerous terrorists? What if one believes that abortion is a basic human right, but one’s neighbor believes it is an act of homicide that violates basic human rights? With all due respect for theology, attitudes toward torture or abortion have a direct impact on public policies and the lives of ordinary people in a way that is not the case with divergent Mariological doctrines.
Let me put this in personal terms: I have religious faith without theological certainty, yet I make certain moral judgments with great certainty, even in the face of relativizing insights. For example, I am convinced that slavery constitutes a violation of the very core of being human, and my conviction is not affected by my knowledge that, through most of human history and in most societies, slavery was considered a perfectly acceptable, even natural, institution. If I reflect further about my certainty in making certain moral judgments, it seems to me that all such judgments are based on the core moral principle that every human being has inestimable inherent worth. From this principle flow all assertions of human rights and liberties. Hillel famously stated the essence of the Torah while a questioner stood “on one foot”: “That which is hateful to you, do not do to your neighbor. All the rest is commentary, now go and study.”3
My moral certainty can be stated in one lapidary sentence from the constitution of the Federal Republic of Germany: “Die Würde des Menschen ist unantastbar” (“Human dignity is inviolable”). One may add: “The rest is commentary.”
Yet this observation only focuses the question; it does not answer it. Even if we can state the principle at the source of our moral certainty, that still does not explain what grounds our conviction. Tracing the genealogy of moral values—whether to the Hebrew Bible and the New Testament, to Greek philosophy and Roman law, or to the Enlightenment—cannot prove or disprove their validity any more than knowing the antecedents of a scientific theory can validate or falsify the theory.
There have been at least four ways of explaining the certainty, and, therefore, the universal validity, of moral judgments. All are problematic in one way or another, but perhaps not equally so.
First, of course, one can derive moral certainty from religious certainty. This is most readily accomplished if one believes in a divinely mandated moral law, with all moral prescriptions and proscriptions prefaced, in effect, with “Thus saith the Lord.” The problem with this approach is that what seems certain can nonetheless produce wildly divergent practical consequences. Thus Torquemada would have been surprised to hear that the Lord does not permit torture; Christian theologians who undertook vigorous defenses of slavery in the antebellum American South would have been surprised to learn that their efforts stand as an embarrassment to virtually all Christians today.
Second is the venerable tradition of natural law—the proposition that there are moral judgments universally inscribed in human hearts. The natural law tradition is hoary and consequential in history, to say the least. In short, it has “worked” in the world, and presumably could again. The problem is that the proposition is empirically untenable. The conviction about the moral unacceptability of torture and slavery is not universally inscribed in human hearts. We know this because for vast stretches of history human beings happily tortured and enslaved each other without so much as a single evident pang of conscience. That pangs of conscience about such atrocities are becoming institutionalized in our times—the French sociologist Danièle Hervieu-Léger has caught the phenomenon in her phrase “the ecumene of human rights”—is a good thing. But it no more proves the validity of natural law theory than the discernment of a Fibonacci sequence in the design of a seashell proves the existence of God.
Two further avenues of explaining moral certainty are, more empirically, sociological and biological in nature. Some contend that certain moral principles are necessary for a human society to function and survive. Thus, for example, a society could not function if it tolerated its members murdering each other in response to every grievance. Fair enough. But what about murdering people who are not members of the society, or murdering certain subgroups who are?
Alas, sociological functionality turns out not to be a good basis of moral certainty. Indeed, certain acts we find morally repulsive can nevertheless be quite functional. Why not kill off aged or infirm people who are an unproductive burden on the society? Groups of Eskimos did so, sending old people to their deaths in the Arctic sea on fragile rafts. This arguably served an important function for the survival of Eskimo society. Why not kill infants who do not fit certain criteria of size and appearance in a martial society, whose leaders are persuaded that martial virtue is the only thing standing between survival and annihilation? Spartans did so, and we think of the Greeks as civilized people.
If not sociology, then biology? Some have proposed that moral principles are genetically based to ensure the survival of the species in the cruel competition of the evolutionary process. No doubt there are moral instincts, let’s call them, that foster or at least align with evolutionary survival. There seems to be something verging on an instinct that induces women to care for their newborn infants. Clearly, in the absence of such an instinct, a species would not survive. But neither would a species survive the absence of a heterosexual libido. Who is prepared to argue that the desire for sex as such, whether in marriage or out, is itself a moral imperative?
There are two problems here. One is that the number of universal bio-moral instincts is unknown, would be disputed in any attempt to codify them, and would anyway be small—much too small to account for the range of subjects about which human beings make moral judgments. And on most of these subjects moral judgments are clearly not universal. On both counts it is hard to see how biological imperatives could ever form a practical ground upon which to base the validity of moral judgments.
Walk Humbly with Thy God
Where does this leave us? The extremes of relativism and fundamentalism can be found operating as moral templates as well as religious ones. Most relativists deny the existence of moral axioms of any kind, claiming that all moral “narratives” are comparable and the choice between them arbitrary. This view leads to manifest absurdity, such as the contention that the “narrative” of the rapist is as valid as the “narrative” of his victim—a notion that flies in the face of any reasonable understanding of morality. Moral fundamentalists, rigid, aggressive and totally impervious to critical questioning, leave no room for reasoning at all. This is evident in the current American debate about abortion, where some “pro-life” and “pro-choice” activists can fairly be described as fundamentalists.
This is as good an example as any around which to describe what a middle position would look like. There is the axiomatic conviction that every human being has innate dignity and worth, but where does one human being (the mother) end and another human being (the embryo) begin? Six hours after conception? Six hours before birth? I think both answers are implausible, but the short answer to the question is: We don’t know.
Since we don’t know, and arguably can’t know, it becomes obvious that even the labels brandished by the two camps are evasions of a basic difficulty. “Pro-life”? The issue is not human life, but a human person. My appendix is an example of human life; so is a sperm or an ovum; but none of these is a person. “Pro-choice”? Of course a woman has the choice of what to do with her own body, but what about another person’s body? That is the real issue. No one would argue that a woman has a choice as to whether to kill her one-week-old baby, and so we are back to the basic problem of defining when a person becomes a person.
Obviously, we need to craft some advance over the sterile choice between relativism and fundamentalism, and neither divine law, natural law, nor sociological and biological supposition can supply it. That advance can begin, I think, by appealing to recognition of a basic condition that no reasonable person would dispute: our ignorance, hence our need for humility.
The abortion issue is one of many instances in which morally acceptable decisions have to be made in a state of essential ignorance. We have to make a decision that will ultimately be arbitrary in some fashion. Because of my axiomatic conviction about the worth of every human being, my view is that we should err on the conservative side: no abortion after the first trimester, certainly none after the second. The legislative consequence of this approach would be to allow free choice to a pregnant woman within a certain reasonable time span, but to make abortion very difficult to obtain, or illegal, after that time span. This is the actual legal situation in most European countries, and that nuanced, morally cautious approach is a good example of the moral middle ground.
Another way to put this is that we can have certainty about fundamental moral axioms even as we bear a degree of uncertainty about their application to specific issues. However, a moral middle ground is not the same as a middle ground in religion, even though doubt plays a role in both cases. It is worth repeating that doubt about a theological principle need not force life-and-death decisions in the here and now, whereas doubt about a moral principle involves precisely that. At one point, exasperated with Parliament, Oliver Cromwell exclaimed: “I beseech you, by the bowels of Christ, to consider that you may be wrong.” This is not necessarily a good recommendation addressed to theologians, but it is a very good recommendation addressed to practical men and women of the world (though I confess I don’t know the origin of the quaint phrase about the “bowels of Christ”). It asks us to be careful; it asks us to be humble. It asks us not to play God, but rather to be very, very serious about making necessary judgments that come to the same thing.
We are left still with the problem of explaining the axiomatic basis of moral convictions. Let the philosophers go at it (and they will). Meanwhile, I can only indicate a direction that advances a claim on practical grounds. It is useful, I think, to formulate moral propositions in the indicative, not the imperative mood. Conscience does not primarily tell us “do this, don’t do that”, but rather “consider this, look at that.” Conscience does not issue the commandment “you must not enslave”; rather it forces one to look at and to empathize with a slave. Thus Harriet Beecher Stowe did not preach a sermon against slavery; in Uncle Tom’s Cabin she forced the reader to consider the inhumanity of slavery.
Moral judgments are based in practice on a specific perception of what it means universally to be human. Of course our perceptions developed historically, and they are not empirically universal even today. But once the perception of what it means to be human has congealed, it necessarily takes on universal scope: I cannot say, “I am personally against slavery, but I accept your right—for whatever cultural reasons—to disagree with me.” That would be relativism—one “narrative” as against an equally valid other “narrative.” That cannot be: If slavery was wrong in antebellum Alabama, it is wrong today in Sudan.
Weber’s distinction between an “ethic of attitude” (Gesinnungsethik) and an “ethic of responsibility” (Verantwortungsethik) is relevant here. Moral fundamentalists always hold to the former type of ethic: One must act in accordance with moral principles applicable to every situation without modification, whatever the consequences. So, for example, no abortions, not even in cases of incest or circumstances where the mother’s own life is imperiled. By contrast, an “ethic of responsibility” will always consider the consequences. Typically, we cannot know the consequences for certain, but we must try to figure them out as best as we can. Thus, however certain we may be about a number of moral axioms, all attempts to apply them in real life should be haunted by doubt, and thus attended by humility. That exactly is the middle ground between moral nihilism and moral fanaticism. That is the middle ground we should seek to expand. That is the center that must hold.
1. Of course, processes of pluralization long preceded the temporal boundaries we commonly apply to the modern era, with the result that what we might call proto-fundamentalisms long predated the Renaissance. Jews as agents of pluralization in pre-modern Europe are a noteworthy example, inasmuch as the sequence “segregated, expelled, converted (forcibly or otherwise) or, in the extreme case, physically liquidated” fits the historical record very closely. Extreme intolerance in early Reconquista Spain seems also to fit the pattern: Christians were exposed over several centuries not only to Jews and Judaism, but also to Muslims and Islam. In this sense, Torquemada was a very modern man.
2. Walter Russell Mead has described this process in Britain, in this case one decidedly imposed by the vicissitudes of history. See “Faith and Progress”, The American Interest (September/October 2007), and his book God and Gold (2007).
3. Babylonian Talmud, Shabbat 31a.