Those were the first few pleasant moments of a two-day event, the second annual “Human Rights and the Humanities” conference at the National Humanities Center, held in March 2013. Greetings had been extended, sponsors thanked, the distinguished keynote speaker introduced, and the audience settled in. But almost immediately the speaker was parting company with many in his audience. His title was “Do States Have the Right to be Wrong about Justice?,” and his argument, surprising and disturbing to many in the room, was that yes, they did. States, the speaker insisted, can—short of such undisputed atrocities as massacre, ethnic cleansing, and genocide—oppress, imprison, and injure their own citizens because, as he put it, “state sovereignty is the only guarantee of moral pluralism.” If states were not recognized as masters in their own houses, with a “right to be wrong” in the eyes of others, we would rapidly return to an era of imperial impunity, with stronger states enforcing their moral, political, and cultural preferences around the world.
For those who came to the conference expecting that the concept and the cause of human rights would be affirmed, this was a decidedly rocky start. For in these opening moments, a fundamental premise had been called into question. The founding document of the modern concept of human rights, the 1948 Universal Declaration of Human Rights, had begun by proclaiming universal rights and standards: “Whereas recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world …” The massive moral assurance reflected in this phrasing had for decades been unquestioned by those, especially in the West, who denounced practices such as honor killings, the economic and educational oppression of women, human trafficking, genocide, child labor, and torture. “Pluralism,” which the speaker was advancing as a principle so precious that its preservation justified a very great deal of injustice, had in practice meant the self-interested behaviors of tribes, clans, militias, dictators, ruling parties, or juntas. In a sense, pluralism was the problem that the Universal Declaration attempted to solve by establishing norms of dignity and security that overrode local practices and prejudices. According to the Declaration, states had no right to be wrong about justice; indeed, they had an obligation to be just or to suffer condemnation or even correction by international force majeure, as when NATO intervened against the Serbian genocide in Kosovo in 1998-99.
But the same moral confidence that stiffened the spine of NATO had also been invoked by others with their own agendas. On several occasions, for example, George W. Bush had borrowed key phrases from this document when he announced his goal in going to war against Iraq, proclaiming on Human Rights Day in 2004 that “Freedom and Dignity are God’s gift to each man and woman in the world,” and that the United States had an obligation to deliver that gift, sentiments he repeated in April 2013 at the opening of the George W. Bush Presidential Library. A “human” right can easily be translated into a “God-given” right and proclaimed by those embarking on holy wars of all kinds. In recent years, rulers deemed to be “wrong about justice” had been toppled not only in Iraq but also in Afghanistan, Egypt, and Libya; the results, in terms of human rights, could be called equivocal at best and tragic at worst, since the democratic West, feeling secure in its superior grasp of universal standards, had been responsible for immense numbers of casualties, refugees, and general destruction on an astonishing scale. In several instances, interventions undertaken on human rights grounds had led the West into unwitting complicity with forces unalterably opposed to Western conceptions including human rights.
These and other examples were undoubtedly on Michael Ignatieff’s mind as he unwound his dangerous and difficult argument. Currently a professor at Harvard and the University of Toronto, he had previously been the leader of the Liberal Party in Canada and candidate for prime minister; before that, he had been a novelist, historian, journalist, and director of the Carr Center for Human Rights at Harvard, with unmatched experience in witnessing, reporting on, and thinking through human rights violations on the ground as well as in the seminar room. He had publicly backed Western intervention on human rights grounds in Kosovo and Rwanda, and, to the dismay of many liberals, had supported the Iraq war, arguing that the United States was in a position to create a “humanitarian empire.” He has since recanted his support for the Iraq war—and lost the election and resigned from politics—and his keynote address was interpreted by some as an attempt to establish a rationale for this recantation, a new framework for human rights that would preserve the basic cause of securing human dignity and freedom while backing away from the international approach that had led the moralizing West into one debacle after another. According to the theory Ignatieff was now advancing, standards may be universal, but the struggle to root them in sovereign states must be carried out by people such as Nelson Mandela and Aung San Suu Kyi who can “vernacularize” the universal in a particular context: “All human rights politics are local.”
As the questions directed at him after his talk made clear, some wondered whether this account was itself too local, too protective of the interests of the Western powers that had assumed responsibility for ensuring global human rights. Surely, one questioner asked, there are limits to a state’s right to wrongness, and occasions when intervention is warranted? And, another wondered, if local judgments are always presumptively right, what do we do when atrocities such as female genital mutilation are accepted as customary practice—are we simply to acknowledge that the world contains many wonders and celebrate diversity? Are we not obligated, an elderly gentleman asked, to intervene even in certain cases where the victim herself does not register a violation? After all, people who have been abducted sometimes refuse to escape, rape victims sometimes blame themselves, and people who have suffered trauma often cannot grasp what has happened to them. Why should the state, the tribe, the family, or even the victims be the arbiters? Ignatieff stuck to his position. “I totally reject the Marxist view of ‘false consciousness,’ ” he said; “we have to take seriously the doctrine of ‘victim’s consent,’ ” which gives to the victim the final say on whether a crime or violation has been committed, no matter what anyone else may think.
But, another asked, what about the rising number of stateless people—where can they turn? And what about human rights in failed or failing states? Or corrupt states: didn’t the Nuremberg trials establish the principles that individuals had ethical responsibilities that could contradict the edicts of the state, and that state officials could be prosecuted for rights violations? Does the United States have the right to insist on a “pluralistic” understanding of justice at Guantanamo or Abu Ghraib? And what value, precisely, does pluralism among states have if the citizens of those states are living under, for example, the Taliban? If the regime does not value human rights, are the rights of humans under their control simply negated? And who decides if crimes committed in, or by, a state are so horrendous that intervention is justified?
While these questions were being debated from the lectern, there was a buzz in the audience as people took up questions on their own. In one of these mini-debates, the word “drones” was audible. In fact, drones represent a rupture in the theory of state responsibility for its citizens’ welfare. The primary responsibility of the state, as Ignatieff stressed, is to its own citizens. Drones protect citizens both by killing terrorists who threaten them and by eliminating risk for soldiers who would otherwise have to put themselves in harm’s way. In some respects, drones are a humanitarian option. And yet there is something so morally repulsive about drones that some have been led to the brink of arguing that people—even terrorists—have a right not to be killed by such technology.
Again and again, Ignatieff responded to challenges with a restatement of his position and a concession that “[t]hese are difficult questions.” And after ninety riveting minutes, the evening drew to a close. But over the course of the following day and a half, these questions kept returning in different forms.
The fundamental question Ignatieff had raised concerned the relation between human rights and modern democracy. Article 21.3 of the Universal Declaration states that “[t]he will of the people shall be the basis of the authority of government,” and that this will must be expressed in “periodic and genuine elections” with universal suffrage and free voting procedures. Clearly, the framers of this text intended to link human rights and democracy. Ignatieff had affirmed that identification, but he pointed out the unstated risk it implied, the “right to be wrong.” He was joined in his view the following morning by another controversial Canadian scholar, Daniel A. Bell, who teaches at Tsinghua University in Beijing. Bell began the day by advancing two arguments: that democracy did not necessarily serve the cause of human rights, and that it was not itself a human right. In fact, he said, while the United States often lectured the Chinese about human rights, the Chinese system, in which people vote for their local officials while the central government is organized, at least in theory, on a more meritocratic basis, might well serve the cause of human rights better than one that entrusts everything to voters who, as many studies have shown, are often ignorant, malleable, or misguided about their true interests. Westerners often describe the Chinese government as corrupt, but Bell pointed out that many democratic governments, some more corrupt than China, consider corruption part of business as usual in politics, whereas in China it is at least recognized as a real threat to the legitimacy of the system, a “core problem.”
Not everyone was persuaded. The very next speaker, Anat Biletzki of Quinnipiac University and formerly of Tel Aviv University, contradicted Bell’s argument directly, insisting that human rights and democracy were necessarily one and the same, and quoting not only the Universal Declaration but also more recent statements by the United Nations and the European Union to the effect that respect for human rights is an “essential element” of democracy. Democracy, she insisted, was clearly the natural environment for the protection of human rights; as the philosopher John Rawls had said, nondemocratic countries might be “decent” but could never claim to be “just.” Citing examples with which she was intimately familiar, she argued that Israel is not a democracy because it egregiously violates Palestinian human rights, and if Palestinians under occupation cannot be said to live in a democracy, one reason is that they have no human rights. She herself had voted with her feet, moving to the United States on a permanent basis after years of living in Israel. Over coffee, one participant said to her, “Why did you move here? I can’t believe anyone would move to this country,” to which she responded, “Because here you can breathe.”
It fell to Catherine Gallagher of the University of California at Berkeley to argue that, when seen from a historical perspective, there is no natural relation between democracy and human rights at all. Both Alexis de Tocqueville and J. S. Mill saw in democracy, at that time a relatively new and untested form of government, a number of tendencies inimical to liberalism: narrow-mindedness, competitiveness, mediocrity, homogeneity, and intolerance. To both thinkers, democracy seemed to have what Gallagher called a “tropism toward tyranny and against freedom.” But as Mill came to recognize, democracy also represented a culture in which people might be able to develop a “quality of individuality” that would eventually enable them to bear and bestow such things as rights. If there was any hope for the rights of the individual under democracy, Mill thought, it lay in the fact that the cultural dynamic activated by popular sovereignty could in principle embrace self-criticism as a means of self-improvement, so that one day the phrase “liberal democracy” might not be an oxymoron. The “banal conclusion” to which Mill was led was, as Gallagher put it, that democratic cultures get better over time at balancing popular sovereignty and human rights.
This commonplace provoked Hans Joas of the University of Freiburg and the University of Chicago, who rose immediately to say that Mill’s conclusion was not only not banal, it was not even true: witness the behavior of the United States after 9/11, or the recent move in Switzerland to ban minarets. Joas’s new book, The Sacredness of the Person: A New Genealogy of Human Rights, had just appeared in translation, having already attracted a good deal of attention in Europe for its argument that the concept of human rights was born of a “profound cultural shift” taking place over the course of the nineteenth century in the West in which “the human person became a sacred object,” albeit in a non-religious sense. This shift had made human rights thinkable, but it had not secured human rights in practice; in fact, Joas said, the West did not seem to be improving in this respect at all.
Maybe not, said Robert Post, dean of Yale Law School, but with respect to the institution of the law, to which enforcement of human rights is entrusted, the question does not involve fundamental moral principles at all, and often comes down to a mere tactical decision. The question, in a strictly legal context, is not, “What are my rights?,” much less “What is the right?,” but rather, “What are my goals?” One can choose to call a given atrocity a violation of human rights, a criminal offense, an occasion for individual compassion, a war crime, a subject for a truth and reconciliation commission, or a customary practice in a particular social group. Since each approach entails a certain range of outcomes, one simply has to decide which course will be most productive. Confronting some appalling circumstances (Kosovo), the West has chosen to justify its actions by invoking human rights; but in other cases (Afghanistan), it intervenes in the name of self-defense; and in still others (Somalia, Pakistan, and, as of April 2013, Syria), it gravely recognizes affronts to justice but does little or nothing.
Nor is the West alone in making human rights dependent on cost-benefit calculations. The newly postcolonial Indian state was founded with no reference to universal human rights, while South African independence was explicitly framed in human-rights terms, because the founders of the new states had different conceptions of, and agendas for, their countries. Even in the American context, where the Declaration of Independence, the Constitution, and the Bill of Rights specifically mention rights, the choice can seem tactical rather than moral. As Evelyn Brooks Higginbotham of Harvard University pointed out, nineteenth-century abolitionists, and Frederick Douglass in particular, fought for the rights of African-Americans by arguing for a concept of human rights that transcended the judicial framework of the founding documents (which sanctioned slavery); but, as Post noted, Susan B. Anthony, who could have done the same, argued for the rights of women by claiming that such rights represented not a higher form of justice than the established law, but simply a realization of the deeper intent of the law.
Concluding the conference, Wang Hui of Tsinghua University in Beijing invoked a notion anchored in classical Chinese thought but articulated with great force by Zhang Taiyan (1869-1936), the “equality of all things.” This concept could, Wang suggested, play a useful role in contemporary debates about human rights, for it posits a universal and non-anthropocentric substrate of being, a preconceptual unity that binds together not merely all people, but all things—atoms, lichen, water, hyenas, beryllium, subjects, objects, concepts, each with its own distinctive character and all in that respect equal to the others. Such a notion, whose difficulty in a Western context he fully acknowledged, might illuminate a path to universal human rights by enabling us to see, for example, that democratic nation-states do not have a monopoly on virtue, and that principles that claim to be universal—such as, he implied, Ignatieff’s “pluralism”—are not universal enough if they retain this implicit bias in favor of structures of exclusion and hierarchy. Before the audience could ease into a mood of Buddhistic acceptance, however, Post was at the microphone once again. “Recognizing the generosity of this notion,” he said, “I want to pose the question: if you followed that out, could you have law at all? If you lump everything together, don’t you make it impossible to judge? Are rights of any kind consistent with such a notion of equality?”
“Perhaps you have a point,” said Wang, “but it is understood that, in thinking about the law, one must also think beyond the law,” adding, after a pause, “whatever that may be.”
This is where things ended, in the ambiguous but immensely fraught space between the law—and rights, and humans—and whatever lay beyond them. The idea of human rights, it seemed, was stuck, pulled in two directions by aspirations to universal notions of justice, freedom, and dignity on the one hand, and, on the other, by the limited, local, and merely legal means that fallible humans must deploy to achieve these ends. One reason that human rights violations are so often committed by states—as a panel on genocide with Christopher Browning of the University of North Carolina, Richard A. Wilson of the University of Connecticut, and Ben Kiernan of Yale had underscored—is that all legal structures, insofar as they are committed to making distinctions, are themselves in violation of the principle of equality. Ignatieff’s argument that states alone have the right to determine their own forms and norms of justice casts a very bright light on this violation by placing states in opposition not just to human rights but to any “universal declaration” on any subject whatsoever.
Perhaps, however, Ignatieff had simply misstated his case. Did he—once so stout a defender of humanitarian intervention—really mean to say that states alone are the final adjudicators of justice within their boundaries? Or did his argument have a deeper personal relevance as a defense of the right of individuals to be wrong not just about justice but about anything and everything, including, for example, the wisdom of intervening in foreign countries on the basis of convictions about human rights? If the right to be wrong was applied to individuals rather than states, one could imagine a positive if limited connection between democracy and human rights, for in democracies, at least in principle, the expression of beliefs, attitudes, arguments, and convictions—manifestations of the quintessentially human capacity for reflection—enjoyed rights to freedom and protection they did not have in other systems. In this respect, democracy and human rights form a natural pair. This leaves in place, of course, the manifold inequalities and injustices associated with even the most democratic states, but it does at least identify one segment of the spectrum of human rights that democracies might claim as their own.
And perhaps, one might even hope, the democratic capacity for self-criticism might yet lead toward a more effective and less arrogant or delusional concept of human rights, if democracies permitted themselves to be challenged and inspired by the thought of something beyond democracy—whatever that may be.