
Friday, August 01, 2014

The Legacy of Group Thinking

1. The Culture Wars — Semantic authority — Who is anybody to tell me what to call myself? 2. The non-Marxist left — Two cheers for the cultural left — Communitarians against liberalism — The bad metaphysics of the unencumbered self; 3. The sources of multiculturalism — The authoritarian structure of group thinking — Rhetorical groups vs. actual groups — The "I" becomes "Aye!"; 4. What words I should use — Individuals as ends in themselves — Opting into and out of communities; 5. Autonomy and trauma — Racial wisdom — Thin and fragile communities — The instruments and conditions for autonomy; 6. Pride and Why hope? — Separatism and semantic authority — Shared practices and confidence schemes; 7. Antiauthoritarian adolescence — Who are you to tell me when my trauma is over? — Adults without confidence are a moral problem


1.      In the ‘80s and ‘90s, two parallel discussions took up a good part of American political discourse, both sometimes denoted by “the culture wars.” At the national level, debate revolved around affirmative action practices and policies. To see the connection to the term “culture,” one must recognize how “political correctness” as a term of abuse was part of the same debate. While political correctness was attaining infamy, the less abusive “multiculturalism” denoted the more parochial culture wars inside American universities. The idea behind affirmative action practices was that systemic forms of racism had become embedded in all kinds of American practices (e.g. in the education system or governmental hiring practices or university admissions processes) and that only by active affirmation of equity could these systemic forms of disadvantage based on racial classification be corrected. As an extension, political correctness was the idea that our language is a practice that performs some of this embedding. When minority groups began requesting (or demanding) semantic authority over themselves, the post-Civil Rights milieu was inclined to hear them. So, as an example, within a short space of time accepted parlance went from “colored” to “Negro” to “Negro-American” to “Afro-American” to “African-American,” with the stock of “black” rising and falling randomly.

Now, I said “accepted parlance,” but some instinct in most of us is going to prompt, “accepted by whom?” If I’d said “approved,” then the siren certainly would’ve gone off. Who is handing out this “approval,” judging the “correctness” of our language? I was talking to a friend recently when for some reason this issue of the shift in what to call black people came up. He’s black and was born in the early ‘60s, and so lived through some of these shifts. With impatience he said, “I grew up saying ‘negro,’ but then I was told ‘black.’ Fine. And then I was told ‘African-American,’ and I said, ‘fine,’ but who cares? Why does it matter? I was born on Long Island, not in Africa.” A little while after that, I was talking to an eminent scholar of African-American literature about Ralph Ellison, and we stumbled into that area as well. I told him about my friend, and he related an old quip that someone made in the ‘80s—that only an academic could’ve come up with “African-American.” We laughed. But this is a nexus of the two culture wars. My friend is no academic by any means, and he votes Republican. His instinct comes out of the American self-reliant tradition. Who is anybody to tell me what I should call myself? And what does it matter? The scholar and I, however, laughed from ironic self-deprecation, at the pieties of academe. For “African-American” is ensconced in public discourse in large part because of its enforcement in the cultural sphere of the university, which permeates laterally into other intellectually minded spheres and longitudinally across multiple generations of the college-educated.

2.      There are many relics of the culture wars, of which Allan Bloom’s The Closing of the American Mind (1987) is probably the most famous. But that book, like the right-wing hatchet jobs that abut it (Profscam, Tenured Radicals), doesn’t interest me in the long term. What do interest me are the books by those on the non-Marxist left. During this time period, the term “liberal” was used to refer to this left, just as “radical” was used for the kind of leftist that generally preferred a post-Marxist, highly theoretical vocabulary for talking about politics, a left that also had a very negative attitude toward America, sans phrase. Two of these books that I’ve kept close to heart for many years are Richard Rorty’s Achieving Our Country (1998) and Stanley Fish’s There’s No Such Thing as Free Speech (1994). Fish’s book has a more complex relationship to the attitudes and situation of that era, as well as to our own, but Rorty’s book simplifies the issue by splitting the two lefts into the liberal “reformist left” and the radical “cultural left.”

This latter term Rorty picked up at a conference at Duke on liberal education, in the midst of the wars, from a comment Henry Louis Gates, Jr. made about the “Rainbow Coalition of contemporary critical theory.” Rorty thought that this left deserved at least “two cheers,” as he put it in the title to his contribution to that conference. What they were doing in focusing our attention on cultural issues of racism, misogyny, and homophobia, and in particular how our language ramifies those things, was an important step in the history of moral progress. The only problem with this left is that it seemed as if they forgot about the money. Class, as a defining concept in one’s politics, seemed to get left behind, and it was hurting the politics of the left at the national level. And when one sees the culture wars against the background of the Nixon/Ford-Carter-Reagan-Bush-Clinton-Bush sequence, one can see the prescience of thinking that the parochial-level conversation was, perhaps not hijacking, but obscuring what was happening in national-level politics.

I have great sympathy for this point of view, for I tend to think—in my naifish way—that money would solve a lot of problems. [1] However, David Bromwich doesn’t seem to think that the cultural left even deserves two cheers. Bromwich, a friend of Rorty’s and an English professor at Yale, went after the cultural left, not on political grounds for forgetting about class and producing a skewed and losing political strategy, but on cultural grounds. In Politics by Other Means: Higher Education and Group Thinking (1992), Bromwich argues that the forces at work in multiculturalism are undermining the liberal customs and traditions that support the practice of democracy. I have a lot of sympathy with this trajectory of thought as well, for debates in political theory at the time of the culture wars turned on the thought that the very concept of tradition was at irreducible odds with liberalism. Thus there was that motley crew of “communitarians”: Michael Sandel’s trenchant attack on Rawls in Liberalism and the Limits of Justice (1982), Michael Walzer’s alternative model in Spheres of Justice (1983), Alasdair MacIntyre’s sweeping story of descent into moral unintelligibility in After Virtue (1981), and Charles Taylor’s equally sweeping story in Sources of the Self (1989).

A lot of the debate with communitarians was extremely productive—at the level of theory. The only thing they all have in common is that they are anti-Kantian, and what Rorty and Bromwich have in common is an equally anti-Kantian attitude toward politico-moral philosophy. [2] The master argument of the communitarians was that liberal political philosophy grew in the bosom of Kantian moral philosophy. Kant argued that “the moral” was produced only by a will that willed actions built on the categorical imperative. These were actions that came from no particular interest—interests are contingent features of your empirical self. Moral action issues only from the transcendental self, a will not built out of any particular feature of yourself that you picked up from the environment in which you grew up. This is the form of argument Rawls translated into the “original position” argument: pretend you’re behind a veil of ignorance and know nothing about your own features—what kind of just society would you construct for everyone, including yourself?

Sandel suggested that the nature of the self this politico-moral philosophy imagines is peculiarly “unencumbered.” Thinking of yourself this way, as unencumbered by any relationships to the past, future, or the people around you, then dovetails really quite nicely with a libertarian economics that has produced some really bad socioeconomic disparities. The communitarians, riding high on a crest of anti-Kantian argument, said that the philosophy is unworkable, and that without that justification, liberalism must fall apart. Additionally, it has produced a uniquely introverted culture with no tradition of coherence to fall back on because it imagines itself without tradition. As Emerson put it, we are endless seekers with no past at our back. So when Rorty and Bromwich turn to the communitarians, their response is roughly: “No, you’re right—Kant produced a terrible philosophy for liberalism. But political liberalism is a practice and tradition, and it doesn’t stand or fall by its philosophical articulation. What we will do—and the grounds upon which you should debate us—is articulate both a better philosophy that agrees with all your anti-Kantian positions and a sense of what liberalism’s practices and traditions are to help repair what we agree is an increasingly introverted public culture.”

3.      What bound the communitarians together was the effort to work from a post-Hegelian tradition of philosophy. This brought them close to the wisdom the post-Marxists wielded. Multiculturalism, however, had quite other sources than Hegel, or even Marx—what motivated and gave it shape was, not the experience of reading a certain tradition of books, but the life experience of individuals shaped by their categorization as an X. [3] By this I mean, not that a person is a woman, or black, or homosexual, but that the person is reduced to being only that category. If you were a typical white man in 1830 and you saw a man walking down a road in the South, then if that man was black, you knew all you needed to know about him as you approached. “Who do you belong to? Where are you going? Where’s your master?” Multiculturalism was the large-scale implementation of the tactic embedded in the slogan “black is beautiful.” The slogan gets its significance (and efficacy) by rubbing against the practices of treating “black” as if it weren’t—e.g., practices of hair straightening and skin lightening. Multiculturalism was the movement of saying, “it’s okay to be a member of the group you’re identified with.”

The trouble is that that wasn’t all multiculturalism turned out to be. What “multiculturalism” obscures, like every -ism, are the boots on the ground translating the -ism into practice. Bromwich retails a few of these actions, translating them—as every good intellectual must—into allegories for the theoretical and practical commitments at work. Bromwich is very effective in showing how what underlies both the cultural left and the cultural right (e.g., William Bennett and Bloom) is an authoritarian structure. The Hegelian conceptual priority of community to the individual, pace Popper, isn’t inherently authoritarian, but when translated from the arid sphere of political theory to the practical politics of the post-Civil Rights left, emphasis on the embeddedness of the individual in a community produced a line of thought Bromwich calls “group thinking.” An example of its linguistic habits might be seen in my earlier formulation of what happened beginning with the Civil Rights movement: “When minority groups began requesting (or demanding) semantic authority over themselves....” But the concept of a “group” here obscures an ambiguity, for it isn’t like all black people got together, signed a petition of request to be referred to as “African-American,” and then delivered it to white people (a parallel group-designation to go with the first). “Minority group” here is a hypostatization, a rhetorical device to cover the thoughts, feelings, and actions of a number of individuals. The problem is that, unlike the President of the United States who has the authority to speak for Americans in foreign affairs because he won an election, there’s no equivalent method for determining who has the right to speak for these rhetorical groups. Thus, when individuals begin formulating their thoughts unreflectively with these kinds of locutions—using the rhetorical “we” as a proxy for oneself and an implicit usurpation of semantic authority—Bromwich says they stop real thinking. [4]

If a group of individuals all start speaking for the group, everything is fine as long as everyone says the same thing. But as soon as there’s dissension—“hey! that’s not what I think!”—then the group will talk amongst themselves about what the group thinks. The thinking, you’ll see, happens before the next speaking of “what the group thinks.” But “black people” isn’t a real group in the same practical sense because there’s no place they all meet on Fridays to decide what they think and what they’ll bind themselves to, take responsibility for. So what happens when there’s dissension in a rhetorical group? Implicit rejection—by dissenting, and individuating yourself with the “I,” you’re automatically set apart from all the other voices still saying the same thing. Bromwich’s argument is that this kind of rhetorical “we”-ing produces a covert norm of conformity, because once the habit of chanting begins, you’ll notice when someone stops, and if those people with the habits take control of actual groups—i.e. institutional apparatuses with practical levers of control (e.g. firing a person)—then you’ll have an incentive to beware of calling yourself out. Every “I” will become an affirmational “Aye!” [5]

The cultural left wanted to be antiauthoritarian, but its implementation in an institution—and an institution without authority is no institution at all—created the situation in which a black person can be told what they should call themselves because they are black. [6]

4.      But who are you to tell me what words I should use? Who am I, indeed—that line of thought cuts very deep, much deeper than we often allow it to. That question is antiauthoritarian in impetus and demands not only an account of authority, but an account of the moral stance generally—the question undermines our ability to use the word should or ought. Bromwich senses the practices of conformity underlying the emphasis on individuals being embedded in demarcated groups, and this is why he smartly suggests Emerson’s “Self-Reliance” as a spiritual antidote: “Whoso would be a man, must be a nonconformist.” But who is Bromwich, or Emerson, to tell us who we should, ought, must be? In the polemical context, this kind of Idiot Questioning can get old fast, but if we’re going to take Descartes’ idiocy seriously, why shouldn’t we take this seriously too? In other words, just as Descartes demanded an account of knowledge, so do we still need an account of authority. [7]

This is the problem Bromwich faces: The political project of a democratic community, which the United States was formed to embody, values the individual as an end in itself. Political liberalism says that the point of a community is to produce individuals who are differentiated from the community that produced them. Multiculturalism thus seems regressive for seeing individuals as identified with communities (hence, “identity politics”). The problem isn’t that you shouldn’t identify with a community—Bromwich agrees with Rorty that the left’s inability to identify with the American liberal political tradition is harming its ability to be an effective force in American national politics. The problem is that people aren’t given a choice in which communities they can identify with—if you’re born black, then you just are part of the black community. You might be born in America, and thus be part of the American community, but the entire reason Bromwich and Rorty are compelled to argue that the cultural left should act as part of the American community is that the cultural left has obviously chosen not to so act. There’s no practical mechanism there to make a person identify with the country and its traditions the person was born into. And if there were—like taking a loyalty oath by affixing a flag pin to your lapel—then it would be as dumb or disastrous as it sounds. [8]

For political liberalism, the idea is that individuals can opt into communities if they wish, like being a cheerleader or going to church. The good point to respond with is that there are some communities you don’t get a choice in, and the analogy here is with family: you don’t choose your family. And likewise, you don’t choose what country you’re born into, what genitals you have, what color your skin is, who you like having sex with, or for that matter what church you go to growing up. The point of liberalism, however, is that part of becoming an autonomous adult is growing up and choosing whether to remain in the communities you were “born into” because of who your parents were. Maturity is identified on this scheme with autonomous choice.

5.      I find the identification of maturity with autonomy completely persuasive—after all, nobody on any side of American political debate believes in authoritarian political structures. But because socialization requires authoritarian structures, we differentiate between the rights and responsibilities afforded adults as opposed to children in any number of different contexts, thus endorsing a concept of maturation in the life of the democratic community. However, while I think that’s true, I also think that the history of treatment of individuals based on certain attributes (e.g. gender, race, sexual desire, genealogy) has left a mark on the processes of socialization still felt today. In an individual’s growth, this kind of mark is called “trauma,” and I think that concept, as many have used it before, is well-suited for talking about the effects of misogyny, racism, homophobia, and hereditary elitism. When individuals are reduced to a group against their will, it traumatizes them and arrests their growth into autonomous individuals.

The best way to see this is to recur to the imagined encounter in 1830s Alabama: one must see that one effect of the white man seeing the black man and knowing all he needs to know is that it produces a mirrored response in the black man—as soon as the black man, walking alone on a deserted thoroughfare in Alabama, saw a white man, he knew all he needed to know. For if he didn’t realize that he needed to hide in order to avoid those threatening and physically dangerous questions, then he wouldn’t survive 1830s Alabama. If he’d acted like an autonomous Kantian will, behind the veil of ignorance and unencumbered by the consciousness of being black-skinned, then he would’ve stumbled into the very real and very dangerous encumbrances of racism. So part of the practical wisdom that had to be passed from generation to generation for blacks was an awareness of racial categorization—forgetting that the masters think of you in some respect as all alike could lead to death. Indeed, this racial wisdom becomes self-enforced as the community suffers the effects of one individual’s forgetting of it. [9]

This is why “black is beautiful.” It is an outgrowth of a community forced to be a community by the flimsiest of attributes—one. It doesn’t seem to matter which one; if there’s wisdom in the last 200 years of moral reflection on this, then it might be that the difference between “thin” and “thick” conceptions of moral community is almost literally quantifiable, and that thin communities are not durable enough to last while fragile communities are dangerous to themselves. I’m not convinced of that line of thought, but it seems a profitable direction of inquiry. [10] “Black is beautiful” is the kind of slogan needed to give self-esteem to people who have been traumatized because of a flimsy but dangerous reduction of self. Racism and the other ugly reductions dug a hole for those they affected—and you can’t just levitate out of that hole or pretend you’re not in it; you have to fill it in.

Self-esteem has gotten a bad rap in the last 30 years because—beginning, in fact, during the same period as the initial culture wars—Americans have been found to have too much of it. The favored statistic is the difference between how good we think we are and our test scores that are supposed to quantify and validate how good we are. It has become a consistent fact that we think we’re better than we are. The rugged individualists of America (and people who so self-identify are often on the right politically these days, for whatever reason, with a venerable exception in the late George Carlin) were right to laugh at and denigrate the “Everyone’s a winner!” movement. Their instinct is that a win isn’t really a win if you don’t earn it. But what they weren’t fully cognizant of is the depth of the problem they still face as parents (and citizens, for that matter) with respect to self-esteem. For self-esteem is in the same family as pride, courage, confidence, dignity, self-respect, self-trust, and self-reliance. These are needed for individual autonomy, and every person in a liberal democracy has a right to the instruments and conditions for autonomy; for every individual has a right to grow and mature into an adult. So this is a practical problem of balance. You have to trust yourself to stand on your own, but Emerson realized that true self-trust is difficult, and cannot be treated as easy: “And truly it demands something godlike in him who has cast off the common motives of humanity and has ventured to trust himself for a taskmaster” (“Self-Reliance”). But you can’t brutalize a growing self either—we’ve all seen the horrors of that in portrayals of competitive sports families. Shame is the mechanism at work in learning the difference between winning and losing, correct and incorrect, but you can’t shame a person into the Stone Age without destroying the fertile ground out of which autonomy can grow.

6.      Rorty understood these difficulties, and so began his Achieving Our Country with a brilliant summary of the relevant balances:
National pride is to countries what self-respect is to individuals: a necessary condition for self-improvement. Too much national pride can produce bellicosity and imperialism, just as excessive self-respect can produce arrogance. But just as too little self-respect makes it difficult for a person to display moral courage, so insufficient national pride makes energetic and effective debate about national policy unlikely. Emotional involvement with one’s country—feelings of intense shame or of glowing pride aroused by various parts of its history, and by various present-day national policies—is necessary if political deliberation is to be imaginative and productive. Such deliberation will probably not occur unless pride outweighs shame.
The relevant problem that Rorty confronts is: what do we do when shameful acts seem to outweigh meritorious ones? The title of Rorty’s book is from a famous line in James Baldwin’s The Fire Next Time (1963): “If we—and now I mean the relatively conscious whites and relatively conscious blacks, who must, like lovers, insist on, or create, the consciousness of the others—do not falter in our duty now, we may be able, handful that we are, to end the racial nightmare, and achieve our country, and change the history of the world.” During Baldwin’s meditation on America, he goes to meet Elijah Muhammad, prophet of the Nation of Islam. Muhammad is essentially a separatist, who cannot hope that America might be able to change. Rorty says of the two:
I do not think there is any point in arguing that Elijah Muhammad made the right decision and Baldwin the wrong one, or vice versa. Neither forgave, but one turned away from the project of achieving the country and the other did not. Both decisions are intelligible. Either can be made plausible. But there are no neutral, objective criteria which dictate one rather than the other.
I take this to mean that there is no answer to “Why hope?”—no knockdown argument to force people into the position of being unentitled to give up on a group loyalty. For as I intimated before, a nation’s citizenry is already a rhetorical grouping on all fours with the ones Bromwich is concerned about, those of race, gender, or sexual identity. The problem Bromwich cogently faces is the interaction between these latter groupings and the former. For while they are all rhetorical groupings, the rhetorical grouping of national identity also has practical mechanisms for control. That makes an important difference.

The problem Rorty considers, however, is the role such separatism as Muhammad’s plays in the life of individuals negotiating a world in which all are not in control of how they are grouped. Rorty didn’t discuss this kind of problem very much in his work, but it shows up in his major essay on feminism, “Feminism and Pragmatism” (collected in Truth and Progress). [11] Taking a cue from Marilyn Frye’s book, The Politics of Reality, Rorty says that “individuals—even individuals of great courage and imagination—cannot achieve semantic authority, even semantic authority over themselves, on their own. To get such authority you have to hear your own statements as part of a shared practice. Otherwise you yourself will never know whether they are more than ravings, never know whether you are a heroine or a maniac” (TP 223, emphasis Rorty’s). This is where the interesting friction with Bromwich’s book occurs. The concept of “semantic authority” articulates “control over meaning.” We cannot just define words as we wish—words are public items that ping-pong between users, and thus can be imbued with significance a single individual has no control over. The problem for oppressed groups—individuals who are forced to belong to a rhetorical grouping because of the flimsiest of attributes: one—is that their language has been colonized. (And now you can see how these reflections can be extended even further.)

Language is the instrument of self-definition. The problem Bromwich skirts is that you cannot just declare yourself self-reliant. Self-reliance is earned, but in addition to being an attitude, it is also earned linguistically. Being reliant upon a self you have created from public linguistic materials poses the Idiot Question: are you really reliant upon a self you’ve created and not simply conforming, if unconsciously, to the movements of the herd? You can be confident of such authority when you can “hear your own statements as part of a shared practice.” But what if you’ve historically been disallowed from sharing in the practice? Can you be confident that the practice isn’t just foisting on you thoughts and feelings that are actually detrimental to your well-being, that the practice isn’t a confidence scheme, that you aren’t being conned?

7.      This is the existentialist motif of antiauthoritarian instincts, and teenagers often get to this point in their development. We adults say, “trust me: this is for your own good—you aren’t being conned.” And, in fact, adolescent rebelliousness is a requisite stage for autonomy. It might not always take the form we’re used to associating with rebellion—nose rings, tattoos, dyed hair—but if you don’t eventually rebel against an authority figure, then you won’t set off on the course of reflection required for making decisions on your own. [12] So demanding semantic authority looks like adolescent behavior to an adult facing an adult—“take it,” is the response, “I thought you already had it.” But the problem of semantic authority is more difficult than that. This is why the concept of trauma is useful. The problem isn’t “Why don’t you grow up?”; it’s “Who are you to tell me when my trauma is over?” No one can just wish it away, and everyone lives with the consequences. Who are we to tell people to grow up, when—as William James said in another connection—it feels like a fight? In the context of a family, growing up under parental figures is part of a neutral, necessary structure of authority. But in the rest of life, treating someone like a child is infantilization and “Ah, grow up!” is fightin’ words. That’s the dilemma right there. Adults without confidence are a moral problem. Telling someone to grow up is cruel. Treating them like a child is equally cruel. Cruelty, as Rorty defined the liberal ethos echoing Judith Shklar, is the worst thing we can do. But we live in a world in which historical conditions have made it difficult to produce autonomy. Worse, even without the weight of history, we don’t know any sure-fire methods of education for producing it. Our only consolation is that the value of autonomy is a relatively recent invention—hopefully we can figure this out.




Endnotes

[1] For example, I still maintain to friends that money would pretty much solve our problems with K-12 education, something I became convinced of after reading Jonathan Kozol’s book, Savage Inequalities. My conviction remains unshaken, even after hearing some very good points from friends on the inside of the situation and debate. Whatever the utility of Horace Mann’s vision of education for the commercialist agenda of turning us into good little drones that mindlessly consume, books like Dumbing Us Down just don’t provide a viable long-term solution.

[2] Some of them were more anti-Kantian than others. Bromwich, for example, feels comfortable with enlisting Kant into the articulation of his point of view, whereas Rorty’s distrust of Kant was so deep that he would never do so, even when he could recognize a compatibility with Kant on a particular score.

[3] We shouldn’t, for that reason, underestimate the importance of especially Marx to the theoretical self-understanding of this movement, particularly given the importance of the Communist Party in Chicago and Harlem between the World Wars. (And that’s not to mention the importance of Marx to our current overtheorized left.) One should also mention the importance of Hegel to Frantz Fanon in Black Skin, White Masks.

[4] Anyone familiar with Rorty, and particularly Rorty criticism, will wonder how this fits together with Rorty’s practice of using the rhetorical “we”: “we pragmatists,” “we historicists,” “we liberal ironists.” The rhetorical “we” is a flexible device, I think; my instinct is to say that Rorty’s “we” does not occlude thought the way Bromwich suggests the “we” can, and the way people have implicitly suggested Rorty’s “we” does. But as this is the most interesting and original line of argument in Bromwich’s book (that I’ve perhaps taken some liberty in reconstructing), I haven’t been able to think through all of its ramifications. For I also still believe, with Rorty, that you need to say “we” to construct a tradition and a community. (For a discussion of this facet of Rorty and the issues it involves, see my “Two Uses of ‘We.’”) So some serious thinking still needs to be put into how to say “we” without forming group thinking. How do we avoid that? What practices and habits do we need to have in place to make sure sheep don’t just start bleating back to us what we want to hear? It’s not enough to say “practices of self-reliance” because what are those? As long as power and authority are in play in the world, and on theoretical grounds I don’t think it’s possible to get rid of them, the issue of telling between sheep, shepherds, wolves, and autonomous individuals will always seem to be in the air. Could this be the democratic equivalent of epistemological skepticism? Not the Problem of Other Minds, but the Problem of Autonomous Minds?

[5] I discuss some abstract problems with the “we” prompted by Rorty’s work in “Two Uses,” cited in note 4, but see especially Section 3, where I turn to the question I take up below in the next section. Also, one might compare my discussion of Brandom’s Enlightenment notion of a “norm of commonality” that he invokes to distinguish Truth from the Good, which is at the base of his distinction between commitments to believe and commitments to act—see “On the Asymmetry,” esp. section 9. Perhaps I should add in this note that, despite my rhetoric in this paragraph about “real groups” and “actual groups,” when it comes to the metaphysics of this, rhetorical groups are as real and actual as these other kinds of groups. But we must make a distinction somewhere, rooted in differences in practice (in this case, practical control), even if it shouldn’t be at the level of ontology. And for my current purposes, we needn’t think it through any further. However, if one wanted a taste of the direction I would go, see an old discussion of Rorty and metaphysics in “Philosophy, Metaphysics, and Common Sense.” That paper moves through a discussion of Socrates, Plato, Robert Pirsig, and Rorty on how to define philosophy, and what is distinctive about it, given my shift in thinking and approach, is that it tries to translate problems in metaphilosophy into practical problems of behaving in the world. The discussion of Rorty is toward the end, starting with a paragraph that begins “Rorty treats professional philosophers the same way.”

[6] The saddest story to my ears was, of course, about the professor: in this case, the black political scientist whose class on black politics was boycotted by the black chairman of the Black Studies department because the latter thought the former “might not sufficiently represent the Afro-American point of view.” See Politics by Other Means, 23-26. What’s sad about it, I think, is not that the chairman had a view about the class—the proliferation of opinions and views and their friction with each other is the essence of Milton’s hope that truth will win in a free and open encounter—but that he led the particular boycott he did, meaning he lobbied the undergraduates in his own class to drop out of the other or get involved in protesting and pressuring other undergrads. (And the university is the environment for which we should have the highest expectations of creating a free and open encounter—if not there, where?)

That’s the same kind of subtle coercion at work as we see at issue in cases like Town of Greece v. Galloway, the recent Supreme Court case where an atheist and a Jewish citizen of Greece, NY sued the town for opening every town meeting with a Christian prayer. During oral arguments, Douglas Laycock—arguing for Galloway and Stephens—suggested there was coercion involved when all are asked to bow their heads or rise to their feet for prayer. Justice Scalia scoffed, saying someone who didn’t want to participate could just stay seated. Laycock responded: “What’s coercive about it is it is impossible not to participate without attracting attention to yourself, and moments later you stand up to ask for a group home for your Down syndrome child or for continued use of the public access channel or whatever your petition is, having just, so far as you can tell, irritated the people that you were trying to persuade.” (See page 37 of the transcript of the oral arguments, found here. An audio version with background of the case can be found here. My knowledge of the case is indebted to a student of mine that did excellent research on it.)

Students in the university need to be able to trust that an instructor’s politics, or other extraneous opinions other than the subject of the course, will not interfere with the student’s ability to take the course and do well. It’s one thing, I think, to let your views about such things filter in through the course in various ways; it’s quite another to begin persuading your students to act on your views. It’s the second that transforms the university space from one of inquiry into one of political persuasion—and political persuasion is coercive if you can be punished for not being persuaded. (I should be honest, though: I have from time to time made a plea for students to pay attention to politics, and to make sure to vote. It’s more or less extraneous to any courses I teach, but I figure I have some sort of civic responsibility to do so.)

[7] Since my concerns are philosophical and not polemical, I’ll add here that Bromwich understands this problem, though he wasn’t largely concerned with it in the space of his book. Bromwich was concerned with the effect of multiculturalism on our practices of higher learning, and particularly the practices of literary study, and not about offering an abstract account of authority. He does, however, have all the resources for one in his chapter “Reflection, Morality, and Tradition” and mounts a short version of it in Ch. 5 with respect to aesthetic judgment, and otherwise does nothing to undercut the possibility of pulling a more elaborate one together. (I don’t here pull one together, but I think Robert Brandom has made available the conceptual resources to do so. In sections 2 and 3 of “On the Asymmetry” (cited in note 5) I give an outline of the main notions at work.) The line of thought I’m interested in is, taking for granted that an Emersonian account of authority is possible, how does that affect our assessment of the situation Bromwich faced? For Bromwich, it is clear from the tenor of his book, was deeply embedded in his polemical situation—i.e. he was very angry and concerned about literary study in America. But as we all know, emotion can fade—and it is helpful for it to fade for us to make reflective historical judgments about whether we should still be angry, or whether we should’ve been angry.

A case in point is Bromwich’s treatment of Barbara Herrnstein Smith in Ch. 5. I’ve grown to think of Smith as a pragmatist ally on the plane of epistemology, and Bromwich’s treatment of her Contingencies of Value is, perhaps not unfair, but at least unkind. It is in that chapter that Bromwich formulates the thrust of the argument of Smith’s book as a response to an expert community’s judgment: “who are we, after all ... who are we to dismiss the person who judges the game or the work quite differently?” I’m trying to give, in this brief space, a sketch of who the “we” is that produced multiculturalism and whose claims have a certain equal standing with those of the expert community. I think Bromwich is right about Smith, that if she turns her epistemological pragmatism into Idiot Questioning with a political point, then she’s undermining the very idea of expert communities—which is disastrous. However, I also think that a better understanding of just what the issue is that divides the Emersonian Bromwich from the Marxish multiculturalists can give us a better idea about what the real problem is.

[8] Bromwich discusses the equivalent of a loyalty oath in English departments on 26-29. I should say here that there’s another, slightly different point that Bromwich agrees with Rorty on, and that’s the view that the cultural left seemed to think that by doing their academic work they were doing political work. So, if one spends one’s days deconstructing a text (in class or at the computer), exposing phallogocentrism by showing how Woman is in a marginal position, or uncovering the capitalist ideology that is really the motivation of a character in a story—then one needn’t attend a rally protesting the very idea of “forced rape” (is there another kind?) or sign a petition for raising the minimum wage. In the words of David Hollinger’s slogan that Rorty liked, “you gave at the office.” See Bromwich’s discussion at 223-25.

[9] One of the best reminders of this historical experience with its attendant racial wisdom is Richard Wright’s “The Ethics of Living Jim Crow,” which appears as an introduction to his collection of short stories, Uncle Tom’s Children. One way to understand the differences between Wright and the two other major post-Harlem Renaissance writers, James Baldwin and Ralph Ellison, is by the differences in their geographical experience. Wright grew up in the deep South; Ellison in the marginally southern Oklahoma City; Baldwin in Harlem. Wright’s pessimism about being black in America—epitomized in his unforgettable description in Black Boy of it as the “essential bleakness” and “cultural barrenness of black life”—was challenged by Baldwin and especially Ellison. Both Baldwin and Ellison sound the polemical notes—Baldwin in his scorching “Everybody’s Protest Novel” and Ellison in “Richard Wright’s Blues”—of autonomous maturity as against what they take to be Wright’s short shrift of black prospects. But it’s possible to see this difference in perspective as one of different experiences—Ellison in particular never experienced the harshness of the Southern experience of being black. Growing up black anywhere in America produced trauma, but it’s important to distinguish between the different kinds of experience that the different regions produced and that inform that trauma. (I should also add that Ellison’s different experience didn’t stop him from producing an equally unforgettable literary epitomization of Southern black experience in the opening chapters of Invisible Man.) The fight that occurred between Ellison and Irving Howe in print about this issue in the early sixties is probably one of the most enduring polemical exchanges between great minds I know of. Polemic usually causes writings to date themselves, but as Howe suggests in his wonderful reflections on the exchange, their attitudinal differences and the problems raised by both pieces have remained, and prove immensely useful to think through ourselves. Ellison’s piece, “The World and the Jug,” was collected in Ellison’s Shadow and Act, and was a response to Howe’s defense of Wright against Baldwin and Ellison, “Black Boys and Native Sons.” The latter should be read as it has been collected in Howe’s Selected Writings, 1950-1990, with its two retrospective addenda from 1969 and 1990.

[10] I’m not going to try to unpack the significance of the thin/thick distinction. It has played an increasingly prominent role among a series of thinkers, and I think we’ve only begun to understand the distinction’s utility in conceiving the relationship between conceptual thought and politico-moral community. Thin/thick attempts to play the role once played by abstract/concrete, while avoiding some of the dialectical seesaws of nominalism and platonism. The idea was seeded by Clifford Geertz in his famous essay, “Thick Description: Toward an Interpretive Theory of Culture” (1973; included in The Interpretation of Cultures). It has lived a life in many hands, but the most important uses for my purposes are Rorty’s use of the distinction in CIS to articulate the concept of “final vocabularies” (see 73) and Walzer’s usage in his little book Thick and Thin: Moral Argument at Home and Abroad (1994).

[11] I think this is one of his most visionary essays, one we’ve yet to mine completely for insight. Much of Rorty’s later work, as he readily admits, was repackaging of earlier ideas for different audiences. Only occasionally does Rorty find himself in a position to formulate a new insight in this kind of work, since a good portion of it was also carrying further conversations with old interlocutors (e.g. Putnam, Habermas, etc.) or unimaginative ones (e.g. the many responds-to-his-critics books that Rorty took time to do). (I don’t mean to devalue either kind of work, particularly the latter; garden work is important to do for both sides—you can’t always be breaking new ground.) But the essay on feminism puts him into many interesting, new dialectical positions that produce some interesting reflections on pragmatism. One will find the general form of the argument I’ve made above about an individual’s self-esteem on TP 219.

[12] Against the background of his infamy as the supposed leader of a rebellion against analytic philosophy, Rorty once recalled that “my parents were always telling me that it was about time I had a stage of adolescent revolt. ... They were worried I wasn’t rebellious enough” (TCF 4).

Friday, July 12, 2013

Two Uses of "We"

1. We who agree with me – We as community; 2. Like herding cats – Foucault’s oui – Initiating vs. justifying; 3. Deliberating as a group – Whose heritage? Which communitas? – You are a function of we – You cannot reason your way to hope; 4. The we-initiator is prophetic – And arrogant – Emerson’s Sayer: too confident? – Ellison’s Emersonianism – Carlin’s Millian liberalism

1.      One of the things Richard Rorty was taken to task for most often—from teasing to anger—was his rhetorical use of “we.” It was also one of the earliest things his late coming to moral and political philosophy was criticized for, particularly by those on the left, and here, as in most of the relevant criticism of Rorty, the critic was his old friend Richard Bernstein: “Rorty frequently speaks of ‘we’ – ‘we liberals,’ ‘we pragmatists,’ ‘we inheritors of European civilization.’ But who precisely constitutes this ‘we’? Sometimes it seems as if what Rorty means by ‘we’ are ‘all those who agree with me.’” [1] This would, indeed, be disastrous if that is all Rorty meant by “we.” However, it is important to recognize that sometimes you do want to talk to just those “who agree with me,” though it couldn’t be “about all things” because you wouldn’t need to talk then (unless it were simply to remind everyone of the things y’all agree on, which isn’t as silly a task as you may think, but one I shan’t talk about for now). This “relevant we” is a community—all questions, assertions, positions are made and taken in front of some particular group.

In his original response to Bernstein, Rorty concedes that he has to spell out better who he means by “we,” and so begins his reply with a “political credo” in order to specify “the audience I am addressing.” [2] This wasn’t, however, exactly Bernstein’s problem with those “we’s,” and I want to bring out the back and forth slowly because the angles at which the two stand are both important. Rorty is concerned with the ability of political philosophers—or, really, people generally—to identify with a community in solidarity in order to propose reforms. Perhaps this is an ability to stand shoulder to shoulder, if only metaphorically, with an established political party in order to get things done—this kind of solidarity is exclusionary insofar as the solidarity you have is not with the opposing party (or parties). However, to get reforms for the whole nation, the kind of solidarity we are talking about is larger—identification as an American in order to convince everyone that the reforms of your party are what’s best for everyone. So you are addressing Americans while also acknowledging that they, obviously, do not agree with you about everything.

2.      Rorty, here, was doing something even narrower—addressing a subset (“the people whom I think of as social democrats”) of a national-political set (the American left), itself part of the larger array of America. [3] But Rorty thought that thinking in terms of solidarity was necessary for thinking in terms of getting stuff done. The reason one talks to subsets of various kinds is to get people pointed in the same direction, to add force to force to counteract opposing forces. The old cliché that getting leftists to agree on anything is like herding cats is apropos. And so Rorty criticized Foucault through the ‘80s for never quite being able to countenance himself inside of some solidarity group. “[Foucault] forbids himself the tone of the liberal sort of thinker who says to his fellow-citizens: ‘We know that there must be a better way to do things than this; let us look for it together.’ There is no ‘we’ to be found in Foucault’s writings, nor in those of many of his French contemporaries.” [5] Foucault, in I believe his last interview, replied to this particular point during his conversation with Paul Rabinow:
Rorty points out that in these analyses I do not appeal to any “we”—to any of those “we’s” whose consensus, whose values, whose traditions constitute the framework for a thought and define the conditions in which it can be validated. But the problem is, precisely, to decide if it is actually suitable to place oneself within a “we” in order to assert the principles one recognizes and the values one accepts; or if it is not, rather, necessary to make the future formation of a “we” possible, by elaborating the question. Because it seems to me that the “we” must not be previous to the question; it can only be the result—and the necessarily temporary result—of the question as it is posed in the new terms in which one formulates it. [6]
Rorty thought it was very important to respond to this point. [7] Bernstein quotes this passage, and Rorty responds to it in a footnote after his concession, earlier discussed, that he needs to be more specific about “we”:
I agree with Bernstein that I need to spell out the reference of “we” more fully. I think that this is best done by reference to a view of current political dangers and options—for one’s sense of such dangers and options determines what sort of social theory one is able to take seriously. However, I cannot figure out what Foucault meant when he said (in the passage Bernstein quotes) that “the ‘we’ must not be previous to the question.” With Wittgenstein and Dewey, I should have thought that you can only elaborate a question within some language-game currently under way—which means within some community, some group whose members share a good many relevant beliefs (about, e.g., what is wrong, and what would be better). Foucault seems to be envisaging some sort of simultaneous creatio ex nihilo of vocabulary and community. I cannot envisage this. As far as I can see, you can only describe or propose radical social change if you keep a background fixed—if you take some shared descriptions, assumptions, and hopes for granted. Otherwise, as Kant pointed out, it won’t count as change, but only as sheer, ineffable difference. [8]
Rorty picks out precisely the bit in Foucault’s passage that is most problematical because of the two roles “we” can play: “we” as initiating a community and “we” as justifying an act. The latter is what Foucault finds so offensive about “we,” and this is what he means when he says he doesn’t want to appeal to the “we’s” “whose consensus, whose values, whose traditions constitute the framework for a thought.” If Foucault had just said “consensus,” Rorty may not have bucked the point in the way he did, because the idea that some shit’s gotta change is based on the idea that the current consensus needs reconfiguration. But including “values” and “traditions” in his formulation of what he wishes not to appeal to is why Rorty suggests that Foucault envisages some “simultaneous creatio ex nihilo of vocabulary and community.” The whole point of the first half of Contingency, Irony, and Solidarity is that you can’t just make stuff up—you have to use the tools you were acculturated with. Why? Because there is no you until you’ve been acculturated. This is a Hegelian point. And Foucault’s response is just a little too decisionistic, the meta-ethical stance that suggests that you are an empty toolbox that should look around and put the good stuff in. Meta-ethical decisionism is the heart of right-wing individualism and accounts for the left-wing communitarian backlash in the ’70s and ’80s. [9] “The problem is, precisely, to decide if it is actually suitable to place oneself within a ‘we’ in order to assert the principles one recognizes and the values one accepts.” Who is this one? What are you made of, that can recognize and make decisions, if you’ve emptied out all the values and traditions?

3.      So, first there’s the conceptual Hegelian point that Rorty wants to press back, and then the rhetorical-political point that I made earlier—that to effect change in this world you’re going to need to form a solidarity group. Foucault finds consensus insidious because when you use a consensus to justify, you necessarily exclude the dissenters from counting in the justification. Republicans still have a Democratic president even though they may not have voted for the person. But that’s the way democratic politics has to work, right? Well, what if you’ve been excluded from the deliberation? That’s the real problem. And Americans, especially, should be sensitive to the problem of somethingsomething without representation. A “we” that gets too close to the justifying sense can seem like an act of exclusion if you use it in the middle of a debate. And this was Bernstein’s problem when he quoted Foucault:
At times … Rorty seems to be insensitive to the dark side of appealing to ‘we’ when it is used as an exclusionary tactic…. Rorty criticizes those versions of ‘realism’ that appeal to a ‘fact of the matter’ that is presumably independent of my (or our) interpretations. Yet he fails to realize that when he appeals to our shared beliefs and our common historical heritage, he is speaking as if there is at least a historical fact of the matter. [10]
Bernstein is summoning the outrage that the oppressed—those excluded from the process of creating those “shared beliefs” and that “common heritage”—feel when they are told that “this’s what ’Merca’s about.” It’s not their heritage.

But whose heritage should we have? Yours? Who are you? If you aren’t American, why should Americans have your heritage? That’s the conundrum if you don’t form that large kind of solidarity group—the intellectual sword wielded in pointing out the exclusion doesn’t simultaneously let you back in. So what does? Rorty thought the only thing that lets you, any you, into the democratic societas is a liberal communitas—liberalism is an ethics of inclusion. In his second run at Foucault’s point, in Contingency, Rorty responds to Foucault’s formulation of the problem—deciding whether to take part in the old community or form a new one—by saying that “this is, indeed, the problem. But I disagree with Foucault about whether in fact it is necessary to form a new ‘we.’ My principal disagreement with him is precisely over whether ‘we liberals’ is or is not good enough.” [11] This is because his “hunch is that Western social and political thought may have had the last conceptual revolution it needs,” [12] and this because “expanding the range of our present ‘we’” [13] is central to the liberal communitas.

So—after the Hegelian point that you are a function of a we, while taking on board the point that you are not thus reducible to that we, and the rhetorical-political point that you have to justify yourself in front of some community and form solidarity groups to effect change, there is still the problem of historical exclusion (or, continued exclusion). How do you decide whether or not to be part of the actual American community when it continues to fail regularly at the inclusionary image it prides itself on? Rorty didn’t think there was anything to say to this. You either hope, with James Baldwin, that the American common project of inclusion might be made, if not new, then at the very, very least much better than it currently behaves, though its dreams are more or less the same, or you cast off America as hopeless, as Elijah Muhammad and many others have—both alienated intellectuals and working-class folks who actually feel the brunt of the exclusions still left in America’s leaky ship far more often than the leisured intellectuals do. There will never be a deciding factor when it comes to deciding whether or not one should hope, at least no factor that will ever be portable. We, each of us, should have reasons for our belief or disbelief in a community, but reason will never decide the issue. [14]

4.      But how should we use “we” then? Sometimes I think people like Bernstein were being too hard on Rorty, because how else do you decide what a group should do than by a bunch of people saying things like “We should do X!” “No! We should do Y!” “No, Z!”? Rorty followed Wilfrid Sellars in thinking of these as “we-intentions,” and since communities don’t speak, only individuals do, somebody has got to speak up and suggest things the community should do and think. I think Bernard Williams may have given all the answer Rorty ever needed in his response to this problem in Shame and Necessity:
More than one friend, reading this book in an earlier version, has asked who this ubiquitous “we” represents. It refers to people in a certain cultural situation, but who is in that situation? Obviously it cannot mean everybody in the world, or everybody in the West. I hope it does not mean only people who already think as I do. The best I can say is that “we” operates not through a previous fixed designation, but through invitation. (The same is true, I believe, of “we” in much philosophy, and particularly in ethics.) It is not a matter of “I” telling “you” what I and others think, but of my asking you to consider to what extent you and I think some things and perhaps need to think others. [15]
As I said before, there are two uses of “we”—the we-initiator and the we-justification. The we-justification counts heads and uses that count as a reason for a belief. “We, in Wisconsin, counted up our votes, and give our electoral votes to Candidate X.” “Indeed, and there’s reason to believe that we in Wisconsin are beginning to go liberal because exit polls show that the margin X lost by in rural districts diminished, showing a rising left tide.” “Well, then we in Wisconsin should have liberal policies. Let’s furnish some.” You cannot, however, add individuals to get a we-community. You need to initiate it somehow. Declare a border or give yourselves a name—“Cubs fans” or “pragmatists” or “humans.” Like Foucault’s question, the we-initiator is prophetic—it proposes a community we could all belong to though we might not yet. It prophesies an ideal community we should live in by thinking we do and beginning to behave like it (and criticizing each other when one of us doesn’t). It is a request, an invitation, and as Williams points out, it is an invitation to help think through what we are all about.

The reason people still get miffed about “we” is because it is arrogant: you propose to speak for me? Well, no, not exactly, but kind of. Somebody has got to speak for we. This risk of arrogance is at the heart of Emersonianism, for self-expression is the most important general trait of humanity, but not everyone is equally given to it. Emerson was right to imply that the Sayer, above the Doer and Knower, was king in a democracy, but Emerson’s sense of Providence was far too strong. He saw the agon that was a necessary consequence of self-reliance, but he said, “Don’t sweat it. Just ‘speak your latent conviction, and it shall be the universal sense.’” [16] It shall? How? Emerson has no answer for that except his confidence, his optimism, which is to say his faith that Providence will make sure that everyone’s latent conviction (not those false, external ones) is in harmony (and never mind how we tell the difference between the truly latent and the falsely societal). So I take it that Ralph Ellison’s modulation of Emersonianism at the very end of Invisible Man speaks volumes about what we’ve learned is right and wrong about liberalism’s ethics of inclusion and its Emersonian need for everyone to act their own part:
Who knows but that, on the lower frequencies, I speak for you?
Who knows? We will only know when each of us looks inside and speaks what we find there. There’s a lot on the surface that divides us, but maybe there’s some kind of agreement lower down that needs articulation for us all to realize how much we do hold in common, and how we will need to hold it. And if not—well, there’s always George Carlin’s articulation of Millian liberalism: “Live and let live, that’s my motto. Anyone who can’t live with that, take ’em outside and shoot the motherfucker.” [17]




Endnotes

[1] Bernstein, “One Step Forward, Two Steps Backward: Rorty on Liberal Democracy and Philosophy,” in his The New Constellation, 246-7. This paper was originally published in Political Theory, Nov. 1987, where Rorty’s reply, “Thugs and Theorists: A Reply to Bernstein,” was simultaneously published (which I shall be quoting from shortly).

[2] Rorty, “Thugs and Theorists,” 565

[3] In fact, it’s more complicated than that, for the subset he is addressing in “Thugs and Theorists” and, say, Contingency, Irony, and Solidarity is international—he’s addressing not just Bernstein and Irving Howe, but Charles Taylor of Canada and Jürgen Habermas of Germany. However, in Achieving Our Country he is specifically addressing the American left.

[4] This isn’t, in fact, much of a criticism for Rorty, who attempts to have a much more nuanced set of terms with which to praise and criticize. The burden of Contingency, Irony, and Solidarity is, after all, the attempt to convince people to treat those with different tasks differently, and not test them all with one thermometer. So, Nietzsche and Heidegger, while getting F’s for political views, get A’s for attempting to achieve autonomy from Plato. Likewise, Orwell and Habermas get A’s for politics, but maybe B’s for philosophy. Rorty’s criticism of Foucault basically amounts to the charge that he unfortunately ran together his attempt at private perfection with a dominating concern for the welfare of others. What makes Foucault curious in this regard is that unlike, say, Plato, whose running together of those two things emitted a totalitarian-like fantasy, Foucault’s attempt to do both at once had very few adverse effects on the public utility of his best works. This comes out best in Rorty’s essay “Moral Identity and Private Autonomy: The Case of Foucault” in Essays on Heidegger and Others. The lesson he drew from it was, roughly: “[My] critics on the left … think of themselves as standing outside of the sociopolitical culture of liberalism with which Dewey identified, a culture with which I continue to identify. So when I say ethnocentric things like ‘our culture’ or ‘we liberals,’ their reaction is ‘who, we?’ I, however, find it hard to see them as outsiders to this culture; they look to me like people playing a role – an important role – within it” (Objectivity, Relativism, and Truth 15).

[5] “Habermas and Lyotard on Postmodernity,” EHO, 174

[6] “Polemics, Politics, and Problemizations: An Interview with Michel Foucault,” in The Foucault Reader, ed. Rabinow, 385

[7] I say this because of the timing of the essays. Rorty published “Habermas and Lyotard” in 1984, to which Foucault responded in 1984 (just before his death). Bernstein quotes the passage at Rorty in 1987, to which Rorty responds in 1987 in “Thugs and Theorists” (as I will presently elaborate). However, the exchange with Bernstein is after Rorty’s Northcliffe lectures of 1986, which were published that year in the London Review of Books (and Bernstein had already read when he wrote his essay). Those lectures were to become the first three chapters of Contingency, Irony, and Solidarity, but not before Rorty could add the last section of “The Contingency of Community,” which juxtaposes Habermas and Foucault, and begins with a reconsideration of how to respond to Foucault’s point.

[8] “Thugs and Theorists,” 575n4. Rorty continues: “Attempts at ineffability can produce private ecstasy (witness Kierkegaard and Nietzsche) but they have no social utility. A lot of Foucault’s admirers seem to think that he (or he taken together with Lacan, Derrida, Deleuze, and so on) showed us how to combine ecstasy and utility. I cannot envisage this either.” This last points in the direction of Rorty’s concerns in CIS.

[9] A backlash that happened amongst intellectuals, and only some of them, mind you (roughly, those that considered themselves “political theorists” or read Dissent). My right and left contrast here should have obvious resonance in our current American political climate, as it did then, and has in fact throughout the 20th century. However, one should never forget that many of the debates that ebb and flow in academic journals only rarely spill out into the wider political arena. It’s usually the other way around.

[10] Bernstein, “One Step, Two Steps,” 247

[11] CIS, 64

[12] CIS, 63

[13] CIS, 64n24. This footnote is Rorty’s reconsideration of the passage from Foucault, in which he emphasizes that he agrees with him “that the constitution of a new ‘we’ can, indeed, result from asking the right question. … But forming new communities is no more an end in itself than is political revolution.”

[14] For a reapplication of this line of thought to the "culture wars" of the 1980s and ‘90s, see “The Legacy of Group Thinking,” esp. sections 3 and 4.

[15] Shame and Necessity, 171n7

[16] The (second) quoted bit is from the beginning of “Self-Reliance.”

[17] One of the early jokes from Carlin on Campus (1984).

Friday, July 05, 2013

Better and the Best

1. Practical stance against the Best – Platonic prophecy as theory – Romantic prophecy as poetry; 2. Theoretical stance against the Best – Evaluative platonism and robustness – Possible betters vs. actual best; 3. The Village Champion argument – Truth and justification; 4. Sloughing off the relativist with self-referential arguments – Contradiction is a practical infelicity; 5. Practical attitudes should be allowed to trump theory – Absolutes are parasitic, not autonomous – Sellars’ parasitism argument about ‘looks’-talk on ‘is’-talk; 6. Is ‘best’-talk parasitic on ‘better’-talk? 7. You can’t say what’s best without saying what’s worse – Self-justifiers as platonism – Closing aperçu

1.      In an interview toward the end of his life, Richard Rorty was asked if he thought that advocates of black reparations had valid and serious arguments. Rorty responded:
There are valid and serious arguments, but there are also valid and serious arguments for taxing the citizens of the First World down to the standard of living of the average inhabitant of the Third World, and distributing the proceeds of this taxation to the latter. But since neither set of arguments will lead to any such action being taken, I am not sure how much time we should spend thinking about them, as opposed to thinking about measures that have some chance of actually being carried out. It would be better to think about what might actually be done than to think about what an absolutely perfect world would be like. The best can be an enemy of the better. [1]
The stance Rorty is here taking is a practical stance against the Best. Rorty is not against thinking about what the world should be like, as utopic and prophetic thinking is central to how Rorty conceives of the intellectual’s role in democratic culture. But what he is suggesting here is that we cannot spend the day in imagination. Rorty’s conception of prophecy is romantic, not platonic. A platonic conception of prophecy got off the ground when Plato began using metaphors of sight to articulate his sense of philosophy—“theory” derives from theoros, or “onlooker,” “spectator.” Plato’s transformation of the common Greek word for an audience member of a festival is what produced Dewey’s attack on the “spectatorial account of knowledge,” and when theoria was Latinized by contemplatio, it became invested with our derived word “speculation” from speculum, or mirror. Hence Rorty’s devastating attack on platonism in Philosophy and the Mirror of Nature.

The romantic conception of prophecy, however, is different—it gains its sense, not from theoria, but from poiēsis, “making,” from which we get “poetry.” Rather than seeing something already there, the romantic conception of prophecy rests more on the Renaissance trope of building “castles in the air.” While the platonic conception gains a sense of urgency from positing, behind a veil, a Best that, if only we could see it, would give us our blueprint for how to order the world, the romantic conception loses that urgency, but in compensation we get a picture of how toying with castles too far off in the sky can distract us from the reality around us.

2.      What undergirds Rorty’s practical stance toward the Best is his theoretical stance against the Best, which is to say against platonism. For Rorty’s stance at the level of theory is that the Best is a mirage because it is circumscribed by our fallibility and our lack of method—for any X said to be the Best, we have to admit something better might come along. Such an admission of fallibility is what then produces the search for a method with which one could know certainly that one has in fact found the Best. Dewey and Rorty thought that this Quest for Certainty showed a lack of maturity, and that we should rather face up to the contingency of our assertions of what is the best.

The problem for this line of thought is that a new form of platonist has come along that suggests that you can’t have a notion of what’s better if you don’t have a robust notion of the Best. Rorty wants to deny needing one. This new version is a particular species of the more general form of what I will call evaluative platonism. There are many species, but the basic form is that if you don’t have the Best in mind as a sort of ideal to shoot for, you won’t progress toward anything. The industrial-strength version is a full-blooded Platonism, which posits an Absolute Good that can be reached (at least in theory—you can see the Sun outside the Cave, even if you can’t reach it). There are, however, important knock-off brands, the most important of which for my purposes are Peircian end-of-inquiry notions, which suggest that one needs a robust focus imaginarius to make sense of inquiry—these are important precisely because of the range of agreement Rorty shares with these other pragmatists. Rorty’s romantic notion isn’t robust enough, on this view, because it simply expands our repertoire of possible betters without singling out one of them as the actual best. Since the traditional enemy of the platonist is the relativist, it should be no surprise that that is the epithet pragmatists like Hilary Putnam wield at Rorty for continuing to resist attempts at robustness. [2]

3.      To get a sense of how Rorty responds, we might turn to his Village Champion Argument against Jürgen Habermas, another pragmatist admirer of Peirce. Rorty sets the stage by contrasting the Peircian strategy with what I’ve called “full-blooded Platonism”:
Instead of arguing that because reality is One, and truth correspondence to that One Reality, Peircians argue that the idea of convergence is built into the presuppositions of discourse. They all agree that the principal reason why reason cannot be naturalized is that reason is normative and norms cannot be naturalized. But, they say, we can make room for the normative without going back to the traditional idea of a duty to correspond to the intrinsic nature of One Reality. We do this by attending to the universalistic character of the idealizing presuppositions of discourse. [3]
To “naturalize reason” in this context is to reject the utility of the concept of truth when attempting to figure out what is and is not knowledge—instead, naturalizers argue, justification is all that does any real work. Rorty does not want to collapse the distinction between the two, however, only argue that the T in the JTB conception of knowledge, exactly like the B, does not play an operative role in the determination of what people know. [4] Peircians think that the T does play an operative role, a transcending moment in which more than justification is had. Without the ability to transcend the moment of justification, they think, everything would be relative to a particular audience. Rorty continues:
Habermas’ doctrine of a “transcendent moment” seems to me to run together a commendable willingness to try something new with an empty boast. To say “I’ll try to defend this against all comers” is often, depending upon the circumstances, a commendable attitude. But to say “I can successfully defend this against all comers” is silly. Maybe you can, but you are no more in a position to claim that you can than the village champion is to claim that he can beat the world champion. [5]
Rorty later glosses this argument:
When we have finished justifying our belief to the audience we think relevant (perhaps our own intellectual conscience, or our fellow-citizens, or the relevant experts) we need not, and typically do not, make any further claims, much less universal ones. After rehearsing justification, we may say either “That is why I think my assertion true” or “That is why my assertion is true,” or both. Going from the former assertion to the latter is not a philosophically pregnant transition from particularity to universality, or from context-dependence to context-independence. It is merely a stylistic difference. [6]
I hope it is apparent how the Village Champion Argument, and therefore the relationship between justification and truth, bears on the relationship between the better and the Best. To claim that X is “the best” is to assert the truth of the claim “X is the best.” Rorty’s point is that these assertions are necessarily always made in front of some particular audience, and therefore the pragmatic power of any particular claim is relative to an audience.

4.      I think we can be a little more precise than Rorty’s usual mode of sloughing off the relativist as something he need not be concerned with. The Village Champion Argument carries a lot of force, but there is more in the area than just a stylistic difference. The pattern of Rorty’s mode is set in his infamous APA presidential address, “Pragmatism, Relativism, and Irrationalism.” There he says, succinctly and, some might say, too perfunctorily, “‘Relativism’ is the view that every belief on a certain topic, or perhaps any topic, is as good as every other. No one holds this view. Except for the occasional cooperative freshman, one cannot find anybody who says that two incompatible opinions on an important topic are equally good.” [7] The reason he wishes to dispose of the relativist quickly is because he thinks, rightly, that it hides the real issues at work behind the conflict between pragmatists and platonists. So he says that “if there were any relativists, they would, of course, be easy to refute. One would merely use some variant of the self-referential arguments Socrates used against Protagoras.” [8] The argument is like this:
Protagoras: Every view is as good as any other!
Socrates: Does that include yours?
Protagoras [sensing already the end]: Er, well, yes, it must, then hunh?
Socrates: Okay, so if your view that “every view is as good as any other” is as good as the view that “not every view is as good as any other,” why should we adopt your view over the ones that say yours is shit?
Protagoras: Because…it’s true?
Socrates: Yah, okay, but what grip do you have on truth that is independent of your relativism about goodness? Isn’t goodness in the way of views just truth? How can you have a view that is itself true where others are false, but the false views are just as good? Doesn’t that just make truth an idle curiosity, and therefore your own view idle as well? Can you give me no reason for adopting your view?
Protagoras: I…will…get back to you on that one…
Socrates: Yes, Miss Palin, please do.
Since pragmatism is heir to a discernible Protagorean tradition, we are, in fact, better positioned to get back to Plato on this issue. The first step is recognizing what underpins self-referential contradiction arguments. The invalidity of contradiction is the foul incurred when you say both “X” and “not X.” But this is just to say that, in the practice of saying, thou shalt not incur such violations of the rules of that practice. Following Wittgenstein, one has to think, here, of practices on the analogy of games. You don’t get to count as playing the game of football, as practicing football, if—as many turd-to-third football comedies have underscored—you jumpkick the quarterback. In some definable Practice of Saying, it is against the rules to hold contradictory claims. (This isn’t to deny that there are other practices involving words in which it is okay to do this. Poetry and lying are the most obvious examples, which is why Plato thought poetry was a form of lying.)

This first step gets us onto pragmatist ground: there’s nothing inherently wrong with saying X and not-X. There are many contexts in which it is fine, like when you say the latter with your fingers crossed or in the context of saying the sentence before this one. Contradiction is, then, a practical infelicity of a special kind. And once this has been identified, we can see the point in Habermas’ notion of a “performative contradiction.” This is part of the idea that you can’t say one thing and do another. And this displays the larger genus that the species of self-refutation falls under with regard to relativism, for it has often been accepted as a refutation of relativism (and nihilism, for that matter) to reply, when someone says “peeing standing up is as good as sitting down” and then pees sitting down, “if they are both just as good, then on what grounds did you make the choice?” For giving any grounds at all is grounds enough for identifying criteria being used to adjudicate truth from falsity, good from bad. And having done something is ipso facto having made a choice. So the doing contradicts the saying.

5.      So what underlies Rorty’s blithe rejection of relativism as a real concern is the pragmatic understanding that to behave at all is to refute the very idea that grounds of choice are all made equal. And while this is true, that our everyday practice refutes every day the theoretical thesis of relativism, it does not refute it at the level of theory. Rorty’s attitude tells us that we shouldn’t care about that, that we cannot, as Emerson says, “spend the day in explanation.” [9] And I think this is true as well, that our practical attitude toward the world should be allowed to trump pressures at the level of theory. [10] However, at the level of theory, it should be possible to show how relativism and platonism go the wrong way at things.

Robert Brandom, I believe, has shown how we might go at this. The charge of relativism leveled by platonists is motivated by the idea that you cannot talk about “betterness” without the Best, whereas heirs to Protagoras think all claims are of the form “X is better than …” with the ellipsis being filled in by specific claims. Brandom, in his notion of the pragmatically mediated semantic relation, has shown us how charges of relativism leveled at the pragmatist can be refuted by showing how Absolutes, like the Best, are parasitic, and not autonomous.

To understand this argument we need to understand the basic form of Wilfrid Sellars’ master argument, in “Empiricism and the Philosophy of Mind” and elsewhere, against the Myth of the Given. [11] For one example, one project in what Brandom calls the “empiricist core program of the classical analytic project” is to establish phenomenalism. Think of phenomenalism on the model of Berkeleyan idealism, whose commitment to empiricism was so powerful that, unlike Locke, who thought our knowledge starts with our individual experience, Berkeley thought all we could know was our individual experience. This gets transposed into the analytic idiom as a reductionist program—the attempt to reduce talk about how things are to talk about how things seem or look. If that reduction can be shown to be successful without remainder, then we’ve shown how we don’t need to talk about how things are, but only about how things seem. (Consider the analogous materialisms about supernatural entities—we can do without talk about witchcraft because we can get along fine in explaining what happens by talking about bad mushrooms. [12]) Reductionism in the analytic idiom is a semantic relation—when you explain what you mean, you are relating your first, misunderstood statement with a second, hopefully better understood statement. So when you suggest that when you talk about tables what you are really talking about are clouds of electrons at particular spatiotemporal vectors, you are suggesting a special form of paraphrase. “When you say ‘table,’ you really mean ‘cloud of electrons.’” [13]

So this is what “autonomy” means in this context—if you attempt to explain away a particular vocabulary (e.g., the vocabulary for saying “how things are”), you are suggesting that all the work can be done by a different vocabulary (e.g., the vocabulary for saying “how things seem”). For this reduction to work, the alternative vocabulary must be independent of the target vocabulary you are reducing into nothingness. If it isn’t, if you need the target vocabulary to use the alternative, then the reduction was misguided because you have a remainder (the remainder being something you still need but can no longer explain or account for). So if you can show that a marked-for-demolition vocabulary is needed to use the alternative, then you can combat the reductionism. Brandom says that Sellars’ argument “turns on the assertion of the pragmatic dependence of one set of vocabulary-deploying practices-or-abilities on another” [14]:
Because he thinks part of what one is doing in saying how things merely appear is withholding a commitment to their actually being that way, and because one cannot be understood as withholding a commitment that one cannot undertake, Sellars concludes that one cannot have the ability to say or think how things seem or appear unless one also has the ability to make claims about how things actually are. [15]
Sellars argues that ‘looks’-talk is pragmatically dependent on ‘is’-talk because you wouldn’t be able to do (i.e., deploy) the former without being able to deploy the latter. So while you might not be actually deploying ‘is’-talk when you say, “There seems to be water over there,” you are implicitly relying on your grasp of the difference between “there is water over there” and “oh, there only seemed to be water over there—it’s actually a mirage…too bad we’re gonna die now.” If you didn’t have a grip on this implicit distinction, and all you had was ‘seems’-talk, then you’d have to say that “there seems to be water over there” was false when it turned out to be a mirage. This would impoverish our ability to say true things, though, for with the distinction in hand I can say two potentially true statements (“there is…” and “there seems…”), while without it I can only say one. Now, what that one statement is is a good question, for the way ‘seems’ is being used appears to be the way ‘is’ is normally used—after all, the cases of falsification are exactly the same between “there is…” in our current modes of speech and “there seems…” within the reduced language-game (where “there is…” isn’t used).

To review: a reductive semantic relation can be refuted if it can be shown that the target vocabulary to be reduced is needed in order to use the alternative vocabulary correctly. If so shown, we will see that the alternative vocabulary is parasitic upon the target vocabulary, and so the latter is not a suitable candidate for reduction. And what we will have shown is that the semantic relation between the two vocabularies is pragmatically mediated. The relationship between saying “seems” and saying “is” is that saying “seems” is mediated by your ability to say “is.”
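For readers who like an argument’s skeleton laid bare, here is a minimal sketch of that dependence chain in Lean (my own toy formalization, not Sellars’ or Brandom’s apparatus; every name in it is invented for the illustration): abilities are modeled as propositions and each pragmatic dependence as an implication, so the parasitism of ‘seems’-talk on ‘is’-talk falls out as the simple chaining of three premises.

```lean
-- Toy formalization only: abilities modeled as propositions, pragmatic
-- dependence modeled as implication. All names are invented for this sketch.
theorem seems_parasitic_on_is
    (CanSayIs CanSaySeems CanWithhold CanUndertake : Prop)
    -- Premise 1: saying how things merely seem involves withholding a commitment.
    (h1 : CanSaySeems → CanWithhold)
    -- Premise 2: one cannot withhold a commitment one cannot undertake.
    (h2 : CanWithhold → CanUndertake)
    -- Premise 3: undertaking such commitments just is saying how things are.
    (h3 : CanUndertake → CanSayIs) :
    -- Conclusion: the ability to deploy 'seems'-talk presupposes 'is'-talk.
    CanSaySeems → CanSayIs :=
  fun s => h3 (h2 (h1 s))
```

Nothing philosophical hangs on the formal dress; the point is only that, once the premises about withholding and undertaking commitments are granted, the conclusion that the reducing vocabulary presupposes its target is immediate.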

6.      It is beyond my powers to show that what I called evaluative platonism can be so refuted, so the best I can do is suggest the path to be taken. The problem here is that it is beyond my ability to show that the reconstructed versions of platonists that follow Peirce are suggesting a reduction of ‘better’-talk to ‘Best’-talk. That is, roughly, what Plato was after when he set up his divided line between the parasitic world of shadows and the autonomous Realm of Forms, with the Good (i.e., the Best) being most autonomousest of them all (being the sun that produced all the shadows). But we need to acknowledge that Habermas and Putnam need not be suggesting this when they level their criticisms at Rorty for not having a robust enough notion of betterness to make inquiry function right. All they need to say is that ‘better’-talk is intertwined with ‘best’-talk, and Rorty seems to be suggesting that ‘best’-talk can be reduced away without remainder to ‘better’-talk. In other words, the arguments I’ve just elaborated could be the ones used against Rorty to hit back against the Village Champion Argument.

I don’t think this will end up being the case. My suspicion is that the only way to get the robustness criticism to stick is to reconfigure it in such a way that one would ipso facto fall within the bounds of the reductive form of platonism. (That was the form of Rorty’s criticism of Putnam’s labeling of him as a relativist: if I am, so are you!) Further, it is also my suspicion that ‘best’-talk is in fact parasitic upon ‘better’-talk, though I cannot see how to refute the idea that ‘better’-talk is also parasitic upon ‘best’-talk. If they are both parasitic, then they are intertwined, neither being autonomous of the other. So the best I can do is suggest that you can’t get rid of ‘better’-talk.

7.      For saying something is “the best” is pragmatically mediated by your ability to say what is better. The Best is parasitic on the better because you can’t specify what is best without specifying what is worse. This is the effect of having to answer “how do you know?” by justifying yourself. And since everyone agrees that the ability to justify is parasitic on the selection of a community, ‘best’-talk is as relative as ‘better’-talk, as much as you may wish that your claim about what is the Best transcends the community it is directed at. For it is simply not the case that your claim “X is the best!” is ipso facto better than “X is better than Y.” Perhaps you mean it more, but then by the same token you’re being less cautious and perhaps more dogmatic. But either way, when did caution or dogmatism tell us anything about the truth of the statements? People can have a terrible attitude and still be right.

Say we back up, though, and say that you won’t specify worse things in justifying the Best. We will concede that justification happens in front of communities, but we’ll avoid the implication of relativism by confining ourselves to an interlocking set of self-justifying things (principles, forms, whatever)—in other words, the community the justification is happening in front of is itself (and we just happen to be onlookers). This is the form of those fuller platonisms. If your justification for the Best, however, is another Best, then you generate a regress, for given the form of the Best, I will want to know what it is better than. “You say ‘the best’—the best of what?” What set does the Best reign supreme over? (Itself? Now it seems like a useless phrase.) So an interlocking set of Bests will generate an infinite regress, and hence the easy, unanswerable skepticism we can apply to any claim about what is the Best. “How do you know that’s the best? Are you able to survey all possible counterclaims and pronounce upon them beforehand?” To say you can is to pronounce yourself Village Champion, and we all know how quickly such hubris can make you look like the Village Idiot.

The only way to stop the regress is to accept the relative justification as sufficient, and this amounts to rejecting platonism and taking “the best” as expressive of “that I know of”—we might call that a transcendental fallibilism. The warrant for this redescription of what “the best” expresses is the pragmatically mediated semantic relation between the best and the better. You can’t do the best without doing better, but you might be able to do better without doing the best.




Endnotes

[1] Take Care of Freedom and Truth Will Take Care of Itself, 105

[2] See, e.g., Putnam’s “Realism with a Human Face” in his collection of that name and Rorty’s response, “Putnam and the Relativist Menace,” in Truth and Progress. Rorty’s reaction in that essay boils down to: “We seem, both to me and to philosophers who find the view of both of us absurd, to be in much the same line of business. But Putnam sees us as doing something quite different, and I do not know why” (59). I suspect Putnam’s long-standing use of Rorty as a punching bag has more to do with Rorty standing too close to Derrida and the events of the 1979 Eastern Division APA meeting than any thesis he’s ever promulgated. That’s my suspicion, at least, though I have no particular evidence for judging Putnam’s attitude in the latter case. (His remarks about the French littering his corpus I consider enough for the former.) The best description I’ve come across of what actually happened at the infamous APA meeting is Neil Gross’s description in Richard Rorty, 216-227.

[3] “Universality and Truth,” Rorty and His Critics, 5

[4] The JTB—“justified true belief”—conception of knowledge derives from Plato’s Theaetetus, and most epistemologists have accepted it as the beginning, though not the end, of wisdom in regard to knowledge. For a somewhat embroidered discussion of the relationship of pragmatism to the distinction between truth and justification, see my "Rhetorical Universalism." I say that belief doesn’t play a role in the determination of knowledge because all the belief concept tells you is that it is a claim being held by some person. And if you test that claim by wondering whether a claim being held doesn’t tell you something about its plausibility—like a show of hands, one being better than none—then you need to consider the fact that authority is a structure built into the nature of justification, and so a claim’s being actually held, and the credence thereby lent to it, already has the conceptual shape of justification.

[5] Ibid., 6. Rorty is discussing Habermas’ Between Facts and Norms. Though Robert Brandom, who I will be discussing shortly, tells us the title of Between Saying and Doing comes from an old Italian proverb ("between saying and doing, many a pair of shoes is worn out"), I think there’s a felicitous ratio with Habermas recorded there—for though Habermas considers himself an heir to American pragmatism, the difference between Habermas’ objects and Brandom’s gerunds suggests a greater commitment to the priority of pragmatics over semantics.

[6] Ibid., 56

[7] Consequences of Pragmatism, 166

[8] Ibid., 167

[9] Emerson, “Self-Reliance”

[10] Every person tired of an interminable conversation with a deaf and dogged interlocutor knows this to be true, but again, only at the level of practice. By this I mean that when the deaf dog retorts that you’re being dogmatic because you haven’t answered their objections, the rules of open inquiry we’ve venerated (explicitly) at least since the Enlightenment demand that we grant their point. However, only recently might we be able to work out the theoretical entitlement for allowing attitude to trump reason-giving, and the overarching reason is why Brandom says that in pragmatist philosophy of language, semantics must answer to pragmatics. Something of this orientation is elaborated in what follows, but on this particular point, at the beginning of Making It Explicit, Brandom justifies it via Wittgenstein’s regress argument about rules, which is roughly that if every statement needs rules to regulate correct interpretation, and those rules need to be stated, then the rules need to have rules—and then those rules need rules, etc. (see 20-23). What stops this regress from being infinite? Nothing, not at least if you haven’t fixed things so that normative attitude precedes normative rule/reason-giving. The trick here is seeing that we do, obviously, have the power to stop regressing. Platonism, in this area, is a form of intellectualism that says that rules precede attitudes, and thus nothing should stop the regress (except for something rule-like, which is where the idea of self-evident principles comes from). So one way to think about pragmatism is as the orientation that accepts our power to stop the regress as not in itself illegitimate, but rather seeks to investigate when it should and should not be. For example, notice how much leeway is in Rorty’s notion of “justifying our belief to the audience we think relevant”—who determines relevancy? That’s a question that would keep platonists up at night, though pragmatists understand that such relevancy is hashed out in the course of inquiry as people determine their attitudes to various communities. Is every attitude that determines our relationship to a community kosher? No, as every angry parent knows. But what about the black separatist, or black radical demanding reparations? That’s justifiably more complex in America. On that particular complexity, of being African-American in America, still the best negotiations are Ralph Ellison’s Invisible Man and James Baldwin’s The Fire Next Time. For Rorty’s interesting discussion of Baldwin and Elijah Muhammad, see 11-13 of Achieving Our Country (whose title comes from Baldwin’s book).

[11] Brandom elaborates this master argument in “A Kantian Rationalist Pragmatism” in Perspectives on Pragmatism. This particular argument about phenomenalism is the one Sellars forwards in his essay “Phenomenalism,” written around the same time as the more famous attack on the Myth of the Given.

[12] This was, of course, Rorty’s first famous argument in the philosophy of mind, striking an analogy between talk about the mind and talk about demons. Brandom suggests in “Vocabularies of Pragmatism” (in PP) that that argument hinged on the social pragmatism that Rorty would later become famous for, there isolated in the form of a social-practice account of minds.

[13] If you noticed a wobble in my vocabulary in this last passage, you’re probably smarter than I am. And hopefully the wobble isn’t pernicious. Given the precision with which analytic reductions have been deployed, I technically slid between the semantic relation between what two things mean and an ontological relation between what two things are. Sloppy, I know, but when you work in the analytic idiom—where “how things are” is, because of the linguistic turn, always paraphrased as “talk about how things are”—it’s easy to do. However, the larger philosophical commitment pragmatists like Rorty and Brandom (if not Peirce, James, and Dewey) are in favor of keeping might be thought of as specifying that the category of “ontological relations” be reduced to another idiom, which is in part semantic. (This would take me too far afield, but they are not committed to reducing everything to language, the linguistic idealism critics keep foisting on Rorty and Brandom. Brandom thinks they are only committed to what he calls “the entanglement thesis,” which in this context I understand to be the entanglement of pragmatic relations of nonlinguistic bodies with semantic relations of linguistic bodies.) For a recent discussion of Rorty's relationship to anti-analytic pragmatists, see "Some Notes on Rorty and Retropragmatism." For a discussion of the relationship of language to experience after Quine and Sellars, see "Quine, Sellars, Empiricism, and the Linguistic Turn."

[14] In Between Saying and Doing, Brandom develops a very sophisticated apparatus for talking about talking. One area of underdeveloped territory he takes on is beginning to talk about the practices necessary or sufficient for deploying a vocabulary and, conversely, the vocabularies necessary or sufficient for deploying a practice. However, while Brandom favors talk of social practices, his aim in the book is to abstract away from that particular commitment, and so he speaks of (social) practices or (individual) abilities.

[15] BSD 12; this first chapter of his Locke Lectures also appears by itself in Perspectives on Pragmatism, this passage at 169.