Kurt Vonnegut Jr. once offered eight tips for writing. My favourite was the last one:
Give your readers as much information as possible as soon as possible. To heck with suspense. Readers should have such complete understanding of what is going on, where and why, that they could finish the story themselves, should cockroaches eat the last few pages.
This is pretty much how I feel about tonight’s final episode of AMC’s Mad Men. There is no cliffhanger. We are not hanging on the edge of our seats. We want to see what “happens” to the characters, but we sort of already know. And that’s ok. Because it doesn’t really matter. And that’s because the protagonist of Mad Men is not any one character, but American history itself.
Mad Men began as a show about Don Draper, creative director of a NYC advertising firm, and the people around him. Draper represented both sides of the American dream/nightmare, a con artist and an adulterer, but handsome and smart and charismatic enough to succeed in New York’s world of advertising in the 1960s. The best thing about Don was how he never succumbed to the racism and antisemitism he saw around him. He could be a sleazeball, but he seemed to conclude that those sorts of prejudices would only get in the way of his ambition.
After a few seasons, however, Don became boring, almost insufferably so. His former secretary, Peggy, became far more compelling, the real embodiment of the American dream, of feminist success in a man’s world, a world she made her own. Very quickly, the women of the show, Peggy, Joan, and Betty, became more interesting than the men, who stayed on almost as comic relief. That’s because the story of the 1960s belonged to them.
In the world that Matt Weiner and his writers created, women’s struggles in the workforce represented the driving force of the narrative. Other important events occurred: JFK’s assassination, the Civil Rights movement, the Vietnam War. As viewers, we always asked ourselves: how is Mad Men going to incorporate this or that moment in history? Because in the final analysis, though we learned to love and care about the characters, what we were really watching was American history unfold through the lens of the white-collar working woman’s struggle. That struggle is not over, but we know the way it progresses, even if we don’t know specifically how it will go for Joan, Betty, Peggy, and the others. And this lack of a cliffhanger is an achievement of the show, not a flaw.
PPC, otherwise known as Pay Per Click, is an advertising medium that lets you pay for incoming traffic to your website by the click. Each click may cost anywhere from a few cents to over $200. Dentists typically pay $5–10 per click for key phrases such as “City + Dentist” and “Dentist + City”. If your practice is handling its own PPC, that’s great. If, however, you have hired a company to manage your PPC campaign, then you are overspending on patient acquisition. For the purposes of this article, it is assumed that the dental office is managing the PPC campaign itself.
Why Use PPC?
PPC is a great tool (emphasis on tool) to generate a consistent and measurable flow of new patients every month. If your website doesn’t generate much traffic, then PPC is a great supplement to an existing dental marketing strategy. And if your website is established and optimized, PPC is still a great tool for practices interested in taking on even more patients.
DoctorHero’s advice for dentists is to invest more heavily in search engine optimization (SEO) to gain top rankings in the search engines. Once organic search engine traffic has been optimized and you are still interested in getting more patients into your dental office, then PPC is a viable option. The reasoning behind this approach is that once you stop paying for PPC, traffic to your website stops. If, however, you work with a talented and reputable organization that offers SEO, you can expect a strong return on investment for years after SEO services are discontinued.
Cost Per New Patient
Bright Smiles LLC in Brightside, CA (Fictitious Practice and City) has decided to invest in PPC advertising. Their monthly PPC budget is $1000. Here are the assumptions:
CPC (Cost Per Click) = $5-10
Website Conversion Rate (# of New Patients / # of Clicks) = 2%
@ $5/Click = 200 Clicks = 4 New Patients = $250/Patient
@ $7.50/Click = 133 Clicks = 3 New Patients = $333/Patient
@ $10.00/Click = 100 Clicks = 2 New Patients = $500/Patient
This example assumes your website converts at 2%. Many dental websites don’t even convert at that rate, so let’s assume a 1% conversion rate instead. At $5/click you still get 200 clicks, but at 1% that yields only 2 new patients, at $500/patient. That’s a lot of money to spend on a new patient!
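For readers who want to plug in their own numbers, the arithmetic above can be sketched in a few lines of Python. The budget, click costs, and conversion rates below are the fictitious Bright Smiles LLC figures from this example, not industry benchmarks:

```python
def cost_per_patient(budget, cpc, conversion_rate):
    """Estimate PPC cost per new patient, rounding clicks and patients
    to whole numbers the way the worked example above does."""
    clicks = int(budget / cpc)                  # e.g. $1000 / $7.50 -> 133 clicks
    patients = round(clicks * conversion_rate)  # 133 clicks * 2% -> ~3 patients
    return budget / patients

# Bright Smiles LLC: $1000/month budget, 2% website conversion rate
for cpc in (5.00, 7.50, 10.00):
    print(f"${cpc:.2f}/click: ${cost_per_patient(1000, cpc, 0.02):.0f}/patient")

# At a 1% conversion rate, the same $5 clicks cost twice as much per patient
print(f"$5.00/click @ 1%: ${cost_per_patient(1000, 5.00, 0.01):.0f}/patient")
```

Note that cost per patient simplifies to roughly CPC divided by conversion rate, which is why halving the conversion rate doubles the cost.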
A Balanced Online Marketing Strategy Is Key
The key to Internet marketing success is a balanced strategy that combines the power of search engine optimization (organic search engine traffic), PPC (paid traffic), and listings in prominent business directories. Then combine your online strategy with an offline strategy, such as dental mailers and a patient referral program, to really give your dental marketing a boost.
One of my favourite episodes of Star Trek: The Next Generation is called “The Wounded.” It aired in season four, on January 28, 1991, so I might have caught it as an eight-year-old, but more likely on reruns. In this episode, a renegade Starfleet captain goes on a rampage with his ship, destroying a bunch of Cardassian vessels in the belief that the Cardassians were preparing for war. The Enterprise has to hunt him down, and they use transporter chief Miles O’Brien (played by the terrific Colm Meaney), the captain’s former crewman, to try to reason with him. It’s a great episode for a number of reasons: great plot, great acting, heck, anything with an O’Brien focus is pretty great. But the best part of the episode by far is when O’Brien and the rogue captain get together and sing the Irish war ballad, “The Minstrel Boy.”
From the moment I heard it, I loved that song. Perhaps it was because I played Dungeons and Dragons as a boy, and the song had very D&Dish lyrics. At that point in my life, I was attracted to anything that talked of swords and battles. But I think, even at that early juncture, it was the Irishness of the song, the ethnic-ness of the song, that appealed to me. It had survived into the fictional 24th century, yet we still felt its Irish roots, perhaps because O’Brien sang it.
A few years later I encountered the song again. It was a bizarre experience.
If you’re a secular Jewish child of a certain age, and your parents have a record collection, it’s very likely that one of those records is of Paul Robeson. Yes, I’m referring to Paul Robeson, everyone’s favourite African American Communist football player/lawyer/actor, who also sang African American spirituals and gospel music along with traditional folk songs from all over the world. My father introduced me to Robeson through his rendition of the song of the Warsaw Ghetto Uprising aka the “Partisan Song” aka in Yiddish “Der Partizaner Lid” or “Zog Nit Keyn Mol” (“Never Say”). It’s a song that energizes me. I always imagined that if I were to have become a professional prizefighter, that would have been my entrance music.
But Paul Robeson has many other great songs. He sang powerful spirituals like “Joshua Fit the Battle of Jericho” and “Swing Low Sweet Chariot.” He sang passionate renditions of “Joe Hill” and “John Brown’s Body.” He sang the Scottish hymn “Loch Lomond” and the Irish tune “Danny Boy.” And sure enough, he also sang a hauntingly beautiful version of “The Minstrel Boy.”
It makes me shiver every time I hear it. Through song, Robeson united himself to ethnic traditions that were not his own, and yet of course, they were his own, for they resonated with him the way Black spirituals did.
So what is “The Minstrel Boy” exactly? Wikipedia writes:
The Minstrel Boy is an Irish patriotic song written by Thomas Moore (1779–1852) who set it to the melody of The Moreen, an old Irish air. It is widely believed that Moore composed the song in remembrance of a number of his friends, whom he met while studying at Trinity College, Dublin and who had participated in (and were killed during) the Irish Rebellion of 1798.
The article goes on to note that the song was popular among Irish soldiers in the American Civil War and then again in the First World War. It became commonplace at funeral services held by institutions with disproportionately Irish membership like police and fire departments. Though often only the melody is played, the lyrics are simple and beautiful:
The minstrel boy to the war is gone,
In the ranks of death you’ll find him;
His father’s sword he has girded on,
And his wild harp slung behind him;
“Land of Song!” said the warrior bard,
“Though all the world betrays thee,
One sword, at least, thy rights shall guard,
One faithful harp shall praise thee!”
The Minstrel fell! But the foeman’s chain
Could not bring his proud soul under;
The harp he loved ne’er spoke again,
For he tore its chords asunder;
And said “No chains shall sully thee,
Thou soul of love and bravery!
Thy songs were made for the pure and free
They shall never sound in slavery!”
Much to my surprise and delight, I heard the song again, the melody without the lyrics, in the middle of the song “Wandering Ways” by my favourite band, Great Big Sea. Great Big Sea are a folk/Celtic/rock band from Newfoundland. They play traditional Newfoundland, English, Irish, Scottish, Canadian, and French Canadian music spiced up a bit to sound more like rock n’ roll. Their concerts have the intensity of heavy metal/punk performances, but instead of mosh pits there is Irish jigging (I’ve been to seven). Though they write some of their own songs, most are traditional folk songs, and their album liner notes come with explanations of their origins. Their songs are also often medleys, with different ditties contained as a bridge between verses. “The Minstrel Boy” is contained within the recording of “Wandering Ways” from the 2012 album Safe Upon The Shore.
One of the great appeals of Great Big Sea is their incredible respect for the tradition of music that came before them, the tradition that made what they do possible. And this reminded me of a passage from one of my favourite novels, The Joke by Milan Kundera. It’s Kundera’s first novel, written in 1965 (published in 1967), a brilliant and hilarious commentary on the absurdities of Soviet-era Communism in Czechoslovakia before the Prague Spring of 1968. But Kundera also has a background in ethnomusicology, and in one passage, one of the characters, Ludvik, explains the strength of folk music and its appeal to socialists and communists:
The romantics imagined that a girl cutting grass was struck by inspiration and immediately a song gushed from her like a stream from a rock. But a folk song is born differently from a formal poem. Poets create in order to express themselves, to say what it is that makes them unique. In the folk song, one does not stand out from others but joins with them. The folk song grew like a stalactite, drop by drop enveloping itself in new motifs, in new variants. It was passed from generation to generation, and everyone who sang it added something new to it. Every song had many creators, and all of them modestly disappeared behind their creation.
While this conception of the folk song may be even too anti-individualistic for my tastes, I appreciate the sentiment greatly. The music I like most is that which makes me feel like I am part of something bigger than myself, bigger than that particular song or artist. Maybe that’s why I love the hora so much. The individual artist is basically irrelevant in the joy of the hora circle. I feel a similar communal spirit at Great Big Sea concerts, or really whenever I hear folk music, especially celtic folk music. I’m not Irish, but I respect and understand the tradition.
Don’t get me wrong, I appreciate the creativity of individual artists. But I’m also amused when they fail to recognize what came before. A few years ago I was at a Nields concert, the folk-singing sister duo of Nerissa and Katrina Nields. In 2008, they had released an album called Sister Holler, where all the tracks were in some sense folk songs that borrowed (or stole, as they admitted) from works that had come before. To introduce one such song, “Abbington Sea Fair,” they told a story. First, they admitted that “Abbington Sea Fair” bore a clear (though not overwhelming) resemblance to Simon and Garfunkel’s “Scarborough Fair” in music and lyrics. Of course, when Simon and Garfunkel had released “Scarborough Fair,” Bob Dylan got upset because it resembled his song “Girl From the North Country.” Nerissa Nields explained that all this was kind of silly, because all three songs are based on a late medieval melody and lyrics. Nothing comes from nothing, and tradition trumps originality.
And so “The Minstrel Boy” fits into this tradition. It appears in different but similar iterations across the generations and even centuries, forever retaining its communal and ethnic power, uniting people not through the creativity of whoever wrote or performed it, but through the feelings it evokes. You don’t want to be listening to these kinds of songs alone, but rather singing and dancing with other people. “The Minstrel Boy” is a sad song, but it is still communal, to be sung solemnly together. Songs like “The Minstrel Boy” allow you to appreciate that which exists outside of yourself, that which existed before, and that which will exist after. It’s not divine; it’s the power of people, community, and art merging together. You don’t need to be Irish to feel Irish when you listen, to feel intertwined with that proud history and tradition. From Thomas Moore in the 19th century to Paul Robeson in the 20th, Great Big Sea in the 21st and Miles O’Brien in the 24th, the minstrel boy, forever slain, continues to sing.
A Latenight Rant by Peter
There is no genre more beloved by the old, lazy, and tenured than the “don’t go to grad school” advice column, which seems to spring up every couple of months in the Chronicle or Inside Higher Ed. Writing with nothing but the best paternal intentions, some tenured prof or another explains, with his hand gently patting our shoulder, that he has come to realize that there just aren’t jobs in X field and students really just shouldn’t apply to these PhD programs.
As a member of generation-fucked, I find these types of arguments frustrating. Let me rephrase that. I find them god-damn fucking frustrating. I encounter them mostly from academics, who make some series of arguments about why no one should follow them into graduate school. All the reasons why people say it is a bad idea to go into grad school (terrible job market, no social respect, you will simply be a source of cheap labor, etc…) are all true, of course, but turning them into reasons why you shouldn’t go into grad school misses the point.
Think about it this way: would any good progressive look out across the Rust Belt in 1985, fold their arms, and say (with a certain self-satisfied air of regret), “well I’ve always told Youngstown high school graduates that they shouldn’t go into the steel industry.”
Of course not. They would blame union-busting, and off-shoring, and leveraged buy-outs, and Reagan, and everything else. But they wouldn’t shift the blame onto the workers themselves, who should have known better than to go into that industry.
Obviously people who are considering a PhD or JD have more options than a steel worker did, but anyone who thinks that recent college graduates are just overflowing with good choices is just revealing their own generational entitlement (defined, for the purpose of this post, as anyone who came of age before the country went to the total shitter, especially those who took advantage of that non-shittiness to get good public education, and then gleefully grabbed up all those fun tax cuts and cushy tenured jobs).
What, pray tell, are those would-be English PhDs supposed to do? Journalism? Ha! We know they can’t do law school! Publishing? Not even worth joking about. Secondary school teaching? Not now, after NCLB/Michelle Rhee/budget cuts/TFA/Scott Walker have all had a go at teachers. People don’t have interchangeable skills (we can’t all just smoothly transition from excelling at languages since 7th grade into a career as a chemical engineer), and those of us who hoped to make a living on our writing, thinking, teaching, arguing, etc. don’t have a ton of options these days.
The problem with the “no one should go to grad school” articles is that they, consciously or not, shift the blame for endemic joblessness onto the most vulnerable, those who are, or will soon be, unemployed. This is especially pernicious when these arguments come from tenured faculty, who should be exactly the ones with the greatest responsibility to try to fix the Academy. Implicitly, they accept conservative narratives about individual agency within capitalism. Rather than fight the real enemy (the corporate administrators, the Tea Party Governors, neoliberalism, etc.), they turn it into a moralistic argument about what some 22-year-old should be doing. It all becomes a way to justify to themselves why they aren’t helping out the grad student union, or marching with OWS, or challenging their University President.
Now, don’t get me wrong, it often is a terrible idea to go to graduate school. It is generally a terrible idea to be young right now. But let’s not blame some poor kid who wants to dream that he might not have to be a barista for the rest of his life. The people we should be paying attention to are the university presidents, and politicians, and think tank “intellectuals” and everyone else who is destroying our educational system and our economy.
Can both of these statements be true?
1) People of colour, women, the disabled, and members of the LGBT community face real, overt discrimination, along with structural inequalities through many or perhaps all stages of their lives, which hampers their ability to be admitted to selective schools and to compete in the academic job market.
2) Straight, white, able-bodied men are at a distinct disadvantage on the academic job market as compared to people of colour, women, the disabled, and members of the LGBT community.
They can’t both be true if we regard affirmative action the way President Lyndon B. Johnson did in his 1965 commencement address at Howard University. There, LBJ famously stated: “You do not take a person who, for years, has been hobbled by chains and liberate him, bring him up to the starting line of a race and then say, ‘you are free to compete with all the others,’ and still justly believe that you have been completely fair.”
This is philosopher James Rachels’ position. Rachels argued that affirmative action was not about advancing the under-qualified over the qualified, but simply about fairness, about leveling the playing field. When Harvard admits a poor Black student with a 1300 SAT score over a rich white kid with a 1400, it does this knowing that the white kid likely benefitted from tutoring, a safe neighbourhood, books in the house, and all sorts of advantages that the Black student may have been lacking. Thus the Black student’s 1300 is worth more than the white student’s 1400. It’s only fair.
But there is another way to look at affirmative action, of course. That way was championed in the famous 1978 Supreme Court case Regents of the University of California v. Bakke. In the Bakke case, white applicant Allan Bakke was denied entry to the UC-Davis medical school in favour of several African American candidates with lower test scores. The justices, who ruled partially in favour of Bakke and partially in favour of the university, struck down racial quotas as unconstitutional, but held that the school could use race as a factor in admissions in order to achieve the goal of diversity.
This raises the question: which is it? Fairness or diversity? Or is it some combination of the two?
When talking about hiring in academia, the situation becomes even trickier. In his 1992 book, Reflections of an Affirmative Action Baby, African American Yale law professor Stephen L. Carter provocatively states, “I got into law school because I am black.” Though a conservative, Carter endorses some forms of affirmative action, though he thinks its benefits should be reduced as people advance in life. Thus (I’m extrapolating) poor Black and Latino youth can receive benefits like Head Start and scholarships to top high schools, and then some preferential treatment in college admissions. At the graduate school level, that preferential treatment should be diminished; in hiring, it should be close to non-existent. The idea is that eventually minority candidates have to stand on their own merits, independent of racial or ethnic background, gender identity, or disability.
Carter’s view aligns with the LBJ and Rachels view of affirmative action as remedial, as a form of retributive justice. He doesn’t seem as concerned about diversity among faculty, or grad student population. The question remains, should we be?
Because if we should, we get to a tricky place. First, there are awkward questions for hiring committees: is a Black man a better minority candidate than a white woman? This is becoming especially tricky as more and more humanities disciplines become feminized. It has already happened to English and Art History (and to psychology in the social sciences). The data suggest that history is not there yet, though perhaps not far behind (philosophy, by contrast, remains male-dominated). Unfortunately, history has shown us that feminized professions come to be disdained: think of elementary and secondary school teaching, nursing, social work, even clinical psychology.
In a sense, I’m “lucky.” In Jewish studies, I compete almost exclusively against other white candidates. I do compete against women though. But when I apply for US history jobs, it’s a different ballgame. And nearly every white person I know in academia, male or female, has a story about a minority candidate being hired immediately, or being sought out by many schools, or generally receiving some form of preferential treatment in hiring. These stories are of course told when only white people are around. This evidence is anecdotal, and I’m certain stories where the reverse is true occur regularly, though I don’t hear about them.
This phenomenon extends beyond the walls of the Ivory Tower. I’ve overheard grumbling about prestigious summer internship programs that admitted a disproportionate number of Black and Latino candidates, where the application process consisted only of writing an essay and checking a box for race, ethnicity, and gender. A white male medical student recently told me that his chief rival for residencies was African American, putting him (the white male) at a disadvantage. At the same time, he acknowledged that his chosen speciality was an old (white) boys club, and he thinks that women and non-whites would have a hard time fitting in.
So with race acting as a double-edged sword, I’m fairly confident that the first statement I made at the beginning of this post is true. Discrimination is real and must be countered. The second statement, that affirmative action rigs the game against whites and Asians, and especially white and Asian males, certainly feels true, though the data don’t yet bear it out. But suppose it is true: is there anything to be done about this? Is there a fairer, better way that still accounts for diversity? I’m not sure. Maybe Stephen Carter’s principle is correct, that diversity should still be accounted for, as a tie-breaker between equal candidates. But who knows? Any suggestions?
This struggle over affirmative action is part of a much larger problem. At major history conferences, it is highly encouraged to have women and people of colour on your panel proposal in order to get those proposals accepted. It seems more like a requirement than a suggestion. This raises several questions: How far should we take the quest for diversity? How essentialized has the female or minority point of view become that it needs to be reflected on each and every panel?
I asked a friend of mine whether analytic philosophy conferences had a similar requirement/suggestion in place. He replied that if they did, there would be no philosophy conferences. That is how dominated the field is by white men. This raises another question: is analytic philosophy like history? Is it necessary to have the perspective of women and non-white minorities on matters of analytic philosophy? Or is analytic philosophy “universal” enough that the gender and ethnicity of those who study it is irrelevant? And what about literature and other fields in the humanities?
In asking all these questions, I’m forced to wonder: am I just being a whiny white male, ignorant or in denial of my own privilege? I’m not into political correctness, for the most part, but I don’t think I’m a racist, sexist bigot. I do see race and gender, but I try not to pay attention to those categories when, for example, I’m grading. So why should I pay attention to them when putting together an academic panel?
I’m not trying to be hyperbolic (okay, maybe a little), but I’m trying to figure out where I fit into this discussion as a progressive-minded straight white man with a dedication to equality and justice and an understanding of the history of discrimination, yet also with a commitment to objectivity, to the fact that good scholarship can come from anywhere and anyone.
So there is clearly a problem here. But I’m really not sure how to solve it.
Three days ago it was Yom HaShoah, the Jewish Holocaust Remembrance Day. It’s a solemn occasion, one that should not be politicized. Today, however, I’d like to address a political pet peeve of mine, namely the view that fascism, specifically Nazism, was somehow an ideology of the Left. It was not.
People often make this mistake by lumping Nazi Germany and Stalinist Russia together as two sides of the same totalitarian coin. Both regimes were responsible for monstrous crimes, yet the ideological underpinnings behind them should be distinguished and understood, rather than inaccurately melded together. Fundamentally, fascism and its Nazi manifestation were ideologies of the extreme Right, which advanced not only a racist populism but also a socially Darwinistic, hierarchical individualism that celebrated competition and allowed for some capitalist industry to coexist alongside and in league with a powerful state.
I was spurred to write this post after listening to right-wing talk radio, where the announcer described fascism as an ideology of the Left, the result of the expansion of Big Government. These scare tactics are used to form a slippery slope argument, namely that the welfare state leads to the gas chambers. Friedrich Hayek advanced a version of this argument in his famous and erroneous work, The Road to Serfdom, particularly in his chapter “The Socialist Roots of Nazism.” It is certainly true that fascism represents the worship and expansion of state power. Yet it can and did exist alongside capitalism, as was the case in Nazi Germany. Though Adolf Hitler led the National Socialist German Workers’ Party (the Nazi Party), he was not a socialist.
The reasons for this are manifold. First is the obvious: socialist and communist parties existed in Weimar Germany alongside the Nazi party and indeed were its bitter enemy (though Communists and Nazis occasionally colluded too). Second, and equally obvious, Nazism divided Germans along racial rather than class lines. Jews and other enemies of the state were enemies regardless of class, and the Aryan ideal could be achieved at any socioeconomic level.
Third, the Nazi regime did not completely take over all large businesses and industries, but rather colluded with them, most famously with the chemical company I.G. Farben. This is a crucial mistake people make about fascism: businesses in fascist states like Hitler’s Germany are not necessarily government owned, and can to some degree function within a market-oriented capitalist framework subject to the laws of supply and demand. Fascism, in this totalitarian form, functioned occasionally through brute force, as on Kristallnacht, but often through more subtle means. Fascism more frequently used coercive force like that at play in Jeremy Bentham and Michel Foucault’s Panopticon, a prison that exerted social control through the fear of being watched rather than through naked displays of state power. This, along with Hitler’s popularity, rendered capitalist business compatible with Nazism, so long as those involved were Aryans who obeyed the regime.
Most important, we know Nazism was an ideology of the far Right because of the very logic behind it. Unlike socialism, Nazism was a hierarchical, socially Darwinistic vision that encouraged competition and showed disdain for the masses, whom Hitler called “mentally lazy.” Crucially, it did not denigrate individualism, but in fact celebrated it. This is evident in Hitler’s major work, Mein Kampf.
I’m not simply referring to Hitler’s attacks on “Jewish” Marxism and Bolshevism, which he argued was a “comrade” to the equally Jewish “greedy finance capital.” Hitler believed that “the stronger must dominate and not blend with the weaker.” Hitler extrapolated from individual achievement, “true genius,” to racial achievement. Indeed, to ignore racial hierarchy led to an “underestimation of the individual. For denial of the difference between the various races with regard to their general culture-creating forces must necessarily extend this greatest of all errors to the judgment of the individual.” Hitler celebrated the “free play of forces” that enabled both individual and racial advancement in Darwinian struggle. He loved sports, especially boxing, as they served “to make the individual strong, agile and bold.”
Hitler’s individualism and elitism emerged most strongly in his chapter on “Personality and the Conception of the Folkish State.” Hitler distorted Nietzschean philosophy to elevate certain individuals, like himself, above all others. He hoped to organize a society that placed “thinking individuals above the masses, thus subordinating the latter to the former.” This would be true of economic life as well, “in all fields preparing the way for that highest measure of productive performance which grants to the individual the highest measure of participation.”
I could go on. My point here is not to politicize, but to de-politicize. Hitler was of course not a pure capitalist, and Nazi Germany was not a purely capitalist state. Nazi Germany’s economy relied on a considerable amount of state control and even some Keynesian economics. Many socialists showed a similar disdain for the masses. But, and this is crucial, Hitler was not really interested in economics, nor was economic policy central to the Third Reich. The expansion of government and state power was less important to the regime than socially Darwinistic racial competition.
To conclude, I’ll simply say this: socialism and the welfare state should not be advanced by criticizing Nazi Germany and invoking the spectre of the Holocaust, but they should not be attacked that way either.
“Liberty for the few – Slavery, in every form, for the mass!”: the Deep Roots of the Birth Control Freakout
Thanks to Rick Santorum, Rush Limbaugh, and the Virginia Legislature we’re engaged in an elevated and enlightened national debate over just exactly how big slutty slut sluts are our nation’s women. We all know, of course, that sex without the intent to procreate is immoral, unless, like Newt Gingrich, you’re in the sanctity of a Congressman/aide relationship. So the question is, of course, exactly how many sexual experiences should women be allowed? 5? 10? Exactly how much should we humiliate those who have unapproved sex? Should they be forced to videotape the sex for Rush’s sweaty amusement? Be raped by the state of Virginia?
Some commentators have noticed that this rash of attacks on women’s rights is a bit strange coming from a political movement that, a year ago, was screaming about getting the government off its back, but is now so eager to get in between our sheets (and our knees). It does raise a serious question: why does the libertarian tradition in this country seem to be so blind when it comes to women’s rights? Why is it that the party that claims to speak for people’s private property rights is so careless about the autonomy of people’s privates? We shouldn’t be surprised, though, as the conflation of property rights and control of women has deep roots in American history.
Corey Robin has discovered some great intellectual history that partly explains this disconnect, showing that libertarian hero Ludwig von Mises actually had repugnant views on women, worrying that access to birth control might give women too many free choices. And Mike Konczal has also written on some intellectual background. Together they suggest that there is a strong tradition of libertarianism that is not committed, even in theory, to what Robin calls a “project of universal liberty,” not even a project of negative liberty. At least as far as women are concerned.
I would like to add a little social history to the mix, in a way that I think supplements the analysis of Robin and others. I’m currently reading Stephanie McCurry’s book on the troubles of Confederate nation-making, Confederate Reckoning. A major theme in her work, going back to her Masters of Small Worlds, is the intersection between domination of the home and perceptions of liberty. Many scholars piously tell us of the need to integrate analyses of race, gender, and class, but, other than maybe Glenda Gilmore, I can’t think of anyone who does this as well as McCurry.
In Masters of Small Worlds, she studies small households in the South Carolina Low Country, those with no or few slaves. These poor whites have always been a bit of a problem in historical understanding. In a nutshell, why did those white men who were not profiting from the slave system still fight and die to protect it? One traditional answer, going back to Edmund Morgan, and before that W.E.B. Du Bois, is that race was the factor that tied the poor white to the rich white, creating a “socialism of fools,” which seemed to unite the interests of all white people. McCurry doesn’t disagree, but adds gender to these analyses.
White men’s self-identity, she argues, in the age of the yeomanry, was intricately linked to domination of the home and, especially, domination of dependents: children, women, and slaves. Moreover, this was a process that linked private property with control of slaves and women. Her first chapter in Masters of Small Worlds is about the spread of laws regarding fencing and boundaries. Once this enclosure is complete, and property is ensured, then the white male can exercise control over his subordinates. “The law elided distinctions between forms of property, rendering a man’s control over his enclosure synonymous with his control over the familial and extrafamilial dependents within it.” (p. 14)
The result was an economic system in which the small property holder had total control of his property and total use of the labor of all dependents on this property. Like many yeomen, they first produced a subsistence and sold the remainder on the market. Thus, they weren’t as totally integrated into the market as, say, a New England millworker or even a Western grain farmer was. Women’s labor, then, was crucial for the functioning of the economic unit, as they wove, cooked, cleaned, butchered, etc. But it was a labor that occurred under the control of the male. In defiance of pro-slavery ideology, in fact, white women often worked in the fields alongside white men and slaves. And, though she doesn’t go into this, the reproduction of both the wife and slave women had direct economic benefit for the master.
White Southern men received real and tangible benefits from this system that ensured their near-total autonomy and power within the boundaries of their own property. While at home, they controlled the labor of their subordinates, and in public their status as a free-holding white man (a master) linked them to the elite. McCurry does not actually argue that this common mastery eliminated all class resentment or divides, but it did provide a common language that could be used to mobilize poor whites. Thus on the eve of the war, planter elites argued that the “black Republicans” would threaten the mastery of white men, an argument laden with gender and racial anxiety.
Moreover, this was a tradition that was hostile to most government action. Sure, you needed the government to capture fugitive slaves, protect against rebellion, and punish other transgressors. But, unlike those Whig factory owners in Massachusetts, a Southern freeholder had no need for tariffs or canals, no need for public education, and no need for a systematized and regularized legal code. The conflation of property with racial and gender privilege also partly explains the seeming paradox that the capitalist North actually had a far greater communitarian tradition, far more advanced public goods (libraries, roads, schools, etc.), and a far more advanced anti-capitalist tradition than the supposedly agrarian South did. Southern white men had extra-good reasons to be suspicious of the Federal Government, as it meant sharing power with those idealists from Ohio or Massachusetts whom you couldn’t trust on the issue of slavery.
The result was, publicly, an ideology that strongly linked the subordination of women and the subordination of blacks with the defense of white liberty and white private property. Few issues were as intricately linked in antebellum times as were black rights and women’s rights. Southern ideologists weren’t alone in noticing that in the North women’s rights activists came almost exclusively out of the ranks of abolitionists. While abolitionists imagined liberty as about individual self-possession and control, Southern ideologues imagined it as household self-possession and control, possession and control being exercised by the white man. George Fitzhugh wrote that abolitionists “give at once the coup de grace to the old world, and to usher in the new golden age, of free love and free lands, of free women and free negroes, of free children and free men.” (these are all bad things, for Fitzhugh). In Cannibals All, he constantly refers to the “women, children, and free negroes” as one group, those fit to be ruled. He also, interestingly, accuses all abolitionists of being socialists: “men once fairly committed to negro slavery agitation … are, in effect, committed to Socialism and Communism, to the most ultra doctrines of Garrison, Goodell, Smith and Andrews – to no private property, no church, no law, no government, – to free love, free lands, free women and free churches.” (p.368)
Now Fitzhugh was no libertarian, obviously, but he was a spokesman of a Southern ruling class that saw no inconsistency in emblazoning both “liberty” and “slavery” on their banners. The reason, as should be clear from McCurry’s analysis, is that the freedom of the white man (as they saw it) really did depend on the subordination of both women and blacks. As Fitzhugh said, in commendable honesty, “To secure true progress, we must unfetter genius, and chain down mediocrity. Liberty for the few – Slavery, in every form, for the mass!” Moreover, you can see how, in his mind, loss of control over women would literally be an assault on private property, as women join slaves as essential appendages of private property.
I haven’t finished McCurry’s new book yet. But I gather from what I’ve read so far that she will argue that it was exactly this style of freedom that Confederates thought they were preserving when they went to war. But, in fact, the war necessarily politicized and empowered women and slaves, who played a part in bringing down the Southern project.
The relevance, of course, is that out of this social history comes a strong tradition of understanding liberty not in abstract terms, but in the concrete, as the ability to dominate and control your own subordinates. Moreover, this should remind us that the women’s rights movement does entail real losses for men: loss of status, loss of labor, loss of privileges. I think Robin has made similar arguments from an intellectual history point of view. But I think it’s important to also embed the arguments of classic conservatives in the particular economic forms that give rise to them and where they best grow. I suspect that the average Tea Partier knows relatively little about von Mises’ actual thinking. But the sort of deep cultural sense of control and hierarchy created in antebellum yeomen life (and continued in Jim Crow and after) laid deep roots in American society.
There has been a running debate, started by Chris Hedges, over the proper tactics of street protests and the role of violence in the Occupy Movement. Hedges, who was one of the first writers with an audience to support Occupy Wall Street, attacked Black Bloc, which he mistakenly seems to have identified as a cohesive movement rather than a tactic. Black Bloc occurs when protesters dress the same (normally in black hoodies), move in a pack, and, often, provoke confrontation with the cops by smashing windows, overturning garbage cans, etc. By dressing the same, they make it far more difficult for police to single out individuals. Coming on the heels of the Oakland protests, Hedges called the Black Bloc a “cancer” on the movement, one that provokes unnecessary repression by the state, distracts from the message, and practices a sort of negative politics of aggression, in which confrontation and the symbolism of militancy take the place of organizing and coalition building.
In reply, David Graeber, one of the grandfathers of OWS, defended the Black Bloc. He corrected some of Hedges’ factual inaccuracies, but resorted to a fairly hysterical response to Hedges’ (admittedly unnecessarily provocative) language, accusing Hedges of using a rhetoric that “historically, has been invoked by those encouraging one group of people to physically attack, ethnically cleanse, or exterminate another,” and arguing that Hedges would be read as a call to violence against Black Bloc. (I, at least, sure didn’t read Hedges’ article as a call for genocide). More reasonably he pointed out that the police almost always resort to violence and that the media almost always blame this violence on protesters, whether or not the Black Bloc is involved. State repression will happen no matter what that kid in the black hoodie does. Finally he argued that the mythologies that have developed around supposedly non-violent movements have obscured how often they involved violent activities, most often of a far more deadly sort.
As a historian of the abolitionist movement I was struck by how timeless this debate is. Few issues tore the anti-slavery movement apart as much as the question of violence: Should fugitives use violence to defend themselves? Should abolitionist victims of mob attacks (like Elijah Lovejoy) violently defend themselves? Should insurrection be encouraged? Some, like William Lloyd Garrison (a pacifist and Christian anarchist), maintained that non-violence was both moral and practical in the long run (by getting the conscience of the North on their side). Others, Frederick Douglass being the most notable, but also Theodore Parker, Charles Lenox Remond, and Thomas Wentworth Higginson, argued that it was “right and wise” to kill someone trying to capture a slave. Like today, activists debated both the morality and the pragmatism of violent activism (different issues that are too often conflated).
One interesting difference, though, was the definition of violence, where the line between violence and nonviolence got drawn. As Graeber suggested at the end of his letter, the violence that Black Bloc protesters have been accused of–breaking windows, spray painting, occasionally throwing rocks–is small beans compared to the violent tactics that have been debated in most political movements. For abolitionists, the question was about the morality of taking up arms against the state, something they did over and over again, killing a number of slaveholders and US Marshals. One group I study, called the Boston Anti-Man Hunting League, planned on kidnapping Southerners who were trying to capture slaves. Kidnapping the kidnapper, if you will. And when these actors set the terms, non-lethal force was rarely considered “violent.” In 1851, when a mob of black Bostonians pushed their way into a courtroom, grabbed a slave, “kicked, cuffed and knocked about” some guards, and ran off, Garrison applauded the act. If he thought pushing their way into a courtroom and shoving down police officers crossed the line, he didn’t mention it. The point was, when abolitionists discussed what tactics were violent, they meant things far more radical and dangerous than anything that the Black Bloc thinks about.
Obviously the stakes were much higher in the fight against slavery than they are today in the Occupy movement. But violence of some form has dotted American social movements. Let’s not run away from this: the Left has often used violent tactics, as one, among many strategies. Unions waged pitched battles against state militias and violently kept scabs away from workplaces, black homeowners defended their right to integrate neighborhoods with the force of arms, and even the Stonewall Riot was, well, a riot, complete with firebombs, thrown bottles, and bloodied cops. What’s remarkable, in fact, is how little violence, all in all, the OWS movement has engendered. No talk of running to the barricades, no calls for “the deliberate increase in the chances of death,” or the “conscious acceptance of guilt in the necessary murder,” no naming of “defense ministers” for the movement, or sloganeering about the “birth-pangs” of the new society.
The best defense of Graeber’s point, then, is that by defining “violence” in such a narrow way (one that, without questioning it, places property destruction and self-defense in the same category as aggressive violence against human beings), Hedges sets up an unrealistic standard that few if any social movements could meet. If you get 100,000 angry people in the street, it’s hard to imagine that some won’t throw a rock or fight back when cops try to kick the shit out of them. This is especially true as cities impose greater and greater restrictions on the ability of protesters to meet, and as police resort to greater and greater acts of repression and violence. So hewing too closely to some mythologized vision of nonviolence, and working to exclude those who violate the terms, means accepting a paralyzing and self-limiting definition of what are acceptable tactics.
The whole debate illustrates well the elasticity of the term violence, and the historically specific ways that it gets defined. At an earlier time, you were one of the “good” ones if you eschewed armed struggle and just limited yourself to the occasional excess in the street protest. Today, according to the administration of Berkeley, linking arms to resist police invasion is an act of violence. The Left should, rather than accept the state’s definition of what is nonviolent (and therefore what is “good” activism), fight back at an ideological level against definitions that only restrict our behavior.
At the same time, it’s hard to take Graeber’s wounded outrage totally seriously. Does he really not understand why nonviolent protesters are angry when a tiny minority hijacks their events? Does he really not see how a small group trying to provoke the cops endangers everyone? I’m not super offended by Black Bloc tactics, but if I were the type to engage in them, I sure wouldn’t be shocked when other people disapproved. I also have no patience for the ultra-leftists who openly detest unions, community groups, and the Democratic Party as a bunch of pathetic bureaucratic sell-outs, but then clutch their pearls in shock when anyone dares to attack their preferred group or tactic.
As Bhaskar Sunkara points out, tactics like the Black Bloc are unlikely to lead to the type of democratic dialogue that will inspire more people to join a movement. It’s hard to see how a smashed window will convince anyone to join your movement, but it’s easy to see how it will keep them out. “Masks, after all, aren’t good for talking to people.” And rarely do you see the “fuck-shit-up” crowd coming to the boring planning meetings or going out flyering with you.
In my mind, the proper response is for all sides to dial down the outrage. This question is old and probably never ending. I have absolutely no interest in throwing a brick or whatnot, but I think history teaches us that at a low level, at least, such things are likely to be part of any significant social movement. As long as serious acts of violence against people (as opposed to against property) don’t erupt, I’m willing to live and let live, while remembering that the real action should be in dialogue, organizing, and recruitment, not whatever happens to the Starbucks’ window.
Jonathan Franzen is driving me nuts. He seems to be clinging to celebrity more and more tenuously every day. First it was David Foster Wallace bashing. Then it was e-book bashing. And now it’s a grudging sort of positive review of Edith Wharton.
As someone who has been the cause of feminist opprobrium in the past, maybe he thought his article on Wharton would get him into the good books. Or maybe the New Yorker just wanted someone to write something about her and he wasn’t busy. Who knows.
The review is meant, I think, to be a positive endorsement of Wharton’s novels. Instead, what comes across is Franzen’s inability to sympathize with Wharton because 1) she was rich (but not in a ‘good’ way, like Tolstoy) 2) she was conservative (because she didn’t like populist politicking) 3) she left America 4) she acted like a spoiled writer (‘writing in bed after breakfast and tossing the completed pages on the floor, to be sorted and typed up by her secretary’…..like no other writers ever did that…..).
He claims, in fact, that her only ‘sympathetic’ characteristic (his words: ‘potentially redeeming disadvantage’) was that ‘she wasn’t pretty,’ and that this made her a social outsider, which made her a good writer. After speculating about her love life (or lack of one), her relationship with her mother (who apparently drove her father to an early death), her lack of friendships with women (of whom she was apparently jealous), we finally come to the crux of Franzen’s problem: ‘Edith Wharton might well be more congenial to us now if, alongside her other advantages, she’d looked like Grace Kelly’ etc.
Now, I get that the rhetorical purpose of all this is probably to then set up the peculiarly sympathetic characters that Wharton created and who are the reason that Wharton’s fiction ‘matters’ in contrast to her, whom we apparently don’t like. But the standards for not liking her? They could be applied to hundreds of writers! These same qualities, in fact [feminist outrage alert], applied to male writers are usually seen as the eccentricities, graces, and charms befitting a Great Novelist. Wealth and privilege? There are literally too many wealthy, privileged writers to know where to begin, but F. Scott Fitzgerald being mentioned in the article (in a different context) comes immediately to mind. Expatriatism? Again Fitzgerald, but also Henry James who is, yes, also mentioned in the article in a different context. And acting like a spoiled writer? Well, even Franzen doesn’t let that one stand, recanting near the end of the article. And really, attributing her writing genius to the fact that ‘she wasn’t pretty’?
Today we launched the official Ph.D. Octopus Facebook page. We’re finally entering the 21st century, I guess. Heck, we haven’t even really decided how we’re spelling Ph.D. But I guess it’s fitting that I contribute this post along with that piece of news, and the above image, which we’ve been hiding for far too long, crafted by the lovely and talented Parisi Audchaevorakul.
See, over the weekend I was having a conversation with my new friend Holger Syme, a professor of English at University of Toronto. Holger also has a wonderful academic blog called Dispositio. And so we discussed our blogs. Eventually, the conversation turned to the horrendous state of the academic job market (as it does) and then to the process of acquiring those disappearing jobs, and getting tenure, and to the process of peer review.
For those who don’t know, peer review is the process by which academic work is rendered legitimate. In practical terms, it means that when we submit articles to academic journals, the article is reviewed by two of our peers, that is to say, by two other academics in our field, two similar specialists, who might be able to speak to the article’s accuracy, originality, and importance, and to the author’s general competence.
The goal of the system is for our peers to operate as gatekeepers. They are the ones who decide if the article is good enough to get in, and the number and quality of articles (and books) that we write determines the fellowships and jobs that we get, and whether we get tenure.
It’s not bad in principle. But there are problems. First, it’s never entirely clear that these two readers are actually experts in your field, or that their judgments are good. If your article is rejected by one journal, of course you can take it to another. But the reality is that two people may dislike your piece but a dozen other equally qualified “peers” might have loved it, and you have no way of knowing, because the peers are anonymous and the process is rather opaque.
Second, and perhaps more important, the process is painfully slow. Even if the two reviewers like your article, it might take weeks or even months for them to actually read it, then they send it back to you with the instruction “revise and resubmit,” and then the process repeats itself. Actually getting it to print can take even longer. Sometimes it takes years before the actual discovery or innovation that your work produces ever sees the light of day, and that being the very dim light of an academic journal, which even at their most prestigious are read by very few people indeed.
What Holger did that so fascinated me was compare this peer review process to his own blogging. Because Holger has tenure, he can write (within reason) anything that he wants on his blog. He can share his academic work there. And so he does. And when he does, he gets responses in real time. If he provides a novel piece of research, say, a new analysis of one of Shakespeare’s plays, or even digital images of marginalia from the early 17th century, he can get comments, that is to say, peer reviews, immediately. Indeed, that is precisely what happened in the above post. Holger wrote it on December 21, 2011. Professor Martin Wiggins, of the University of Birmingham’s Shakespeare Institute, offered comments and corrections on December 22, 2011, the very next day. Then Holger edited the post, and thanked and responded to Dr. Wiggins in the comments.
Now, if Holger didn’t have tenure, and Professor Wiggins wasn’t a nice person, he could have stolen Holger’s work and published it with more correct information, or simply published it first in a more reputable setting, and Holger’s path to tenure might have been thwarted. After all, we don’t get credit for our blog posts on the tenure clock. So for someone like me, or any of my non-tenured (or unemployed) co-bloggers, it might be academic suicide to publish our original research out here in cyberspace, rather than in a peer-reviewed journal, or in a book printed by a university press.
On the other hand, I wonder if, in the future, blogs such as these will sort of play the role that Sean Parker’s Napster did for the music industry. If we could all publish our work, safely, in real time, and have legitimate critics respond to it in real time, and edit it in real time, wouldn’t that be a more effective way of advancing scholarship?
This is not to say that peer review should be done away with entirely. But it seems like a community of academic bloggers should at least have some effect in speeding the process up, and ideally in making it more transparent and democratic as well. For example, suppose Dr. Martin Wiggins were simply Mr. Martin Wiggins, amateur Shakespeare buff, who knew enough to provide relevant criticism to Holger’s post. Theoretically, as long as the scholarship is sound, it shouldn’t really matter where it’s coming from.
That’s precisely the point of William James’ 1903 essay, “The Ph.D. Octopus,” that we should not fetishize degrees, like the Ph.D., but instead evaluate work, and academics, on their scholarly merit. We’re not quite there yet, and I’m not quite sure where there is. But I think I’d like to get there eventually.