Richard Rorty and Historical Consciousness

October 12, 2007

Originally posted by James Livingston on 06/13/07

[Note: A good introduction to Rorty and his legacy is by Lithium Cola in a diary over at Daily Kos]

Richard Rorty died last Friday in California at the age of 75.  He was a philosopher, to be sure, but all his work since Philosophy and the Mirror of Nature (1979) is one long petition to historians, particularly, to learn something from the demolition of metaphysics accomplished by the original pragmatists (not to mention their distant echoes in Wittgenstein and Heidegger) and, accordingly, to take the rhetorical effect and political consequences of historical narratives seriously.  He took the linguistic turn at full speed, and urged us to follow his example.

I wish we would, and quickly.  But we won’t, and it’s a shame.  We historians don’t want to be bothered with post-structuralist questions about what we do after we’ve excavated the archive.  For example, we say that Judith Butler is interesting as a theorist, but we always go on to say that she can’t help us “do history.”  We say the same thing about Rorty.  We’re wrong.

For he was the philosopher who insisted that “the moral justification of the institutions and practices of one’s group” (whatever it may be) “is mostly a matter of historical narratives (including scenarios about what is likely to happen in certain future contingencies) rather than philosophical metanarratives.”

This moral justification would be our cultural function as historians, no?

And notice who stands with us and behind us: “The principal backup for historiography is not philosophy but the arts, which serve to develop and modify a group’s self-image by, for example, apotheosizing its heroes, diabolizing its enemies, mounting dialogue among its members, and refocusing its attention.”

Rorty was a very Old Leftist.  He died young, but his heart was always with the New Deal coalition that came apart in the 1970s and ’80s, when, as he told the story, the New Left and then the Cultural Left of academe lost interest in issues of class and “economic arrangements.”  Even so, he was more than fair to these Lefts in characterizing their politics in Achieving Our Country (1998).  You can consult my sympathetic reading of this book in Pragmatism, Feminism, and Democracy (2001), chapter 4.

These Old Left leanings put him at odds with cultural studies icons such as Andrew Ross; Rorty went after Ross in Dissent, Irving Howe’s decaying monument to anti-Stalinist labor politics, back in 1991.  But he found constituents, and useful monographic corroboration, among labor historians such as Gary Gerstle, Steve Fraser, and Nelson Lichtenstein, who diagnosed the collapse of the New Deal electoral coalition under Reagan’s auspices as a disease that had to be cured by larger doses of class consciousness, conflict, coalition.  Their slogan was Joe Hill’s: Don’t mourn, organize!

This constituency always confused me, because for all his ranting about the “cynical greed” of Republicans, Rorty hated the profound influence of Marx on American intellectual life, even though his heroes, John Dewey and Walt Whitman, thought that “only Hegel is fit for America” (this was Whitman’s conclusion to his glancing reading of the German philosopher’s corpus).

How do you love Hegel and hate Marx?  There are a lot of Marxists, I know, who want to forget, or displace, the old one, following the example of Louis Althusser among others.  But the suture of the so-called idealist (Hegel) and the so-called materialist (Marx) was accomplished in different ways by the 1940s, for example in the published works of Herbert Marcuse and Alexandre Kojève.  Long before that, even Lenin insisted that understanding Hegel was crucial to understanding Marx.

And by now we can see that the origins of Marxism and the origins of pragmatism are at least similar, if not nearly identical: both were ways of mediating between German Idealism and British Empiricism, between Continental and Anglo-American philosophies.

Karl Marx cited Petty, Smith, Ricardo, and Ferguson in his attempt to turn Hegel on his head.  William James cited Hume in carrying out his vow to “fight Hegel” and to eradicate the “transcendental ego” of Kantian metaphysics.  Sidney Hook got it right in 1928, when he exclaimed that both Marxism and pragmatism were results of an intellectual rebellion “against two opposed tendencies-sensationalistic empiricism and absolute idealism.”

So what was Rorty’s problem with Marx?  My guess is that the Marxists he knew were never much interested in what Rorty called “real politics,” that is, reform, compromise, small steps toward “amelioration” of the human condition.  They were too busy being revolutionaries, or, even more boring, radicals. 

Unlike such radicals, Rorty didn’t want final answers to big questions.  Radicalism and revolution scared him as much as they spooked Hegel, who never got over the Terror in France.  Rorty remained a strict Hegelian in the sense that he remained a consistent historicist: he tried to convert every philosophical question into a historical one.  He wasn’t interested in the biggest question of metaphysics and its Cartesian annex, epistemology: “How do we know what we know, and what are the rules that govern the mind’s designation of reality?”

Like Hegel, who insisted that the “discipline of culture” consisted of work and language, Rorty asked a different question: “What have we said we have known, why did we say it, and how did we convince ourselves of its truth?” 

That is the question historians ask, and they are better equipped to answer it than philosophers.  We should be grateful to Rorty, the philosopher, for asking it.

We should also be grateful to him for his attacks on the academic Left (of which, let me admit, I am a part).  These started with an op-ed in the New York Times called “The Unpatriotic Academy,” which started a firestorm of hysterical criticism and a thoughtful movement toward reinterpreting “cosmopolitanism,” both centered in departments of literature (see the recent work of Bruce Robbins for a sober assessment of the controversy).  They culminated (I don’t want to say ended) in Achieving Our Country, where Rorty called not for patriotism but for “American national pride.”

Here is what I said back in 2001, before 9/11, about this call to end our exile from this weird thing we call America, apropos of Alfred Kazin’s earlier demand that we learn how to be “both critical of ‘the system’ and crazy about the country.”  Well, shoot, I guess I’m quoting myself.

“Rorty’s critics on the Left take issue with his call for ‘American national pride,’ in part because they confuse nationalism as such with recent episodes of ‘ethnic cleansing,’ in part because they believe that the erasure of the difference between ‘American national pride’ and missionary faith in a ‘gunfighter nation’ caused the Vietnam War.

“But the value of Rorty’s argument does not lie in the particular way he narrates the history of the Left; it lies instead in his attempt to connect the Left to the history of the United States, and thus to rid us of the idea that either is exempt from the corruptions of historical time [or power, I would now add].  I mean that the historical consciousness informing his argument is more important than the story he tells about the 20th-century American Left.  He names this consciousness ‘American national pride,’ but in doing so he is trying, I think, to say that a radical without a country, without some attachment to a political tradition that acknowledges but also transcends ethnic and class divisions [or rather uses these divisions to avoid transcendence: see how you change your mind?], will inevitably sound like a tourist or a terrorist.

“He is claiming that intellectuals on the Left cannot realistically project their potential constituents into a better future without some knowledge of and respect for the many pasts of their fellow citizens.  He is claiming that unless we can show how the ethical principles of the Left reside in and flow from these pasts, from the historical circumstances we call the American experience, we have no good reasons to hope for that future. 

“So he is reminding us of what John Dewey said in his first major work, Outlines of a Critical Theory of Ethics (1891): ‘An “ought” which does not root in and flower from the “is,” which is not the fuller expression of the actual state of social relationships, is a mere pious wish that things should be better.’”

The actual state of social relationships.  That was where Rorty, like Dewey, wanted us to focus our attention.  That was the site on which the future could be deciphered because it was the residue of the past.  Historians, take heart.  A great philosopher is dead, but his ideas can only make us better at what we do.


Al Gore’s Historical Template: Habermas’s “Public Sphere”

October 12, 2007

Originally posted by Cassiodorus on 06/03/07

(Crossposted at DailyKos)

Al Gore’s The Assault on Reason is largely an extended critique of the Bush administration’s policies.  But, in suggesting in his introduction that Chapters 1 through 5 of The Assault on Reason, the first half of his book, would be about the “enemies of reason,” Gore suggests a theory of the media, of history, and of reason that identifies Jürgen Habermas’s “refeudalization of the public sphere” as a trend of the present era of politics (18).  So for this book review I will consider both Gore’s (2007) book and Habermas’s (1962, originally) book as analyses of media history.  Here I will concentrate upon the similarities of Gore’s template to Habermas’s.

(Of the diaries that have been written about Gore’s book, let me grant some kudos: first to algebrateacher for trying to put together a study guide, Nonpartisan’s attempt to fit Gore into the Progressive legacy, jamesboyce’s note on E. J. Dionne, and of course teacherken’s long, moving diary.)

Introduction:  Al Gore’s The Assault on Reason is mostly about the Bush administration.  But it’s also about, as its title suggests, “reason.”  The first five chapters designate supposed “threats to reason,” such as fear, dogmatism, the conquest of the “public sphere” by the wealthy, the spread of lies, and government violations of individual rights.  The concentration of all of these “threats to reason” in the Bush administration, Gore argues, leads to three negative outcomes: America is less secure, abrupt climate change threatens the globe’s ecosystems, and American democracy is threatened.

As a counterweight to the Bush administration, Gore suggests that the Internet will at some point make an effective “public sphere,” reinvigorating the “conversation of democracy”:

In fact, the Internet is perhaps the greatest source of hope for reestablishing an open communications environment in which the conversation of democracy can flourish.  The ideas that individuals contribute are dealt with, in the main, according to the rules of a meritocracy of ideas.  It is the most interactive medium in history and the one with the greatest potential for connecting individuals to one another and to a universe of knowledge.  (260)

Behind Gore’s critique of Bush, then, is a history of communication media.  The introduction even contains a sustained critique of the theories of Marshall McLuhan, who was (in North America in the 1960s, at least) the world’s most famous media historian and who had a theory of “cool” and of “hot” media (20).  In this history, the print media promoted a “public sphere” in an earlier era, television is a harmful enabler to Bush in the current era, and the Internet has the potential to bring back democracy to America.

The theory of the media that would seem most thoroughly to inform Gore’s notion of an “assault on reason” (and thus his critique of Bush) is the one given by Jürgen Habermas in his early (1962) book Structural Transformation of the Public Sphere, whence Gore’s citation of the “refeudalization of the public sphere” on p. 18 of The Assault on Reason.  Gore’s Habermasian idealism goes all the way down to his interest in the notion of the “unforced force of the better argument” (or at least in a “meritocracy of ideas” of some sort) that one sees in Habermas’s later works on argumentation.  But, generally, Gore sees his work as a contribution to the “public sphere” that is mentioned in the above-cited early Habermas work.  In order to see Gore’s adoption of Habermas’s (1962) historical template, I will discuss Structural Transformation of the Public Sphere and then situate the argument of The Assault on Reason within its premises.


(Photo credited to mimax via Creative Commons)

Habermas: Habermas’s Structural Transformation of the Public Sphere can be summarized in a nutshell: The historical appearance of the “public sphere” has a distant echo in Classical Athens, to be sure, but its modern appearance comes with the separation of the “public” and the “private” in the early eras of capitalism, most specifically in the 18th and 19th centuries.  The “traffic in commodities and news” (17), which expands at the beginning of the capitalist era in the 16th and 17th centuries, later becomes a “public sphere” with the proliferation of newspapers, leaflets, and other forms of literary culture.  The main distinction of the “public sphere,” and of the “civil society” which participated in it, was that it was a forum in which “civil society” could criticize the state.  The actual places where this criticism was performed were the literary salons of early modernity and, essentially, the Victorian coffeehouse, where “public opinion” could be voiced.

Now, this historical “public sphere” was marked by gender and class exclusions — this is why Habermas calls it the “bourgeois public sphere,” and it’s why historians like Mary P. Ryan note in Habermas and the Public Sphere that “women were patently excluded from the bourgeois public sphere, that ideal historical type that Habermas traced to the eighteenth century, and were even read out of the fiction of the public by virtue of their ideological consignment to a separate realm called the private.”  But within the clubs, the coffeehouses, the salons, matters of status were disregarded (36).  Generally, however, the bourgeoisie were the “public” which constituted “civil society,” with the working class peering in from the outside.  Habermas goes into detail about the developments in literary life that reflected the development of the public sphere.

However, Habermas tells us that as the economy of capitalist society was transformed by the consolidation of corporate power, the “public sphere” was transformed into a mass society, in which the public and the private were no longer held separate.  The most important political difference between the “public sphere” and mass society is reflected in the author’s heading, “From a Culture-Debating to a Culture-Consuming Public.”  In this development, we are told that “the public sphere in the world of letters was replaced by the pseudo-public or sham-private world of culture consumption.” (160)  Participation in public life, for Habermas, becomes just another form of commodity consumption, and political meaning is lost in the spatial dominance of market culture. (164)  Political consensus formation in consumer society, we are told,

…ensures a kind of pressure of nonpublic opinion upon the government to satisfy the real needs of the population in order to avoid a risky loss of popularity. On the other hand, it prevents the formation of a public opinion in the strict sense.  For inasmuch as important political decisions are made for manipulative purposes (without, of course, for this reason being factually less consequential) and are introduced with consummate propagandistic skill as publicity vehicles into a public sphere manufactured for show, they remain removed qua political decisions from both a public process of rational argumentation and the possibility of a plebiscitary vote of no confidence in the awareness of precisely defined alternatives.  (221)

Thus public opinion becomes an object of domination “even when it forces (the dominators) to make concessions or to reorient itself.  It is not bound to rules of public discussion or forms of verbalization in general, nor need it be concerned with political problems or even addressed to political authorities.” (243)

Now, one can see Bush from Habermas’s (1962) perspective as someone trying to use the tools of media manipulation to make the Presidency into a complete autocracy.  In Habermas’s (1962) sense, Bush is completing a trend that was there as a potential since the first days of universal access to radio or television.  Bush, then, can easily be seen as the ultimate consequence of what Herman and Chomsky call “manufacturing consent.”  The general insinuation of this type of history is that the public sphere was useful for the triumph of the bourgeoisie in their struggle with the old aristocracies of Europe but, once its rights had spread to the rest of the public, it became absorbed in the “consumer” dispensation described above by Habermas.  So let’s see, then, what Al Gore makes of the template of the (bourgeois) public sphere.


(Photo credited to Matthew Bradley via Creative Commons)

Gore: In The Assault on Reason, Gore puts aside the class content and economic analysis of Habermas’s earlier work, and attempts to make a case for this historical template on the basis of media history.  Gore’s version of this history is stated in its most Habermasian vein on pp. 130-131:

African Americans, Native Americans, and women were not included in the circle of respect two centuries ago, of course.  And in reality, access to the public forum was much more freely available to educated elites than to the average person.  Even though literacy rates were high in the late eighteenth century, illiteracy was a barrier for many then, as it is for many Americans still.

Nevertheless, with the dominance of television over the printing press and the continued infancy of the Internet in its development as a serious competitor to television, we have temporarily lost a common meeting place in the public forum where powerful ideas from individuals have the potential to sway the opinions of millions and generate genuine political change.  What has emerged in its place is a very different kind of public forum — one in which individuals are constantly flattered but rarely listened to.  When the consent of the governed is manufactured and manipulated by marketers and propagandists, reason plays a diminished role.  (130-131)

For Gore, the main consolation for this (generally gloomy) picture of history is the Internet, which (of course) wasn’t around in 1962 when Jürgen Habermas published Structural Transformation of the Public Sphere:

With each passing month, the Internet is bringing new opportunities for individuals to reassert their historic role in American democracy. (131)

It remains to be seen, however, what role the Internet will play in the resolution of problems of world-historic scope (and Gore’s book is full of them).  (It is, after all, far cheaper to get a TV or radio than to get a computer with an Internet service provider, so we might run into some class-analysis problems in that light.)

Sure, Al Gore’s use of this historical template lacks the class analysis, the eye for political economy, that made Habermas’s (1962) book so cogent.  Perhaps Gore could consider how the public sphere developed under the domination of politics by the wealthy, and re-elaborate his discussion at the beginning of Chapter 3 (the “Politics of Wealth” chapter), where he starts by praising capitalism and then criticizes it.  Gore:

The inner structure of liberty is a double helix: One strand — political freedom — spirals upward in tandem with the other strand — economic freedom.  But the two strands, though intertwined, must remain separate in order for the structure of freedom to maintain its integrity.  If political and economic freedoms have been siblings in the history of liberty, it is the incestuous coupling of wealth and power that poses the deadliest threat to democracy. (72-73)

I would ask Al Gore to consider that “capitalism” and “economic freedom” mean different things to people of different social classes.  To the poorest among us, “capitalism” means the obligation to pay, and “economic freedom” means having enough money to pay, or at least to be able to make a living without being trapped in debt peonage.  To the wealthiest among us, the “incestuous coupling of wealth and power” IS “economic freedom.”  “Capitalism,” then, is not equivalent to “economic freedom” for everybody.

I would also like to encourage all readers of this diary, and especially Al Gore should he encounter it, to read some of the derivative works of Habermas’s Structural Transformation of the Public Sphere.  My favorites:

Mike Hill and Warren Montag’s Masses, Classes, and the Public Sphere
John Forester, ed. Critical Theory and Public Life
Craig Calhoun, ed. Habermas and the Public Sphere

and of course the ever-informative

Rolf Wiggershaus, The Frankfurt School: Its History, Theories, and Political Significance

That’s enough for now.


Marx and the Present: An Introduction

October 12, 2007

Originally posted by Cassiodorus on 03/18/07

(crossposted to Daily Kos)

Plenty of discursive forces have arisen in the present to prompt a diary on Marx.  I’ve had discussions with people on DKos who wish to contest what they see as my “Marxism.”  Actually, I suppose it all boils down to the student who asked me in my instructional communication class last fall, “Are you a socialist?”  So what is my relationship to the O.G. of socialism?  I’m sure this topic comes up whenever I’ve disputed the ability of the capitalist system to come up with a solution to any significant ecological problem.

My reliance on terms such as “class struggle” and “political economy” is motivated by my interest in Kees van der Pijl, who works from a Marxist framework typically given the name of “neo-Gramscian international political economy.”  Now, “neo-Gramscian” refers to Antonio Gramsci, about whom I’ve written a diary previously.  But I haven’t really said much about Karl Marx (1818-1883), about whom much has been said and little actually known.  Marx is a symbol of “communism,” but his main contribution to intellectual thinking has been his critique of capitalism.  “Marxism” has been called “obsolete,” especially after the (1991) fall of the Soviet Union, yet its theorists produce works that meaningfully describe the political and economic realities of this era.

Marxism is, of course, a political football.  So-called “conservatives” never tire of describing the Soviet Union, the world’s first ostensibly Marxist regime, as a “failure,” ignoring its unparalleled success in transforming Russia into a superpower.  Few interested commentators on Marxism can distinguish the advocacy of a communist utopia from the critique of capitalism.  At any rate, here I shall attempt to say something brief about Marx: 1) who he was, 2) what he said, and 3) what it means for us today.

  • Who was Marx?

    Anyone who wants the full scope on this can read the biographies, especially: Marx: A Life by Francis Wheen, Marx: A Biography by Robert Payne, or Karl Marx: His Life and Thought by David McLellan. 

    At any rate, Marx grew up in Trier, a Catholic city in a Protestant kingdom, Prussia.  He was the son of a Jewish businessman who converted to Lutheranism to avoid losing his profession.  Marx’s daddy wanted him to be a lawyer; he decided to be a philosopher instead.

    At first Marx hung out with a school of philosophers which history calls the “Young Hegelians” – these were young men who had decided to criticize the structure of society using the philosophical framework of “dialectics” of the philosopher Georg Wilhelm Friedrich Hegel.  Marx soon thereafter abandoned said school, and developed a communist philosophy which history calls “historical materialism.”  The first inkling of this philosophy was in a series of notebooks, published only long after Marx’s death, called the “1844 manuscripts.”  The 1844 manuscripts are about the meaning of “alienation,” by which Marx meant the loss of ownership the worker feels after selling his/her labor to an employer.  But, within them, Marx has this stunning thought about the power of money:

    That which is for me through the medium of money – that for which I can pay (i.e., which money can buy) – that am I myself, the possessor of the money. The extent of the power of money is the extent of my power. Money’s properties are my – the possessor’s – properties and essential powers. Thus, what I am and am capable of is by no means determined by my individuality. I am ugly, but I can buy for myself the most beautiful of women. Therefore I am not ugly, for the effect of ugliness – its deterrent power – is nullified by money. I, according to my individual characteristics, am lame, but money furnishes me with twenty-four feet. Therefore I am not lame. I am bad, dishonest, unscrupulous, stupid; but money is honoured, and hence its possessor. Money is the supreme good, therefore its possessor is good. Money, besides, saves me the trouble of being dishonest: I am therefore presumed honest. I am brainless, but money is the real brain of all things and how then should its possessor be brainless? Besides, he can buy clever people for himself, and is he who has a power over the clever not more clever than the clever? Do not I, who thanks to money am capable of all that the human heart longs for, possess all human capacities? Does not my money, therefore, transform all my incapacities into their contrary?
    If money is the bond binding me to human life, binding society to me, connecting me with nature and man, is not money the bond of all bonds? Can it not dissolve and bind all ties? Is it not, therefore, also the universal agent of separation? It is the coin that really separates as well as the real binding agent – the [. . .] chemical power of society.
    Shakespeare stresses especially two properties of money:
    1. It is the visible divinity – the transformation of all human and natural properties into their contraries, the universal confounding and distorting of things: impossibilities are soldered together by it.
    2. It is the common whore, the common procurer of people and nations.
    The distorting and confounding of all human and natural qualities, the fraternisation of impossibilities – the divine power of money – lies in its character as men’s estranged, alienating and self-disposing species-nature. Money is the alienated ability of mankind. (167-168)

    The power of this passage to evoke the cynicism of a society based on money is dramatic.

    Anyway, Marx went on to join various communist movements, getting kicked out of country after country until finally settling in London.  Marx’s early revolutionary participation was pretty much over by 1852, though.  Before then, however, he wrote a piece of propaganda, the Communist Manifesto, which egged on the international rebellions which history would call the uprisings of 1848.  It contains a famous phrase which all anti-Marxists like to cite in denouncing Marx:

    The advance of industry, whose involuntary promoter is the bourgeoisie, replaces the isolation of the labourers, due to competition, by the revolutionary combination, due to association. The development of Modern Industry, therefore, cuts from under its feet the very foundation on which the bourgeoisie produces and appropriates products. What the bourgeoisie therefore produces, above all, are its own grave-diggers. Its fall and the victory of the proletariat are equally inevitable.

    Of course, nothing of the sort happened.  The creation of factory life did not bring the workers together, and the victory of the proletariat is by no means inevitable.  In that quote, Marx was displaying a Victorian sense of hubris, a hubris which doubtless left him by the time 1872 rolled around (see below).

    Later in his life, Marx participated in the “International Working Men’s Association,” the First International, founded in 1864. 

    The various Internationals were organizations dedicated to uniting the working class for the sake of global revolution.  They all failed to achieve this goal, for various reasons: the First International had serious ideological and national problems, the Second International did not survive the hostilities of the First World War, and the Third International was Lenin’s attempt to institute ideological conformity among the various Communist Parties, which (according to Julius Braunthal in the 2nd volume of his 3-vol. History of the International) led to a 75% desertion from said Parties in western Europe.  Communist movements on planet Earth all appear to have been premature.

    But that’s a side note.  By 1872, Marx effectively controlled the International, and, claiming division between the various nations, he proposed that the headquarters of the International be moved to New York City, thus killing it off.  Why did Marx do away with the International as such?  Francis Wheen’s biography puts it as follows:

    By exiling the International to America, Marx had deliberately condemned it to death…  So why did he do it?  Marxian scholars have treated the question as an insoluble riddle, but there is no great mystery: he was simply exhausted by the effort of holding the warring tribes together…  Marx knew that without his commanding presence the General Council would disintegrate anyway and might do serious damage to communism before expiring.  Far better to put the wounded beast out of its misery. (Wheen 344-345)

    It’s easy to see, too, why “Marxian scholars” would want to gloss this over: it’s a clear sign of the pessimism of the Marx of 1872.  Marx himself had grown to doubt the immediate possibility of revolution.

    Marx married Jenny von Westphalen, a member of the petty German nobility, and lived in London as a bourgeois, draining his and his wife’s inheritances and sponging off of his friend and collaborator Friedrich Engels, whose daddy owned a factory.  He earned money now and then by publishing his writings, although this was work he disliked.  In this vein, he published some stuff for the New York newspapers during the (American) Civil War.

    Although Marx was very cutting in dealing with those who in the least disagreed with his approach, he was exceedingly mellow in dealing with allies and with children, was known as “Moor” to his immediate friends and family (for his swarthy complexion) and grew a very shaggy beard.  He was survived by his wife and by five daughters.

  • What did Marx say?

    Marx’s theories can be grouped fourfold:

    1) Theories of revolution and utopia

    2) Theories of political-economic life

    3) Theories of history, and then (of course) there are

    4) Overall statements of philosophy

    When we talk of “Marxism,” then, we should be specific about what we mean.  Marx’s critique of political-economic life is a far different vegetable than his theory of revolution and utopia, for instance.  I see these theories as follows: 


    1) Theories of revolution are probably the most problematic of Marx’s theories.  The Manifesto is problematic for the reasons stated above, and also for the ten-point program given in Chapter 2:

    1. Abolition of property in land and application of all rents of land to public purposes.
    2. A heavy progressive or graduated income tax.
    3. Abolition of all rights of inheritance.
    4. Confiscation of the property of all emigrants and rebels.
    5. Centralisation of credit in the banks of the state, by means of a national bank with State capital and an exclusive monopoly.
    6. Centralisation of the means of communication and transport in the hands of the State.
    7. Extension of factories and instruments of production owned by the State; the bringing into cultivation of waste-lands, and the improvement of the soil generally in accordance with a common plan.
    8. Equal liability of all to work. Establishment of industrial armies, especially for agriculture.
    9. Combination of agriculture with manufacturing industries; gradual abolition of all the distinction between town and country by a more equable distribution of the populace over the country.
    10. Free education for all children in public schools. Abolition of children’s factory labour in its present form. Combination of education with industrial production, &c, &c.

    This sometimes comes in for criticism as a sort of state authoritarianism, a sort of Marx endorsement of the proto-Soviet Union.  One must remember, however, that the “state” Marx had in mind was a preliminary, ad hoc conspiracy of the working class to keep from being crushed by the owning class and its armies, and that said “state” was to abolish itself as soon as its program, communism, was achieved.

    Marx eventually decided that the Paris Commune, an uprising of democrats in France over two months of 1871, would be his ideal of communism.  The Paris Commune, which risked all for the sake of democratic elections when its very existence was at stake, was hardly the sort of dictatorship Stalin would institute in Russia.  (Unfortunately, Marx did not endorse the Paris Commune until it had been brutally suppressed by the mass murder of its participants; during the Commune’s two months, says Wheen, Marx was sick with bronchitis.)

    Now and then, Marx’s statements of “communism” specify increases in the rate of production.  The assumption Marx worked under, in the 19th century, was that capitalism hindered overall production, and that when people were free of capitalism they would be free to produce more.  Today, however, capitalism is plagued by overproduction.  Any socialism that arises tomorrow will have to offer global society the right to produce less, not more.

    Occasionally Marx’s notion of utopia comes under fire for not offering “incentive.”  Only capitalism, the Right likes to babble, will offer everyone incentive to work.  Never mind that the Soviet Union used the ideal of communism to turn a peasant nation, ruined by war, plague, and famine, into the world’s first spacefaring superpower in the space of four decades.  And, of course, said babblers never look at the following passage from the Critique of the Gotha Program:

    What we have to deal with here is a communist society, not as it has developed on its own foundations, but, on the contrary, just as it emerges from capitalist society; which is thus in every respect, economically, morally, and intellectually, still stamped with the birthmarks of the old society from whose womb it emerges. Accordingly, the individual producer receives back from society — after the deductions have been made — exactly what he gives to it. What he has given to it is his individual quantum of labor. For example, the social working day consists of the sum of the individual hours of work; the individual labor time of the individual producer is the part of the social working day contributed by him, his share in it. He receives a certificate from society that he has furnished such-and-such an amount of labor (after deducting his labor for the common funds); and with this certificate, he draws from the social stock of means of consumption as much as the same amount of labor cost. The same amount of labor which he has given to society in one form, he receives back in another.

    Try again, right-wingers.  Socialism and communism are fully workable social systems; the main question, as Marx well knew, is whether they have a chance to develop out of capitalism.

    Another primary fallacy of the Right with respect to Marx is that they imagine that under socialism everyone would be “equal.”  The Critique of the Gotha Program ends that speculation, too.  Marx didn’t give two hoots about equality — he knew that people weren’t equal.  His goal was the abolition of social classes, and thus of all political systems aimed at the domination of men.  Equality had nothing to do with that.  From the domination of men to the administration of things, an old slogan once went.


    2) Marx’s theories of political-economic life are really the main contribution of Marxism to the future.  The main document to read, of course, is Capital – biographer Francis Wheen quotes Marx’s letters to suggest that Capital was intended as a “work of art,” a basically literary attempt to undermine capitalism’s claims to rationality and morality.  In Capital, Marx argues that wage labor is a form of exploitation, for those who hire wage laborers profit off of them by taking the “surplus,” that portion of the worker’s daily labor not necessary for the working class’s survival.  Thus, suggests Marx, the working class creates the capitalist world but does not participate in its benefits.
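    The exploitation claim reduces to simple arithmetic, which can be sketched as follows.  The numbers are invented for illustration, and “units of value” stands in loosely for Marx’s labor-time accounting:

```python
# A toy illustration of the surplus-value arithmetic: the figures are
# invented for the example, and "units of value" is a deliberate
# simplification of Marx's labor-time bookkeeping.
value_added_per_day = 100.0  # hypothetical value a worker produces daily
daily_wage = 40.0            # hypothetical cost of reproducing labor power

# The "surplus" is whatever the worker produces beyond the wage.
surplus_value = value_added_per_day - daily_wage

# Marx's rate of surplus value: surplus over wages (s/v).
rate_of_surplus_value = surplus_value / daily_wage

print(surplus_value)          # 60.0
print(rate_of_surplus_value)  # 1.5
```

    On these made-up numbers, the worker keeps 40 units and the employer pockets 60, a rate of surplus value of 150 percent, which is the sense in which the working class “does not participate” in the wealth it creates.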

    Marx’s world is that of a class system, composed not of many classes but of two (essential) classes:

    1. the owning class, which lives off of its investments, and
    2. the working class, which really has nothing to sell but its labor.

    We can see from the literature on neo-Marxism that classes other than these two may be important within the same framework.  This is true most especially of the contingent working class (discussed in great detail in Vijay Prashad’s Keeping Up with the Dow Joneses), the class of workers who don’t work all the time and thus suffer from exemplary poverty.  Also, the idea that professionals constitute a separate class is endorsed in Donald C. Hodges’ Class Politics in the Information Age.  At any rate, later writers have sought to amend Marx’s notions of class by looking carefully at relationships to the means of production other than just “owners” and “workers.”  Marxist ideas of class, however, share a common orientation: class is defined by one’s relation to production.


    One element of capitalist propaganda is that the owning class is in its privileged position because it is “better” than the working class.  Marx debunks this by showing how the class society of his 19th century Europe was created through an ongoing theft by those who now call themselves owners, in what he calls “primitive accumulation”:

    This primitive accumulation plays in Political Economy about the same part as original sin in theology. Adam bit the apple, and thereupon sin fell on the human race. Its origin is supposed to be explained when it is told as an anecdote of the past. In times long gone-by there were two sorts of people; one, the diligent, intelligent, and, above all, frugal elite; the other, lazy rascals, spending their substance, and more, in riotous living. The legend of theological original sin tells us certainly how man came to be condemned to eat his bread in the sweat of his brow; but the history of economic original sin reveals to us that there are people to whom this is by no means essential. Never mind! Thus it came to pass that the former sort accumulated wealth, and the latter sort had at last nothing to sell except their own skins. And from this original sin dates the poverty of the great majority that, despite all its labour, has up to now nothing to sell but itself, and the wealth of the few that increases constantly although they have long ceased to work. Such insipid childishness is every day preached to us in the defence of property. M. Thiers, e.g., had the assurance to repeat it with all the solemnity of a statesman to the French people, once so spirituel. But as soon as the question of property crops up, it becomes a sacred duty to proclaim the intellectual food of the infant as the one thing fit for all ages and for all stages of development. In actual history it is notorious that conquest, enslavement, robbery, murder, briefly force, play the great part. In the tender annals of Political Economy, the idyllic reigns from time immemorial. Right and “labour” were from all time the sole means of enrichment, the present year of course always excepted. As a matter of fact, the methods of primitive accumulation are anything but idyllic.

    We might, in fact, see this “conquest, enslavement, robbery, murder, briefly force” playing a part in modern economy, e.g. Bush Junior’s conquest and occupation of Iraq.  When they can’t exploit the people’s labor, the dirtiest of elites just steal.

    Now, it needs to be mentioned at this point that the conditions of a portion of the working class (especially in the most industrialized nations) are much better than they were when Marx was alive.  This is, for the most part, because of the 20th-century invention of Keynesian economics, in which the prosperity of a portion of the working class is seen as necessary for overall prosperity.  The reasoning runs through a Keynesian mechanism called the “multiplier effect,” which is said to work as follows: the state runs a deficit in order to increase the circulation of goods and services.  As each recipient of that spending re-spends a fraction of it, money circulates from person to person within society, more people are put to work producing, and society’s stock of goods and services grows by a multiple of the original outlay.  Thus with Keynesian economics the capitalist world saw the advent of a semi-planned capitalist society.  With Keynesianism one also sees a “middle class,” characterized by home ownership.
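    The multiplier arithmetic can be sketched in a few lines.  The figures here (the size of the injection, the fraction re-spent) are illustrative assumptions, not anything taken from Keynes or from this diary:

```python
# A toy sketch of the Keynesian multiplier: the state injects spending,
# and each recipient re-spends a fraction of it (the marginal propensity
# to consume), so total spending exceeds the injection by 1 / (1 - MPC).
initial_injection = 100.0  # hypothetical deficit spending by the state
mpc = 0.75                 # hypothetical marginal propensity to consume

# Closed form for the multiplier.
multiplier = 1.0 / (1.0 - mpc)
total_spending = initial_injection * multiplier

# The same total reached by summing the successive rounds of re-spending
# (a geometric series; 1000 terms is effectively the infinite sum).
rounds = sum(initial_injection * mpc**n for n in range(1000))

print(multiplier)      # 4.0
print(total_spending)  # 400.0
```

    With a propensity to consume of 0.75, each deficit dollar generates four dollars of total spending, which is why deficit spending could be advertised as a lever on overall prosperity.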

    Keynesianism has proved invaluable, moreover, in advertising capitalism to the world.  Under Keynesianism, ostensibly, anyone can join the middle classes.  The cornucopia of consumer life awaits.  The problem is, however, that Keynesian economics does not change the essentially exploitative reality of capitalist economics as a whole, and that the operation of a Keynesian economy requires the satisfaction of at least two basic material conditions.

    Two prerequisites for a Keynesian economy, it must be said, are 1) an autonomous national currency, and 2) a national economy capable of significant growth.  Both of these prerequisites are endangered today, the first by dollar hegemony, which makes the dollar everyone’s currency, the second by ecological limits to capitalist growth.  Keynesian economics became possible because, in a period of great struggle (the 1930s), it was seen as necessary to create a consumer class so that capitalist society’s output would be absorbed by someone.  Whether capitalist society can continue to produce enough consumers in the 21st century is in my opinion an open question.  My guess is that, in its heedless production of consumers, the capitalist system will soon exhaust some of Planet Earth’s important resource bases.


    3) Marx’s theory of history is a theory of the development of productive forces, from those of hunter-gatherer societies to those of settled agricultural societies, and from there through the empires of Antiquity and the feudal societies of the Middle Ages to the capitalism of the present day.  This theory is given in general form in Marx and Engels’ The German Ideology, and (in my opinion) has been superseded by the theory of history given by Kees van der Pijl.


    4) Marx’s philosophy, as displayed above, is about the overall primacy of economic life in society.  The central statement of historical materialism is given in the preface to A Contribution to the Critique of Political Economy:

    In the social production of their existence, men inevitably enter into definite relations, which are independent of their will, namely relations of production appropriate for a given stage in the development of their material forces of production.  The totality of these relations of production constitutes the economic structure of society, the real foundation, on which arises a legal and political superstructure and to which correspond definite forms of social consciousness.  The mode of production of material life conditions the general process of social, political, and intellectual life.  (20-21)

    Sometimes this theory of historical materialism is presented as a “determinism,” wherein the events of history are seen as inevitable consequences of the development of productive forces in world society.  However, historical materialism need not be that way.  All Marx is describing, in the paragraph cited above, is the notion that law, politics, and the other various and sundry social dramas of culture take place on a foundation composed of economic facts, and that the most important of economic facts concerns the relations of production.  Marx says nothing about economics determining culture.

    In fact, Marx had no idea what would happen in the future.  This is revealed most distinctly in Volume 3 of Capital, where Marx, after discussing the disintegrative tendencies of capitalism, suggests that he has no clue as to where it is all headed.


  • What does Marx mean for us today?

    In developing historical materialism, Marx laid the theoretical basis for modern neo-Marxism, which offers contributions to the social sciences at least as meaningful as those of the other schools.  The neo-Marxists continue, in a meaningful vein, the theories of political economy displayed with literary verve in Capital.  Robert Brenner, for instance, deserves at least a diary…

    Marx’s utopia, communism, will have to look a lot different in its vision than it did in the 19th century if it is to remain viable today, mostly in the realm of global society’s relationship to global ecosystems.  More likely than not, participants in the post-capitalist future will be cleaning up the enormous mess left behind by the carcass of capitalist society.  It will not, in short, be half as fun as Marx proclaimed it would be in the Critique of the Gotha Program when he said:

    In a higher phase of communist society, after the enslaving subordination of the individual to the division of labor, and therewith also the antithesis between mental and physical labor, has vanished; after labor has become not only a means of life but life’s prime want; after the productive forces have also increased with the all-around development of the individual, and all the springs of co-operative wealth flow more abundantly — only then can the narrow horizon of bourgeois right be crossed in its entirety and society inscribe on its banners: From each according to his ability, to each according to his needs!

    Sorry, I’m disappointed too, but it ain’t gonna happen.

    That’s all I’m going to say for now: this diary is way too long, and so more discussions of Marx will probably be needed later to cover the subject with anything close to adequacy.  Really all I’m trying to do is incite some conversation about a much-misunderstood topic.


  • Magna Charta III: To Runnymede and Beyond

    October 12, 2007

    Originally posted by mirrim on 10/06/06

    This is the third of my Introduction to Magna Charta diaries originally posted at My Left Wing. As before, I have revised and added a bit, smoothing out some rough edges (I hope!) and making a few matters clearer. My previous diaries on this topic can be found (here at PH).

    So what happened to turn grumbling and dislike of John’s policies into open rebellion?

    In 1210, having been excommunicated the November before, John took his army to Ireland. Several of the great Anglo-Norman lords there had been acting, well, like typical Anglo-Norman lords, and needed reminding that they were, after all, subjects of the king. (For once it was not the native Irish causing trouble.) John had no trouble fielding an army for Ireland of English knights and Flemish mercenaries since they expected an easy time, with plunder, and despite his nickname of “Softsword” (given to him later by the chroniclers for his loss of Normandy), in two months’ time he chased down and defeated in battle, and exiled, the worst offenders. (The oldest part of Dublin Castle was started during this campaign on his orders.)  On his return home, he found the Welsh princes—mostly his son-in-law, Llewellyn ap Iorwerth—getting restive, and the following year, after an unpromising start, he and the army he had gathered for the occasion chased Llewellyn down. Then his neighbor to the north, William the Lion of Scotland, asked his aid in putting down a claimant to his throne, and swore fealty to John in return (thus setting up the next four hundred years of English kings claiming Scotland and the Scots resisting, until James VI of Scotland became James I of England—but that is several other stories). By early 1212, then, Ireland, Wales, and Scotland seemed quiet. In John’s mind, it was time to tackle the problem of his lost French possessions again.

    Out went the letters, summoning the knights to service. This was the third summer in a row, and the summons was for a prolonged campaign, not the relatively brief ones of the previous two summers. As usual, John also raised money by amercements, money assessed by a lord, or the courts, as punishment for various, usually minor, infractions of the law. An amercement was “at the mercy” of the court, or in this case, the king. (The term “fine” in those days referred to a sum paid to gain access to a privilege or avoid an onerous duty, such as following the king to war.)  More importantly, there were justices sent out to discover—and set amercements for—violations of Forest Law, laws that applied to great swathes of land, not all of it wooded, which had been declared “forest” by John or his predecessors. Basically, these were lands that the king claimed as his personal hunting territory, and despite the name, quite a bit of it was actually cultivated. That this different “Forest” law, with many unusual details, was enforced in these areas, and that Forest lands had been greatly expanded by John and his predecessors, were sore points with everybody. John’s use of Forest Law as a cash cow didn’t help. John’s plans for a French campaign were put on hold when Llewellyn revolted again (whereupon John hanged the hostages he had demanded from Llewellyn the previous year), and then were abandoned for that summer when rumor apparently reached the king of a conspiracy to assassinate him, headed by two barons, Robert FitzWalter and Eustace de Vesci, whose initial grievances against him are obscure. 
We don’t know if they were noble Norman thugs who would have made trouble no matter who the king was, were reacting to the general grievances, or had specific complaints (though de Vesci may have been the noble in the incident about John’s reputation as a lecher I mentioned in the first installment, who substituted a commoner in the king’s bed in place of de Vesci’s wife). Our sources here are administrative writs and chroniclers, and while some chroniclers were good solid historians, all were churchmen, often writing far from the scene, and the quality and detail are very uneven. The chronicler for the last part of John’s reign is one of the better ones, but even he spends little time on motivations or details:

    Then King John’s heart was troubled, since it was being said, without authority, that rumors had been heard that the barons who had gathered together were conspiring against him, and that in many ears there were tales of letters [from the Pope—mirrim] absolving the barons from John’s allegiance; it was said that another king should be elected in his place and that John should be expelled from the kingdom. If on the other hand the king captured them, they would suffer death or perpetual imprisonment.
    Having announced his return, the king began to have misgivings and would go nowhere without either being armed or accompanied by a great force of armed men. Having taken captive some who seemed to be too intimate with the rebels, he quickly seized the castles of the earls and barons, so that there was unrest for some time. Then the nobles of the country, fearing either the king’s anger or the scruples of conscience, left England secretly. Eustace de Vesci was received in Scotland and Robert FitzWalter departed to the French. Their goods were confiscated…  (from The Plantagenet Chronicles, Elizabeth Hallam, ed.)

      With the worst of the malcontents fled to Scotland, or to Philip in France (who was also hosting several of John’s bishops who had left either because of the Interdict or John’s response), John seems, temporarily at least, to have gotten some sense. He reined in his bureaucrats and justices. He made good faith overtures to William Marshal, who was keeping his remaining fellow barons in Ireland loyal, and who appears to have been the one who first counseled making peace with the Pope, especially since other rumors had Philip of France, with the Pope’s blessing, preparing to invade and depose him. John needed to cut his losses, and fast.

    So in 1213, John made his peace with the Pope, though hammering out the details took another year. As part of the settlement with the Pope, de Vesci and FitzWalter were allowed to return, and Stephen Langton finally took his seat as Archbishop of Canterbury. In the meantime, it became clear that Philip was indeed planning an invasion, with the Pope’s blessing or no. John called up the feudal levies for the fourth year in a row, but again the loyal barons and lesser vassals, or at least the ones not overtly in rebellion, didn’t grouse too much: they may have been uncertain that Philip would be an improvement over John. John evidently wasn’t planning on relying on them anyway. Instead, he had been building ships, arguably the forerunner of the Royal Navy, and sent them off under the Earl of Salisbury (John’s half-brother, one of Henry’s bastards) to find Philip’s invasion fleet and destroy it. They caught it at anchor in Flanders, with much of Philip’s army away trying to conquer Flanders for Philip, and destroyed or captured a large portion of it. The threat was gone for the time being, and since it was early in the summer, John planned to keep the army together—and take it to France. At this, even some of the more loyal barons became irritated: they were the ones paying to keep their knights and men-at-arms clothed, fed, and supplied, and it was quite a drain on their finances. They were tired from the previous three summers under arms, their men were tired, and they claimed (inaccurately) that their feudal duty to John did not include service on the Continent. When John put out to sea the barons and their men sat on the shore, giving him the medieval equivalent of “Fuggedaboutit” and “Hell, no.” John must have been furious: the Plantagenet temper, which John had in full measure, was already legendary. Only the intervention of Archbishop Langton prevented John from turning his mercenaries on the recalcitrant then and there.

      John nonetheless (and for the third year in a row!) started preparations for a campaign in France the following year, and called for a scutage at triple, not double, the “customary” rate of his father’s day. The barons were downright resistant to both ideas, the war and the scutage. They had been under arms and paying their own way the whole time Philip’s invasion threatened. They were tired of fighting in France, and of paying for fighting in France: most of those he called upon refused to go, again claiming that “traditionally” they had not had to serve in France (again, not true). Had the plan been to invade Normandy, where many had had fiefs ten years before, the story might have been different; but John was headed to his other ancestral lands first, to Poitou (from his mother Eleanor) and Anjou (the ancestral lands of the Plantagenet family), where none of them had held lands. They also refused to come up with the cash for their fines and scutage. John went anyway, with an army of mercenaries, in February of 1214, leaving some of his favorite “enforcers” in charge of England. From France in May he demanded the barons pay up; they refused.

      Now, if John had been successful in France, all might have been forgiven on both sides. Initially, he was, bringing several Poitevin and Angevin lords to heel and generally pacifying both provinces; admittedly, the lords may have thought that an overlord in England was preferable to Philip right next door, or that it was best to placate the man with the army on-scene. Then news came to John of the defeat, on the other side of France at Bouvines, of the allies who had been keeping Philip busy and away from Poitou and Anjou. Not John’s fault, but the French lords he had defeated promptly saw which way the wind was blowing and defected back to Philip. By the time a defeated John returned to England in October of 1214, between his demands for money, and the behavior of the henchmen (most of whom were not native Englishmen) he had left in charge in England, resentment had progressed to open defiance on the part of many.

    A lot of the details of what happened next really aren’t available: again, our sources are limited, many were written long after the facts, and the ones that weren’t can be contradictory. Since 1213 there had been some discussion among the powerful in England about presenting the king with a possible charter of liberties. We know this due to an obscure document found in the French archives in the 1800s which mentions John by name and refers to some matters which were a problem in 1213 but less so in 1215 (and didn’t make it into Magna Charta; other matters in the same document did). The chroniclers tell of the barons’ demands to John to restore the good laws of King Edward the Confessor’s day (interesting, because that would have meant restoring the old pre-Norman law code based on Alfred the Great’s, but nothing came of this), and then of a charter of Henry I, William the Conqueror’s son, never really enforced, but providing a useful precedent for a general charter of liberties given by a king, as opposed to a charter for a specific place such as a town. While the chroniclers imply that the entire baronage was opposed to John, it seems that the barons were, unofficially at least, splitting into three groups. One group eventually called itself the Army of God; these were the out and out rebels, led by FitzWalter and de Vesci. A second group was that of the barons who remained loyal to the king; though slightly smaller than the first, this one included most of the most powerful men in England. Its leader was our old friend William Marshal, Earl of Pembroke, Lord of Leinster and Justiciar (governor) of Ireland. 
It may have been a matter of honor, of keeping of oaths, for Pembroke and the others, but there is no doubt that the presence of a man who had Marshal’s reputation as a man of honor, as a competent administrator, and still (though probably pushing seventy) a man to be reckoned with in battle, may have induced some others not to join the rebels, but to remain among the moderates. This was still an era in which honor, and the keeping of one’s oath, were acceptable political motives. (Many families actually covered their bets: the actual baron, who had given his oath of fealty, stayed with the king, and his son or a younger son, who had taken no oath of fealty, held no land and thus had less to lose, went with the rebels. The younger William Marshal, son of the great Earl, was with the rebels.) The moderates waiting to see what would happen were the largest group: and apparently among its leaders, and a major contributor to the document we know, was Stephen Langton.

    The legend is that it was Langton who first brought the charter of Henry I to the barons’ attention. Great story, but probably not true: the rebels, the moderates, and the king’s party all had those within their number who were versed in law and precedent. What he may have done is point out to them its importance as precedent. It is also clear that the document we have shows the hand of someone who was used to thinking in more catholic, universal terms than the barons in one important point: some matters in the 1213 draft charter which mention only “barons”, in the final Magna Charta are rights claimed for “free men”. In addition, it seems to have been Langton and the moderates who kept negotiations going when one side or the other threatened to quit talking: amazingly, that appears to have been more frequently the rebels rather than the king. John played for time, took vows as a crusader (which was always a good move for delaying things a king didn’t want to deal with), held off on concessions until he could get “guidance” from the Pope, and called in mercenaries as a threat (Langton called his bluff on that one, making him send them back), but, via William Marshal, never stopped discussing things with Langton and the moderates. The rebels evidently tried as hard as they could to get John to lose his temper and move against them with an army. Finally in early May they made demands the king rejected outright, renounced their oaths of fealty (making open warfare against the king “legal”) and then proceeded to besiege one of John’s castles. For once, the legendary Plantagenet temper held. John managed to bring to his side some of the moderates and a few of the rebels by being conciliatory and offering arbitration to settle grievances. Then the rebels, with the help of the townsfolk, took London in mid-May 1215. Then as now, London was the premier city of England, the trade and financial capital. 
The king did not want to attack the city, and the rebels didn’t really have the strength to do much more than hold it. Langton and the moderates got busy during the standoff.

    A bit more than three weeks later, on the 10th of June, the king came to the meadow called Runnymede, “between Windsor and Staines”, to meet with the leaders of the moderates. The rough draft document, “The Articles of the Barons”, survives, and was sent to the rebels after the king’s approval. On June 15th, the rebels joined the others at Runnymede, and an agreement was reached. While that is the traditional date for the document’s signing, it may have taken a few more days for the Chancery Office to get it into final form, copied, signed, and the seal affixed, and then given to the rebels as they renewed their allegiance to the king on June 19th. Four originals still survive: two in the British Library, one in Lincoln Cathedral (the copy I saw on the “Magna Charta to the Constitution” tour in 1987), one in Salisbury Cathedral.

    And what of the legend of the signing, of the king sitting with few or no supporters in the tent at Runnymede, surrounded by hostile barons with their hands on their swords, forced to set his hand to the parchment, then falling to the ground in a rage and chewing bits of wood in his fury? Well, it’s a great story; but it probably never happened. The whole affair seems to have been fairly politely handled, and in fact, as I said, the final signing and sealing of the document may have had to wait until the Chancery Office got the terms into proper legal shape a few days later. John had no need for fury: he was already plotting to get the whole thing thrown out by an appeal to his “overlord” the Pope, on the grounds that it had been extorted from him and signed under duress, and that the people demanding it were in rebellion against their lawful overlord—which put them in rebellion against the Pope. The story may have originated in its one known source: Holinshed’s Chronicles, a compendium of English history, written in the 1500s (so 300 years after these doings, and much longer for some of its matter), and not before. Holinshed is, alas, not reliable, either because he relied on sensationalistic and poor sources, or because he just couldn’t resist making up a good story. He was, for instance, also the source for Shakespeare’s portrayal of the historical Scottish king MacBeth (which play makes Braveheart look like a history lesson). Here is his version (not rendered into modern English; it’s basically Shakespearean-era, don’t panic):

    Great rejoicing was made for this conclusion of peace betwixt the king and his barons, the people, knowing that God had touched the king’s heart, and mollified it, whereby happie daies were come for the realme of England, as though it had been delivered out of the bondage of Aegypt: but they were much deceived, for the king, having condescended to make such grant of liberties, farre contrarie to his mind, was right sorowfulle in his heart, cursed his mother that bare him, the hour that he was borne, and the paps that gave him sucke, wishing that he had received death by violence of sword or knife, instead of naturall nourishment: he whetted his teeth, he did bite now on one staffe and now on an other as he walked, and oft brake the same in peeces when he had done, and with such disordered behavior and furious gestures he uttered his greefe, in such sort that the Noble men verie well perceived the inclination of his inward affection concerning these things, before the breaking up of the councell, and therefore sore lamented the state of the realme, gessing what would follow of his impatiencie, and displeasant taking of the matter.

    Now, the picture of John in a towering rage is a great one, and accords well with our knowledge of the Plantagenet temper which John had in full measure, no less than his father Henry and brother Richard, both of whom could terrify anyone short of William Marshal when they were in full rant. But compare this to the Barnwell chronicler, a contemporary source:

    Having agreed upon a place where the parties could conveniently gather, after many deliberations they made peace with the king, and he gave to them all that they wanted, and confirmed it in his charter. (Plantagenet Chronicles)

    How long did the charter remain in force? Initially, about ten weeks: just enough time for messengers on fast horses to get to Rome and back with Pope Innocent’s condemnation of the whole deal. Some of the extremist barons didn’t want peace, either. Many of the complaints and grievances were addressed, but by the end of September, with Langton on his way to plead the barons’ case to the Pope, the Pope’s letter condemning the charter had arrived in England, and John was again putting together an army of continental mercenaries to fight the remaining rebels, who were themselves negotiating with Philip to come over and help them depose John. The fighting began in December of 1215, and continued through the summer of 1216, with as many as two-thirds of the barons at one point in opposition to John (and Philip having sent over a small force to assist them). John was in fact still on campaign in October of 1216, and more than holding his own, when he was taken ill and died.

      Three things happened almost immediately. The rebels laid down their arms; they had no quarrel with John’s nine-year-old son Henry III, and at any rate many of the worst hotheads, like de Vesci, had been killed by this time. Then they told Philip that his aid was no longer required, sending his son Louis, the commander of Philip’s “loaner” force, and his men home. Last, and most importantly, the elderly (now probably over seventy), still highly respected William Marshal was named Regent for England and the young king’s guardian. On November 11, 1216, with a few modifications and omissions, Marshal reissued the Charter in the young king’s name, and he did so again in 1217. In 1225 the young King Henry, still not of age and under regency, reissued it, again with some omissions and changes. Its status remained somewhat uncertain (since nothing done during a king’s minority became permanent until he reconfirmed it after coming of age) until Henry, now ten years adult, reconfirmed it in 1237.

      Subsequent kings did the same (usually because their subjects refused to consider granting a tax until they did—a situation set up by one of the provisions of Magna Charta, as will be shown in the analysis) until the Charter of Runnymede, The Great Charter, became engrained in the English psyche as the basis for their civil rights, whether they ever really read the text or not. As such, it was used in attempting to rein in the “divine right of kings” espoused by the Stuarts, via the Petition of Right prior to the English Civil War and the beheading of Charles I, in establishing Habeas Corpus during Charles II’s reign, and all the other various and sundry Acts of Parliament—itself arising from Magna Charta—which taken together are the British Constitution. And of course, a group of disgruntled and rebellious colonists, while never referring to it by name in the documents they produced, used what they felt to be its basic principles freely in both 1776 and 1787.

    (Just in case you’re interested: The Plantagenet Chronicles [Elizabeth Hallam, ed.; Weidenfeld and Nicolson, 1985], a compendium of excerpts from the chroniclers of the day, is fascinating reading.  The quote from Holinshed is from the 1587 edition, available on-line from the Furness Memorial Library at UPenn. Go (here): it’s lovely to look at, though the font’s a bitch to read, and you can scroll through the whole book; if you scroll down the “Divisions”, John’s reign is in Volume III. Holinshed was a major source for several of Shakespeare’s plays, which explains their historical inaccuracy.)


    Magna Charta II: The Sources of Conflict

    October 12, 2007

    Originally posted by Mirrim on 09/21/06

    Previously, I reviewed the deeper background to Magna Charta: the reasons why the barons started out disliking John, and then got even crankier (here). Now it’s time for an overview of the more immediate causes: the Interdict and related matters, taxes (it’s always taxes!), and more about John’s relationships with his barons—and perhaps the dispelling of a few myths about the background of Magna Charta. One more diary about the history after this, as I get into the barons’ revolt and the scene at Runnymede next time, and then I’ll start reviewing the document itself; it’s longer than most people think. And that project is a lot more complex than I thought it would be. Again, I’ve revised and added to this essay for its reposting here.

      In June of 1205, Archbishop Hubert Walter of Canterbury, who was also John’s Chancellor, died. The two roles were not officially linked, but powerful medieval churchmen frequently were tapped by kings for important roles in government due to their education and experience. (The Church was trying to forbid the practice, and eventually succeeded; indeed, it continues the ban to this day by requiring priests or nuns running for public office to renounce one or the other, their vows or their political aspirations. The former president of Haiti, Jean-Bertrand Aristide, is an example.) Walter, as Archbishop of Canterbury, had supported John for the throne, and had been an able and effective head of John’s Chancery, the office that handled much of the non-financial bureaucracy for the kingdom.

    Unlike today, when bishops are mostly appointed by Rome and Rome is a phone call or e-mail away, in that era most dioceses had a small monastery attached to the cathedral called the “cathedral chapter”, and those monks had the right to elect the bishop of the diocese. These elections were supposed to be “free”: the cathedral chapter in theory could elect anyone they pleased. Not infrequently, though, the king would have someone in mind and made his opinions known; sometimes it was as blatant as “You are required to hold free elections; We (the king) require you freely to elect So-and-So.” It was understandable that a king would want someone he felt he could work with; besides the possibility of bishops or archbishops becoming royal bureaucrats, they were also major feudal landholders and members of any advisory council a king might have. There was a tradition amounting to law in England that the major benefactor or feudal lord of a church or monastery had the right to appoint its priest or abbot (the Church was trying to end this), and this principle dovetailed nicely with the king’s appointment of bishops. As the archbishop of Canterbury was a landholder on a par with the great earls of the kingdom as well as the spiritual head of all England, John naturally wanted his own man in the job: he proposed giving it to his secretary, one John de Gray, already Bishop of Norwich.

      The election was subject to approval by the Pope (which could take months to arrive, but in the interim the business of the diocese could go on under the elected bishop), and the Pope intervened only when an election was disputed. Since the Archbishop of Canterbury was the head of the English “province” of the Church, the other English bishops wanted to be involved in the election. The monks objected, which meant the election was under dispute, and immediately John referred the question, quite properly, to Rome—and the delegation was told to encourage the Pope to appoint John’s candidate, though no real election could take place until the dispute between the monks and the bishops was resolved. When the monks found out that John’s agents were pushing his candidate with the Pope, they secretly elected the head of their chapter, Prior Reginald, as bishop, and sent him on to Rome with a delegation of monks; then denied to John that they had done so, and at a second election chose John’s candidate de Gray, whom they sent to Rome with another delegation. The Pope now had two apparently “elected” bishops; neither John, nor the chapter, nor for that matter the two candidates or their entourages, would back down, and when the Pope told the monks who were part of the two delegations to start over then and there, they deadlocked between the two. The Pope thereupon suggested a third man: Stephen Langton, already a cardinal (this was originally an honor given to those priests of certain parishes in Rome who were the local “chapter” and elected the Pope, aka “the Bishop of Rome”) and a respected teacher of theology at the University of Paris, and the monks from Canterbury agreed. Everyone was happy, except the two disappointed candidates—and the king.

      John had several objections to the Pope’s choice: Langton was a teacher, not an administrator; and though he was English by birth, he had taught for many years in Paris, the heart of the realm of the King of France, John’s enemy. The real reason, I suspect, was that Langton was not someone John knew; and John automatically distrusted anyone he didn’t know, especially if they weren’t dependent on him for power and prestige. John refused to allow Langton into England. The dispute went back and forth for over a year, and at this point, John probably had his barons agreeing with him; most of them were comfortable with the idea that the king “appointed” bishops, since they claimed the right to appoint abbots to monasteries their ancestors had founded, and priests in “their” parishes. In March of 1208, the Pope lost patience and declared England under Interdict. John had known it was coming for some time, and immediately counterattacked. On the basis that if the clergy weren’t going to be doing their jobs—Masses, marriages, burials, and the like—they didn’t deserve to be supported, he revoked all their land holdings. They could pay a fine and get them back, but were required to turn the bulk of the income over to the king for the duration of the Interdict. To add insult to injury, he arrested and held for ransom the female “companions” many priests had in violation of Church law; scandalously, most priests bailed their girlfriends out promptly. Between the two, John got a huge chunk of cash for the treasury immediately and the ongoing profits from Church land for the duration, solving his money problems for quite a while.  This gave him no incentive to resolve the situation. By 1213, the Interdict had dragged on for five long years, despite the Pope upping the ante by allowing Langton to excommunicate the king in 1209. 
Along with personally damning his soul to hell, it made life uncomfortable for a ruler, since it allowed his subjects to ignore their oaths of fealty if they so chose. John remained unmoved; most of the barons stayed loyal, though they were muttering.

      Then John got word in 1212 that Pope Innocent was about to unleash his ultimate sanction: deposing the king. This was a step Popes rarely needed to take, though they claimed the right: it would absolve all John’s subjects of any oaths of loyalty, make null and void any treaties, and such was the political power of the Church at the time that the Pope could possibly make it stick (unlike the later Pope who took the same step against Elizabeth I), usually by encouraging an enemy of the deposed king to invade or empowering unhappy nobles to revolt. John had both. Deposition would encourage not only the barons, but Philip of France, who was considering invading England, and allow him to do so with the Church’s blessing. John caved in, and along with accepting the archbishop he distrusted, he also yielded his kingdom to the Pope and received it back as a fief: in other words, he became officially the Pope’s vassal. This was not an innovation: several other kingdoms already had the same status. It didn’t stop Philip from his invasion plans (ultimately scotched by the destruction of his invasion fleet), but it did buy John a little breathing room. The barons continued to mutter and worry, despite the reconciliation.

      The reasons for their concern were many. If John distrusted people he didn’t know, he could be positively paranoid about people he did know, especially those who owed little or none of their power and prestige to him. The most loyal baron could be suspected on no real evidence. Even William Marshal, who had served three Plantagenet kings, had assisted John to the throne, and was loyal to a fault, was not immune from John’s paranoia. (Marshal himself is a fascinating character: the younger son of an earl, he had started out with only his knighthood, horse and armor, and his talent and skill. He made his fortune originally on the “tournament circuit”, a semi-organized round of competition and the favorite spectator sport of the age; winners were awarded the arms and horses of men they defeated, and could ransom them back to the owner. Marshal did well enough to become the Tiger Woods, David Beckham, or Michael Jordan of his day, earning the friendship of John’s father Henry, and marrying the heiress of the Earl of Pembroke. This gave him lands in Wales, England, Ireland and France, and made him the highest-ranking noble in the kingdom after anyone royal.) John regularly required his nobles to send him hostages to ensure their good behavior, something ordinarily asked of defeated enemies such as William of Scotland or Llewellyn of Wales; that John had hanged his Welsh hostages in July of 1212 in response to another revolt by Llewellyn did not reassure those nobles who had sent hostages to him. John commonly demanded unheard-of amounts from heirs to titles and lands before they could take possession: called “reliefs”, these were expected, but were supposed to be “reasonable”, not the equivalent of a year’s income or more, an amount which could, on the king’s whim, be forgiven, paid in installments (leaving the person in hock to the king and at risk of foreclosure), or demanded up-front.
The last meant borrowing, and the only real sources of credit (since Christians could not charge interest) were the Jewish financiers; interest was high, on a par with some credit cards today or worse, and payments could be crippling. And while John insisted that his barons not dispossess any of their vassals and tenants of lands without a legal judgment, even before the Interdict he had had no such compunctions himself, demanding castles and lands be turned over to him apparently arbitrarily, on the basis of whim, dislike, or profound distrust. John hounded more than one of his earls into exile, and pushed several into bankruptcy.

      Last, there were taxes. Giving John his due, he had started with a problem. Richard had begun his rule with a full treasury; Henry had preferred diplomacy to fighting when he could. Richard, on the other hand, had campaigned year-round, not holding to the usual truces which kept feudal levies home from harvesttime through the nasty weather of fall and winter, into the Truce of God periods (Christmas and Lent through Easter and beyond), which “fortuitously” got the crops in before the fighting season started, and allowed men to go off to war without worrying about next winter’s food as much. Then, too, the men of feudal levies legally could be required to stay in the army only forty days a year, or they had to be paid. The possibility of campaigning year-round required a king to hire mercenaries, which were expensive. Add in the costs to the kingdom of Richard’s Crusade and ransom, and John inherited the throne with a fairly empty Treasury by comparison. Unfortunately, there were few reliably recurrent sources of income for a medieval king: profits and rents from his own lands, of course; and fines and penalties for legal misdemeanors and violations of Forest Law (a special category). Reliefs for inheritances were useful, but sporadic. Taxes were a matter of royal decree, some with the “consent” of the barons: customs duties; taxes on various kinds of items (wine, for instance); general taxes or “aids”, such as a tax on wealth: the Thirteenth I mentioned last time. Lastly, there were the customs of “scutage” and “fine”. Scutage (from the word scutum, Latin for shield) was a particular tax which derived from the feudal requirement that a lord come in person to serve in war for a set period of time, usually forty days, bringing so many knights (the number varied with the size of the fief), with their support personnel, to his lord’s banner when called upon. For many years, it had been acceptable for a lord to pay hard cash, so much per knight per day. 
On top of this, a lord was required to attend the king in person; if he did not want to do so, he had to pay a “fine” to be excused. A baron could recoup all or most of the money for the scutage itself from his knightly tenants, but the fine came out of his own pocket. Henry had asked for eight scutages in thirty-four years; Richard three in ten years (plus his ransom, which was a separate matter). John had so far asked for ten in thirteen years, mostly for the purpose of getting back the lands in France Philip had taken, and had asked for double the rates his father used; mercenaries were getting more expensive. The barons were tired of paying, especially since they seemed to be no closer to regaining the estates (and the income from those estates) some of them had lost when Philip had taken Normandy nearly ten years before.

      When John finally gave in to pressure from the Pope in mid-1213 and accepted Stephen Langton as Archbishop, the barons, from the least prominent landed knight to the highest in the land, Earl William Marshal, were unhappy with their king (to say the least). Some were more unhappy than others: Marshal and many others were unswervingly and uncomplainingly loyal, but there were several who were openly hostile. (Most of those had been exiled, but that could quickly change.) Any of the lords who had attracted John’s suspicion, though, had had to surrender lands or give John hostages for their good behavior. That included most of the great lords of England from Marshal on down, and many of the lesser ones. All of them saw John’s behavior toward the Church during the Interdict and must have wondered if the same tactics would be used wholesale against them. There were rumblings about many things John had done as being against law and custom, and the “good customs” of previous reigns. (Everybody had conveniently forgotten, after fifteen years without him, what an autocratic, imperious bastard John’s father could be.) Circumstances would soon increase the tension.

     


    Magna Charta I: Background to Runnymede

    October 12, 2007

    Originally posted by Mirrim on 09/13/06

    Originally posted at My Left Wing, Feb. 19, 2006. It is somewhat revised and expanded for this site; I purposely kept the length very short over there since I was, at the time, testing the waters.

    As I said when I first posted this at My Left Wing: before we really can know where we’re going, it’s sometimes useful to see how we got where we are. And the earliest really accessible source for the principles that gave rise to our system is the document considered the mother of all our laws: Magna Charta. Notice I said “accessible”. Magna Charta didn’t spring de novo like Athena from Zeus’s head. It was based on a large body of law, some customary (the “common law”), some written and current, and some written though discarded, forgotten, or superseded even at the time (like the Code of Alfred the Great). Though it is frequently cited rhetorically, I wonder how many have actually read the document. 

      There’s an additional reason as well. When I first wrote this for posting at My Left Wing, the parallels between John and our current administration were clear. The past few months have not lessened those parallels, making the background to Magna Charta, and the document itself, even more pertinent.

    Before I speak of Magna Charta, though, it might be worthwhile to take a look at the causes of its production: the quarrel between King John of England and his barons, and even some of the causes of that. This will be a very idiosyncratic look: I’m no expert on Plantagenet England, just an interested amateur. While I hope to get my facts right (anyone who is an expert in the era—like, perchance, weeping for brunnhilde—feel free to correct them!), comments on people’s actions and conclusions will be my own. I’ve expanded this section from my MLW diary to include some of the more distant history involved. There are bits and pieces of 60 years of history here, so please bear with me; it’s all important to understanding Magna Charta. 

      Now, John of England (reigned 1199-1216), even before his reign started, had several strikes against him in the eyes of the English barons, and it only got worse with time. For one thing, he was not his father, and he didn’t come to the throne with his father Henry II’s advantages. Henry (reigned 1154-1189) had been almost literally the last of William the Conqueror’s heirs left standing, and he came to the throne in 1154 after one of those long dragged-out civil wars, usually dynastic but occasionally with other causes, which flared up in England every few generations until the defeat of Bonnie Prince Charlie in 1745. William the Bastard of Normandy had left three legitimate sons. His eldest, Robert, got Normandy. His second, William Rufus, became King of England. The third, Henry I, grabbed the throne of England when William was killed before Robert could move to claim it, married a descendant of Alfred the Great, and held onto it…eventually taking Normandy as well after defeating Robert in battle in 1106. The Conqueror also had a daughter, Adela, who married a French nobleman; their son was Stephen of Blois. Unfortunately, Henry I’s son also died before he could inherit—leaving only Henry’s daughter Matilda, who had married first the Emperor of the Holy Roman Empire (thus she is sometimes called “the Empress Matilda” or “the Empress Maud”) and then, seven years before her father’s death, Geoffrey called Plantagenet from the broom plant (planta genista) he used as a badge. To the consternation of some of the Norman nobility, Henry designated Matilda his heir, and she claimed the throne as “lady of the English”.

      No queen had ruled England—or France, for that matter—in her own right, ever (and none would until Henry VIII’s daughter Mary). Despite their misgivings, some of the nobility backed Matilda anyway, considering the daughter of the last king to have the better, or at least dynastically closer, claim; others backed Stephen, who had the weaker dynastic claim as the son of William the Conqueror’s daughter, but at least was male. The fighting went on for nearly 20 years, on and off; Matilda’s husband Geoffrey gained her Normandy, and she (and later her eldest son Henry) fought Stephen in England. Fans of the Brother Cadfael series, book or PBS-TV movies, will recognize the political scene here: this is indeed the background of those stories. The war only ended when, about a year before Stephen’s own death, his son Eustace drowned, and he was forced to recognize Matilda’s son as his heir. Henry was an energetic and effective king, intelligent and cunning, and too many people remembered the Bad Old Days of the civil war to cavil at his being autocratic, collecting taxes and other income with efficiency, reclaiming royal lands (and then some) as his own, demanding that castles built during the days of civil war be torn down, and generally reining in the barons. Even his quarrel with, and implication in the death of, Thomas Becket Archbishop of Canterbury in 1170 did not lead to significant unrest in England; indeed, until his sons were old enough to chafe at their father’s refusal to give them any real share in power the realm was relatively peaceful. 
Part of John’s problems with his barons may well stem from his attempts to rule as his father did; but while in John’s day the king was still the “absolute” ruler, combining functions of government we would consider to be separate “branches” in his single person, there had been peace enough for long enough when he came to the throne that the barons may have been less likely to accept behavior from John that had been routinely tolerated from Henry.

      For another thing, John was not his brother. Richard Coeur-de-Lion (reigned 1189-1199), who inherited from their father, had been respected, even lionized (pun intended) by his nobility and people. Richard had been tall, blond, handsome, athletic, and a true hero: a leader of the Third Crusade, eager to respond to anyone’s incursions on his lands, or rebellious barons, with force, and looked every inch a king. John, if the accounts are true, was shorter, dark-haired, not particularly handsome, pudgy as he got older, and had more of a reputation for intrigue and treachery than warfare. Richard was not a total slouch at intrigue or treachery either, though he didn’t have the reputation for it his brother earned; none of the sons of Henry II were. After all, they had spent their early years of manhood, all of them (Henry the Young King, Richard, Geoffrey, and John), fighting their father in various combinations and with various other allies (including their mother, the already legendary Eleanor of Aquitaine) adding to the fun. Fraternal love, and even respect for their father, generally took a back seat to personal gain. But John, unlike his brothers Richard and Geoffrey, had little reputation as a fighter or a leader of men. Richard, even before he left on crusade soon after taking the throne of England and the ducal seat of Normandy (and all the other lands belonging to his father and his mother), was one of the greatest knights of his time, with a reputation as a fighter and leader of men that few could equal.

      Which brings up another reason John was unlike his brother: Richard, King of England, was almost never there. Even when he wasn’t on crusade or held for ransom (and those two filled up the first five years of his reign!), Richard preferred his territories in France, and especially his mother’s Aquitaine. (Why? Maybe the food was better, the weather was better, people drank wine instead of beer. Seriously, the fighting was there, and his French nobles were more overtly rebellious; Richard died at the siege of a castle belonging to one of his vassals.) This left the barons freer to bicker amongst themselves. In the whole of his ten-year reign, Richard spent about six months in England. Nor, because of this, could Richard be the administrator John was, and their father had been; possibly in an attempt to get every penny owed him in taxes (and as much more as he could squeeze), John oversaw a reorganization of the royal chancery and administration that enabled him and his clerks to keep much better track of taxes owed and collected. This did not endear him to his barons, nor did his habit of calling for new taxes without their input. He assigned to himself, or occasionally to his supporters, another lucrative source of income, called “wardships”. Legally, a minor child or an unmarried woman who was not a widow (and not even all widows were exempt) could not control inherited property. Both groups required guardians, and those guardians had the right to take for their own use a “reasonable” portion of the profits from the property. John became notorious for assigning those wardships to himself, and even refusing to allow heiresses to marry (since the profits would now go to their husbands). There were many other legal complaints of this nature.

      Last but not least in the barons’ eyes, John had a reputation as a lecher, unlike his brother. Richard may not have been homosexual, as modern lore would claim. His appetite for women was never as strong as John’s, though. His production of acknowledged bastards (one) was clearly sub-par for the time. He married the most beautiful woman in Europe—and then parked her on Cyprus while he fought Saracens in Palestine. The marriage was childless, though it probably was consummated, failure to “perform one’s husbandly duties” at least once being one of the few grounds for annulment women had. John, on the other hand and if the stories are true, would force himself on anyone reasonably attractive, even if she were married to one of his nobles— especially if she were married to one of his nobles. No wife or daughter was safe, according to the tales. Whether it was simply lust and the power to indulge, or whether it was a deliberate attempt on John’s part to assert his power and humiliate his nobility by raping their wives and daughters is impossible, of course, to tell—as is the truth of the stories, since some of the noblemen supposedly wronged by John weren’t exactly “good citizens” themselves and had every reason to lie.

      John’s reputation wasn’t helped by his earlier attempt to grab power after engineering (with the help of Philip, King of France and Henry, Emperor of the Holy Roman Empire) Richard’s capture and imprisonment on his way home from the Crusade. This power grab was foiled by Richard’s Chancellor, Longchamp, backed by Richard’s Regent: Richard and John’s elderly but still formidable mother Eleanor of Aquitaine. If this all sounds a bit familiar, it should be: it’s the historical background behind most of the Robin Hood stories. Philip’s grievances are too complex to go into here: but among other reasons, he had a long history of encouraging quarrels among the Plantagenets—and picking up any bits of their territory he could while they were busy with each other. Richard he feared and respected on the battlefield; John he did not. For almost three years, while the ransom for Richard was collected, Philip felt free to help himself, bit by bit, to Plantagenet territory, a habit he continued with much more enthusiasm after Richard’s death. This ultimately led to another major grievance the English barons had with John: Philip by 1204 had dispossessed him of all his lands in northern France and for the next ten years John demanded their help—and their money—to finance wars to get them back.

      Philip was encouraged by the consternation caused by John’s treatment of the other heir to the throne, behavior not reassuring, legal, or even truly acceptable by the standards of the age; and it was not one of John’s better political moves. Arthur, Duke of Brittany, was the son of John’s and Richard’s brother Geoffrey; since Geoffrey was older than John, the argument could be made, by strict primogeniture, that Arthur’s claim was the better one, despite his being Henry’s grandson. Despite their differences over the years, though, Eleanor, always the practical politician, preferred her son to her grandson, and threw her support to John. Ultimately, John managed to capture Arthur, who was besieging his grandmother at the time (standard Plantagenet family politics again; they make Dallas or Desperate Housewives look tame), whereupon he tossed his fifteen-year-old nephew into one of his castles, and the young man was never seen again.  Nor were a lot of John’s other political prisoners over the years, but this one gave Philip legal cause for officially confiscating, and then actually conquering, John’s northern French territories. Again, it’s a bit complicated: John had seriously violated the medieval rules of war by not setting a ransom for his nephew after his capture and allowing him to be released after its payment. He was in violation of the oath of fealty he owed Philip by attacking another of Philip’s vassals (Arthur, Duke of Brittany); while that may seem a bit strange, as Duke of Normandy and holder of other territories within the realm of France, John (as his father and brother before him) was a vassal of the King of France for those territories, as Arthur was for Brittany. Never mind that the King of France had no real authority in those areas; the legal principle still held, and Philip was within his rights to object to Arthur’s treatment and declare John dispossessed.

      Then there was the little matter of the Interdict. It’s difficult for the modern person to appreciate what a big deal this really was. John pissed off the Pope thoroughly by refusing to accept the monks of Canterbury’s election, at the Pope’s suggestion, of Stephen Langton as Archbishop, since John had his own candidate in mind. (I’ll relate this story in more detail in a later installment.) Now, I assume most here have heard of excommunication; Pope Innocent eventually excommunicated John, too, but that is a punishment which applies primarily to one person, denying him or her attendance at Mass and the Sacraments of the Catholic Church (which was the only game in town then), and condemning his or her soul to Hell (unless atonement is made). It’s a personal damnation. Interdict condemned an entire country to hell, usually for the sins of its ruler. It’s clearly not useful in an era with religious alternatives, but in Europe’s Middle Ages it meant: no Masses for the people to attend, no marriages or Christian funerals (baptism could still occur), no forgiveness of sins. In an era in which Heaven and Hell were known as real places, not metaphors, this terrified even many of the nobility. Henry had seen England threatened with Interdict over Thomas Becket’s death, and had only escaped it by swearing he hadn’t actually ordered the killing (despite the famous outburst taken as an order by four of his knights), performing an expensive penance, and vowing to go crusading (his penitential scourging by the monks of Canterbury took place much later). John seems not to have cared as much: whether he simply was less of a believer than many, or felt he was right, by God, and the Pope wrong, is impossible to tell.

      So the barons felt they had many causes for complaint, even before the precipitating events leading to the writing of the Great Charter and the meadow of Runnymede…and that is my story for next time.
     


    History for Kossacks: Slavery comes to America (Special Guest Edition)

    October 7, 2007

    Originally posted by Aphra Behn on 11/20/06

    Greetings Cave Dwellers! Welcome back to another guest edition of History for Kossacks, and our series on the history and roots of American slavery. In Part One, Unitary Moonbat gave us slavery in the ancient world; last week in Part Two, we continued with a peek at Roman and Ottoman slavery and the serfdom of medieval Europe. Tonight, our story continues in Africa and the Americas. Join me, aphra behn, your guest host and humble lady scribbler, in colonial Virginia…


    In 1619, a ship named the White Lion arrived at Jamestown, in the English colony of Virginia. From it came “20 and odd” human beings recorded as “Negars,” Angolans purchased from slave traders, who were destined to become the first slaves in English America.

    Or were they? The men were recorded as “Servants,” rather than slaves. Did they enter into a system of indenture? What were the English doing with Angolan slaves anyway? The answers lie in Portugal, Africa, and South America…

    Sugar and Slaves
    In 1400, Africa was home to a wide range of kingdoms, countries, and other political and social organizations. North Africa had long ties with Europe; quickly brought into Christendom in the Roman period, it had produced more than one pope, an emperor, several saints, and perhaps the most important father of the early Church: St. Augustine. In the Muslim era, North Africa served as a connector between Muslim Iberia and the great Muslim kingdoms of the Middle East. These connections helped nourish a vibrant trans-Saharan trade in a wide variety of goods, including gold, ivory, and slaves. In East Africa, this trade focused mainly on the Middle East.

    In West Africa, trade linkages ran increasingly to Western Europe. The Portuguese began trading directly with Africa in the 1400s, after advances in maritime technology (like the stern rudder and a new system of sails that allowed for more flexibility than the old one-sailed vessels) allowed them to sail all the way to West Africa and beyond. They traded for many kinds of valued goods, especially gold. They also bought a few slaves, beginning in 1441, when the first African slaves were brought directly to Lisbon.

    They were not the first African slaves in Europe; as slaves and servants (depending on local laws), small numbers of non-Christian Africans had long made up a  small portion of the serving classes of Europe. But the Portuguese were about to embark on the large-scale importation of African slaves, to be used in agricultural labour. Why did they need these workers?

    The answer lies in the Atlantic voyages that the Portuguese were undertaking simultaneously with their attempts to sail around Africa (and thence to Asia…but that’s another diary). The Portuguese laid claim to the Azores in 1427, and quickly set up agricultural colonies on those islands and on Madeira through the 1430s and 1440s. Their wine production was impressive, but their sugar production even more so.

    Sugar is so common in our lives today that it’s hard to believe that it was a luxury commodity to most Europeans in the 1400s and 1500s. It had been introduced to Iberia by the Berbers and brought to much of the rest of Western Europe by Crusaders. While Iberia produced a significant amount of sugar in Andalucia and the Algarve, Portugal and Spain both hoped to expand their production in tropical colonies in Africa–or in the West. When Columbus found the “Indies” in 1492, his Spanish masters hoped not only that he would find gold and silks, but perhaps a profitable spot to expand the sugar business. In fact, he brought with him cuttings of sugar cane from the Canary islands.

    (Historiorantrix: The sugar was allegedly a gift from his SWEETheart, Beatriz de Bobadilla, the local landlady…SWEETheart, get it? har har. I slay me.)

    Sugar cane requires careful attention during its growth, and requires significant amounts of processing. Further, the crop exhausts the land fairly quickly, necessitating that new land be cleared. It’s not the sort of work that many labourers would voluntarily choose. And so the Portuguese became the first of many European nations to turn toward unfree labour in the production of sugar. As non-Christians (mostly Muslims or practitioners of various animistic faith traditions), these Africans were considered fair game for enslavement according to the old dictates of the Christian Church. (Weird Historical Sidenote: Meanwhile, Christian Africans could be sold to Muslim markets…the Africans really could not win.)

    The Portuguese would continue to dominate the slave trade through the 1500s, trading the first slaves into the Americas in 1518. The Dutch achieved ascendancy in the 1600s, and by the 1700s the British were using their naval superiority to control the trade.

    It’s no exaggeration to say that slavery (alongside indenture) made our current Western addiction to sugar possible. From the BBC’s Story of Africa series:

    Two thirds of all slaves captured in the 18th century went to work on sugar plantations. This reflected the enormous demand for sugar in food and drink at the time. In the 16th century a pound of sugar in Britain cost the equivalent of two days wages for a labourer. By the 17th century the price of sugar fell by half. In the space of 150 years sugar consumption in Britain rose by 2500 percent. By the late 1790’s what had been a luxury only enjoyed by the aristocracy was part of the diet of poor families in Britain. Sugar’s cheapness in the 18th century was made possible by slave labour.

    Slavery in Africa
    So why would the rulers of West Africa wish to sell their fellow men into this brutal business? It’s important to understand that slavery within Africa was quite different from its New World counterpart. In most of West Africa, slaves were a sign of power and wealth, seldom traded for financial gain. Slavery in West Africa might better be termed serfdom, so similar was it to that European institution of half-freedom that we discussed last week; we might also compare it to indentured servitude (which we’ll get to below). As PBS’s Africans in America put it:

      In the Ashanti Kingdom of West Africa, for example, slaves could marry, own property and even own slaves. And slavery ended after a certain number of years of servitude. Most importantly, African slavery was never passed from one generation to another, and it lacked the racist notion that whites were masters and blacks were slaves.

    The initial appearance of European traders in West Africa did little to change that. However, as demand grew from Portuguese and Spanish traders to supply the vast labour needs of Iberian colonies overseas, African slave-trading shifted to become an important source of profit for many West African kings. The Ashanti Kingdom of modern Ghana had long held the tradition of household slaves, but had viewed gold as its main trade commodity in 1500; by the early 18th century, it, like many African kingdoms, had begun concentrating on slaves as its main export. As one European slave trader put it:

    Concerning the trade on this Coast, we notified your Highness that nowadays the natives no longer occupy themselves with the search for gold, but rather make war on each other in order to furnish slaves. . . The Gold Coast has changed into a complete Slave Coast. 

    – William De La Palma
    Director, Dutch West India Co.
    September 5, 1705


    The Kingdom of Dahomey (in the modern country of Benin) led the way in waging war on its neighbors in order to meet the demand and reap the profits–profits often paid in European-style firearms which helped perpetuate Dahomean military dominance. One of the many tragic side-effects of this, however, was the growing dependence of the Dahomean economy on slavery. As King Gezo put it in the 1840s:

    “The slave trade is the ruling principle of my people. It is the source and the glory of their wealth…the mother lulls the child to sleep with notes of triumph over an enemy reduced to slavery…”

    The slave trade made some African kingdoms very wealthy and weakened others, encouraging Portugal and, later, other European nations to set up client-kingdoms and even outright colonies on the coasts. Although this interference did not become outright control until the 19th century, it proved devastating for some nations, such as Angola, which fought both the Portuguese and the Dutch for control of its rich slaving lands. And the persistence of the slave trade late into the 19th century, long after it was outlawed by Great Britain, actually became a justification for late-19th-century imperialism. It is highly ironic that the European powers, whose demand for slaves had helped to alter the African slave trade so fundamentally, then used its continued existence as a justification for the “Scramble for Africa” of 1870-1900.

    The Middle Passage
    By the end of the 1700s, perhaps 70,000 people a year were being enslaved and boarded onto ships destined for the Americas. What is now Angola was ravaged by rival slaving parties and slave-driven wars that devastated large swathes of the countryside. At minimum, 12 million Africans were enslaved over the course of the trade. African slave-merchants kidnapped isolated adults or children, or took war captives to sell at the coast. Potential slaves had to walk in slave caravans to the European coastal forts, perhaps 1,000 miles away. Poorly fed and chained together, many died on what can only be called “death marches.” Those who could not keep up were killed or left to die. At the trading posts or “slave fortresses,” they might wait in a dungeon for months or even a year to be sold in humiliating fashion to prospective transatlantic traders.

    Prospective buyers looked for signs of strength and endurance; different ethnic groups were believed to have different qualities, and, over time, slave traders developed elaborate practices for evaluating and selecting slaves. Books were published in Europe giving advice as detailed as licking a slave’s sweat to judge his health. It is difficult to imagine learning to grade human beings like cattle, but slave traders did so all the time.

    Once purchased, slaves were crammed into slave ships that had just been emptied of firearms, gunpowder, brandy, and other trade goods. For the voyage back to the Americas, they might be packed 300-400 to a space with 5 feet of headroom and no room to move around or fully lie down. Many were confused and disoriented, unable to communicate with their slavers. Those who could communicate (many sailors spoke African pidgin, a mixed language of the West African coast that some African sailors, traders, and fishermen also spoke) could hardly believe what they were being told. They were destined for agricultural labour? After all, in Africa, agriculture did not need so many people. What were their kidnappers hiding? Ex-slave Olaudah Equiano, who published his memoirs in the 1790s, gave a possible explanation that many slaves might have believed: cannibalism.

    When I looked round the ship too and saw a large furnace of copper boiling, and a multitude of black people of every description chained together, every one of their countenances expressing dejection and sorrow, I no longer doubted of my fate; and quite overpowered with horror and anguish, I fell motionless on the deck and fainted. . . . I asked if we were not to be eaten by those white men with horrible looks, red faces and long hair? — The Interesting Narrative of the Life of Olaudah Equiano


    This terrifying journey, called the “Middle Passage,” with its disease, stench, and despair, prompted many Africans to try to take their own lives. Slavers tried to watch for this, and would force-feed those intent on starvation. From all causes combined, it’s estimated that about 20% of slaves died on this horrible journey. Was it worth it to import all this labour? Couldn’t the earliest Europeans in the New World have found slaves, servants, or serfs closer at hand? In fact, the Spanish enslaved Native peoples long before they enslaved Africans. Let’s take a quick look at this institution of slavery from a Native American perspective.

    Unfreedom in Mexico and Spanish America
    Amongst the Aztec, slavery ultimately derived from the practice of taking captives during war. Slavery was not inheritable, and slaves could own possessions and buy their own way out of servitude. People might be sentenced to become slaves as a form of punishment for crimes such as murder or disobedience to one’s parents. (Historiorantrix: That’s gotta be at least as good as the bogeyman for threatening your kids with.)

    In order to provide labor for their exploits throughout the Americas, the Spanish conquistadors enslaved many of the native peoples to work on plantations or in the gold and silver mines they developed. Their justification was the “encomienda,” a grant by the Spanish crown that gave conquistadors the right to the labor of the people on lands the conquistadors claimed. If that sounds familiar to you, it should: it was essentially the logic of serfdom, transferred to the Americas. The difference was that the conquistadors did not actually get the land, just the right to control the labour of a certain area.

    In practice, the early years of this slave experience were horrific. Many conquistadors of the early generation wanted to make a quick profit in the New World. The worst of these men demanded heavy taxes, worked slaves to death, and generally treated the poor Natives as less than human. The horrors of this era resulted in numerous protests, notably from the clergyman Bartolomé de las Casas, about the brutality practiced by conquistadors toward the Natives:

    There was a custom among the Spaniards that one person, appointed by the captain, should be in charge of distributing to each Spaniard the food and other things the Indians gave. And while the Captain was thus on his mare and the others mounted on theirs, and the father himself was observing how the bread and fish were distributed, a Spaniard, in whom the devil is thought to have clothed himself, suddenly drew his sword. Then the whole hundred drew theirs and began to rip open the bellies and, to cut and kill those lambs-men, women, children, and old folk, all of whom were seated, off guard and frightened, watching the mares and the Spaniards. And within two credos, not a man of all of them there remains alive.

    The Spaniards enter the large house nearby, for this was happening at its door, and in the same way, with cuts and stabs, begin to kill as many as they found there, so that a stream of blood was running, as if a great number of cows had perished.

    ….The cleric, moved to wrath, opposes and rebukes them harshly to prevent them, and having some respect for him, they stopped what they were going to do, so the forty were left alive. The five go to kill where the others were killing. And as the cleric had been detained in hindering the slaying of the forty carriers, when he went he found a heap of dead, which the Spaniards had made among the Indians, which they thought was a horrible sight. When Narvaez, the captain, saw him he said: “How does Your Honor like what these our Spaniards have done?”

    Seeing so many cut to pieces before him, and very upset at such a cruel event, the cleric replied: “That I command you and them to the devil!”  — Bartolomé de las Casas: A Selection of His Writings (New York: Knopf, 1971), via Norton’s website

    Spanish Slavery in Theory and in Practice
    These practices were out of line with Spanish law, which contained provisions for slaves inherited from its old medieval codes. The 13th-century Siete Partidas treated slaves as humans, guaranteed protection from abusive masters, and even made provision for slaves to testify in court against them. A drive to bring colonial practices in line with these principles resulted in the “New Laws” of 1542, which attempted to regulate the encomienda system and moderate the treatment of slaves. They were unpopular with the encomenderos (holders of encomienda), who openly revolted at the attempt to impose these laws in Peru. The gradual introduction of the hacienda system, which gave rights to landowners, helped institute a system of slavery that looked more and more like serfdom.

    Key to this development was the insistence of the Spanish Church that slaves be Christianized. While forced mass conversions may seem repugnant to us—something out of Focus on the Family’s Christmas wish list (along with a pony, a plastic rocket, and a protestant Papacy for James Dobson)—it was a sort of progress in the 16th century. Becoming Christian guaranteed slaves certain basic human rights in Spanish society. On a human level, Christianization also provided recognition that the Native Americans were human beings, possessed of a soul. Furthermore, by bringing the Indians into the Christian fold, the church *in theory* guaranteed that family units would be preserved intact; marriage was a sacrament in the Catholic church and could not be broken by masters. In theory.

    Despite the apparent legal advantages enjoyed by these slaves, it is worth noting that Latin American slavery could be incredibly brutal, especially on large sugar and coffee plantations in the West Indies and Brazil, where masters, unchecked by the eyes of the Church or their neighbors, practiced brutal floggings and often expected slaves to produce their own food in their free time. The tragic death of Native peoples from disease and overwork was a terrible blow to Iberian hopes of using the Native Americans as their primary source of labour. And so the Spanish and Portuguese colonial powers became the first to import large numbers of African slaves for heavy field work in the 1500s.

    The mixed system of unfree labour included both Native and African slavery. Native American slaves usually worked on farms (haciendas) and in households. Ladinos, skilled workers who had come from Spain, were given the greatest freedom and might even work apart from their masters. Most oppressed were African bozales, who did heavy labour on plantations and in mines. Relatively few African women were imported, keeping the African-descended birth rate quite low and necessitating the constant importation of new slaves.

    It is estimated that of all the slaves who left Africa over the course of the Atlantic slave trade, 60-70 percent ended up in Brazil or the sugar colonies of the Caribbean, where slaves might make up 80 to 90 percent of the population. Of course, it’s also worth noting that under Spanish rule there was considerable mixing between Africans, Natives, and Europeans, so much so that elaborate racial classifications sprang up—mulatto, mestizo, octoroon, quintroon, and the like–all of which identified a person as a separate “race” based on precise admixtures of European, Native, and African blood.

    The English in North America…make that Ireland
    Compared to the Spanish, the English were certainly latecomers to the American colonization business. After several abortive colonization attempts, they established a settlement at Jamestown, in the Virginia colony. But that didn’t mean the English had not been about the business of planting colonies. Far from it. They were busy in Ireland, where Henry VIII (of several wives fame) had instituted a policy under which Irish Lords (of either English or Gaelic descent) could gain Royal protection by surrendering their lands to the crown and then swearing loyalty, getting their lands re-granted in the process.

    Henry’s aim was gradually to transform Ireland into an English-style kingdom. His daughters attempted to continue this process by “planting” colonies of English farms, which the wild Irish were supposed to look at, admire, and then imitate. It didn’t work that way. The plantations were subject to warfare from disgruntled chiefs and outright rebellion in some cases. The failed Ulster plantation, for example, descended into civil war and mass murders in the 1570s. (Historiorantrix: That only counts as a “successful” colony if you’re Donald Rumsfeld.)

    So, in the 1580s the English shifted their tactics, settling new plantations around extensive fortifications and importing reliable Protestant settlers from England (and, later, Scotland). They did not trust the native Irish, and in fact expected trouble, disloyalty, and rebellion. English policy encouraged watchful separation, not intermixing and “civilization” of the natives. When Ulster again erupted in rebellion in 1598, English settlers fled their fields and retreated to fortresses. It was not an atmosphere that built trust, nor did it make the Irish seem like reliable labourers.

    This helps to explain why the English in North America made relatively little use of Native Americans as labourers, and built large fortifications, mainly keeping themselves separate from the (obviously untrustworthy and rebellion-prone!) Native Americans. Those private persons sponsoring “plantations” (the 17th-century English term for colonies) in Ireland were forbidden from taking Irish tenant farmers; it is little wonder that the English in America did not plan to set up a hacienda-style system with Native peoples labouring on their farms. Instead they shut themselves away from their Native neighbours, always on the lookout.

    Slavery and the First Nations of North America
    When the English did practice slavery on their Native American neighbors, it was most often in retaliation for war; after many major colonial conflicts, Englishmen sold off those Native leaders they deemed responsible for uprisings and rebellion. This was not totally unlike the Native approach. Among many of the First Nations of eastern North America, captives taken in war might face several fates, ranging from ritual torture (as a means of spiritually avenging the wrongs of fallen warriors) to adoption into a family, perhaps to take the place of the fallen. Somewhere in between, they might become unfree labourers bound to a particular person or family; these tended to be outcasts with few rights. These practices continued amongst the First Nations of the Eastern Woodlands long after contact with Europeans, and in fact English colonists sometimes found themselves enslaved after being taken prisoner in war. Puritan Mary Rowlandson of Massachusetts was so enslaved during King Philip’s War (1675-1676), and recorded her experiences among the Wampanoag in detail:

    This morning I asked my master whether he would sell me to my husband. He answered me “Nux,” which did much rejoice my spirit. My mistress, before we went, was gone to the burial of a papoose, and returning, she found me sitting and reading in my Bible; she snatched it hastily out of my hand, and threw it out of doors. I ran out and catched it up, and put it into my pocket, and never let her see it afterward. Then they packed up their things to be gone, and gave me my load. I complained it was too heavy, whereupon she gave me a slap in the face, and bade me go; I lifted up my heart to God, hoping the redemption was not far off; and the rather because their insolency grew worse and worse..–Mary Rowlandson, The Narrative of the Captivity of Mary Rowlandson

    (Weird Historical Sidenote: Rowlandson, like other New England Puritans, especially feared being sold to rival European colonists in New France. Worse than servitude to Native Americans was the prospect of becoming French and Catholic! And talk about payback: at the conclusion of King Philip’s War, the English colonists of New England sold Philip’s family and hundreds of his tribesmen into slavery in the Caribbean.)

    The English enslaved Native Americans on a regular basis in the Carolinas, where a brisk trade in captured Native Americans was in place by the 1730s. Yet the practice was never as widespread as African slavery was later to be, for two important reasons. First, Native peoples could run away fairly easily and make their way back home. Second, they seemed to have a very high death rate when enslaved, dying both from close exposure to European diseases and from ill-treatment by their masters. The English relied on other means of bringing large-scale unfree labour into their colonies…and I’m not talking about race-based enslavement of Africans.

    Indenture
    If you learned about indentured labour in school, you may have heard that it was a great way for prospective colonists to make their poor-but-humble way across the Atlantic. An indentured servant would sell his labour to a master in return for passage. For seven years or so (contracts usually ran four to seven years but other lengths of time were possible), the servant worked  in his or her master’s fields or house. At the end of that time, s/he might receive land, gifts, and other lovely things as s/he became a free member of society.

    And *sometimes* it actually worked that way.

    As often as not, however, indentured servants might be kidnapped, impressed, or otherwise bamboozled into making the voyage to the New World. (Historiorantrix: see, Bush’s America, military recruiting in.) Especially in the early part of the seventeenth century, Virginia and the Caribbean had a reputation as malaria-infested death traps; a servant was actually more likely to die than to survive indenture! In some cases, criminals, orphans, the homeless, and “wandering vagabonds” might be “sold” to the enterprising Virginia Company or other colonial joint-stock companies in order to help with the labour shortage. (Historiorantrix: I told you last time that you’d better have a pass if you were wandering around Stuart England!)

    The Irish got hit especially hard with indenture. 17th century Irish rebels and “troublesome” peasants might be sold into lifetime indenture as a punishment; after Oliver Cromwell re-conquered Ireland in 1649-51, thousands of Irish were sold to colonies in the Caribbean. They are often referred to as “slaves,” and in light of their legal status that’s pretty much correct. It was not an inheritable slavery, however, and, if pardoned, they might live as free men and women. Few achieved such status–they died or rebelled, leading to worse punishments.

    It is not surprising that these indentured servants often provided less-than-co-operative labour. One group of servants bound for Virginia found themselves shipwrecked on Bermuda in 1609. Discovering that it was an island paradise requiring little labour to survive, the settlers and the working sailors of the ship, the Sea-Venture, mutinied against their masters in the Virginia company, refusing to leave Bermuda. Although they were eventually forced to Virginia, ponder this:

    …during their forty-two weeks on the island, sailors and others among the “idle, untoward, and wretched” had organized five different conspiracies against the Virginia Company and their leaders, who had responded with two of the earliest capital punishments in English America, hanging one man and executing another by firing squad to quell the resistance and carry on with the task of colonization. —Marcus Rediker and Peter Linebaugh, The Many-Headed Hydra: Sailors, Slaves, Commoners, and the Hidden History of the Revolutionary Atlantic

    Weird Historical Sidenote: Didja know Shakespeare was an investor in the Virginia Company? The Tempest may be based partly on his knowledge of this incident and other reports of the islands of the New World.

    Getting indentured servants to work obediently was not easy.  And why should it be? Many of the indentured servants did not want to be there. Disease, privation, violence from their fellow servants and from masters were bad enough. Add in the constant threat of Indian attack and you have a recipe for disaster. As one servant wrote his parents in 1623:

    This is to let you understand that I your child am in a most heavy case by reason of the country, [which] is such that it causeth much sickness, [such] as the scurvy and the bloody flux and diverse other diseases, which maketh the body very poor and weak. And when we are sick there is nothing to comfort us; for since I came out of the ship I never ate anything but peas, and loblollie (that is, water gruel). As for deer or venison I never saw any since I came into this land. There is indeed some fowl, but we are not allowed to go and get it, but must work hard both early and late for a mess of water gruel and a mouthful of bread and beef. A mouthful of bread for a penny loaf must serve for four men which is most pitiful. [You would be grieved] if you did know as much as I [do], when people cry out day and night – Oh! That they were in England without their limbs – and would not care to lose any limb to be in England again, yea, though they beg from door to door….

    …And I have nothing to comfort me, nor is there nothing to be gotten here but sickness and death… —Richard Frethorne, 1623, at History Matters

    Africans in English America: Servants or Slaves?
    So if we return to our “20 and odd” Angolans who landed in Virginia in 1619, we should not be blinded by their “servant” status. Perhaps they were not slaves. Perhaps they worked in the fields on an equal footing alongside the English and Irish servants who laboured with them—but that did not mean they were “free.” Rather, it meant they were equally likely to be starved, beaten, and severely punished for the slightest infraction. Worst of all (from the owner’s perspective) was the problem of running away. The typical English method of dealing with runaway servants (in addition to whipping, of course) was additional time on a sentence: servitude might be extended by a year, two, three, four, or many more…not something most servants could bear thinking about.

    The Africans, however, possessed a key trait that their English and Irish counterparts lacked: they were much more resistant to hot-weather diseases. Northern Europeans, little used to heat, malaria, and other nasty tropicana, succumbed to infection and heatstroke far more often than Africans. This meant that a few African servants might do more than just survive. They might be freed and eventually prosper in Virginia. The first of these, one Antonio Johnson, arrived as a “servant” in 1621. It is unclear what kind of status Antonio Johnson had—was he more of a “slave” as we might understand it, or a servant?

    Whatever his status, he worked producing tobacco, which emerged in the 1610s as Virginia’s major cash crop–not as profitable as the later cotton, but good enough to justify the colony’s continued existence. He survived the harsh labour and Indian attacks, and was even able to wed one “Mary, a Negro.” At some point, they were both freed–the first free blacks recorded in English America. Records show them farming on their own in the 1640s; by the 1650s, they had acquired 250 acres…and several slaves or servants of their own. But a fire in 1653 devastated them and their four children; the Johnsons sold the land and moved to Maryland in 1665, where they rented 300 acres. When Antonio died in 1670, Mary took over the lease and renegotiated it to a 99-year term. Their sons inherited the land and continued to farm.

    A curious addendum to the Johnson story provides us a clue about the changing perceptions of Africans held by English colonists in 17th century Virginia. In August of 1670, several months after “Anthony” Johnson’s death, a jury in a Virginia court decided that ownership of the 250 acres formerly owned by Johnson should be escheated, or reverted, to the crown of England. The reason?

    [the jury]…doth declare that the said Anthony Johnson lately deceased in his life tyme was seized of fifty acres of land now in the possession of Rich. Johnson in the County of Accomack aforesaid and further that the said Anthony Johnson was a negro and by consequence an alien and for that cause the said land doth escheat to this . . .PBS’s Africans in America

    Hmmm…so a “negro” was now automatically an “alien.” Sounds like something’s up, and not something good.

    In fact, the middle and later seventeenth century saw shifts in Virginia slavery; by 1640, at least one African servant had been declared a “slave”—forced to endure lifetime servitude in punishment for running away. No European servant in Virginia ever received such a designation (although as we’ve seen, something similar was assigned to the Irish in the Caribbean). This slave seems to have been non-Christian, which may have accounted for his status, but the connection between slavery and skin colour was not far off.

    For English slavery did not exist in a vacuum. Although England had no formal statutes relating to slavery, and although its system of serfdom had faded away fairly early in the medieval period, English traders interacted with Spanish, French, Dutch, Portuguese, and other traders quite extensively. Their world was one in which unfree labour was cheap—and a lot of it was enslaved and African. The first English colony in the present U.S. to pass slave-related statutes was Massachusetts in 1641; Connecticut followed in 1650. While slavery was never extensive in these northern colonies, slaves/servants served as labour on small farms and as household servants.

    (Weird Historical Sidenote: One New England slave is somewhat famous to us: Tituba, the West Indian woman who served the household of Minister Samuel Parris during the Salem witchcraft crisis in 1692. Accused of witchcraft herself, Tituba confessed that she had learned the trade from “her mistress” in the West Indies, alluding, perhaps, to the practice of African folk-magic amongst white and black alike in the Caribbean. Or she may have meant something else entirely; some scholars argue that Tituba was Native, not African. Whatever she was, she certainly wasn’t free, and it seems likely that her “confession” was coerced by her master through whipping and other violent punishment.)

    The first statute dealing with slaves was passed in Virginia in 1661; it referred to the punishment of white servants who ran away with black slaves. A 1662 law stated that children would be born bonded or free according to the status of the mother. And, as we saw in the case of Anthony Johnson, “negros” might be defined as foreigners. Despite these changes, Virginia’s courts might side with Africans, who still enjoyed some equal protection under the law:

    Philip Cowen Case: At her death in 1664, a Mrs. Amye Beazlye left to her cousin a black servant named Philip Cowen. The will stated that Cowen should work for the cousin for eight years, then be given his freedom and three barrels of corn and a suit of clothes. At the end of the eight years, the cousin extended the contract three years. At the end of those three years, he informed Cowen that another nine years of service was due. In 1675, Cowen petitioned the court for his freedom. The court sided with Cowen, asking the owner to release him from servitude and to pay him the corn and the cost of a suit.
    ….
    Elizabeth Key Case: The illegitimate daughter of an enslaved black mother and a free white settler father, Elizabeth Key spent the first five or six years of her life at home. Then in 1636, ownership of Elizabeth was transferred to another white settler, for whom she was required to serve for nine years before being released from bondage. At some point, ownership was transferred again, this time to a justice of the peace. When this owner died in 1655, Elizabeth, through her lawyer, petitioned the court, asking for her freedom; by this time she had already served 19 years…Elizabeth was ultimately freed.—PBS,Africans in America

    The Growth of Race-Based Slavery in English America
    By 1705, however, an important shift had occurred in America. In 1650, there were only about 300 Africans in Virginia, but by 1700 over one thousand slaves entered the colony every year; black labour was becoming more important than free white indenture. Maryland and the newer Carolina colony also demanded large numbers of slaves to tend their crops. England had aggressively entered the slave trade in 1672 with the formation of the Royal African Company, which held a monopoly on slave trading. In 1698, bowing to the ever-more-powerful merchant lobby, Parliament revoked this monopoly, and English slavers entered into the trade without restriction. The number of slaves transported on English ships soared to about 20,000 a year.

    The permeable line between indentured servitude and slavery was sealed in 1705, when Virginia passed the following statute:

    All servants imported and brought into the Country…who were not Christians in their native Country…shall be accounted and be slaves. All Negro, mulatto and Indian slaves within this dominion…shall be held to be real estate. If any slave resist his master…correcting such slave, and shall happen to be killed in such correction…the master shall be free of all punishment…as if such accident never happened.

    With this statute, skin colour trumped all else. Even if a slave converted to Christianity in America, that did not affect his or her status. Slaves were now “real estate,” who could be killed if a master did so during the course of “correction.” What kind of “correction” could slaves face? For minor offenses, like leaving the plantation without permission, they could be whipped, branded, or maimed. Slaves convicted of murder or rape would be hanged; for robbery, they would receive sixty lashes and be placed in the stocks, before having their ears surgically removed. (Historiorantrix: Who wrote this thing–Abu Gonzales?) It is probably worth noting that such treatment was not totally out of line with what any criminal might face in England, where pickpocketing could be a hanging offence. Still, the 1705 statute for the first time created crimes that black or Indian slaves–and ONLY those slaves–could commit, such as “consorting” with whites. And slaves lost all rights to sue at court for better treatment; the 17th century world that occasionally gave black servants and slaves a break in the law was gone forever.

    Under these new attitudes, slavery expanded in English America, especially in South Carolina, where close connections with the Caribbean ensured a steady stream of new slaves; by about 1715, blacks began to outnumber whites in that area. Certain slaves were highly prized for their ability to produce rice, the first cash crop of the Carolinas. Masters sought out men and women of Sierra Leone and other rice-producing areas in Africa. This acknowledgment that slaves brought with them skills and trades from Africa was not accompanied by a curiosity about other aspects of  slaves’ previous lives, or their personalities. Disobedience was chalked up as evidence of inferior morals or nature, rather than recognized as justifiable anger about the state of unfreedom.

    William Byrd, a Virginia slaveholder, recorded this attitude in his diary, which includes frequent, almost casual references to punishing his slaves:

    June 10, 1709. I rose at 5 o’clock this morning but could not read anything because of Captain Keeling, but I played at billiards with him and won half a crown of him and the Doctor. George B-th brought home my boy Eugene. . . . In the evening I took a walk about the plantation. Eugene was whipped for running away and had the [bit] put on him. I said my prayers and had good health, good thought, and good humor, thanks be to God Almighty.
    September 3, 1709. . . . I read some geometry. We had no court this day. My wife was indisposed again but not to much purpose. I ate roast chicken for dinner. In the afternoon I beat Jenny for throwing water on the couch. . . .

    December 1, 1709. I rose at 4 o’clock and read two chapters in Hebrew and some Greek in Cassius. I said my prayers and ate milk for breakfast. I danced my dance. Eugene was whipped again for pissing in bed and Jenny for concealing it. . . . —Excerpts from The Diary of William Byrd.

    Resistance and Rebellion
    Although open rebellions were rare in what is now the United States (a marked contrast to the Caribbean colonies), slaves were prepared to defend themselves and take up arms in the service of freedom if they could. Many in South Carolina were former soldiers, captured in the course of wars between the various Central and West African kingdoms. A significant number could speak some Portuguese or Spanish, and many from central Africa believed in a form of Catholicism, thanks to the 15th century conversion of the royal House of Kongo.

    It’s hardly surprising that many of these capable men, trained in firearms and military tactics, decided to run away from South Carolina and make their way to Spanish Florida, where they could at least practice their religion and enjoy somewhat better treatment. (Historiorantrix: Enslaving military veterans ranks way up there with engaging in land wars in Asia as a Very Bad Idea, historically speaking.) A steady stream of these runaways fled South Carolina in the 1720s, irritating the English colonists to no end:

    The Spanish are receiving and harboring all our runaway negroes, they have found out a new way of sending our own slaves against us, to rob and plunder us — they are continually fitting out parties of Indians from St. Augustine to murder our white people, to rob our plantations and carry off our slaves so that we are not only at a vast expense of guarding our southern frontiers, but the inhabitants are continually alarmed and have no leizure to look after their crops.—Arthur Middleton, 1728, Acting Governor of Carolina, from the PBS Africans in America website.

    The most spectacular of these runaway attempts came in 1739–the so-called Stono Rebellion. On September 9, 1739, a man named Jemmy, one of these Angolan veterans, led a large group of slaves from Stono, near Charleston, toward Florida. They raided different slaveholding homesteads, gaining rifles and killing a number of slaveholding families (at least one was spared because of his reputation for kindness). Slaves from these plantations joined the march. The 100-strong army, now possessed of firearms, shouting “freedom” and waving a banner, fought a pitched battle against the South Carolina militia, resulting in several deaths. 

    The English colonists executed the rebels whom they managed to capture, and the South Carolina legislature promptly enacted a Negro Code that forbade slaves from growing their own food, assembling in groups, earning their own money, or learning to read. But some of the rebels escaped and did make their way to Florida. St. Augustine’s Spanish rulers were delighted to have the escapees, and impressed them into the Spanish military forces. They fought alongside Spanish soldiers against English incursions, so bravely that Spanish authorities granted freedom to several of them. They integrated into the black community of St. Augustine, attending Catholic services and praying in Kikongo.

    Other runaways joined Native American settlements, or found work in outlying colonies where no questions might be asked. A significant number joined the community of free black men who made their living as sailors. Pirate crews were particularly unlikely to ask questions; in his essay “Black Men under the Black Flag,” Kenneth J. Kinkor notes that 20-30 percent of pirate crews prosecuted under English law in the first part of the 1700s were recorded as African or black.

      (Weird Historical Sidenote: He also suggests that Blackbeard might have been  of mixed African and European descent. Arrrrrrr…)

    Some slaves might have been plotting more open, more widespread rebellion. One of the great historical mysteries of the colonial era is the alleged New York Plot of 1741. New York City had the highest density of slaves of any northern city (2,000 of the 20,000 inhabitants were black). In the wake of the Stono rebellion, white inhabitants cast a nervous eye on their supposed inferiors. In the winter of 1741, a series of mysterious fires broke out across the city, destroying part of Fort George and several mansions of the wealthy and powerful. Africans and Irish were accused of plotting a vast uprising; the witch-hunt that followed led to accusations against 160 blacks and at least a dozen white servants, mostly Irish Catholics. Thirty-one Africans were killed; 13 were burned at the stake (a punishment for treason of all sorts, including “petty treason” against one’s master or husband). 70 were deported. Four whites were also hanged, in a judicial travesty whose death toll far outran that of the Salem Witch Trials.

    Was it all the result of paranoia? Historians are deeply divided over this question. Certainly, the episode reeks of racism and prejudice against servants of all sorts. The English middle class of the 18th century feared their own servants and slaves, with good reason. With somewhat less good reason, they believed that Catholics made up a secret Fifth Column, waiting to strike and murder good Protestants in their beds. (Historiorantrix: We wouldn’t know anything about irrational fears of religious minorities-as-terrorists today, now, would we? ) The hysterical identification of a local Latin teacher as a secret Jesuit agent certainly seems implausible. On the other hand, it’s worth noting that some of the Africans accused were probably ex-soldiers from Angola…exactly the sort of men we might think prone to plotting and executing a successful rebellion.

    If nothing else, the 1741 episode helps summarize some of the fears and prejudices that underlay the institution of slavery in 18th century English America. There is no simple answer to why Africans became the primary victims of race-based slavery in what is now the United States. Different historians have argued for a wide variety of explanations. Here are a few factors that seem important to the enslavement of Africans and their very powerless status under English law:

    * English attitudes about their colonial experiences in Ireland made them uneasy about enslaving a native population in their own land; it was better to send punished slaves away and/or import new slaves from a different area
    * English laws also drew clear dividing lines in legal status by Catholic/Protestant status, contributing to a “them/us” mentality
    * English law and custom granted significant powers of punishment to male heads of household–over wives, children, servants, apprentices and employees
    * Contact with African slavery in other colonies, combined with the easy availability of unfree African labour
    * Tremendous demand for agricultural labour
    * Tradition of violent punishments for unfree labourers in early colonies leading to an acceptance of the worst kinds of punishments for servants
    * Skin colour as a convenient marker of unfree (or at least potentially unfree) status
    * Superior African resistance to hot-weather diseases
    * Superior African knowledge of certain crop cultivation (such as rice)

    This is not a comprehensive list, but rather a pointer towards the complexity that was the evolving English colonial system of slavery. With no immediate precedent in English law, slave statutes borrowed from anti-Catholic statutes in Ireland, Iberian precedent, English property laws and classical Roman law. Cultural attitudes reflected wider hostilities toward outsiders and suspicions of servants and landless labourers. For all these reasons and more, English colonists in the mid-to-late 18th century held increasingly racist, negative attitudes towards Africans that made inheritable, race-based slavery easy to justify.

    At the same time, a few tentative English voices began to protest the system. While runaway slaves seized their own freedom, and other slaves throughout North America found myriad little ways to protest their unfree state, a few English thinkers expressed discomfort with slavery and the slave trade. Throughout the 18th century their voices would grow. One of the texts that 18th century antislavery advocates claimed as their own was a 1688 novel written by one Aphra Behn (perhaps you’ve heard of her!). While not a protest against the entire system, Oroonoko does paint a sympathetic portrait of an African prince, unjustly tricked onto a slave ship. Whatever his skin colour, he is naturally noble, and in no way deserves to be enslaved. As he entreats his fellow slaves to rise up against an unjust system, Behn gave him words that might have been spoken by Jemmy, or any of the smoldering Angolan soldiers who fought so bravely for their freedom:

    “And why,” said he, “my dear friends and fellow-sufferers, should we be slaves to an unknown people? Have they vanquished us nobly in fight? Have they won us in honorable battle? And are we by the chance of war become their slaves? This would not anger a noble heart; this would not animate a soldier’s soul: no, but we are bought and sold like apes or monkeys, to be the sport of women, fools, and cowards; and the support of rogues and runagates, that have abandoned their own countries for rapine, murders, theft, and villainies. Do you not hear every day how they upbraid each other with infamy of life, below the wildest savages? And shall we render obedience to such a degenerate race, who have no one human virtue left, to distinguish them from the vilest creatures? Will you, I say, suffer the lash from such hands?” —- Oroonoko by Aphra Behn, 1688

    Next week, the Moonbat returns as we conclude our series on slavery with yet another collaboration. I’ll relate the story of slavery in the era of the American Revolution while Moonbat takes us to the finish line. This has been yours truly, aphra behn, guesting in the Cave of the Moonbat.


    History for Kossacks: Of Slaves and Serfs (Special Guest Edition)

    October 7, 2007

    Originally posted by aphra behn on 11/13/06

    Greetings Cave Dwellers! aphra behn guest-hosting here. Tonight The Moonbat and I continue our exploration of the history of “unfreedom,” with a special focus on the institutions that led to the development of slavery in the United States.


    If you missed last week, UM got us started with a look at the roots of slavery in the ancient Western world. Tonight we’ll continue the journey with a wildly experimental attempt at co-authoring! He’ll talk for a while (and I’ll interrupt) about slavery in the Roman world. Then I’ll pick up where he leaves off and look at several kinds of “unfreedom” in Western Christendom, in Byzantium, and under the Ottoman Empire.  There’s plenty of room in the Cave for guests, so pull up a chair, pour yourself a cup of tea, grab some pie (leave the doughnuts for the trolls) and follow us below the fold to the Hills of the Eternal City….

    (Crossposted at Never in Our Names and Progressive Historians.)

    The following section on ancient Rome comes to you straight from the pen of the Moonbat!—ab

    Slavery and Freedom in Ancient Rome

    It’s hard to say if slavery existed in the vicinity of the Seven Hills during the time of Romulus and Remus, ca. 753 BCE, as the Etruscan then-masters of the Italian Peninsula didn’t leave much in the way of written records – currently, only a couple of hundred or so words of Etruscan are known.  Still, slavery must have been a practice of Rome’s northern neighbor/overlords; this Etruscan Dictionary lists five or six different ways of referring to slaves as an individual or as a class, and it certainly was known among the Greek colonies in southern Italy.

    The Romans liberated themselves from Etruscan-puppet oppression in 509 BCE, and though they swore to themselves that they would never be ruled by a king again (ha!), the newfound freedom to make Senatorial orations about what a great thing is liberty was not to be extended to every level of society.  By 451 BCE, when the Romans were ready to carve their legal code into stone, a considerable body of law already governed the status of slaves.  According to The Twelve Tables:

    Table IV.

    2. If a father sell his son three times, the son shall be free from his father.

    Table VIII.

    2. If one has maimed a limb and does not compromise with the injured person, let there be retaliation. If one has broken a bone of a freeman with his hand or with a cudgel, let him pay a penalty of three hundred coins.  If he has broken the bone of a slave, let him have one hundred and fifty coins. If one is guilty of insult, the penalty shall be twenty-five coins.

    (Historiorant: there’s a lot of people out there that owe me a lot of money – u.m.)

    Table XII.

    2. If a slave shall have committed theft or done damage with his master’s knowledge, the action for damages is in the slave’s name.

    Via Ancient History Sourcebook


    Later legal advances regarding slaves included laws against selling them to fight beasts in the amphitheater, killing them due to age or infirmity, and executing them without due process of law (Fun with History – insert your own Military Commissions Act joke here), but back in those pre-Imperial times, life wasn’t all legal protections and protected status for the Roman slave:

    The master’s power over the slave was called dominica potestas, and it was absolute: torture, degradation, unwarranted punishment, even killing a slave when he was old or sick. In the eyes of the law, slaves were property who could not legally hold property, make contracts, or marry, and could testify in court only under torture. The death of his master did not free a slave.

    classicsunveiled.com (an excellent resource, btw – u.m.)

    Sub Hasta Venire

    The Romans acquired many (probably most) of their slaves through warfare, but their military wasn’t really in the slaving business per se.  Since the legions were military outfits not designed for the long-term care and feeding of a captive population, they were usually followed by slave wholesalers, who bought people in bulk and arranged for their shipment back to civilization.  After a victory, the army’s quaestor (paymaster) would oversee the sale of the newly-enslaved, usually with a spear marking the site of the auction and wreaths used to mock/identify the victims; those slaves who were not to be retained for the public good (on construction sites and in city maintenance) were auctioned off to the private market.


    Not that it mattered much to a slave, who might find himself (or herself) with an iron band riveted around his (her) neck like the one in Rome which reads: “I have run away. Catch me. If you take me back to my master Zoninus, you’ll be rewarded”.  They might also find themselves owned by anyone from Cicero, who really liked his slave Tiro, to somebody like Vedius Pollio, who once ordered a slave thrown into a pool of carnivorous fish because the guy dropped a goblet.  The reputation of the Romans as brutal masters was not one that was ignored by people who saw the legions approaching, either – at the siege of Alesia in 52 BCE, in which Julius Caesar’s 60,000 men were encircling a fortress containing 80,000 warriors and 100,000 civilians, the Celtic chieftain Critognatus let his people know in no uncertain terms what was at stake:

    “What counsel, then, have I to offer?  I think we should do what our ancestors did in a war that was much less serious than this one.  When they were forced into their strongholds by the Cimbri and Teutoni, and overcome like us by famine, instead of surrendering they kept themselves alive by eating the flesh of those who were too old or too young to fight.  Even if we had no precedent for such action, I think that when our liberty is at stake it would be a noble example to set to our descendents.  For this is a life and death struggle, quite unlike the war with the Cimbri, who, though they devastated Gaul and grievously afflicted her, did eventually evacuate our country and migrate elsewhere, and left us free men, to live on our own land under our own laws and in possession of our rights.  The Romans, we know, have a very different purpose.  Envy is the motive that inspires them.  They know that we have won renown by our military strength, and so they mean to install themselves in our lands and our towns and fasten the yoke of slavery on us for ever.  That is how they have always treated conquered enemies.  You do not know much, perhaps, of the condition of distant peoples; but you need only look at that part of Gaul on your own borders that has been made into a Roman province, with new laws and institutions imposed upon it, ground beneath the conqueror’s iron heel in perpetual servitude.”

    extract from The Conquest of Gaul by Julius Caesar (tr. S.A. Handford), London, 1982.

    Historiorantrix: Lemme interrupt the Moonbat just briefly to point up an example of Roman slavery that may be familiar to some of us. It’s found in Matthew 8:5-13 and Luke 7:1-10: the story of the centurion with a sick “servant,” whom he begs Jesus to heal. The centurion clearly loves his slave a great deal–so much that he is willing to beg a Jewish prophet for assistance. (To put this in modern terms, imagine the circumstances that might drive a U.S. Marine officer to beg an Iraqi holy man for help.)

    Why did the centurion care so much about his slave? Matthew and Luke don’t tell us directly; apparently their first century audiences would have found it perfectly understandable that a centurion might care so much about one of his household slaves. Matthew uses the Greek word “pais” to describe the servant; in the context of the story, it sounds very much as if the servant is a male concubine to the centurion. As Moonbat noted, male heads of household had absolute power over their slaves. This included the right of sexual access to any of their slaves, male or female.


     Romans did not view this as adultery, even if the slaveholder was married. Some of these encounters consisted of sexual harassment, abuse, and rape. Others may have been intense, loving relationships.

    So, here’s a little “Good News” for Ted Haggard. It’s possible that the centurion was begging Jesus to heal his male lover. The earliest readers of the Gospels would have understood that possibility. And ya know what? Jesus didn’t say, “Gee, Roman dude, I’d love to heal your servant…but are you gay? Because that is the Worst. Thing. EVER!!!” Nope. Jesus doesn’t say anything disapproving about “teh gay.” He just praised the centurion’s faith and healed the man’s “pais.” No questions asked. Food for thought, eh?—-a.b.

    …..and now back to your regularly scheduled Moonbat:

    A Couple of Ways Out

    Say you’ve recently been enslaved by the Romans.  Right now, your prospects for freedom don’t look so good, but you’ll find there are several possible avenues that might open up later on.  You might find yourself sold to a wealthy aristocrat, for example – Horace, ca. 20 BCE, implied that even a man of moderate circumstance would own around ten – and so might be able to set yourself up to buy your freedom with a small hoard of tips collected as you ingratiate yourself to your master’s buddies.  Long-term loyal service could also occasionally result in deathbed manumission, and depending on the relative gratitude of an owner, a single, particularly thanks-worthy act might rise to the level of a proclamation of emancipation.  Should your owner decide to grant you your freedom thus, try’n make sure it’s official by getting it done in front of a praetor, but legally, any witnesses will do in a pinch.  Getting manumitted in this way will entitle you to wear a cap on your head and call yourself a libertinus, which you’ll find is an increasingly large socioeconomic class as the Pax Romana (27 BCE – 180 CE) rumbles on.

    If you’re a farm slave, or one bought as part of a labor-force-for-hire, then it’s probably your dream to run away and be a slave in the city, where you’d most likely get to focus on one, single task – looking after master’s sandals, scrubbing out master’s crapper, say – instead of doing the varied (but all hard) tasks that come your way as an involuntary agricultural worker.  If you do run away, and you’re caught, you’ll likely be branded on the forehead with a big “F” (for fugitivus).  Do that enough, and you’ll probably be sold as gladiator fodder.


    You could always go that route: try for fame and fortune in the ring, and win your freedom by public acclaim.  Since this would have been about as likely to happen then as it is in our modern gladiatorial arenas, counting on this approach is not recommended.  Of course, if you’re the rabble-rousing type, a gladiator school might be just the thing for you: Though the First and Second Servile Wars (135 & 104 BCE, respectively) were localized Sicilian uprisings that didn’t threaten the Republic as a whole, the Third Servile War, led by the slave Spartacus from 73-70 BCE, began in a gladiator school.

    For that little act of freedom-assertion, Spartacus and 6,000 of his closest friends suffered the terrible fate of the rebellious slave: crucifixion, in their case along the Appian Way from Rome to Capua.  This led to the slave-slang Cheneyism i ad malam crucem, “Go to the bad cross.”

    The population of slaves kept growing into and throughout most of the Imperial period, and many slaves found themselves with less and less to do.  To avoid a reprise of the Spartacus Incident, the Emperors and the Senate began turning to bread and circuses to keep the mobs of idle slaves mollified – but like all things Roman, this, too, eventually got out of hand, and by 400 CE, nearly half of the Roman calendar consisted of holidays.  To add inevitability to ineptitude, as the Empire’s economy decayed, the currency devalued, making it more expensive to buy slaves and less profitable to employ their labor – and of course, there was the increasingly powerful sect of Christians, who were denouncing the institution even as they were trying to wean themselves off the habit.  By the time the Western Empire lay prostrate before Attila in 452 CE, much of the slave population of Rome had either died or melted back into populations around Europe, there to provide a ready source of labor for the feudal warlords that would come to dominate the land.

    We now end your evening’s Moonbat programming. Welcome to the aphra behn show!

    Historiorantrix: Hmm. Trust the Moonbat to leave me here with an Empire lying prostrate before me!

    Decline and Fall

    Slavery in Western Europe outlived the slow deflation of the Roman Empire. Things got pretty hairy for a while; without the protection of a central authority, rural people became especially vulnerable to kidnapping and subsequent enslavement. A Romanized Briton named Succat (that’s St. Patrick to you and me) was kidnapped by Celtic raiders and spent time as a slave, herding goats in Ireland. He later returned as a bishop to try to convert the Irish.

    Unsurprisingly, Patrick did not approve of Christians being enslaved by other Christians:

    Now you, Coroticus— and your gangsters, rebels all against Christ, now where do you see yourselves? You gave away girls like prizes: not yet women, but baptized. All for some petty temporal gain that will pass in the very next instant. “Like a cloud passes, or smoke blown in the wind,” so will “sinners, who cheat, slip away from the face of the Lord. But the just will feast for sure” with Christ. “They will judge the nations” and unjust kings “they will lord over” for world after world. Amen. —Letter to the Soldiers of Coroticus (c.450?) at Wikipedia.

    The idea that Christians should avoid enslaving other Christians meant that most slaves in later antiquity came from outside the Christian world, members of Germanic or Slavic groups. There is a story that Pope Gregory (late 500s) was walking through the Roman market and was struck by the appearance of some fair-haired slaves. He asked what tribe they were from and was told they were Angles. Vowing to work for their conversion, he is supposed to have quipped “Non Angli, sed angeli”–“Not Angles, but angels.” (So that’s why those Christmas angel tree-toppers are always blondes—ab.)

    The spread of Christianity diminished the practice of slavery in Europe, but certainly didn’t eliminate it. Some Germanic law even made slavery a punishment for certain crimes: exposing a newborn, for example. In hard times, desperate men and women might offer themselves as slaves to anyone who could offer them food and shelter.

    By the 1200s, slavery was in great decline in the Christian West. St. Thomas Aquinas wrote in 1256 that slavery ran counter to the idea of free will-that each person is responsible for his or her own spiritual choices. Yet he also accepted the church’s position that slavery was acceptable as long as it was practiced only on non-Christians.

    The Bonds of Feudalism                           

    The decline of slavery didn’t mean that most people were “free” in medieval Western Europe, at least not in the sense that we know it. Rather, the feudal system ensured that pretty much everybody had obligations of some sort to a higher authority. At the risk of oversimplifying, one of the most important reasons that feudalism worked was fear of guys like this:

    Historiorantrix: Scary, huh? And I don’t just mean the wax job.–ab

    Like St. Patrick, many people in rural Europe needed protection from Republican governors bandits, barbarians, pirates, and kidnappers of all sorts.  The classic formulation of medieval society is that it was made up of “those who pray” (that the bandits will stay away), “those who fight” (kings, knights, and other members of the hereditary ruling classes–the guys with big swords who keep the bandits away), and “those who work” (everybody else–no swords for you!).

    At the bottom of the “those who work” scale were two groups: serfs and slaves. They weren’t the same thing. Yet in some ways serfdom–the large-scale use of less-than-free agricultural workers–looks a lot like later slavery in the Americas. Where did it come from?

    Let’s re-visit Moonbat’s Rome for a moment and check out the scene when Constantine came to the imperial throne. He had to deal with the fact that a lot of small farmers had fled their lands during the uneasy 200s (Rome had 35 rulers in 40 years–lots of civil warring and instability). The collapse of the small free farming class left most land in the hands of large landowners on estates (latifundia), where coloni, or tenant farmers, rented land. These coloni paid rent on their own plots and also farmed the fields of their landlord.

    Many coloni were unable to pay their debts, thanks to the crumbling economy. They faced eviction and exploitation by landowners eager to bind them into permanent service. Many simply ran away, skipping out on the untenable (ha ha) situation.  In an effort to fix the labor problem, Constantine issued a fateful edict in 322 that both protected and restricted coloni.

    Coloni were protected from eviction and unreasonable raises in rent under Constantine’s edict; however, they were permanently fixed in their status. They could not leave the estate, or marry someone from outside it, without the landlord’s permission.

    The binding of the coloni coincided with the settlement of large numbers of barbarian hordes immigrants in the former Roman Empire. Germanic peoples–Lombards, Franks, Angles, and all the rest–took up the reins of government that had once been held in Rome. They blended Roman and German customs, laws, and culture.

    Germanic tribes generally considered their society to include three kinds of people: free men and women, slaves, and half-free people. The half-free men and women were not technically “owned” by any person, but they were tied to the land on which they worked, just like the coloni. They might be called half-freemen or geburs or ceorls (although some Germanic tribes regarded “churls” as free men) or any number of things, but the point was the same: they weren’t slaves, but they were legally tied to a person or place.

    Having workers tied to the land is a useful thing when you become an agrarian society. The blending of the coloni with the Germanic half-free workers gives us the concept of “serf,” from the Latin servus, a servant or slave. They formed the backbone of the farming culture of early medieval Europe. And they performed other work too; on larger estates in particular, serfs might serve as bakers, cobblers, and smiths, and fill a wide range of other functions.

    Here is a description of the serf’s obligations from Louis the Pious in 817:

    He is to plough, sow, enclose, harvest, haul, and put away the crops from the regular enclosures—which are four ten-foot measuring rods in width and forty in length. He is to enclose, reap, gather, and put away one arpent in meadow. Every colonus ought to collect and put away corn to the value of a triens for seed. He is to plant, enclose, dig up, extend, prune, and collect the harvest of the vineyards. They each pay ten bundles of flax. Four hens they must pay also. –Duties of the Coloni from the Internet History Sourcebook

    Serfdom vs. Slavery

    So how much did serfdom really differ from slavery? In many ways, quite a lot. Here is Professor Lynn Nelson on the distinction between slaves and serfs:

    The man (and of course there were women slaves) who was enslaved in ancient times was considered to have died; all that was his passed to his master, including the power of life and death. …The serf, by contrast, was a free man except for the obligations he owed to his lord and the rights his lord claimed over him…The master could not deny his serf the amenities of the Church, work him on holy days, or demand actions of him that were immoral. As a living creature, the serf had the rights accorded him by natural law. He could resist a lord attempting to take his life or one attempting to withhold the necessities of life from him and his. —Lynn Nelson, “Classical Slavery and Medieval Serfdom”

    Guillaume Serf’s life was better than Joe Slave’s…but being free would be best of all. Serfdom declined throughout Western Europe in the 1200s and 1300s. (Interestingly, it actually became more extensive in Eastern Europe at this time…but serfdom in Eastern Europe is a whole different story.) In part, the Black Death is responsible. As it ravaged Europe in the mid 1300s, it killed off countless thousands—including a lot of surplus labor.

    If you managed to survive, then hey presto! You were suddenly worth a lot more. And if, as a tenant farmer, you still couldn’t make the rent, you could run away to one of the burgeoning cities and melt into the general population. Here is Henry, King of the Romans, on the proof needed for a landlord to reclaim his serf:

    That if any person pertaining to any noble or ministerial betake himself to our cities with the idea of staying there, and his lord wish to reclaim him, the lord ought to be allowed to take him, if he has seven relatives on the mother’s side, who are commonly called nagilmage, who will swear that he belongs to the lord by right of ownership. But if for any reason the lord be unable to obtain the relatives or friends, let him obtain two suitable witnesses from the neighborhood from which the fugitive came, and let him prove that he had that man in his undisturbed possession by right of ownership before he betook himself to our cities, and with his witnesses let him take oath on the relics of the saints, and so let his man be restored to him.  —Concerning Serfs Who Flee to the Cities of Alsace, 1224, at the Internet History Sourcebook.

    Landlords had to spend a lot of time hunting down serfs in urban centers; was it really worth it? Increasingly, the answer was no.

    FREEDOM! (Or…not so much.)

    Still, the decline of serfdom in the West does not mean that personal “freedom” as we understand it was the rule. For example, serfdom declined early in England, but if we had been around in, say, 1600, we might be shocked by how circumscribed the life of an average person (let’s call him Johnny Worker) really was.

    For Johnny to learn a trade, he must bind himself as an apprentice–or rather, his parents would bind him. Johnny then comes under the near-complete authority of his “master,” who is authorized to punish Johnny just as he can punish any member of his own family–with physical force. If Johnny has a cruel master, too bad. It is a crime for him to run away.

    And Johnny’s sister Jane Worker doesn’t have it so good either; she’s a servant and cannot freely leave her service. Although the law no longer gives the paterfamilias legal sexual access to his servants and/or slaves, it will be difficult for her to do anything about it if her master decides to seduce or rape her. Johnny and Jane will still need a pass to travel freely–they have to prove that they are not “idle poor”–vagabonds–or they might be whipped publicly in the streets.

    And remember, these are free people. Whether “slave,” “serf,” or “free,” ordinary Western Europeans lived in a world that emphasized hierarchy and obedience to a degree unimaginable to most living in America today. Slavery was just a very low rung on a long ladder of reciprocal relations: obedience in return for some kind of patronage and/or protection.  

    And, after 1569, that rung disappeared in England, at least in theory. In that year, an English court ruled against an Englishman who had been accused of beating a slave he had acquired while living in Russia. The “slave” accused the “master” of assault. The court ruled that English common law made no provision for slavery–and therefore, slavery did not exist in England. So how could the English become slaveholders in the Americas? That’s a story we’ll get to. To really answer that question, we’ve first got to set sail for a more exotic location:  Byzantium!

    Slavery in the Eastern Empire

    The Roman Empire didn’t fold up shop when Attila did his horde thing. Its center had already shifted eastward. What we call “Byzantium” was then simply called “Rome.” The empire ruled from Constantinople maintained strong continuities with old Roman culture, and viewed itself as the continuance of the old Empire. In the 520s, the Emperor Justinian set about tidying up the mess of laws and legal opinion that made up Roman law. His Corpus Juris Civilis (better known to you and me as the Justinian Code) included definitive clarifications of the Roman laws of slavery.

    In the Justinian Code, society is described as divided between slave and free:

    The Law of Persons

    In the law of persons, then, the first division is into free men and slaves. (1)

    Freedom, from which men are called free, is a man’s natural power of doing what he pleases, so far as he is not prevented by force or law. (2)

    Slavery is an institution of the law of nations, against nature, subjecting one man to the dominion of another. (3)

    The name ‘slave’ is derived from the practice of generals to order the preservation and sale of captives, instead of killing them; hence they are also called mancipia, because they are taken from the enemy by the strong hand. (4)

    Slaves are either born so, their mothers being slaves themselves, or they become so; and this either by the law of nations, that is to say by capture in war, or by the civil law, as when a free man, over twenty years of age, collusively allows himself to be sold in order that he may share the purchase money. —The Justinian Code, Title III, Book I at Humanistic Texts.

    So, according to the interpretations favored by Justinian, slavery goes against nature. In his view, one can be born a slave, if one inherits that state from one’s mother. (Note that the law makes no assumptions about the skin color or ethnicity of a slave.)  The code is actually fairly generous about those born into slavery. It can’t be inherited from the father, only the mother. And if she is free at the time of birth or conception, the child is still free, even if the mother’s status has changed:

    It is enough if the mother be free at the moment of birth, though a slave at that of conception: and conversely if she be free at the time of conception, and then becomes a slave before the birth of the child, the latter is held to be free born, on the ground that an unborn child ought not to be prejudiced by the mother’s misfortune. Hence arose the question whether the child of a woman is born free, or a slave, who, while pregnant, is manumitted, and then becomes a slave again before delivery. Marcellus thinks he is born free, for it is enough if the mother of an unborn infant is free at any moment between conception and delivery: and this view is right.  —Title IV, Book I

    The code encouraged manumission; a vow in the church, a verbal statement before witnesses, a letter or a will could all be used to free a slave. The code also removed obstacles to masters who wished to free all of their slaves at their own deaths.

    As in Rome, a few slaves in Byzantium might achieve high status, by serving as assistants to the Emperor or his household. Perhaps the most interesting of these were the eunuchs–usually slaves or free servants. Because they were castrated, they were freed of the ties of family that were believed to drive most corruption and disloyalty. (They also made “trustworthy” guards for elite women, for obvious reasons.)

    In later Rome, emperors often preferred that their personal assistants be eunuchs–if a guy is going to, say, give the emperor a shave or a haircut, then you want to make sure he doesn’t have any ambitions or reason to let the razor slip. In Byzantium, eunuchs formed an important class within the imperial government, with archieunuchs in charge of other groups of eunuchs. In theory, it was a crime to kidnap and castrate unwilling boys; eunuchs were supposed to be purchased from abroad. In practice, foundlings and poor children were vulnerable to this abuse because eunuchs were such profitable slaves to sell.

    It’s Istanbul, not Constantinople

    So this more-or-less Roman version of slavery flourished in Byzantium, where slaves of many nations were traded freely. The slave trade continued, within certain limitations, as the Ottoman Turks nibbled away at the empire. Their conquests quickly gobbled up much of Byzantium’s old lands, and culminated in the fall of the city of Constantinople in 1453 CE.

    Islamic law forbade the enslaving of one’s own countrymen—if they were Muslim, that is. In Islamic law, one might become a slave under two circumstances only: capture in war, or birth to slave parents. If either parent was free, then so was the offspring.

    Slaves from Africa, Asia, and Eastern Europe might be captured or kidnapped and then sold in the great markets of the Islamic world. (In theory, the Ottomans frowned on kidnapping, but it was pretty hard to prove that a slave hadn’t been taken in legitimate warfare.)  If you became a slave, then as long as you remained loyal, you had the right to good treatment under Islamic law. Freeing a slave was considered a holy act.

    It wasn’t all roses and puppies for slaves. Islamic law did not give them any legal standing in court, nor did it guarantee their right to own property, nor were they allowed to marry without the permission of their masters. And a married woman taken as a captive was required to engage in sexual intercourse with her master if he so desired; previous marriages were essentially annulled.

    The Ottomans limited the slave trade in their empire, but several forms of slavery flourished nonetheless. First and foremost, the sultan technically “enslaved” many of the ruling elite; they owned their properties and estates at his whim. On a more general level, slaves usually served as domestic servants, not agricultural workers. Some enslaved captives of war manned the navy’s galley oars. And eunuchs continued to serve in the Sultan’s household, often guarding the women of the harem.

    The Ottomans also practiced a form of slave-tax called the devshirme. This child-levy required non-Muslim subjects to contribute a certain percentage of their children to the state.

    Beginning in the 1400s, some of these children became soldiers; they might convert to Islam and be pressed into the “Janissaries,” the 30,000-man army of the Sultan. Janissaries were highly disciplined elite soldiers who considered the corps their family and the Sultan their father. They enjoyed significant privileges and could amass quite a bit of property–property that reverted to the order upon a Janissary’s death. Except for the killing and dying part, this wasn’t a bad deal.

    Not all levied children ended up in the army. For some, becoming a slave was actually a good career move. In the Ottoman world, slaves served as administrators, statesmen, and in a wide range of government positions.  Some Christian families were actually eager to turn their children over to the Sultan, because it was a way for those with talent to rise to the top. Color and ethnicity were no bar—only religion was.  If one was Muslim, then one’s ethnicity mattered not one bit.

    One of these extraordinary success stories is that of an enslaved woman named Roxelana, or Hurrem (“cheerful one”). Probably born to a Polish family around 1500, she was kidnapped and sold into slavery. She ended up in the harem of Sultan Suleiman the Magnificent and quickly became his favorite concubine, bearing him five children.

    Now the story takes a turn for the unusual; Suleiman, not content to have Roxelana as his concubine, actually married her. She served as one of his most important advisors and may have helped him keep peaceful relations with the Polish king. Before her death in 1558, she founded schools, mosques, a hospital, and several other public works. Not bad for an ex-slave.

    Why are these Byzantine and Ottoman concepts of slavery important to our story? Because they influenced, both directly and indirectly, the re-envisioning of slavery by Western Europeans in the Era of Encounter. Tune in next week to see how. We’ll take a look at the institution of slavery in Africa, the rise of unfree labor in the Americas, and the beginnings of the Atlantic slave trade.  Until then, this has been aphra behn guest-hosting in the Cave of the One and Only Moonbat!


    Mission Statement

    October 7, 2007

    Originally posted by Nonpartisan on 08/26/06

    “History is not simply an art, nor simply a science,” wrote Jean-Jules Jusserand in “The Historian’s Work”; “it participates in the nature of both.”  The academicians who govern the current historical establishment seem to have forgotten Jusserand’s dictum.  With pseudoscientific zeal, they have promulgated a history dominated by a narrow field of experts, created a disconnect between historical inquiry and popular memory, and excised informed opinion about current events from the purview of historical analysis.

    The disastrous results of such a policy are obvious.  To many Americans today, history has become the study of obscurity, the prerogative of trained scholars, the prisoner of an enforced neutrality.  History has been pigeonholed, overprofessionalized, and defanged as an agent of social change.  Historians have never commanded so little respect as they do now; likewise, history as a discipline has never seemed so remote, so otherworldly.

    All of us who are involved in online political activism are here to speak truth to power; many of us came to the blogosphere through the Dean campaign, an organic popular movement that encouraged us to “take our country back.”  While we at ProgressiveHistorians view our country through the specific lens of history, our goal remains the same: to take back our discipline, and by extension our country, by expanding it — by opening it to those not formally trained in historical scholarship, and by encouraging its practitioners to view engagement with today’s political climate as a necessary and welcome addition to historical analysis.

    At ProgressiveHistorians, we hope that all academic disciplines will eventually have their own online homes for open and lively debate.  Still, it is fitting that history be the first to receive such recognition.  Both the etymology and the study of history have their roots in story, and storytelling is as old as humanity itself.  In prehistoric times, a tribe’s oral historian was often its shaman, an individual revered for his knowledge and wisdom.  In ancient times, historians were internationally-renowned authorities (Herodotus), respected aristocrats (Thucydides), and companions of heads of state (Polybius).  During the Middle Ages, monks with historical training, such as the priest charged with determining the true birth year of Christ (Dionysius Exiguus), were among the most highly-regarded spiritual figures in Europe.  The Enlightenment saw historians revered in intellectual circles (Edward Gibbon) and scholar-historians entrusted with leading roles in a political revolution (Voltaire, Denis Diderot).  The Chinese civil service system valued historian-scholars above all others and offered them positions as commander-in-chief (Zuo Zongtang, Zeng Guofan) and Prime Minister (Li Hongzhang).  In the twentieth century, three historians — two amateur (Theodore Roosevelt, John F. Kennedy) and one professional (Woodrow Wilson) — were elected President of the United States.

    Throughout the human experience, then, history has been recognized as a discipline with special applications to everyday life, and as a study with particular relevance to politics.  Good government requires a knowledge of old government.  Good citizenship requires an understanding of people, which can be gleaned from a rendering of how they have acted.  A government cannot tell the truth about the present unless it understands what truth is — and history is the study of truth, of many truths for many people, of narratives woven through human souls.  Thus the historian’s natural role in the body politic is to focus the past like a beacon on the present, to arouse communal memory, and to mine her expertise for lessons relevant to contemporary life and governance.

    At its best, history is high, enlightened, a paean to the better gods of human nature; at its worst, however, it is institutional pablum, the product of an academic elitism so insular as to be wholly divorced from reality.  (See Francis Fukuyama’s The End of History for an extreme instance of an academic caught up in his own words in utter disregard of the ludicrousness of their meaning.)  Those plastic “presidential historians” whose faces adorn the television news — Doris Kearns Goodwin, Joseph Ellis, and the rest — are particularly egregious examples of a history bound to banalities out of fear of unpopularity.  After the 2004 election, TIME Magazine hosted a roundtable of six such historians and asked them for their educated impressions of George Bush’s presidency and of the recent vote.  The sum total of “intelligent” analysis from the six scholars — aside from Goodwin’s shameless plug for her upcoming book — was David Kennedy’s improbable claim that, because of the election results, the Religious Right should now be considered a mainstream political force.

    It was in a moment of bewildered outrage, after reading the article and discovering the vapidity of thought among these “leaders” in my chosen profession, that the idea for ProgressiveHistorians was born.  If the professionals are not utilizing their considerable talents to explicate our political and cultural realities, then the rest of us should do it for them.  Let the people speak; let the voiceless be heard.  Let every American have a forum in which to try to fashion from the indistinct clay of past experience a shining structure for the future.

    For the history of America cannot be written without its people.  It is no accident that the shelves of popular bookstores are filled with historical volumes penned by non-academically-trained writers, while there is nary a professional historian to be found there.  This is not to say, of course, that trained historians are not of value; in fact, many of us, myself included, are in training to join their ranks.  But to assign the telling of history as the sole prerogative of people whose names begin with “Doctor” is to separate America’s past from those who need it most — the people who are living it in the present.  Each of us, trained or not, should have the ability to analyze and consider our history, and to be taken seriously in the attempt.  At ProgressiveHistorians, we hope to provide that opportunity.

    America needs a new kind of history — one bold, open, and critically engaged with the present day.  A history that is open to all who seek to learn about themselves and to dialogue about their insights with others.  A history that never forgets that the purpose of the past is to inform the present so the future can be brighter.  The mission of ProgressiveHistorians is to aid in fostering that history.  With any luck, what we say and do here will have an impact on the American historical profession as well as on our collective understanding of our past, and will pave the way for a better, more enlightened tomorrow.


    Wednesday Open Thread

    October 6, 2007

    Originally posted by Ahistoricality on 05/06/07

    Jerry Falwell’s dead.

    In his honor, I reproduce below the fold the 26-point “Don’ts for Students” distributed in 1981 by the North Carolina Moral Majority. I did a quick look around and it doesn’t seem like these are on the internet anywhere else. So, my spouse and I did this as a dramatic reading tonight: it was in my spouse’s old braille files (thanks to an old friend who’d brailled it) so the quickest way to get it onto the computer was to read it out loud to me. Fun, too.

    Don’ts for students.

    1. Don’t get into science-fiction values discussions or trust a teacher who dwells on science fiction in his/her “teaching.”

    2. Don’t discuss the future or future social arrangements or governments in class.

    3. Don’t discuss values.

    4. Don’t write a family history.

    5. Don’t answer personal questions or questions about members of your family.

    6. Don’t play blindfolded games in class.

    7. Don’t exchange “opinions” on political or social issues.

    8. Don’t write an autobiography.

    9. Don’t keep a journal of your opinions, activities and feelings.

    10. Don’t take intelligence tests. Write tests only on your lessons. Force others to judge you on your own personal achievement.

    11. Don’t discuss boy-girl or parent-child relationships in class.

    12. Don’t confide in teachers, particularly sociology or social studies and english teachers.

    13. Don’t judge a teacher by his/her appearance or personality, but on his/her competence as a teacher of solid knowledge.

    14. Don’t think a teacher is doing you a favor if he/she gives you a good grade for poor work or in useless subjects.

    15. Don’t join any social action or social work group.

    16. Don’t take “social studies” or “future studies.” Demand course definition: history, geography, civics, French, English, etc.

    17. Don’t role-play or participate in socio-dramas.

    18. Don’t worry about the race or color of your classmates. Education is of the mind, not the body.

    19. Don’t get involved in school-sponsored or government-sponsored exchange or camping programs which place you in the homes of strangers.

    20. Don’t be afraid to say “no” to morally corrupting literature, games and activities.

    21. Don’t submit to psychological testing.

    22. Don’t fall for books like “Future Shock,” which are intended to put readers in a state of panic about “change” so they will be willing to accept slavery. Advances in science and technology don’t drive people into shock. It is government and vain-brain intrusions in private lives, which cause much of the unbalance in nature and in people.

    23. Don’t get into classroom discussions which begin: What would you do if….? What if….? Should we….? Do you suppose….? Do you think….? What is your opinion of….? Who should….? What might happen if….? Do you value….? Is it moral to….?

    24. Don’t sell out important principles for money, a scholarship, a diploma, popularity or a feeling of importance.

    25. Don’t think you have to associate with morally corrupt people or sanction their corruption just because “society” now accepts such behavior.

    26. Don’t get discouraged. If you stick to firm principles, others will respect you for it and perhaps gain courage from your example.

    Something tells me that he wouldn’t have approved of blogging…. [crossposted at Ahistoricality]

    What’s on your minds?


    Monday Open Thread

    October 6, 2007

    Originally posted by Ahistoricality on 07/23/07

    I’ve mentioned the Historical Society’s Historically Speaking before: Pretty high-powered folks often participate in the forums and contribute articles, and I’ve come to take it seriously as part of my “continuing ed” efforts to keep up with my field.

    But the latest issue contains a somewhat troubling juxtaposition which raises questions about its editorial independence and quality. I’ve reproduced two pages (click on the image for detailed view), because they’re not on the HS website.

    There is an article there by Robert Self, author of a new biography of Neville Chamberlain; it is a very interesting restatement of the conventional and revisionist narratives of Chamberlain’s career (Self is mostly a revisionist, as I read it, though he casts himself as something new and different). An article by the author of a new book is nothing new or unusual: HNN does it all the time, HS has done it before, many journals will publish article-length versions of monographs. There’s also a half-page advertisement from Ashgate for their new biography … of Neville Chamberlain… by Robert Self. Now advertising is not a new thing for academic journals, particularly advertising by publishers, but I’ve never seen the advertising attached to a review or to an extract like this one.

    To make it more interesting, it’s not just a matter of placing the ad and the article together: there’s a 20% discount code for HS readers in the ad which is repeated in Self’s biographical blurb at the end of the article. So the ad placement was done as part of the editorial process, raising the question of editorial independence. With any advertising there’s a question of influence, but academic publications — even more so than news organizations — are supposed to maintain a strong separation between content and economic decision-making.

    I guess the question would have to be which came first: the decision to publish the article, or the advertising?

    What’s on your mind?


    Monday Open Thread

    October 6, 2007

    Originally posted by Ahistoricality on 08/20/07

    I was reading an interview with Kevin R. C. Gutzman, an early American and constitutional historian of some note and considerable conservative views. He’s written the Politically Incorrect Guide to the Constitution, which, like most of the genre, is likely to be a combination of half-truths, deeply partisan positions, highly selective interpretations and absurdly cherry-picked facts and “facts.”

    His principal argument seems to be that Marbury v. Madison, which established the principle of judicial review, was contrary to the spirit and intent of the Constitution:

    “Constitutional law,” the product of judicial review, is not really law at all, but the judges’ whims gussied up in a legalistic argot. It is, if we understand the real Constitution as being the one the people actually ratified, in the sense they ratified it, absolutely anti-constitutional.

    FP: How did such an anti-constitutional way of government become normalized?

    Gutzman: With the elimination of the centrifugal pressures on the federal system provided by the threats of nullification and secession in the nineteenth century and the elimination of state governments’ role in selecting senators in the twentieth, the way was open for the federal government to claim authority over virtually all political issues.

    The chief problem, it seems to me, is that although judicial review was said by the Constitution’s proponents in some states to be among the powers federal courts were intended to have — and thus is legitimate — the people were not told that it would be exercised by federal courts over state statutes. They certainly were not told that under the title of a “living, breathing” constitution, the federal courts would be empowered to disallow enforcement basically of any state statute they disliked. They also were not told that the federal courts would effectively write the Tenth Amendment federalism principle out of the Constitution, thus allowing Congress to do more or less anything it wanted. Far from it! In fact, they were told the opposite, and, as James Madison noted in response to McCulloch v. Maryland (1819), if the people had known in 1787-88 how the courts were going to remake the Constitution through “interpretation,” they would never have ratified it.

    He then goes on to suggest that the power of the federal courts be “reined in,” which, even if I were buying his bill of goods, would have broken the deal.

    He seems to be trying to balance states’ rights against the Federalist position (which is dubious, but let’s go with it), but the fundamental problem with regard to states’ rights isn’t the courts, but Congress, and — to a greater extent now than ever before — the Executive-as-national-daddy. Reducing the role of the courts at a time when Unitary Executive theories are alive in the Administration is a recipe for disaster, but it got me thinking a bit about balance. Thus, the quiz (and explain yourself in comments, please).

    What’s on your minds?


    The Blogger and the Libertarian: Constitutional History Death-Match

    October 6, 2007

    Originally posted by Ahistoricality on 09/06/07

    A few weeks ago I made note of an interview with Prof. Kevin Gutzman in which he argued that the principle of judicial review was fundamentally unconstitutional:

    The chief problem, it seems to me, is that although judicial review was said by the Constitution’s proponents in some states to be among the powers federal courts were intended to have — and thus is legitimate — the people were not told that it would be exercised by federal courts over state statutes. They certainly were not told that under the title of a “living, breathing” constitution, the federal courts would be empowered to disallow enforcement basically of any state statute they disliked. They also were not told that the federal courts would effectively write the Tenth Amendment federalism principle out of the Constitution, thus allowing Congress to do more or less anything it wanted. Far from it! In fact, they were told the opposite, and, as James Madison noted in response to McCulloch v. Maryland (1819), if the people had known in 1787-88 how the courts were going to remake the Constitution through “interpretation,” they would never have ratified it.

    I went on to say

    He seems to be trying to balance states’ rights against the Federalist position (which is dubious, but let’s go with it), but the fundamental problem with regard to states’ rights isn’t the courts, but Congress, and — to a greater extent now than ever before — the Executive-as-national-daddy. Reducing the role of the courts at a time when Unitary Executive theories are alive in the Administration is a recipe for disaster, but it got me thinking a bit about balance. Thus, the quiz (and explain yourself in comments, please).

    Last week, the distinguished Prof. Gutzman, author of The Politically Incorrect Guide to the Constitution (which I also maligned without even looking to confirm that it was published by Regnery…. yup), struck back:

    Prof. Gutzman starts off with a solid shot: I haven’t read the book! True…. but irrelevant. I was commenting on the interview. I mentioned the book to give some context, and if anyone else wanted to follow that dead-end argument, they’d have my blessing. (But use the sidebar link to Amazon, so NP gets a cut!)

    Prof. Gutzman then takes what is, unbeknownst to him, his best shot: I got the case wrong! Though he acknowledges that “Judicial review … is usually associated with the Supreme Court’s decision in Marbury v. Madison (1803),” he goes on to point out that the 1803 decision concerned federal law, while his real target is “Fletcher v. Peck (1810), in which the Court claimed authority to review state laws for ‘constitutionality.'” In other words, as I correctly noted, he’s concerned with states’ rights (and apparently opposed to “‘rights’ to abortion, homosexual sodomy, one man-one vote, Miranda warnings, secular schools, etc.”).

    I’ll give Prof. Gutzman credit for one thing: Chutzpah. I don’t know much about early American history (more than my students, mostly, but less than most Americanists), but he actually argues that the Federalists — the gentlemen who brought us the Alien and Sedition Acts — were states’ rights advocates in the Constitutional Convention and were betrayed by the clarification and extension of federal authority in the early 19c. To say that Prof. Gutzman is an originalist might be understating the case. I have a sneaking suspicion that he prefers the original Articles of Confederation anyway, but that ship sailed.

    Prof. Gutzman concludes by questioning my priorities, noting that “If he [that’s me!] dislikes lawless executives, well, lawless courts are not the answer; for judges to usurp state legislative authority is not to return to respecting constitutional limits on presidential power.” I’ll grant him that…. no, actually, I won’t. I don’t accept the premise: that every federal review of state law in the last two centuries has been illegitimate (and, by extension, that the Bill of Rights actually has no force within state boundaries). The “constitutional limits” Prof. Gutzman wants to “return” to would be a recipe for chaos that would make Balkanization look like a step up.

    I’ll grant that the powers of the presidency are out of control, mostly because this presidential administration is out of control. I’ll grant that the powers of Congress are very broad, and overwhelm state legislatures in some objectionable ways. In principle, I’m in favor of the laws which give citizens (not corporations) the most rights taking precedence in any given situation; the Constitution and Congress can set minimum rights, but should not be able to prevent states from giving their citizens more rights, so long as basic Constitutional protections aren’t violated in the process.

    Prof. Gutzman has written a few pieces on similar subjects for his libertarian allies. Like most libertarians, he seems to have a good eye for abuses of authority, but the literalism of his legal theories and tendentious political history are off the charts.


    Monday Open Thread

    October 6, 2007

    Originally posted by Ahistoricality on 09/10/07

    Over the past couple of months, the sidebar polls have been an interesting experiment, and it’s time to look at the results. The first poll we did asked you to pick the world leaders (i.e. leaders of individual nations) about whom you had a positive opinion. Thirty of you voted, for a total of 96 selections. The second poll asked you to pick the world leaders about whom you had a negative opinion. Again, thirty people voted, but with a total of 166 selections. On average, you have stronger negative feelings about world leaders than positive ones; that’s the first conclusion. Now let’s look at the tabulated results: I’ve inverted the positive vote results, so they should, if we are reasonably consistent, be very similar lists.

    Negative Opinion (worst at top) ……. Positive Opinion (worst at top)
    George W. Bush (United States) 25 votes ……. George W. Bush (United States) 0 votes
    Nicolas Sarkozy (France) 15 votes ……. Laurent Kabila (Democratic Republic of Congo) 0 votes
    Stephen Harper (Canada) 14 votes ……. Felipe Calderon (Mexico) 0 votes
    Fidel Castro (Cuba) 13 votes ……. Susilo Bambang Yudhoyono (Indonesia) 0 votes
    Hu Jintao (China) 13 votes ……. Nicolas Sarkozy (France) 1 vote
    Laurent Kabila (Democratic Republic of Congo) 11 votes ……. Stephen Harper (Canada) 1 vote
    Felipe Calderon (Mexico) 11 votes ……. Hu Jintao (China) 1 vote
    Shinzo Abe (Japan) 10 votes ……. Shinzo Abe (Japan) 2 votes
    Hugo Chavez (Venezuela) 10 votes ……. Vojislav Kostunica (Serbia) 2 votes
    Thabo Mbeki (South Africa) 7 votes ……. Thabo Mbeki (South Africa) 3 votes
    Vojislav Kostunica (Serbia) 6 votes ……. Roh Moo-Hyun (South Korea) 3 votes
    Gordon Brown (United Kingdom) 6 votes ……. Fidel Castro (Cuba) 4 votes
    Nestor Kirchner (Argentina) 4 votes ……. Angela Merkel (Germany) 7 votes
    Angela Merkel (Germany) 4 votes ……. Nestor Kirchner (Argentina) 7 votes
    Romano Prodi (Italy) 4 votes ……. Romano Prodi (Italy) 9 votes
    Evo Morales (Bolivia) 4 votes ……. Evo Morales (Bolivia) 10 votes
    Susilo Bambang Yudhoyono (Indonesia) 3 votes ……. Hugo Chavez (Venezuela) 11 votes
    Roh Moo-Hyun (South Korea) 2 votes ……. Gordon Brown (United Kingdom) 11 votes
    “Lula” Da Silva (Brazil) 2 votes ……. “Lula” Da Silva (Brazil) 12 votes
    Jose Luis Rodriguez Zapatero (Spain) 2 votes ……. Jose Luis Rodriguez Zapatero (Spain) 12 votes
    Total votes: 166 ……. Total votes: 96
    Total voters: 30 ……. Total voters: 30

    Analysis below the jump:

    First, of course the caveats. Nonpartisan selected the leaders without any idea that I was going to turn this into an event; if your favorite, or least favorite, leaders are missing, chalk it up to the vagaries of fate.  If it’s not blazingly obvious at this point, this isn’t a “scientific” poll; it barely qualifies as a “pseudo-scientific” poll, but I’m going to look at the results anyway. Historians are trained to handle incomplete, biased and otherwise flawed data.

    The most obvious thing is to look for anomalies, leaders who are in significantly different positions on the list. For control, I’ll just note that a lot of leaders ended up in pretty similar positions: Bush at the top (least liked) and Da Silva and Zapatero at the bottom (most liked). Merkel and Kirchner, Prodi and Morales, and Mbeki and Abe all occupy similar positions on both lists. A few Asian leaders should just be considered “obscure,” with very few positive or negative votes: Roh of South Korea and Yudhoyono of Indonesia.

    The biggest anomalies that I can see are Fidel Castro and Hugo Chavez. Castro got very strong negatives, but mid-range positives; Chavez got strong positives, but mid-range negatives. Gordon Brown of the United Kingdom also got strong positives and mid-range negatives; since he just replaced Blair, I imagine that’s mostly what’s registering.

    Your thoughts?

    What’s on your minds?


    FOIA: Declaration of Independence

    October 6, 2007

    Originally posted by se portland on 09/02/07

    As a class project, fifth-grader Cecile Lipscomb submitted a Freedom of Information request to the White House for a copy of the Declaration of Independence. Now 17, Cecile shared with me the document he just received.

    View my reconstruction of the heavily redacted Declaration of Independence Cecile finally got from the White House below the fold.

    [cross posted at Daily Kos]

    IN CONGRESS, JULY 4, 1776
      The unanimous Declaration of the thirteen united States of America

    When in the Course of human events it becomes necessary for one people  to assume among the powers of the earth,  the Laws of  God  requires that they should

    We hold these truths to be self-evident,  their Creator with certain unalienable Rights, that among these are  to secure these rights, Governments are instituted  – That whenever  Government becomes  the Right  and to institute  Government, laying  its powers in such form, as to them shall seem most likely to effect their Safety and Happiness. Prudence, indeed, will dictate that Governments long established should not be changed for light  and to provide new Guards for their future security. – Such has been  Systems of Government. The history of the present  is a history of repeated  Facts  submitted to a candid world.

    Laws, the most wholesome and necessary for the public good.

    pass Laws of immediate and pressing importance,  to attend to them.

    pass other Laws for the accommodation of large districts of people,

    He has called together legislative bodies  purpose  compliance with his measures.

    He has  manly firmness

    He has  to be elected,  exposed to all the dangers of invasion from without, and convulsions within.

    He has endeavoured  Laws for Naturalization of Foreigners;

    He has  Administration of Justice by  Laws for establishing Judiciary Powers.

    He has made Judges

    He has erected a multitude  substance.

    He has kept  Standing Armies with the Consent of our legislatures.

    He has affected  the Military  superior  Power.

    He has combined  a jurisdiction  to our constitution, and acknowledged by our laws; giving  Assent to their Acts of pretended Legislation:

    For  armed troops

    For protecting  the Inhabitants of these States:

    For  our Trade with all parts of the world:

    with our Consent:

    For transporting us beyond Seas

    For  the free System of  Laws in a neighbouring Province, establishing therein a government,  so as to render it at once an example and fit instrument

    For  our most valuable Laws  fundamentally the Forms of our Governments:

    invested with power  in all cases

    He has  by declaring us out of his Protection and waging War against us.

    He has  our seas,  our coasts,  our towns, and  the lives of our people.

    He is  Head of a civilized nation.

    our fellow Citizens  become  their friends and Brethren,

    excited domestic insurrections amongst us,  is an undistinguished destruction of all ages, sexes and conditions.

    In every stage  whose character is thus marked by every act which may define  the ruler of a free people.

    Nor have We been wanting in attentions to our British brethren. We have warned them from time to time of attempts by their legislature to extend a warrantable jurisdiction . We have reminded them of the circumstances of our  justice and magnanimity, and we have conjured them by the ties of our common kindred  They too have  the voice of justice and of consanguinity. We must, therefore, acquiesce in the necessity,  War,

    We, therefore,  do,  declare,  Allegiance to  the State  ought to be total and  have full Power to levy War, conclude Peace, contract Alliances, establish Commerce, and to do all other Acts and Things which Independent States may of right do. –  with a firm reliance on the protection of Divine Providence,


    Of Poetry and PTSD: Musings on World War I

    October 6, 2007

    Originally posted by aphra behn on 09/24/06

    I can’t bear to write history this week. I’ve been spending quite a bit of time e-mailing a former student of mine, who is currently serving in Afghanistan with  the U.S. Army.  He should be in graduate school, but stop-loss interfered with that. He’s not optimistic about how things are going. From his (admittedly limited) perspective, the U.S. presence isn’t helping much right now.  I don’t know if his perception is accurate, but this diary isn’t about that.

    It’s about  how  soldiers cope when they’re tasked with something impossible, hopeless. Perhaps foolhardy, and probably morally objectionable.  My student is treating his Afghanistan experience  as an academic problem (I told you he should be in graduate school). He’s documenting his experiences, writing about them, treating this as a sort of anthropological project, as it were.

    How have others coped in similar situations? Join me, if you care to, as I thumb through the poetry books on my shelf and consider the response of five writers—three British, one Canadian, and one American– to the brutal carnage of World War One.

     

    (Cross-posted at Daily Kos and The Next Agenda, a DKos-style blog for Canadian politics.)

    England, 1914. A surprisingly broad cross-section of English society joined up in an early wave of patriotism and optimism at the war’s commencement. Like their German counterparts, the jingoistic Brits were positive that the war would be over quickly, and that their cause was just, even noble.

    Twenty-seven-year-old Rupert Brooke, a published poet and academic, celebrated the war as a welcome challenge for British masculinity:

    “Peace”

    Now, God be thanked Who has matched us with His hour,  
    And caught our youth, and wakened us from sleeping,  
    With hand made sure, clear eye, and sharpened power,  
    To turn, as swimmers into cleanness leaping,  
    Glad from a world grown old and cold and weary,  
    Leave the sick hearts that honour could not move,  
    And half-men, and their dirty songs and dreary,  
    And all the little emptiness of love!

    Oh! we, who have known shame, we have found release there,
    Where there’s no ill, no grief, but sleep has mending,
    Naught broken save this body, lost but breath;  
    Nothing to shake the laughing heart’s long peace there  
    But only agony, and that has ending;  
    And the worst friend and enemy is but Death.
    —Rupert Brooke

    That poem always strikes me as something that might be written by certain warmongers today. But let’s give Brooke his due. He was no Yellow Elephant: he was commissioned as an officer in the Royal Navy and sailed for the Dardanelles in 1915. He apparently embraced his new identity as “The Soldier”:

    If I should die, think only this of me:
    That there’s some corner of a foreign field
    That is for ever England. There shall be
    In that rich earth a richer dust concealed;
    A dust whom England bore, shaped, made aware,
    Gave, once, her flowers to love, her ways to roam,
    A body of England’s, breathing English air,
    Washed by the rivers, blest by suns of home.

    And think, this heart, all evil shed away,
    A pulse in the eternal mind, no less
    Gives somewhere back the thoughts by England given;
    Her sights and sounds; dreams happy as her day;
    And laughter, learnt of friends; and gentleness,
    In hearts at peace, under an English heaven.
    —Rupert Brooke

    “The Soldier” was an incredibly popular poem from its first publication in The Times on April 24, 1915—the day after Brooke died (of sepsis) on the way to Gallipoli. Praised as an exemplary hero by no less than Winston Churchill, Brooke became a sort of emblem of self-sacrificing patriotism.

    American journalist and poet Joyce Kilmer penned this tribute to him:

    “In Memory of Rupert Brooke”

    In alien earth, across a troubled sea,
    His body lies that was so fair and young.
    His mouth is stopped, with half his songs unsung;
    His arm is still, that struck to make men free.
    But let no cloud of lamentation be
    Where, on a warrior’s grave, a lyre is hung.
    We keep the echoes of his golden tongue,
    We keep the vision of his chivalry.

    So Israel’s joy, the loveliest of kings,
    Smote now his harp, and now the hostile horde.
    To-day the starry roof of Heaven rings
    With psalms a soldier made to praise his Lord;
    And David rests beneath Eternal wings,
    Song on his lips, and in his hand a sword.

    —Joyce Kilmer

    Kilmer himself joined the American army in 1917. He was killed by a sniper at the Second Battle of the Marne, on 30 July 1918, aged 31. The French Republic posthumously awarded Kilmer the Croix de Guerre.

    The First World War grew increasingly deadly—and brutal—in 1915. The Second Battle of Ypres, for example, resulted in over 100,000 combined casualties. Although it would necessitate an entire diary to explore the reasons WHY that war was so deadly, the short version is this: the tactics did not keep up with the weaponry. What use is an infantry charge against a machine gun? It was an industrialized war, with automatic weapons, mass-produced chlorine gas, barbed wire, and many other weapons that could wreak incredible carnage. No one had ever imagined anything like it.


    The Second Battle of Ypres was especially fateful for the Canadian army. Later singled out for their bravery and incredible discipline, the CEF at Ypres suffered nearly 6,000 casualties among their 10,000-man force. The carnage prompted one Canadian military surgeon to pen what may be the best-known poem of the war.

    The first two stanzas of John McCrae’s “In Flanders Fields” are commonly recited every Remembrance Day:

    In Flanders fields the poppies blow
    Between the crosses, row on row
    That mark our place; and in the sky
    The larks, still bravely singing, fly
    Scarce heard amid the guns below.

    We are the Dead. Short days ago
    We lived, felt dawn, saw sunset glow,
    Loved and were loved, and now we lie
    In Flanders fields.


    McCrae, who had been a medical school professor before the war, wrote the poem after the death of his friend and former student, Lt. Alexis Helmer, killed at Ypres. No stranger to battle, McCrae had previously served as an artilleryman in the Second Boer War. Perhaps that accounts for the fierceness of the third stanza:

    Take up our quarrel with the foe:
    To you from failing hands we throw
    The torch; be yours to hold it high.
    If ye break faith with us who die
    We shall not sleep, though poppies grow
    In Flanders fields.              

    —John McCrae


    McCrae died of pneumonia on January 18, 1918, aged 45.

    The use of gas, at Ypres and elsewhere, was surely one of the most horrifying aspects of the World War I battlefield. It’s probably impossible for us to grasp the full horror of watching a comrade die from chlorine gas. I have met a few WWI vets who described it to me, and I’m sure it still fails my imagination. Perhaps the closest I can get to it is in the words of my favourite war poet, Wilfred Owen.

    He opens “Dulce et Decorum Est” with a vivid description of bone-weary soldiers, almost too tired to notice the gas canisters landing behind them:

    Bent double, like old beggars under sacks,
    Knock-kneed, coughing like hags, we cursed through sludge,
    Till on the haunting flares we turned our backs
    And towards our distant rest began to trudge.
    Men marched asleep. Many had lost their boots
    But limped on, blood shod. All went lame; all blind;
    Drunk with fatigue; deaf even to the hoots
    Of gas shells dropping softly behind.


    They have only a few precious moments to don their protective gear. But what of the man who is too slow?

    Gas! GAS! Quick, boys!- An ecstasy of fumbling,
    Fitting the clumsy helmets just in time;
    But someone still was yelling out and stumbling,
    And flound’ring like a man in fire or lime . . .
    Dim, through the misty panes and thick green light,
    As under a green sea, I saw him drowning.    

    In all my dreams, before my helpless sight,
    He plunges at me, guttering, choking, drowning.

    The conclusion should be obligatory DAILY reading for Rummy, Cheney, Bush, and all the cowboys so eager to go to war. Owen tells us what a man dying of gas really looks like:

    If in some smothering dreams you too could pace
    Behind the wagon that we flung him in,
    And watch the white eyes writhing in his face,
    His hanging face, like a devil’s sick of sin;
    If you could hear, at every jolt, the blood
    Come gargling from the froth-corrupted lungs,
    Obscene as cancer, bitter as the cud
    Of vile, incurable sores on innocent tongues, –
    My friend, you would not tell with such high zest
    To children ardent for some desperate glory,
    The old Lie: Dulce et decorum est
    Pro patria mori.
    —–Wilfred Owen

    For those of us who skipped Latin class, the last line translates as “It is sweet and fitting to die for your country.” Owen’s experiences, like those of many men, had worn away the Rupert Brooke-esque enthusiasm of the early years.

    But the horror of the trenches was not so easily communicated to the Home Front. Sympathetic family members often had a hard time coping with their battle-scarred sons and husbands. What we call “PTSD” was then known as “shell shock.” It was difficult for those who had not been to the front to understand why so many men’s minds simply stopped working in the face of the unremitting horrors of the trenches.

    I’ve never bought that it was “sweet and beautiful” to die for your country. But I’ve believed it might be useful, or  necessary.  I still have a few relics from my attempts to do so: a few webbed belts, a uniform hat stuck away in the closet, some insignia and pins jumbled in a jewelry box.

    And a recurring bad dream. It’s nothing so awful as Owen’s. It’s even funny in a “OMG! I’m naked in high school!” sort of way. Here goes:

    I’m polishing my dress uniform shoes, and they just won’t shine. I’ve tried every trick in the book and they look as dull when I finish as when I started. Then suddenly–in that fashion that only makes sense in dreams– I  realize that I’m polishing the wrong shoes. Horrors! I’m wearing my Service Dress Whites,  but I’m polishing my black shoes. I put down the shoes only to discover I’ve gotten black shoe polish all over my whites. I try to take them off, but I’m smearing the O&^(&^*&!!-ing black shoe polish everywhere.

    And then the bell comes, and I know I’m supposed to be— somewhere, I never know where. But it’s urgent. And (dream fashion) some unrelated person shows up to urge me on: my mother, FDR, my best friend from grade school. Last week, my ex strolled into my dream, demanding to know why I’d gotten shoe polish all over his uniform too. I looked, and sure enough, the “Canada” on his DEUs was completely obscured by shoe polish. What had I been doing to get shoe polish there?

    I’m sure the answer lies in my concerns about Afghanistan, where Canadian Forces are paying an especially heavy toll, and my concern that the American presence may actually be making things worse for everyone. My dumb  nightmare usually shows up when I’m completely conflicted about something. And I’m very, very conflicted about Afghanistan.

    So I read some Siegfried Sassoon. Talk about conflicted. The well-educated scion of a wealthy family, he joined up early on in the war. He blamed the Germans for the death of his brother in 1915, and went on to show insane bravery in the face of enemy fire–his nickname was “Mad Jack.” He was decorated for his exploits.
    Yet as the war went on, he found himself more and more appalled by its unending carnage—carnage which never seemed to gain more than a few feet of land. He drew a cutting portrait of  incompetent leadership in a few stark lines:

    “The General”

    ‘GOOD-MORNING; good-morning!’ the General said     
    When we met him last week on our way to the line.     
    Now the soldiers he smiled at are most of ’em dead,     
    And we’re cursing his staff for incompetent swine.     
    ‘He’s a cheery old card,’ grunted Harry to Jack           
    As they slogged up to Arras with rifle and pack.
        .    .    .    .     
    But he did for them both by his plan of attack.
    –S.  Sassoon

     He was equally scathing in his criticism of those who dismissed the suffering of shell-shocked men:

    “Survivors”
    No doubt they’ll soon get well; the shock and strain
    Have caused their stammering, disconnected talk.
    Of course they’re ‘longing to go out again,’ —
    These boys with old, scared faces, learning to walk.
    They’ll soon forget their haunted nights; their cowed
    Subjection to the ghosts of friends who died,—
    Their dreams that drip with murder; and they’ll be proud
    Of glorious war that shatter’d all their pride…
    Men who went out to battle, grim and glad;
    Children, with eyes that hate you, broken and mad.
    —Siegfried Sassoon

    Sassoon wrote those lines from Craiglockhart, a medical facility where he was sent in 1917 to recover from alleged shell-shock. In point of fact, Sassoon had publicly protested the war, and refused to return to it after his convalescent leave. He was only saved from court-martial by the “shell-shock” diagnosis. The words of his protest seem all too sadly relevant today:

    I am making this statement as an act of wilful defiance of military authority, because I believe that the War is being deliberately prolonged by those who have the power to end it. I am a soldier, convinced that I am acting on behalf of soldiers. I believe that this War, on which I entered as a war of defence and liberation, has now become a war of aggression and conquest. I believe that the purpose for which I and my fellow soldiers entered upon this war should have been so clearly stated as to have made it impossible to change them, and that, had this been done, the objects which actuated us would now be attainable by negotiation. I have seen and endured the sufferings of the troops, and I can no longer be a party to prolong these sufferings for ends which I believe to be evil and unjust. I am not protesting against the conduct of the war, but against the political errors and insincerities for which the fighting men are being sacrificed…
    –S. Sassoon

    In spite of his feelings about the war’s immorality, Sassoon chose to return to the front. His reasoning seemed to be that he was of much more use to his men at the front than he was protesting the war–protests that could be dismissed as delusional symptoms of his “neurasthenia” (the clinical term for shell shock).

    While at the Craiglockhart medical facility, Sassoon befriended Wilfred Owen, author of “Dulce et Decorum Est.” Owen was genuinely suffering from shell shock, the result in part of his experiences at the Battle of the Somme (one of the bloodiest battles in human history, with over a million casualties).

    Sassoon greatly influenced Owen, both personally and as a poet. It is thanks to Sassoon that Owen’s poetry was published. Despite his almost suicidal bravery and several wounds, Sassoon survived the war, and promoted Owen’s works (as well as his own) as testaments to the conflict’s folly.

    Of all these published works, it is Owen’s “Parable of the Old Man and the Young” that I found myself turning to most frequently this week. Using the story of Isaac and Abraham as a model, Owen offers an observation on the old men who send their young to die. It seems all too fitting today:

    So Abram rose, and clave the wood, and went,
    And took the fire with him, and a knife.
    And as they sojourned, both of them together,
    Isaac the first-born spake, and said, My Father,
    Behold the preparations, fire and iron,
    But where the lamb for this burnt-offering?
    Then Abram bound the youth with belts and straps,
    And builded parapets the trenches there,
    And stretched forth the knife to slay his son.
    When lo! an angel called him out of heaven,
    Saying, Lay not thy hand upon the lad,
    Neither do anything to him. Behold,
    A ram, caught in a thicket by its horns;
    Offer the Ram of Pride instead of him.  

    But the old man would not so, but slew his son,
    And half the seed of Europe, one by one.
    –Wilfred Owen


    Wilfred Owen returned to active service. He was killed in action on November 4, 1918, aged 25.

    The war ended on November 11, one week later.

    For Further Reading:
    An inexpensive collection of British war poetry is available here. The Penguin Collection of First World War Poetry is nicely comprehensive. A classic study by Paul Fussell, The Great War and Modern Memory, provides an academic look at the cultural impact of the war. I also like Jay Winter’s Sites of Memory, Sites of Mourning: The Great War in European Cultural History. For a general introduction to World War One, I like James Stokesbury’s affordable and readable text. For a less academic look at the experience of trench warfare (at least from a British perspective), try John Ellis’s Eye Deep in Hell.

    (All poems above are believed to be in the public domain.)


    BREAKING: Wonder Woman Runs for President (Ubergeeky MetaSnark)

    October 6, 2007

    Originally posted by aphra behn on 01/21/07

    By LOIS LANE with Peter Parker. (DC Press International.) Senator Diana Prince of Themiscyra, more popularly known as “Wonder Woman,” announced her candidacy today, provoking both interest and criticism from the Democratic blogosphere. She joins a slate of Democratic super-candidates who are getting tough reviews online.

    “I battled an ancient Gorgon to a standstill. I’ve defeated Hades himself. I’m a warrior with vast experience leading Amazons in battle, as well as a scientist and ambassador fluent in most of the earth’s languages. I’m an ex-demigoddess who embodies the principles of truth, love, and peace,” she noted in her statement. Senator Prince (whose website calls her merely ‘Diana’) also emphasized her credentials as a bloody-knuckled brawler. To demonstrate her divinely-granted powers, she tore down the entire press platform and rebuilt it at lightning speed, before fending off bullets with her amazonium bracelets.
     

    Republicans were unimpressed. “Yes, she has led vast armies of superpowered Amazons and saved the earth multiple times, but…I simply don’t picture her as a Commander-in-chief. Can you?  She just doesn’t seem, you know, very tough. She’s always on about peace and understanding…ha! …ha!…ha!” chuckled official White House spokesperson the Penguin. “It’s not that she’s a woman, of course,” he quickly added. “Nothing to do with that at all, ha.. ha… ha…! In these times, we just need someone tougher than a hot babe blessed with the strength of the earth itself. Notwithstanding that, we wish her all the best….ha…!ha…!ha..!”

    Democratic-leaning blogs quickly tagged Senator Prince a “hawk” and questioned her credentials. Poster *bluebeetlelives* at DailyJusticeLeague wrote in a blazing front page discussion: “Truth? Peace? Love? She has NO credibility, NONE, ever since she murdered Maxwell Lord. Oh, I know she says she was saving the earth from certain doom, yada yada yada…but, really.”

    Others were troubled by the idea of White House royalty, noting that Senator Prince’s mother was once Queen of the Amazons. “This is the United States, not some crackpot little utopia,” wrote poster *formercheckmateagent*. There were also questions about her fervently expressed devotion to the ancient Greek religion. “How convenient, now that Governor Thor has started to explore a run, that she suddenly starts talking about her ‘demigoddess’ status and making references to her relationship with the Greek pantheon. It’s like she’s trying to compete with him for the Pagan vote, and I think it’s disgusting, calculated pandering,” blazed poster *iluvelogan*.

    Some voices were supportive, but questioned her media reception. “Her ideas are amazing-I mean, she’s a master of all known human philosophy, ancient and modern. But the press just wants to talk about her clothes!” mused user *helenawayne*. “How many articles will we read about her star-spangled granny panties? Look at the way the press treats Speaker Black Canary. Senator Superman prances around in tights, and no one says a word. But let a woman show up for work in fishnet stockings and a black bustier, and all anyone can talk about is her appearance. It’s ridiculous.”

    Wonder Woman can take some comfort in the fact that her closest rival, Senator Superman (D-Smallville), has also received mixed reviews from the blogosphere.

    “He’s young, and charming, and from the Midwest. He’s inspirational, has old-timey values. And the smile! The abs of steel-literally!” gushed *lanalangwannabe* at SuperfriendsUnderground. “I think he could be this generation’s Captain America. I really do.”

    “Ridiculous,” scoffed *anglemanwasframed*. “Look, I  know that Senator Kent has saved the world from certain doom and found a cure for cancer. I realize that he has the power to turn the earth backwards on its axis and revive the dead.  But the South will never accept him. He’s an illegal alien-literally. I mean, when you arrive by meteor, you don’t go through INS.  You KNOW the Republicans are going to make out that he’s really just  here freeloading on our healthcare system.”

    Several bloggers pointed to recent smear jobs by InJustice News, which claimed that Superman has an undisclosed ability: X-ray vision. He has been very careful not to let the world know that he sees everyone in the nude. While the “secret ogler” label has yet to stick, it seems that his real name may also cause him problems. InJustice News anchor Black Manta recently questioned whether “Kal-El” betokened a connection to known terrorists El-Kelda, and the phonetic similarity has some Dems spooked.

    “Supes is truly the man for me! He’s probably the perfect human being,” mused *krypto* at blog Xpeople. “But I just don’t know if Americans can get past his funny name.”
    All of the front-running superheroes come in for tremendous criticism, and no favorites have emerged.

    *Batman*: Senator Wayne has expressed serious interest, but Dems are not eager to let him off the hook for his 2004 loss to the Joker. “He was up against the Joker-the friggin’ JOKER!-and he BLEW IT! He’s just too cold, too distant, and too darn rich. Let him stay up at Wayne Manor and experiment with his little devices to save the world. He ain’t gettin’ my vote. No way,” stated *blackcat* at WingoftheLegion.

    *Robin*: Batman’s former running mate, Robin, now going by Nightwing, is an early favorite for his boyish good looks, nifty black uniform, and passionate advocacy for the victims of supervillain crime. He too has his critics. “Oh come on. *Nightwing*? He tries to step out of Batman’s shadow and reinvents himself as *Nightwing*? Lame, lame, lame. He’s the same old Robin-lite, spoiled rich boy trying to relate to the ordinary man on the Gotham street. Fake, fake, fake,” hissed *princessorono*.

    *Professor X*: Even those not apparently in the running still draw well-wishers…and harsh critics. Professor Xavier has emerged with new credibility since his loss to the Joker in 2000, but says he is not running in 2008. His “eggheaded” intellectualism about Global Mutation now seems almost psychic in light of the rising Mutant birthrate. President Joker continues to cast doubt on Global Mutation, and has been known to release laughing gas into the White House press chamber when asked about the issue. Yet polls show more and more Americans support Professor Xavier’s position. Republicans dismiss this as “mind control,” but Xavier’s fans in the blogosphere beg him to run.

    *The Hulk and Sgt. Rock*: Rivalry between the Hulk and General (formerly Sgt.) Rock engulfs FantasticHQ, where Dr. Banner’s fans still see his volatility as a strength. “I think that a freakishly strong green giant wearing ridiculously tiny purple pants is exactly what America needed in 2004. And it’s what we need today,” mused *icemancometh*. “It’s not just a chemical reaction caused by government experimentation-it’s passion.” General Rock’s supporters, meanwhile, are still begging him to run despite doubts about his charisma and political acumen. As one *xanderharris* put it: “It’s still all about security in 08. So how can we afford to throw away the only guy who’s been in continuous action since World War II? He may be a little rough around the edges, sure, but come on. He’s come back from the dead, people. Now that’s staying power!”

    But if one fact can cheer Dem fans in the blogosphere, it’s that the Republican front-runners are so slow to appear. While Secretary of State Catwoman is an inside-the-Beltway Republican favorite, there are many concerns about her appeal to base voters. As *neronguy* at BlogofEvilMutants put it: “Let’s just say it: unmarried dominatrix. It will never play in the heartland. Although I personally love it…her long, leather-clad legs, her cruelly spiked boots…the way she flicks the whip just so…uh, I gotta go.” 

    Reached for comment in his secret lair, Vice President Lex Luthor explained that his party was unconcerned with the current lack of candidates. “Frankly, we do have candidates. We’ve just been feeding them one by one to the Great Cthulhu (R-R’lyeh). As one of the Great Old Ones, he has designs on devouring every single human he encounters. That’s a narrative that really resonates with our base. Destruction, raw power, devouring worlds-what can I say? He polls well.”

    Meanwhile, since super-powers are not enough to impress the Democratic blogosphere, a new online movement seeks out super-*natural*-powers. The website http://www.DraftJesus08.com has attracted considerable attention. Its founder, known only as *pauloftarsus*, says, “It’s a long shot, admittedly. He’s single and a marginally employed construction worker. But as I was going over the disgruntlement with our current slate, I kept thinking, ‘What do they want? The friggin’ SON of GOD’? And then I thought: HELL YES!” There is as yet no response from Heaven about the website, although reportedly the Cherubim and Seraphim are organizing an exploratory committee.
    Crossposted at The Next Agenda and Daily Kos.
    All images in public domain and obtained from the Art Renewal Center. Minerva and the Combat of Mars and Minerva by Jacques-Louis David. Hercules by John Singer Sargent. Hell (cropped) by Hieronymus Bosch. Knight of the Holy Grail by Frederick Judd.
    Text copyright the author and may not be used without permission.


    PTSD and the Myth of WW II

    October 6, 2007

    Originally posted by aphra behn on 03/12/07

    Another myth of good wars versus bad wars is that only the combat veterans from Vietnam suffered lasting adjustment problems; the 1945 vet came home to enjoy prosperity, satisfied with a job well done, and with few qualms about the war…But some suffered an anguish that damaged their lives and that of their families. For some, the stress continues even today.-Michael C.C. Adams, The Best War Ever: America and World War II

    When do we let go of the myth that only in “bad” wars do combat veterans suffer from mental wounds? When do we let go of the idea that only weak people are affected by the overwhelming mental stress of combat? Because that myth is killing America’s young veterans today, as witnessed by Ilona’s rec’d diary over at dKos.

    But history suggests that the justness of a war’s cause neither causes nor prevents PTSD; if it did, then the “Greatest Generation,” fighting in the Second World War, would have had no problems, right? Yet they did. Below the fold is a look at how PTSD affected combat veterans in “the Best War Ever.”

    Cross-posted at Daily Kos. Warning! Some disturbing images follow.


    *Awareness of Combat Stress*
    Ilona’s diary struck me because, not only have I been reading Adams’ book recently, I’ve also long been interested in the way men reacted to combat stress in World War I. I’ve diaried about PTSD and WW I, but perhaps it bears noting again that “shell shock” was widely documented by the time the Second World War broke out, thanks to the horrors of the previous conflict. While imperfectly understood at the time, the strange sickness that afflicted many combat veterans had been noted by military physicians-stuttering for some, silence for others. Panic attacks at cars starting or at fireworks. A slow retreat into depression and alcohol abuse. Suicidal thoughts and actions. All the signs were there.

      But since U.S. involvement in the war was so late and so limited, perhaps it is understandable that by the time the Second World War rolled around, “shell shock” wasn’t something most soldiers or their families were familiar with. Besides, large numbers of American military personnel never saw combat in WW II.

    Of sixteen million military personnel, 25 percent never left the United States, and less than 50 percent of those overseas were ever in a battle zone.-Adams, p. 70

    So if I’ve done my math correctly, only 35-40% of American troops were ever even in a combat zone. That’s still a lot, of course, but it also means that many American civilians might not actually encounter any of the men (much less the handful of women) who had been near combat. Those who were in combat zones, though, were destined to remain overseas for most of the war, with little respite or rest-a terrible formula for their mental health.

    *The Battlefield Reality*
    At induction, the Army classified men as clerical, technical, or infantry. If classified as infantry, you stayed in the infantry. There was little hope of being rotated out unless you were wounded, the war ended…or, of course, you were killed. Knowing little of this, American civilians at home tried to believe that not only was this war morally justified (which I believe it was) but it was also psychologically “clean,” even when the battlefields were dirty.

    But it wasn’t. Try to imagine YOUR reaction to D-Day, not Hollywood’s D-Day but the one recorded by Ernie Pyle and recovered in interviews by Studs Terkel:

    …for a mile out, the coast was littered with shattered boats, tanks, trucks, rations, packs, buttocks, thighs, torsos, hands, heads… Hostile fire swept the beach, creating more confusion and casualties among the men, who naturally went to earth in the face of such carnage…Timuel Black, an African-American GI, recalled that on Utah Beach on D-Day there were “young men crying for their mothers, wetting and defecating themselves, others tellin’ jokes…” …To make a shattered naval officer take his men within wading distance of the shore on D-Day, Elliott Johnson had to shove his pistol in the man’s mouth and order his every movement-Adams, 101

    Adding to the mental confusion were the rapidity, noise, and disjointed experience of modern warfare. Bullets and shells came from all directions, even from one’s own lines (friendly fire was a problem in World War II as it still is today). In the jungles, snipers were as deadly and invisible in WW II as they were in Korea or Vietnam. In the 1980s, one of my mother’s doctoral professors mentioned that birdsong still made him tense. The reason? Japanese snipers imitated birdsong as a means of communication. Forty years later, a simple, benign walk in the woods could leave him uneasy, with a racing heart and a knot of dread in his stomach.

    But as bad as snipers and bullets were, the shells may have been worse. Adams notes that 85% of the casualties in the Second World War came from shells, bombs, and grenades; bullets accounted for only 10%. Body armor and helmets provided little protection from many of these weapons. Incoming fire came quickly and vanished just as quickly; battles seldom had a neat climax in which the platoon knew they had “won”; rather, the fighting simply receded, leaving adrenaline, fear, and confusion in its wake. Men remained alive, some untouched, some suddenly and senselessly maimed. It all happened so fast.

    *Mangled Bodies, Disappeared Men*
    All too often, a friend or comrade was simply no longer there, save for a few pieces of what had once been a man:

    At one burial, the only recognizable parts were a scalp and a rib cage…Reporter Martha Gellhorn, examining a Sherman tank that had taken a direct hit from a German 88-mm shell, saw only “plastered pieces of flesh and much blood.” There were seventy-five thousand missing in action (MIAs) in World War II. Most had been blown into vapor. A WAC who assisted families coming to Europe to visit their relatives’ graves said, “I don’t think they know that in many cases, what remains in that grave. You’d get an arm here, a leg there.”-Adams, 105

    In films, death comes with nobility; cradled in his sergeant’s arms, the young recruit gets to gasp out a dying message of purpose: “We did it, Pops, didn’t we? We got them Nazis”(or “Japs,”  depending on the film).

    In reality, friends and comrades-people who might have laughed with you a moment before-were just as likely to be suddenly scattered into bits: flesh, blood, and shit spattered on the ground, the trees, one’s own clothing, smelling of charred meat and burning hair. Enough left to bury? If you were lucky.

    A tank officer found he was choking on bone fragments from his shattered left hand. A GI was killed by his buddy’s flying head, another by the West Point ring on his captain’s severed finger…A new phosphorus shell, developed in 1944, threw out pellets, which ignited with air to cause massive burns: one member of a forward observer team cracked up when two buddies, hit by friendly fire, flared up “like Roman candles.” “No more killing, no more killing,” he sobbed.-Adams, 107

    These are only the infantry stories. Although air and naval combatants fought under different conditions, they too saw horror: men burning to death in oil, slow deaths from exposure and quick ones from explosion and fire; men dying of oxygen deprivation because they had vomited into their masks, and more.

    Men of all services encountered civilians caught up in the terrible war: victims of bombings and strafings, people in the wrong time and place (for when war moved so quickly, how could one not be in the “wrong time and place”?). Sometimes they found evidence of massacres and horrors. Worse, sometimes those horrors had been dealt by one’s own gun, tank, plane, or ship. Wherever one saw death, it left scars, guilt, and mental conflict.

    *Mental Casualties*

    About 25-30 percent of WWII casualties were psychological cases; under very severe conditions that number could reach as high as 70-80 percent. In Italy, mental problems accounted for 56 percent of total casualties. On Okinawa, where fighting conditions were particularly horrific, 7,613 Americans died, 31,807 sustained physical wounds, and 26,221 were mental casualties.-Adams, 95

    There were naysayers-Patton is famous for twice hitting men in mental hospitals, calling them cowards. Some Americans could not believe or understand the depths of what this “good” war was doing to their brave men. A common charge was that men who broke down suffered from “mom-ism,” being overprotected by their mothers. (Sounds a lot like those who claim today that PTSD stems from the “feminization” of the military.) But this was contradicted by the Army’s own researchers:

    The charge that men who had failed in combat had grown up too much under the influence of women, tied to mom’s apron strings, was examined by the Stouffer team of army researchers. They found no evidence that psychiatric casualties had more protective or possessive mothers. There were, however, other clear reasons for combat stress. First, we teach children that killing is a sin. The better this lesson is learned, the more traumatizing will be the taking of life…
    At some level, the soldier was caught between competing and incompatible values: killing was both reprehensible and admirable. Similarly, he was the victim of conflicting loyalties: to be of service to his family, a man had to stay alive and provide for them; to be of use to his nation, he had to be willing to die. The tensions between those incompatibles produced nervous breakdowns.

    It is not a natural thing to kill others constantly.  It is not natural to be at constant risk of instant death oneself. Humans have achieved agriculture, cities, and technology in part because we have achieved long periods of peace. And even though we are animals, with an animal’s ability to kill when we believe it necessary, we now kill with mechanized force, dealing instantaneous and violent deaths not seen among the rest of the animal kingdom. In short, we are neither biologically nor culturally disposed to the realities of modern warfare.

    *Lasting Wounds*
    The veterans who came back from World War II were affected both physically and mentally in ways that their families could not understand. Penicillin and MASH units allowed many to live who would have died in earlier wars, but some families recoiled from the men with missing digits and arms, from the men so badly burned as to be unrecognizable. The fat of human flesh, like any other fat, melts under high temperatures. When that fat is located in the human face, it changes the features irrevocably. Someone wounded in this fashion will have the additional mental stress of dealing with other people’s reactions-for the rest of his life.

    Some men had mental wounds relating to specific combat situations:

    Men who had faced land mines couldn’t walk on grass for years afterwards. One pilot had to pull off the road when the thwacking of his tires on the concrete joints reminded him of the sound of flak over Germany. Another dived under his in-laws’ table when planes buzzed overhead.-Adams, 149

    Why do we think these reactions were limited to the men of Vietnam? Maybe because in the 1940s and 1950s, it was considered culturally inappropriate for men to talk about feelings with anyone. Not a wife, not a priest, not a “shrink.” A later generation was somewhat less reticent in expressing itself, so those on the outside might claim there was something “special” about Vietnam. Its men suffered because they were weak, or were moral cowards, perhaps. Or because Vietnam was a “bad” war.

    Either way, many Americans politicized the suffering of Vietnam veterans, assuming it was related only to its time and place. While every war has its distinctive horrors (the experience of Agent Orange in Vietnam, for example), perhaps the Vietnam generation would have been treated more kindly had they been more aware of the suffering that their fathers, uncles and older brothers went through during the Second World War.

    *When does Normal Come?*
    But in 1945 and 1946, the United States wasn’t ready to deal with its combat veterans. Good Housekeeping told wives that men would cease their “oppressive remembering” in no more than three weeks. The Army’s opinion was that it would take about two months. No one expected it to take years. Some GIs sought and received counseling, but most simply assumed they should “get over it.” Thousands could not. How long does it take to forget the stench, taste, sounds, and sight of constant death?


    The divorce rate for veterans under 29 reached 1 in 29 in late 1945, as compared to a general rate of 1 in 60. Behind those divorces lay a myriad of problems, some of which related directly to the combat experience:

    Trying to repress feelings, they drank, gambled, suffered paralyzing depression, and became inarticulately violent. A paratrooper’s wife would “sit for hours and just hold him when he shook.” Afterward, he started beating her and the children: “He became a brute.” And they divorced.-Adams, 150

    How many of the wife-beating jokes that 1950s comedians told were a way of quietly acknowledging this ugly reality? How many families hushed the cries and ignored the bruises because they could not understand their causes? Reading about this, I think of the ubiquitous 1950s “cocktail hour” in rather a different light; the boozy Dean Martin fans and suave JFK look-alikes, drinking heavily because it was the only anesthetic that could wipe out the feelings of horror. Don’t forget that the fictional James Bond was supposed to be a vet. Throwing back martini after martini as he coldly kills for Queen and country, he never stays too long in one relationship, always doing what he must. No wonder Ian Fleming’s books sold so well. He offered a powerful redemptive myth for men struggling with all they had seen and done.

    For some the horrors receded, but how many other men never really left the prison camps and the battlefields? And not just the men. The handful of women who had been under fire faced similar problems; the female PTSD victim is not some issue unique to the conflicts in Iraq. As Elizabeth Norman details in We Band of Angels, the nurses of Bataan and Corregidor who had been interned in the Philippines found themselves both hailed as heroes and shunned as “tainted” women by their communities. If Americans could not talk about why their sons came back from the front shattered, how much less could they imagine that their daughters might also have suffered from shells, starvation, and strafing? In open-air jungle “wards” and from beachhead tents that looked out over the carnage of an industrial war, the nurses had seen sights that almost no woman of their era could relate to.

    *If Not Now, When?*
    PTSD is not a sign of individual weakness, nor is it a comment on the rightness or wrongness of a war. It’s not the fault of the “liberal media” for reporting on what war is actually like. It’s just a fact of modern combat.

    Mired in the myth of Vietnam as the “bad” war, some politicians seem to fear that acknowledging and dealing fairly with PTSD would amount to admitting fault in Iraq. That’s baloney. Even if Osama had been hiding out with Saddam and we had found WMDs, even if Iraq were now humming with the fairy-tale rebuilding we were promised, we would still have men and women coming home with PTSD. Whether they die for oil or die for “democracy,” in war, soldiers die. Horribly.

    This war seems hopeless and pointless. But that is not the sole cause of PTSD. Even the most moral combat seems hopeless and pointless when all that remains of your best friend is a couple of fingers. When death can rain down at any instant. When birdsong is dangerous. When you’ve seen and smelled piles of bodies rotting in dank jungle mud. When your legs were crushed by your own tank.

    Let’s look at the statistics (h/t Ilona). Let’s deal with the facts. Call your representatives, of whatever party, and ask them to support legislation that would give veterans decent care.

    After 60 years, isn’t it about time we stopped playing politics with PTSD?

    Image information: All photographs are believed to be in the public domain of the United States because they are the works of a U.S. Army soldier or employee, taken or made during the course of the person’s official duties. All photos and the poster are the work of the U.S. federal government, and therefore in the public domain. Photographs available from NARA and The Army Center for Military History. Poster from the Northwestern Library collection.


    The Mushy Middle: A Response

    October 6, 2007

    Originally posted by Bastoche on 06/12/07

    Nonpartisan posted two very stimulating essays on the Mushy Middle and Overton Windows over the weekend. To the second post I attached a comment on Nonpartisan’s characterization of the mushy middle, and he suggested I post it as a diary.

    Nonpartisan draws on James MacGregor Burns’ idea of a Jeffersonian two-party system, in which each party can confidently claim a dedicated portion of the electorate. In between is another portion of the electorate, the Vital Center, committed to neither party but open to voting for either.

    To Nonpartisan this center is no longer vital but a mushy and indecisive crowd of swing voters. Worse, even though the swingsters are incapable of choosing between the clear political alternatives flanking them, they control the political process since neither political party can win without them.

    By seeking in each cycle to win just enough of these voters to win the election, the Democrats willfully dilute their core progressive message and rush to occupy an inoffensive and incoherent center, thus alienating many of their own progressive adherents and reducing the political process to an exercise in opportunism and cynicism.

    I agree with Nonpartisan that the Democrats must commit to a message that is openly and consistently progressive. But I disagree with his characterization of the swing voter, and I think it’s important that we have a clear sense of who the middlers are and why they remain middlers.

    My description of the middlers is by no means complete. It could use some demographic data, further analysis of their ideological preferences and, of course, historical context. I might try to provide such analysis and context in future posts, but for now I’ll offer these few preliminary thoughts.

    So, who are these guys and gals?

    Many of these middlers (the mushy middle, the swing voter, the alleged independent) are middle- or working-class people who work forty to sixty hours a week and whose priorities are work/career and family/personal relationship. They are, like all of us, ideological, and their ideological preferences generally revolve around two sets of beliefs and ideals. One set has to do with their material goals and their Ideal of America, and this set connects with what a candidate puts forward as policy, domestic and foreign. The other set has to do with their own personal sense of honesty and integrity, and this set connects not just with a candidate’s policy but with a candidate’s attitude toward that policy: does the candidate adhere to a policy out of political expediency or inner conviction? 

    Middlers, though, usually do not take the time to articulate these ideological preferences and firmly connect them to a policy or a party or a candidate until the presidential campaign heats up and it’s time to vote. This is not being “mushy.” As I said, these middlers work forty to sixty hours a week. In their off hours they have to repair the leaky pipe in the basement, get the kids to and from school, catch the last innings of the home team’s game, give mom that overdue phone call, and get to the gym for a half hour on the treadmill before the dinner date at seven. And while they’re doing all this, they’re worrying about the mortgage and car payments, how to keep the kids from falling into drug or alcohol abuse, what to do if dad can no longer take care of himself, and, maybe, how to deal with a relationship or a marriage that is no longer working. Who can blame them if, in what little spare time they have, they have no desire to study the specifics of a candidate’s position but would rather veg out in front of the TV for an hour or two before dropping off to sleep?

    During the presidential race they will take more time to pay attention to politics. True, to a great extent their attention will be directed to soundbites and slogans, and they will bring to bear on those bites and slogans their ideological preferences: which candidate will help me meet my mortgage and car payments; which candidate will help me raise my kids; which candidate will honor my Ideal of what America stands for in the world, etc.

    This accounts for the proliferation of political consultants and in-house pollsters (and also for the decadence of modern punditry). Since these middlers vote according to what is worrying them now, the consultants say, let’s find out what those worries are and base our campaign on them. Let’s find out, through polling and focus groups, the two or three magic-bullet issues that will gain victory for us in this cycle. The result? Politicians who follow polls and focus groups, who stand for nothing consistent, who do not lead, or, as Nonpartisan puts it, who do not have the gumption to at least try to shift the debate in a more progressive direction (shift the Overton Window), come what may when the polls close.

    This, though, need not be the case, I think. As I said, these middlers also bring to bear on the bites and slogans preferences that come out of their basic sense of honesty and integrity: which candidate is following not a focus group but a consistent inner conviction; which candidate is genuinely concerned about my mortgage payments and my kids; which candidate has the capacity to listen to the other side without abandoning long-held principles; which candidate has a bone in his or her back that you can’t put your hand through; which candidate will lead.

    If a candidate consistently speaks from conviction, that candidate will appeal to the basic sense of honesty and integrity in many of these middlers. They might not finally vote for the candidate because the candidate adheres to policies that are just a little too progressive for them. But some of them just might because they’ve been leaning progressive and they admire this candidate’s consistent and unapologetic adherence to a progressive view. And those who don’t vote for the true progressive now just might next cycle, which is why, I think, we want someone who is willing now to speak from conviction on progressive matters, because at least some in the mushy middle aren’t all that mushy but simply need someone to articulate for them, with a calm and unfaltering conviction, viewpoints that they know are decent and fair.

    Edwards, for me, is right now that candidate. I think he’s speaking from an inner conviction and trying to do so consistently. He’s a long shot for the nomination (Hillary will probably wrap it up before most people pay much attention to the process), but while he’s campaigning, we’ll have the satisfaction of watching someone who is learning how to lead, even if he won’t be given the opportunity to do so. And if he’s not given the opportunity to lead, he’ll at least push the debate in the direction it needs to go and maybe, just maybe, make a few of those in the mushy middle a little less mushy next time around.


    plus c’est la même chose

    October 6, 2007

    Originally posted by Bastoche on 09/09/07

    The events of 9/11, according to Norman Podhoretz in the essay I discussed yesterday, did not change everything. What they changed was our perception of an enemy that had been developing, adapting, and preparing for the Long Struggle ahead. Prior to 9/11, many Americans had been willfully blind to the preparations our new enemy was making, and even after 9/11 some Americans remained unwilling to see what needed to be seen and do what needed to be done.

    Crossposted at dailykos

    Not so George W. Bush. The events of 9/11, Podhoretz claims, served him as a revelation: the terrorists hate the very core ideals that define us as a nation, freedom and democracy, and were on an implacable mission to destroy them. His mission, the mission that would define and vindicate his Presidency, now and forever, would be to protect freedom here and to promote it throughout the world and especially in the very heartland of the enemy, the Middle East. Proceeding from his new knowledge, both of the enemy and of himself, he crafted the elements of our new strategy: preemptive military action in order to change the evil regimes harboring and nurturing the terrorists. The Struggle would be a Long and difficult one, but 9/11 had clarified his vision and he would never deviate from his goal.

    Can we be confident, though, that his successor to the presidency will adhere to the doctrinal realism George W. Bush has established? If that successor is Rudy Giuliani we can. Rudy has taken on Norman Podhoretz as one of his foreign policy advisors, and in his September/October Foreign Affairs essay, Rudy shows us that he will indeed adhere to the precedents George W. Bush has established, though like any good pupil he will modify them as new challenges arise (those new challenges, I assume, being the ones posed by China and Russia). Like George W. Bush, Rudy wants to establish peace in the world. Like his mentor and advisor, Norman Podhoretz, Rudy understands that peace can only be established on the basis of a realistic assessment of the forces that oppose us in the world, forces intent on eradicating the ideals we espouse and embody.

    1. The Real, the Ideal, and the Post-9/11 World

    Rudy begins his essay by reminding us that all Americans were transformed by the events of 9/11 into a single “generation,” one compelled to confront “the first great challenge of the twenty-first century”: Radical Islam’s “assault on world order.” Responding effectively to that assault, however, demanded both courage and new thinking.

    Confronted with an act of war on American soil, our old assumptions about conflict between nation-states fell away. Civilization itself, and the international system, had come under attack by a ruthless and radical Islamist enemy.

    The old assumptions that fell away were those that, as Podhoretz has argued, held sway prior to the enunciation and implementation of the Bush Doctrine. One such assumption is that nations conduct war by means of their military arms. The events of 9/11 powerfully clarified for us that such need no longer be the case. On 9/11 we were attacked not by the military arms of a state, Afghanistan, but by members of a terrorist organization that it had fostered and to whom it gave safe harbor. Such support amounted to an act of war, and the government that supplied it had to be held accountable. We therefore responded with strength and will to the attacks of 9/11, invading the responsible nation, toppling its autocratic regime, and establishing in its place a democratic form of government. But, Rudy cautions us,

    this war will be long, and we are still in its early stages. Much like at the beginning of the Cold War, we are at the dawn of a new era in global affairs, when old ideas have to be rethought and new ideas have to be devised to meet new challenges.

    Many of those “new ideas” have already been “devised.” They comprise, as Podhoretz has shown us, the major elements of the Bush Doctrine and concern the relation between autocratic states and the terrorist organizations they harbor, aid, and abet. We must now steadfastly apply these new ideas to the foreign policy challenges that face us. “First and foremost” among those challenges, Rudy tells us, “will be to set a course for victory in the terrorists’ war on global order.” In addition to this primary challenge, we face two others: to “strengthen the international system” and to extend the political and economic benefits of that system “in an ever-widening arc of stability and security across the globe.” The tools we have to reach these goals are three: “building a stronger defense, developing a determined diplomacy, and expanding our economic and cultural influence.”

    Rudy thus seems to be charting a course between the realist school of foreign policy, which upholds diplomacy and economic exchange as the best ways to achieve stability among the members of the “international system,” and the idealist school exemplified by Reagan and George W. Bush and touted by Podhoretz, which sneers at stability and urges a resolute confrontation with evil backed by military force. Indeed, Rudy goes on to say that the two factions must be united if we are to face our challenges effectively. “Achieving a realistic peace means balancing realism and idealism in our foreign policy.”

    It immediately becomes clear, though, that for Rudy, as for any true affiliate of the Reagan School of foreign policy, realism must remain subservient to idealism. Rudy affirms that our relations with other nations must never become unmoored from our basic ideals. “At the core of all Americans is the belief that all human beings have certain inalienable rights that proceed from God but must be protected by the state.” The basic rights on which our civilization is based (life, liberty, and the pursuit of happiness) are guaranteed by a particular form of government: democracy. History has taught us that those nations who value the ideals of democracy and freedom establish with one another peaceful relations based on trust and mutual support. History has also taught us that nations who despise the ideals of freedom and democracy establish with one another (and with freedom-loving nations) antagonistic relations based on deceit and mutual distrust. Most Americans grasp this fundamental distinction.

    Americans believe that to the extent that nations recognize these rights [of life, liberty, and the pursuit of happiness] within their own laws and customs, peace with them is achievable. To the extent that they do not, violence and disorder are much more likely.

    In order, therefore, to eradicate violence and disorder among nations and achieve a realistic peace, we must act to transform tyrannical and terrorist states into free and democratic ones. “Preserving and extending American ideals must remain the goal of all U.S. policy, foreign and domestic,” as Rudy rather blandly puts it. His intent, though, is clear. We must preserve democracy from the assaults of terrorists. We must also extend the American ideals of freedom and democracy throughout the globe by eliminating terrorism and transforming the states that nurture it.

    But our idealism must be tempered by realism. “Idealism should define our ultimate goals; realism must help us recognize the road we must travel to achieve them.” One way to extend the American ideals of freedom and democracy is certainly through diplomacy and economic exchange, as the realists argue. But Rudy knows that such realism will no longer suffice. The realism that we now need is a deep and profound one that understands and is willing to acknowledge the character of our new and insidious enemy. “We cannot afford to indulge any illusions about the enemies we face,” Rudy tells us, and prior to 9/11 indulge those illusions we did. We retreated before terrorist aggression in Lebanon in 1983 and again in Somalia in 1993 and wishfully thought that the terrorists, and the states that supported them, would not be emboldened by our timidity and weakness. “A realistic peace,” Rudy concludes, “can only be achieved through strength,” through military strength, that is, and the will to use it.

    2. Rudy’s Way to a Realistic Peace

    We thus see that Rudy’s “realism” is not that of the realist school of foreign policy, but an adjunct and supplement to his Reagan-school idealism. Indeed, Rudy explicitly rejects the realist approach to foreign policy.

    A realistic peace is not a peace to be achieved by embracing the “realist” school of foreign policy thought. That doctrine defines America’s interests too narrowly and avoids attempts to reform the international system according to our values.

    The realists strive to protect America’s security and economic interests by maintaining international stability. Not for them the idealistic attempt to “reform the international system” by actively spreading American values throughout the world. The hardheaded realists have too much worldly wisdom to entertain the visionary notion that America’s “vital interests” are best served by resolutely confronting the evil of fanatic and oppressive tyrannies (who are only too happy to harbor and nurture terrorist organizations) and transforming them into free and open democracies.

    But the so-called wisdom of the realist fails to see that the fight we are presently engaged in is one for the hearts and minds of people, for the fierce and passionate ideals that propel people to shape and make history, and not just for those material interests that the realists refer to as “vital.”

    To rely solely on this type of realism would be to cede the advantage to our enemies in the complex war of ideas and ideals. It would also place too great a hope in the potential for diplomatic accommodation with hostile states.

    Diplomacy has always been the favorite tool of the realist who assumes that our opponents are motivated by a reasonable self-interest and are willing to bargain and compromise in order to get something of what they want. “Holding serious talks may be advisable even with our adversaries,” Rudy concedes. But one cannot bargain with or accommodate or appease nations “bent on our destruction or those who cannot deliver on their agreements.” The realists are thus deluded if they think that a timid diplomacy coupled with economic exchange will prompt our enemies to change. Only a diplomacy grounded in our core value of freedom and backed by a will to use our military strength can make our enemies accede to our demands.

    “Iran,” Rudy says, “is a case in point.” Rudy does not insist “that talks with Iran cannot possibly work.” He admits that they could, “but only if we came to the table in a position of strength, knowing what we wanted.”

    We already know what we want ultimately to happen in Iran: regime change. We want to transform Iran from an autocratic nation that harbors and nurtures terrorist organizations into a democratic nation with whom we can deal politically and economically on a basis of mutual respect and trust.

    For the moment, though, we’ll settle for something less radical and traumatic than regime change: Iran must dismantle its nuclear facilities and must discontinue its support of terrorist activities. Iran must accede to these demands. We will tolerate neither half-measures nor stonewalling. We are the generation of 9/11 and we have learned well the harsh lessons of that day. We cannot be fooled and we will not be manipulated. We know what is behind the ingratiating smile of the autocrat: a passion to oppress. We will, through diplomacy, urge the theocrats in Iran to mend their ways. But if they do not we will accomplish the mending ourselves.

    The theocrats ruling Iran need to understand that we can wield the stick as well as the carrot, by undermining popular support for their regime, damaging the Iranian economy, weakening Iran’s military, and, should all else fail, destroying its nuclear infrastructure.

    Rudy does not state precisely how he intends to accomplish these goals, but I think the implications are clear. We will damage the Iranian economy by means of economic sanctions. We will undermine support for the regime through covert operations. We will weaken Iran’s military and destroy its nuclear facilities by means of air strikes.

    The application of such sticks, and especially the preemptive application of that most effective of all sticks, air strikes, might very well accomplish our ultimate goal in Iran: regime change. If not, they will at least accomplish our short-term goal of rendering Iran’s influence in the region ineffective. We might not be able to extend freedom to Iran in the near term. But, again, this Struggle will be a Long one, and if we cannot now extend freedom to Iran, we can at least prevent Iran from exporting fanaticism and tyranny beyond its borders. 

    The events of 9/11, it seems, produced in Rudy Giuliani the same idealistic fervor that they produced in George W. Bush. That fervor has been shaped by the ministrations of his great tutor, Norman Podhoretz, into a potent ideological system. Rudy now accepts the fundamental neocon notion that a realistic foreign policy must anchor itself in the great American ideal of freedom and must commit itself to spreading that ideal to every other nation on earth. He sees clearly the nature of the new enemy we face: terrorist organizations supported by nations intent on eradicating freedom in the world and replacing it with fanatic tyranny. Drawing on that clarity of vision, he has concluded that the old methods of defense, containment and deterrence, will not work and must be replaced by the new ones of preemption and regime change. Finally, he recognizes that the Struggle in which we are now engaged will be a Long one.

    Rudy obviously has two goals in this essay. He wants us to carry away from it specific policy recommendations. But he also, and even primarily, I think, wants us to carry away from it a specific impression of his political character, namely, that he is imbued with the same visionary idealism that propelled his great predecessors, Ronald Reagan and George W. Bush, into their confrontation with evil. However long the struggle, he will not falter or flinch. If he does, he will have beside him as an advisor a man whose intellectual strength and moral determination will once again stiffen his spine: Norman Podhoretz.

    Should Rudy prove successful in his quest for power, we will, of course, exchange one Republican regime for another. Ideologically, though, things will remain the same. And we will have to appropriate from Podhoretz part of the subtitle of his new book: our struggle against the neocon worldview will be a long one indeed.