The white working class, which usually inspires liberal concern only for its paradoxical, Republican-leaning voting habits, has recently become newsworthy for something else: according to economists Anne Case and Angus Deaton, the latter the winner of the latest Nobel Prize in economics, its members in the 45- to 54-year-old age group are dying at an immoderate rate. While the lifespan of affluent whites continues to lengthen, the lifespan of poor whites has been shrinking. As a result, in just the last four years, the life-expectancy gap between poor white men and wealthier ones has widened by up to four years. The New York Times summed up the Case and Deaton study with this headline: “Income Gap, Meet the Longevity Gap.”

This was not supposed to happen. For almost a century, the comforting American narrative was that better nutrition and medical care would guarantee longer lives for all. So the great blue-collar die-off has come out of the blue and is, as the Wall Street Journal says, “startling.”

It was especially not supposed to happen to whites, who, in relation to people of color, have long had the advantage of higher earnings, better access to health care, safer neighborhoods, and of course freedom from the daily insults and harms inflicted on the darker-skinned. There has also been a major racial gap in longevity — 5.3 years between white and black men and 3.8 years between white and black women — though, hardly noticed, it has been narrowing for the last two decades. Only whites, however, are now dying off in unexpectedly large numbers in middle age, their excess deaths accounted for by suicide, alcoholism, and drug (usually opiate) addiction.

There are some practical reasons why whites are likely to be more efficient than blacks at killing themselves. For one thing, they are more likely to be gun-owners, and white men favor gunshots as a means of suicide. For another, doctors, undoubtedly acting in part on stereotypes of non-whites as drug addicts, are more likely to prescribe powerful opiate painkillers to whites than to people of color. (I’ve been offered enough oxycodone prescriptions over the years to stock a small illegal business.)

Manual labor — from waitressing to construction work — tends to wear the body down quickly, from knees to back and rotator cuffs, and when Tylenol fails, the doctor may opt for an opiate just to get you through the day.

The Wages of Despair

But something more profound is going on here, too. As New York Times columnist Paul Krugman puts it, the “diseases” leading to excess white working class deaths are those of “despair,” and some of the obvious causes are economic. In the last few decades, things have not been going well for working class people of any color.

I grew up in an America where a man with a strong back — and better yet, a strong union — could reasonably expect to support a family on his own without a college degree. In 2015, those jobs are long gone, leaving only the kind of work once relegated to women and people of color: jobs in retail, landscaping, and delivery-truck driving. This means that those in the bottom 20% of the white income distribution face material circumstances like those long familiar to poor blacks, including erratic employment and crowded, hazardous living spaces.

White privilege was never, however, simply a matter of economic advantage. As the great African-American scholar W.E.B. Du Bois wrote in 1935, “It must be remembered that the white group of laborers, while they received a low wage, were compensated in part by a sort of public and psychological wage.”

Some of the elements of this invisible wage sound almost quaint today, like Du Bois’s assertion that white working class people were “admitted freely with all classes of white people to public functions, public parks, and the best schools.” Today, there are few public spaces that are not open, at least legally speaking, to blacks, while the “best” schools are reserved for the affluent — mostly white and Asian American along with a sprinkling of other people of color to provide the fairy dust of “diversity.” While whites have lost ground economically, blacks have made gains, at least in the de jure sense. As a result, the “psychological wage” awarded to white people has been shrinking.

For most of American history, government could be counted on to maintain white power and privilege by enforcing slavery and later segregation. When the federal government finally weighed in on the side of desegregation, working class whites were left to defend their own diminishing privilege by moving rightward toward the likes of Alabama Governor (and later presidential candidate) George Wallace and his many white pseudo-populist successors down to Donald Trump.

At the same time, the day-to-day task of upholding white power devolved from the federal government to the state and then local level, specifically to local police forces, which, as we know, have taken it up with such enthusiasm as to become both a national and international scandal. The Guardian, for instance, now keeps a running tally of the number of Americans (disproportionately black) killed by cops (as of this moment, 1,209 for 2015), while black protest, in the form of the Black Lives Matter movement and a wave of on-campus demonstrations, has largely recaptured the moral high ground formerly occupied by the civil rights movement.

The culture, too, has been inching bit by bit toward racial equality, if not, in some limited areas, black ascendency. If the stock image of the early twentieth century “Negro” was the minstrel, the role of rural simpleton in popular culture has been taken over in this century by the characters in Duck Dynasty and Here Comes Honey Boo Boo. At least in the entertainment world, working class whites are now regularly portrayed as moronic, while blacks are often hyper-articulate, street-smart, and sometimes as wealthy as Kanye West. It’s not easy to maintain the usual sense of white superiority when parts of the media are squeezing laughs from the contrast between savvy blacks and rural white bumpkins, as in the Tina Fey comedy Unbreakable Kimmy Schmidt. White, presumably upper-middle class people generally conceive of these characters and plot lines, which, to a child of white working class parents like myself, sting with condescension.

Of course, there was also the election of the first black president. White, native-born Americans began to talk of “taking our country back.” The more affluent ones formed the Tea Party; less affluent ones often contented themselves with affixing Confederate flag decals to their trucks.

On the American Downward Slope

All of this means that the maintenance of white privilege, especially among the least privileged whites, has become more difficult and so, for some, more urgent than ever. Poor whites always had the comfort of knowing that someone was worse off and more despised than they were; racial subjugation was the ground under their feet, the rock they stood upon, even when their own situation was deteriorating.

If the government, especially at the federal level, is no longer as reliable an enforcer of white privilege, then it’s grassroots initiatives by individuals and small groups that are helping to fill the gap — perpetrating the micro-aggressions that roil college campuses, the racial slurs yelled from pickup trucks, or, at a deadly extreme, the shooting up of a black church renowned for its efforts in the Civil Rights era. Dylann Roof, the Charleston killer who did just that, was a jobless high school dropout and reportedly a heavy user of alcohol and opiates. Even without a death sentence hanging over him, Roof was surely headed toward an early demise.

Acts of racial aggression may provide their white perpetrators with a fleeting sense of triumph, but they also take a special kind of effort. It takes effort, for instance, to target a black runner and swerve over to insult her from your truck; it takes such effort — and a strong stomach — to paint a racial slur in excrement on a dormitory bathroom wall. College students may do such things in part out of a sense of economic vulnerability, the knowledge that as soon as school is over their college-debt payments will come due. No matter the effort expended, however, it is especially hard to maintain a feeling of racial superiority while struggling to hold onto one’s own place near the bottom of an undependable economy.

While there is no medical evidence that racism is toxic to those who express it — after all, generations of wealthy slave owners survived quite nicely — the combination of downward mobility and racial resentment may be a potent invitation to the kind of despair that leads to suicide in one form or another, whether by gunshots or drugs. You can’t break a glass ceiling if you’re standing on ice.

It’s easy for the liberal intelligentsia to feel righteous in their disgust for lower-class white racism, but the college-educated elite that produces the intelligentsia is in trouble, too, with diminishing prospects and an ever-slipperier slope for the young. Whole professions have fallen on hard times, from college teaching to journalism and the law. One of the worst mistakes this relative elite could make is to try to pump up its own pride by hating on those — of any color or ethnicity — who are falling even faster.

Barbara Ehrenreich, a TomDispatch regular and founding editor of the Economic Hardship Reporting Project, is the author of Nickel and Dimed: On (Not) Getting By in America (now in a 10th anniversary edition with a new afterword) and most recently the autobiographical Living with a Wild God: A Nonbeliever’s Search for the Truth about Everything.

Copyright 2015 Barbara Ehrenreich

Dead, White, and Blue

It’s been exactly 50 years since Americans, or at least the non-poor among them, “discovered” poverty, thanks to Michael Harrington’s engaging book The Other America. If this discovery now seems a little overstated, like Columbus’s “discovery” of America, it was because the poor, according to Harrington, were so “hidden” and “invisible” that it took a crusading left-wing journalist to ferret them out.  

Harrington’s book jolted a nation that then prided itself on its classlessness and even fretted about the spirit-sapping effects of “too much affluence.” He estimated that one quarter of the population lived in poverty — inner-city blacks, Appalachian whites, farm workers, and elderly Americans among them. We could no longer boast, as then-Vice President Richard Nixon had done in his “kitchen debate” with Soviet Premier Nikita Khrushchev in Moscow just three years earlier, about the splendors of American capitalism.

At the same time that it delivered its gut punch, The Other America also offered a view of poverty that seemed designed to comfort the already comfortable. The poor were different from the rest of us, it argued, radically different, and not just in the sense that they were deprived, disadvantaged, poorly housed, or poorly fed. They felt different, too, thought differently, and pursued lifestyles characterized by shortsightedness and intemperance. As Harrington wrote, “There is… a language of the poor, a psychology of the poor, a worldview of the poor. To be impoverished is to be an internal alien, to grow up in a culture that is radically different from the one that dominates the society.”

Harrington did such a good job of making the poor seem “other” that when I read his book in 1963, I did not recognize my own forebears and extended family in it. All right, some of them did lead disorderly lives by middle class standards, involving drinking, brawling, and out-of-wedlock babies. But they were also hardworking and in some cases fiercely ambitious — qualities that Harrington seemed to reserve for the economically privileged.

According to him, what distinguished the poor was their unique “culture of poverty,” a concept he borrowed from anthropologist Oscar Lewis, who had derived it from his study of Mexican slum-dwellers. The culture of poverty gave The Other America a trendy academic twist, but it also gave the book a conflicted double message: “We” — the always presumptively affluent readers — needed to find some way to help the poor, but we also needed to understand that there was something wrong with them, something that could not be cured by a straightforward redistribution of wealth. Think of the earnest liberal who encounters a panhandler, is moved to pity by the man’s obvious destitution, but refrains from offering a quarter — since the hobo might, after all, spend the money on booze. 

In his defense, Harrington did not mean that poverty was caused by what he called the “twisted” proclivities of the poor. But he certainly opened the floodgates to that interpretation. In 1965, Daniel Patrick Moynihan — a sometime-liberal and one of Harrington’s drinking companions at the famed White Horse Tavern in Greenwich Village — blamed inner-city poverty on what he saw as the shaky structure of the “Negro family,” clearing the way for decades of victim-blaming. A few years after The Moynihan Report, Harvard urbanologist Edward C. Banfield, who was to go on to serve as an advisor to Ronald Reagan, felt free to claim that:

“The lower-class individual lives from moment to moment… Impulse governs his behavior… He is therefore radically improvident: whatever he cannot consume immediately he considers valueless… [He] has a feeble, attenuated sense of self.”

In the "hardest cases," Banfield opined, the poor might need to be cared for in “semi-institutions… and to accept a certain amount of surveillance and supervision from a semi-social-worker-semi-policeman.”

By the Reagan era, the “culture of poverty” had become a cornerstone of conservative ideology: poverty was caused, not by low wages or a lack of jobs, but by bad attitudes and faulty lifestyles. The poor were dissolute, promiscuous, prone to addiction and crime, unable to “defer gratification,” or possibly even set an alarm clock. The last thing they could be trusted with was money. In fact, Charles Murray argued in his 1984 book Losing Ground, any attempt to help the poor with their material circumstances would only have the unexpected consequence of deepening their depravity.

So it was in a spirit of righteousness and even compassion that Democrats and Republicans joined together to reconfigure social programs to cure, not poverty, but the “culture of poverty.” In 1996, the Clinton administration enacted the “One Strike” rule banning anyone who committed a felony from public housing. A few months later, welfare was replaced by Temporary Assistance for Needy Families (TANF), which in its current form makes cash assistance available only to those who have jobs or are able to participate in government-imposed “workfare.”

In a further nod to “culture of poverty” theory, the original welfare reform bill appropriated $250 million over five years for “chastity training” for poor single mothers. (This bill, it should be pointed out, was signed by Bill Clinton.)

Even today, more than a decade later and four years into a severe economic downturn, as people continue to slide into poverty from the middle classes, the theory maintains its grip. If you’re needy, you must be in need of correction, the assumption goes, so TANF recipients are routinely instructed in how to improve their attitudes and applicants for a growing number of safety-net programs are subjected to drug-testing. Lawmakers in 23 states are considering testing people who apply for such programs as job training, food stamps, public housing, welfare, and home heating assistance. And on the theory that the poor are likely to harbor criminal tendencies, applicants for safety net programs are increasingly subjected to finger-printing and computerized searches for outstanding warrants.

Unemployment, with its ample opportunities for slacking off, is another obviously suspect condition, and last year 12 states considered requiring pee tests as a condition for receiving unemployment benefits. Both Mitt Romney and Newt Gingrich have suggested drug testing as a condition for all government benefits, presumably including Social Security. If granny insists on handling her arthritis with marijuana, she may have to starve.

What would Michael Harrington make of the current uses of the “culture of poverty” theory he did so much to popularize? I worked with him in the 1980s, when we were co-chairs of Democratic Socialists of America, and I suspect he’d have the decency to be chagrined, if not mortified. In all the discussions and debates I had with him, he never said a disparaging word about the down-and-out or, for that matter, uttered the phrase “the culture of poverty.” Maurice Isserman, Harrington’s biographer, told me that he’d probably latched onto it in the first place only because “he didn't want to come off in the book sounding like a stereotypical Marxist agitator stuck-in-the-thirties.”

The ruse — if you could call it that — worked. Michael Harrington wasn’t red-baited into obscurity.  In fact, his book became a bestseller and an inspiration for President Lyndon Johnson’s War on Poverty. But he had fatally botched the “discovery” of poverty. What affluent Americans found in his book, and in all the crude conservative diatribes that followed it, was not the poor, but a flattering new way to think about themselves — disciplined, law-abiding, sober, and focused. In other words, not poor.

Fifty years later, a new discovery of poverty is long overdue. This time, we’ll have to take account not only of stereotypical Skid Row residents and Appalachians, but of foreclosed-upon suburbanites, laid-off tech workers, and America’s ever-growing army of the “working poor.” And if we look closely enough, we’ll have to conclude that poverty is not, after all, a cultural aberration or a character flaw. Poverty is a shortage of money.

Barbara Ehrenreich, a TomDispatch regular, is the author of Nickel and Dimed: On (Not) Getting By in America (now in a 10th anniversary edition with a new afterword).

This is a joint TomDispatch/Nation article and appears in print at the Nation magazine.

Copyright 2012 Barbara Ehrenreich

Rediscovering Poverty

Class happens when some men, as a result of common experiences (inherited or shared), feel and articulate the identity of their interests as between themselves, and as against other men whose interests are different from (and usually opposed to) theirs.

— E.P. Thompson, The Making of the English Working Class

The “other men” (and of course women) in the current American class alignment are those in the top 1% of the wealth distribution — the bankers, hedge-fund managers, and CEOs targeted by the Occupy Wall Street movement. They have been around for a long time in one form or another, but they only began to emerge as a distinct and visible group, informally called the “super-rich,” in recent years.

Extravagant levels of consumption helped draw attention to them: private jets, multiple 50,000 square-foot mansions, $25,000 chocolate desserts embellished with gold dust. But as long as the middle class could still muster the credit for college tuition and occasional home improvements, it seemed churlish to complain. Then came the financial crash of 2007-2008, followed by the Great Recession, and the 1% to whom we had entrusted our pensions, our economy, and our political system stood revealed as a band of feckless, greedy narcissists, and possibly sociopaths.

Still, until a few months ago, the 99% was hardly a group capable of (as Thompson says) articulating “the identity of their interests.” It contained, and still contains, most “ordinary” rich people, along with middle-class professionals, factory workers, truck drivers, and miners, as well as the much poorer people who clean the houses, manicure the fingernails, and maintain the lawns of the affluent.

It was divided not only by these class differences, but most visibly by race and ethnicity — a division that has actually deepened since 2008. African-Americans and Latinos of all income levels disproportionately lost their homes to foreclosure in 2007 and 2008, and then disproportionately lost their jobs in the wave of layoffs that followed.  On the eve of the Occupy movement, the black middle class had been devastated. In fact, the only political movements to have come out of the 99% before Occupy emerged were the Tea Party movement and, on the other side of the political spectrum, the resistance to restrictions on collective bargaining in Wisconsin.

But Occupy could not have happened if large swaths of the 99% had not begun to discover some common interests, or at least to put aside some of the divisions among themselves. For decades, the most stridently promoted division within the 99% was the one between what the right calls the “liberal elite” — composed of academics, journalists, media figures, etc. — and pretty much everyone else.

As Harper’s Magazine columnist Tom Frank has brilliantly explained, the right earned its spurious claim to populism by targeting that “liberal elite,” which supposedly favors reckless government spending that requires oppressive levels of taxes, supports “redistributive” social policies and programs that reduce opportunity for the white middle class, creates ever more regulations (to, for instance, protect the environment) that reduce jobs for the working class, and promotes kinky countercultural innovations like gay marriage. The liberal elite, insisted conservative intellectuals, looked down on “ordinary” middle- and working-class Americans, finding them tasteless and politically incorrect. The “elite” was the enemy, while the super-rich were just like everyone else, only more “focused” and perhaps a bit better connected.

Of course, the “liberal elite” never made any sociological sense. Not all academics or media figures are liberal (Newt Gingrich, George Will, Rupert Murdoch). Many well-educated middle managers and highly trained engineers may favor latte over Red Bull, but they were never targets of the right. And how could trial lawyers be members of the nefarious elite, while their spouses in corporate law firms were not?

A Greased Chute, Not a Safety Net

“Liberal elite” was always a political category masquerading as a sociological one. What gave the idea of a liberal elite some traction, though, at least for a while, was that the great majority of us have never knowingly encountered a member of the actual elite, the 1% who are, for the most part, sealed off in their own bubble of private planes, gated communities, and walled estates.

The authority figures most people are likely to encounter in their daily lives are teachers, doctors, social workers, and professors. These groups (along with middle managers and other white-collar corporate employees) occupy a much lower position in the class hierarchy. They made up what we described in a 1976 essay as the “professional-managerial class.” As we wrote at the time, on the basis of our experience of the radical movements of the 1960s and 1970s, there have been real, longstanding resentments between the working class and middle-class professionals. These resentments, which the populist right cleverly deflected toward “liberals,” contributed significantly to that previous era of rebellion’s failure to build a lasting progressive movement.

As it happened, the idea of the “liberal elite” could not survive the depredations of the 1% in the late 2000s. For one thing, it was summarily eclipsed by the discovery of the actual Wall Street-based elite and their crimes. Compared to them, professionals and managers, no matter how annoying, were pikers. The doctor or school principal might be overbearing, the professor and the social worker might be condescending, but only the 1% took your house away.

There was, as well, another inescapable problem embedded in the right-wing populist strategy: even by 2000, and certainly by 2010, the class of people who might qualify as part of the “liberal elite” was in increasingly bad repair. Public-sector budget cuts and corporate-inspired reorganizations were decimating the ranks of decently paid academics, who were being replaced by adjunct professors working for bare subsistence incomes. Media firms were shrinking their newsrooms and editorial budgets. Law firms had started outsourcing their more routine tasks to India. Hospitals beamed X-rays to cheap foreign radiologists. Funding had dried up for nonprofit ventures in the arts and public service. Hence the iconic figure of the Occupy movement: the college graduate with tens of thousands of dollars in student loan debts and a job paying about $10 an hour, or no job at all.

These trends were in place even before the financial crash hit, but it took the crash and its grim economic aftermath to awaken the 99% to a widespread awareness of shared danger. In 2008, “Joe the Plumber’s” intention to earn a quarter-million dollars a year still had some faint sense of plausibility. A couple of years into the recession, however, sudden downward mobility had become the mainstream American experience, and even some of the most reliably neoliberal media pundits were beginning to announce that something had gone awry with the American dream.

Once-affluent people lost their nest eggs as housing prices dropped off cliffs. Laid-off middle-aged managers and professionals were staggered to find that their age made them repulsive to potential employers. Medical debts plunged middle-class households into bankruptcy. The old conservative dictum — that it was unwise to criticize (or tax) the rich because you might yourself be one of them someday — gave way to a new realization that the class you were most likely to migrate into wasn’t the rich, but the poor.

And here was another thing many in the middle class were discovering: the downward plunge into poverty could occur with dizzying speed. One reason the concept of an economic 99% first took root in America rather than, say, Ireland or Spain is that Americans are particularly vulnerable to economic dislocation. We have little in the way of a welfare state to stop a family or an individual in free-fall. Unemployment benefits do not last more than six months or a year, though in a recession they are sometimes extended by Congress. At present, even with such an extension, they reach only about half the jobless. Welfare was all but abolished 15 years ago, and health insurance has traditionally been linked to employment.

In fact, once an American starts to slip downward, a variety of forces kick in to help accelerate the slide. An estimated 60% of American firms now check applicants' credit ratings, and discrimination against the unemployed is widespread enough to have begun to warrant Congressional concern. Even bankruptcy is a prohibitively expensive, often crushingly difficult status to achieve. Failure to pay government-imposed fines or fees can even lead, through a concatenation of unlucky breaks, to an arrest warrant or a criminal record. Where other once-wealthy nations have a safety net, America offers a greased chute, leading down to destitution with alarming speed.

Making Sense of the 99%

The Occupation encampments that enlivened approximately 1,400 cities this fall provided a vivid template for the 99%’s growing sense of unity. Here were thousands of people — we may never know the exact numbers — from all walks of life, living outdoors in the streets and parks, very much as the poorest of the poor have always lived: without electricity, heat, water, or toilets. In the process, they managed to create self-governing communities.

General assembly meetings brought together an unprecedented mix of recent college graduates, young professionals, elderly people, laid-off blue-collar workers, and plenty of the chronically homeless for what were, for the most part, constructive and civil exchanges. What started as a diffuse protest against economic injustice became a vast experiment in class building. The 99%, which might have seemed to be a purely aspirational category just a few months ago, began to will itself into existence.

Can the unity cultivated in the encampments survive as the Occupy movement evolves into a more decentralized phase?  All sorts of class, racial, and cultural divisions persist within that 99%, including distrust between members of the former “liberal elite” and those less privileged. It would be surprising if they didn’t. The life experience of a young lawyer or a social worker is very different from that of a blue-collar worker whose work may rarely allow for biological necessities like meal or bathroom breaks. Drum circles, consensus decision-making, and masks remain exotic to at least the 90%. “Middle class” prejudice against the homeless, fanned by decades of right-wing demonization of the poor, retains much of its grip.

Sometimes these differences led to conflict in Occupy encampments — for example, over the role of the chronically homeless in Portland or the use of marijuana in Los Angeles — but amazingly, despite all the official warnings about health and safety threats, there was no “Altamont moment”: no major fires and hardly any violence.  In fact, the encampments engendered almost unthinkable convergences: people from comfortable backgrounds learning about street survival from the homeless, a distinguished professor of political science discussing horizontal versus vertical decision-making with a postal worker, military men in dress uniforms showing up to defend the occupiers from the police.

Class happens, as Thompson said, but it happens most decisively when people are prepared to nourish and build it. If the “99%” is to become more than a stylish meme, if it’s to become a force to change the world, eventually we will undoubtedly have to confront some of the class and racial divisions that lie within it. But we need to do so patiently, respectfully, and always with an eye to the next big action — the next march, or building occupation, or foreclosure fight, as the situation demands.

Barbara Ehrenreich, a TomDispatch regular, is the author of Nickel and Dimed: On (Not) Getting By in America (now in a 10th anniversary edition with a new afterword).

John Ehrenreich is professor of psychology at the State University of New York, College at Old Westbury. He wrote The Humanitarian Companion: A Guide for International Aid, Development, and Human Rights Workers. 

This is a joint TomDispatch/Nation article and appears in print at the Nation magazine.

Copyright 2011 Barbara Ehrenreich and John Ehrenreich

The Making of the American 99%

I completed the manuscript for Nickel and Dimed in a time of seemingly boundless prosperity. Technology innovators and venture capitalists were acquiring sudden fortunes, buying up McMansions like the ones I had cleaned in Maine and much larger. Even secretaries in some hi-tech firms were striking it rich with their stock options. There was loose talk about a permanent conquest of the business cycle, and a sassy new spirit infecting American capitalism. In San Francisco, a billboard for an e-trading firm proclaimed, “Make love not war,” and then — down at the bottom — “Screw it, just make money.”

When Nickel and Dimed was published in May 2001, cracks were appearing in the dot-com bubble and the stock market had begun to falter, but the book still evidently came as a surprise, even a revelation, to many. Again and again, in that first year or two after publication, people came up to me and opened with the words, “I never thought…” or “I hadn’t realized…”

To my own amazement, Nickel and Dimed quickly ascended to the bestseller list and began winning awards. Criticisms, too, have accumulated over the years. But for the most part, the book has been far better received than I could have imagined it would be, with an impact extending well into the more comfortable classes. A Florida woman wrote to tell me that, before reading it, she’d always been annoyed at the poor for what she saw as their self-inflicted obesity. Now she understood that a healthy diet wasn’t always an option.  And if I had a quarter for every person who’s told me he or she now tipped more generously, I would be able to start my own foundation.

Even more gratifying to me, the book has been widely read among low-wage workers. In the last few years, hundreds of people have written to tell me their stories: the mother of a newborn infant whose electricity had just been turned off, the woman who had just been given a diagnosis of cancer and had no health insurance, the newly homeless man writing from a library computer.

At the time I wrote Nickel and Dimed, I wasn’t sure how many people it directly applied to — only that the official definition of poverty was way off the mark, since it defined an individual earning $7 an hour, as I did on average, as well out of poverty. But three months after the book was published, the Economic Policy Institute in Washington, D.C., issued a report entitled “Hardships in America: The Real Story of Working Families,” which found an astounding 29% of American families living in what could be more reasonably defined as poverty, meaning that they earned less than a barebones budget covering housing, child care, health care, food, transportation, and taxes — though not, it should be noted, any entertainment, meals out, cable TV, Internet service, vacations, or holiday gifts. Twenty-nine percent is a minority, but not a reassuringly small one, and other studies in the early 2000s came up with similar figures.

The big question, 10 years later, is whether things have improved or worsened for those in the bottom third of the income distribution, the people who clean hotel rooms, work in warehouses, wash dishes in restaurants, care for the very young and very old, and keep the shelves stocked in our stores. The short answer is that things have gotten much worse, especially since the economic downturn that began in 2008.

Post-Meltdown Poverty

When you read about the hardships I found people enduring while I was researching my book — the skipped meals, the lack of medical care, the occasional need to sleep in cars or vans — you should bear in mind that those occurred in the best of times. The economy was growing, and jobs, if poorly paid, were at least plentiful.

In 2000, I had been able to walk into a number of jobs pretty much off the street. Less than a decade later, many of these jobs had disappeared and there was stiff competition for those that remained. It would have been impossible to repeat my Nickel and Dimed “experiment,” had I been so inclined, because I would probably never have found a job.

For the last couple of years, I have attempted to find out what was happening to the working poor in a declining economy — this time using conventional reporting techniques like interviewing. I started with my own extended family, which includes plenty of people without jobs or health insurance, and moved on to trying to track down a couple of the people I had met while working on Nickel and Dimed.

This wasn’t easy, because most of the addresses and phone numbers I had taken away with me had proved to be inoperative within a few months, probably due to moves and suspensions of telephone service. Over the years, I had kept in touch with “Melissa,” who was still working at Wal-Mart, where her wages had risen from $7 to $10 an hour, but in the meantime her husband had lost his job. “Caroline,” now in her 50s and partly disabled by diabetes and heart disease, had left her deadbeat husband and was subsisting on occasional cleaning and catering jobs. Neither seemed unduly afflicted by the recession, but only because they had already been living in what amounts to a permanent economic depression.

Media attention has focused, understandably enough, on the “nouveau poor” — formerly middle and even upper-middle class people who lost their jobs, their homes, and/or their investments in the financial crisis of 2008 and the economic downturn that followed it. But the brunt of the recession has been borne by the blue-collar working class, which had already been sliding downwards since de-industrialization began in the 1980s.

In 2008 and 2009, for example, blue-collar unemployment was increasing three times as fast as white-collar unemployment, and African American and Latino workers were three times as likely to be unemployed as white workers. Low-wage blue-collar workers, like the people I worked with in this book, were especially hard hit for the simple reason that they had so few assets and savings to fall back on as jobs disappeared.

How have the already-poor attempted to cope with their worsening economic situation? One obvious way is to cut back on health care. The New York Times reported in 2009 that one-third of Americans could no longer afford to comply with their prescriptions and that there had been a sizable drop in the use of medical care. Others, including members of my extended family, have given up their health insurance.

Food is another expenditure that has proved vulnerable to hard times, with the rural poor turning increasingly to “food auctions,” which offer items that may be past their sell-by dates. And for those who like their meat fresh, there’s the option of urban hunting. In Racine, Wisconsin, a 51-year-old laid-off mechanic told me he was supplementing his diet by “shooting squirrels and rabbits and eating them stewed, baked, and grilled.” In Detroit, where the wildlife population has mounted as the human population ebbs, a retired truck driver was doing a brisk business in raccoon carcasses, which he recommends marinating with vinegar and spices.

The most common coping strategy, though, is simply to increase the number of paying people per square foot of dwelling space — by doubling up or renting to couch-surfers.

It’s hard to get firm numbers on overcrowding, because no one likes to acknowledge it to census-takers, journalists, or anyone else who might be remotely connected to the authorities.

In Los Angeles, housing expert Peter Dreier says that “people who’ve lost their jobs, or at least their second jobs, cope by doubling or tripling up in overcrowded apartments, or by paying 50 or 60 or even 70 percent of their incomes in rent.” According to a community organizer in Alexandria, Virginia, the standard apartment in a complex occupied largely by day laborers has two bedrooms, each containing an entire family of up to five people, plus an additional person laying claim to the couch.

No one could call suicide a “coping strategy,” but it is one way some people have responded to job loss and debt. There are no national statistics linking suicide to economic hard times, but the National Suicide Prevention Lifeline reported more than a four-fold increase in call volume between 2007 and 2009, and regions with particularly high unemployment, like Elkhart, Indiana, have seen troubling spikes in their suicide rates. Foreclosure is often the trigger for suicide — or, worse, murder-suicides that destroy entire families.

“Torture and Abuse of Needy Families”

We do of course have a collective way of ameliorating the hardships of individuals and families — a government safety net that is meant to save the poor from spiraling down all the way to destitution. But its response to the economic emergency of the last few years has been spotty at best. The food stamp program has responded to the crisis fairly well, to the point where it now reaches about 37 million people, up about 30% from pre-recession levels. But welfare — the traditional last resort for the down-and-out until it was “reformed” in 1996 — only expanded by about 6% in the first two years of the recession.

The difference between the two programs? There is a right to food stamps. You go to the office and, if you meet the statutory definition of need, they help you. For welfare, the street-level bureaucrats can, pretty much at their own discretion, just say no.

Take the case of Kristen and Joe Parente, Delaware residents who had always imagined that people turned to the government for help only if “they didn’t want to work.” Their troubles began well before the recession, when Joe, a fourth-generation pipe-fitter, sustained a back injury that left him unfit for even light lifting. He fell into a profound depression for several months, then rallied to ace a state-sponsored retraining course in computer repairs — only to find that those skills are no longer in demand. The obvious fallback was disability benefits, but — catch-22 — when Joe applied he was told he could not qualify without presenting a recent MRI scan. This would cost $800 to $900, which the Parentes do not have; nor has Joe, unlike the rest of the family, been able to qualify for Medicaid.

When they married as teenagers, the plan had been for Kristen to stay home with the children. But with Joe out of action and three children to support by the middle of this decade, Kristen went out and got waitressing jobs, ending up, in 2008, in a “pretty fancy place on the water.” Then the recession struck and she was laid off.

Kristen is bright, pretty, and to judge from her command of her own small kitchen, probably capable of holding down a dozen tables with precision and grace. In the past she’d always been able to land a new job within days; now there was nothing. Like 44% of laid-off people at the time, she failed to meet the fiendishly complex and sometimes arbitrary eligibility requirements for unemployment benefits. Their car started falling apart.

So the Parentes turned to what remains of welfare — TANF, or Temporary Assistance for Needy Families. TANF does not offer straightforward cash support like Aid to Families with Dependent Children, which it replaced in 1996. It’s an income supplementation program for working parents, and it was based on the sunny assumption that there would always be plenty of jobs for those enterprising enough to get them.

After Kristen applied, nothing happened for six weeks — no money, no phone calls returned. At school, the class of the Parentes’ seven-year-old, Brianna, was asked to write out the wish each child would present to a genie, should a genie appear. Brianna’s wish was for her mother to find a job because there was nothing to eat in the house, an aspiration that her teacher deemed too disturbing to be posted on the wall with the other children’s requests.

When the Parentes finally got into “the system” and began receiving food stamps and some cash assistance, they discovered why some recipients have taken to calling TANF “Torture and Abuse of Needy Families.” From the start, the TANF experience was “humiliating,” Kristen says. The caseworkers “treat you like a bum. They act like every dollar you get is coming out of their own paychecks.”

The Parentes discovered that they were each expected to apply for 40 jobs a week, although their car was on its last legs and no money was offered for gas, tolls, or babysitting. In addition, Kristen had to drive 35 miles a day to attend “job readiness” classes offered by a private company called Arbor, which, she says, were “frankly a joke.”

Nationally, according to Kaaryn Gustafson of the University of Connecticut Law School, “applying for welfare is a lot like being booked by the police.”  There may be a mug shot, fingerprinting, and lengthy interrogations as to one’s children’s true paternity. The ostensible goal is to prevent welfare fraud, but the psychological impact is to turn poverty itself into a kind of crime.

How the Safety Net Became a Dragnet

The most shocking thing I learned from my research on the fate of the working poor in the recession was the extent to which poverty has indeed been criminalized in America.

Perhaps the constant suspicions of drug use and theft that I encountered in low-wage workplaces should have alerted me to the fact that, when you leave the relative safety of the middle class, you might as well have given up your citizenship and taken up residence in a hostile nation.

Most cities, for example, have ordinances designed to drive the destitute off the streets by outlawing such necessary activities of daily life as sitting, loitering, sleeping, or lying down. Urban officials boast that there is nothing discriminatory about such laws: “If you’re lying on a sidewalk, whether you’re homeless or a millionaire, you’re in violation of the ordinance,” a St. Petersburg, Florida, city attorney stated in June 2009, echoing Anatole France’s immortal observation that “the law, in its majestic equality, forbids the rich as well as the poor to sleep under bridges…”

In defiance of all reason and compassion, the criminalization of poverty has actually intensified as the weakened economy generates ever more poverty. So concludes a recent study from the National Law Center on Homelessness and Poverty, which finds that the number of ordinances against the publicly poor has been rising since 2006, along with the harassment of the poor for more “neutral” infractions like jaywalking, littering, or carrying an open container.

The report lists America’s ten “meanest” cities — the largest of which include Los Angeles, Atlanta, and Orlando — but new contestants are springing up every day. In Colorado, Grand Junction’s city council is considering a ban on begging; Tempe, Arizona, carried out a four-day crackdown on the indigent at the end of June. And how do you know when someone is indigent? As a Las Vegas statute puts it, “an indigent person is a person whom a reasonable ordinary person would believe to be entitled to apply for or receive” public assistance.

That could be me before the blow-drying and eyeliner, and it’s definitely Al Szekeley at any time of day. A grizzled 62-year-old, he inhabits a wheelchair and is often found on G Street in Washington, D.C. — the city that is ultimately responsible for the bullet he took in the spine in Phu Bai, Vietnam, in 1972.

He had been enjoying the luxury of an indoor bed until December 2008, when the police swept through the shelter in the middle of the night looking for men with outstanding warrants. It turned out that Szekeley, who is an ordained minister and does not drink, do drugs, or cuss in front of ladies, did indeed have one — for “criminal trespassing,” as sleeping on the streets is sometimes defined by the law. So he was dragged out of the shelter and put in jail.

“Can you imagine?” asked Eric Sheptock, the homeless advocate (himself a shelter resident) who introduced me to Szekeley. “They arrested a homeless man in a shelter for being homeless?”

The viciousness of the official animus toward the indigent can be breathtaking. A few years ago, a group called Food Not Bombs started handing out free vegan food to hungry people in public parks around the nation. A number of cities, led by Las Vegas, passed ordinances forbidding the sharing of food with the indigent in public places, leading to the arrests of several middle-aged white vegans.

One anti-sharing law was just overturned in Orlando, but the war on illicit generosity continues. Orlando is appealing the decision, and Middletown, Connecticut, is in the midst of a crackdown. More recently, Gainesville, Florida, began enforcing a rule limiting the number of meals that soup kitchens may serve to 130 people in one day, and Phoenix, Arizona, has been using zoning laws to stop a local church from serving breakfast to homeless people.

For the not-yet-homeless, there are two main paths to criminalization, and one is debt. Anyone can fall into debt, and although we pride ourselves on the abolition of debtors’ prison, in at least one state, Texas, people who can’t pay fines for things like expired inspection stickers may be made to “sit out their tickets” in jail.

More commonly, the path to prison begins when one of your creditors has a court summons issued for you, which you fail to honor for one reason or another, such as that your address has changed and you never received it. Okay, now you’re in “contempt of court.”

Or suppose you miss a payment and your car insurance lapses, and then you’re stopped for something like a broken headlight (about $130 for the bulb alone). Now, depending on the state, you may have your car impounded and/or face a steep fine — again, exposing you to a possible court summons. “There’s just no end to it once the cycle starts,” says Robert Solomon of Yale Law School. “It just keeps accelerating.”

The second — and by far the most reliable — way to be criminalized by poverty is to have the wrong color skin. Indignation runs high when a celebrity professor succumbs to racial profiling, but whole communities are effectively “profiled” for the suspicious combination of being both dark-skinned and poor. Flick a cigarette and you’re “littering”; wear the wrong color T-shirt and you’re displaying gang allegiance. Just strolling around in a dodgy neighborhood can mark you as a potential suspect. And don’t get grumpy about it or you could be “resisting arrest.”

In what has become a familiar pattern, the government defunds services that might help the poor while ramping up law enforcement.  Shut down public housing, then make it a crime to be homeless. Generate no public-sector jobs, then penalize people for falling into debt. The experience of the poor, and especially poor people of color, comes to resemble that of a rat in a cage scrambling to avoid erratically administered electric shocks. And if you should try to escape this nightmare reality into a brief, drug-induced high, it’s “gotcha” all over again, because that of course is illegal too.

One result is our staggering level of incarceration, the highest in the world.  Today, exactly the same number of Americans — 2.3 million — reside in prison as in public housing. And what public housing remains has become ever more prison-like, with random police sweeps and, in a growing number of cities, proposed drug tests for residents. The safety net, or what remains of it, has been transformed into a dragnet.

It is not clear whether economic hard times will finally force us to break the mad cycle of poverty and punishment. With even the official level of poverty increasing — to over 14% in 2010 — some states are beginning to ease up on the criminalization of poverty, using alternative sentencing methods, shortening probation, and reducing the number of people locked up for technical violations like missing court appointments. But others, diabolically enough, are tightening the screws: not only increasing the number of “crimes,” but charging prisoners for their room and board, guaranteeing they’ll be released with potentially criminalizing levels of debt.

So what is the solution to the poverty of so many of America’s working people? Ten years ago, when Nickel and Dimed first came out, I often responded with the standard liberal wish list — a higher minimum wage, universal health care, affordable housing, good schools, reliable public transportation, and all the other things we, uniquely among the developed nations, have neglected to do.

Today, the answer seems both more modest and more challenging: if we want to reduce poverty, we have to stop doing the things that make people poor and keep them that way. Stop underpaying people for the jobs they do. Stop treating working people as potential criminals and let them have the right to organize for better wages and working conditions.

Stop the institutional harassment of those who turn to the government for help or find themselves destitute in the streets. Maybe, as so many Americans seem to believe today, we can’t afford the kinds of public programs that would genuinely alleviate poverty — though I would argue otherwise. But at least we should decide, as a bare minimum principle, to stop kicking people when they’re down.

Barbara Ehrenreich is the author of a number of books, most recently Bright-Sided: How the Relentless Promotion of Positive Thinking Has Undermined America. This essay is a shortened version of a new afterword to her bestselling book Nickel and Dimed: On (Not) Getting By in America, 10th Anniversary Edition, just released by Picador Books.

Excerpted from Nickel and Dimed: On (Not) Getting By in America, 10th Anniversary Edition, published August 2nd by Picador USA. New afterword © 2011 by Barbara Ehrenreich. Excerpted by arrangement with Metropolitan Books, an imprint of Henry Holt and Company, LLC. All rights reserved.

Nickel and Dimed (2011 Version)

For a book about the all-too-human “passions of war,” my 1997 work Blood Rites ended on a strangely inhuman note: I suggested that, whatever distinctly human qualities war calls upon — honor, courage, solidarity, cruelty, and so forth — it might be useful to stop thinking of war in exclusively human terms.  After all, certain species of ants wage war and computers can simulate “wars” that play themselves out on-screen without any human involvement.

More generally, then, we should define war as a self-replicating pattern of activity that may or may not require human participation. In the human case, we know it is capable of spreading geographically and evolving rapidly over time — qualities that, as I suggested somewhat fancifully, make war a metaphorical successor to the predatory animals that shaped humans into fighters in the first place.

A decade and a half later, these musings do not seem quite so airy and abstract anymore. The trend, at the close of the twentieth century, still seemed to be one of ever more massive human involvement in war — from armies containing tens of thousands in the sixteenth century, to hundreds of thousands in the nineteenth, and eventually millions in the twentieth century world wars.

It was the ascending scale of war that originally called forth the existence of the nation-state as an administrative unit capable of maintaining mass armies and the infrastructure — for taxation, weapons manufacture, transport, etc. — that they require. War has been, and we still expect it to be, the most massive collective project human beings undertake. But it has been evolving quickly in a very different direction, one in which human beings have a much smaller role to play.

One factor driving this change has been the emergence of a new kind of enemy, so-called “non-state actors,” meaning popular insurgencies and loose transnational networks of fighters, none of which are likely to field large numbers of troops or maintain expensive arsenals of their own. In the face of these new enemies, typified by al-Qaeda, the mass armies of nation-states are highly ineffective, cumbersome to deploy, difficult to maneuver, and from a domestic point of view, overly dependent on a citizenry that is both willing and able to fight, or at least to have their children fight for them.

Yet just as U.S. military cadets continue, in defiance of military reality, to sport swords on their dress uniforms, our leaders, both military and political, tend to cling to an idea of war as a vast, labor-intensive effort on the order of World War II. Only slowly, and with a reluctance bordering on the phobic, have the leaders of major states begun to grasp the fact that this approach to warfare may soon be obsolete.

Consider the most recent U.S. war with Iraq. According to then-president George W. Bush, the casus belli was the 9/11 terror attacks.  The causal link between that event and our chosen enemy, Iraq, was, however, imperceptible to all but the most dedicated inside-the-Beltway intellectuals. Nineteen men had hijacked airplanes and flown them into the Pentagon and the World Trade Center — 15 of them Saudi Arabians, none of them Iraqis — and we went to war against… Iraq?

Military history offers no ready precedents for such wildly misaimed retaliation. The closest analogies come from anthropology, which provides plenty of cases of small-scale societies in which the death of any member, for any reason, needs to be “avenged” by an attack on a more or less randomly chosen other tribe or hamlet.

Why Iraq? Neoconservative imperial ambitions have been invoked in explanation, as well as the American thirst for oil, or even an Oedipal contest between George W. Bush and his father. There is no doubt some truth to all of these explanations, but the targeting of Iraq also represented a desperate and irrational response to what was, for Washington, an utterly confounding military situation.

We faced a state-less enemy — geographically diffuse, lacking uniforms and flags, invulnerable to invading infantries and saturation bombing, and apparently capable of regenerating itself at minimal expense. From the perspective of Secretary of Defense Donald Rumsfeld and his White House cronies, this would not do.

Since the U.S. was accustomed to fighting other nation-states — geopolitical entities containing such identifiable targets as capital cities, airports, military bases, and munitions plants — we would have to find a nation-state to fight, or as Rumsfeld put it, a “target-rich environment.” Iraq, pumped up by alleged stockpiles of “weapons of mass destruction,” became the designated surrogate for an enemy that refused to play our game.

The effects of this atavistic war are still being tallied: in Iraq, we would have to include civilian deaths estimated at possibly hundreds of thousands, the destruction of civilian infrastructure, and devastating outbreaks of sectarian violence of a kind that, as we should have learned from the dissolution of Yugoslavia, can readily follow the death or removal of a nationalist dictator.

But the effects of war on the U.S. and its allies may end up being almost as tragic. Instead of punishing the terrorists who had attacked the U.S., the war seems to have succeeded in recruiting more such irregular fighters, young men (and sometimes women) willing to die and ready to commit further acts of terror or revenge. By insisting on fighting a more or less randomly selected nation-state, the U.S. may only have multiplied the non-state threats it faces.

Unwieldy Armies

Whatever they may think of what the U.S. and its allies did in Iraq, many national leaders are beginning to acknowledge that conventional militaries are becoming, in a strictly military sense, almost ludicrously anachronistic. Not only are they unsuited to crushing insurgencies and small bands of terrorists or irregular fighters, but mass armies are simply too cumbersome to deploy on short notice.

In military lingo, they are weighed down by their “tooth to tail” ratio — a measure of the number of actual fighters in comparison to the support personnel and equipment the fighters require. Both hawks and liberal interventionists may hanker to airlift tens of thousands of soldiers to distant places virtually overnight, but those soldiers will need to be preceded or accompanied by tents, canteens, trucks, medical equipment, and so forth. “Flyover” rights will have to be granted by neighboring countries; air strips and eventually bases will have to be constructed; supply lines will have to be created and defended — all of which can take months to accomplish.

The sluggishness of the mass, labor-intensive military has become a constant source of frustration to civilian leaders. Irritated by the Pentagon’s hesitation to put “boots on the ground” in Bosnia, Madeleine Albright, then the U.S. ambassador to the United Nations, famously demanded of General Colin Powell, chairman of the Joint Chiefs of Staff, “What good is this marvelous military force if we can never use it?” In 2009, the Obama administration unthinkingly proposed a troop surge in Afghanistan, followed by a withdrawal within a year and a half that would have required some of the troops to start packing up almost as soon as they arrived. It took the U.S. military a full month to organize the transport of 20,000 soldiers to Haiti in the wake of the 2010 earthquake — and they were only traveling 700 miles to engage in a humanitarian relief mission, not a war.

Another thing hobbling mass militaries is the increasing unwillingness of nations, especially the more democratic ones, to risk large numbers of casualties. It is no longer acceptable to drive men into battle at gunpoint or to demand that they fend for themselves on foreign soil. Once thousands of soldiers have been plunked down in a “theater,” they must be defended from potentially hostile locals, a project that can easily come to supersede the original mission.

We may not be able clearly to articulate what American troops were supposed to accomplish in Iraq or Afghanistan, but without question one part of their job has been “force protection.” In what could be considered the inverse of “mission creep,” instead of expanding, the mission now has a tendency to contract to the task of self-defense.

Ultimately, the mass militaries of the modern era, augmented by ever-more expensive weapons systems, place an unacceptable economic burden on the nation-states that support them — a burden that eventually may undermine the militaries themselves. Consider what has been happening to the world’s sole military superpower, the United States. The latest estimate for the cost of the wars in Iraq and Afghanistan is, at this moment, at least $3.2 trillion, while total U.S. military spending equals that of the next 15 countries combined, and adds up to approximately 47% of all global military spending.

To this must be added the cost of caring for wounded and otherwise damaged veterans, which has been mounting precipitously as medical advances allow more of the injured to survive.  The U.S. military has been sheltered from the consequences of its own profligacy by a level of bipartisan political support that has kept it almost magically immune to budget cuts, even as the national debt balloons to levels widely judged to be unsustainable.

The hard right, in particular, has campaigned relentlessly against “big government,” apparently not noticing that the military is a sizable chunk of this behemoth.  In December 2010, for example, a Republican senator from Oklahoma railed against the national debt with this statement: “We're really at war. We're on three fronts now: Iraq, Afghanistan, and the financial tsunami  [arising from the debt] that is facing us.” Only in recent months have some Tea Party-affiliated legislators broken with tradition by declaring their willingness to cut military spending.

How the Warfare State Became the Welfare State

If military spending remains largely sacrosanct even as ever more cuts are demanded to shrink “big government,” what remains is the cutting of domestic spending, especially social programs for the poor, who lack the means to finance politicians, and all too often the incentive to vote as well. From the Reagan years on, the U.S. government has chipped away at dozens of programs that had helped sustain people who are underpaid or unemployed, including housing subsidies, state-supplied health insurance, public transportation, welfare for single parents, college tuition aid, and inner-city economic development projects.

Even the physical infrastructure — bridges, airports, roads, and tunnels — used by people of all classes has been left at dangerous levels of disrepair. Antiwar protestors wistfully point out, year after year, what the cost of our high-tech weapon systems, our global network of more than 1,000 military bases, and our various “interventions” could buy if applied to meeting domestic human needs. But to no effect.  

This ongoing sacrifice of domestic welfare for military “readiness” represents the reversal of a historic trend. Ever since the introduction of mass armies in Europe in the seventeenth century, governments have generally understood that to underpay and underfeed one's troops — and the class of people that supplies them — is to risk having the guns pointed in the opposite direction from that which the officers recommend.  

In fact, modern welfare states, inadequate as they may be, are in no small part the product of war — that is, of governments' attempts to appease soldiers and their families. In the U.S., for example, the Civil War led to the institution of widows' benefits, which were the predecessor of welfare in its Aid to Families with Dependent Children form. It was the bellicose German leader Otto von Bismarck who first instituted national health insurance.

World War II spawned educational benefits and income support for American veterans and led, in the United Kingdom, to a comparatively generous welfare state, including free health care for all. Notions of social justice and fairness, or at least the fear of working class insurrections, certainly played a part in the development of twentieth century welfare states, but there was a pragmatic military motivation as well: if young people are to grow up to be effective troops, they need to be healthy, well-nourished, and reasonably well-educated.

In the U.S., the steady withering of social programs that might nurture future troops even serves, ironically, to justify increased military spending. In the absence of a federal jobs program, Congressional representatives become fierce advocates for weapons systems that the Pentagon itself has no use for, as long as the manufacture of those weapons can provide employment for some of their constituents.

With diminishing funds for higher education, military service becomes a less dismal alternative for young working-class people than the low-paid jobs that otherwise await them. The U.S. still has a civilian welfare state consisting largely of programs for the elderly (Medicare and Social Security). For many younger Americans, however, as well as for older combat veterans, the U.S. military is the welfare state — and a source, however temporary, of jobs, housing, health care, and education.

Eventually, however, the failure to invest in America’s human resources — through spending on health, education, and so forth — undercuts the military itself. In World War I, public health experts were shocked to find that one-third of conscripts were rejected as physically unfit for service; they were too weak and flabby or too damaged by work-related accidents.

Several generations later, in 2010, the U.S. Secretary of Education reported that “75 percent of young Americans, between the ages of 17 to 24, are unable to enlist in the military today because they have failed to graduate from high school, have a criminal record, or are physically unfit.” When a nation can no longer generate enough young people who are fit for military service, that nation has two choices: it can, as a number of prominent retired generals are currently advocating, reinvest in its “human capital,” especially the health and education of the poor, or it can seriously reevaluate its approach to war.

The Fog of (Robot) War

Since the rightward, anti-“big government” tilt of American politics more or less precludes the former, the U.S. has been scrambling to develop less labor-intensive forms of waging war. In fact, this may prove to be the ultimate military utility of the wars in Iraq and Afghanistan: if they have gained the U.S. no geopolitical advantage, they have certainly served as laboratories and testing grounds for forms of future warfare that involve less human, or at least less governmental, commitment.

One step in that direction has been the large-scale use of military contract workers supplied by private companies, which can be seen as a revival of the age-old use of mercenaries.  Although most of the functions that have been outsourced to private companies — including food services, laundry, truck driving, and construction — do not involve combat, they are dangerous, and some contract workers have even been assigned to the guarding of convoys and military bases.

Contractors are still men and women, capable of bleeding and dying — and surprising numbers of them have indeed died.  In the initial six months of 2010, corporate deaths exceeded military deaths in Iraq and Afghanistan for the first time. But the Pentagon has little or no responsibility for the training, feeding, or care of private contractors.  If wounded or psychologically damaged, American contract workers must turn, like any other injured civilian employees, to the Workers’ Compensation system, hence their sense of themselves as a “disposable army.”  By 2009, the trend toward privatization had gone so far that the number of private contractors in Afghanistan exceeded the number of American troops there.

An alternative approach is to eliminate or drastically reduce the military’s dependence on human beings of any kind.  This would have been an almost unthinkable proposition a few decades ago, but technologies employed in Iraq and Afghanistan have steadily stripped away the human role in war. Drones, directed from sites up to 7,500 miles away in the western United States, are replacing manned aircraft.

Video cameras, borne by drones, substitute for human scouts or information gathered by pilots. Robots disarm roadside bombs. When American forces invaded Iraq in 2003, no robots accompanied them; by 2008, there were 12,000 participating in the war.  Only a handful of drones were used in the initial invasion; today, the U.S. military has an inventory of more than 7,000, ranging from the familiar Predator to tiny Ravens and Wasps used to transmit video images of events on the ground.  Far stranger fighting machines are in the works, like swarms of lethal “cyborg insects” that could potentially replace human infantry.

These developments are by no means limited to the U.S. The global market for military robotics and unmanned military vehicles is growing fast, and includes Israel (a major pioneer in the field), Russia, the United Kingdom, Iran, South Korea, and China. Turkey is reportedly readying a robot force for strikes against Kurdish insurgents; Israel hopes to eventually patrol the Gaza border with “see-shoot” robots that will destroy people perceived as transgressors as soon as they are detected.

It is hard to predict how far the automation of war and the substitution of autonomous robots for human fighters will go. On the one hand, humans still have the advantage of superior visual discrimination.  Despite decades of research in artificial intelligence, computers cannot make the kind of simple distinctions — as in determining whether a cow standing in front of a barn is a separate entity or a part of the barn — that humans can make in a fraction of a second.

Thus, as long as there is any premium on avoiding civilian deaths, humans have to be involved in processing the visual information that leads, for example, to the selection of targets for drone attacks. If only as the equivalent of seeing-eye dogs, humans will continue to have a role in war, at least until computer vision improves.

On the other hand, the human brain lacks the bandwidth to process all the data flowing into it, especially as new technologies multiply that data. In the clash of traditional mass armies, under a hail of arrows or artillery shells, human warriors often found themselves confused and overwhelmed, a condition attributed to “the fog of war.” Well, that fog is growing a lot thicker. U.S. military officials, for instance, put the blame on “information overload” for the killing of 23 Afghan civilians in February 2010, and the New York Times reported that:

“Across the military, the data flow has surged; since the attacks of 9/11, the amount of intelligence gathered by remotely piloted drones and other surveillance technologies has risen 1,600 percent. On the ground, troops increasingly use hand-held devices to communicate, get directions and set bombing coordinates. And the screens in jets can be so packed with data that some pilots call them “drool buckets” because, they say, they can get lost staring into them.”

When the sensory data coming at a soldier is augmented by a flood of instantaneously transmitted data from distant cameras and computer search engines, there may be no choice but to replace the sloppy “wet-ware” of the human brain with a robotic system for instant response.

War Without Humans

Once set in place, the cyber-automation of war is hard to stop.  Humans will cling to their place “in the loop” as long as they can, no doubt insisting that the highest level of decision-making — whether to go to war and with whom — be reserved for human leaders. But it is precisely at the highest levels that decision-making may most need automating. A head of state faces a blizzard of factors to consider, everything from historical analogies and satellite-derived intelligence to assessments of the readiness of potential allies. Furthermore, as the enemy automates its military, or in the case of a non-state actor, simply adapts to our level of automation, the window of time for effective responses will grow steadily narrower. Why not turn to a high-speed computer? It is certainly hard to imagine a piece of intelligent hardware deciding to respond to the 9/11 attacks by invading Iraq.

So, after at least 10,000 years of intra-species fighting — of scorched earth, burned villages, razed cities, and piled up corpses, as well, of course, as all the great epics of human literature — we have to face the possibility that the institution of war might no longer need us for its perpetuation. Human desires, especially for the Earth’s diminishing supply of resources, will still instigate wars for some time to come, but neither human courage nor human bloodlust will carry the day on the battlefield.

Computers will assess threats and calibrate responses; drones will pinpoint enemies; robots might roll into the streets of hostile cities. Beyond the individual battle or smaller-scale encounter, decisions as to whether to match attack with counterattack, or one lethal technological innovation with another, may also be eventually ceded to alien minds.

This should not come as a complete surprise. Just as war has shaped human social institutions for millennia, so has it discarded them as the evolving technology of war rendered them useless. When war was fought with blades by men on horseback, it favored the rule of aristocratic warrior elites. When the mode of fighting shifted to action-at-a-distance weapons like bows and guns, the old elites had to bow to the central authority of kings, who, in turn, were undone by the democratizing forces unleashed by new mass armies.

Even patriarchy cannot depend on war for its long-term survival, since the wars in Iraq and Afghanistan have, at least within U.S. forces, established women’s worth as warriors. Over the centuries, human qualities once deemed indispensable to war fighting — muscular power, manliness, intelligence, judgment — have one by one become obsolete or been ceded to machines.

What will happen then to the “passions of war”? Except for individual acts of martyrdom, war is likely to lose its glory and luster. Military analyst P.W. Singer quotes an Air Force captain musing about whether the new technologies will “mean that brave men and women will no longer face death in combat,” only to reassure himself that “there will always be a need for intrepid souls to fling their bodies across the sky.”

Perhaps, but in a 2010 address to Air Force Academy cadets, an under secretary of defense delivered the “bad news” that most of them would not be flying airplanes, which are increasingly unmanned. War will continue to be used against insurgencies as well as to “take out” the weapons facilities, command centers, and cities of designated rogue states. It may even continue to fascinate its aficionados, in the manner of computer games. But there will be no triumphal parades for killer nano-bugs, no epics about unmanned fighter planes, no monuments to fallen bots.

And in that may lie our last hope. With the decline of mass militaries and their possible replacement by machines, we may finally see that war is not just an extension of our needs and passions, however base or noble. Nor is it likely to be even a useful test of our courage, fitness, or national unity. War has its own dynamic or — in case that sounds too anthropomorphic — its own grim algorithms to work out. As it comes to need us less, maybe we will finally see that we don’t need it either. We can leave it to the ants.

Barbara Ehrenreich is the author of a number of books including Nickel and Dimed: On (Not) Getting By in America and Bright-Sided: How the Relentless Promotion of Positive Thinking Has Undermined America. This essay is a revised and updated version of the afterword to the British edition of Blood Rites: Origins and History of the Passions of War (Granta, 2011).

Copyright 2011 Barbara Ehrenreich

Not So Pretty in Pink

Has feminism been replaced by the pink-ribbon breast cancer cult? When the House of Representatives passed the Stupak amendment, which would take abortion rights away even from women who have private insurance, the female response ranged from muted to inaudible.

A few weeks later, when the United States Preventive Services Task Force recommended that regular screening mammography not start until age 50, all hell broke loose. Sheryl Crow, Whoopi Goldberg, and Olivia Newton-John raised their voices in protest; a few dozen non-boldface women picketed the Department of Health and Human Services.  If you didn’t look too closely, it almost seemed as if the women’s health movement of the 1970s and 1980s had returned in full force.

Never mind that Dr. Susan Love, author of what the New York Times dubbed “the bible for women with breast cancer,” endorses the new guidelines along with leading women’s health groups like Breast Cancer Action, the National Breast Cancer Coalition, and the National Women’s Health Network (NWHN). For years, these groups have been warning about the excessive use of screening mammography in the U.S., which carries its own dangers and leads to no detectible lowering of breast cancer mortality relative to less mammogram-happy nations.

Nonetheless, on CNN last week, we had the unsettling spectacle of NWHN director and noted women’s health advocate Cindy Pearson speaking out for the new guidelines, while ordinary women lined up to attribute their survival from the disease to mammography. Once upon a time, grassroots women challenged the establishment by figuratively burning their bras. Now, in some masochistic perversion of feminism, they are raising their voices to yell, “Squeeze our tits!”

When the Stupak anti-choice amendment passed, and so entered the health reform bill, no congressional representative stood up on the floor of the House to recount how access to abortion had saved her life or her family’s well-being. And where were the tea-baggers when we needed them? If anything represents the true danger of “government involvement” in health care, it’s a health reform bill that — if the Senate enacts something similar — will snatch away all but the wealthiest women’s right to choose.

It’s not just that abortion is deemed a morally trickier issue than mammography. To some extent, pink-ribbon culture has replaced feminism as a focus of female identity and solidarity. When a corporation wants to signal that it’s “woman friendly,” what does it do? It stamps a pink ribbon on its widget and proclaims that some minuscule portion of the profits will go to breast cancer research. I’ve even seen a bottle of Shiraz called “Hope” with a pink ribbon on its label, but no information, alas, on how much you have to drink to achieve the promised effect. When Laura Bush traveled to Saudi Arabia in 2007, what grave issue did she take up with the locals? Not women’s rights (to drive, to go outside without a man, etc.), but “breast cancer awareness.” In the post-feminist United States, issues like rape, domestic violence, and unwanted pregnancy seem to be too edgy for much public discussion, but breast cancer is all apple pie.

So welcome to the Women’s Movement 2.0: Instead of the proud female symbol — a circle on top of a cross — we have a droopy ribbon. Instead of embracing the full spectrum of human colors — black, brown, red, yellow, and white — we stick to princess pink. While we used to march in protest against sexist laws and practices, now we race or walk “for the cure.” And while we once sought full “consciousness” of all that oppresses us, now we’re content to achieve “awareness,” which has come to mean one thing — dutifully baring our breasts for the annual mammogram.

Look, the issue here isn’t health-care costs. If the current levels of screening mammography demonstrably saved lives, I would say go for it, and damn the expense. But the numbers are increasingly insistent: Routine mammographic screening of women under 50 does not reduce breast cancer mortality in that group, nor do older women necessarily need an annual mammogram. In fact, the whole dogma about “early detection” is shaky, as Susan Love reminds us:  the idea has been to catch cancers early, when they’re still small, but some tiny cancers are viciously aggressive, and some large ones aren’t going anywhere.

One response to the new guidelines has been that numbers don’t matter — only individuals do — and if just one life is saved, that’s good enough. So OK, let me cite my own individual experience. In 2000, at the age of 59, I was diagnosed with Stage II breast cancer on the basis of one dubious mammogram followed by a really bad one, followed by a biopsy.  Maybe I should be grateful that the cancer was detected in time, but the truth is, I’m not sure whether these mammograms detected the tumor or, along with many earlier ones, contributed to it: One known environmental cause of breast cancer is radiation, in amounts easily accumulated through regular mammography.

And why was I bothering with this mammogram in the first place? I had long ago made the decision not to spend my golden years undergoing cancer surveillance, but I wanted to get my Hormone Replacement Therapy (HRT) prescription renewed, and the nurse practitioner wouldn’t do that without a fresh mammogram.

As for the HRT, I was taking it because I had been convinced, by the prevailing medical propaganda, that HRT helps prevent heart disease and Alzheimer’s. In 2002, we found out that HRT is itself a risk factor for breast cancer (as well as being ineffective at warding off heart disease and Alzheimer’s), but we didn’t know that in 2000. So did I get breast cancer because of the HRT — and possibly because of the mammograms themselves — or did HRT lead to the detection of a cancer I would have gotten anyway?

I don’t know, but I do know that that biopsy was followed by the worst six months of my life, spent bald and barfing my way through chemotherapy. This is what’s at stake here: Not only the possibility that some women may die because their cancers go undetected, but that many others will lose months or years of their lives to debilitating and possibly unnecessary treatments.

You don’t have to be suffering from “chemobrain” (chemotherapy-induced cognitive decline) to discern evil, iatrogenic, profit-driven forces at work here.  In a recent column on the new guidelines, patient-advocate Naomi Freundlich raises the possibility that “entrenched interests — in screening, surgery, chemotherapy and other treatments associated with diagnosing more and more cancers — are impeding scientific evidence.” I am particularly suspicious of the oncologists, who saw their incomes soar starting in the late 80s when they began administering and selling chemotherapy drugs themselves in their ghastly, pink-themed, “chemotherapy suites.” Mammograms recruit women into chemotherapy, and of course, the pink-ribbon cult recruits women into mammography.

What we really need is a new women’s health movement, one that’s sharp and skeptical enough to ask all the hard questions: What are the environmental (or possibly life-style) causes of the breast cancer epidemic? Why are existing treatments like chemotherapy so toxic and heavy-handed? And, if the old narrative of cancer’s progression from “early” to “late” stages no longer holds, what is the course of this disease (or diseases)? What we don’t need, no matter how pretty and pink, is a ladies’ auxiliary to the cancer-industrial complex.

Barbara Ehrenreich is the author of 17 books, including the bestsellers Nickel and Dimed and Bait and Switch. A frequent contributor to Harper’s and the Nation, she has also been a columnist at the New York Times and Time magazine. Her seventeenth book, Bright-Sided: How the Relentless Promotion of Positive Thinking Has Undermined America (Metropolitan Books), has just been published.

Copyright 2009 Barbara Ehrenreich

The Swine Flu Vaccine Screw-up

If you can’t find any swine flu vaccine for your kids, it won’t be for a lack of positive thinking. In fact, the whole flu snafu is being blamed on “undue optimism” on the part of both the Obama administration and Big Pharma.

Optimism is supposed to be good for our health. According to the academic “positive psychologists,” as well as legions of unlicensed life coaches and inspirational speakers, optimism wards off common illnesses, contributes to recovery from cancer, and extends longevity. To its promoters, optimism is practically a miracle vaccine, so essential that we need to start inoculating Americans with it in the public schools — in the form of “optimism training.”

But optimism turns out to be less than salubrious when it comes to public health. In July, the federal government promised to have 160 million doses of H1N1 vaccine ready for distribution by the end of October. Instead, only 28 million doses are now ready to go, and optimism is the obvious culprit. “Road to Flu Vaccine Shortfall, Paved With Undue Optimism,” was the headline of a front page article in the October 26th New York Times. In the conventional spin, the vaccine shortage is now “threatening to undermine public confidence in government.” If the federal government couldn’t get this right, the pundits are already asking, how can we trust it with health reform?

But let’s stop a minute and also ask: Who really screwed up here — the government or private pharmaceutical companies, including GlaxoSmithKline, Novartis, and three others that had agreed to manufacture and deliver the vaccine by late fall? Last spring and summer, those companies gleefully gobbled up $2 billion worth of government contracts for vaccine production, promising to have every American, or at least every American child and pregnant woman, supplied with vaccine before trick-or-treating season began.

According to Health and Human Services Secretary Kathleen Sebelius, the government was misled by these companies, which failed to report manufacturing delays as they arose. Her department, she says, was “relying on the manufacturers to give us their numbers, and as soon as we got numbers we put them out to the public. It does appear now that those numbers were overly rosy.”

If, in fact, there’s a political parable here, it’s about Big Government’s sweetly trusting reliance on Big Business to safeguard the public health: Let the private insurance companies manage health financing; let profit-making hospital chains deliver health care; let Big Pharma provide safe and affordable medications. As it happens, though, all these entities have a priority that regularly overrides the public’s health, and that is, of course, profit — which has led insurance companies to function as “death panels,” excluding those who might ever need care, and for-profit hospitals to turn away the indigent, the pregnant, and the uninsured.

As for Big Pharma, the truth is that they’re just not all that into vaccines, traditionally preferring to manufacture drugs for such plagues as erectile dysfunction, social anxiety, and restless leg syndrome. Vaccines can be tricky and less than maximally profitable to manufacture. They go out of style with every microbial mutation, and usually it’s the government, rather than cunning direct-to-consumer commercials, that determines who gets them. So it should have been no surprise that Big Pharma approached the H1N1 problem ploddingly, using a 50-year-old technology involving the production of the virus in chicken eggs, a method long since abandoned by China and the European Union.

Chicken eggs are fine for omelets, but they have quickly proved to be a poor growth medium for the viral “seed” strain used to make H1N1 vaccine. There are alternative “cell culture” methods that could produce the vaccine much faster, but in complete defiance of the conventional wisdom that private enterprise is always more innovative and resourceful than government, Big Pharma did not demand that they be made available for this year’s swine flu epidemic. Just for the record, those alternative methods have been developed with government funding, which is also the source of almost all our basic knowledge of viruses.

So, thanks to the drug companies, optimism has been about as effective in warding off H1N1 as amulets or fairy dust. Both the government and Big Pharma were indeed overly optimistic about the latter’s ability to supply the vaccine, leaving those of us who are involved in the care of small children with little to rely on but hope — hope that the epidemic will fade out on its own, hope that our loved ones have the luck to survive it.

And contrary to the claims of the positive psychologists, optimism itself is neither an elixir nor a life-saving vaccine. Recent studies show that optimism — or positive feelings — does not affect recovery from a variety of cancers, including those of the breast, lungs, neck, and throat. Furthermore, the evidence that optimism prolongs life has turned out to be shaky at best: one study of nuns frequently cited as proof positive of optimism’s healthful effects turned out, in fact, only to show that nuns who wrote more eloquently about their vows in their early twenties tended to outlive those whose written statements were clunkier.

Are we ready to abandon faith-based medicine of both the individual and public health variety? Faith in private enterprise and the market has now left us open to a swine flu epidemic; faith alone — in the form of optimism or hope — does not kill viruses or cancer cells. On the public health front, we need to socialize vaccine manufacture as well as its distribution. Then, if the supply falls short, we can always impeach the president. On the individual front, there’s always soap and water.

Barbara Ehrenreich is the author of 16 books, including the bestsellers Nickel and Dimed and Bait and Switch. A frequent contributor to Harper’s and the Nation, she has also been a columnist at the New York Times and Time magazine. Her seventeenth book, Bright-Sided: How the Relentless Promotion of Positive Thinking Has Undermined America (Metropolitan Books), has just been published. An examination of recent studies of the medical ineffectiveness of positive thinking, mentioned in this essay, can be found in the book.

Copyright 2009 Barbara Ehrenreich

Are Women Getting Sadder?

Feminism made women miserable. This, anyway, seems to be the most popular takeaway from “The Paradox of Declining Female Happiness,” a recent study by Betsey Stevenson and Justin Wolfers which purports to show that women have become steadily unhappier since 1972. Maureen Dowd and Arianna Huffington greeted the news with somber perplexity, but the more common response has been a triumphant: I told you so.

On Slate’s DoubleX website, a columnist concluded from the study that “the feminist movement of the 1960s and 1970s gave us a steady stream of women’s complaints disguised as manifestos… and a brand of female sexual power so promiscuous that it celebrates everything from prostitution to nipple piercing as a feminist act — in other words, whine, womyn, and thongs.” Or as Phyllis Schlafly put it, more soberly: “[T]he feminist movement taught women to see themselves as victims of an oppressive patriarchy in which their true worth will never be recognized and any success is beyond their reach… [S]elf-imposed victimhood is not a recipe for happiness.”

But it’s a little too soon to blame Gloria Steinem for our dependence on SSRIs. For all the high-level head-scratching induced by the Stevenson and Wolfers study, hardly anyone has pointed out (1) that there are some issues with happiness studies in general, (2) that there are some reasons to doubt this study in particular, or (3) that, even if you take this study at face value, it has nothing at all to say about the impact of feminism on anyone’s mood.

For starters, happiness is an inherently slippery thing to measure or define. Philosophers have debated what it is for centuries, and even if we were to define it simply as a greater frequency of positive feelings than negative ones, when we ask people if they are happy, we are asking them to arrive at some sort of average over many moods and moments. Maybe I was upset earlier in the day after I opened the bills, but then was cheered up by a call from a friend, so what am I really?

In one well-known psychological experiment, subjects were asked to answer a questionnaire on life satisfaction, but only after they had performed the apparently irrelevant task of photocopying a sheet of paper for the experimenter. For a randomly chosen half of the subjects, a dime had been left for them to find on the copy machine. As two economists summarize the results: “Reported satisfaction with life was raised substantially by the discovery of the coin on the copy machine — clearly not an income effect.”

As for the particular happiness study under discussion, the red flags start popping up as soon as you look at the data. Not to be anti-intellectual about it, but the raw data on how men and women respond to the survey reveal no discernible trend to the naked eyeball. Only by performing an occult statistical manipulation called “ordered probit estimates” do the authors manage to tease out any trend at all, and it is a tiny one: “Women were one percentage point less likely than men to say they were not too happy at the beginning of the sample [1972]; by 2006 women were one percentage more likely to report being in this category.” Differences of that magnitude would be stunning if you were measuring, for example, the speed of light under different physical circumstances, but when the subject is as elusive as happiness — well, we are not talking about paradigm-shifting results.

Furthermore, the idea that women have been sliding toward despair is contradicted by the one objective measure of unhappiness the authors offer: suicide rates. Happiness is, of course, a subjective state, but suicide is a cold, hard fact, and the suicide rate has been the gold standard of misery since sociologist Emile Durkheim wrote the book on it in 1897. As Stevenson and Wolfers report — somewhat sheepishly, we must imagine — “contrary to the subjective well-being trends we document, female suicide rates have been falling, even as male suicide rates have remained roughly constant through most of our sample [1972-2006].” Women may get the blues; men are more likely to get a bullet through the temple.

Another distracting little data point that no one, including the authors, seems to have much to say about is that, while “women” have been getting marginally sadder, black women have been getting happier and happier. To quote the authors: “…happiness has trended quite strongly upward for both female and male African Americans… Indeed, the point estimates suggest that well-being may have risen more strongly for black women than for black men.” The study should more accurately be titled “The Paradox of Declining White Female Happiness,” only that might have suggested that the problem could be cured with melanin and Restylane.

But let’s assume the study is sound and that (white) women have become less happy relative to men since 1972. Does that mean that feminism ruined their lives?

Not according to Stevenson and Wolfers, who find that “the relative decline in women’s well-being… holds for both working and stay-at-home mothers, for those married and divorced, for the old and the young, and across the education distribution” — as well as for both mothers and the childless. If feminism were the problem, you might expect divorced women to be less happy than married ones and employed women to be less happy than stay-at-homes. As for having children, the presumed premier source of female fulfillment: They actually make women less happy.

And if the women’s movement was such a big downer, you’d expect the saddest women to be those who had some direct exposure to the noxious effects of second wave feminism. As the authors report, however, “there is no evidence that women who experienced the protests and enthusiasm in the 1970s have seen their happiness gap widen by more than for those women [who] were just being born during that period.”

What this study shows, if anything, is that neither marriage nor children make women happy. (The results are not in yet on nipple piercing.) Nor, for that matter, does there seem to be any problem with “too many choices,” “work-life balance,” or the “second shift.” If you believe Stevenson and Wolfers, women’s happiness is supremely indifferent to the actual conditions of their lives, including poverty and racial discrimination. Whatever “happiness” is…

So why all the sudden fuss about the Wharton study, which first leaked out two years ago anyway? Mostly because it’s become a launching pad for a new book by the prolific management consultant Marcus Buckingham, best known for First, Break All the Rules and Now, Discover Your Strengths. His new book, Find Your Strongest Life: What the Happiest and Most Successful Women Do Differently, is a cookie-cutter classic of the positive-thinking self-help genre: First, the heart-wrenching quotes from unhappy women identified only by their email names (Countess1, Luveyduvy, etc.), then the stories of “successful” women, followed by the obligatory self-administered test to discover “the role you were bound to play” (Creator, Caretaker, Influencer, etc.), all bookended with an ad for the many related products you can buy, including a “video introduction” from Buckingham, a “participant’s guide” containing “exercises” to get you to happiness, and a handsome set of “Eight Strong Life Plans” to pick from. The Huffington Post has given Buckingham a column in which to continue his marketing campaign.

It’s an old story: If you want to sell something, first find the terrible affliction that it cures. In the 1980s, as silicone implants were taking off, the doctors discovered “micromastia” — the “disease” of small-breastedness. More recently, as big pharma searches furiously for a female Viagra, an amazingly high 43% of women have been found to suffer from “Female Sexual Dysfunction,” or FSD. Now, it’s unhappiness, and the range of potential “cures” is dazzling: Seagrams, Godiva, and Harlequin, take note.

Barbara Ehrenreich is the author of 16 books, including the bestsellers Nickel and Dimed and Bait and Switch. A frequent contributor to Harper’s and the Nation, she has also been a columnist at the New York Times and Time magazine. Her seventeenth book, Bright-Sided: How the Relentless Promotion of Positive Thinking Has Undermined America (Metropolitan Books), has just been published.

Copyright 2009 Barbara Ehrenreich
