Is the Internet Making Us Crazy? Yes!

By Tony Dokoupil

Before he launched the most viral video in Internet history, Jason Russell was a half-hearted Web presence. His YouTube account was dead, and his Facebook and Twitter pages were a trickle of kid pictures and home-garden updates. The Web wasn’t made “to keep track of how much people like us,” he thought, and when his own tech habits made him feel like “a genius, an addict, or a megalomaniac,” he unplugged for days, believing, as the humorist Andy Borowitz put it in a tweet that Russell tagged as a favorite, “it’s important to turn off our computers and do things in the real world.”

But this past March Russell struggled to turn off anything. He forwarded a link to “Kony 2012,” his deeply personal Web documentary about the African warlord Joseph Kony. The idea was to use social media to make Kony famous as the first step to stopping his crimes. And it seemed to work: the film hurtled through cyberspace, clocking more than 70 million views in less than a week. But something happened to Russell in the process. The same digital tools that supported his mission seemed to tear at his psyche, exposing him to nonstop kudos and criticisms, and ending his arm’s-length relationship with new media.

He slept two hours in the first four days, producing a swirl of bizarre Twitter updates. He sent a link to “I Met the Walrus,” a short animated interview with John Lennon, urging followers to “start training your mind.” He sent a picture of his tattoo, TIMSHEL, a biblical word about man’s choice between good and evil. At one point he uploaded and commented on a digital photo of a text message from his mother. At another he compared his life to the mind-bending movie Inception, “a dream inside a dream.”

On the eighth day of his strange, 21st-century vortex, he sent a final tweet—a quote from Martin Luther King Jr.: “If you can’t fly, then run, if you can’t run, then walk, if you can’t walk, then crawl, but whatever you do, you have to keep moving forward”—and walked back into the real world. He took off his clothes and went to the corner of a busy intersection near his home in San Diego, where he repeatedly slapped the concrete with both palms and ranted about the devil. This too became a viral video.

Afterward Russell was diagnosed with “reactive psychosis,” a form of temporary insanity. It had nothing to do with drugs or alcohol, his wife, Danica, stressed in a blog post, and everything to do with the machine that kept Russell connected even as he was breaking apart. “Though new to us,” Danica continued, “doctors say this is a common experience,” given Russell’s “sudden transition from relative anonymity to worldwide attention—both raves and ridicules.” More than four months later, Jason is out of the hospital, his company says, but he is still in recovery. His wife took a “month of silence” on Twitter. Jason’s social-media accounts remain dark.

Questions about the Internet’s deleterious effects on the mind are at least as old as hyperlinks. But even among Web skeptics, the idea that a new technology might influence how we think and feel—let alone contribute to a great American crack-up—was considered silly and naive, like waving a cane at electric light or blaming the television for kids these days. Instead, the Internet was seen as just another medium, a delivery system, not a diabolical machine. It made people happier and more productive. And where was the proof otherwise?

Now, however, the proof is starting to pile up. The first good, peer-reviewed research is emerging, and the picture is much gloomier than the trumpet blasts of Web utopians have allowed. The current incarnation of the Internet—portable, social, accelerated, and all-pervasive—may be making us not just dumber or lonelier but more depressed and anxious, prone to obsessive-compulsive and attention-deficit disorders, even outright psychotic. Our digitized minds can scan like those of drug addicts, and normal people are breaking down in sad and seemingly new ways.

In the summer of 1996, seven young researchers at MIT blurred the lines between man and computer, living simultaneously in the physical and virtual worlds. They carried keyboards in their pockets, radio-transmitters in their backpacks, and a clip-on screen in front of their eyes. They called themselves “cyborgs”—and they were freaks. But as Sherry Turkle, a psychologist at MIT, points out, “we are all cyborgs now.” This life of continuous connection has come to seem normal, but that’s not the same as saying that it’s healthy or sustainable, as technology—to paraphrase the old line about alcohol—becomes the cause of, and solution to, all of life’s problems.

In less than the span of a single childhood, Americans have merged with their machines, staring at a screen for at least eight hours a day, more time than we spend on any other activity including sleeping. Teens fit some seven hours of screen time into the average school day; 11, if you count time spent multitasking on several devices. When President Obama last ran for office, the iPhone had yet to be launched. Now smartphones outnumber the old models in America, and more than a third of users get online before getting out of bed.

Meanwhile, texting has become like blinking: the average person, regardless of age, sends or receives about 400 texts a month, four times the 2007 number. The average teen processes an astounding 3,700 texts a month, double the 2007 figure. And more than two thirds of these normal, everyday cyborgs, myself included, report feeling their phone vibrate when in fact nothing is happening. Researchers call it “phantom-vibration syndrome.”

Altogether the digital shifts of the last five years call to mind a horse that has sprinted out from underneath its rider, dragging the person who once held the reins. No one is arguing for some kind of Amish future. But the research is now making it clear that the Internet is not “just” another delivery system. It is creating a whole new mental environment, a digital state of nature where the human mind becomes a spinning instrument panel, and few people will survive unscathed.

“This is an issue as important and unprecedented as climate change,” says Susan Greenfield, a pharmacology professor at Oxford University who is working on a book about how digital culture is rewiring us—and not for the better. “We could create the most wonderful world for our kids but that’s not going to happen if we’re in denial and people sleepwalk into these technologies and end up glassy-eyed zombies.”

Does the Internet make us crazy? Not the technology itself or the content, no. But a Newsweek review of findings from more than a dozen countries finds the answers pointing in a similar direction. Peter Whybrow, the director of the Semel Institute for Neuroscience and Human Behavior at UCLA, argues that “the computer is like electronic cocaine,” fueling cycles of mania followed by depressive stretches. The Internet “leads to behavior that people are conscious is not in their best interest and does leave them anxious and does make them act compulsively,” says Nicholas Carr, whose book The Shallows, about the Web’s effect on cognition, was nominated for a Pulitzer Prize. It “fosters our obsessions, dependence, and stress reactions,” adds Larry Rosen, a California psychologist who has researched the Net’s effect for decades. It “encourages—and even promotes—insanity.”

Fear that the Internet and mobile technology contribute to addiction—not to mention the often related ADHD and OCD—has persisted for decades, but for most of that time the naysayers prevailed, often puckishly. “What’s next? Microwave abuse and Chapstick addiction?” wrote a peer reviewer for one of the leading psychiatric journals, rejecting a national study of problematic Internet use in 2006. The Diagnostic and Statistical Manual of Mental Disorders has never included a category of machine-human interactions.

But that view is suddenly on the outs. When the new DSM is released next year, Internet Addiction Disorder will be included for the first time, albeit in an appendix tagged for “further study.” China, Taiwan, and Korea recently accepted the diagnosis, and began treating problematic Web use as a grave national health crisis. In those countries, where tens of millions of people (and as much as 30 percent of teens) are considered Internet-addicted, mostly to gaming, virtual reality, and social media, the story is sensational front-page news. One young couple neglected their infant to death while nourishing a virtual baby online. A young man fatally bludgeoned his mother for suggesting he log off (and then used her credit card to rack up more hours). At least 10 ultra-Web users, serviced by one-click noodle delivery, have died of blood clots from sitting too long.

Now the Korean government is funding treatment centers, and coordinating a late-night Web shutdown for young people. China, meanwhile, has launched a mothers’ crusade for safe Web habits, turning to that approach after it emerged that some doctors were using electro-shock and severe beatings to treat Internet-addicted teens.

“There’s just something about the medium that’s addictive,” says Elias Aboujaoude, a psychiatrist at Stanford University School of Medicine, where he directs the Obsessive Compulsive Disorder Clinic and Impulse Control Disorders Clinic. “I’ve seen plenty of patients who have no history of addictive behavior—or substance abuse of any kind—become addicted via the Internet and these other technologies.”

His 2006 study of problematic Web habits (the one that was puckishly rejected) was later published, forming the basis for his recent book Virtually You, about the fallout expected from the Web’s irresistible allure. Even among a demographic of middle-aged landline users—the average respondent was in his 40s, white, and making more than $50,000 a year—Aboujaoude found that more than one in eight showed at least one sign of an unhealthy attachment to the Net. More recent surveys that recruit people already online have found American numbers on a par with those in Asia.

Then there was the University of Maryland’s 2010 “Unplugged” experiment that asked 200 undergrads to forgo all Web and mobile technologies for a day and to keep a diary of their feelings. “I clearly am addicted and the dependency is sickening,” reported one student in the study. “Media is my drug,” wrote another. At least two other schools haven’t even been able to get such an experiment off the ground for lack of participants. “Most college students are not just unwilling, but functionally unable, to be without their media links to the world,” the University of Maryland concluded.

That same year two psychiatrists in Taiwan made headlines with the idea of iPhone addiction disorder. They documented two cases from their own practices: one involved a high-school boy who ended up in an asylum after his iPhone usage reached 24 hours a day. The other featured a 31-year-old saleswoman who used her phone while driving. Both cases might have been laughed off if not for a 200-person Stanford study of iPhone habits released at the same time. It found that one in 10 users feels “fully addicted” to his or her phone. All but 6 percent of the sample admitted some level of compulsion, while 3 percent won’t let anyone else touch their phones.

In the two years since, concern over the Web’s pathological stickiness has only intensified. In April, doctors told The Times of India about an anecdotal uptick in “Facebook addiction.” The latest details of America’s Web obsession are found in Larry Rosen’s new book, iDisorder, which, despite the hucksterish title, comes with the imprimatur of the world’s largest academic publisher. His team surveyed 750 people, a spread of teens and adults who represented the Southern California census, detailing their tech habits, their feelings about those habits, and their scores on a series of standard tests of psychiatric disorders. He found that most respondents, with the exception of those over the age of 50, check text messages, email or their social network “all the time” or “every 15 minutes.” More worryingly, he also found that those who spent more time online had more “compulsive personality traits.”

Perhaps not that surprising: those who want the most time online feel compelled to get it. But in fact these users don’t exactly want to be so connected. It’s not quite free choice that drives most young corporate employees (45 and under) to keep their BlackBerrys in the bedroom within arm’s reach, per a 2011 study; or free choice, per another 2011 study, that makes 80 percent of vacationers bring along laptops or smartphones so they can check in with work while away; or free choice that leads smartphone users to check their phones before bed, in the middle of the night, if they stir, and within minutes of waking up.

We may appear to be choosing to use this technology, but in fact we are being dragged to it by the potential of short-term rewards. Every ping could be a social, sexual, or professional opportunity, and we get a mini-reward, a squirt of dopamine, for answering the bell. “These rewards serve as jolts of energy that recharge the compulsion engine, much like the frisson a gambler receives as a new card hits the table,” MIT media scholar Judith Donath recently told Scientific American. “Cumulatively, the effect is potent and hard to resist.”

Recently it became possible to watch this kind of Web use rewire the brain. In 2008 Gary Small, the head of UCLA’s Memory and Aging Research Center, was the first to document changes in the brain as a result of even moderate Internet use. He rounded up 24 people, half of them experienced Web users, half of them newbies, and he passed them each through a brain scanner. The difference was striking, with the Web users displaying fundamentally altered prefrontal cortexes. But the real surprise was what happened next. The novices went away for a week, and were asked to spend a total of five hours online and then return for another scan. “The naive subjects had already rewired their brains,” he later wrote, musing darkly about what might happen when we spend more time online.

The brains of Internet addicts, it turns out, look like the brains of drug and alcohol addicts. In a study published in January, Chinese researchers found “abnormal white matter”—essentially extra nerve cells built for speed—in the areas charged with attention, control, and executive function. A parallel study found similar changes in the brains of videogame addicts. And both studies come on the heels of other Chinese results that link Internet addiction to “structural abnormalities in gray matter,” namely shrinkage of 10 to 20 percent in the area of the brain responsible for processing speech, memory, motor control, emotion, sensory input, and other information. And worse, the shrinkage never stopped: the more time online, the more the brain showed signs of “atrophy.”

While brain scans don’t reveal which came first, the abuse or the brain changes, many clinicians feel their own observations confirmed. “There’s little doubt we’re becoming more impulsive,” says Stanford’s Aboujaoude, and one reason for this is technology use. He points to the rise in OCD and ADHD diagnoses, the latter of which has risen 66 percent in the last decade. “There is a cause and effect.”

And don’t kid yourself: the gap between an “Internet addict” and John Q. Public is thin to nonexistent. One of the early flags for addiction was spending more than 38 hours a week online. By that definition, we are all addicts now, many of us by Wednesday afternoon, Tuesday if it’s a busy week. Current tests for Internet addiction are qualitative, casting an uncomfortably wide net, including people who admit that yes, they are restless, secretive, or preoccupied with the Web and that they have repeatedly made unsuccessful efforts to cut back. But if this is unhealthy, it’s clear many Americans don’t want to be well.

Like addiction, the digital connection to depression and anxiety was also once a near laughable assertion. A 1998 Carnegie Mellon study found that Web use over a two-year period was linked to blue moods, loneliness, and the loss of real-world friends. But the subjects all lived in Pittsburgh, critics sneered. Besides, the Net might not bring you chicken soup, but it means the end of solitude, a global village of friends, and friends you haven’t met yet. Sure enough, when Carnegie Mellon checked back in with the denizens of Steel City a few years later, they were happier than ever.

But the black crow is back on the wire. In the past five years, numerous studies have duplicated the original Carnegie Mellon findings and extended them, showing that the more a person hangs out in the global village, the worse they are likely to feel. Web use often displaces sleep, exercise, and face-to-face exchanges, all of which can upset even the chirpiest soul. But the digital impact may last not only for a day or a week, but for years down the line. A recent American study based on data from adolescent Web use in the 1990s found a connection between time online and mood disorders in young adulthood. Chinese researchers have similarly found “a direct effect” between heavy Net use and the development of full-blown depression, while scholars at Case Western Reserve University correlated heavy texting and social-media use with stress, depression, and suicidal thinking.

In response to this work, an article in the journal Pediatrics noted the rise of “a new phenomenon called ‘Facebook depression,’” and explained that “the intensity of the online world may trigger depression.” Doctors, according to the report published by the American Academy of Pediatrics, should work digital usage questions into every annual checkup.

Rosen, the author of iDisorder, points to a preponderance of research showing “a link between Internet use, instant messaging, emailing, chatting, and depression among adolescents,” as well as to the “strong relationships between video gaming and depression.” But the problem seems to be quality as well as quantity: bad interpersonal experiences—so common online—can send people into spirals of despair. For her book Alone Together, MIT psychologist Sherry Turkle interviewed more than 450 people, most of them in their teens and 20s, about their lives online. And while she’s the author of two prior tech-positive books, and once graced the cover of Wired magazine, she now reveals a sad, stressed-out world of people coated in Dorito dust and locked in a dystopian relationship with their machines.

People tell her that their phones and laptops are the “place for hope” in their lives, the “place where sweetness comes from.” Children describe mothers and fathers unavailable in profound ways, present and yet not there at all. “Mothers are now breastfeeding and bottle-feeding their babies as they text,” she told the American Psychological Association last summer. “A mother made tense by text messages is going to be experienced as tense by the child. And that child is vulnerable to interpreting that tension as coming from within the relationship with the mother. This is something that needs to be watched very closely.” She added, “Technology can make us forget important things we know about life.”

This evaporation of the genuine self also occurred among the high-school- and college-age kids she interviewed. They were struggling with digital identities at an age when actual identity is in flux. “What I learned in high school,” a kid named Stan told Turkle, “was profiles, profiles, profiles; how to make a me.” It’s a nerve-racking learning curve, a life lived entirely in public with the webcam on, every mistake recorded and shared, mocked until something more mockable comes along. “How long do I have to do this?” another teen sighed, as he prepared to reply to 100 new messages on his phone.

Last year, when MTV polled its 13- to 30-year-old viewers on their Web habits, most felt “defined” by what they put online, “exhausted” by always having to be putting it out there, and utterly unable to look away for fear of missing out. “FOMO,” the network called it. “I saw the best minds of my generation destroyed by madness, starving hysterical naked,” begins Allen Ginsberg’s poem Howl, a beatnik rant that opens with people “dragging themselves” at dawn, searching for an “angry fix” of heroin. It’s not hard to imagine the alternative imagery today.

The latest Net-and-depression study may be the saddest one of all. With consent of the subjects, Missouri State University tracked the real-time Web habits of 216 kids, 30 percent of whom showed signs of depression. The results, published last month, found that the depressed kids were the most intense Web users, chewing up more hours of email, chat, videogames, and file sharing. They also opened, closed, and switched browser windows more frequently, searching, one imagines, and not finding what they hoped to find.

They each sound like Doug, a Midwestern college student who maintained four avatars, keeping each virtual world open on his computer, along with his school work, email, and favorite videogames. He told Turkle that his real life is “just another window”—and “usually not my best one.” Where is this headed? she wonders. That’s the scariest line of inquiry of all.

Recently, scholars have begun to suggest that our digitized world may support even more extreme forms of mental illness. At Stanford, Dr. Aboujaoude is studying whether some digital selves should be counted as a legitimate, pathological “alter of sorts,” like the alter egos documented in cases of multiple personality disorder (now called dissociative identity disorder in the DSM). To test his idea, he gave one of his patients, Richard, a mild-mannered human-resources executive with a ruthless Web poker habit, the official test for multiple personality disorder. The result was startling. He scored as high as patient zero. “I might as well have been … administering the questionnaire to Sybil Dorsett!” Aboujaoude writes.

The Gold brothers—Joel, a psychiatrist at New York University, and Ian, a philosopher and psychiatrist at McGill University—are investigating technology’s potential to sever people’s ties with reality, fueling hallucinations, delusions, and genuine psychosis, much as it seemed to do in the case of Jason Russell, the filmmaker behind “Kony 2012.” The idea is that online life is akin to life in the biggest city, stitched and sutured together by cables and modems, but no less mentally real—and taxing—than New York or Hong Kong. “The data clearly support the view that someone who lives in a big city is at higher risk of psychosis than someone in a small town,” Ian Gold writes via email. “If the Internet is a kind of imaginary city,” he continues, “it might have some of the same psychological impact.”

A team of researchers at Tel Aviv University is following a similar path. Late last year, they published what they believe are the first documented cases of “Internet-related psychosis.” The qualities of online communication are capable of generating “true psychotic phenomena,” the authors conclude, before putting the medical community on notice: “The spiraling use of the Internet and its potential involvement in psychopathology are new consequences of our times.”

So what do we do about it? Some would say nothing, since even the best research is tangled in the timeless conundrum of what comes first. Does the medium break normal people with its unrelenting presence, endless distractions, and threat of public ridicule for missteps? Or does it attract broken souls?

But in a way, it doesn’t matter whether our digital intensity is causing mental illness, or simply encouraging it along, as long as people are suffering. Overwhelmed by the velocity of our lives, we turn to prescription drugs, which helps explain why America runs on Xanax (and why rehab admissions for benzodiazepines, the class of drugs that includes Xanax and other anti-anxiety medications, have tripled since the late 1990s). We also spring for the false rescue of multitasking, which saps attention even when the computer is off. And all of us, since the relationship with the Internet began, have tended to accept it as is, without much conscious thought about how we want it to be or what we want to avoid. Those days of complacency should end. The Internet is still ours to shape. Our minds are in the balance.

“Moral from oral”

The new work is also part of a field called grounded or embodied cognition. Basically, this ever-growing understanding of human mental function notes that all ideas and concepts—including hard-to-grasp ones like personality and trustworthiness—are ultimately anchored in concrete situations that happen to flesh-bound bodies.

By Kai MacDonald

What do a chilly reception, a cold-blooded murder, and an icy stare have in common? Each plumbs the bulb of what could be called our social thermometer, exposing our reflexive tendency to conflate social judgments—estimations of another’s trust and intent—with the perception of temperature. Decades of fascinating cross-disciplinary studies have illuminated the surprising speed, pervasiveness and neurobiology of this unconscious mingling of the personal and the thermal.

The blurring of ‘heat’ and ‘greet’ is highlighted in a recent experiment by Ohio University’s Matthew Vess, who asked whether this tendency is influenced by an individual’s sensitivity to relational distress. He found that people high in the psychological attribute called attachment anxiety (a tendency to worry about the proximity and availability of a romantic partner) responded to memories of a relationship breakup with an increased preference for warm-temperature foods over cooler ones: soup over crackers. Subjects low in attachment anxiety—those more temperamentally secure—did not show this “comfort food” effect.

In a related part of the same experiment, subjects were asked to reconstruct jumbled words into sentences that had either cold or warm evocations. (Sentence reconstruction tasks involving specific themes are known to unconsciously influence subsequent behavior.) After being temperature-primed, Vess’s subjects rated their perceptions of their current romantic relationship. As in the first condition, subjects higher in attachment anxiety rated their relationship satisfaction higher when prompted with balmier phrases than with frosty ones.

The fact that individual differences in a relationship-oriented trait (attachment anxiety) are related to a person’s sensitivity to unconscious temperature-related cues speaks to the “under the hood” unconscious mingling that occurs between our social perceptual system and our temperature perception system. Though people predisposed to worry about their relationships seem to be more sensitive to these cues, we are all predisposed to the blurring of different types of experience. Similar recent experiments have demonstrated, for example, that briefly holding a warm beverage buoys subsequent ratings of another’s personality, that social isolation sensitizes a person to all things chilly, that inappropriate social mimicry creates a sense of cold, and that differences in the setting of the experimental lab’s thermostat lead test subjects to construe social relationships differently.

Vess’s experiments follow a much longer line of psychological research exploring the reasons people avoid a cold shoulder, lament a frigid partner and have to “cool off” after a spat. In point of fact, the mercury in our social minds has been of interest since at least the 1940s, when landmark work on impression-formation by the pioneering social psychologist Solomon Asch demonstrated the “striking and consistent differences of impression” created by substituting the words “warm” and “cold” into a hypothetical person’s personality profile. Since then, a panoply of studies of social perception in a host of cultures have validated the centrality of these temperate anchors in forming rapid unconscious impressions of a person.

It will come as no surprise that the ultimate confluence of the thermal and the personal happens between our ears. The neuroscientist Jaak Panksepp, for example, has noted that the neural machinery for attachment and bonding is actually cobbled together out of more primitive brain areas used for temperature regulation. Adding to this theme, the psychiatrist Myron Hofer’s seminal research in the 1970s demonstrated that certain parameters of rodent maternal attachment behavior (e.g., variations in touch or warmth) act as “hidden regulators” of various physiological responses (e.g., digestion) in their pups. Around the same time, another psychiatrist, John Bowlby, penned his now-canon observations about the central importance of attachment for the social and psychological development of young humans, reminding us that we are just another part of a chain of mammals that depend on the care of others for survival.

The new work is also part of a field called grounded or embodied cognition. Basically, this ever-growing understanding of human mental function notes that all ideas and concepts—including hard-to-grasp ones like personality and trustworthiness—are ultimately anchored in concrete situations that happen to flesh-bound bodies. For example, our faces respond to disconcerting behavior (incest) in a similar way to disgusting ingestive behavior (crunching a cockroach). Moral from oral.

In the realm of relationships, this mingling of the less- and more-concrete means that attachment experiences with specific others, their unconscious emotional tone, and the temperatures they actually feel activate and are stored in overlapping networks of brain areas. These are the so-called multimodal brain regions (a key one is called the insula) that blend different channels of sensory experience into a singular whole. In individual brains, then, the multichannel experience of being safe with another (initially, a mother) is forever fused with the experienced physical warmth that comes with being safe, fed and held. At the level of both brain and experience, this multi-sensory co-experiencing forms a wordless bedrock that implicitly grounds relational language, interpersonal evaluation, social cognition, and even imagined ingestion. “Warm” triggers “trust,” as well as the reverse.

From these insights, other questions abound. Regarding the biology of attachment and warmth, for example, one wonders whether certain molecules—the attachment hormone oxytocin, for example—bias cognition and behavior in a balmier direction. After all, given its vital role in birth, nursing, and early attachment bonds, oxytocin has a strong unconscious association with warm milk, a tendency to inspire trust and generosity, and the capacity to make more benign the valuation of others. Oxytocin even boosts a person’s perception of the warmth of their own personality, and a ‘warm touch’ intervention in married couples enhances oxytocin levels. On the opposite hormonal pole, might testosterone—which decreases trust—also make things seem colder?

My late, beloved father—a psychologist—had a puckish propensity to char the toast that was a mandatory component of his hand-made hot breakfast ritual. Thanks to social neuroscience, I can now better appreciate (and perpetuate) the wisdom of this ritual linking hot food with heartfelt warmth.

The ending no one wants

That is the thing that you begin to terrifyingly appreciate: Dementia is not absence; it is not a nonstate; it actually could be a condition of more rather than less feeling, one that, with its lack of clarity and logic, must be a kind of constant nightmare.

By Michael Wolff

ON THE WAY to visit my mother one recent rainy afternoon, I stopped in, after a good deal of prodding, to see my insurance salesman. He was pressing his efforts to sell me a long-term-care policy with a pitch about how much I’d save if I bought it now, before the rates were set to rise precipitously.

I am, as my insurance man pointed out, a “sweet spot” candidate. Not only do I have the cash (though not enough to self-finance my decline) but I also have a realistic view: Like so many people in our 50s — in my experience almost everybody — I have a parent in an advanced stage of terminal breakdown.

It’s what my peers talk about: our parents’ horror show. From the outside — at the office, restaurants, cocktail parties — we all seem perfectly secure and substantial. But in a room somewhere, hidden from view, we occupy this other, unimaginable life.

I didn’t need to be schooled in the realities of long-term care: The costs for my mother — who is 86 and who for the past 18 months has not been able to walk, talk, or address her most minimal needs and, to boot, is absent a short-term memory — come in at about $17,000 a month. And while her LTC insurance hardly covers all of that, I’m certainly grateful she had the foresight to carry such a policy.

And yet, on the verge of writing the first LTC check, I backed up.

We make certain assumptions about the necessity of care. It’s an individual and, depending on where you stand in the great health-care debate, a national responsibility. And yet, I will tell you, what I feel most intensely when I sit by my mother’s bed is a crushing sense of guilt for keeping her alive. Who can accept such suffering — who can so conscientiously facilitate it?

In 1990, there were slightly more than 3 million Americans over the age of 85. Now there are almost 6 million. By 2050 there will be 19 million — approaching 5 percent of the population.

By promoting longevity and technologically inhibiting death, we have created a new biological status held by an ever-growing part of the nation, a no-exit state that persists longer and longer, one that is nearly as remote from life as death, but which, unlike death, requires vast service, indentured servitude, really, and resources.

This is not anomalous; this is the norm. Contrary to the comedian’s maxim, comedy is easy, dying hard.

Seventy percent of those older than 80 have a chronic disability, according to one study; 53 percent in this group have at least one severe disability, and 36 percent have moderate to severe cognitive impairments.

Alzheimer’s is just one form — not, as it happens, my mother’s — of the ever-more-encompassing conditions of cognitive collapse that are the partners and the price of longevity. There are now more than 5 million demented Americans. By 2050, upward of 15 million of us will have lost our minds.

This year, the costs of dementia care will be $200 billion. By 2050, $1 trillion.

Make no mistake, the purpose of long-term-care insurance is to help finance some of the greatest misery and suffering human beings have yet devised.

I HESITATE TO give my mother a personality here. It is the argument I have with myself every day — she is not who she was; do not force her to endure because of what she once was. Do not sentimentalize. And yet…that’s the bind: She remains my mother.

She graduated from high school in 1942 and went to work for the Paterson Evening News, a daily newspaper in New Jersey. In a newsroom with many of its men off to war, Marguerite Vander Werf — nicknamed “Van” in the newsroom and forevermore — shortly became the paper’s military reporter. Her job was to keep track of the local casualties. At 18, a lanky 95 pounds in bobby socks, my mother would often show up at a soldier’s parents’ front door before the War Department’s telegram and have to tell these souls their son was dead.

She married my father, Lew Wolff, an adman, and left the paper after 11 years to have me — then my sister, Nancy, and brother, David. She did freelance journalism and part-time PR work (publicity, it was called then). She was a restless and compelling personality who became a civic power in our town, got elected to the board of education, and took charge of the public library, organizing and raising the money for its new building and expansion. She was the Pied Piper, the charismatic mom, a talker of great wit and passion — holding the attention of children and dinner-party drunks alike.

My father, whose ad agency had wide swings of fortune, died, suddenly, in that old-fashioned way, of a heart attack at 63, during one of the downswings. My mother was 58 — the age I am now — and left with small resources. She applied her charm and guile to a breathtaking reinvention and personal economic revival, becoming a marketing executive at first one and then another pharmaceutical company. At 72, headed to retirement but still restless, she capped off her career as the marketing head of an online-game company.

For 25 years, she lived in an apartment building in Ridgewood, N.J. Once a week, every week, she drove into Manhattan to cook dinner for my family and help my three children with their homework — I am not sure how I would have managed my life and raised children without her.

This is the woman, or what is left of the woman, who now resides in a studio apartment in one of those new boxy buildings that dot the Upper West Side — a kind of pre-coffin, if you will. It is even, thanks to my sister’s diligence, my mother’s LTC insurance and savings, and the contributions of my two siblings and me, what we might think of as an ideal place to be in her condition.

A painting from 1960 by March Avery, from the collection she and my father assembled — an Adirondack chair facing a blue sea — hangs in front of her. Below the painting is the flat-screen TV where she watches cooking shows with a strange intensity. She is attended 24/7 by two daily shifts of devoted caregivers.

It is peaceful and serene.

Except for my mother’s disquiet. She stares in mute reprimand. Her bewilderment and resignation somehow don’t mitigate her anger. She often tries to talk — desperate guttural pleas. She strains for cognition and, shockingly, sometimes bursts forward, reaching it — “Nice suit,” she said to me, out of the blue, a few months ago — before falling back.

That is the thing that you begin to terrifyingly appreciate: Dementia is not absence; it is not a nonstate; it actually could be a condition of more rather than less feeling, one that, with its lack of clarity and logic, must be a kind of constant nightmare.

“Old age,” says one of Philip Roth’s protagonists, “isn’t a battle, it’s a massacre.” I’d add, it’s a holocaust. Circumstances have conspired to rob the human person of all hope and dignity and comfort.

When my mother’s diaper is changed she makes noises of harrowing despair — for a time, before she lost all language, you could if you concentrated make out what she was saying, repeated over and over and over again: “It’s a violation. It’s a violation. It’s a violation.”

MY MOTHER’S CARDIOLOGIST had been for many years monitoring her for a condition called aortic stenosis — a narrowing of the aortic valve. The advice was do nothing until something had to be done. But now that she was showing symptoms that might suddenly kill her, why not operate and reach for another few good years? What’s to lose?

My siblings and I must take the blame here. It did not once occur to us to say, “You want to do major heart surgery on an 84-year-old woman showing progressive signs of dementia? What are you, nuts?”

Here’s what the surgeon said, defending himself, in perfect Catch-22-ese, against the recriminations that followed the stark and dramatic postoperative decline in my mother’s “quality-of-life baseline”: “I visited your mom before the procedure and fully informed her of the risks of such a surgery to someone showing signs of dementia.”

You fully informed my demented mom?

The operation absolutely repaired my mother’s heart — “She can live for years,” according to the surgeon. But where before she had been gently sinking, now we were in free fall.

Six weeks and something like $250,000 in hospital bills later (paid by Medicare — or, that is, by you), she was reduced to a terrified creature — losing language skills by the minute. “She certainly appears agitated,” the psychiatrist sent to administer anti-psychotic drugs told me, “and so do you.”

MY SISTER COMES over every morning. She brings the groceries, plans the menu, and has a daily routine for stretching my mother’s limbs. I’m here a few times a week (for exactly 30 minutes — no more, no less). Her grandchildren, with an unalloyed combination of devotion and horror, come on a diligent basis.

A few weeks ago, my sister and I called a meeting with my mother’s doctor. “It’s been a year,” I began, groping for what needed to be said. “We’ve seen a series of incremental but marked declines.” My sister chimed in with some vivid details.

We just wanted to help her go where she’s going. (Was that too much? Was that too specific?)

She does seem, the doctor allowed, to have entered another stage. “Perhaps more palliative care. This can ease her suffering, but the side effect can be to depress her functions. But maybe it is time to err on the side of ease.

“Your mom, like a lot of people, is what we call a dwindler,” said the doctor.

I do not know how death panels ever got such a bad name. Perhaps they should have been called deliverance panels. What I would not do for a fair-minded body to whom I might plead for my mother’s end.

The alternative is nuts: paying trillions and bankrupting the nation as well as our souls as we endure the suffering of our parents and our inability to help them get where they’re going. The single greatest pressure on health care is the disproportionate resources devoted to the elderly, to not just the old but the old old, and yet no one says what all old children of old parents know: This is not just wrongheaded but steals the life from everyone involved.

After due consideration, I decided that I plainly would never want what LTC insurance buys, and, too, that this would be a bad deal. My bet is that, even in America, we baby boomers watching our parents’ long and agonizing deaths won’t do this to ourselves. We will surely, we must surely, find a better, cheaper, quicker, kinder way out.

Meanwhile, since, like my mother, I can’t count on someone putting a pillow over my head, I’ll be trying to work out the timing and details of a do-it-yourself exit strategy. As should we all.

The Pink Floyd Night School

Why every graduate deserves a few aimless years after college.

Op-Ed Contributor
Published: May 1, 2010, NYT

Batesville, Va.

“So, what are you doing after graduation?”

In the spring of my last year in college I posed that question to at least a dozen fellow graduates-to-be at my little out-of-the-way school in Vermont. The answers they gave me were satisfying in the extreme: not very much, just kick back, hang out, look things over, take it slow. It was 1974. That’s what you were supposed to say.

My classmates weren’t, strictly speaking, telling the truth. They were, one might even say, lying outrageously. By graduation day, it was clear that most of my contemporaries would be trotting off to law school and graduate school and to cool and unusual internships in New York and San Francisco.

But I did take it slow. After graduation, I spent five years wandering around doing nothing — or getting as close to it as I could manage. I was a cab driver, an obsessed moviegoer, a wanderer in the mountains of Colorado, a teacher at a crazy grand hippie school in Vermont, the manager of a movie house (who didn’t do much managing), a crewman on a ship and a doorman at a disco.

The most memorable job of all, though, was a gig on the stage crew for a rock production company in Jersey City. We did our shows at Roosevelt Stadium, a grungy behemoth that could hold 60,000, counting seats on the grass. I humped amps out of the trucks and onto the stage; six or so hours later I humped them back. I did it for the Grateful Dead and Alice Cooper and the Allman Brothers and Crosby, Stills & Nash on the night that Richard Nixon resigned. But the most memorable night of that most memorable job was the night of Pink Floyd.

Pink Floyd demanded a certain quality of sound. They wanted their amps stacked high, not just on stage, where they were so broad and tall and forbidding that they looked like a barricade in the Paris Commune. They also wanted amp clusters at three highly elevated points around the stadium, and I spent the morning lugging huge blocks of wood and circuitry up and up and up the stairs of the decayed old bowl.

There was one other assignment: a parachute-like white silken canopy roof that Pink Floyd required over the stage. It took about six hours to get the thing up and in position. We were told that this was the first use of the canopy and Pink’s guys were unsteady. They had some blueprints, but those turned out not to be of much use. Eventually the roof did rise and inflate, with American know-how applied. Such know-how involved a lot of spontaneous knot-tying and strategic rope tangling.

Pink Floyd went on at about 10 that night and the amp clusters that we’d expended all that servile sweat to build didn’t work — people had sat on them, kicked them or cut the cords. So Pink made its noise, the towers stayed mute, the mob flicked on lighters at the end and then we spent three hours breaking the amps down and loading the truck. We refused to go after the speakers all the way up the stadium steps and, after some sharp words, Pink’s guys had to scramble up and retrieve them.

There was, for the record, almost always tension between the roadies and the stage crew. One time, at a show by (if memory serves) Queen, their five roadies got into a brawl with a dozen of our stage crew guys; then the house security, mostly Jersey bikers and black-belt karate devotees, heard the noise and jumped in. The roadies held on for a while, but finally they saw it was a lost cause. One of them grabbed a case of champagne from the truck cab and opened a bottle and passed it around — all became drunk and happy.

Pink’s road manager wanted the inflatable canopy brought down gently, then folded and packed securely in its wooden boxes. The problem was that the thing was full of helium and no one knew where the release valve was; we’d also secured it to the stage with so many knots of such foolish intricacy that their disentanglement would have given a gang of sailors pause. Everyone was tired. Those once intoxicated were no longer. It was 4 a.m. and time to go home.

An hour went into concocting strategies to get the floating pillowy roof down. It became a regular seminar. Then came Jim — Jimbo — our crew chief, who looked like a good-natured Viking captain and who defended the integrity of his stage crew at every turn, even going so far as to have screamed at Stevie Nicks, who was yelling at me for having dropped a guitar case, that he was the only one who had the right to holler at Edmundson. Faced with the Pink Floyd roof crisis, Jimbo did what he always could be counted on to do in critical circumstances, which is to say, he did something.

Jimbo walked softly to a corner of the stage, reached into his pocket, removed a buck knife and with it began to saw one of the ropes attaching the holy celestial roof to the earth. Three or four of us, his minions, did the same. “Hey, what are you doing?” wailed Pink’s head roadie. “I’ll smash your — ” Only then did he realize that Jimbo had a knife in his hand, and that some of the rest of us did, too. In the space of a few minutes, we sawed through the ropes.

There came a great sighing noise as the last thick cord broke apart. For a moment there was nothing; for another moment, more of the same.

Then the canopy rose into the air and began to float away, like a gorgeous cloud, white and soft. The sun at that moment burst above the horizon and the silk bloomed into a soft crimson tinge. Jimbo started to laugh his big bear-bellied laugh. We all joined. Even Pink’s guys did. We were like little kids on the last day of school. We stood on the naked stage, watching the silk roof go up and out, wafting over the Atlantic. Some of us waved.

“So, what are you doing after graduation?” Thirty-five years later, a college teacher, I ask my students the old question. They aren’t inclined to dissimulate now. The culture is on their side when they tell me about law school and med school and higher degrees in journalism and business; or when they talk about a research grant in China or a well-paying gig teaching English in Japan.

I’m impressed, sure, but I’m worried about them too. Aren’t they deciding too soon? Shouldn’t they hang out a little, learn to take it slow? I can’t help it. I flash on that canopy of white silk floating out into the void. I can see it as though it were still there. I want to point up to it. I’d like for my students to see it, too.

Mark Edmundson, a professor of English at the University of Virginia, is the author of the forthcoming memoir “The Fine Wisdom and Perfect Teachings of the Kings of Rock and Roll.”

The Fiction of Memory

Reality Hunger: A Manifesto
by David Shields

Reviewed by LUC SANTE
Published: March 12, 2010
New York Times Sunday Book Review

Consider the state of literature at the moment. Consider the rise of the memoir, the incidence of contrived and fabricated memoirs, the rash of imputations of plagiarism in novels, the overall ill health of the mainstream novel. Consider, too, culture outside of literature: reality TV, the many shades and variations of documentary film, the rise of the curator, the rise of the D.J., sampling, appropriation, the carry-over of collage from modernism into postmodernism. Now consider that all these elements might somehow be connected, might represent different aspects of some giant whatsit that will eventually constitute the cultural face of our time in the eyes of the future. That is what David Shields proposes in “Reality Hunger: A Manifesto.” He further argues that what all those things have in common is that they express or fulfill a need for reality, a need that is not being met by the old and crumbling models of literature.

To call something a manifesto is a brave step. It signals that you are hoisting a flag and are prepared to go down with the ship. David Shields’s clarion call may in some ways depart from the usual manifesto profile — it doesn’t speak on behalf of a movement, exactly — but it urgently and succinctly addresses matters that have been in the air, have relentlessly gathered momentum and have just been waiting for someone to link them together. His is a complex and multifaceted argument, not easily reducible to a bullet-point list — but then, so was the Surrealist Manifesto. “Reality Hunger” does contain quite a few slogan-ready phrases, but they weren’t all written by Shields, and some are more than a century old.

One way in which the book expresses its thesis is in its organization: it is made up of 618 numbered paragraphs, more than half of them drawn from other sources, attributed only at the end of the book. This will remind readers of Jonathan Lethem’s tour-de-force essay “The Ecstasy of Influence,” published in Harper’s in 2007, in which every single line derives from other authors — note that Lethem acknowledges a debt to Shields’s essays. But what reality is such magpie business enacting? Shields answers: “Old and new make the warp and woof of every moment. There is no thread that is not a twist of these two strands. By necessity, by proclivity and by delight, we all quote. It is as difficult to appropriate the thoughts of others as it is to invent.” He is, of course, quoting Emerson.

There is an artistic movement brewing, Shields writes. Among its hallmarks are the incorporation of “seemingly unprocessed” material; “randomness, openness to accident and serendipity; . . . criticism as autobiography; self-reflexivity; . . . a blurring (to the point of invisibility) of any distinction between fiction and nonfiction.” He briefly summarizes the history of the novel — set in stone by the mid-19th century — and that of the essay. One form is on its way down, the other on its way up. The novel, for all the exertions of modernism, is by now as formalized and ritualized as a crop ceremony. It no longer reflects actual reality. The essay, on the other hand, is fluid. It is a container made of prose into which you can pour anything. The essay assumes the first person; the novel shies from it, insisting that personal experience be modestly draped.

The flood of memoirs of the last couple of decades represents an uprising against such repression. So why have there been so many phony memoirs? Because of false consciousness, as Marxists would put it. Shields (echoing Alice Marshall) is disappointed in James Frey not because he lied in his book, but because when he appeared on Oprah Winfrey’s show he didn’t say: “Everyone who writes about himself is a liar. I created a person meaner, funnier, more filled with life than I could ever be.” After all, just because the novel is food for worms doesn’t mean that fiction has ceased. Only an artificial dualism would treat every non-novel as if it were reportage or court testimony, and only a fear of the slipperiness of life could perpetuate the cult of the back story. “Anything processed by memory is fiction,” as is any memory shaped into literature.

But we continue to crave reality, because we live in a time dominated by innumerable forms of extraliterary fiction: politics, advertising, the lives of celebrities, the apparatus surrounding professional sports — you could say without exaggeration that everything on TV is fiction whether it is packaged as such or not. So what constitutes reality, then, as it affects culture? It can be as simple as a glitch, an interruption, a dropped beat, a foreign object that suddenly intrudes. Hence the potency of sampling in popular music, which forces open the space between the vocal and instrumental components. It is also a form of collage, which edits, alters and reapportions cultural commodities according to need or desire. Reality is a landscape that includes unreal features; being true to reality involves a certain amount of wavering between real and unreal. Likewise originality, if there can ever be any such thing, will inevitably entail a quantity of borrowing, conscious and otherwise. The paradoxes pile up as thick as the debris of history — unsurprisingly, since that debris is our reality.

Shields’s text exemplifies many of his arguments. “The lyric essay doesn’t expound, is suggestive rather than exhaustive, depends on gaps, may merely mention,” he writes (quoting John D’Agata and Deborah Tall), and so it is with his book, which argues forcefully and passionately, but not like a debate-team captain, more like a clever if overmatched boxer, endlessly bobbing and weaving. And for all that so much of its verbiage is the work of others, it positively throbs with personality. This is so not simply because Shields includes a chapter of autobiographical vignettes; he puts his crotchets on display.

He is serious perhaps to a fault. The decision to identify the authors of the appropriated texts was, he tells us, not his but that of his publisher’s lawyers, and he suggests that readers might want to scissor out those nine pages of citations. This is a noble and idealistic stance, of course, but it overlooks a human frailty that is undeniably real: curiosity. His asceticism seems also to govern his view of narrative. He is “a wisdom junkie” who wants “a literature built entirely out of contemplation and revelation,” and thinks that “Hamlet” would be a lot better if all the plot were excised, leaving the chain of little essays it really wants to be. But while it’s true that Shakespeare’s plots can sometimes seem like armatures dragged in from the prop room, they are also there to service the human need for sensation. Sometimes Shields can give the impression that he dislikes the novel for the same reasons Cotton Mather might have: its frivolity, its voyeurism, its licentiousness.

On the whole, though, he is a benevolent and broad-minded revolutionary, urging a hundred flowers to bloom, toppling only the outmoded and corrupt institutions. His book may not presage sweeping changes in the immediate future, but it probably heralds what will be the dominant modes in years and decades to come. The essay will come into its own and cease being viewed as the stepchild of literature. Some version of the novel will endure as long as gossip and daydreaming do, but maybe it will become more aerated and less controlling. There will be a lot more creative use of uncertainty, of cognitive dissonance, of messiness and self-consciousness and high-spirited looting. And reality will be ever more necessary and harder to come by.

Luc Sante’s most recent book is “Folk Photography.” He teaches at Bard College.

Bullying Bibliophile: The bloody age of Vyacheslav Molotov


Stalin’s violent henchman and his library may have inspired a modern classic

Mar 4th 2010 | From The Economist print edition

Molotov’s Magic Lantern: A Journey in Russian History. By Rachel Polonsky. Faber and Faber; 388 pages; £20.

EXPATRIATE spouses living pampered lives in Moscow often think it would be nice to write a book about their time there. The material is irresistible: vastness, extremes, depths and delights. But the trite, coy and overly personal jottings that result often prove quite resistible. Rachel Polonsky moved to Moscow with her lawyer husband and stayed for a decade. Her perceptive and erudite book is the exception and sets a standard to freeze the ink in others’ pens.

Ms Polonsky was a fellow at Cambridge University who initially planned to spend her time in Moscow working on a follow-up to her previous book, a heavyweight study of Russian orientalism. Instead she has produced a spectacular and enjoyable display of intellectual fireworks for the general reader.

The book’s core is other books: the fragments of a library that Ms Polonsky discovers in her neighbour’s flat, which once belonged to one of Russia’s greatest monsters, Vyacheslav Molotov. Stalin’s most devoted henchman in the blood-drenched years of the Great Terror, Molotov signed a record 373 death warrants for senior officials, including his close colleagues. He also co-signed, with Hitler’s foreign minister, Joachim von Ribbentrop, another death warrant—the deal that dismembered the countries of central Europe and the Baltic states. Toasted by Molotov and Hitler at a banquet in Berlin, the Nazi-Soviet pact consigned millions to death, slavery and destitution.

The butcher was a bibliophile. His books, sometimes annotated, or even with his moustache hairs left, repellently, as page markers, are much in Ms Polonsky’s thoughts during her journeys to Russia’s bleak north, lush south and distant east. Her finely drawn literary travelogues on Taganrog, Murmansk, Vologda, Irkutsk and other places depict squalor, pomp, misery, exhilaration, heroism and brutishness, each cameo framed in its historical, cultural and physical context.

Some of the material comes from Molotov’s books, others from Ms Polonsky’s well-stocked mind. Few readers will have her encyclopedic knowledge of the works of Anna Akhmatova, Isaak Babel, Anton Chekhov, Fyodor Dostoyevsky, Alexander Herzen, Varlam Shalamov and Marina Tsvetaeva, to name but a few. But while reading the book they will feel that they do. Ms Polonsky wears her considerable learning lightly.

She has a knack for putting herself into other people’s shoes with empathy and skill. During a visit to Moscow’s luxurious Sandunovsky bathhouse she spots a fellow-bather reading Oswald Spengler’s “Decline of the West”; another is applying a home-made unguent consisting of cream and coffee grounds. Spengler, a German historian, thought that cities were ulcers on the body of Russia. What, she asks, would he have made of this scene? That prompts a captivating excursion into the mystical significance of the steam bath, from its rural pagan roots to modern urban body worship.

Ms Polonsky’s interest in the spiritual comes across strongly. She highlights the significance of Aleksandr Men, an inspirational Orthodox priest murdered as the Soviet Union died. Her description of the Bolsheviks’ desecration of the Savvino-Storozhevsky monastery in the midst of the last monks’ final liturgy is memorable. So is her icy account of the creepy religiosity, bordering on paganism, that is to be found in the upper reaches of the current Russian regime.

The contempt she feels for the greed, filth and viciousness that she encounters is all the more compelling for being understated. Her sympathy and affection for the finest bits of Russia’s past and present shine through—whether for the civic traditions of ancient Novgorod, for the aristocratic rebels of the Decembrists or for more modern martyrs such as the Mandelstams or Anna Politkovskaya, a journalist murdered in 2006. The reader catches only fleeting glimpses of Ms Polonsky herself. That contrasts pleasingly with the self-centredness that is present in so much other Western writing about Russia. As her book shows, the author has grit, charm and style—and a gift for traveller’s tales.

Shelf Life

Published: March 4, 2010
The New York Times Magazine

People who reject e-books often say they can’t live without the heft, the texture and — curiously — the scent of traditional books. This aria of hypersensual book love is not my favorite performance. I sometimes suspect that those who gush about book odor might not like to read. If they did, why would they waste so much time inhaling? Among the best features of the Kindle, Amazon’s great e-reader, is that there’s none of that. The device, which consigns all poetry and prose to the same homely fog-toned screen, leaves nothing to the experience of books but reading. This strikes me as honest, even revolutionary.

But I have to stop being so marmish. I’m not a literacy promoter; I don’t run some kind of Reading Rocket Club. And binding sniffers are not illiterate, anyway. No less a reader than the cultural critic Walter Benjamin, as card-carrying a man of letters as the world has ever seen, savored the physicality of books. In his 1931 essay “Unpacking My Library: A Talk About Book Collecting” — one of those essays that’s often called “famous,” though we’re not talking Hello Kitty here — Benjamin turns euphoric while surveying his dusty books. He’s enchanted by each one: “the period, the region, the craftsmanship, the former ownership.” In acquiring books, often in mock-heroic ways, he says he has managed “to renew the Old World.”

Benjamin even pre-empts my skepticism. “The nonreading of books, you will object, should be characteristic of collectors?” His answer: Yes! Book collectors don’t always read! Should a person read all the books she buys? No more — Benjamin writes — than you should use your best china every day.

Hmph. Benjamin is not to be gainsaid. If he says not reading books can be as sophisticated and European as reading them, I believe him, and I will try to think of my books as Sèvres china. But Sèvres china, if I had any, would be for display on its days off, wouldn’t it? So how do I display or otherwise admire all these books I keep buying for the Kindle?

Unpacking my Kindle library, I click “menu” on my screen and find . . . a list. First, the words “The Happiness Project,” the title of a book by Gretchen Rubin, in stout dark gray lettering, underscored by a lighter, less stout line.

This might be depressing. I can’t tell if I’m supposed to consider this underlined title to be the “book” I ordered from Amazon. Maybe it’s more like a catalog listing. If I click on it, I’ll get to the words in the book. Maybe it’s analogous to a book’s spine.

I want to rhapsodize, as Benjamin does, when he remembers the tactics he employed to acquire the book “Fragmente aus dem Nachlass eines jungen Physikers” (Johann Wilhelm Ritter, 1810) after a Berlin auction. But the only memory I have of purchasing “The Happiness Project” is no memory at all. I’ll have to search my e-mail to see when I clicked “Start reading ‘The Happiness Project’ on your Kindle in under a minute” on Amazon. I think I had seen that “The Happiness Project,” a franchise with a blog I read, had become a best seller, and that seemed exciting; if I clicked, a credit card on file at Amazon would be out a theoretical $9.99, and in under a minute I’d be able to see for myself whether the book had more in it than the blog. That is the sweeping, romantic story of my acquisition of “The Happiness Project.” As Benjamin reminds us, Habent sua fata libelli. Books have their destiny!

Going down the gray list, further unpacking my dustless, odorless, weightless Kindle library, I find “Devotion,” by Dani Shapiro; “Let the Great World Spin,” by Colum McCann; “Wolf Hall: A Novel,” by Hilary Mantel; “Lit,” by Mary Karr; “Manhunt,” by James Swanson; “DANCING IN THE DARK” — some titles are unaccountably given in all caps — by Morris Dickstein; and “Not Your Mother’s Slow Cooker Recipes for Entertaining” (a modern classic).

I have literally no memory of opting to get any of these on Amazon. Most of these books were bought impulsively, more like making a note to myself to read this or that than acquiring a tangible 3-D book; the list is a list of resolutions with price tags that will, with any luck, make the resolutions more urgent. Though it’s different from Benjamin’s ecstatic book collecting, this cycle of list making and resolution and constant-reading-to-keep-up is not unpleasurable.

Beholding “the several thousand volumes that are piled up around me,” Benjamin exclaims: “O bliss of the collector! Bliss of the man of leisure!” With nothing piled up around me but the Kindle and its charger, I may be missing out. But even Benjamin, who managed to see the future of media and technology more than once, knew he was writing an elegy for a way of experiencing books. I like to think he would be the first to recognize that the Kindle delivers a new kind of bliss.



Mere mechanical reproduction: How simple and even quaint modernist anxieties seem now, from the vertigo vantage of the Internet age. Let Walter Benjamin lay out the complexities of his day and even enliven your experience of our own digital culture. Read Benjamin’s Illuminations, the 1968 edition, with an introduction by Hannah Arendt.

Your mind has been asleep! Read Benjamin’s Reflections today! Find it in a library, a Parisian bookshop or in scattered fragments online. But tell Amazon to make it available for the Kindle only if you think Benjamin would have wanted it that way. What would he have thought about the aura of art that is disseminated digitally?

Depression’s Upside


Published: February 25, 2010
The New York Times Magazine

The Victorians had many names for depression, and Charles Darwin used them all. There were his “fits” brought on by “excitements,” “flurries” leading to an “uncomfortable palpitation of the heart” and “air fatigues” that triggered his “head symptoms.” In one particularly pitiful letter, written to a specialist in “psychological medicine,” he confessed to “extreme spasmodic daily and nightly flatulence” and “hysterical crying” whenever Emma, his devoted wife, left him alone.

While there has been endless speculation about Darwin’s mysterious ailment — his symptoms have been attributed to everything from lactose intolerance to Chagas disease — Darwin himself was most troubled by his recurring mental problems. His depression left him “not able to do anything one day out of three,” choking on his “bitter mortification.” He despaired of the weakness of mind that ran in his family. “The ‘race is for the strong,’ ” Darwin wrote. “I shall probably do little more but be content to admire the strides others made in Science.”

Darwin, of course, was wrong; his recurring fits didn’t prevent him from succeeding in science. Instead, the pain may actually have accelerated the pace of his research, allowing him to withdraw from the world and concentrate entirely on his work. His letters are filled with references to the salvation of study, which allowed him to temporarily escape his gloomy moods. “Work is the only thing which makes life endurable to me,” Darwin wrote and later remarked that it was his “sole enjoyment in life.”

For Darwin, depression was a clarifying force, focusing the mind on its most essential problems. In his autobiography, he speculated on the purpose of such misery; his evolutionary theory was shadowed by his own life story. “Pain or suffering of any kind,” he wrote, “if long continued, causes depression and lessens the power of action, yet it is well adapted to make a creature guard itself against any great or sudden evil.” And so sorrow was explained away, because pleasure was not enough. Sometimes, Darwin wrote, it is the sadness that informs as it “leads an animal to pursue that course of action which is most beneficial.” The darkness was a kind of light.

The mystery of depression is not that it exists — the mind, like the flesh, is prone to malfunction. Instead, the paradox of depression has long been its prevalence. While most mental illnesses are extremely rare — schizophrenia, for example, is seen in less than 1 percent of the population — depression is everywhere, as inescapable as the common cold. Every year, approximately 7 percent of us will be afflicted to some degree by the awful mental state that William Styron described as a “gray drizzle of horror . . . a storm of murk.” Obsessed with our pain, we will retreat from everything. We will stop eating, unless we start eating too much. Sex will lose its appeal; sleep will become a frustrating pursuit. We will always be tired, even though we will do less and less. We will think a lot about death.

The persistence of this affliction — and the fact that it seemed to be heritable — posed a serious challenge to Darwin’s new evolutionary theory. If depression was a disorder, then evolution had made a tragic mistake, allowing an illness that impedes reproduction — it leads people to stop having sex and consider suicide — to spread throughout the population. For some unknown reason, the modern human mind is tilted toward sadness and, as we’ve now come to think, needs drugs to rescue itself.

The alternative, of course, is that depression has a secret purpose and our medical interventions are making a bad situation even worse. Like a fever that helps the immune system fight off infection — increased body temperature sends white blood cells into overdrive — depression might be an unpleasant yet adaptive response to affliction. Maybe Darwin was right. We suffer — we suffer terribly — but we don’t suffer in vain.


ANDY THOMSON IS a psychiatrist at the University of Virginia. He has a scruffy gray beard and steep cheekbones. When Thomson talks, he tends to close his eyes, as if he needs to concentrate on what he’s saying. But mostly what he does is listen: For the last 32 years, Thomson has been tending to his private practice in Charlottesville. “I tend to get the real hard cases,” Thomson told me recently. “A lot of the people I see have already tried multiple treatments. They arrive without much hope.” On one of the days I spent with Thomson earlier this winter, he checked his phone constantly for e-mail updates. A patient of his on “welfare watch” who was required to check in with him regularly had not done so, and Thomson was worried. “I’ve never gotten used to treating patients in mental pain,” he said. “Maybe it’s because every story is unique. You see one case of iron-deficiency anemia, you’ve seen them all. But the people who walk into my office are all hurting for a different reason.”

In the late 1990s, Thomson became interested in evolutionary psychology, which tries to explain the features of the human mind in terms of natural selection. The starting premise of the field is that the brain has a vast evolutionary history, and that this history shapes human nature. We are not a blank slate but a byproduct of imperfect adaptations, stuck with a mind that was designed to meet the needs of Pleistocene hunter-gatherers on the African savanna. While the specifics of evolutionary psychology remain controversial — it’s never easy proving theories about the distant past — its underlying assumption is largely accepted by mainstream scientists. There is no longer much debate over whether evolution sculptured the fleshy machine inside our head. Instead, researchers have moved on to new questions like when and how this sculpturing happened and which of our mental traits are adaptations and which are accidents.

In 2004, Thomson met Paul Andrews, an evolutionary psychologist at Virginia Commonwealth University, who had long been interested in the depression paradox — why a disorder that’s so costly is also so common. Andrews has long dark brown hair and an aquiline nose. Before he begins to talk, he often writes down an outline of his answer on scratch paper. “This is a very delicate subject,” he says. “I don’t want to say something reckless.”

Andrews and Thomson struck up an extended conversation on the evolutionary roots of depression. They began by focusing on the thought process that defines the disorder, which is known as rumination. (The verb is derived from the Latin word for “chewed over,” which describes the act of digestion in cattle, in which they swallow, regurgitate and then rechew their food.) In recent decades, psychiatry has come to see rumination as a dangerous mental habit, because it leads people to fixate on their flaws and problems, thus extending their negative moods. Consider “The Depressed Person,” a short story by David Foster Wallace, which chronicles a consciousness in the grip of the ruminative cycle. (Wallace struggled with severe depression for years before committing suicide in 2008.) The story is a long lament, a portrait of a mind hating itself, filled with sentences like this: “What terms might be used to describe such a solipsistic, self-consumed, bottomless emotional vacuum and sponge as she now appeared to herself to be?” The dark thoughts of “The Depressed Person” soon grow tedious and trying, but that’s precisely Wallace’s point. There is nothing profound about depressive rumination. There is just a recursive loop of woe.

The bleakness of this thought process helps explain why, according to the Yale psychologist Susan Nolen-Hoeksema, people with “ruminative tendencies” are more likely to become depressed. They’re also more likely to become unnerved by stressful events: for instance, Nolen-Hoeksema found that residents of San Francisco who self-identified as ruminators showed significantly more depressive symptoms after the 1989 Loma Prieta earthquake. And then there are the cognitive deficits. Because rumination hijacks the stream of consciousness — we become exquisitely attentive to our pain — numerous studies have found that depressed subjects struggle to think about anything else, just like Wallace’s character. The end result is poor performance on tests for memory and executive function, especially when the task involves lots of information. (These deficits disappear when test subjects are first distracted from their depression and thus better able to focus on the exercise.) Such research has reinforced the view that rumination is a useless kind of pessimism, a perfect waste of mental energy.

That, at least, was the scientific consensus when Andrews and Thomson began exploring the depression paradox. Their evolutionary perspective, however — they see the mind as a fine-tuned machine that is not prone to pointless programming bugs — led them to wonder if rumination had a purpose. They started with the observation that rumination was often a response to a specific psychological blow, like the death of a loved one or the loss of a job. (Darwin was plunged into a debilitating grief after his 10-year-old daughter, Annie, died following a bout of scarlet fever.) Although the D.S.M., the diagnostic bible for psychiatrists, does not take such stressors into account when diagnosing depressive disorder — the exception is grief caused by bereavement, as long as the grief doesn’t last longer than two months — it’s clear that the problems of everyday life play a huge role in causing mental illness. “Of course, rumination is unpleasant,” Andrews says. “But it’s usually a response to something real, a real setback. It didn’t seem right that the brain would go haywire just when we need it most.”

Imagine, for instance, a depression triggered by a bitter divorce. The ruminations might take the form of regret (“I should have been a better spouse”), recurring counterfactuals (“What if I hadn’t had my affair?”) and anxiety about the future (“How will the kids deal with it? Can I afford my alimony payments?”). While such thoughts reinforce the depression — that’s why therapists try to stop the ruminative cycle — Andrews and Thomson wondered if they might also help people prepare for bachelorhood or allow people to learn from their mistakes. “I started thinking about how, even if you are depressed for a few months, the depression might be worth it if it helps you better understand social relationships,” Andrews says. “Maybe you realize you need to be less rigid or more loving. Those are insights that can come out of depression, and they can be very valuable.”

This radical idea — the scientists were suggesting that depressive disorder came with a net mental benefit — has a long intellectual history. Aristotle was there first, stating in the fourth century B.C. “that all men who have attained excellence in philosophy, in poetry, in art and in politics, even Socrates and Plato, had a melancholic habitus; indeed some suffered even from melancholic disease.” This belief was revived during the Renaissance, leading Milton to exclaim, in his poem “Il Penseroso”: “Hail divinest Melancholy/Whose saintly visage is too bright/To hit the sense of human sight.” The Romantic poets took the veneration of sadness to its logical extreme and described suffering as a prerequisite for the literary life. As Keats wrote, “Do you not see how necessary a World of Pains and troubles is to school an intelligence and make it a soul?”

But Andrews and Thomson weren’t interested in ancient aphorisms or poetic apologias. Their daunting challenge was to show how rumination might lead to improved outcomes, especially when it comes to solving life’s most difficult dilemmas. Their first speculations focused on the core features of depression, like the inability of depressed subjects to experience pleasure or their lack of interest in food, sex and social interactions. According to Andrews and Thomson, these awful symptoms came with a productive side effect, because they reduced the possibility of becoming distracted from the pressing problem.

The capacity for intense focus, they note, relies in large part on a brain area called the left ventrolateral prefrontal cortex (VLPFC), which is located a few inches behind the forehead. While this area has been associated with a wide variety of mental talents, like conceptual knowledge and verb conjugation, it seems to be especially important for maintaining attention. Experiments show that neurons in the VLPFC must fire continuously to keep us on task so that we don’t become sidetracked by irrelevant information. Furthermore, deficits in the VLPFC have been associated with attention-deficit disorder.

Several studies found an increase in brain activity (as measured indirectly by blood flow) in the VLPFC of depressed patients. Most recently, a paper to be published next month by neuroscientists in China found a spike in “functional connectivity” between the lateral prefrontal cortex and other parts of the brain in depressed patients, with more severe depressions leading to more prefrontal activity. One explanation for this finding is that the hyperactive VLPFC underlies rumination, allowing people to stay focused on their problem. (Andrews and Thomson argue that this relentless fixation also explains the cognitive deficits of depressed subjects, as they are too busy thinking about their real-life problems to bother with an artificial lab exercise; their VLPFC can’t be bothered to care.) Human attention is a scarce resource — the neural effects of depression make sure the resource is efficiently allocated.

But the reliance on the VLPFC doesn’t just lead us to fixate on our depressing situation; it also leads to an extremely analytical style of thinking. That’s because rumination is largely rooted in working memory, a kind of mental scratchpad that allows us to “work” with all the information stuck in consciousness. When people rely on working memory — and it doesn’t matter if they’re doing long division or contemplating a relationship gone wrong — they tend to think in a more deliberate fashion, breaking down their complex problems into their simpler parts.

The bad news is that this deliberate thought process is slow, tiresome and prone to distraction; the prefrontal cortex soon grows exhausted and gives out. Andrews and Thomson see depression as a way of bolstering our feeble analytical skills, making it easier to pay continuous attention to a difficult dilemma. The downcast mood and activation of the VLPFC are part of a “coordinated system” that, Andrews and Thomson say, exists “for the specific purpose of effectively analyzing the complex life problem that triggered the depression.” If depression didn’t exist — if we didn’t react to stress and trauma with endless ruminations — then we would be less likely to solve our predicaments. Wisdom isn’t cheap, and we pay for it with pain.

Consider a young professor on tenure track who was treated by Thomson. The patient was having difficulties with his academic department. “This guy was used to success coming easy, but now it wasn’t,” Thomson says. “I made it clear that I thought he’d need some time to figure out his next step. His problem was like a splinter, and the pain wouldn’t go away until the splinter was removed.” Should the patient leave the department? Should he leave academia? Or should he try to resolve the disagreement? Over the next several weeks, Thomson helped the patient analyze his situation and carefully think through the alternatives. “We took it one variable at a time,” Thomson says. “And it eventually became clear to him that the departmental issues couldn’t be fixed. He needed to leave. Once he came to that conclusion, he started feeling better.”


The publication of Andrews and Thomson’s 36,000-word paper in the July 2009 issue of Psychological Review had a polarizing effect on the field. While some researchers, like Jerome Wakefield, a professor at New York University who specializes in the conceptual foundations of clinical theory, greeted the paper as “an extremely important first step toward the re-evaluation of depression,” other psychiatrists regarded it as little more than irresponsible speculation, a justification for human suffering. Peter Kramer, a professor of psychiatry and human behavior at Brown University, describes the paper as “a ladder with a series of weak rungs.” Kramer has long defended the use of antidepressants — his landmark work, “Listening to Prozac,” chronicled the profound improvements of patients taking the drugs — and criticized those who romanticized depression, which he compares to the glamorization of tuberculosis in the late 19th century. In a series of e-mail messages to me, Kramer suggested that Andrews and Thomson neglect the variants of depression that don’t fit their evolutionary theory. “This study says nothing about chronic depression and the sort of self-hating, paralyzing, hopeless, circular rumination it inspires,” Kramer wrote. And what about post-stroke depression? Late-life depression? Extreme depressive condition? Kramer argues that there’s a clear category difference between a healthy response to social stressors and the response of people with depressive disorder. “Depression is not really like sadness,” Kramer has written. “It’s more an oppressive flattening of feeling.”

Even scientists who are sympathetic to what Andrews and Thomson call the “analytic-rumination hypothesis” remain critical of its details. Ed Hagen, an anthropologist at Washington State University who is working on a book with Andrews, says that while the analytic-rumination hypothesis has persuaded him that some depressive symptoms might improve problem-solving skills, he remains unconvinced that it is a sufficient explanation for depression. “Individuals with major depression often don’t groom, bathe and sometimes don’t even use the toilet,” Hagen says. They also significantly “reduce investment in child care,” which could have detrimental effects on the survival of offspring. The steep fitness costs of these behaviors, Hagen says, would not be offset by “more uninterrupted time to think.”

Other scientists, including Randolph Nesse at the University of Michigan, say that complex psychiatric disorders like depression rarely have simple evolutionary explanations. In fact, the analytic-rumination hypothesis is merely the latest attempt to explain the prevalence of depression. There is, for example, the “plea for help” theory, which suggests that depression is a way of eliciting assistance from loved ones. There’s also the “signal of defeat” hypothesis, which argues that feelings of despair after a loss in social status help prevent unnecessary attacks; we’re too busy sulking to fight back. And then there’s “depressive realism”: several studies have found that people with depression have a more accurate view of reality and are better at predicting future outcomes. While each of these speculations has scientific support, none are sufficient to explain an illness that afflicts so many people. The moral, Nesse says, is that sadness, like happiness, has many functions.

Although Nesse says he admires the analytic-rumination hypothesis, he adds that it fails to capture the heterogeneity of depressive disorder. Andrews and Thomson compare depression to a fever helping to fight off infection, but Nesse says a more accurate metaphor is chronic pain, which can arise for innumerable reasons. “Sometimes, the pain is going to have an organic source,” he says. “Maybe you’ve slipped a disc or pinched a nerve, in which case you’ve got to solve that underlying problem. But much of the time there is no origin for the pain. The pain itself is the dysfunction.”

Andrews and Thomson respond to such criticisms by acknowledging that depression is a vast continuum, a catch-all term for a spectrum of symptoms. While the analytic-rumination hypothesis might explain those patients reacting to an “acute stressor,” it can’t account for those whose suffering has no discernible cause or whose sadness refuses to lift for years at a time. “To say that depression can be useful doesn’t mean it’s always going to be useful,” Thomson says. “Sometimes, the symptoms can spiral out of control. The problem, though, is that as a society, we’ve come to see depression as something that must always be avoided or medicated away. We’ve been so eager to remove the stigma from depression that we’ve ended up stigmatizing sadness.”

For Thomson, this new theory of depression has directly affected his medical practice. “That’s the litmus test for me,” he says. “Do these ideas help me treat my patients better?” In recent years, Thomson has cut back on antidepressant prescriptions, because, he says, he now believes that the drugs can sometimes interfere with genuine recovery, making it harder for people to resolve their social dilemmas. “I remember one patient who came in and said she needed to reduce her dosage,” he says. “I asked her if the antidepressants were working, and she said something I’ll never forget. ‘Yes, they’re working great,’ she told me. ‘I feel so much better. But I’m still married to the same alcoholic son of a bitch. It’s just now he’s tolerable.’ ”

The point is that the woman was depressed for a reason; her pain was about something. While the drugs made her feel better, no real progress was ever made. Thomson’s skepticism about antidepressants is bolstered by recent studies questioning their benefits, at least for patients with moderate depression. Consider a 2005 paper led by Steven Hollon, a psychologist at Vanderbilt University: he found that people on antidepressants had a 76 percent chance of relapse within a year when the drugs were discontinued. In contrast, patients given a form of cognitive talk therapy had a relapse rate of 31 percent. And Hollon’s data aren’t unusual: several studies found that patients treated with medication were approximately twice as likely to relapse as patients treated with cognitive behavior therapy. “The high relapse rate suggests that the drugs aren’t really solving anything,” Thomson says. “In fact, they seem to be interfering with the solution, so that patients are discouraged from dealing with their problems. We end up having to keep people on the drugs forever. It’s as if these people have a bodily infection, and modern psychiatry is just treating their fever.”

Thomson describes a college student who was referred to his practice. “It was clear that this patient was in a lot of pain,” Thomson says. “He couldn’t sleep, couldn’t study. He had some family issues” — his parents were recently divorced — “and his father was exerting a tremendous amount of pressure on him to go to graduate school. Because he’s got a family history of depression, the standard of care would be to put him on drugs right away. And a few years ago, that’s what I would have done.”

Instead, Thomson was determined to help the student solve his problem. “What you’re trying to do is speed along the rumination process,” Thomson says. “Once you show people the dilemma they need to solve, they almost always start feeling better.” He cites as evidence a recent study that found “expressive writing” — asking depressed subjects to write essays about their feelings — led to significantly shorter depressive episodes. The reason, Thomson suggests, is that writing is a form of thinking, which enhances our natural problem-solving abilities. “This doesn’t mean there’s some miracle cure,” he says. “In most cases, the recovery period is going to be long and difficult. And that’s what I told this young student. I said: ‘I know you’re hurting. I know these problems seem impossible. But they’re not. And I can help you solve them.’ ”


IT’S TOO SOON to judge the analytic-rumination hypothesis. Nobody knows if depression is an adaptation or if Andrews and Thomson have merely spun another “Just So” story, a clever evolutionary tale that lacks direct evidence. Nevertheless, their speculation is part of a larger scientific re-evaluation of negative moods, which have long been seen as emotional states to avoid. The dismissal of sadness and its synonyms is perhaps best exemplified by the rise of positive psychology, a scientific field devoted to the pursuit of happiness. In recent years, a number of positive psychologists have written popular self-help books, like “The How of Happiness” and “Authentic Happiness,” that try to outline the scientific principles behind “lasting fulfillment” and “getting the life we want.”

The new research on negative moods, however, suggests that sadness comes with its own set of benefits and that even our most unpleasant feelings serve an important purpose. Joe Forgas, a social psychologist at the University of New South Wales in Australia, has repeatedly demonstrated in experiments that negative moods lead to better decisions in complex situations. The reason, Forgas suggests, is rooted in the intertwined nature of mood and cognition: sadness promotes “information-processing strategies best suited to dealing with more-demanding situations.” This helps explain why test subjects who are melancholy — Forgas induces the mood with a short film about death and cancer — are better at judging the accuracy of rumors and recalling past events; they’re also much less likely to stereotype strangers.

Last year Forgas ventured beyond the lab and began conducting studies in a small stationery store in suburban Sydney, Australia. The experiment itself was simple: Forgas placed a variety of trinkets, like toy soldiers, plastic animals and miniature cars, near the checkout counter. As shoppers exited, Forgas tested their memory, asking them to list as many of the items as possible. To control for the effect of mood, Forgas conducted the survey on gray, rainy days — he accentuated the weather by playing Verdi’s “Requiem” — and on sunny days, using a soundtrack of Gilbert and Sullivan. The results were clear: shoppers in the “low mood” condition remembered nearly four times as many of the trinkets. The wet weather made them sad, and their sadness made them more aware and attentive.

The enhancement of these mental skills might also explain the striking correlation between creative production and depressive disorders. In a survey led by the neuroscientist Nancy Andreasen, 30 writers from the Iowa Writers’ Workshop were interviewed about their mental history. Eighty percent of the writers met the formal diagnostic criteria for some form of depression. A similar theme emerged from biographical studies of British writers and artists by Kay Redfield Jamison, a professor of psychiatry at Johns Hopkins, who found that successful individuals were eight times as likely as people in the general population to suffer from major depressive illness.

Why is mental illness so closely associated with creativity? Andreasen argues that depression is intertwined with a “cognitive style” that makes people more likely to produce successful works of art. In the creative process, Andreasen says, “one of the most important qualities is persistence.” Based on the Iowa sample, Andreasen found that “successful writers are like prizefighters who keep on getting hit but won’t go down. They’ll stick with it until it’s right.” While Andreasen acknowledges the burden of mental illness — she quotes Robert Lowell on depression not being a “gift of the Muse” and describes his reliance on lithium to escape the pain — she argues that many forms of creativity benefit from the relentless focus it makes possible. “Unfortunately, this type of thinking is often inseparable from the suffering,” she says. “If you’re at the cutting edge, then you’re going to bleed.”

And then there’s the virtue of self-loathing, which is one of the symptoms of depression. When people are stuck in the ruminative spiral, their achievements become invisible; the mind is only interested in what has gone wrong. While this condition is typically linked to withdrawal and silence — people become unwilling to communicate — there’s some suggestive evidence that states of unhappiness can actually improve our expressive abilities. Forgas said he has found that sadness correlates with clearer and more compelling sentences, and that negative moods “promote a more concrete, accommodative and ultimately more successful communication style.” Because we’re more critical of what we’re writing, we produce more refined prose, the sentences polished by our angst. As Roland Barthes observed, “A creative writer is one for whom writing is a problem.”

This line of research led Andrews to conduct his own experiment, as he sought to better understand the link between negative mood and improved analytical abilities. He gave 115 undergraduates an abstract-reasoning test known as Raven’s Progressive Matrices, which requires subjects to identify a missing segment in a larger pattern. (Performance on the task strongly predicts general intelligence.) The first thing Andrews found was that nondepressed students showed an increase in “depressed affect” after taking the test. In other words, the mere presence of a challenging problem — even an abstract puzzle — induced a kind of attentive trance, which led to feelings of sadness. It doesn’t matter if we’re working on a mathematical equation or working through a broken heart: the anatomy of focus is inseparable from the anatomy of melancholy. This suggests that depressive disorder is an extreme form of an ordinary thought process, part of the dismal machinery that draws us toward our problems, like a magnet to metal.

But is that closeness effective? Does the despondency help us solve anything? Andrews found a significant correlation between depressed affect and individual performance on the intelligence test, at least once the subjects were distracted from their pain: lower moods were associated with higher scores. “The results were clear,” Andrews says. “Depressed affect made people think better.” The challenge, of course, is persuading people to accept their misery, to embrace the tonic of despair. To say that depression has a purpose or that sadness makes us smarter says nothing about its awfulness. A fever, after all, might have benefits, but we still take pills to make it go away. This is the paradox of evolution: even if our pain is useful, the urge to escape from the pain remains the most powerful instinct of all.


Jonah Lehrer is the author of How We Decide, Proust Was a Neuroscientist, and the blog The Frontal Cortex. This is his first article for the magazine.

The Free-Appropriation Writer

Published: February 26, 2010

Here’s an exercise: Think of almost any kind of cultural endeavor and then use the word “we” to describe its creation. The communal pronoun trips easily off the tongue when talking about the world of contemporary arts and entertainment, where things are often the product of teams, workshops, studios or institutions, where collaboration and idea-swapping are the norm. But now try applying it to creative writing, especially to fiction and poetry, and it can sound absurd: “We worked for years on the character development and the voice, and when we finally nailed the subtle epiphany, we cracked open a bottle of Champagne to celebrate.”

Not that there isn’t the occasional team-written novel. But the popular conception of the creative writer is still by and large one of the individual trying to wrestle language, maybe even the meaning of life, from his soul, the kind of lone battle Jonathan Franzen described himself waging in writing “The Corrections,” which he sometimes did in the dark, wearing earplugs and earmuffs, trying to hold his mind “free of clichés.”

Maybe that’s one reason for the flurry of attention recently about a teenage German novelist, Helene Hegemann, whose book about Berlin’s club scene was named a finalist for a prestigious literary prize to be awarded next month in Leipzig. After a blogger and fellow novelist announced that Ms. Hegemann had blended sizeable chunks of his own writing into hers, Ms. Hegemann, instead of following the plagiarism-gotcha script of contrition and retraction so familiar in recent years, announced that appropriating the passages from that book and other sources was her plan all along.

A child of a media-saturated generation, she presented herself as a writer whose birthright is the remix, the use of anything at hand she feels suits her purposes, an idea of communal creativity that certainly wasn’t shared by those from whom she borrowed. In a line that might have been stolen from Sartre (it wasn’t), she added: “There’s no such thing as originality anyway, just authenticity.”

The news made waves in the United States with an almost novelistic kind of timing, just before the publication last week of a highly anticipated book by David Shields, “Reality Hunger,” a feisty literary “manifesto” built almost entirely of quotations from other writers and thinkers. The borrowed words are marshaled to make a case against what Mr. Shields sees as boring fiction and in favor of genre-bending forms like the lyric essay. Mr. Shields, a novelist who migrated to nonfiction, has called it “far and away the most personal book” he has ever written. And though publishing-house lawyers required him to include an appendix listing his sources (at least those he could remember), Mr. Shields asks the reader to honor the spirit of the book by taking a pair of scissors and giving it an appendectomy.

His manifesto and Ms. Hegemann’s novel prompted the quick drawing of battle lines, coming at a time when tensions have probably never been higher between a growing culture of borrowing and appropriation on one side and, on the other, copyright advocates and those who fear a steady erosion of creative protections.

Patrick Ross, executive director of the Copyright Alliance, a trade group involving movie studios, networks and artists, took to the alliance’s blog immediately to condemn Ms. Hegemann. “Our would-be novelist says nothing is original, yet the passages she lifted from other books were original expressions in those books, even if the ideas were not new,” he wrote, adding that a creative culture dominated by borrowing and repurposing is a “culture that will quickly grow stale.”

But Mr. Shields argues that blatant borrowing has been a foundation of culture since man first took up pen and paintbrush, long before Terence complained in the second century B.C. that “there’s nothing to say that hasn’t been said before.” (Mr. Shields’s point about borrowing has certainly been made many times before, a fact he readily acknowledges.) Appropriation has breathed life into music, art and theater, he argues, and he lines up a kind of murderers’ row of writers, including Sterne, Emerson, Eliot and Joyce (“I am quite content to go down to posterity as a scissors and paste man”) to make the case that it has been an important tradition in writing, too.

But it has been a limited one, viewed with even greater suspicion now. And Mr. Shields, so firmly in the camp that sees appropriation as just another kind of collaboration, laments that expressive writing has lagged behind the other arts in using appropriation as a tool, especially in an age when the most vital artists are those “breaking larger and larger chunks of ‘reality’ into their works.”

“Why can’t literature catch up with the other arts?” he said in a recent telephone interview from Seattle, where he lives and teaches, citing recent encouraging (to him) trends like Flarf, the experimental poetry movement in which practitioners make verse out of the results of random Internet word searches. Of Ms. Hegemann’s book and her defense of it, he added: “My goodness, it’s just straight out of my brainpan. She basically did the book I wanted to do.”

You could argue, of course, that Warhol’s use of a soup can or Danger Mouse’s use of the Beatles and Jay-Z on the Grey Album represent one thing, a re-contextualizing of cultural artifacts so well known they are a kind of shorthand. But does lifting from an obscure blogger — or even importing a description of a sunset by Steinbeck or a suburban tableau from Updike — accomplish the same thing?

Mr. Shields’s book relies on thinkers from Wittgenstein to DJ Spooky, melding them into a voice that can sound at times eerily consistent. He contends that in a world where the death of the novel has been announced with great regularity for almost half a century, such an open-source approach is the only way to keep literature alive. Even the most original-seeming writing borrows from the centuries of writing that came before, so why not simply be more honest and, he suggests, maybe do something more interesting in the process?

“So much of the energy of great work to me is feeling the echo effect on every line, of not knowing where it came from,” he said, citing a quote — this one attributed — from Graham Greene that he uses as one of the book’s epigraphs: “When we are not sure, we are alive.”

The law and conventional ethics are still probably a long way from embracing the kind of world that Mr. Shields and Ms. Hegemann envision. But Louis Menand, the Harvard professor and New Yorker staff writer, suggested that, as with any creative movement, if the results are compelling and profound enough, even rigid conventions come around to making what seemed like a sin into a virtue.

“If something is really successful, then the law tends to get changed and society changes to allow it to happen,” he said. “The test has always been in the pudding.”

A version of this article appeared in print on February 28, 2010, on page WK3 of the New York edition.

Why the Fetish About Footnotes?

In the world of academe, Web clicking would be too easy.

Wall Street Journal

JANUARY 28, 2010, 7:38 P.M. ET

Is there a college graduate alive who doesn’t react to the words “footnotes” and “bibliography” with at least a small shiver of lingering dread? But while the citations, the style manuals, the numbering, the numbing tedium are just unpleasant memories to folks who now labor far from the groves of academe, they are a continuing affliction for scholars in the humanities. Thanks to the Internet, it doesn’t have to be that way.

Accurate sourcing guarantees that researchers make rigorous inquiries, handle evidence responsibly, and give credit where it’s due. In my first year of graduate school back in the early 1980s, a professor I worked for gave me a stern lesson in research. He was wrapping up an essay on Romantic poetry that drew upon recent critical studies, each one of them citing, in turn, previous works of scholarship and primary texts. We went up to the library one afternoon to make sure his immediate sources had reproduced earlier sources accurately. That meant collecting old books and journals one by one and finding the relevant pages and passages. “Can’t you just assume they did it right?” I asked. He looked at me, raised a finger, and replied: “Never trust a footnote—always check the original.” After an hour we found one slight misquotation. He corrected it in his text and proved the point: Do your homework and get it right.

It can be so much easier now. If you’re in the middle of a 400-page tome and want to verify a footnote on an essay by John Dewey from 1905, you don’t have to leave your desk. Almost all scholarly journals are now online. A click to the library Web site, then to the electronic version of the Journal of Philosophy, Psychology, and Scientific Methods gives you Dewey’s piece “The Realism of Pragmatism” in handy PDF format. Thousands of old books important to scholars are available, too. If you need to check a quote from William James’s volume “Pragmatism,” there it sits at Google Books, and a word search carries you right to the relevant passage. As more e-versions become available, only books still in copyright and unique archival documents require a day trip to the library or a trip across the country to a special collection. All those hours pulling old periodicals off the shelves, recalling old books and poring through 45 pages to find one sentence shrink to minutes of online delving.

The process works most efficiently, of course, when scholarly writing is online and the scholar embeds a link into the text so that readers can click directly to the original source. But even in hardcover books and paper-and-ink periodicals, a Web page reference inserted at the end of a sentence allows readers to type it into a laptop and have both works at hand for comparison. One of the prime reasons for documentation, the evaluation of a scholar’s evidence, happens more smoothly—and collegially. Researchers who don’t have access to print versions of the works cited in a book they are reading can download them and join professional conversations well-equipped and up to speed. Graduate students who are pressed for time and money but have to complete a reference-heavy dissertation move swiftly toward that blessed ascension to Ph.D. status.

Yet digital modes of citation have not been standardized as scholarly practice in the humanities. The 2008 MLA Handbook still spends 97 pages detailing nondigital formats and protocols of documentation. This means that scholars must devote long hours to an apparatus that in a digital age is inefficient and increasingly unnecessary. As links have multiplied and become standard practice in blogs, online magazines and e-commerce, the academic model appears ever more just that—academic. Citations are supposed to enhance scholarly communication, but as you read one monograph after another, wading through notes and works cited until the eyes glaze over, you sense a different objective at work. Authors clog the pages with notes not to facilitate peer review, but to demonstrate their place at the table, their ability to speak an obscure language that few bother to translate for the world at large.

Canny novices discover the game early in graduate school. Load up your papers and chapters with references, position your work against a dozen figures in the subfield, for every major point give a minilineage of precursors, and sprinkle allusions to Big Thinkers such as Michel Foucault (and, of course, your local mentors). Readers won’t examine the notes closely. They’ll just chalk up the lines by number and names, and recognize you as a legit practitioner.

If we shrink the apparatus with digital links, we’ll lose that brand of professional gamesmanship. Yes, digital citation has its problems. The Web contains lots of unreliable sources and the environment fluctuates over time. The same Web page can say one thing one day, another thing on another day, as in the case of a Wikipedia entry. But the utility outweighs the downside. One of the worst trends in the humanities in recent decades has been the insularity of scholarly expression. The conversation is so mannered and self-involved, so insider-like, that it has no readership beyond a few dozen colleagues in the same sub-sub-field. How many people could stomach a sentence like this one, a runner-up in the celebrated Bad Writing Contest a few years back: “Punctuated by what became ubiquitous sound bites—Tonya dashing after the tow truck, Nancy sailing the ice with one leg reaching for heaven—this melodrama parsed the transgressive hybridity of un-narrativized representative bodies back into recognizable heterovisual codes”?

If digital technology streamlines documentation, we might get fewer hyperacademic treatises weighed down with notes, numbers, allusions and other machinery and find more books and articles built on interesting ideas presented in accessible paragraphs. The MLA and other professional organizations could do a great service by authorizing and standardizing the practice.

Mr. Bauerlein is the author of “The Dumbest Generation: How the Digital Age Stupefies Young Americans and Jeopardizes Our Future.”