Adam Phillips’ Missing Out

By Mark O’Connell
Slate

The lives we didn't live, and how they affect us

My wife, who is pregnant with our first child, had her three-month sonogram in early September. Right after the scan was finished, I had to run out of the hospital and down the street to where we’d parked our car about an hour and a quarter earlier. We’d only had enough loose change to pay for an hour’s parking, and we were in increasing danger of getting clamped. I sat in the car and waited while she signed some forms at the reception, and as the rain spilled relentlessly down on the windshield, I took my phone out of my pocket and looked at the photograph I had taken of the sonogram image just a few minutes before. It struck me as a strange and uniquely contemporary experience, to be looking at an image on a screen that depicted another image on another screen that represented my first glimpse of my first child; it was somehow, paradoxically, all the more touching for this sense of an alienating technological double remove.

I felt that what I was looking at represented my future. I was going to be a father. And not just any father, but the father of this blurry little personage with its lovely pea-sized head and cartoonishly reclining body. And as I was thinking about all the clustered possibilities in those rapidly subdividing cells—all the bewildering permutations of gender and appearance and personality and genetic fate—I also began to think about the possibilities that were, as of right now, in my past, and that were therefore no longer possibilities. In my vague and ineffectual way, I had always planned to live abroad; and I registered now, with a vague sense of loss that was somehow part of the joy of looking at the sonogram image, that this was no longer very likely to happen. I was thinking, too, that the period of my life in which I might legitimately spend large amounts of time on projects not strictly financially motivated had ended. Even as I was exhilarated about the life that now lay ahead of me—all the wonderfully terrifying possibilities of parenthood—I was thinking about the various people I had never quite got around to becoming (the happily itinerant academic, the journalist seeking out extraordinary stories in strange places). I was thinking about my unlived lives, and how every route taken inevitably forecloses the possibility of various others.

And so when I heard that the British psychoanalyst and essayist Adam Phillips had a new book called Missing Out: In Praise of the Unlived Life, I was intrigued. The idea from which Phillips’ book starts out is that the paths we don’t pursue in life are a crucial dimension of our lived experience. “Our unlived lives—the lives we live in fantasy, the wished-for lives—are often more important to us than our so-called lived lives,” he writes in his prologue. “We can’t (in both senses) imagine ourselves without them.” This is a fascinating idea, and it’s difficult to think of anyone who would be better suited to exploring it than Phillips, who is one of the literary world’s most consistently provocative explorers of fascinating ideas.

In a sense, he has been hovering around this topic for much of his career. Psychoanalysis itself, of course, is traditionally at least as concerned with the things that don’t happen in our lives as those that do, with the shadow-world of dreams and anxieties and unmet desires. And Phillips has always been interested in the various ways, real and imagined, in which we extricate ourselves from the lives we find ourselves living. One of the oddest and most interesting of his many odd and interesting books is 2001’s Houdini’s Box: On the Arts of Escape, in which he examines evasion as one of the crucial mythologies of our cultural and psychological lives. “Every modern person,” he writes in its final pages, “has their own repertoire of elsewheres, of alternatives—the places they go to in their minds, and the ambitions they attempt to realize—to make their actual, lived lives more than bearable. Indeed the whole notion of escape—that it is possible and desirable—is like a prosthetic device of the imagination. How could we live without it?”

Missing Out seems, at first, to pick up where Houdini’s Box left off, with this idea that the life that doesn’t happen—the life, for instance, of the aforementioned wandering man of letters—is actually crucial to interpreting our experience of the one that does. “We may need to think of ourselves,” as Phillips puts it in the prologue, “as always living a double life, the one that we wish for and the one that we practice; the one that never happens and the one that keeps happening.” The implied definition here—that life is the thing that keeps happening—gives some sense of the kind of stylist Phillips is. He is casually, almost off-handedly epigrammatic; his work yields a perennial harvest of quotable phrases without ever making this aphorizing seem the point of the exercise. (“We share our lives with the people we have failed to be”; “Greed is despair about pleasure”; “Satisfaction is no more the solution to frustration than certainty is the solution to skepticism”; “If you get Othello you have no idea of what it is about.”) If you’re an underliner, in other words, have a pencil sharpener to hand when reading Adam Phillips.

But in a way that mirrors Phillips’ idea (inherited from the child psychoanalyst D.W. Winnicott, who seems a stronger influence than Freud) that the good life is one in which there is “just the right amount of frustration,” the pleasures of his style—its studied waywardness, its cultivations of paradox and playful evasion—are inseparable from its aggravations. He seems to embark on his essays without a clear sense of what their destinations might be; he is, unmistakably, the kind of writer who finds out what he wants to say by finding himself saying it. Phillips’ books may be richly eloquent and aphoristic, but don’t look for conventionally attractive arguments—takeaways, ideas worth sharing. He’s as likely to write about Proust as he is to write about Freud or Lacan, but you’re never going to find an explanation of how Proust was a neurosurgeon, say, or an assurance that reading him can change your life. He is not, in other words, one of your modern notion-hawkers. His books tend to be basically gist-resistant, which is why their titles are always, to one degree or another, misleading. A Phillips essay is typically one in which a great many interesting ideas have been floated, but in which no solid overall structure of significance has been built.

There are moments in Missing Out, though, when Phillips’ equivocations and circumlocutions start to cancel one another out, and when you find yourself wondering what, if anything, is actually being said. In the essay “On Frustration,” for instance, he tells us that “Knowing too exactly what we want is what we do when we know what we want, or when we don’t know what we want (are, so to speak, unconscious of our wanting, and made anxious by our lack of direction).” This is too obviously a sentence that doesn’t know what it wants at all, or that doesn’t seem to want anything but to be left to its own devices. This is an extreme example of his evasive style, but his tics—always related to his habit of hedging and qualifying his statements—can be alarmingly domineering. Page 13: “Frustration is always, whatever else it is, a temptation scene.” Page 117: “Getting out … is always a missing-out, whatever else it is.” Page 134: “Literature is escapist, whatever else it is, in its incessant descriptions of people trying to release themselves from something or other.” Page 185: “Whatever else we are, we are also mad.”

I’m not just being overparticular here about a slightly irritating stylistic quirk. These sentences illustrate an interesting structural tension in Phillips’ prose between two equal and opposite forces: the resolutely aphoristic and the instinctively ambiguous. In what I think is his best book, the 1993 collection On Kissing, Tickling, and Being Bored, he asks whether “the artist has the courage of his perversions.” In a similar way, Phillips is an essayist who has the courage of his ambivalence, who never loses sight of the task of equivocation.

The question in Missing Out is whether those equivocations serve any obvious larger purpose. The six essays here are mostly either reformulations of the prologue’s claim about the lives that have escaped us, or extended considerations of subjects that only have tangential (or whimsical) relevance to it. This isn’t to say that there aren’t some intriguing ideas here, or a great many beautiful sentences. As is usual with Phillips, the diversions (the parenthetical assertions, the distracted definitions) are a sideshow that justifies the price of admission. There are wonderful analyses here—in both the Freudian and the critical senses—of Othello, of King Lear, and of Larkin’s poetry. Phillips breaks down the distinction between the work of the analyst and the work of the critic, and makes them seem like more or less the same job. He’s the sort of literary thinker who can extract vast amounts of significance from a single word or phrase. In “On Not Getting It,” which is, if I understand it, about the benefits of not understanding things, we get this wonderful run of sentences:

Infants and young children have to be, in a certain sense, understood by their parents; but perhaps understanding is one thing we can do with each other—something peculiarly bewitching or entrancing—but also something that can be limiting, regressive, more suited to our younger selves; that can indeed be our most culturally sanctioned defense against other kinds of experience—sexuality being the obvious case in point—that are not subject to understanding, or which understanding has nothing to do with, or is merely a distraction from. That if growing up might be a quest for one’s illegitimacy, this is because one’s illegitimacy resides in what one thinks one knows about oneself.

This gets to the heart of what I mean about the fundamental inseparability of the frustrations and pleasures of Phillips’ writing. It’s a superb passage—beautiful and surprising and, for all I know, true—but, like so much of the rest of the book, its relevance to the topic supposedly at hand is difficult to see. The subtitle of Missing Out—“In Praise of the Unlived Life”—suggests that Phillips plans to address the question of the unlived life, and that, if he were pushed to take a stance on this question, he would be broadly in favor. But, ironically and oddly appropriately, Missing Out ends up missing out on—or perhaps managing to evade—its own apparent subject. And so, while reading it, I spent a lot of time wondering about the wished-for book in praise of the unlived life, which remains frustratingly and tantalizingly unwritten.

In Praise of Concision

By Brad Leithauser
The New Yorker

Some guy on TV is describing how he fitted his automobile with a new skin: gluing them one by one, he has blanketed every inch of its exterior with beer-bottle caps. Or he’s recounting how he fashioned, from zillions of ordinary toothpicks, a toothpick ten feet long and a foot thick—something Paul Bunyan couldn’t lift to his mouth. Or he’s displaying the dozens of photo albums that catalogue, exhaustively, the individual stacks of pancakes on which he has breakfasted daily for the past six years. And as I sit watching, one of my daughters ambles by, glances at the screen, and mutters, “Whoa, free time.”

Whoa, free time. In a minimum of space, it speaks volumes. It says, Boy, leisure time must hang heavy over your head. And, Have you stopped to consider the uselessness of what you’re doing? And, often, Adults do get up to absolutely asinine things, don’t they?

Concision. While a love for poetry may seem inseparable from a love for words, I feel a special fondness for the poem (or quip, or short story) that gets the job done while using them—words—sparingly. I like epigrams, miniatures, punch lines, and I keep a sort of mental cabinet of clipped curiosities. Pride of place belongs to the author of “Fleas,” a poem often attributed to Ogden Nash but actually written by Strickland Gillilan. (The story gets complicated: Gillilan did not call it “Fleas”—it’s unclear who first did—and Ogden Nash was apparently credited because he ought to have written it.) In any case, it must be the shortest successful poem in the language. (I’m tempted wildly to declare it the shortest successful poem in any language.) Here it is in its entirety:

Adam
Had ’em.

One of Gillilan’s specialties was light verse, and a sympathetic reader will remark on how the poem, brief as it is, formally does what good light verse typically does: with its unlikely rhyme, it smoothes seeming clumsiness (“Had ’em”) into antic dexterity. And it does so with—another hallmark of light verse—a polished finish. (From a technical standpoint the poem is, I suppose, an absolutely regular trochaic monometric couplet.) But there’s more. The poem actually offers a “criticism of life”—Matthew Arnold’s touchstone for poetry that addresses the “spirit of our race.” Doesn’t it say, in effect, Why fuss over minor annoyances, as we’ve been doing since the beginning of time, given that complaining has done nothing to alleviate our lot?

On a graver note—as grave as humankind is capable of—what about “Jesus wept”? The shortest verse in the Bible may also be the most affecting.

I’m partial to haiku, particularly when they intimate a far larger story than they tell. Here’s an especially terse example by Buson (translated by Robert Hass):

I go,
you stay;
two autumns.

The separation referred to may be a literal two years. But I prefer to think it’s metaphorical. Departing, remaining—in either case, it’s a loss, the season of loss. A single entity—a couple—devolves into a pared, shared falling away.

The most touching English-language haiku I know belongs to Seamus Heaney:

Dangerous pavements.
But this year I face the ice
With my father’s stick.

In a mere seventeen syllables, the poem evokes a complex, compromised psychological condition. There’s comfort in the notion that Father is sheltering us with that stolid stick of his. And there’s anguish and vulnerability in the implication that the stick has been transferred because Father has died—recently, within the past year. As we set off from home into the freezing outer world, all sorts of emotional accommodations must be discharged.

Concision in its broadest spirit encompasses far more than a stripping of verbiage. It clarifies the contours, it revels in the sleek and streamlined. Years ago, I edited “The Norton Book of Ghost Stories,” a gathering of twenty-eight tales from the thousand-plus that I read and took notes on. In some ways, my favorite in the book is W. F. Harvey’s “The Clock.” It’s hardly the scariest of the lot, but it does have the simplest premise, and utilizes the fewest props. There’s nothing in it but a straightforward, naïve narrator, a long-boarded-up house, and a ticking clock. (A ticking clock? But who in the abandoned house has wound the clock?)

I’d set Harvey’s clock beside some other household knickknacks, like the glass menagerie of Tennessee Williams’s play. I’ve seen many successful plays that were shorter or had a smaller cast, but, for me, “The Glass Menagerie” represents a certain sort of pruned perfection. Its quartet of touchingly at-odds characters creates a tableau where no line of dialogue feels extraneous, and every latent nuance is brought to pathos.

Still, poetry remains the domain where concision consistently burns brightest. (Someone told me that Marilyn Monroe once remarked that she enjoyed reading poetry “because it saves time.” I like this quotation so much that I’ve never dared to confirm it; I’d feel disenchanted to learn it was bogus.) My little cabinet includes two six-line poems whose psychological richness surely couldn’t be duplicated in a full page of poetic prose. The first is W. H. Auden’s “Epitaph on a Tyrant”:

Perfection of a kind was what he was after,
And the poetry he invented was easy to understand;
He knew human folly like the back of his hand,
And was greatly interested in armies and fleets;
When he laughed, respectable senators burst with laughter,
And when he cried the little children died in the streets.

We have here some Nazi monster listening to Schubert lieder at the end of a workday devoted to the Final Solution. Or Henry VIII admiring a Holbein portrait right before ordering another innocent to the executioner’s axe. Or Caligula attending a lighthearted masque on the heels of a highly productive brainstorming session with his court torturer. Here is, ultimately, the whole haunting, ever-repeating saga of the good ship Civilization foundering when a madman somehow seizes its helm.

I’m equally drawn to Donald Hall’s “Exile,” a poem that presents the double bonus of being a few syllables shorter than Auden’s and having a draft history of dramatic excision: Hall initially composed and published the poem in a hundred lines, of which ninety-four were eventually trimmed:

A boy who played and talked and read with me
Fell from a maple tree.

I loved her, but I told her I did not,
And wept, and then forgot.

I walked the streets where I was born and grew,
And all the streets were new.

We don’t know whether the boyhood friend survived his fall. But we do know this was a friendship of an especially fertilizing sort for a budding poet: a bond fusing the warmth of natural boyish amity to the pleasures of shared literary observation. Then, in stanza two, a girl materializes. The romance that evolves is clearly puppy love, with the ephemerality of its kind. Yet its one-time intensity turns out to be haunting: the sort of thing you wind up, years later, writing a poem about.

The third stanza echoes in my head whenever I find myself wandering around the old Detroit neighborhoods of my boyhood. Even those blocks that have escaped either renovation or the wrecker’s ball, the ones where the houses look the same, have become different blocks and houses. The change is within, like some reworking of cornea and retina; over time, you can’t help seeing with new eyes.

Together, the three stanzas provide a spare but secure ligature, binding up a man’s years. And each stanza is more effective for its narrowing shift from pentameter to trimeter in its second line. Things feel curtailed—as though the poet’s words, cut short, are dwindling away in the air.

The poem is a lovely example of a familiar, maddening, ever-alluring paradox. The poet seems to be arriving at something significant, and we’re following him there. You’re approaching a riddle, closer and closer, until suddenly it looms before you, the arc of your existence—your life! And now there’s everything to say. But the revelation occurs in a place where—concision’s vanishing point—you have no language left at all.

 

Is There A Place For The Mind In Physics? Part I

By Adam Frank
NPR

So I want you to do something for me. I want you to think of a blue monkey. Are you ready? OK, go! Visualize it in your head. Any kind of monkey will do (as long as it’s blue). Take a moment. Really, see the little blue dude! Got it? Great. Now, here is the question: Where did that thought fit into reality? How was it real? Where was it real?

Another way to ask this question is: Was the “blue monkey thought” just the electrical activity of your neurons? Was that all there was to it? If not, might your private internal screening of the blue monkey be something altogether different? Was it, perhaps, part of something just as fundamental as quarks and Higgs bosons?

This is the fundamental question behind philosopher Thomas Nagel’s controversial book, Mind & Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False. I’ve been slowly making my way through Nagel’s short (though, at points, dense) volume for a few months now. Back in October our own most excellent philosopher of Mind, Alva Noë, presented his own take on Nagel’s work. Yesterday, Tania Lombrozo extracted some real-world questions out of Nagel’s philosophy. Today I want to begin thinking a bit about what and where the Mind might be in relation to my own science of physics.

Before we go any further, though, we have to deal with Nagel’s subtitle, which seems like the bad advice of a publisher intent on pushing sales. Nagel is a self-proclaimed atheist and is not using this work to push a vision of a Deity into the debate about the nature of reality at a fundamental level. His arguments are, for the most part, those of a philosopher steeped in philosophical tradition, laying out an argument that the Mind has its own unique place in the structure of the Universe.

Still, Nagel bears his own share of blame for the over-heated title. In the early chapters of the book he attempts to cast doubt on the traditional Darwinian account for both the origins of life and the development of species. In these parts of the book he seems out of his league, relying on intuition rather than argument. Recently I’ve been reading the literature on the Origins of Life, and the basic arguments, including the balance of probability and timescales, seem pretty clear and pretty honest (for an in-depth response to Nagel from a biologist, see Allen Orr’s cogent review in The New York Review of Books).

Thus Nagel’s arguments against Darwin in these domains appear to be a kind of wishful thinking invoked to support the next, and most important, step in his thinking: the appearance of the Mind in the Cosmos.

Nagel brings both intellectual heft and clarity to this question. His famous 1974 article “What Is It Like to Be a Bat?” is a masterpiece of argument against the reductionist view that Mind is nothing more than an epiphenomenon (word of the day!) of neural activity. Instead, Nagel argued that there is a vividness to the internal experience of consciousness that cannot be reduced to an external account of matter and motion (i.e., neurons and electrical currents). Nagel’s argument served as the springboard for the philosopher David Chalmers’ equally famous and influential discussion of “The Hard Problem of Consciousness.”

In that work Chalmers was unrelenting in distinguishing “easy” from “hard” problems in the study of the Mind. Easy problems, Chalmers said, were things like the intentional control of behavior or accessing the information in a system’s internal states. Many of the easy problems are, of course, still quite hard but, in essence, seem to be computational in nature.

The Hard Problem, in contrast, is all about the luminosity of experience, the private phenomenal realm so vivid to us all. From the Hard Problem’s perspective, all the progress made in watching which parts of the brain light up in those famous MRI studies doesn’t tell us much about consciousness. Instead, those studies only teach us about the neural correlates of consciousness. The neurons firing and the internal experience are two different things, even if they are correlated. Most importantly, the correlates don’t really shed light on the central mystery of why, what, where and how the Mind arises, which are all questions Nagel really cares about.

For Nagel the answer to those questions will not be found in any of the sciences as they are presently constructed. In fact, he is so convinced that the Mind has a fundamental place in reality on its own that he claims that the reductionist-drenched “materialist naturalism” of modern science must be incomplete. As Nagel puts it:

And if physical science … leaves us in the dark about consciousness, that shows that it cannot provide the basic form of intelligibility. There must be a very different way in which things as they are make sense, and that includes the way the physical world is, since the problem cannot be quarantined in the mind.

That, my friends, is quite a claim. First, Nagel is saying we can’t account for consciousness via the usual reductionist arguments that everything starts with quarks and then leads to Brains (and — ho hum — Minds too). Then he goes farther and claims that this failure infects the entire project of Science.

Now, given that I’m a physicist, you might expect me to slam Nagel for being hopelessly lost in the weeds. The truth is, while I deeply suspect he is wrong, I do find his perspective bracing. Given his atheism, the question Nagel is really asking is stunning: Is there a fundamental place for the Mind in the fabric of reality? In its crudest form the question could be phrased: Might there be some “thing” we need to add to our picture of reality that we don’t have now in order to embrace mind?

This “thing” could be some kind of “consciousness particle” or “consciousness field”. Anyone familiar with Philip Pullman’s His Dark Materials novels will recognize “dust” as exactly this kind of new entity. Nagel would go much further than this kind of construct, however, adding a fundamental teleology — direction — to the development of the cosmic order that must come with the Mind.

It’s also worth noting at this point that Nagel is going far beyond Emergentism of the kind our dear co-blogger Stuart Kauffman has championed. From Nagel’s point of view, consciousness did not emerge from the behavior of simpler parts of the Universe. Instead, the Mind is as elemental a part of the Cosmos as the fabric of Space-Time.

Now, as 13.7 readers know, I am no fan of reductionism. In its grandest claims, reductionism tends to be more an affirmation of a faith than a tenable position about ontology (what exists in the world). However, as a physicist I am more prone to the Emergentist position because it requires a less radical alteration of what we believe does exist out there. Nagel’s view asks for such a dramatic reworking of ontology that the evidence had better be just as dramatic and, so far, it isn’t.

Still, once I got past Nagel’s missteps on Darwin, I found his arguments to be quite brave, even if I am not ready to follow him to the ends of his ontology. There is a stiff, cold wind in his perspective. Those who dismiss him out of hand are holding fast to a knowledge that does not exist. The truth of the matter is we are just at the beginning of our understanding of consciousness and of the Mind.

Think about the difference between Galileo’s vision of “the real” and Einstein’s. At this point in our study of the Mind, are we really so sure of what can, and what cannot, be simply dismissed? Nagel may ultimately be wrong, but he is correct in articulating one limit in the range of what might possibly be right.

Mexico’s masked vigilantes defy drug gangs — and the law

By Nicholas Casey
The Wall Street Journal

AYUTLA, Mexico—Masked men, rifles slung over their shoulders, stand guard on a lonely rural road, checking IDs and questioning travelers. They wear no uniforms, flash no badges, but they are the law here now.

A dozen villages in the area have risen up in armed revolt against local drug traffickers that have terrorized the region and a government that residents say is incapable of protecting them from organized crime.

The villages in the hilly southern Mexican state of Guerrero now forbid the Mexican army and state and federal police from entering. Ragtag militias carrying a motley arsenal of machetes, old hunting rifles and the occasional AR-15 semiautomatic rifle control the towns. Strangers aren’t allowed entry. There is a 10 p.m. curfew. More than 50 prisoners, accused of being in drug gangs, sit in makeshift jails. Their fates hinge on public trials that began Thursday when the accused were arraigned before villagers, who will act as judge and jury.

Crime is way down—for the moment, at least. Residents say kidnapping ceased when the militias took charge, as did the extortions that had become the scourge of businessmen and farmers alike. The leader of one militia group, who uses the code name G-1 but was identified by his compatriots as Gonzalo Torres, puts it this way: “We brought order back to a place where there had been chaos. We were able to do in 15 days what the government was not able to do in years.”

Yet a few shaken townspeople in Ayutla, the area’s primary town, have stories of being arrested and held for more than a week before being deemed innocent and released. And one man was shot dead trying to escape the masked men at a checkpoint.

Village justice has long been part of life in rural Mexico. Now it’s playing a growing role in the country’s drug war. Across Mexico, from towns outside the capital to along the troubled border with the U.S., mobs have lynched suspected drug traffickers and shot those accused of aiding them. Last year a logging town in a neighboring state took up arms when traffickers of La Familia Michoacana, a drug cartel, attempted to lay claim to their forests.

The uprising around Ayutla, a two-hour drive from the resort city of Acapulco, differs from the others because it has started to spread locally. In the past two weeks, bands in six other towns in Guerrero state have declared vigilante rule, including in Iguala, a city of 140,000. In nearby Jalisco state, groups say they are considering similar action.

Some government officials are even siding with the militias, for now. Guerrero Governor Ángel Aguirre has met with the vigilantes and says state law gives villagers the right to self-rule. Ayutla’s mayor, Severo Castro, says he welcomes the new groups. On a recent evening, he pointed toward a checkpoint blocks away and said the town is nearly crime-free for the first time in years.

“There are two police departments now,” he said. “The ones in uniform and another masked one, which is much more brave.”

That sentiment seems to be shared even among local police, who are still technically on duty but who now seem limited to the role of directing traffic around the central square, leaving the rest of the patrolling and police work to the militias.

Police Commander Juan Venancio, a broad-faced middle-aged man with a mustache, said local police are too afraid of organized crime to make arrests.

“We could arrest a gangster for extortion, but if we couldn’t prove it, we’d have to let him go,” he said. “But then what about our families? Do you think we’re not scared they will take revenge on us if they are out? Of course we are scared.”

In some ways, life is getting back to normal here after years of insecurity. Village rodeos attract young cowboys and girls in traditional dresses, and weddings stretch late into the evening. The same townspeople who were once extorted by drug gangs now bring melons and tamales to the militiamen standing guard at checkpoints.

Suspicion of the government and outsiders runs high here. During a visit by The Wall Street Journal last week to the nearby hamlet of Azozuca, rumor spread that the reporter’s car was bringing state human-rights officials. An angry, stick-wielding mob of about 150 blocked the only road into town and didn’t allow the reporter to enter.

“Get out of here! Don’t take another step!” yelled a woman waving a wooden bat.

Remote villages in Guerrero, one of Mexico’s most independent regions, had long complained that too few police looked after their towns. In 1995, the state passed a law allowing towns to form “community police” groups that worked much like neighborhood-watch organizations, permitting the groups to detain suspects and hand them over to authorities. But the laws didn’t allow the groups to pass judgment on those accused.

By 2006, Mexico’s drug war had begun to weaken its already-troubled institutions. Areas like Mexico City remained under tight control, but the power of the state in rural areas diminished. Some 65,000 Mexicans have been killed since 2006, but only a fraction of the killings have been solved—or even investigated, according to the government and legal experts.

“Mexico has a 2% conviction rate, and Mexicans have taken note of that,” says Sergio Pastrana, a sociology professor at the College of Guerrero who has studied rural regions. “It’s caused unrest and a determination among some to take the reins themselves.”

Villagers in Ayutla say the town was never crime-free—bandits sometimes robbed horsemen riding the road, for example—but the specter of organized crime was something new.

Several years ago, a group known by villagers as Los Pelones—literally, the Bald Ones—entered Ayutla and began a racket that included drug dealing and other crimes, people here say.

Mr. Castro, the mayor, says his 19-year-old daughter was kidnapped two years ago and he paid a “large sum” for her release. Last July, the body of the town’s police chief Óscar Suástegui was found in a garbage dump outside town. He had been shot 13 times. Authorities said it looked like the work of a criminal group. No arrests were made in either case.

Townspeople say Los Pelones moved into extortions last year, demanding protection money from those who ran stalls in the market adjoining Ayutla’s central plaza. The payments were usually 500 pesos, or $40, a month per stall, according to several vendors, a large sum in the impoverished town.

As harvest season approached last fall, the group fanned out into the countryside, demanding monthly payments of 200 pesos, about $16, for each animal that farmers owned. Several farmers say the gang made a list of those who had agreed to pay and those who had not.

In November, a spate of kidnappings began. Gunmen in the village of Plan de Gatica captured the village commissioner, a kind of locally elected mayor, along with a priest in a nearby village who had refused to pay extortion fees for his church. A second commissioner was kidnapped in the village of Ahuacachahue in December. The three men eventually were released after ransoms were paid, villagers say.

When a village commissioner named Eusebio García was captured on Jan. 5, several dozen villagers from Rancho Nuevo grabbed weapons and formed a search party. The next morning, they found Mr. García in a nearby house with his kidnappers, who were arrested and jailed, say the militiamen.

“This was the turning point, the moment everything exploded here,” says Bruno Placido, one of the leaders of the armed groups. “We had shown the power armed people have over organized-crime groups.”

As word spread of Mr. García’s release, farmers in villages around Ayutla also took up arms. Their plan: to descend into Ayutla, where they believed the rest of the Los Pelones gang was based. That night they raided numerous homes throughout Ayutla, arresting people they believed to be lookouts, drug dealers, kidnappers and hit men, and brought them to makeshift jails. Other villagers set up checkpoints across the town.

The vigilantes were now in charge. They instituted the curfew and declared that state and federal authorities would be turned away at checkpoints. Villagers were allowed to make accusations against others, anonymously, at the homes of militiamen.

The group ordered most schools shut down, saying Los Pelones might try to take children hostage in exchange for prisoners detained by the vigilantes.

“I hadn’t seen anything quite like this before,” says state Education Secretary Silvia Romero, who traveled to Ayutla after the initial uprising to negotiate for classes to resume. Some teachers agreed that suspending school was necessary until all top gang leaders were under lock and key. “The students were an easy target for the criminals,” says teacher Ignacio Vargas.

Many schools have since reopened. The army, after negotiations, set up a checkpoint at the entrance to the region. Beyond that, the militiamen remain in control and no state or federal officials are permitted to enter the villages around Ayutla.

Townspeople interviewed recently said the masked men are ordinary farmers and businessmen, not rival criminals looking to oust Los Pelones. The mayor agrees. Still, Mr. Torres, the lead militiaman in Ayutla, acknowledged the risk of “spies from organized crime coming into our ranks.” He said he encourages his men to turn in anyone seeking to join the vigilantes who might be linked to crime groups.

The militias are moving beyond the drug gangs to other alleged crimes and, in the process, are revealing some of the pitfalls of village justice.

On a recent day, two pickup trucks filled with masked men pulled up carrying bar owner Juan de Dios Acevedo. They alleged that Mr. Acevedo, 42, had been involved in the rape of a local woman. One of them pulled a shirt over his head while another bound his hands with rope. His mother and sister comforted him and cried.

As he was being bundled into one pickup, his mother fetched signed papers from the local prosecutor’s office that said he had already been arrested for the same crime, and cleared by prosecutors. “This is a false accusation, and now I’ve been arrested for the second time,” Mr. Acevedo protested.

The vigilantes were unmoved and took him away for questioning. Later that day, he was released unharmed.

A makeshift detention center run by villagers in El Mezón is home to two dozen men and women accused of being with Los Pelones. There is no budget to run the prison, villagers say. The prisoners eat donated tortillas and rice and sleep on cardboard on the floor. On a recent afternoon, seven men were clustered behind bars in a tiny, dark room that smelled of urine. It was hot and dirty. There were no visible signs of physical abuse.

The masked commander of the facility, who wouldn’t give his name and declined to allow interviews with the prisoners, said the men are being treated well and will be given a chance to defend themselves in a public trial in the village. They won’t be allowed lawyers, he said, and villagers will decide their sentences by a consensus vote.

Possible punishments include hard labor constructing roads and bridges in chain gangs, he said, although it will be up to the villagers, not the militia, to decide. He added that executions, which are not permitted under Mexican law even in murder cases, were not on the table.

“The village will be their judge,” he said. “If the village saves you, you will be free. If not, then you are condemned.”

Nightly raids of suspected drug traffickers have provided the militiamen with a clutch of high-powered weapons, including AR-15 rifles. It isn’t clear how the men will be trained to use the weapons.

On Jan. 6, the night the checkpoints were erected, a man named Cutberto Luna was shot dead by the vigilantes, state authorities say. Mr. Torres, the Ayutla militia commander, says the man refused to stop at the checkpoint and opened fire on the men standing guard, who responded by firing back. He also alleges Mr. Luna was a “known leader of organized crime.”

Members of Mr. Luna’s family couldn’t be located for comment. The state prosecutor’s file on the case says Mr. Luna was a local taxi driver. The file makes no mention of organized-crime ties. No arrests have been made in the killing.

On a recent day, a group of militiamen in the village of Potreros discussed what lay ahead. A rancher in a nearby town was thought to have collected extortion money on behalf of the criminal gangs. Several militia members wanted to organize a raid to take back the money, then use it to buy ammunition. The men also discussed the merits of shooting on the spot criminals they believed to be guilty rather than taking them to village courts.

A vendor in the Ayutla town plaza is glad to have faced neither fate. He spent 14 days in the El Mezón jail but was released on Jan. 21, he said. The vendor said he was accused of helping an organized-crime member. In fact, he said, he was simply paying his 500 peso weekly extortion fee. He wasn’t harmed in detention, he said, but got sick after he was given dirty water from a nearby pond to drink.

“Clearly I wasn’t on the side of the bad guys,” he said. “Still, I went to jail. The kind of psychological damage this does is great. Now I’m afraid they’ll come back for me and cut off my finger or gouge out my eye.”

Suicide Watch

By Freya Johnston
the-tls

The gin-crazed girl commits suicide - George Cruikshank

On September 9, 1756, Edward Moore’s journal The World published an eight-page letter from a gentleman in distress. Having abandoned legal, military, and authorial careers, encumbered with debts and much-loved dependants, “John Tristman” now found himself “daily contending betwixt pride and poverty; a mournful relict of misspent youth; a walking dial, with two hands pointing to the lost hours”. This melancholy account takes a surprising turn when the writer begins to divulge his bold new money-making scheme. Tristman is convinced he can put a stop to the vulgar, messy suicides for which the English have become infamous. People who live in London but have somehow tired of life need no longer trust to chance. Now, they may repair to his stylish, centrally located suite of apartments and end their lives “decently as well as suddenly”. For the disappointed lady, Tristman offers a spacious bath in which to drown “with the utmost privacy and elegance”. Despairing actors can take their pick of daggers and poison. Soldiers will conveniently discover “swords fixed obliquely in the floor with their points upwards”.

“The Receptacle for Suicides”, as Tristman dubs his voguish idea, is a Swiftian institution: utterly outrageous and thoroughly plausible. In offering to make the business of self-destruction both private and classy, Tristman takes it indoors and smothers it with euphemisms, of which “sudden death” was among the most popular in the mid-eighteenth century. It befits a cutting-edge projector to refer, in conclusion, to his would-be clients as “suicides”, a fairly unusual term when The World’s satire was published. The Oxford English Dictionary dates “suicide”, in Tristman’s sense of “One who dies by his own hand”, back to 1732, again in a journalistic context. “Suicide” in the sense of “self-murder” is in use decades earlier, and appears to be Thomas Browne’s coinage.

As Kelly McGuire points out in Dying To Be English: Suicide narratives and national identity, 1721–1814, the word has a vexed history. Deploying a pronoun as a prefix in order to describe both an action and a person (a person who is at once victim and perpetrator), it is something of a botched job. The convolutions and impenetrability of the term seem appropriate to a deed which many understand as the consummate rejection – of life, family and community, as of social and religious obligations – although one lesson of all the books under review is that suicides themselves, actual and imagined, tend not to see it that way. Many of the ballads reproduced in The History of Suicide in England, 1650–1850 depict lovers killing themselves in the confident hope of forgiveness and a place in heaven, as of avoiding shame and misery on earth. And even the most hard-line of religious commentators will hesitate to condemn all suicides to hell: as the Calvinist preacher Thomas Beard wrote in 1631, “the mercie of God is incomprehensible”. Overall, there is much evidence of what John Donne called “a perplexitie and flexibilitie in the doctrine” of suicide.

Gradually replacing more overtly judgemental epithets such as “self-murder”, “suicide” became a familiar word in England in the later eighteenth century. Perhaps the availability of a neutral form of language influenced how people thought about voluntary death; there is a relic of the older way of describing it in current references to “self-harm”. It is sometimes argued that apparently more tolerant and sympathetic attitudes to suicide, as to other infractions of the moral law, developed in the eighteenth century as the result of a progressive secularization. But religious as well as civil sanctions against the act persisted, in Britain and in the American colonies – only in Pennsylvania was voluntary death not criminalized – and those official sanctions are not incompatible with sympathy.

A coroner’s pronouncement of suicide (felo de se) resulted in forfeiture of the deceased’s goods and property to the state, often leaving any surviving relatives destitute. So the increasingly common verdict of temporary insanity (non compos mentis) may suggest a change in how people understood the act of self-destruction: no longer construed as a demonic temptation, it came instead to be viewed as a symptom of lunacy. On the other hand, the prevalence of non compos mentis determinations in the coroner’s courts may reveal a pragmatic wish to safeguard cash and property for the living. The two possibilities are not mutually exclusive; it seems likely that people have always been in more than one mind about suicide.

In “Frederic and Elfrida”, Jane Austen’s early novelistic skit (dating from the late 1780s or early 90s), “the lovely Charlotte” finds herself agreeing to marry a handsome stranger within moments of having consented to become the wife of a rich old man. The next day, “the reflection of her past folly, operated so strongly on her mind, that she resolved to be guilty of a greater, and to that end threw herself into a deep stream which ran thro’ her Aunts pleasure Grounds in Portland Place”. The combination of a suggested mental disorder (folly operating strongly on the mind) and cool calculation (“she resolved . . . to that end”) is characteristic of a period in which suicides are presented, by turns, as helpless lunatics and rational agents. The first view makes them not responsible for their actions; the second renders them potentially culpable. After 1823, the bodies of suicides could be interred in consecrated ground and the ritual humiliation of their corpses was officially prohibited. But suicide remained a crime in England until 1961.

As Dignitas, the Swiss right-to-die association, notes on its website, the majority of suicide attempts fail – although a failure in this context might also be counted a success. It is odd to think how many people were, and are, survivors of themselves: part of the OED’s definition of “suicide” is “One who . . . has a tendency to commit suicide”. If you try and fail to perpetrate self-murder you are, technically speaking, a “suicide”. By contrast, in order to qualify as any other kind of murderer, you need to have killed someone outright. In We Shall Be No More: Suicide and self-government in the newly United States, Richard Bell movingly emphasizes the sometimes clumsy efforts of American asylums and humanitarian societies to care for those who tried to kill themselves, but who lived on for days, weeks, or years afterwards. Bell never forgets that suicide is about individuals and the persistence or recovery of their stories. His arresting, involving work on the young American republic brings out the farcical and tragic aspects of suicide. It also reveals a healthy suspicion of commentators in all periods who lament the helter-skelter decline of manners and morals, whether due to changes in legislation or reading habits.

Young, romantic, foolish and idle, consumers of prose fiction in eighteenth-century Britain and America were thought especially vulnerable to suicide; McGuire and Bell chart their sometimes fatal adventures in the realms of sensibility. Critics in both nations liked to whip up alarm at novelists’ apparent contempt of familial and social duties. But fictional tales of unguarded passion, culminating in suicide, may or may not demonstrate what McGuire identifies as “a death drive at work at the level of narrative”. After all, to kill your characters off is one handy way to wind up a plot, especially if you’re trying to avoid the stock conclusion of marriage.

Disputes about suicide have always turned on conceptions of what is natural and unnatural. Do human beings instinctively seek to preserve their own lives? Or is the desire to terminate our existence native to our character? We live in a tangled and immoral world full of death, the most definitive of many injurious agents working against our survival and well-being. How is it possible to insinuate sense and meaning into such a realm? In the attempt to do so, suicide may seem in one mood or context absurd; in another, the only sane way out. John Donne’s Biathanatos, a seminal, heterodox text in the history of voluntary death, was written in 1608 and first published posthumously in 1644 (the work later inspired both Thomas De Quincey and Jorge Luis Borges). Because The History of Suicide in England takes as its start date 1650, it does not include Biathanatos, although the work was reprinted several times after 1644 and many of the authors included here are rebutting, sometimes point by point, what Donne says (or appears to say) in qualified vindication of self-homicide. This arrangement leads to awkwardness: the General Introduction, by Mark Robson, and the introduction to Volume One have to offer a synopsis of Donne’s argument, and substantial quotations from it, in order to make sense of much of what follows.

Donne’s omission raises a more serious issue with The History of Suicide. What are its basic principles of selection and rejection? Why are the dates 1650–1850 chosen? The General Introduction is hard to follow on this score, arguing that the “years around 1650 do broadly represent a change in thinking about suicide . . . at least in terminology”. The main reason for deciding to begin in 1650 appears to have been that the OED offers a text from that decade as the first known example of the word “suicide” in English. Yet the editors of The History of Suicide have themselves found earlier instances than this, and the terminology is susceptible of many different interpretations, one of which would be that no real change occurs in thinking about voluntary death, even if a new word comes into play.

The History of Suicide is a generically wide-ranging collection, embracing letters, ballads, tracts, depositions, refutations, broadsheets, statistical inquiries, social criticism, individual case studies, and so on. Texts are reproduced in typeset rather than in facsimile form; there is a general essay introducing the edition as a whole, and each document is supplied with a headnote and some annotation at the back of the book. The editors seem, as far as it is possible to judge from the introductions to each volume, to have imagined their work primarily as a storehouse for cultural historians. But those researching the history of suicide would be better off foraging in libraries and electronic databases for themselves. Nothing is said in the preliminary matter about textual or editorial policy. Original page breaks within the copy-texts seem to be indicated by a slash (/), although this is not stated anywhere.

Spot-checks of the primary material against printed and online copies of early editions are discouraging, and suggest that mistakes have been introduced. There are recurrent glitches in punctuation, and a difficulty with apostrophes used to indicate the omission of a letter or letters, which routinely appear the wrong way around (thus ’tis is wrongly rendered ‘tis, ’mongst as ‘mongst and so on). In the course of ten pages of the first volume’s excerpt from Owen Stockton’s Counsel to the Afflicted (1667), the full-text version of which can be downloaded from Early English Books Online, a misprint in the copy-text (“aj” for “a”) has been retained, and new errors have been added, all of which introduce some degree of interpretative confusion. It is hard to see how the high price of The History of Suicide can be justified – given its limited explanatory apparatus, and the ease and speed with which most of its texts may be consulted online, for free or through institutional subscription – if it does not even reproduce its source material accurately.

Suicide can seem to express a heroic self-sufficiency. Cultivating immunity to the ills of this world is bound up with the freedom to destroy oneself when those ills nonetheless, inevitably, attack. David Hume’s chilly essay “On Suicide” (1783) justifies self-destruction on the basis that duty to ourselves supersedes all other obligations. No man would kill himself if his life were worth living, argues Hume, and those who elect to commit suicide when they have become a burden to others set an example that is worthy of imitation. Besides, the natural world is resilient and adaptable: accidents happen, and suicide is one of countless temporary disruptions to the order of things. Seen from this perspective (but what human being really can see from this perspective?), the loss of an individual life “is of no greater importance than an oyster”. To speak and think thus is to ignore the counterargument of the faithful that suicide constitutes a sin, an act of rebellion against God’s sovereignty and those around us: as is said in Walter Scott’s Redgauntlet (1824), “Despair is treason towards man, / And blasphemy to Heaven”. Human beings are created by and dependent on a non-human maker. The Christian virtue of prudence therefore involves guarding the life that does not belong to you, and cannot be yours to dispose of. Voluntarily severing the bond that joins soul with body is to sever a tie with God. As for oysters, “Are not two sparrows sold for a farthing? and one of them shall not fall on the ground without your Father. But the very hairs of your head are all numbered. Fear ye not therefore, ye are of more value than many sparrows” (Matthew 10.29–31).

Numerous believers have made themselves desperate by nursing a sense of their own unique culpability. This kind of suicidal despair – convincing oneself that one is permanently cast out from the possibility of forgiveness – is terrible to read about. Take William Cowper. He was destined for the law, a profession for which, due to his morbid fear of public speaking, he was wholly unsuited. The prospect of being examined in 1763 at the bar of the House of Lords drove him to a series of frantic measures. About a week before the examination he bought a half-ounce of laudanum. Unable to consume the fatal dose, he thought of escaping to France. He resolved to drown himself, then tried to stab himself with his penknife, and finally hanged himself with a scarlet garter which broke just as he lost consciousness. On coming to, he heard the sound of his own groans and assumed he was in hell. A period of bitter misery ensued; Cowper attempted suicide on at least one further occasion. But conversations with his brother and chance readings in the Bible began to chip away at his certainty that he was the helpless prey of a furious, vengeful God. On July 26, 1764 he picked up a Bible and opened it, randomly, at Romans 3.25: “Whom God hath set forth to be a propitiation through faith in his blood, to declare his righteousness for the remission of sins that are past, through the forbearance of God”. In an instant, Cowper found strength to believe in the redeeming power of Christ, and was lost in tears of grateful ecstasy.

Cowper regretted his birth “in a country where melancholy is the national characteristic”, and admitted he had often wished himself a Frenchman. The French themselves apparently referred to suicide as death “à l’Anglaise – according to the English fashion”. The World’s John Tristman was one of many writers at home and abroad to link the English temperament with suicidal tendencies. In 1738, a journalist (possibly Samuel Richardson) claimed that suicide was England’s “new Religion”. Melancholy seemed to infect everyone and everything: even a sedan chair, narrating the history of its life and adventures in London in 1757, admits that it has flirted with self-destruction: “Since my reparation, I have . . . had a very particular dejection of spirits. Whether I am almost tired of a foolish and ridiculous world, I can’t tell . . .”. Two decades later, the Abbé Millot could still remark on “that rage of suicide, whereof England affords so many examples”. The English, he claimed, grow weary of existence “upon principle”. A national proclivity for self-murder was, perhaps, the inescapable counterpart of wealth, leisure, liberty and refinement. Spending power, and the freedom to think, generated variety and originality – hence, the argument ran, the surfeit of excellent English authors. But such benefits also encouraged, as in ancient Rome, effeminacy and madness. And then there was the weather, often presented as a fatal agent in the “English Malady”.

Many eighteenth-century writers argue that trade supports human virtue. Yet trade, reliant on slavery, also generates luxury and the kind of enervation associated with melancholy. Poor people conveniently lacked the time and imagination to kindle suicidal thoughts into action; they were too busy working. A truly aristocratic temperament, on the other hand, was inherently proud and self-destructive, doomed to squander its tremendous gifts and resources. One “well born” correspondent summed the position up with exquisite absurdity in The World, again in 1756: “I grew to think that there was no living without killing oneself”.

Samuel Johnson’s Dictionary of the English Language (1755) condemns the act in its definition of “SUICIDE”: “Self-murder; the horrid crime of destroying one’s self”. Beneath this explanation Johnson cites, or rather slightly adapts, Samuel Richardson’s heroine Clarissa writing against voluntary death in the same morally offended register as Johnson’s: “To be cut off by the sword of injured friendship is the most dreadful of all deaths, next to suicide”. Clarissa seems to be assuring us that her own end couldn’t be further removed from such a fate. And yet, as McGuire points out, she also appears to be a textbook suicidal anorexic, persistently fasting after Lovelace’s rape and thereby conferring on herself a slow death that allows her to dispose of her property and execute her last wishes bit by bit. Clarissa persists in her resolve despite a warning from Lovelace’s former mistress, Sally, who says: “Your religion . . . should teach you that starving yourself is Self-Murder”. Yet Clarissa’s end is also that of an exemplary Christian, attended by many affirmations of faith and intimations of immortal glory. She murmurs “O death, where is thy sting?” and “come – blessed Lord – JESUS” as she dies. Richardson comments in the postscript that anyone “earnest in their profession of Christianity will rather envy than regret the triumphant death of CLARISSA”. But that triumph echoes the last days of Cicero’s friend and correspondent, Pomponius Atticus (110–32 BC), who “willingly famished himself to death”, in the words of one seventeenth-century pamphleteer, “and could not be disswaded from so doing by prayers and tears of his nearest and dearest allies and friends”.

Too keen an attachment to food might also amount to an appetite for death. Did Johnson’s friend, the gluttonous brewer Henry Thrale, kill himself through overindulgence? Johnson seems to have thought so, commenting shortly before Thrale’s death in 1781 that “such eating is little better than Suicide”. His wife, Hester Thrale, found these words “remarkable”, but judged it best to say no more. The physician George Cheyne, himself a lifelong dieter whose weight had peaked at 32 stone, argued in 1745 that “he that wantonly transgresseth the self-evident rules of health, is guilty of a degree of self-murder, and a habitual perseverance therein is direct suicide”. Thrale, like Clarissa, had persevered and defied the entreaties of friends and family.

Neither of these deaths is quite in line with the widespread modern aspiration to die with dignity (an aspiration often cited in debates about assisted suicide). But is a dignified exit from this world any more possible or desirable than John Tristman’s drawing-room vision of expiring decently and elegantly? Before the twentieth century, public discussions of voluntary death were not dominated by arguments about whether people ought to be kept alive for years in a condition such as that of locked-in syndrome, although the syndrome itself is nothing new: Noirtier de Villefort in The Count of Monte Cristo (1844) is described as a living corpse, and communicates via ocular movement alone. Yet it is immediately obvious from the painful narratives of love, madness, poverty, crime, violence, degradation and slavery included in the books under review that people have always longed to be allowed to do what they wanted with their own lives and bodies, and many have concluded (in Donne’s words) that: “I have the keyes of my prison in mine owne hand, and no remedy presents it selfe so soone to my heart, as mine own sword”. The suicidal desire for freedom whispers to us that we have the means in our power to end our own misery, perhaps even that it is a mark of courage and honour to do so.

Where, then, can we find comfort? What can we do to escape ourselves? Robert Burton recommended in the closing lines of his Anatomy of Melancholy (1621) that we should “Be not solitary; be not idle”, advice which Samuel Johnson carefully adapted for “disordered” men such as James Boswell: “If you are idle, be not solitary; if you are solitary, be not idle”. The end of the Samaritans’ information page on self-harm urgently communicates the same message as that of the first full-length treatise on suicide published in English, John Sym’s Lifes Preservative Against Self-Killing (1637), and it can’t be said often enough: “There is always hope. There is always help”.

The Making of a Philosophy Professor

By John Kaag
chronicle

“The unexamined life is not worth living.” What Socrates failed to tell us is that the examined one isn’t a whole lot better.

So he wasn’t the wisest of all men. Or if he was, he was a patronizing jerk. When I grew up, I thought to myself, I wouldn’t be a patronizing jerk. I’d tell people straightforwardly, without irony or obfuscation, what a pathetic ruse life was. I’d tell them that living was a euphemism for dying slowly, that life was an incurable disease that was ultimately fatal. So what if I was only 12?

This is what happens when your older brother, home from university, leaves his copy of Plato’s Apology on the back of the toilet. He goes on to become the doctor that he’s supposed to. And you become a philosophy professor.

I’m sure that I wasn’t alone in my understanding of life’s meaninglessness, but I remember being surprised that more kids didn’t seem affected by it. Maybe they, like Socrates, just didn’t want to talk about it. Did they not experience the monotony of class and lunch, class and lunch, day after day? Did they not experience recess as a sadistic lie? Sadistic because it was either too painful or too short (you pick), and a lie because it was meant to provide some respite from the monotony. If they did, they weren’t saying.

I sort of hoped that was the case. I hoped that my classmates silently worried about the bus crashing, or about getting hit by it, or about the monosodium glutamate in their ham sandwich, or about the pig that went into making the ham.

I did.

Bedtime involved an extended ritual that had to be performed with extreme care—a type of penance for the pig and everything else I felt guilty about. Exactly six—not five or seven—trips to the bathroom had to be made. Three ice cubes had to be placed into one glass of water, which was placed on one white washcloth on the right bedside table. One set of eyeglasses had to be set on that table and pointed toward the door. And that door had to be propped open exactly two inches (the width of my 12-year-old hand). That measurement had to be checked at least twice, although one was allowed to check more often, depending on what he had eaten for lunch.

I hoped that this ritual would somehow keep my universe intact. That hope, on most nights, let me get some sleep.

Let me be clear: I had a very pleasant childhood. My anxiety did not have any particular cause, which amounts to saying that it was true anxiety. Sure, my father drank too much (I am not divulging any secret here) and was generally negligent (again, not a secret), but he left when I was 3. So let’s not blame him. That would be too easy. Even at 12, I knew that no discrete situation could warrant the fear and trembling of my bedtime ritual.

My mother, like any good mother (she was great, by the way), was worried. Indeed, she worried about me almost as much as I did. She worried that my monkey mind and nighttime prowling would leave me tired the next day, and that if I was tired, I wouldn’t be able to make friends, and that if I didn’t make friends, I’d get depressed, and that if I got depressed, I’d lose interest in school, and that if I lost interest in school, I’d never get a job, and that if I didn’t get a job, then I couldn’t have a family, and that if I didn’t have a family, I’d be miserable.

At least her worries were reasonable.

And so she was terrified when I announced—at the age of 15—that I was going into philosophy. She knew me well enough to take me seriously, and philosophy well enough to know that it would not ease my mind. As usual, she was right.

Graduate school taught me two things. First, it taught me that I had been justified in feeling bad about that pig. (Thanks, Peter Singer.) Ham sandwiches would henceforth be placed on a long list of things that merited guilt and penance. Second, it taught me that I could do nothing about the suffering of the world; one could neither adequately atone for one’s existence nor make a meaningful attempt to escape it.

Unless you consider Camus, of course.

“There is but one truly serious philosophical question, and that is suicide. Judging whether life is or is not worth living amounts to answering the fundamental question of philosophy.” If the answer to that question seems obvious to you, you’re lucky: You’re not a philosopher. Camus postulates only two possible answers, neither of which is much fun. He takes as a given that we live in a world that is completely indifferent to our human purposes.

For a long time, I was inclined to answer Camus in the negative: Life in a meaningless world was not worth living. But I realized that such a conviction would have pesky consequences. Answering in the affirmative was no walk in the park, either. It meant that you affirmed that life was an incurable disease, but that you had decided, resolutely and freely, to suffer through it.

Resolutely and freely. As a rule, philosophers are not dauntless. They talk a big game but are generally too worried about screwing up to actually do much of anything. Deciding to act resolutely and freely, therefore, is probably the least philosophical thing I’ve ever done. No, that isn’t quite true. Acting resolutely and freely is the least philosophical thing I’ve ever done.

So I stopped eating ham sandwiches. Maybe becoming a vegetarian doesn’t seem that significant to you, but it turns out that a little resolve can go a long way.

It also turns out that ham sandwiches come in many forms—living in an unhappy marriage; the desperate attempts to meet the expectations of friends, family, and colleagues; the impossibility of meeting your own. Of course, sometimes a ham sandwich is just a ham sandwich.

In any event, I stopped eating all of them.

When anxiety leaves you, or you decide to leave it, it’s very much like losing a certain kind of old friend—one whom you have come to hate. You still remember it in vivid detail, how it humiliated you, how it kept you up at night, how it wasted your time. But now it is gone. And suddenly you’re well rested, and you have lots of time.

My mother was right. She told me long ago that if I got enough sleep, I could make friends, and if I had friends, I wouldn’t be so depressed, and if I wasn’t so depressed, I’d do better in school, and if I did better in school, I’d get a good job, and if I had a good job, I could have a happy family, and if I had a happy family, I wouldn’t be miserable.

But here’s the thing about not being miserable.

Life is still a pathetic ruse: either too painful or too short. You pick.

Let there be light

By John Banville
ft

Folio 34r: Face of Christ at the top.

The story of the Book of Kells, of the mystery surrounding its provenance and the anonymity of the master scribes and artists who executed it, is a splendid romance. Few emblems of medieval European civilisation have caught the imagination of the international public to the same degree. Every year tens of thousands of visitors to Dublin file through the Long Room of Trinity College to view its intricately decorated pages. The artistry, colour, exuberance and wit that went into the making of this illuminated version of the four Gospels, described in the 11th-century Annals of Ulster as “primh-mind iarthair domain”, “the most precious object of the western world”, are an enduring source of awe and admiration. Here is a spark of brilliant light shining for us out of the Dark Ages.

The “Book of Kells” is probably a misnomer. Certainly the book was kept at Kells, a pleasant town in County Meath some 40 miles northwest of Dublin, from the beginning of the 11th century until it was sent to Dublin in the early 1650s for safe-keeping. It is not known for sure, however, where it originated. The best scholarly guess is that it was composed, or at least that its composition was begun, by Columban monks on the small, fertile island of Iona, off Mull on the west coast of Scotland, around the year 800.

The book, or codex, the more precise term for a manuscript volume, was from the start closely linked with the name of Columcille. The Annals of Ulster, for instance, refer to it specifically as “the great Gospel of Colum Cille”. This saint – Columcille translates as the “Dove of the Church” – is a potent figure in medieval Insular history, that is, the history of Ireland and Britain. He is Christian Ireland’s St George, but without the great-sword, and the pious counterpart, for generations of Irish schoolchildren, of the legendary warrior-hero Cuchulainn.

Columcille, or Columba, as he was known to the Latin-speaking world, was born in 521 or 522 into the aristocratic family of the O’Neills of Tír Connaill, roughly present-day Donegal. At the beginning of the 560s he travelled with 12 companions, in echo of Christ and his apostles, on a mission to Scotland to convert the Picts. In 563 he settled on Iona and founded a settlement there, which was to endure for centuries. Members of the community would go on to set up other monastic houses, including one on the great rock of Lindisfarne in Northumberland, established by the Ionan monk Aidan in 635.

At the close of the 8th century the tranquil life of Iona was violently disrupted by the arrival of the Viking longships. Raids were recorded in 795; in 802, when the settlement was “burned by the heathens”, as the Ulster annalists put it; and in 806, when 68 inhabitants were slaughtered. The following year a part of the monastic community transferred to Kells, where there was a royal hill-fort owned by the southern branch of Columcille’s family. The annals speak of Kells at this time as the “noue ciuitatis” of the Columban monks. The word “ciuitatis” means place of refuge, the circumference of which, according to one historical source, could be measured by an angel with a reed in his hand – it must have been a small fort indeed – which possibly accounts for the depiction in the Book of Kells of a number of angels holding what are probably reeds.

This last is one of many fascinating conjectures put forward by Bernard Meehan in The Book of Kells, a sumptuous – it is the only word for it – volume containing more than 80 pages from the manuscript reproduced full-size and in full, ravishing colour, as well as five comprehensive chapters on the historical background, the elements of the book, the manner of decoration, the work of the scribes and artists involved, and the physical features of the book.

Meehan, head of research collections and keeper of manuscripts at Trinity College, is surely the world’s foremost contemporary authority on the Book of Kells, which he has lived with and worked on throughout his professional career. His new book is a triumph of scholarly investigation and interpretation. Although he maintains an appropriately sober tone throughout, it is clear that he finds this marvellous artefact – indeed, this work of art – as fascinating and compellingly mysterious today as he did when he first set himself to unravelling its secrets some 30 years ago.

One of the endearing features of this version of the Gospels is that it is not particularly accurate. “While the scripts of the Book of Kells have a unique verve and beauty,” Meehan writes, “its text is erratic, with many errors resulting from eyeskip (where the scribe’s eye has jumped from a word to its next appearance, omitting the intervening text or letter).” This calls down a rare but stern professorial rebuke: “There is considerable carelessness in transcription.” Reading this, one’s deplorably feckless imagination wanders back through the smoke of the centuries to that frail little isle afloat in the wild Atlantic, where in a stone beehive hut a lonely scribe, hunched with quill in hand over his sheet of vellum, halts suddenly as he spots a mistranscription, claps a hand to his brow and utters whatever might have been the monastic equivalent of “Oh, shit!”

Folio 200r, written by Scribe C.

Those poor scribes – there were four of them, “prosaically termed A, B, C and D”, as Meehan sympathetically remarks – had their work cut out for them. The Book of Kells was made from 180 calf skins – an indication, by the way, of the comparative wealth of the monastic community, for in those days cows were money – and of the complete work, 680 pages remain, some folios as well as the original binding having been lost or destroyed. The scribes, employing broad quill pens held at right angles to the page, wrote with surprising speed, at an estimated rate of about 180 words per hour; the illustrator-artists, of course, would have worked much more slowly.

An elegant playfulness is evident throughout, with contingencies often being turned into occasions of bravura inventiveness. Meehan points out that to achieve an evenly justified right-hand margin, sometimes the final letter or letters of a word had to be inscribed below the remainder, and offers the example from the bottom of the verso of folio 276, where the final t of the word dixit is placed below the rest of the word, forming a flamboyant cross with the second stroke of the x. Elsewhere, too, necessity offered opportunity. The major illustrated pages would have taken very much longer to execute than script pages, and in order not to delay the process of transcription, the reverse sides would have been left blank to be filled in later with text. “Having to guess how much space to leave on these occasions,” Meehan writes, “the scribes normally erred on the conservative side, knowing that space could be filled with decoration if necessary.” Send in the artists.

Folio 63r: Fly decorates the text where Beelzebub, ‘Lord of the flies’, is mentioned.

The Irish have a weakness for puns, and this is as evident in the Book of Kells as it is in Finnegans Wake, although in the former the puns are for the most part visual, for no monk would think to tamper with the Gospel texts. A delightful example of visual punning occurs on the recto of folio 63. Meehan cites another scholar, George Henderson, identifying an insect on this page as a fly, “consistent with its place in the text, at Matthew 12.24, where the Pharisees say, ‘This man casteth not out devils but by Beelzebub, the prince of the devils’, for the name ‘Beelzebub’ is glossed in the list of Hebrew names in the Book of Kells as meaning ‘having flies’ or ‘Lord of flies’.”

Folio 180r: Text referring to St Peter features the body of a hare, symbolic of timidity.

Matters of doctrinal dispute, too, give rise to the occasional sly squib. The early Irish Church had its differences with Rome, for example on the thorny question of the dating of Easter, which led to a “great dispute” and a resulting Irish hostility towards St Peter, the founder of the Roman Church. According to the Venerable Bede, the Columbans had been persuaded in the early 700s to adopt the Roman method for fixing the date for Easter, but as Meehan points out, evidence in the Book of Kells points to a lingering resentment among the scribes and artists of Iona. Thus on the recto of folio 180, a line of text referring to Peter’s denial of Christ incorporates the figure of a hare, an animal known for its timidity.

Folio 87v: A hare gazes at text about St Peter.

An even more ingenious leporine dig is spotted by Meehan, with equal ingenuity, on the verso of folio 87. “The hare here, forming the S of Simile, gazes back overleaf to folio 87r, where Peter expresses doubt about following Jesus. It is precisely on the other side of the leaf from the pet of petrus, suggesting a deliberate association, to Peter’s disadvantage, between the words petrus and Simile and the animal, Peter being ‘like to’ a hare.” We should look with indulgence upon these harmless sallies. After all, as James Joyce pointed out, the Church of Rome was built on a pun, when Christ chose Peter (Petrus) as the rock (petra) of its foundation.

The Book of Kells is endlessly fascinating, boundlessly inspiring. “For many in Ireland,” Meehan remarks, “it symbolizes the power of learning, the impact of Christianity on the life of the country, and the spirit of artistic imagination.” So it is for many in the world at large, also. You do not have to be a Christian to appreciate the book’s beauty and power, expressing as it does our love of the natural world and at the same time the pathos of our yearning towards transcendence. In the beginning was the word, declares the Gospel of St John, and thereafter, we might add, came the transcribers of that word.

Deciphering the mysterious Voynich Manuscript

By Scott Van Wynsberghe
nationalpost

Its name sounds like the title of a Robert Ludlum thriller, and it has bamboozled generations of spies. An emperor reputedly once owned it, the Jesuits later acquired it and Yale University now has the infuriating thing. For those in the know, all that is needed is to roll one’s eyes and mutter about the Voynich Manuscript, which was discovered (or, technically, rediscovered) a century ago this year.

Wisely, Yale University’s Beinecke Rare Book and Manuscript Library decided to be open about so controversial an item, and the entire manuscript has been posted online for scrutiny. There, one finds an object that initially does not seem to merit the fuss.

Physically, the manuscript is not large and has been measured (at a laboratory hired by the Beinecke Library) at just 23.5 x 16.2 cm, or just over 9” x 6”. Nor is it very lengthy. It once had no more than 116 leaves (or folios), each numbered on only one side, but 14 of them vanished, possibly centuries ago, so just 102 remain. Counting both sides of each leaf, that makes 204 “pages,” although purists can be fussy on that point. (For the record, the Beinecke Library follows the convention whereby the leaf or folio on the right side of an open book is referred to as “recto,” while the reverse of that same leaf is “verso.” Thus, instead of references like “page 9,” one instead gets “folio 9 recto.”)

Once the technical minutiae are out of the way, however, amazement follows. The manuscript is handwritten in a tidy, curvy format that cannot be read by anyone. When the individual characters of the writing are transliterated into a format of Roman letters adopted by Voynich buffs for the sake of convenience, the text provides such extreme nonsense as: “yteedy qotal dol shedy qokedar chcthey otordoror qokal otedy qokedy qokedy dal qokedy qokedy skam.”

The writing is accompanied by hundreds of illustrations, which one would expect to provide some guidance, but the opposite is the case. The pictures include perplexing charts of the cosmos, lots of unidentified plants and images of naked women either bathing or interacting with a bizarre network of tubes. (And the tubes are not even phallic: Sometimes, the women are inside them.)

For such a dreadful conundrum, the world has one man to thank — rare-book dealer Wilfrid Voynich. A former student agitator in flight from imperial Russia, Voynich transplanted himself first to the U.K. and then (as of 1914) to the United States, all the while building a reputation as a connoisseur of ancient scribblings. At some point in 1912 — nobody seems to know precisely when — he found the manuscript that bears his name.

Up to his death in 1930, Voynich was so evasive about the details of his discovery that one might reasonably wonder if he himself created the manuscript. However, the document was radiocarbon tested at the University of Arizona in 2009, yielding an origin point in the early 1400s. As well, the correspondence of a renowned scholar of the 1600s, the Jesuit Athanasius Kircher, has revealed a handful of apparent references to the manuscript. One of his contacts, the Prague-based physician Johannes Marcus Marci, evidently sent the manuscript to him for interpretation in the mid-1660s.

So the manuscript is not a modern fraud, but its provenance still remains very sketchy. In a 1930 letter written by Voynich’s wife Ethel — and not opened until her own death in 1960 — it was somewhat clarified that Voynich found the manuscript, thanks to a Jesuit priest named Joseph Strickland, at Frascati, near Rome. Voynich buffs have taken that to mean the Jesuit centre at the Villa Mondragone, near Frascati, where Strickland worked. In any case, Ethel revealed that Fr. Strickland swore Voynich to secrecy, implying that the transaction was somehow dicey.

How the manuscript got to Frascati is murky but must have had something to do with the Jesuit Kircher and his presumed receipt of it from Dr. Marci. In turn, Kircher was told by Marci that the latter had obtained it through the will of a late friend, George Baresch (also known by the Latin handle of “Barschius”). As well, Marci had heard a claim that the manuscript was once owned by Holy Roman Emperor Rudolf II (who reigned from 1576-1611), and ultraviolet examination of folio 1 recto has indeed revealed the faded signature of Rudolf’s chief botanist and alchemist, Jacobj à Tepenece.

Prior to Rudolf, however, there are at least 150 years of utter mystery. The very earliest theories about the manuscript centred on the English philosopher-monk Roger Bacon, who lived in the 1200s, but his era is ruled out by the radiocarbon test. In a 2011 article for the Skeptical Inquirer magazine, Voynich researcher Klaus Schmeh listed (and scoffed at) other candidates who have been proposed over the years, including artist Leonardo da Vinci, an Italian architect of the 1400s named Antonio Averlino and even an underground group of Cathar heretics. An English occultist, John Dee, has often come up, but online Voynich authority Philip Neal strongly discounts him.

Schmeh has wondered if the actual perpetrator may simply have been some unknown mentally ill person, but that looks impossible. Although all Voynich researchers may not agree, there has been a strong argument since the 1970s that the handwriting of the manuscript indicates at least two authors, not one. As well, linguist and computer expert Gordon Rugg has pointed out that the manuscript does not statistically conform to the known patterns of insane ravings, whatever its actual contents.

With the manuscript’s origin hopelessly obscure, Voynich enthusiasts have had no other choice but to try to crack the text. That daunting task has been made a little less grim by such online resources as the Beinecke Library (to which the manuscript was donated in 1969 by rare-book dealer H.P. Kraus, who bought it after Ethel Voynich’s death). Today, anyone can pretend to be a Voynich pundit. For much of its earlier history, however, the Voynich field was the preserve of a small circle of devotees — many of whom came from the shadows.

According to an historian of cryptology, David Kahn, Voynich consulted various authorities about his puzzling manuscript, and, in 1917, he approached MI-8, the U.S. military’s codebreaking unit during the First World War. The unit’s commander, Herbert O. Yardley, took a look at the manuscript, but it was a subordinate, John M. Manly, who became obsessed with it. When a possibly unhinged scholar named William R. Newbold began claiming in 1921 that he had solved the manuscript, Manly would lead the charge that discredited him. (Amusingly, Voynich was torn between the two men: The New York Times later revealed that both Manly and Newbold’s widow figured in Voynich’s will.)

Other significant intelligence personnel who joined the Voynich field in the decades to come were William F. Friedman (a legendary U.S. codebreaker of the Second World War), John H. Tiltman (a British contemporary of Friedman, also acclaimed), Prescott Currier (a U.S. Navy specialist), Yale University professor Robert S. Brumbaugh (previously a cipher sleuth for the U.S. Army) and Mary D’Imperio (whose highly regarded 1978 book on the manuscript can be consulted at the website of the National Security Agency).

Despite their skills, not one of the above was able to read the manuscript, leading to the growing suspicion that the text does not involve any encryption, because that would have been broken by now. And it may not even involve any actual language, either: In 2004, the aforementioned Gordon Rugg declared that the text was just gibberish — an ancient hoax possibly assembled through a non-functional version of a Renaissance coding technique called the “Cardan grille.” Rugg could not, however, explain why anyone in the Renaissance would do such a thing, nor did he address the amazing effort that went into the hundreds of illustrations. After a century of study, the Voynich Manuscript still mocks us.

The Evolutionary Mystery of Homosexuality

By David P. Barash
chronicle

Critics claim that evolutionary biology is, at best, guesswork. The reality is otherwise. Evolutionists have nailed down how an enormous number of previously unexplained phenomena—in anatomy, physiology, embryology, behavior—have evolved. There are still mysteries, however, and one of the most prominent is the origins of homosexuality.

The mystery is simple enough. Its solution, however, has thus far eluded our best scientific minds.

First the mystery.

The sine qua non for any trait to have evolved is for it to correlate positively with reproductive success, or, more precisely, with success in projecting genes relevant to that trait into the future. So, if homosexuality is in any sense a product of evolution—and it clearly is, for reasons to be explained—then genetic factors associated with same-sex preference must enjoy some sort of reproductive advantage. The problem should be obvious: If homosexuals reproduce less than heterosexuals—and they do—then why has natural selection not operated against it?

The paradox of homosexuality is especially pronounced for individuals whose homosexual preference is exclusive; that is, who have no inclination toward heterosexuality. But the mystery persists even for those who are bisexual, since it is mathematically provable that even a tiny difference in reproductive outcome can drive substantial evolutionary change.

J.B.S. Haldane, one of the giants of evolutionary theory, imagined two alternative genes, one initially found in 99.9 percent of a population and the other in just 0.1 percent. He then calculated that if the rare gene had merely a 1-percent advantage (it produced 101 descendants each generation to the abundant gene’s 100), in just 4,000 generations—a mere instant in evolutionary terms—the situation would be reversed, with the formerly rare gene occurring in 99.9 percent of the population’s genetic pool. Such is the power of compound interest, acting via natural selection.

For our purposes, the implication is significant: Anything that diminishes, even slightly, the reproductive performance of any gene should (in evolutionary terms) be vigorously selected against. And homosexuality certainly seems like one of those things. Gay men, for example, have children at about 20 percent of the rate of heterosexual men. I haven’t seen reliable data for lesbians, but it seems likely that a similar pattern exists. And it seems more than likely that someone who is bisexual would have a lower reproductive output than someone whose romantic time and effort were devoted exclusively to the opposite sex.

Nor can we solve the mystery by arguing that homosexuality is a “learned” behavior. That ship has sailed, and the consensus among scientists is that same-sex preference is rooted in our biology. Some of the evidence comes from the widespread distribution of homosexuality among animals in the wild. Moreover, witness its high and persistent cross-cultural existence in Homo sapiens.

In the early 1990s, a geneticist at the National Institutes of Health led a study that reported the existence of a specific genetic marker, Xq28, a region on the X chromosome, that predicted gay-versus-straight sexual orientation in men. Subsequent research has been confusing, showing that the situation is at least considerably more complicated than had been hoped by some (notably, most gay-rights advocates) and feared by others (who insist that sexual orientation is entirely a “lifestyle choice”).

Some studies have failed to confirm any role for Xq28 in gay behavior, while others have been supportive of the original research. It is also increasingly clear that whatever its impact on male homosexuality, this particular gene does not relate to lesbianism. Moreover, other research strongly suggests that there are regions on autosomal (nonsex) chromosomes, too, that influence sexual orientation in people.

So a reasonable summary is that, when it comes to male homosexuality, there is almost certainly a direct influence, although probably not strict control, by one or more alleles. Ditto for female homosexuality, although the genetic mechanism(s), and almost certainly the relevant genes themselves, differ between the sexes.

Beyond the suggestive but inconclusive search for DNA specific to sexual orientation, other genetic evidence has emerged. A welter of data on siblings and twins show that the role of genes in homosexual orientation is complicated and far from fully understood—but real. Among noteworthy findings: The concordance of homosexuality for adopted (hence genetically unrelated) siblings is lower than that for biological siblings, which in turn is lower than that for fraternal (nonidentical) twins, which is lower than that for identical twins.

Gay-lesbian differences in those outcomes further support the idea that the genetic influence upon homosexuality differs somewhat, somehow, between women and men. Other studies confirm that the tendency to be lesbian or gay has a substantial chance of being inherited.

Consider, too, that across cultures, the proportion of the population that is homosexual is roughly the same. We are left with an undeniable evolutionary puzzle: What maintains the underlying genetic propensity for homosexuality, whatever its specific manifestations? Unlike most mystery stories, in which the case is typically solved at the finish, this one has no ending: We simply do not know.

Here are some promising possibilities.

Kin selection. Scientists speculate that altruism may be maintained if the genes producing it help a genetic relative and hence give an advantage to those altruistic genes. The same could be true of homosexuality. Insofar as homosexuals have been freed from investing time and energy in their own reproduction, perhaps they are able to help their relatives rear offspring, to the ultimate evolutionary benefit of any homosexuality-promoting genes present in those children.

Unfortunately, available evidence does not show that homosexuals spend an especially large amount of time helping their relatives, or even interacting with them. Not so fast, however: Those results are based on surveys; they reveal opinions and attitudes rather than actual behavior. Moreover, they involve modern industrialized societies, which presumably are not especially representative of humanity’s ancestral situations.

Some recent research has focused on male homosexuals among a more traditional population on Samoa. Known as fa’afafine, these men do not reproduce, are fully accepted into Samoan society in general and into their kin-based families in particular, and lavish attention upon their nieces and nephews—with whom they share, on average, 25 percent of their genes.

Social prestige. Since there is some anthropological evidence that in preindustrial societies homosexual men are more than randomly likely to become priests or shamans, perhaps the additional social prestige conveyed to their heterosexual relatives might give a reproductive boost to those relatives, and thereby to any shared genes carrying a predisposition toward homosexuality. An appealing idea, but once again, sadly lacking in empirical support.

Group selection. Although the great majority of biologists maintain that natural selection occurs at the level of individuals and their genes rather than groups, it is at least possible that human beings are an exception; that groups containing homosexuals might have done better than groups composed entirely of straights. It has recently been argued, most cogently by the anthropologist Sarah B. Hrdy, that for much of human evolutionary history, child-rearing was not the province of parents (especially mothers) alone. Rather, our ancestors engaged in a great deal of “allomothering,” whereby nonparents—other genetic relatives in particular—pitched in. It makes sense that such a system would have evolved in Homo sapiens, of all primate species the one whose infants are born the most helpless and require the largest investment of effort. If sufficient numbers of those assistants had been gay, their groups may have benefited disproportionately.

Alternatively, if some human ancestors with a same-sex preference reproduced less (or even not at all), that, in itself, could have freed up resources for their straight relatives, without necessarily requiring that the former were especially collaborative. Other group-level models have also been proposed, focusing on social interaction rather than resource exploitation: Homosexuality might correlate with greater sociality and social cooperation; similarly, it might deter violent competition for females.

Balanced polymorphisms. Perhaps a genetic predisposition for homosexuality, even if a fitness liability, somehow conveys a compensating benefit when combined with one or more other genes, as with the famous case of sickle-cell disease, in which the gene causing the disease also helped prevent malaria in regions where it was epidemic. Although no precise candidate genes have been identified for homosexuality, the possibility cannot be excluded.

Sexually antagonistic selection. What if one or more genes that predispose toward homosexuality (and with it, reduced reproductive output) in one sex actually work in the opposite manner in the other sex? I prefer the phrase “sexually complementary selection”: A fitness detriment when genes exist in one sex—say, gay males—could be more than compensated for by a fitness enhancement when they exist in another sex.

One study has found that female relatives of gay men have more children than do those of straight men. This suggests that genes for homosexuality, although disadvantageous for gay men and their male relatives, could have a reproductive benefit among straight women.

To my knowledge, however, there is as yet no evidence for a reciprocal influence, whereby the male relatives of female homosexuals have a higher reproductive fitness than do male relatives of heterosexual women. And perhaps there never will be, given the accumulating evidence that female homosexuality and male homosexuality may be genetically underwritten in different ways.

A nonadaptive byproduct. Homosexual behavior might be neither adaptive nor maladaptive, but simply nonadaptive. That is, it might not have been selected for but persists instead as a byproduct of traits that presumably have been directly favored, such as yearning to form a pair bond, seeking emotional or physical gratification, etc. As to why such an inclination would exist at all—why human connections are perceived as pleasurable—the answer may well be that historically (and prehistorically), it has often been in the context of a continuing pair-bond that individuals were most likely to reproduce successfully.

There are lots of other hypotheses for the evolution of homosexuality, although they are not the “infinite cornucopia” that Leszek Kolakowski postulated could be argued for any given position. At this point, we know enough to know that we have a real mystery: Homosexuality does have biological roots, and the question is how the biological mechanism developed over evolutionary time.

Another question (as yet unanswered) is why we should bother to find out.

There is a chilling moment at the end of Ray Bradbury’s The Martian Chronicles, when a human family, having escaped to Mars to avoid impending nuclear war, looks eagerly into the “canals” of their new planetary home, expecting to see Martians. They do: their own reflections.

It wasn’t terribly long ago that reputable astronomers entertained the notion that there really were canals on Mars. From our current vantage, that is clearly fantasy. And yet, in important ways, we are still strangers to ourselves, often surprised when we glimpse our own images. Like Bradbury’s fictional family, we, too, could come to see humanity, reflected in all its wonderful diversity, and know ourselves at last for precisely what we are, if we simply looked hard enough.

Unlike the United States military, with its defunct “don’t ask, don’t tell” policy, many reputable investigators are therefore asking … not who is homosexual, but why are there homosexuals. We can be confident that eventually, nature will tell.

Reinventing Bach

By Alexandra Mullen
bnreview

More than 300 years ago, Johann Sebastian Bach was born in the small German town of Eisenach. Unlike his contemporary Handel, he never traveled very far. But from this intense central point, Bach — or at least the sound waves representing him — seems to be filling up the universe. Three of his pieces are on Voyager’s golden disc, which is now approaching the brink of interstellar space; and at the same time, as Paul Elie, the author of Reinventing Bach, says, “He is in my pocket.” How has Bach in our time become a Godlike being whose center is everywhere and whose circumference nowhere?

By juxtaposing the space capsule and the pocket, Elie captures two elements of Bach — the domestic and the transcendent. They were certainly evident in his own life of playing, rehearsing, composing, teaching, and performing music, where there was not much distance or difference between the clavier at home and the organ at church. By juxtaposing the LP and the iPod, Elie reminds us of how technology has democratized and universalized Bach — all of us can “play” him now whether we’re picking through a score at the piano or listening to recordings of Edwin Fischer’s impassioned wrong notes or Glenn Gould’s equally impassioned right ones.

I single out Fischer and Gould’s piano recordings because while both Fischer and Gould give highly individual performances, they represent two different ways of thinking about what a recording is. Fischer’s recordings sound old, not just because of the background hissings and pops but also because they are embedded in concert practice: one continuous take, one continuous flow distilling concentrated experience with that particular piece at that particular moment in that particular space, warts, felicities, and all. Gould’s recordings of the same pieces sound—well, newer, certainly, but in some sense out of time and even location: Gould consciously exploited “take-twoness” — not so much to eliminate flubs (although that too) as to craft peaks of brilliance on an instrument inhabiting its own sound-world.

For Elie, whose previous book, The Life You Save May Be Your Own, concentrated on four American Catholic writers, the spiritual and the technological are not antithetical:

In an age of recordings, the past isn’t wholly past and the present isn’t wholly present, and our suspension in time, our intimacy with the most sublime expressions of people distant and dead, is a central fact of our experience. This is at once a benefit and a quandary, and in it, I would venture, are the makings of a spirituality of technology.

Some Bach pianists have the gift of bringing out lines of music in a score that I’ve never noticed before — I think particularly of Gould’s and Simone Dinnerstein’s startlingly different but utterly compelling versions of the Goldberg Variations. Elie has a similar ability to hear new connections between well-known notes. I find him particularly thought-provoking when he marks long historical phrases. In one passage, after considering the physical placement of the church organist (literally lofty, back turned to the service), Bach’s chorales, and the goals of the Reformation that led “every believer [to] hope for a direct encounter with the thing itself,” Elie moves outward to our day: “Recordings heighten the effect, completing the transition from eye to ear, from seeing to hearing, that the Reformation had brought about. The organist is done away with. So is the church building. So are the limits of space and time, of stamina and attention. The music of Bach is all that is left. The recordings pour out perfection: they enable the listener to import transcendence into an ordinary room, to ‘play’ the music without making it.” Is it just a fluke that some early recording studios were deconsecrated churches?

I have been singling out technological contemplations, but Elie has many strengths and strands: detailed and beautifully described moments of listening, engagingly narrated summaries of scholarship, alert attention to telling facts, and a loving knowledge of many different kinds of music, including Robert Johnson and Led Zeppelin. There’s plenty of audiophile information — wax cylinder recordings, mono, stereo, different kinds of tape, 78s, long-playing records, CDs, iPods — and a lot on the placement of microphones.

Wearing his learning lightly (with wonderful endnotes as a ground), Elie is polyphonic and contrapuntal. In counterpoint, as Nicholas Slonimsky defines it, “each voice has a destiny of its own.” Elie’s book is held together by a chain of voices following one another as they make an entrance, step back, overlap, and enter again to reveal a new aspect against the changing conversation: Schweitzer to Casals to Stokowski to Gould to Ma. Other voices too move in and out, filling out the progressions: Tureck, Schoenberg, Einstein, Jobs, even the musically fantastic Mickey Mouse. The voice hovering over all is Elie’s own, modest, serious, attuned to the whole.

Among the wonders of Bach’s music, according to Elie, is that “it sounds inventive — it doesn’t finish the musical thought so much as keep it aloft.” Above all it is this aspect of keeping a musical idea in play, Elie feels, that has inspired so many musicians to enter into this long conversation.

It’s a conversation that has technical or professional aspects, but that also welcomes interested amateurs like Elie and me. Here are two signs of my engagement provoked by this book: first, the number of comments I’ve made in the margins, often disagreements over the role of technology; second, the number of times I’ve turned to CDs, DVDs, iTunes, and YouTube to listen to something he mentioned. Both a benefit and a quandary indeed. The perfection of recordings can be transcendent, yes, but also inhibiting for sublunary amateurs. From a technical point of view, Gould’s Apollonian super-perfection is now an everyday occurrence thanks to the ability to drop in a single-note retouch for a flub. (I’ll leave autotune alone.) For example, I particularly like something Elie doesn’t mention: the domestic intimacy of overhearing Gould hum in the background of, say, the English Suites. But I’ve been entranced by some of the YouTube videos he does mention that make the power of Bach visible, from vibrating graphic animations to a Japanese performance of the Matthew Passion to Mstislav Rostropovich playing at the Berlin Wall — in front of exuberant Western graffiti including (a recurrence Elie, given his eye for such things, strikingly fails to mention) Mickey Mouse.

It is a pleasure to read such a serious and inventive book on Bach, and that’s saying something.

Monsieur Proust’s Library

By Joseph Epstein
wsj

No one should read Marcel Proust’s “In Search of Lost Time” for the first time. A first reading, however carefully conducted, cannot hope to unlock the book’s complexity, its depth, its inexhaustible richness. Roughly a million words and more than 3,000 pages long, it is a novel I have read twice, and one of the reasons I continue to exercise and eat and drink moderately and have a physical every year into my 70s is that I hope to live long enough to read it one more time.

Told with France’s Belle Epoque (that bright and lavish quarter of a century before World War I permanently darkened all life in Europe) as its background, “In Search of Lost Time” is the recollections of a first-person narrator over several decades. This narrator, who bears many resemblances to its author (he is called Marcel, and his family and circumstances are similar to Proust’s) but who also differs from him in striking ways (chief among them that his life is not devoted to writing a great novel), is relentless in his energy for analysis. In his detailed attempt to remember all things past, he is as all-inclusive as literature can get; what normal people filter out of memory the narrator channels in. And so it was with Proust himself: While most authors working at revision tend to take things out of their manuscripts, up to his death in 1922 Proust was continuing to add things to his.

image

“In Search of Lost Time” is a masterwork. Masterworks seem to require new translations every half-century or so, and such has been the case with Proust’s vast novel. Penguin has recently undertaken a re-translation, with different hands assigned to each of the novel’s seven volumes, though, alas, not each of these hands is up to the difficult task of translating Proust, and so the translation is uneven. I prefer Terence Kilmartin’s 1970s reworking of the earlier C.K. Scott Moncrieff translation, which appeared under the title “Remembrance of Things Past” (a phrase that Scott Moncrieff took from Shakespeare’s Sonnet XXX: “When to the sessions of sweet silent thought / I summon up remembrance of things past”). It remains in print from Random House, and among its other advantages is that the edition is spaciously printed, no small benefit in a lengthy work composed of sentences sometimes running several cubits long.

Masterworks also engender writing about them by superior people. Small books have been written about Proust’s novel by François Mauriac, Samuel Beckett and Jean-François Revel. Other studies of the book have been done by the poets Howard Moss and Howard Nemerov and the critic Roger Shattuck. Full-length biographies of Proust have been written by George Painter, André Maurois, William C. Carter and Jean-Yves Tadié. Others have written books about photography and Proust; about painting and Proust; about his May 1922 dinner meeting with James Joyce, Igor Stravinsky and others of the great figures of Modernism; about his interest in but limited knowledge of English. There is even an excellent biography of Proust’s mother, who played so important a role in his life. Proustolators, of whom I count myself one, do not want for excellent reading about their idol.

With “Monsieur Proust’s Library,” Anka Muhlstein has added another volume to the collection of splendid books about Proust. A woman of intellectual refinement, subtle understanding and deep literary culture, Ms. Muhlstein has written an excellent biography of Astolphe de Custine, the 19th-century French aristocrat who did for Russia what Alexis de Tocqueville did for the United States. Her previous book, “Balzac’s Omelette,” was a study of the place of food in that novelist’s life and in his work.

“Monsieur Proust’s Library” is a variation on her Balzac book. Early in “Balzac’s Omelette” she wrote: “Tell me where you eat, what you eat, and what time of day you eat, and I will tell you who you are.” There is much to it, but there is even more to be learned by discovering, as Ms. Muhlstein in effect does in “Monsieur Proust’s Library,” what a person reads and when, what he thinks of what he reads, and what effect it has had on him. Omelettes for Balzac, books for Proust: Ms. Muhlstein is an excellent provisioner of high-quality intellectual goods.

Marcel Proust (1871-1922) was immensely well read. “In Search of Lost Time” encapsulates within itself the main traditions in French literature: both in fiction (from Madame de Lafayette through Stendhal, Balzac, Flaubert and Zola) and in the belle-lettristic-philosophical line (from Montaigne through Pascal, La Rochefoucauld and Chamfort). Proust formed a strong taste for generalization through these latter writers. I own a small book of his maxims, drawn from the novel and his discursive writings, and an unusually high quotient of them are dazzling. Let one example suffice: “It has been said that the greatest praise of God lies in the negation of the atheist, who considers creation sufficiently perfect to dispense with a creator.”

As an asthmatic child, Proust read more than most children. Ms. Muhlstein recounts that, by the age of 15, he was already immersed in contemporary literature, having read the essays and novels of Anatole France and Pierre Loti, the poetry of Mallarmé and Leconte de Lisle, and a number of the novels of Dostoyevsky, Tolstoy, Dickens and George Eliot. Unlike Henry James, who referred to their works as “baggy monsters,” Proust fully appreciated the great Russian novelists. He thought Tolstoy “a serene god,” valuing especially his ability to generalize in the form of setting down laws about human nature. Ms. Muhlstein informs us that, for Proust, Dostoyevsky surpassed all other writers, and that he found “The Idiot” the most beautiful novel he had ever read. He admired Dostoyevsky’s skill with sudden twists in plot, providing the plausible surprises that propelled his novels.

In his 1905 essay “On Reading,” a key document, Ms. Muhlstein notes, in Proust’s freeing himself to write his great novel, he quoted Descartes: “The reading of all good books is like a conversation with the most cultivated of men of past centuries who have been their authors.” Proust’s examination of “the original psychological act called reading,” that “noblest of distractions,” holds that books are superior to conversation, which “dissipates immediately.”

A book, he felt, is “a friendship . . . and the fact that it is directed to one who is dead, who is absent, gives it something disinterested, almost moving.” Books are actually better than friends, Proust thought, because you turn to them only when you truly desire their company and can ignore them when you wish, neither of which is true of a friend. One also frequently loves people in books, “to whom one had given more of one’s attention and tenderness [than] to people in real life.” In his own novel, Proust wrote: “Real life, life at last laid bare and illuminated—the only life in consequence which can be said to be really lived—is literature.”

Ms. Muhlstein provides a comprehensive conspectus of Proust’s reading tastes and habits. But the true strength of her book resides in her lucidly setting out how Proust put his reading to work in the creation of “In Search of Lost Time.” Characters in the novel are imbued with the ideas of the writers Proust admired. The painter Elstir, for example, enunciates many of the theories of the English art critic John Ruskin, whom Proust translated with the help of his mother (whose English was superior to his). As Ms. Muhlstein remarks, Proust also “endows his great creation, Charles Swann, with Ruskin’s artistic taste.”

The narrator’s grandmother is a devoted reader of Madame de Sévigné—whose 17th-century letters are unparalleled for their maternal endearment—who supplies the model for her treatment of her own daughter, the narrator’s mother. At home the Baron de Charlus attempts to imitate the quotidian life of Louis XIV as chronicled by the memoirs of Saint-Simon. Charlus, perhaps the most brilliant of all Proust’s characters—certainly the novel comes most alive when he is at its forefront—is a great reader. The writer Bergotte, who some say is modeled on Anatole France, held many of the views on literature that Proust himself held. The Brothers Goncourt, whose journals provide the most intimate view we have of the great 19th-century French writers—Flaubert, Maupassant, Gautier and others—figure throughout the novel in both direct and indirect ways. Racine’s play “Phèdre,” drawn from the Greek myth about a woman’s passion for her stepson, is used throughout to illustrate l’amour-malade: illicit love, possessiveness, jealousy, disappointment, rejection.

Perhaps no other novel has ever been written in which so many characters are readers, and what they read and how they react to it often determine their standing in Proust’s and ultimately our eyes. Characters reveal themselves by snobbishly criticizing lapses in style in Balzac, or, in the instance of the narrator’s friend Bloch, chalking up Ruskin as “a dreary bore.” The Duchesse de Guermantes, who is socially and artistically the central female character in the novel, sees literature as a weapon of social domination, using her heterodox opinions about books to shock and make others uncomfortable. “In Search of Lost Time,” as Ms. Muhlstein demonstrates, is not merely a magnificent book but also a highly bookish book.

The one sentence in “Monsieur Proust’s Library” with which I find myself in disagreement comes late, when Ms. Muhlstein, considering Proust’s condemnation of the Goncourt brothers for their attacks on the morality of their contemporaries, writes: “For Proust literature had nothing to do with morality.” Perhaps Ms. Muhlstein meant to write “conventional morality,” because a reversal of that sentence—”For Proust literature had everything to do with morality”—is closer to the truth. No other modern author was more alive than he to the toll taken by snobbery, cruelty, brutishness; none so exalted kindness, loftiness of spirit, sweetness of character, the kind and generous heart. No great novelist has ever written oblivious to morality, and Marcel Proust is among the novelists in that small and blessed circle of the very greatest of the great.

The Noble and the Base: Poland and the Holocaust

By John Connelly
thenation

Earlier this year, while conferring a posthumous Presidential Medal of Freedom on the Polish hero Jan Karski, Barack Obama inadvertently touched off the greatest crisis in US-Polish relations in recent memory. The man he honored had served as a courier for the Polish resistance against Hitler, and in 1942 Karski traveled across occupied Europe to tell Western leaders about the Nazi war crimes being committed in Poland, including the Holocaust. Karski had been sent on this secret mission, Obama explained, after fellow underground fighters had told him that “Jews were being murdered on a massive scale and smuggled him into the Warsaw Ghetto and a Polish death camp to see for himself.” It was late evening in Warsaw when Obama spoke, but within minutes Polish officials were demanding an apology for his use of the phrase “Polish death camp,” which they thought scandalous.

Even those well-versed in European history must wonder why. After all, the media routinely speak of “French camps” from which Jews were sent to their deaths, and the phrase doesn’t draw similar ire from the French government. On the contrary, in July the French president himself, François Hollande, began a widely covered speech on the seventieth anniversary of the roundup of Jews at Vélodrome d’Hiver by stating, “We’ve gathered this morning to remember the horror of a crime, express the sorrow of those who experienced the tragedy…and therefore France’s responsibility.” Why are Poles so sensitive on the matter of Polish camps? Readers of Halik Kochanski’s new book, The Eagle Unbowed, will ask the opposite question: How could a famously well-educated person such as Barack Obama be so insensitive regarding the simple facts about Poland, the first country to stand up to Hitler?

Here’s one undisputed, essential fact: after the Nazis and their Soviet allies overran Poland in September 1939, they did not permit the Poles to form a new national government. The Soviets made the eastern Polish territories into western Soviet republics; the Germans annexed the western Polish territories into the Reich and made central Poland a “General Government” that they ruled directly. This arrangement was radically different from those in Nazi-occupied France, Denmark or Slovakia, which were ruled by collaborationist regimes. The French camps, then, really were French—that is, operated by French collaborators (a fact stressed by Hollande in his July speech). In Poland the death camps were German, like most other institutions. The Germans allowed the Poles no administration above the village level, reduced the police force to 15,000 men and made the population into a pool of slave labor. They denied Poles schooling above grade six and closed down newspapers and journals while making vodka and pornography readily available. Meat rations disappeared almost entirely, and the population was kept on a starvation diet.

To break Polish resistance, the Germans staged frequent “round-ups,” cordoning off sections of a city’s streets and detaining everyone caught in the dragnet, or sealing off apartment houses, trams or churches and arresting everyone inside. The prisoners were sent to concentration camps or to the Reich as slave labor—or, if circumstances required, kept as hostages, to be shot if Germans were killed by the Polish underground (the ratio was 100 Poles for every German). As one of Kochanski’s sources recalled, in this climate of terror “there was never a moment when we did not feel threatened.”

By 1942, the SS had devised a plan to deport some 31 million Slavs to areas beyond the Ural Mountains. That number was to include 85 percent of all Poles. (A small percentage would stay behind and be forcibly “Germanized.”) In their place would come millions of German settlers, and with them the transformation of Poland into the eastern marches of the thousand-year Reich. The plan calculated a fatality rate from deliberate starvation of up to 80 percent. The mass expulsions began in late 1942, when the Germans cleared some 300 villages near Lublin.

Poles resisted these genocidal policies. By 1944, an underground “Home Army” (AK) had grown to more than 400,000 soldiers on Polish territory, who harassed the Germans while awaiting the right moment for an uprising. Thousands of other Poles escaped and continued the fight outside Poland. Polish pilots accounted for one of every eight German planes shot down during the Battle of Britain. An entire army of Poles left the Soviet Union in 1942 and fought through North Africa and up the Italian peninsula. In September 1944, a Polish parachute brigade under British command dropped into the Netherlands, and the following year Polish soldiers fought their way into Germany, from the west as well as the east.

Despite these efforts, the Poles saw themselves as a nation betrayed. Home Army units broke out of hiding to assist the Red Army as it entered prewar Polish territory early in 1944. Yet instead of welcoming them as allies, Soviet authorities arrested the Polish soldiers and sent them to camps. In August 1944, with Red Army troops encamped on the opposite bank of the Vistula River, the Home Army staged an uprising against the Germans in Warsaw. Soviet forces simply looked on as the Germans regrouped and destroyed the insurgency. Some 200,000 Poles lost their lives. (More than 2 million non-Jewish Poles died in World War II.) Though Poland was the first state to resist Hitler, it lost huge swaths of territory to the Soviet Union without its Western allies so much as uttering a protest. Poles from the lost areas were placed in cattle cars and resettled in central and western Poland (some of which was being “cleansed” of Germans).

Such dramas of idealism, self-sacrifice and betrayal—told well if selectively in Kochanski’s history—seem indelibly compelling. So how did they escape Obama and his speechwriters? The Eagle Unbowed is billed as the “first truly comprehensive account” of Poland in World War II, but previous works have told the basic story. On my small office shelf I count five such volumes (including Timothy Snyder’s important recent work Bloodlands). Why do Westerners remain so ignorant about the simple facts of Poland’s war?

Clues are offered by Jan Gross and Irena Grudzinska-Gross in their new book Golden Harvest. The facts are not so simple, because the country they depict hardly resembles the one described by Kochanski. Instead of starved and recalcitrant victims, gentile Poles appear as accomplices in Nazi policies to exterminate their Jewish co-citizens. These policies involved not only death camps but also massive seizures of Jewish property. After deporting Jews from ghettos, German officials confiscated and sent home the most valuable loot—but much remained to tempt local Poles. When news circulated that Germans were about to clear a ghetto, peasants from surrounding villages drove up their horse carts to haul away all they could. Lust for gold sent Poles to fields around Treblinka and other German death camps, where they dug many meters into the earth seeking tooth fillings and jewelry. Regions around the camps experienced economic booms.

Rather than being heroic, Poles appear in Golden Harvest not so different from other Europeans in their willingness to aid Hitler in destroying the Jews. Such a perspective, which may seem unremarkable to Western readers, culminates a revolution in historical thinking within Poland itself, sparked some eleven years ago by the publication of Jan Gross’s book Neighbors (2001). Previously, the standard view was that Poles did not help the Nazis because the Nazis viewed Poles as subhumans unfit for collaboration; instead, the Germans sought camp guards from the Ukrainian or Baltic populations. If Poles did not rescue more Jews, that was because of the penalties for doing so: unlike any other people under Nazi occupation, Poles hiding Jews were punished with death for themselves and their families.

In Neighbors, Gross began to undermine this consensus by showing that in the small town of Jedwabne in northeast Poland, on July 10, 1941, Poles murdered their Jewish neighbors in a day-long orgy of violence. After recovering from the shock of this revelation, Polish historians examined previously neglected sources and found more than twenty other places where Poles—encouraged but not forced by the Germans—had abused and killed Jews in the summer of 1941. A new Polish Center for Holocaust Research in Warsaw has pushed forward this revolution. Historians still agree that the overwhelming majority of Polish Jews were killed by the Germans, first in overcrowded ghettos under conditions calculated to kill slowly, and then through deportations to the death camps, a process mostly completed by late 1942. But they estimate that some 10 percent of Poland’s Jews escaped deportation and sought shelter in villages and forests, often in large family units. The great majority of these Jews (probably more than 80 percent) did not survive until liberation because Poles helped Germans hunt them down.

In their studies of rural Poland, the Polish historians Jan Grabowski, who teaches at the University of Ottawa, and Barbara Engelking, of the Institute of Philosophy and Sociology at the Polish Academy of Sciences in Warsaw, have shown how this happened. First, German police and Polish village leaders enlisted peasants to comb the forests for Jews who were attempting to survive, often in hand-dug caves and bunkers. Once discovered, the Jews were usually executed on the spot, often by German policemen but sometimes by Polish ones. Jews who took shelter with Polish peasants likewise were usually hunted down and killed. This was due not to frequent patrols by the German police, who were actually few and far between, but to the watchful eyes of other Poles, recording in an invisible ledger every commonplace fact, such as extra portions of bread or milk being consumed by a given household. The members of one Polish family lost their lives when German gendarmes—tipped off by the family’s neighbors—discovered stores of food intended for Jews in hiding (who were also discovered and shot).

Polish historians have long known about Polish collaborators, whom they described as marginal, the dregs of society. Now a consensus is arising among researchers that the denouncers came from all walks of life. In villages around Kielce, for example, local elites orchestrated the killing of several hundred Jews, lending the crimes a “kind of official imprimatur,” according to Gross. Polish policemen tended to be well-situated heads of families. In his investigation of a district in southeastern Poland, Grabowski discovered that peasants with medium-size properties were overrepresented among the collaborators.

Jan and Irena Gross do not claim that all Poles took part in looting Jews’ property, let alone in killing them. Yet those who did could count on the tacit acceptance of their communities. Villagers also knew about the Polish underground but divulged nothing about it to the Germans, for that would have violated a societal consensus. Indeed, thousands of Poles eagerly risked death in the Home Army. Young people in particular plunged enthusiastically into all kinds of “suicidal” acts aimed at frustrating German policy—save the policy of killing Jews. By contrast, stealing from, hunting down and murdering Jews did not flout commonly shared values. Again and again, postwar court testimony speaks of a Jew discovered in hiding and begging his neighbors (with whom he might have played as a child) for his life, yet being delivered to the gendarmes and then shot. All of this occurred in the open. Alina Skibinska, who has read hundreds of court files and other documents, said she has not encountered a single case where villagers found escaped Jews and either let them return to the forest or decided to hide them themselves. The new historical work makes it clear that rural Poland was a hostile, indeed deadly, environment for Jews seeking help.

* * *

Halik Kochanski does not deny that Jews under German occupation faced a different situation from Poles. “For all the sufferings of the Christian Poles during this period,” she writes, “they were not being subjected to the unprecedented policy of calculated and deliberate extermination that the Polish Jews faced.” Yet though Kochanski reads Polish, the revolution in the history of Polish-Jewish relations has passed her by. She acknowledges the killings at Jedwabne but attributes them to German instigation. Of the pogroms in nearby Polish towns she says nothing, though she eagerly reports that Ukrainians abused Jews in eastern Poland. The “Ukrainians needed little encouragement,” she writes. But Poles needed no more.

Kochanski admits to the existence of anti-Semitism in Poland but denies it explanatory power. She cites an SS report from 1941 complaining there was “no real antisemitism” in the country, but she fails to ask whether the perspective of the SS is reliable on this score: Who, after all, might count as “real” anti-Semites for these hyper-racists? In keeping with the old stereotypes, Kochanski explains Polish hostility to the Jews as a reaction to the supposed Jewish sympathy for Communism. “One possible motive for taking part in the pogroms” at Jedwabne, she writes, “could have been revenge against the perceived prominence of the Jews in the Soviet administration.” Does that account for the hundreds of men, women and children who were burned to death in a single barn (and whose screams were so loud that a band was brought in to drown them out)?

Though she has not read recent studies of the fate of Jewish refugees, Kochanski does respond to earlier work by the Israeli historian Shmuel Krakowski on the Polish Home Army’s hunting down of Jewish partisans hiding in the forests. In tune with nationalist writers, she calls these partisans “Jewish bandits” and asserts that, by executing such alleged marauders, the AK “protected” the Polish population. And yet, if it had included Jews as part of the population to protect, the Polish underground would have fed those in hiding rather than hunt them down. In a sense, members of the AK were also bandits, dependent on the local population for provisions, taking by force what they could not obtain by consent. Why does Kochanski think that Polish Jewish partisans were a menace whereas Polish Christian partisans were not?

The answer is that Kochanski repeats the stereotypes of her sources. In the Polish mind, Jews were Communists, and armed groups of Jewish escapees were feared for showing particular brutality toward the Polish Christian population. The combing of forests for “bandits” thus produced a sense of security among Poles. Like the nationalist authors she favors, Kochanski assumes that most Poles wanted to help the Jews. Drawing on a few stories from eastern Poland (including recollections of her relatives), she asserts that “the outsourcing of Jewish labour [from camps] to local landowners and farmers gave the Poles the opportunity to provide assistance.” If more Jews did not survive, that was because they refused to help themselves. Władysława Chomsowa, a Pole who was “very active in saving Jews,” noted, “the greatest difficulty was the passivity of the Jews themselves.” Kochanski cites a Jewish survivor from Wilno: “we should have mobilized and fought.”

If Kochanski had read more Jewish memoirs, she would feel the cold absence of sympathy characteristic of such opinions. Resistance is not spontaneous. A crowd consisting largely of women and children, herded by heavily armed and extremely violent guards, does not rise up “as a man” and start fighting back. Until the end, Jews could not be certain of their fate, though they could be certain that even a slight display of disobedience would result in the immediate execution of themselves and their loved ones. The Nazis diabolically exploited Jews’ devotion to their families: though Kochanski writes that the Jews of eastern Poland were “poorly guarded and had ample opportunities for escape,” it would have meant abandoning children and elderly parents to their fate. When some Jews finally did escape from the ghettos in 1942 to avoid being sent to the death camps, they fled in large family units—and that is how they met their deaths during the ensuing manhunts. In Kochanski’s account, Poles have no role in this story. She writes that some Jews “took to the forests where the Germans hunted them down.”

Referring to Polish attitudes toward Jews during the Holocaust, Kochanski writes that the issue has “provoked intense and highly emotional debates which show no sign of ending.” The implication is that the historiography consists of a predictable repetition of viewpoints, “Polish” and “Jewish.” In her book, the former mostly prevails.

* * *

One might have thought it understandable that destitute Poles would seize Jewish property after its owners were killed; after all, they also seized the property of other Poles. Historian Anna Machcewicz has written of a B-24 bomber that crashed in Poland; soon, local peasants went inside the wreckage and stripped the dead Polish crew of their clothes. Silent hordes of Polish looters descended on the Warsaw Ghetto after it was emptied in 1943, but the same thing happened in the ruins of the city’s west bank after the Germans left in January 1945. And after the fighting, millions of Poles moved in and began using property left by the Germans in the western part of the country. In desperate times, people take what they need to survive.

Jan Gross refuses to accept such reasoning. Though many in Poland dismiss him as a Jew, Gross represents a particular kind of Polish perspective, one that is self-critical though patriotic. Of the inhabitants of the villages around Treblinka, he writes: “It can be safely assumed that the customs of every social and ethnic group demand respect toward their dead. Such respect is not a sign of some ‘higher’ civilization, but of basic human solidarity. The body is not a thing; even after death it retains the shape of the person whom it was serving in life…. one cannot say that the despoiling of the ‘bottomless Treblinka earth,’ as Vasili Grossman described it, could be justified by poverty, need, or necessity.” Vital here are two words yoked together, “their dead”: in Gross’s telling, the murdered Jews were as much Polish as Jewish.

How to relate Poles and Jews is a question that has confounded historians. Gross himself omitted Jews from his first book, Polish Society Under German Occupation (1979), because they were “separated from the rest of the population and treated differently by the occupiers.” He wrote of the self-sacrifice and heroism of Poles as they created institutions to salvage their national life. Yet his sources made him wonder about the realities left out of this “heroic” narrative. At the Hoover Institution in Stanford, Gross discovered a shocking report written in 1940 by Jan Karski, who believed that Poles ought to understand that both Jews and Poles “are being unjustly persecuted by the same enemy.” However, “such an understanding does not exist among the broad masses of the Polish populace. Their attitude toward the Jews is overwhelmingly severe, often without pity.” Karski worried that this attitude made Poles vulnerable to demoralization. “A large percentage of them is benefitting from the rights that the new situation gives them…. ‘The solution of the Jewish Question’ by the Germans…is a serious and quite dangerous tool in the hands of the Germans, leading toward the ‘moral pacification’ of broad sections of Polish society.”

These observations were so embarrassing that Karski kept them out of his reports for the Western allies. They unsettled Gross because they called into question the stories he had imbibed growing up in postwar Poland, even in a household that found ethnic nationalism repugnant. His father was the Polish-Jewish barrister Zygmunt Gross, widely respected for defending victims of Stalinism in the early 1950s; his mother, the Polish gentile Hanna Szumanska, served in the Polish underground. She helped hide Zygmunt and other Jews, including a first husband denounced by neighbors (who were rewarded with a liter of vodka). His parents were a bridge for Jan to an older, romantic sense of Polishness, largely forgotten in mostly mono-ethnic postwar Poland—a Polishness that had included Jews, Lithuanians and Ukrainians. Jan, his parents and his wife Irena left Poland after the anti-Semitic campaign orchestrated by Polish Communists in 1968, first heading for Italy and then the United States. This was the “March emigration,” in which most remaining Polish Jews left the country. The former dissident publisher Barbara Torunczyk later recalled that Zygmunt was the first of the émigrés she saw return for a visit. That was in 1973: he was bringing back Hanna’s ashes to be laid to rest in Polish soil. Later, Jan would return with the ashes of his father.

After the collapse of Communism in 1989, Jan Gross spent more time in Poland and consulted previously inaccessible records. A sociologist by training, he also used methods considered unserious by Polish historians, such as talking to people who knew about the crimes. At pubs in Jedwabne, one could hear “incredible” stories about Jews being murdered in a barn that the historians knew nothing about. After Neighbors, Gross published Fear (2006), a study of the postwar pogroms in Krakow and Kielce (on July 3, 1946, in the latter city, Poles killed forty-two Jews who had survived the Holocaust). “I wrote this book,” he later said, “as a Pole who felt the events described were a stain on my Polish identity.”

Yet the intensity of this criticism left little space for the more tolerant Poland that was his parents’: none of the protagonists in Golden Harvest communicate values that transcend the ethnic perspective. Historian Paweł Machcewicz, himself a leader in investigating the Jedwabne massacre, has criticized Gross for not including in his accounts the thousands of Poles who helped Jews. In Warsaw alone, some 25,000 Jews are thought to have lived in hiding before the outbreak of the uprising in August 1944, and, according to conservative estimates, at least three times that number of Poles would have been required to keep them alive. No other group is as numerous among the “Righteous Gentiles” honored at Yad Vashem as Poles.

Gross’s answer to criticisms like Machcewicz’s is that the heroic story is well-known in Poland, and his task as an author is to say something new. But why has no one before him told of Poles robbing and murdering Jews? Gross’s book on Jedwabne appeared sixty years after the crime. A partial explanation lies in the decades of collusion between Communism and nationalism. Poland’s Communists were placed in power by the Red Army and widely seen as lackeys of Moscow, Poland’s historic enemy. They therefore sought to boost support through an ethnic narrative that was anti-German but also, at times, anti-Semitic. The historians of Communist Poland ignored questions of wartime collaboration and wrote that 6 million Polish citizens had died, failing to note that more than half were Jewish. As late as 1995, only 8 percent of Poles surveyed believed that Auschwitz was, above all, a place where Jews were killed (of the 1.1 million people killed at Auschwitz, about 90 percent were Jewish); by 2010, that number had risen to 47.4 percent.

In this sense, the work of Jan and Irena Gross and younger Polish historians like Engelking and Grabowski is pedagogical: part of the broader democratization of Polish society, excavating and contextualizing evidence deemed inopportune by the Communist regime—for example, the hundreds of Yiddish-language memoirs left by survivors in the immediate postwar era and stored in the Jewish Historical Institute in Warsaw. Even historians on the far right are now admitting some Polish role in the Holocaust. But Kochanski, a military historian whose parents fought in the resistance, also wants to educate, and her target audience is Westerners ignorant of the Polish struggle against Nazi and Communist totalitarianism.

* * *

The question is whether these two images of Poland—a country of heroes and a country of collaborators—can be combined. The difficulty stems from the occupation itself. Rarely has a society been more violently divided than Polish society was during the war: Jews divided from Poles, but also Poles divided from other Poles. The Polish Jewish writer Janina Bauman, who escaped the Warsaw Ghetto with her mother and sister and lived among Poles, described the process. “Some time and several shelters passed,” she recalled, “before I realised that for the people who sheltered us our presence also meant more than great danger, nuisance, or extra income. Somehow it affected them, too. It boosted what was noble in them, or what was base. Sometimes it divided the family, at other times it brought the family together in a shared endeavor to help and survive.”

The base attained a distance from the noble that Westerners can scarcely imagine. But the story does not end there, for the distance between the two poles was also collapsed as each was inverted, and each inversion compounded. The base became more so by being presented as virtuous, and the noble eluded people’s reach because it was stigmatized as harmful, indeed self-serving. Jan Gross writes of a case in southern Poland where neighbors hounded a woman to dispose of the two Jewish children in her care, insisting she was “selfishly” endangering the village. They left her in peace only after she had assured them—falsely—that she had drowned the pair. Gross asks us to ponder the inversion of morality in a place where people breathed a sigh of relief believing that their neighbor had murdered two children. In his sources, Grabowski repeatedly encounters Polish police carrying out their “patriotic” duty of turning over Jewish women and children to the Germans. The debasement of the noble continued after the war, as Polish rescuers begged the Jews they had saved to keep quiet. The eminent critic Marcel Reich-Ranicki and his wife owed their salvation to the Polish worker “Bolek,” who housed them for fourteen months after they had fled the Warsaw Ghetto. When Soviet troops finally pushed the Germans back and liberated them, Bolek offered Reich-Ranicki a glass of vodka to celebrate but implored him to tell “no one that you were with us. I know this nation. They would never forgive us for sheltering two Jews.”

Reich-Ranicki kept his promise until 2000, when his autobiography appeared, to much acclaim, in Germany. Like him, historians such as Jan and Irena Gross are also exposing stories that have lain dormant beneath the surface for decades, unnoticed because they happened far away from Warsaw or Krakow, where urbane intellectuals construct “historical memory.” Gross cites an esteemed Polish ethno-musicologist who has spent decades collecting folklore in the Polish countryside, who is “enamored of Polish village life and its culture,” but who writes, “The most painful thing for me is the attitude in the countryside toward Jews, and a universal sense of triumph because they are no longer there.” A keen reporter for the Polish underground had already written in December 1942 that in the “soul” of Polish society, there was no “elemental protest” against the murder of the Jews. Instead, Poles felt “a subconscious satisfaction that there will be no Jews in the Polish organism.” This was a confirmation of Karski’s worst fears. By 1943, writes the historian Andrzej Zbikowski, Poles took for granted that the Jews would disappear, and a kind of solidarity spread through the Polish underground, from the (otherwise nonracist) socialists to the deeply anti-Semitic nationalists. The war would lead to the defeat of two enemies: the Germans, but also the Jews.

From a European perspective, Poland seems to be advancing toward a “normal” open society that is working its way through a difficult past. In France, decades elapsed before the public and the French state recognized the extent of native collaboration with the Nazis. What is different in Poland is the severity of the clash between the old and new narratives. The Polish underground was more massive, Polish collaboration far smaller than its French counterpart, and Polish suffering on a scale unknown in Western Europe—yet the crimes against Jews on Polish territory, and the virulence of native anti-Semitism, were also far greater. And even more is at stake here: the myth (not to say fiction) of martyrdom became a pillar of identity in Poland, a country made to live not only under the yoke of a system imposed by the Soviets but also in great poverty, forgotten by Europe and seemingly irrelevant. If Poland did not have a present, at least it had a past.

In a May 31 letter to his Polish counterpart, President Obama apologized for the words “Polish death camps.” “The killing centers at Auschwitz-Birkenau, Belzec, Treblinka, and elsewhere in occupied Poland were built and operated by the Nazi regime,” he wrote. “In contrast many Poles risked their lives—and gave their lives—to save Jews from the Holocaust.” Yet if one reads the newly translated memoirs of Jewish survivors, and the neglected court testimonies backing up the long-suppressed popular memories of looting and murder, one can say that during World War II, Poland itself became a death camp for Jews. If it worked effectively, that was because Poles helped keep it running. Exactly how many took part in the manhunts and denunciations isn’t known, but their numbers were significant enough to produce the result that the country’s nationalists wanted, satisfying widespread hopes that Poland would become “Polish.” To say so is not to hurl slander at the Poles from afar, but to reprise a story that ever more Poles are telling about themselves, in the name of a Poland that is at the same time very old and very new.

Upper Middle Brow: The culture of the creative class

By William Deresiewicz
theamericanscholar

“Masscult and Midcult,” Dwight Macdonald’s famous essay in cultural taxonomy, distinguished three levels in modern culture: High Culture, represented most recently by the modernist avant-garde but already moribund in Macdonald’s day; Mass Culture (“or Masscult, since it really isn’t culture at all”), also known as pop culture or kitsch (or, more recently, entertainment); and the insidious new form Macdonald labeled Midcult. Midcult is Masscult masquerading as art: slick and predictable but varnished with ersatz seriousness. For Macdonald, Midcult was Our Town, The Old Man and the Sea, South Pacific, Life magazine, the Book-of-the-Month Club: all of them marked by a high-minded sentimentality that congratulated the audience for its fine feelings.

“Masscult and Midcult” was published in 1960. In his introduction to a recent collection of Macdonald’s essays, Louis Menand wrote that the culture that was about to emerge in the ensuing decade, a hybrid of pop demotics and high-art sophistication—Dylan, the Beatles, Bonnie and Clyde, Andy Warhol, Portnoy’s Complaint—rendered Macdonald’s categories obsolete. Perhaps, but Masscult and Midcult are certainly still with us, even if there are other forms of culture, too. Masscult today is Justin Bieber, the Kardashians, Fifty Shades of Grey, George Lucas, and a million other things. Midcult, still peddling uplift in the guise of big ideas, is Tree of Life, Steven Spielberg, Jonathan Safran Foer, Middlesex, Freedom—the things that win the Oscars and the Pulitzer Prizes, just like in Macdonald’s day.

But now I wonder if there’s also something new. Not middlebrow, not highbrow (we still don’t have an avant-garde to speak of), but halfway in between. Call it upper middle brow. The new form is infinitely subtler than Midcult. It is post- rather than pre-ironic, its sentimentality hidden by a veil of cool. It is edgy, clever, knowing, stylish, and formally inventive. It is Jonathan Lethem, Wes Anderson, Lost in Translation, Girls, Stewart/Colbert, The New Yorker, This American Life and the whole empire of quirk, and the films that should have won the Oscars (the films you’re not sure whether to call films or movies).

The upper middle brow possesses excellence, intelligence, and integrity. It is genuinely good work (as well as being most of what I read or look at myself). The problem is it always lets us off the hook. Like Midcult, it is ultimately designed to flatter its audience, approving our feelings and reinforcing our prejudices. It stays within the bounds of what we already believe, affirms the enlightened opinions we absorb every day in the quality media, the educated bromides we trade on Facebook. It doesn’t tell us anything we don’t already know, doesn’t seek to disturb—the definition of a true avant-garde—our fundamental view of ourselves, or society, or the world. (Think, by contrast, of some truly disruptive works: The Wire, Blood Meridian, almost anything by J. M. Coetzee.)

There is a sociology to all of this. As Clement Greenberg pointed out in “Avant-Garde and Kitsch” (1939), the predecessor to Macdonald’s essay, high culture flourished under the aristocracy. Mass culture came in with mass literacy, while Midcult is a product of the postwar college boom, a way of catering to the cultural aspirations of the exploding middle class. Now, since the ’70s, we’ve gone a step further, into an era of mass elite and postgraduate education. This is the root of the so-called creative class, the Bobos, the liberal elite as it exists today. The upper middle brow is the cultural expression of this demographic. Its purpose is to make consciousness safe for the upper middle class. The salient characteristic of that class, as a moral entity, is a kind of Victorian engorgement with its own virtue. Its need is for an art that will disturb its self-delight.

Solitude is enlightening, but if it does not lead us back to society, it can become a spiritual dead end

By John Burnside
aeonmagazine

‘I have a great deal of company in my house; especially in the morning, when nobody calls.’ Henry David Thoreau’s remark about his experience of solitude expresses many of the common ideas we have about the work — and the apparent privileges — of being alone. As he put it so vividly in Walden (1854), his classic account of the time he spent alone in the Massachusetts woods, he went there to ‘live deep and suck out all the marrow of life’. Similarly, when I retreat into solitude, I hope to reconnect with a wider, more-than-human world and by so doing become more fully alive, recovering what the Gospel of Thomas called ‘he who was, before he came into being’.

It has always been a key step on the ‘way’ or ‘path’ in Taoist philosophy (‘way’ being the literal translation of Tao) to go into the wilderness and lay oneself bare to whatever one finds there, whether that be the agonies of St Anthony, or the detachment of the Taoist masters. Alone in the wild, we shed the conventions that keep society ticking over — freedom from the clock, in particular, is a hugely important factor. We are opened up to other, less conventional, customs: in the wild, animals may talk to us, birds will sometimes guide us to water or light, the wind may become a second skin. In the wild, we may even find our true bodies, creaturely and vivid and indivisible from the rest of creation — but this comes only when we break free, not just from the constraints of clock and calendar and social convention, but also from the sometimes-clandestine hopes, expectations and fears with which we arrived.

For many of us, solitude is tempting because it is ‘the place of purification’, as the Israeli philosopher Martin Buber called it. Our aspiration for travelling to that place might be the simple pleasure of being away, unburdened by the pettiness and corruption of the day-to-day round. For me, being alone is about staying sane in a noisy and cluttered world – I have what the Canadian pianist Glenn Gould called a ‘high solitude quotient’ — but it is also a way of opening out a creative space, to give myself a chance to be quiet enough to see or hear what happens next.

There are those who are inclined to be purely temporary dwellers in the wilderness, who don’t stay long. As soon as they are renewed by a spell of lonely contemplation, they are eager to return to the everyday fray. Meanwhile, the committed wilderness dwellers are after something more. Yet, even if contemplative solitude gives them a glimpse of the sublime (or, if they are so disposed, the divine), questions arise immediately afterwards. What now? What is the purpose of this solitude? Whom does it serve?

To take oneself out into the wilderness as part of a spiritual quest is one thing, but to remain there in a kind of barren ecstasy is another. The Anglo-American mystic Thomas Merton argues that ‘there is no greater disaster in the spiritual life than to be immersed in unreality, for life is maintained and nourished in us by our vital relation with realities outside and above us. When our life feeds on unreality, it must starve.’ If practised as part of a living spiritual path, he says, and not simply as an escape from corruption or as an expression of misanthropy, ‘your solitude will bear immense fruit in the souls of men you will never see on earth’. It is a point Ralph Waldo Emerson, Thoreau’s friend and teacher, also makes. Solitude is essential to the spiritual path, he argues, but ‘we require such solitude as shall hold us to its revelations when we are in the streets and in palaces … it is not the circumstances of seeing more or fewer people but the readiness of sympathy that imports’.

Thoreau, however, felt keenly the corruption of a politically compromised, profit-oriented, slave-keeping society. His posthumously published work Cape Cod (1865) is, at least in part, an expression of dismay, even grief, in which he revealed his desire to turn his back on American society. Yet for much of his life, he kept Emerson’s principle close, as he remembers in Walden:

There too, as everywhere, I sometimes expected the Visitor who never comes. The Vishnu Purana says, ‘The house-holder is to remain at eventide in his courtyard as long as it takes to milk a cow, or longer if he pleases, to await the arrival of a guest.’ I often performed this duty of hospitality, waited long enough to milk a whole herd of cows, but did not see the man approaching from the town.

Perhaps the ‘Visitor who never comes’ is the man approaching from town – or perhaps it is some other, more mysterious – and perhaps less benevolent – arrival. As Merton cautioned, the wilderness is a place of becoming lost, as much as found. ‘First, the desert is the country of madness. Second, it is the refuge of the devil, thrown out … to “wander in dry places”. Thirst drives men mad, and the devil himself is mad with a kind of thirst for his own lost excellence — lost because he has immured himself in it and closed out everything else.’

Karl Marx expresses this idea in another way. In his A Contribution to the Critique of Hegel’s Philosophy of Right (1844) he says, ‘what difference is there between the history of our freedom and the history of the boar’s freedom if it can be found only in the forests? … It is common knowledge that the forest echoes back what you shout into it.’ Marx saw religion — and by implication, the spiritual life in general — as ‘the opium of the people,’ but the important point is the need to be careful of the dangers of forest thinking. As in every fairy tale and medieval romance, the wilderness is peopled with dragons, but only some of them are native to the place. The rest are introduced by the solitary pilgrim himself, whose quest had seemed so pure and well-intentioned when he set out.

If solitude does not lead us back to society, it can become a spiritual dead end, an act of self-indulgence or escapism, as Merton, Emerson, Thoreau, and the Taoist masters all knew. We might admire the freedom of the wild boar, we might even envy it, but as long as others are enslaved, or hungry, or held captive by social conventions, it is our duty to return and do what we can for their liberation. For the old cliché is true: no matter what I do, I cannot be free while others are enslaved, I cannot be truly happy while others suffer. And, no matter how sublime or close to the divine my solitary hut in the wilderness might be, it is a sterile paradise of emptiness and rage unless I am prepared to return and participate actively in the social world. Thoreau, that icon of solitary contemplation, did eventually return to support the cause of abolition. In so doing, he laid down the principles of civil disobedience that would later inspire Gandhi, Martin Luther King and the freedom fighters of anti-imperialist movements throughout the world.

‘No man is an island, entire of itself,’ wrote John Donne, in a too-often quoted line, but the full impact comes in the continuation of his meditation, where he writes:

every man is a piece of the continent, a part of the main. If a clod be washed away by the sea, Europe is the less, as well as if a promontory were, as well as if a manor of thy friend’s or of thine own were: any man’s death diminishes me, because I am involved in mankind, and therefore never send to know for whom the bell tolls; it tolls for thee.

It is one of the great paradoxes of solitude, that it offers us not an escape, not a paradise, not a dwelling place where we can haughtily maintain our integrity by ignoring a vicious and corrupt social world, but a way back to that world, and a new motive for being there. Moreover, it can enliven a new sense of what companionship means — and, with it, a courtesy and hospitality that go beyond anything good manners might decree. Because, no matter who I am, and no matter what I might or might not have achieved, my very life depends on being prepared, always, for the one visitor who never comes, but might arrive at any moment, from the woods or from the town.

Proust wasn’t a neuroscientist. Neither was Jonah Lehrer.

By Boris Kachka
nymag

“We are all bad apples,” wrote Jonah Lehrer, in probably the last back-cover endorsement of his career. “Dishonesty is everywhere … It’s an uncomfortable message, but the implications are huge.”

Lehrer’s blurb was for behavioral economist Dan Ariely’s The (Honest) Truth About Dishonesty: How We Lie to Everyone—Especially Ourselves. Among Ariely’s bite-size lessons: We all cheat by a “fudge factor” of roughly 15 percent, regardless of how likely we are to get caught; a few of us advance gradually to bigger and bigger fudges, often driven by social pressures; and it’s only when our backs are up against the wall that we resort to brazen lies.

Lehrer, 31, had already established the kind of reputation that made his backing invaluable to a popular science writer. Thanks to three books, countless articles and blog posts, and many turns on the lecture circuit, Lehrer was perhaps the leading explainer of neuroscience this side of a Ph.D. He was kind enough to interview Ariely this past June for the Frontal Cortex, a blog Lehrer had started in 2006 and carried with him from one high-profile appointment to the next. The New Yorker had begun hosting it that month, after Lehrer was hired as a staff writer—another major career milestone. But newyorker.com didn’t run the Ariely story, because by the time he wrote it, Lehrer had already been banned from his own blog. Two weeks earlier, readers had discovered that he was rampantly “self-plagiarizing” his own blog posts among different media outlets. Lehrer held onto his three-day-old print contract, but the blog was on ice.

Then it got so much worse. Four excruciating months later, Jonah Lehrer is known as a fabricator, a plagiarist, a reckless recycler. He’s cut-and-pasted not just his own stories but at least one from another journalist; he’s invented or conflated quotes; and he’s reproduced big errors even after sources pointed them out. His publisher, Houghton Mifflin Harcourt, will soon conclude a fact-check of his three books, the last of which, Imagine, was recalled from bookstores—a great expense for a company that, like all publishing houses, can’t afford to fact-check most books in the first place. In the meantime, he’s been completely ostracized. It’s unclear if he’ll ever write for a living again.

If the public battering seems excessive now, four months in, that should come as no surprise. That’s how modern scandals go—burning bright, then burning out, leaving a vacuum that fills with sympathy. It’s especially true in cases like Lehrer’s, where the initial fury is narrowly professional, fueled by Schadenfreude and inside-baseball ethical disputes. It was fellow journalists who felled Lehrer, after all, not the sources he betrayed. But the funny thing is that while the sins they accused him of were relatively trivial, more interesting to his colleagues than his readers, Lehrer’s serious distortions—of science and art and basic human motivations—went largely unnoticed. In fact, by the time he was caught breaking the rules of journalism, Lehrer was barely beholden to the profession at all. He was scrambling up the slippery slope to the TED-talk elite: authors and scientists for whom the book or the experiment is just part of a multimedia branding strategy. He was on a conveyor belt of blog posts, features, lectures, and inspirational books, serving an entrepreneurial public hungry for futurist fables, easy fixes, and scientific marvels in a world that often feels tangled, stagnant, and frustratingly familiar. He was less interested in wisdom than in seeming convincingly wise.

The remarkable thing about that transformation is that it wasn’t all that unusual. In his short, spectacular career, Lehrer had two advocate-editors who quickly became his exilers. The first, Wired editor Chris Anderson, has himself been caught plagiarizing twice, the second time in an uncorrected proof. The often-absentee editor of a futurist magazine that may be the house journal of the lecture circuit, Anderson makes his living precisely as Lehrer did—snipping and tailoring anecdotal factoids into ready-to-wear tech-friendly conclusions.

The second, David Remnick, has invested resources in The New Yorker’s own highbrow talk series, The New Yorker Festival, in which staff writers function as boldfaced brand experts in everything from economics to medicine to creativity. The tone of those talks mixes the smooth technospeak of the Aspen Ideas Festival—co-hosted by rival magazine The Atlantic—with the campfire spirit of first-person storyteller confab the Moth. It was at the Moth that The New Yorker’s biggest brand, The Tipping Point author Malcolm Gladwell, got into hot water in 2005 by telling a story about games he played in the pages of the Washington Post that turned out to be almost entirely untrue. In print, Gladwell is often knocked for reducing social science to easy epiphanies and is occasionally called out for ignoring evidence that contradicts his cozy theories—most recently over a piece this past September on Jerry Sandusky. Yet he also serves as a pioneer in the industry of big-idea books—like those by his New Yorker colleague James Surowiecki, the “Freakonomics” guys, Dan Ariely, and others. Theirs is a mixed legacy, bringing new esoteric research to a lay audience but sacrificing a great deal of thorny complexity in the process.

In the world of magazines, of course, none of us is immune to slickness or oversimplification—New York included. But two things make Lehrer’s glibness especially problematic, and especially representative. First, conferences and corporate speaking gigs have helped replace the journalist-as-translator with the journalist-as-sage; in a magazine profile, the scientist stands out, but in a TED talk, the speaker does. And second, the scientific fields that are the most exciting to today’s writers—neuroscience, evolutionary biology, behavioral economics—are fashionable despite, or perhaps because of, their newness, which makes breakthrough findings both thrilling and unreliable. In these fields, in which shiny new insights so rarely pan out, every popularizer must be, almost by definition, a huckster. When science doesn’t give us the answers we want, we find someone who will.

“I’ve never really gotten over the sense of fraudulence that comes with being onstage,” Lehrer once said. Young and striving and insecure, he was both a product of this glib new world and a perpetrator of its swindles. He was also its first real victim.

Lehrer, who grew up in L.A. and attended prestigious North Hollywood High School, was always precocious. At 15, he won $1,000 in a contest run by Nasdaq with an essay calling the stock market “a crucial bond between plebeians and patricians.” Two years later, he and some other students made the finals of the countrywide Fed Challenge for a cogent argument against raising the national interest rate. “We felt we shouldn’t act on a guess or a premonition,” he told a newspaper. “We should act on the basis of statistics.”

At Columbia University, Lehrer majored in neuroscience, helped edit the literary Columbia Review, and spent a few years working in the lab of Eric Kandel. (Journalists and scientists often mistook this undergraduate experience for lab work that left Lehrer just shy of a Ph.D.) The Nobel Prize–winning neuroscientist, who was unlocking the secrets of our working memory, remembers his former lab assistant fondly. “He was the most gracious, decent, warm, nice kid to interact with,” says Kandel. “Cultured, fun to have a conversation with—and knew a great deal about food. I was surprised he didn’t go into science, because he had a real curiosity about it.”

Lehrer won a Rhodes Scholarship, then used some of his research at Oxford to write his first book, Proust Was a Neuroscientist. Published in late 2007, it was a grab bag of fun facts in the service of an earnest point: that great Modern artists anticipated the discoveries of brain science. It had a senior-thesis feel, down to an ambitious coda. Critic C. P. Snow had called, in 1959, for a “third culture” to bridge science and art—a prophecy that had been fulfilled, but to the advantage, Lehrer thought, of science. In Proust, Lehrer proposed a “fourth culture,” in which art would be a stronger “counterbalance to the glories and excesses of scientific reductionism.”

A year earlier, Lehrer had begun blogging on Frontal Cortex. After hundreds of posts, he began to find traction in magazines. Mark Horowitz, then an editor at Wired, brought him in to write a feature on a project to map all the genes in the brain. “It was a very complex piece,” says Horowitz, “with lots of reporting, lots of science. I thought that was a breakthrough for him.” It was, he adds, thoroughly fact-checked; none of Lehrer’s magazine stories have been found to have serious errors.

“If you asked him, ‘How many ideas do you have for an article?’ he had ten ideas, more than anyone else,” Horowitz says. “That’s why he was able to churn out so many blog posts.” They were long posts, too, the kind that quickly became the basis for print stories. In 2010, Frontal Cortex moved over to wired.com. “Chris [Anderson] loved Jonah Lehrer—loved him,” Horowitz says. “Any story idea he had, it was, ‘See if Jonah will do it.’ He was good, he was young, and he was getting better with every story.”

Lehrer’s tortuous fall began on what should have been a day of celebration. Monday, June 18, was his official start date as a New Yorker staff writer. That evening, an anonymous West Coast journalist wrote to media watchdog Jim Romenesko, noting that one of Lehrer’s five New Yorker blog posts—“Why Smart People Are Stupid”—had whole paragraphs copied nearly verbatim from Lehrer’s October 2011 column for The Wall Street Journal.

Within 24 hours, journalists found several more recycled posts, setting off a feeding frenzy one blogger called the “Google Game”—find a distinctive passage, Google it: pay dirt. On Wednesday, the irascible arts blogger Ed Champion unleashed an 8,000-word catalogue of previously published story material Lehrer had worked into Imagine. (Never mind that drawing on earlier stories for book projects is standard practice.) It was called “The Starr Report of the Lehrer Affair.” That day Lehrer told the New York Times that repurposing his own material “was a stupid thing to do and incredibly lazy and absolutely wrong.”

At The New Yorker, David Remnick initially saw the “self-plagiarism” pile-on as overkill. “There are all kinds of crimes and misdemeanors in this business,” The New Yorker editor said that Thursday, explaining his decision to retain Lehrer. “If he were making things up or appropriating other people’s work, that’s one level of crime.” A source says Remnick did consider firing Lehrer outright, but decided against it.

Ironically, it was another journalist’s sympathy for Lehrer that led to his complete unraveling. “The Schadenfreude with Lehrer was pretty aggressive,” says Michael Moynihan, a freelance writer who was then guest-blogging for the Washington Post. “I was going to write a bit about the mania for destroying journalists because they’re popular and have more money than you do.” Having never read Lehrer’s books, he dug into Imagine (which purports to explain the brain science of “how creativity works”), not even knowing that its first chapter focused on one of his favorite musicians, Bob Dylan. He found some suspiciously unfamiliar quotes. “Every Dylan quote, every citation, is online,” Moynihan says. A new quote is “like finding another version of the Bible.”

He e-mailed Lehrer, who claimed to be on vacation until just after Moynihan’s Post gig was up. But off the top of his head, Lehrer offered one source for a quote—a book on Marianne Faithfull. It was wildly out of context, but no matter: Where did the other six come from? When Moynihan reached him the following week, Lehrer expressed surprise that he still planned to run the piece. That was when, as Moynihan puts it, “the calls started.”

On the phone, Lehrer seemed charming and cooperative. He said he’d pulled some quotes from a Dylan radio program as well as unaired footage for the documentary No Direction Home. Dylan’s manager, Jeff Rosen, had given him the latter. Moynihan pressed him for more details over the next several days, but Lehrer stalled.

Finally, Moynihan was able to reach Rosen, who said he’d never heard from Lehrer. When Moynihan spoke to the author, while walking down Flatbush Avenue near his Brooklyn home, the conversation grew so heated that a passing acquaintance thought it was a marital spat. Lehrer finally came clean about making up his sources. He was impressed that Moynihan had figured out how to reach Rosen. “It shows,” he told Moynihan, “you’re a better journalist than I am.”

An editor Moynihan knew at the online magazine Tablet had happily accepted Moynihan’s exposé. The Sunday before it was published, July 29, Moynihan had to ignore Lehrer’s late-night calls just to write the piece.

That same evening of July 29, David Remnick was at his first Yankees game of the season. After getting an e-mail from Tablet’s editor, Alana Newhouse, he spent most of the game in the aisle, calling and e-mailing with Newhouse, his editors, and Lehrer. It was all, as Remnick said the next day, “a terrifically sad situation.”

The next morning, a desperate Lehrer finally managed to reach Moynihan. Didn’t he realize, Lehrer pleaded, that if Moynihan went forward, he would never write again—would end up nothing more than a schoolteacher? The story was published soon after. That afternoon, Lehrer announced through his publisher that he’d resigned from The New Yorker and would do everything he could to help correct the record. “The lies,” he said, “are over now.”

The ensuing flurry of tweets and columns was split between the Google Game fact-checkers and opiners like David Carr, who felt that Lehrer’s missteps were the result of “the Web’s ferocious appetite for content” and the collapse of hard news. All of them were grappling to name Lehrer’s pathology. What none of them really asked, and what Houghton Mifflin’s fact-check won’t answer, is what Imagine would look like if it really were scrubbed of every slippery shortcut and distortion. In truth, it might not exist at all. The fabricated quotes are not just slight aberrations; they’re more like the tells of a poker player who’s gotten away with bluffing for far too long.

In case after case, bad facts are made to serve forced conclusions. Take that Dylan chapter. First, of course, there are the quotes debunked by Moynihan. Then there are the obvious factual errors: Dylan did not immediately repair from his 1965 London tour to a cabin in Woodstock to write “Like a Rolling Stone” (he took a trip with his wife first and spent only a couple of days in that cabin), and did not coin the word juiced, as Lehrer claims; it had meant “drunk” for at least a decade. (These errors were discovered by Isaac Chotiner, weeks before Moynihan’s exposé, in The New Republic: “almost everything,” he wrote, “from the minor details to the larger argument—is inaccurate, misleading, or simplistic.”) Lehrer’s analysis of Dylan’s “Like a Rolling Stone” breakthrough is also wrong. It was hardly his first foray into elliptical songwriting, and it was hardly the first piece to defy the “two basic ways to write a song”—a dichotomy between doleful bluesy literalism and “Sugar pie, honeybunch” that no serious student of American pop music could possibly swallow.

Finally and fatally, what ties the narrative together is not some real insight into the nature of Dylan’s art, but a self-help lesson: Take a break to recharge. To anyone versed in Dylan, this story was almost unrecognizable. Lehrer’s intellectual chutzpah was startling: His conclusions didn’t shed new light on the facts; they distorted or invented facts, with the sole purpose of coating an unrelated and essentially useless lesson with the thinnest veneer of plausibility.

It’s the same way with the science that “proves” the lesson. Lehrer quotes one neuroscientist, Mark Beeman, as saying that “an insight is like finding a needle in a haystack”—presumably an insight like Dylan’s, though Beeman’s study hinges on puzzles. Beeman tells me, “That doesn’t sound like me,” because it’s absolutely the wrong analogy for how the brain works—“as if a thought is embedded in one connection.” In the next chapter, Lehrer links his tale of Dylan’s refreshed creativity to Marcus Raichle’s discoveries on productive daydreaming. But Raichle tells me those discoveries aren’t about daydreaming. Then why, I ask, would Lehrer draw that conclusion? “It sounds like he wanted to tell a story.”

Consider another tall tale, this one from Lehrer’s previous book, How We Decide. Discussing what happens when we choke under pressure, Lehrer invokes the famous case of Jean Van de Velde, a golfer who blew a three-stroke lead on the eighteenth hole of the final round of the 1999 British Open. In Lehrer’s telling, the pressure caused Van de Velde to choke, focusing on mechanics and “swinging with the cautious deliberation of a beginner with a big handicap.”

Lehrer tees this up as a transition to a psychological study on overthinking. It fits perfectly into what one critic called “the story-study-lesson cycle” of this kind of book. And just like Dylan’s “insight,” it’s largely made up. Here too he flubs an important fact: Van de Velde didn’t lose outright: He tied and lost the subsequent playoff. But then there is the larger deception. Most golf commentators thought at the time that he simply chose risky clubs—that he wasn’t handicapped by anxiety, but undone by cockiness. Van de Velde agreed; he played too aggressively. A month after the disaster, he said, “I could not live with myself knowing that I tried to play for safety and blew it.” Lehrer just rewrote the history to reach a conclusion flatly contradicted by the story of how Van de Velde actually decided.

Unlike the books, Lehrer’s New Yorker pieces were thoroughly fact-checked. But even there, his conclusions are facile. One popular story, published in 2010, is especially symptomatic of how he misrepresents science—and harms it in the process. Headlined “The Truth Wears Off,” it sets out to describe a curious phenomenon in scientific research: the alarmingly high number of study results that couldn’t be repeated in subsequent experiments. Researchers worry a lot about this tendency, sometimes called the “decline effect.” But they’ve settled on some hard, logical truths: Studies are incredibly difficult to design well; scientists are biased toward positive results; and the more surprising the finding, the more likely it is to be wrong. Good theories require good science, and science that can’t be replicated isn’t any good.

That wasn’t Lehrer’s approach. His story begins, instead, with the question, “Is there something wrong with the scientific method?” To answer that question definitively would require a very rigorous review of research practice—one that demonstrated persuasively that even the most airtight studies produced findings that couldn’t be replicated. Lehrer’s conclusion is considerably more mystical, offering bromides where analysis should be: “Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.” It sounds an awful lot like the Zen-lite conclusion of Imagine: “Every creative story is different. And every creative story is the same. There was nothing. Now there is something. It’s almost like magic.”

By August 14, the storm seemed to be passing. That morning, Lehrer’s aunt had breakfast in California with an old friend. While fretting over her nephew, she mentioned innocently that Lehrer still had an outstanding contract with Wired. Her friend happened to be the mother of Jonah Peretti, a founder of the website BuzzFeed, who gladly published the “update.” Wired confirmed that Lehrer was still under contract, but said it wouldn’t publish anything more until a full vetting of his blog posts, already under way, was completed.

In fact, the real online vetting hadn’t even begun, and probably wouldn’t have happened if not for Lehrer’s chatty aunt. On the 16th, wired.com editor-in-chief Evan Hansen called Charles Seife, a science writer and journalism professor at NYU, to ask if he could investigate Lehrer’s hundreds of Frontal Cortex posts. It was too much work, so they settled on a mere eighteen, some of them known to have problems. Seife found a range of issues, from recycling—in most of the stories—to lazy copying from press releases, a couple of slightly fudged quotes, and three cases of outright plagiarism. (In fairness, there was only one truly egregious case of stealing.)

Accounts vary over whether Seife was expected to publish a Wired story about Lehrer, and whether his 90-minute conversation with Lehrer was on the record. Lehrer told a friend that Chris Anderson assured him there wouldn’t be a story—but then Hansen called him to ask if his remarks were on the record. Lehrer said they weren’t. Wired decided against running a full story, but allowed Seife to take it elsewhere.

Slate’s story went up on August 31, just after Wired began posting its first corrections to Lehrer’s blog posts. All Seife could say about his phone conversation with Lehrer was that it “made me suspect that Lehrer’s journalistic moral compass is badly broken.” An hour later, wired.com issued a full statement saying they had “no choice but to sever our relationship” with Lehrer.

If Anderson did indeed quietly defend Lehrer against Seife, it would fit the pattern: For all the hand-wringing about the decline of print-media standards, Lehrer was not a new-media wunderkind but an old-media darling. Just as newyorker.com had banned Lehrer before David Remnick canceled his print contract, it was wired.com that led the charge against Lehrer, and the print magazine that only fired him when it “had no choice”—after Seife published his exposé at another web magazine.

Lehrer’s biggest defenders today tend to be veterans of traditional journalism. NPR’s longtime correspondent Robert Krulwich has known Lehrer for almost a decade and used him many times on the science program “Radiolab.” “I find myself uncomfortable with how he’s been judged,” Krulwich wrote in an e-mail, weeks after “Radiolab” ran six corrections online. “If in a next round, he produces work that’s better, more careful, I hope his editors and his readers will welcome him back.” Malcolm Gladwell wrote me, “[Lehrer] didn’t twist anyone’s meaning or libel anyone or manufacture some malicious fiction … Surely only the most hardhearted person wouldn’t want to give him a chance to make things right.”

If anyone could have gotten a second chance, it was Lehrer. He’d made himself the perfect acolyte. Lehrer seemed to relish exciting ideas more than workaday craft, but editors are ravenous for ideas, and Lehrer had plenty. He fed pitches to deskbound editors and counted on them and their staffs to clean up the stories for publication. He was a fluid writer with an instinctive sense of narrative structure. In fact, he was much better at writing magazine stories than he was at blogging. His online posts were not only repetitive but too long and full of facts—true or not.

Seife spent a chunk of his time tracking down a change made to an E. O. Wilson quote in one of Lehrer’s New Yorker stories, only to find that a fact-checker had altered it at Wilson’s insistence. The piece’s editor told Seife that Lehrer was “a model of probity.” Meanwhile, wired.com—the very site that hired Seife—couldn’t vouch for any of the work Lehrer had published there. Lehrer told a friend that the first time he heard from Hansen in his two years at wired.com was during the vetting. The lack of oversight became distressingly clear when Seife, on the phone with Lehrer, demanded to know why he hadn’t asked his blog editor to fix his errors. Lehrer shot back in frustration that there was no editor.

Lehrer spent much of August writing about the affair, trying to figure out where it had all gone wrong. He came to the conclusion that he’d stretched himself too thin. His excuses fall along those lines: He told Seife that his plagiarized blog post was a rough draft he’d posted by mistake. And his latest explanation for those fabricated Dylan quotes is that he had written them into his book proposal and forgotten to fix them later. Even by his own account, then, the writing wasn’t his top priority.

The lectures, though, were increasingly important. Lehrer gave between 30 and 40 talks in 2010, all while meeting constant deadlines, starting a family, and buying a home in the Hollywood Hills. It was more than just a time suck; it was a new way of orienting his work. Lehrer was the first of the Millennials to follow his elders into the dubious promised land of the convention hall, where the book, blog, TED talk, and article are merely delivery systems for a core commodity, the Insight.

The Insight is less of an idea than a conceit, a bit of alchemy that transforms minor studies into news, data into magic. Once the Insight is in place—Blink, Nudge, Free, The World Is Flat—the data becomes scaffolding. It can go in the book, along with any caveats, but it’s secondary. The purpose is not to substantiate but to enchant.

The tradition of the author’s lecture tour goes back at least as far as Charles Dickens. But its latest incarnation began with Gladwell in 2000. The Tipping Point, his breakthrough best seller, didn’t sell itself. His publisher, Little, Brown, promoted the book by testing out its theory—that small ideas become blockbusters through social networks. Gladwell was sent across the country not just to promote his book but to lecture to booksellers about the secrets of viral marketing. Soon The New Yorker was dispatching him to speak before advertisers, charming them and implicitly promoting the magazine’s brand along with his own. Increasingly, he became a commodity in his own right, not just touring a book (which authors do for free) but giving “expert” presentations to professional groups who pay very well—usually five figures per talk.

Gladwell was quickly picked up by Bill Leigh, whose Leigh Bureau handles many of the journalist-lecturers of the aughts wave. Asked what bookers require from his journalist clients, Bill Leigh simply says, “The takeaway. What they’re getting is that everyone hears the same thing in the same way.” The writers, in turn, get a paying focus group for their book-in-progress. Leigh remembers talking to his client, the writer Steven Johnson, about how to package his next project. “He wanted to take his book sales to the next level,” says Leigh. “Out of those conversations came his decision to slant his material with a particular innovation feel to it.” That book was titled Where Good Ideas Come From: The Natural History of Innovation. His new one is called Future Perfect.

One of the sharpest critiques of this new guard of nonspecialist Insight peddlers came from a surprising source, a veteran of the lecture circuit who decried “our thirst for nonthreatening answers.” “I’m essentially a technocrat, a knowledge worker,” says Eric Garland, who was a futurist long before that became a trendy descriptor. A past consultant to some of the Insight men’s favorite companies—3M, GM, AT&T—Garland is wistful for a time when speakers were genuine experts in sales, leadership, and cell phones. “Has Jonah Lehrer ever presented anything at a neuroscience conference?” he asks a touch dismissively.

Lehrer has not. But Gladwell actually did give a talk, in 2004, at an academic conference devoted to decision-making. “Some people were outraged by the simplification,” remembers one attendee, who likes Gladwell’s work. Someone stood up and asked if he should be more careful about citing sources.

In reply, Gladwell offered another anecdote. A while back, he’d found out that the playwright Bryony Lavery’s award-winning play, Frozen, cribbed quotes from one of his stories. Though he might have sued Lavery for plagiarism, Gladwell concluded that, no, the definition of plagiarism was far too broad. The important thing is not to pay homage to the source material but to make it new enough to warrant the theft. Lavery’s appropriation wasn’t plagiarism but a tribute. “I thought it was a terrible answer,” says the attendee. “If there was ever an answer that was about rationalization, this was it.”

The worst thing about Lehrer’s “decline effect” story is that the effect is real—science is indeed in trouble—and Lehrer is part of the problem. Last month, the Nobel laureate behavioral economist and psychologist Daniel Kahneman sent a mass e-mail to colleagues warning that revelations of shoddy research and even outright fraud had cast a shadow over the hot new subfield of “social priming” (which studies how perceptions are influenced by subtle cues and expectations). Others blamed the broader “summer of our discontent,” as one science writer called it, on a hunger for publicity that leads to shaved-off corners or worse.

“There’s a habit among science journalists to treat a single experiment as something that is newsworthy,” says the writer-psychologist Steven Pinker. “But a single study proves very little.” The lesson of the “decline effect,” as Pinker sees it, is not that science is fatally flawed, but that readers have been led to expect shocking discoveries from a discipline that depends on slow, stutter-step progress. Call it the “TED effect.” Science writer Carl Zimmer sees it especially in the work of Lehrer and Gladwell. “They find some research that seems to tell a compelling story and want to make that the lesson. But the fact is that science is usually a big old mess.”

Sadly, Lehrer knows exactly how big a mess it is, especially when it comes to neuroscience. One of his earliest blog posts, back in 2006, was titled “The Dirty Secrets of fMRI.” Its subject was the most appealing tool of brain science, functional magnetic-resonance imaging. Unlike its cousin the MRI, fMRI can take pictures of the brain at work, tracking oxygen flow to selected chunks while the patient performs assigned tasks. The most active sections are thus “lit up,” sometimes in dazzling colors, seeming to show clumps of neurons in mid-thought. But in that early blog post, Lehrer warned of the machine’s deceptive allure. “The important thing,” he concluded, “is to not confuse the map of a place for the place itself.”

Lehrer repeated this warning about the limitations of fMRI in later stories. And yet, both in How We Decide and Imagine, fMRI is Lehrer’s deus ex machina. No supermarket decision or sneaker logo or song lyric is conceived without “lighting up” a telltale region of the brain. Here comes the anterior cingulate cortex, and there goes the superior temporal sulcus, and now the amygdala has its say. It reads like a symphony—magical, authoritative, deeply true.

This contradiction was pointed out back in March in a critique of Imagine published at the literary site the Millions by Tim Requarth and Meehan Crist. “It’s baffling that in Imagine Lehrer makes statements so similar to ones he thoroughly discredits” elsewhere. Then they offer an analogy to explain what’s wrong with drawing vast conclusions from pretty fMRI pictures. “Brain regions, like houses, have many functions,” they write, and just because there are people at someone’s house doesn’t mean you know what they’re doing. “While you can conclude that a party means there will be people,” they write, “you cannot conclude that people means a party.”

Rebecca Goldin, a mathematician-writer who often criticizes “neurobabble,” points out that this is exactly what’s so enticing about this brand-new science: its mystery. Imagine that fMRI is a primitive telescope, and those clumps of neurons are like all the beautiful stars you can finally see up close, but “may in fact be in different galaxies.” You still can’t discern precisely how they’re interacting. Journalist David Dobbs recently asked a table full of neuroscientists: “Of what we need to know to fully understand the brain, what percentage do we know now?” They all gave figures in the single digits. Imagine makes it look like we’re halfway there.

If Lehrer was misusing science, why didn’t more scientists speak up? When I reached out to them, a couple did complain to me, but many responded with shrugs. They didn’t expect anything better. Mark Beeman, who questioned that “needle in the haystack” quote, was fairly typical: Lehrer’s simplifications were “nothing that hasn’t happened to me in many other newspaper stories.”

Even scientists who’ve learned to write for a broad audience can be fatalistic about the endeavor. Kahneman had a surprise best seller in 2011, Thinking, Fast and Slow. His writing is dense and subtle, as complicated as pop science gets. But as he once told Dan Ariely, his former acolyte, “There’s no way to write a science book well. If you write it for a general audience and you are successful, your academic colleagues will hate you, and if you write it for academics, nobody would want to read it.”

For a long time, Lehrer avoided the dilemma by assuming it didn’t apply to him, writing not for the scientists (who shrugged off his oversimplifications) or for the editors (who fixed his most obvious errors) but for a large and hungry audience of readers. We only wanted one thing from Jonah Lehrer: a story. He told it so well that we forgave him almost everything.