Nick Werle's homepage



Deterring financial crime by Too Big To Jail Banks – MSc Dissertation

Sep 14th, 2014

I just submitted the dissertation component of UCL’s MSc Economic Policy course. I researched how the existence of Too Big To Jail banks changes the microeconomics of financial crime prosecution. By modeling prosecutorial discretion in charging corporations versus their managers, the dissertation showed that unless managers face a credible threat of incarceration for criminal behavior, authorities will be unable to optimally deter financial criminality by and within financial institutions.

I am currently working on adapting the two microeconomic models in this dissertation into a pair of economics articles. The abstract of my full dissertation is below. Please email me at nickwerle[at]gmail if you would like a copy of my research.

Deterring Financial Crime in the Age of Too Big To Jail Banks

By Nick Werle

In the United States’ legal system, prosecutors wield immense discretion in deciding how to resolve criminal cases. In cases of corporate crime, their ability to charge either a firm or its employees broadens this discretion substantially. Earlier papers in the economics of crime have studied how using corporate and individual liability affects deterrence, arguing that authorities can force firms to fully internalize the costs of their criminality with fines. However, the existence of large firms that are “Too Big To Fail” introduces important new constraints on prosecutorial decision-making, since the government is loath to impose sanctions that might damage these systemically important firms. This study explores how the “Too Big To Jail” (TBTJ) problem complicates prosecutorial strategy by imposing an upper limit on credible threats of punishment for some firms. The TBTJ problem is particularly acute in cases of financial crime by and within large financial institutions, because if criminal sanctions against a large, interconnected bank produce a liquidity shortfall, they could trigger a crisis. This paper shows that if a bank is TBTJ, in the absence of a strong individual liability regime that includes a credible threat of incarceration, crime may be profitable for the firm, even if it knows it may face prosecution. Despite widespread financial crime in the wake of the global financial crisis, many important financial crime prosecutions in the USA have failed to charge individuals, and no bank employees have faced prison time. This paper offers a theoretical argument for instituting a stronger individual liability regime for financial crimes by employees of TBTJ banks by extending a common framework for modeling corporate crime control.
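The core mechanism can be sketched with a toy Becker-style deterrence calculation. This is a simplified illustration of my own, not the dissertation’s actual model, and every number in it is hypothetical:

```python
# Expected payoff of corporate crime when the maximum credible fine is
# capped for a Too Big To Jail firm. All parameter values are hypothetical.

def expected_crime_payoff(gain, p_conviction, fine, fine_cap=None):
    """Firm's expected profit from crime: illicit gain minus expected fine."""
    if fine_cap is not None:
        # TBTJ constraint: sanctions above the cap are not credible, because
        # collecting them could destabilize a systemically important bank.
        fine = min(fine, fine_cap)
    return gain - p_conviction * fine

gain, p = 100.0, 0.5
deterrent_fine = gain / p  # the fine that drives the expected payoff to zero

print(expected_crime_payoff(gain, p, deterrent_fine))                  # 0.0
print(expected_crime_payoff(gain, p, deterrent_fine, fine_cap=120.0))  # 40.0
```

Once the credible fine is capped below the illicit gain divided by the probability of conviction, crime carries a positive expected payoff for the firm; that is the gap a credible threat of incarceration for individual managers is meant to close.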

I could not have completed this dissertation without the support of the Marshall Aid Commemoration Commission and my supervisor, Dr. Cloda Jenkins.

Mortgage fraud continues, even if no one is watching

May 23rd, 2014

In September 2004, the FBI held a press conference to warn about the growing problem of mortgage fraud. “It has the potential to be an epidemic,” warned Chris Swecker, assistant director of the FBI’s Criminal Investigative Division. “We think we can prevent a problem that could have as much impact as the S&L crisis.” What might have seemed like an overstated publicity stunt at a time when the booming property market looked healthy turned out to be far too modest. Exactly four years later, the US mortgage market brought the global financial system to the brink of collapse. Aggressively fighting mortgage fraud would not have stopped the housing bubble from inflating and bursting, but cutting down the volume of worthless debt could have reduced the pain.

As the waves of foreclosures mounted in the wake of the financial crisis, the federal government wanted to take a hard stand against financial crime. In May 2009, as the extent of the rot in the US housing market became clear, President Obama signed the Fraud Enforcement and Recovery Act, which expanded the federal government’s ability to prosecute fraud in previously unregulated sectors of the mortgage industry. It also authorized hundreds of millions of dollars for the Department of Justice to fight complex financial fraud.

At a press conference in October 2012, the Attorney General declared the initiative a success. His statement claimed that DOJ had charged 530 people with mortgage fraud-related crimes in cases that involved more than 73,000 homeowners and total losses estimated at over $1 billion.

But an internal audit of DOJ’s mortgage fraud efforts released by the Department’s Inspector General in March 2014 showed how little was actually done to combat mortgage fraud. The audit concludes that neither Congress nor DOJ treated mortgage fraud as a high priority, despite their public statements. It even found that the numbers quoted by the Attorney General in 2012 were wildly exaggerated. DOJ’s criminal investigations only snared 107 defendants in cases related to just $95 million in homeowner losses, 91% less than the Attorney General had publicly claimed. Worse, DOJ continued to cite the inflated numbers in press releases for 10 months after learning they were false.

Congressional inaction might help explain why DOJ’s real results didn’t live up to its aspirational numbers. Financial fraud investigations are complex and difficult to prove, which makes them expensive. In 2009, Congress “authorized” $165 million per year to fight fraud, but when budget time came around, it only appropriated 17% of that funding. In 2010, various DOJ agencies received a total of $34.8 million, with the bulk going to the FBI. In 2011, the FBI’s $20.2 million was it. It is impossible to bring complex fraud cases without teams of investigators, accountants, and attorneys, but the DOJ Criminal Division in Washington only received enough money for five new hires to investigate financial fraud.

But this is a problem of priorities, not just funding. Despite the additional funding and public pronouncements, the FBI Criminal Investigative Division ranked complex financial crime as the lowest of six criminal threat categories and mortgage fraud as last among three subcategories. At the FBI field offices that auditors visited, mortgage fraud was listed as a low priority or not listed as a priority at all. These offices included New York, always a center of financial crime, and troubled housing markets such as Miami and Los Angeles. These results echo a 2005 audit, which found that reassigning hundreds of FBI agents from criminal investigation to counter-terrorism work had hurt the government’s ability to deter white collar crimes such as bank fraud and mortgage fraud.

In their defense, FBI officials argue that the end of the housing boom and the tightening of lending standards have cut down the rate of fraudulent mortgage purchases, with classic schemes involving straw buyers on the wane.

However, the threat has not disappeared; it has just morphed along with the times. More than 50,000 homes fell into foreclosure for the first time during January of this year, and total household debt is increasing again for the first time since the crisis, according to RealtyTrac and the Federal Reserve Bank of New York. Borrowers struggling with high debt are ripe targets for malicious debt consolidation, loan modification, and foreclosure rescue schemes, which promise to save homeowners from their creditors in exchange for an up-front fee. Of course, many of these companies do nothing to help.

The FBI reported that for the first time in many years, these foreclosure rescue schemes have surpassed mortgage origination fraud as the housing industry’s greatest criminal threat, but the audit found that few of these cases have been fully investigated. These scams typically represent smaller overall financial losses that fail to meet federal prosecutors’ thresholds. In response, FBI agents said they put these cases into “unaddressed work files,” while waiting for the dollar losses and fraudulent activity to mount.

While shutting down foreclosure rescue scams may not grab headlines with big numbers, these frauds prey on some of the most vulnerable Americans, pushing them deeper into debt and causing untold psychological damage as foreclosures proceed unabated. If prosecutorial thresholds are encouraging the FBI to let these fraudsters continue to steal from desperate homeowners, DOJ should abandon those thresholds.

And if the Department wants to go after some fish big enough to excite the press, perhaps it should reopen investigations into the big mortgage servicing companies. Despite the 5 biggest mortgage servicing banks signing a $25 billion national settlement and promising to fix systemic foreclosure abuses in 2012, evidence continues to emerge that they still wrongly foreclose on homeowners, refuse to allow customers to submit missing paperwork, fabricate loan documents in foreclosure proceedings, and deny loan modifications to deserving borrowers. In October, the New York State Attorney General filed a lawsuit accusing Bank of America and Wells Fargo of failing to live up to these customer-service promises. Where is the Department of Justice?

When the individually rational sums to the collectively insane

May 22nd, 2014

Originally posted on 3QuarksDaily.

 

The most striking aspect of Isaac Asimov’s Foundation is the pacing of its narrative. The story, which tracks the fall of the Galactic Empire into what threatens to be a 30,000-year dark age, never follows characters for more than a few chapters. The narrative unfolds at a historical pace, a timescale beyond the range of normal human experience. While several short sections might follow one another with only hours in between, gaps of 50 or 100 years are common. The result is a narrative in which characters are never more than bit players; the book’s real focus is on the historical forces responsible for the rise and fall of planets. The thread holding this tale together is the utopian science of psychohistory, which combines psychology, sociology and statistics to calculate the probability that society as a whole will follow some given path in the future. The novel’s action follows the responses to a psychohistorical prediction of the Empire’s fall made by Hari Seldon, the inventor of the science, who argued by means of equations that the dark ages could be reduced to only a single millennium with the right series of choices. In comparing the science of psychohistory and the actual events that accompany the Galactic Empire’s fall, Asimov’s time-dilated narrative weaves together disparate theories of history and science articulated around the problem of predicting the future, the historical primacy of crises, and the irreducible difference between studying an individual and analyzing a society as a whole. In Asimov’s imagined science, however, we can trace the real logic of macroeconomics and begin to understand why Keynes could never produce such dramatic predictions.

The goal of Asimov’s psychohistory is always the prediction of future events, but these prognostications are different from the usual fictional presentiments in that they cannot determine exactly what will happen. It is a probabilistic science. Of course, this leaves open the possibility of psychohistorians trying to guide society toward the best possible future, as long as the population at large doesn’t know the predictions’ details. But more importantly, psychohistory’s probabilistic nature limits the scale on which its predictions are useful:

“Psychohistory dealt not with man, but with man-masses. It was the science of mobs; mobs in their billions. It could forecast reactions to stimuli with something of the accuracy that a lesser science could bring to the forecast of a rebound of a billiard ball. The reaction of one man could be forecast by no known mathematics; the reaction of a billion is something else again.”

Like all statistical tools, it relies on the law of large numbers as a way to get past the intrinsic randomness one faces in anticipating human behavior. Practically, this means that psychohistory can only predict events for a population at large; it has nothing to say about any particular individuals.
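A toy illustration of that asymmetry (my example, not Asimov’s):

```python
# Law of large numbers in miniature: one outcome is unforecastable,
# but the aggregate is nearly deterministic.
import random

random.seed(2014)
decide = lambda: random.choice([0, 1])  # one person's unpredictable "reaction"

print(decide())  # 0 or 1 -- no known mathematics can say which in advance

for n in (10, 10_000, 1_000_000):
    share = sum(decide() for _ in range(n)) / n
    print(n, round(share, 4))  # the population average hugs 0.5 as n grows
```

The same randomness that defeats prediction of the single case is exactly what makes the mass predictable.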

Despite Foundation’s seemingly fantastic premise – the ability to know what will happen tens of thousands of years into the future – Asimov’s focus on this fundamental limit to psychohistory’s predictive power keeps the story firmly in the realm of science fiction. Psychohistory is, in a sense, an idealized form of macroeconomics, insofar as economists aim to predict and plan for the best possible future for society. However, the theoretical connection between these two “sciences” is more profound. John Maynard Keynes’ essential insight, which forms the epistemological core of macroeconomics, is the discontinuity between mathematical descriptions of the large and the small, the society and the individual. Indeed, all statistical sciences – genetics, epidemiology, and quantum mechanics, for example – face this same intrinsic limitation. While we have worked out many statistical laws identifying the genes responsible for congenital disease, the behaviors that raise the risk of spreading an infectious disease, and how electrons flow through superconductors, none of these sciences can say with any certainty whether a given genotype will manifest during a person’s life, whether a particular patient will contract a disease, or where an electron will be at a given time.

This shared epistemological limit is not a coincidence. Keynes’ 1936 General Theory of Employment, Interest, and Money sought to explain why the Depression-era economy seemed incapable of providing jobs to the millions of unemployed workers, who were starving and clearly willing to work for any wage. Unlike his predecessors, Keynes approached this problem by analyzing the collective behavior of the laboring masses rather than the imagined bargaining strategies of a single employer hiring a single worker. My research into his mathematical work, published more than a decade before the General Theory, suggests that Keynes modeled macroeconomics, his new statistical theory of the economy-at-large, on thermodynamics, the modern explanation for the behavior of bulk matter. Since Keynesian macroeconomics starts on the aggregate level, rather than from a theory of a rational actor’s decision-making like Ricardian classical economics or neoclassical rational expectations theory, Keynes faced the challenge of linking the dynamics of the national economy to the psychology of the individuals who compose it. This is precisely the theoretical problem Keynes solved with the statistical methods of thermodynamics and statistical mechanics.

Thermodynamics describes how a quantity of gas changes as a whole or how a chemical reaction will proceed through intermediate equilibria, by measuring temperature, pressure, and volume. It achieves this, however, without assuming anything at all about the small-scale structure of matter. Indeed, thermodynamics was developed before belief in the reality of atoms was widespread among physicists, and Einstein’s subsequent proof of their existence did nothing to change the science. Thus, the equations of thermodynamics give no information about the motions of individual gas molecules. An early attempt to bridge this gap, Bernoulli’s kinetic theory, assumed that a given combination of temperature, volume, and pressure determined that all the gas molecules were bouncing around the container at a uniform speed. Later, however, James Clerk Maxwell and Ludwig Boltzmann showed that this was impossible. Instead, they argued that a gas in a certain thermodynamic state is composed of an ensemble of molecules all moving in fundamentally random directions and at random speeds. In other words, even if one knows the exact thermodynamic state of a gas, it is impossible to determine the motion of an individual molecule, and vice versa. Statistical mechanics is the link between these two levels; it characterizes the probabilistic distributions of molecular velocities that correspond to gases in thermodynamic equilibria. The two fields are joined by one of science’s slipperiest terms, entropy, which measures the microscopic uncertainty intrinsic to any known macroscopic situation.
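To make the point concrete: in the standard Maxwell-Boltzmann picture (textbook statistical mechanics, nothing specific to Keynes or Asimov), a gas’s temperature fixes only the probability distribution of molecular speeds, never the speed of any one molecule. A minimal sketch, with nitrogen at room temperature as the assumed example:

```python
# Sampling molecular speeds from the Maxwell-Boltzmann distribution.
# The macrostate (temperature) fixes only the distribution; each draw,
# i.e. each individual molecule, remains random.
import random

k_B = 1.380649e-23   # Boltzmann constant, J/K
m_N2 = 4.65e-26      # mass of one N2 molecule, kg

def molecular_speed(temperature_K):
    """Draw one molecule's speed (m/s) for a gas at the given temperature.
    Each Cartesian velocity component is Gaussian with variance k_B*T/m."""
    sigma = (k_B * temperature_K / m_N2) ** 0.5
    vx, vy, vz = (random.gauss(0.0, sigma) for _ in range(3))
    return (vx**2 + vy**2 + vz**2) ** 0.5

# Five molecules in the *same* thermodynamic state, five different speeds:
print([round(molecular_speed(300.0)) for _ in range(5)])
```

Knowing the macrostate exactly still leaves every individual trajectory undetermined; only the ensemble is lawful.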

In trying to work out the individual’s relationship to the macroeconomic aggregate, Keynes faced a similar mereological problem. Macroeconomics uses aggregate measurements – e.g. unemployment, GDP, and inflation – to describe the health of the economy overall and predict its future dynamics. Yet it is unclear how individual experience links up with such large-scale concepts. Whereas classical theory assumes that an economy of 300 million “makes decisions” identically to the total of 300 million individual choices, macroeconomics is premised on the idea that the aggregate economy acts as an “organic whole,” akin to a thermodynamic gas and irreducible to the arithmetic sum of rational individuals. Complex feedback loops, which might make one person’s rising costs become another’s disposable income and yet a third’s sales revenue, multiply the effects of investment, consumption, and staffing decisions through the economy. Furthermore, people don’t think and act identically; the same background macroeconomic “facts” incite various people to form drastically different, even contradictory, expectations about the future. A direct connection between aggregates and individuals is even further frustrated by the importance of the income distribution: An economy can experience growth – as measured by a rising real GDP – even as the majority of the actual people living in it lose purchasing power. Indeed, this has been the reality for the past several decades, as income inequality in America has exploded. Finally, the prevalence of bank runs, stock market crashes, and asset bubbles attests to the destructive power of rational irrationality, cases in which many individuals’ personally rational decisions (to withdraw their deposits from a shaky bank) sum to produce a collectively insane result (a bank failure).
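That last mechanism is simple enough to sketch in a toy model (my own illustration, with entirely made-up numbers, not a model of any real bank):

```python
# Toy bank run: individually rational withdrawals sum to a collectively
# insane outcome. All parameters are hypothetical.
import random

def bank_run(n_depositors=1000, reserve_ratio=0.2, rumor_strength=0.3, seed=1):
    """Return the final fraction of depositors who withdraw."""
    random.seed(seed)
    # Fraction who withdraw on the rumor alone:
    spooked = sum(random.random() < rumor_strength for _ in range(n_depositors))
    initial = spooked / n_depositors
    # If withdrawals exceed what reserves can cover, anyone who waits risks
    # losing everything, so withdrawing becomes rational for every depositor.
    return 1.0 if initial > reserve_ratio else initial

print(bank_run(rumor_strength=0.3))  # 1.0: the run feeds itself; the bank fails
print(bank_run(rumor_strength=0.1))  # ~0.1: reserves hold and no run develops
```

Each depositor acts sensibly given what the others are doing, yet the aggregate outcome, a failed bank, is one nobody wanted.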

These blockages, which prevent a simple micro-macro relationship in economics, posed a mathematical as well as a philosophical problem for Keynes. He was able to solve both aspects of this difficulty by importing the probabilistic approach that had joined the macro-scale science of thermodynamics to the particulate-level explanation of statistical mechanics. This solution is crystallized in two rate-determining macroeconomic quantities – the Marginal Propensity to Consume and the inducement to invest – that connect the aggregate dynamics of an economy to the psychologies and behaviors of individuals living within it. (For the details of this argument, please check out my recent article in the Journal of Philosophical Economics.) Following statistical mechanics, Keynesian macroeconomics forswears assumptions of strict rationality, even though the resulting uniformity would make the necessary mathematics more tractable.

Although these different theoretical relationships between economic parts and wholes might seem esoteric and hopelessly abstract, they have real, material stakes when incorporated into experts’ policy analyses. Consider the case of today’s stubbornly high unemployment. If the neoclassical theorists are right, and the macroeconomy is nothing more complex than millions of individual decisions, then the main problems are likely inflated wages and labor market friction. Their prescriptions: wage cuts, union givebacks, and labor market “reforms” that make it easier for employers to fire and hire workers. However, if the Keynesian analysis is correct and the macroeconomy is more than a simple sum of its parts, the problem is more structural. In this view, the economy is not operating at a level sufficient to utilize all of its unemployed factors of production and so the government ought to stimulate aggregate demand with expansionary fiscal and monetary policy. This is the debate going on today in capitals across the world.

Asimov’s psychohistory is useful because it dramatizes the problems statistical sciences have in simultaneously predicting the future on both aggregate and particulate levels. His vision in Foundation of a utopian social science was deeper than a mere desire for large-scale fortune telling; he followed Keynes in extending the logic of modern physics:

“Because even Seldon’s advanced psychology was limited, it could not handle too many independent variables. He couldn’t work with individuals over any length of time, any more than you could apply the kinetic theory of gases to single molecules. He worked with mobs, populations of whole planets, and only blind mobs who do not possess foreknowledge of the results of their own actions.”

Indeed, it’s probably this final restriction that prevents macroeconomics from achieving the predictive power of psychohistory. Insofar as real people always strive to know the equations and understand the forecast, anticipatory recursion will always prevent the best predictions from coming perfectly true.

Competing to Live: On Planet Earth

May 22nd, 2014

This was originally posted on 3QuarksDaily.

David Attenborough reserves a certain mournful tone for narrating death in the natural world. In the Jungles episode of BBC’s epic documentary series Planet Earth, we hear that voice, interspersed with the rich, crackling sound of splintering wood, as we see a massive rain forest tree collapse under its own weight after centuries of growth. Just as the tree’s last branches fall out of view through the canopy, Attenborough, in his reassuringly authentic British accent, opines: “the death of a forest giant is always saddening, but it has to happen if the forest is to remain healthy.” After the surrounding trees spring back into place, we descend to the rain forest floor, and enter a realm whose usual gloom has been suddenly washed away by the new hole in its leafy ceiling. Here we can see, with the help of Planet Earth’s signature time-lapse cinematography, how the flood of light that now reaches the forest floor triggers a race to the top by the unbelievable variety of plant life struggling to collect that valuable light. The narration explains how each species has its own strategy for besting its competitors. Vines climb up neighboring trees, sacrificing structural strength for rapid vertical growth. Broad-leaved pioneers such as macarangas are the clear winners at this early stage; their huge leaves provide them with enough energy to grow up to eight meters in a single year. But “the ultimate winners are the tortoises, the slow and steady hardwoods,” which will continue striving for their places in the light-drenched canopy for centuries to come.

The series’ unmatched capacity to bring the natural world to life, as it were, has made it both the premier wildlife documentary of its day and the most enjoyable toy for twenty-first century stoned college students. Time-lapse photography and stunning footage of impossibly rare animals transport us, as viewers, into virgin territory, a territory that operates according to its own natural laws, thus far spared from human interference. While the show’s inventive cinematography animates the natural world, Attenborough is able to give meaning to natural processes by articulating the concealed, organic logic that organizes life. Sped up, slowed down, zoomed in, or seen from above, Planet Earth explains nature’s apparent randomness by casting the world’s plants and animals as players in an epic struggle for survival. The planet’s breathtaking beauty – along with its inhabitants’ sometimes-bizarre bodies and behaviors – is the integrated result of countless relations between harsh climates, scarce resources, and living things competing to exist. But if this is the narrative of the natural world, does it accurately reflect an already existent reality? What artifacts can we find of this production of meaning about the world? Is there a difference between Nature and the natural world? And most importantly, where do we – as viewers, as humans, as people – fit into this story?

As one can see from the “Planet Earth Diaries” at the end of each episode, finding beautiful scenery is not enough to make a compelling wildlife series. The Planet Earth team struggled at each shoot to find innovative ways of animating worlds whose dynamism is not always clear on the standard time scale of human thought. As we enter the rain forest “hot house,” Attenborough notes that the jungle seems “virtually lifeless” despite the cacophony of insect, bird, frog, and monkey calls. In this episode, the obvious challenge is bringing the trees themselves to life. The team accomplishes this feat with breathtaking tracking shots that lift us from the darkness of the forest floor to the blazing light of the canopy. This one continuous motion not only shows the different worlds at each height but also portrays the forest environment as a living, growing entity itself. Just as time-lapse cinematography shrinks an entire temperate growing season into a minute in order to show the spectacular vernal burst of life, the camera’s ascent through the rain forest evokes the hardwoods’ centuries-long climb to the sun.

Of course, it is easy to accuse Planet Earth of relying too heavily on anthropomorphism as a narrative technique. From pole to pole, Attenborough introduces us to (images of) animals whose thoughts seem strikingly logical and sometimes even emotional. But watching Planet Earth again, this time with unusually lucid attention to the storytelling, it seems to me that the content of what I will broadly call “anthropomorphism” does intellectual work beyond merely setting up characters and a plot. Beyond constructing beings worthy of viewers’ empathy, this narrative technique presents animals and plants as creatures possessing the power of intentionality. The organization of Planet Earth’s tightly structured world, ordered by the natural equivalent of rational self-interest, is starkly different from the overwhelming feeling one has when actually standing in the middle of a rain forest. Attenborough parses the real world’s infinite complexity and glosses the fundamental randomness of organic life with the intertwined logics of the local flora and fauna. To be fair, the series never exactly portrays this purposefulness as conscious, but it is certainly an important and pervasive tendency in Attenborough’s narration. Indeed, it may be this distinction between intention and consciousness that makes simple accusations of over-anthropomorphizing seem to be beside the point.

In order to understand this issue of intentionality, we should return to Jungles and revisit the new clearing in the rain forest. After descending from the action in the now-ruptured canopy, the camera tightens on the leaf litter, which covers the forest floor. Normally, Attenborough tells us, little of the sun’s energy reaches these depths, so now “the thirst for light triggers a race for a place in the sun.” The leaves, which each fill about a quarter of the screen, are pushed aside by seedlings madly bursting forth from the concealed soil. Thanks to the time-lapse cinematography, the young plants grow at an astounding rate. They move so quickly, in fact, that their unfurling leaves look like the weakly gesturing limbs of a newborn until they begin to shoot upward out of the frame.

Because one of the themes of this story of the rain forest is its unbelievable diversity, Attenborough shows us several plants with different strategies for securing a scarce position to collect valuable light. The climbers are surely the most interesting competitors; the authoritative British voice assures us that their “strategy looks chaotic, but there’s method in their madness.” Sped up, we can witness how “their growing tips circle like lassoes, searching out anchors for their spindly stems.” From this temporal perspective, it’s true: the growing tips do not look like mindless masses of sugar whose winding heliotropic growth is controlled by auxins. Instead, they appear to be plants intentionally employing a particular growth strategy that maximizes vertical reach while minimizing energy investment. This is confirmed when Attenborough shows us the climbers’ forethought: “they put coils in their tendrils so that if their support moves, they will stretch and not snap.”

Certainly, Attenborough’s narration is well served by this tactic. Explaining the ostensible reasons for certain plants’ unique growth and their comparative advantages over competitors in the jungle clearing turns this natural scene into a story with characters who win or lose depending on the viability of their natural abilities. This is far more compelling for the lay viewer than an explanation of the underlying biology. As a popular nature documentary, the Planet Earth team must have continually struggled with the challenge of making these stories gripping; surely the latter strategy would be quite a buzz killer for the stoned college student crowd, the series’ most devoted followers. So my criticism of this narrative strategy is not (intended to be) pointless fault finding but rather an exploration of its effects.

Given its task, i.e. depicting the beauty and complexity of the natural world, it is easy to lose sight of the role of storytelling in Planet Earth. Because the series presents itself as the product of a rigorous naturalism, it is easy to interpret the moments of action in the series as parts of plot lines already existing in the world, rather than as elements of a story told about the world. The narration seeks to fade into invisibility, leaving only Nature. But this Nature that Attenborough presents is an assemblage of characters and settings, conflicts and dénouements that work together to keep viewers enraptured. Its construction is the challenge facing any nature documentarian, and the Planet Earth team does this more effectively than anyone before them. The series is, in equal measures, a work of art and science, a provisional distinction that converges with the deployment of intentionality as a narrative strategy. Intentionality is not part of any specimen or fossil collected in the wild. It is manifest neither in the rain forest plants said to be striving for the light nor the parasitic cordyceps fungi said to work as checks to maintain balance among insect species. Instead, it is a way of making sense of the natural world by connecting organisms to one another with dramatic links of cause and effect.

In its technical use, as a central tenet of continental ontology, “intentionality” is a frustratingly elusive term. Franz Brentano, who introduced the term in the late nineteenth century, positioned it as a property that necessarily connected “mental phenomena” to their intentional objects. Later, Edmund Husserl argued that this relationship is a constitutive element of thought itself: “to have sense, or ‘to intend to’ something, is the fundamental characteristic of all consciousness.” Similarly, the father of French existentialism, Jean-Paul Sartre, thought of intentionality as coextensive with consciousness. But since consciousness itself is not really at issue here (I don’t think David Attenborough is really attributing strategic consciousness to the polar bear mother during the Ice Worlds episode), these definitions are not particularly helpful.

Instead, we should consider Martin Heidegger’s efforts to theorize Being, which he identified with the fundamental entity Dasein, in Being and Time, one of the more imposing and impenetrable tomes in modern philosophy (so please, pardon my language). Eschewing the technical term “intentionality,” which had already been firmly pegged to conscious thought, Heidegger works with the twin concepts of care and concern to denote the intentionality of Being. Heidegger writes that care, “as a primordial structural totality, lies ‘before’ every factical ‘attitude’ and ‘situation’ of Dasein.” This means that care can be described as a Being’s fundamental orientation towards the world. Ontologically, care is at the root of both willing and wishing; indeed, “in the phenomenon of willing, the underlying totality of care shows through.”

To what extent does this Heideggerian model of Being as care reflect the existence of the natural world as told by Planet Earth? Throughout the series, the emphasis on organisms’ strategies for survival creates a sense that they possess, or at least spontaneously enact, a will to live and multiply. Obviously, this drama plays itself out differently in each biome, but the general theme of organisms as engaged in unceasing competition is relatively constant. And this message of competition is important, for Attenborough explains how mutual competition is the motor of evolution and the source of nature’s astounding diversity of life. But this language, this notion of individual will and competition, is not unique to Planet Earth. Charles Darwin’s On the Origin of Species displays a similar understanding of the natural world as a place where organisms are constantly striving to live:

 

In looking at Nature, it is most necessary… never to forget that every single organic being around us may be said to be striving to the utmost to increase in numbers; that each lives by a struggle at some period of its life; that heavy destruction inevitably falls either on the young or the old, during each generation or at recurrent intervals. Lighten any check, mitigate the destruction ever so little, and the number of the species will almost instantaneously increase to any amount.

 

Throughout Darwin’s text there is a continual oscillation between his theories of Nature and scenes of survival from the natural world itself. He certainly wasn’t the first naturalist who tried to bring the vastness of the colonial world back home to England already arrayed in helpful categories, but On the Origin of Species betrays Darwin’s acute awareness of the importance of storytelling to his work. His project was one of sense making, not one of sense finding. His claim is not, “animals are striving to increase in numbers” but that everything “around us may be said to be striving to the utmost.” The text does not read like a dispassionate treatise on the way the world is. Instead, Darwin is suggesting that we may construct a narrative to explain a world in which we stand at the center. Without undoing Copernicus’ labor, which removed man from his privileged position at the center of the universe, On the Origin of Species presents a story of the world as we grasp it. Whether this framing of his explanation as merely one possibility was a scientist’s hedge against accusations of blasphemy is not important; what does matter is the attention Darwin gives to the problem of narrative, a difficulty seldom considered in scientific writing.

Again, I doubt that “anthropomorphism” sufficiently accounts for the way in which organisms are assigned a willing disposition. Neither Planet Earth nor Darwin asserts or even implies that this will to live and multiply is contained in some kind of consciousness or a particularly mindful relation to the world. Instead, might we say that these classics of British naturalism consider these diverse survival strategies to be spontaneous orientations towards the world? For both texts, is not this intrinsic will to live positioned before any particular circumstance, environment, or competition? Can we not, then, assign this continuous striving for existence to be a fundamental aspect of Being in this version of the natural world? That is: could an active struggle for existence be at the root of this naturalist conception of life itself?

Indeed, this fundamental will to survive is important to Heidegger as well. He argues that “the urge ‘to live’” is something fundamental to Dasein; it “is something ‘towards’ which one is impelled, and it brings the impulsion along with it of its own accord. It is ‘towards this at any price’.” Now, we can understand (or at least attempt to comprehend) how Dasein is not merely a person or an animal but a fundamental unit of Being. While it is true that each animal or plant could be said to “prefer” life over death, this is not how we see the logic of natural existence. When an enterprising fox is able to steal some goslings from the edge of a humongous colony of migratory birds, Attenborough does not bemoan the apparent loss of life as he does when lions kill a solitary elephant. This is precisely the regular “price” that the flock must pay to survive and multiply. We might even say that it is not even a loss of life, since the story of the geese takes the entire flock as the unit of life or Being with a common urge to live. And since this urge is “rooted ontologically in care,” we might also say that this flock has a common intentionality, a common Being.

Returning to Darwin might be helpful here. Recall that he is speaking about neither plants nor animals in the quotation above; his concern is not even generalized to organisms. Instead, Darwin is describing the struggle for existence that dominates the life of every “organic being.” I think we should take this distinction seriously. There is something in these narratives of the natural world that abstracts struggle from one defined by physical distinctions to a more philosophical level, which plays with conventional understandings of life. I read Darwin’s “organic being” as a unit of life with a size that varies depending on climate, terrain, and energy availability, from a solitary arctic wolf to a thousands-strong herd of wildebeest. This unit is not given in the world itself but is, instead, an artifact of a particular understanding of it; the “organic being” is a figure born from Darwin’s attempts to tell a story about the world.

This fundamental urge to live should neither be forgotten nor left in theoretical isolation, for it “seeks to crowd out other possibilities” in the constant struggle for its own existence. Whether tracking the life of an individual tree or a flock of millions of geese, the natural world is a domain of unceasing competition in “ever-increasing circles of complexity.” This competition is the integrated total of countless Beings’ urges and intentions, and in this naturalist conception of the world it is a ubiquitous and powerful force. Attenborough does not mince words: “in the jungle there is competition for everything.”

Just as the rain forest clearing’s apparent lawlessness becomes comprehensible when sped up, we can recognize organisms’ continuous evolutions in the balanced state of nature. In this narrative, the illusion of static balance is ensured by natural competition, which couples together organisms in a complex web of coevolving relations. We might say that natural competition is both the motor of evolution (“generations of choosy female [birds of paradise] have driven the evolution of males’ remarkable displays”) and simultaneously its regulator (when insect populations grow, “parasites stop any one group of animal getting the upper hand”). This power is presented as nearly omnipotent in Nature. These texts credit it with producing both the natural world’s unmatched beauty and organic systems whose complexity and efficiency would be the envy of the best engineer.

For both Darwin and Attenborough, the dynamic of competition serves to balance the natural world and provide space for all of its Beings and their competing intentions. Though these struggles between Beings are unceasing, “in the long-run the forces are so nicely balanced, that the face of nature remains uniform for long periods of time.” But in fact, it is merely the face of nature that remains unchanged. In the rain forest, which we have seen has both high productivity and unceasing conflict, “competition for resources ensures that no one species dominates the jungle.” Reading further, however, we see that the apparent stasis of Darwin’s “state of nature” is actually a dynamic equilibrium, shaped and maintained by the competition between Beings’ struggles to survive. It is not that everything stays the same in the unspoiled natural landscape; it only appears this way on our familiar time scale.

For a narrative to be meaningful, it helps to have a traceable set of reasons for what happens. In Planet Earth, the recurring narrative of organic balance is powered, or explained, by intentionality. Attenborough’s presentation of Nature’s dynamic equilibrium as the spontaneous result of organic beings who compete according to their non-conscious self interests recalls the logic of traditional economics, which credits the invisible hand of the market for this balancing act. The identity of these processes, these mechanisms for maintaining the world as it is and optimizing participants’ experiences, offers an opportunity to see more deeply into each and understand how widely relied upon this notion of self-regulation is.

The idea that both free markets and unspoiled ecosystems are able to remain in productive balance seems to be the result of a belief that competition has the innate ability to order complex systems. International deregulation efforts, which have left “natural market forces” in charge of the global economy, speak to the strength of people’s faith in the idea of self-regulation. But this invisible hand is more than the basis for a particular economic theory. Just like the dialectic – Hegel’s idealist, self-generating process of reason through contradiction – this mechanism of competitive self-regulation is a deep philosophical belief in the way the world progresses. Insofar as naturalists, led by Darwin and narrated by Planet Earth, have used this idea as an overarching explanation for how Nature functions, it seems just as organic as the rain forest trees struggling for sunlight.

This ideological contact between the ecological and the economic might allow us to finally situate ourselves, humanity, in Planet Earth’s storytelling. We occupy a complicated position in the narrative. It is striking that Attenborough rarely mentions humanity, and we only see people when the cameras descend through labyrinthine caves deep into the planet. Yet at the end of several episodes, Attenborough warns viewers of these environments’ precarious positions. The appeal is most dire at the end of Jungles: “Rain forest diversity has come at a cost. It has made them the most finely balanced ecosystems in the world, only too easily upset and destroyed by that other great ape, the chimpanzee’s closest relative, ourselves.” Only a few minutes after viewers have seen their own humanity in the mirror of a marauding band of territory-hungry chimpanzees, this language is striking. It positions humanity not as an alien force superimposed on an independently existing natural world but as a part of the same precariously balanced system. The argument is so effective because it refuses to plead. Instead it suggests that we reconsider the boundaries we draw between systems we hope to keep in balance. Rather than seeing economics and ecology as two fundamentally separate, permanently walled-off disciplines, this attitude takes them as parallel projects working on different problems. Instead of defining the jungle as the wild and unthinkable state of nature, this naturalist approach seeks to fuse man’s understanding of himself with the complexities of Nature in order to ensure that Planet Earth never becomes a stunning monument to irrecoverable beauty.

To spend or not to spend: The austerity debate

May 20th, 2014

Originally posted on 3QuarksDaily.

Public sector austerity has come back to the West in a big way. Governments throughout the European Union are wrestling against striking civil servants, a stagnant private sector, and an entrenched public welfare system to drastically reduce spending. The budget cuts are broad, and they run deep. Under pressure from global financial markets and the European Central Bank to reduce public deficits, Spain, Italy, Portugal, and Greece have issued “austere” budgets for the coming year that simultaneously raise taxes and slash government spending. David Cameron’s new Conservative government has violated its campaign pledge to spare Britain’s generous middle class subsidies in an attempt to close a budget gap that is among the world’s largest, at 11 percent of GDP. Supposedly confirming the wisdom of austerity, the financial press has trumpeted the re-election of Latvia’s center-right government, which passed an IMF-endorsed budget with austerity reductions equal to 6.2 percent of GDP. Prime Minister Valdis Dombrovskis won his “increased mandate” – “an inspiration for his colleagues in the EU” – against a backdrop of 20 percent unemployment and a cumulative economic contraction of 25 percent in 2008 and 2009, the most severe collapse in the world.

Latvian electoral politics notwithstanding, austerity has been a tough sell worldwide. Both the protests that broke out across Europe at the end of September and the general strikes mounted against Socialist governments in Portugal, Spain, and Greece attest to the resistance all governments face in cutting public spending. And opposition has not been confined to the streets. At a G20 summit in Washington DC on April 23, the finance ministers and central bank governors of the world’s 20 largest economies agreed that extraordinary levels of public spending should be maintained until “the recovery is firmly driven by the private sector and becomes more entrenched.” Indeed, Larry Summers, the departing Director of the White House National Economic Council, still argues that the United States must continue its policy of economic stimulus in the form of deficit spending on infrastructure rather than pull back public resources, lest it cede the small gains of the nascent recovery.

Yet the pressure to embrace austerity continues to mount on governments on both sides of the Atlantic, crowding out calls for further stimulus spending; the stimulus vs. austerity debate has heated up in both policy circles and academia. On one side are the Ricardians, who argue that austerity budgets will boost confidence, by signaling that the recovery has taken hold, and spur private investment, because capital will no longer fear future tax hikes to pay for today’s deficit spending. We hear this story coming from three major institutions: the European Central Bank, which regulates the 16 Eurozone countries; the International Monetary Fund, which provides lender-of-last-resort bailouts for countries struggling to meet their international obligations; and the global financial markets, which penalize debtor countries by demanding ever higher interest rates to refinance sovereign debt.

The Keynesians are on the other side, arguing that governments must maintain their economic stimulus programs to help make up the difference between the internationally depressed levels of aggregate private demand and the level of economic activity necessary to support full employment. Their argument against austerity-induced gutting of social welfare programs goes beyond moral claims about equity. Government spending, especially in programs that target the bottom end of the income distribution, circulates through the economy, multiplying the job-creating effects of the initial public expenditure. Of course, the root of the current economic problems is an overabundance of debt – both public and private. But as international political economist Mark Blyth explains, it is dangerous for governments to try to clean up their balance sheets with austerity at the same time that the private sector is paying down its own debts from the housing boom instead of investing and hiring. Indeed, the US shed 95,000 jobs in September, after layoffs by local governments and the release of temporary Census workers cost 159,000 jobs. Until recently, the Obama Administration was the main proponent of the stimulus view, which is also supported by organized labor and hordes of protesting Europeans.
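The arithmetic behind that multiplication is the textbook Keynesian multiplier. A minimal sketch (the standard formula, not Blyth’s specific argument, and the MPC values are illustrative):

```python
# Keynesian spending multiplier: each dollar spent becomes someone's income,
# a fraction of which (the marginal propensity to consume) is spent again.

def total_demand_boost(initial_spending, mpc):
    """Sum of the geometric series 1 + mpc + mpc**2 + ... = 1 / (1 - mpc)."""
    assert 0 <= mpc < 1
    return initial_spending / (1 - mpc)

# Spending aimed at the bottom of the income distribution has a high MPC:
print(total_demand_boost(1.0, mpc=0.8))  # 5.0: each dollar yields $5 of demand
print(total_demand_boost(1.0, mpc=0.4))  # ~1.67: wealthier recipients save more
```

This is why stimulus targeted at households who spend most of their income produces the largest boost in aggregate demand per public dollar.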

Strangely, when the G20 finance ministers reconvened on June 5 in South Korea, their message had changed. Instead of encouraging countries to continue supporting the recovery, they announced that “countries with serious fiscal challenges need to accelerate the pace of consolidation,” and identified monetary policy as the best tool going forward. This, despite the fact that monetary policy levers are at the “zero bound” worldwide, allowing no room for further expansionary movement. So, why the sudden shift? How might we characterize the compulsion governments have to engage in painful belt tightening when their belts are circled around their necks?

The austerity vs. stimulus debate is not just a policy disagreement between social classes with opposing interests; it is a confrontation between two entirely distinct modes of governing, two different ways of conceiving of the state and the economy. Austerity is a quintessentially Classical prescription for economic imbalances, a direct descendant of the vertiginous deflationary adjustments countries were forced to stomach under the gold standard. Now, as then, financial power compels states to sacrifice the health of their domestic markets in order to preserve international credibility. Politicians would not make this trade without compulsion; anyone concerned with reelection will rightly worry about the havoc this fiscal discipline wreaks on their constituents. This should not suggest, however, that the only benefit of a policy to stimulate the economy with government spending is its ability to create short-term construction jobs. Properly administered, a Keynesian stimulus will keep unemployment rates manageable by propping up aggregate demand, but the real goal of government spending is to make the short-term economic picture look rosy enough to improve private expectations of the future. As government money filters through the economy, businesses can count on boosted demand for their products and will hire more workers, so private demand can gradually recover.

Michel Foucault, who is known for his studies of “governmentality,” developed a philosophical framework that helps distinguish these approaches to dealing with recession. In his 1978 lectures at the Collège de France, Security, Territory, Population, Foucault argues that there are three forms of power: juridical law, discipline, and security. Juridical law maintains order by establishing prohibitions and doling out punishment. Its model is a hanging, commanded by the sovereign to punish a subject who violated the law. Foucault is most famous for his theory of discipline, wrought from his meticulous studies of the techniques of power used in prisons, schools, barracks, asylums, and hospitals. A disciplinary institution aims for efficiency; it structures power relations so that the surveillance and transformation of individuals can proceed with the least possible expenditure of resources. Its ultimate goal is, in a sense, utopian: to forge subjects who have internalized the law and follow it by themselves. The model disciplinary institution is the panopticon, in which the prisoner must always behave as if someone is watching. Finally, a security apparatus handles problems through measurement. Recognizing that it is impossible to completely engineer away social ills, a security apparatus sees a problem as the result of a series of probable events and enters it into a calculation of cost. Rather than focusing its attention on the legal boundary between the permitted and the prohibited, a regime of security “establishes an average considered as optimal on the one hand, and, on the other, a bandwidth of the acceptable that must not be exceeded.” And the benefits of any policy are weighed against the costs of implementing it. Instead of deploying mechanisms to transform deviant individuals into ideal subjects, the techniques of security act on a new object: the population. Its tools are statistics, which uniquely make meaning from uncertainty and direct power to most effectively manage a large ensemble. Its model is the modern management of epidemics. While the differences between these three categories reflect successive innovations, Foucault stresses that they do not represent distinct eras but rather alternative, and coexistent, ways that power organizes the social world.

Foucault deploys this framework on economic problems to show different ways of allocating resources to deal with the problem of scarcity. He begins with a juridical mode of resource allocation: price controls. For a long time, authorities attempted to control the food supply by instituting rigid price ceilings, intended to keep food affordable; regulations on food storage, intended to prevent hoarding from precipitating an artificial shortage; and export restrictions, meant to protect domestic supplies. Of course, in practice these price controls actually functioned to exacerbate food shortages, as the law prevented peasant farmers from charging enough to recoup their investments and plant enough grain the following year.

Foucault then studies the prescriptions of the Physiocrats, who advocated that governments reduce these restrictions and allow supply and demand to set prices according to the dynamics of the market. By allowing individuals to decide when and for how much to sell their grain, guided only by competition and informed by market prices, laissez faire policies leave the problem of managing scarcity to the decentralized decisions of many market actors, who sell their grain where high prices indicate it is most needed. Foucault shows how, historically, this shift to a new mode of governance alleviated the food shortages that had plagued Europe. But here, Foucault gets it wrong. He incorrectly classifies the Physiocrats’ free markets as a technique of security. Instead, laissez faire ought to be considered a disciplinary mechanism, since it aims to solve the problem of scarcity by conditioning individuals to make the “right choices” on their own about how much grain to grow and where to sell it.

Political economy first entered the realm of security when Keynes invented macroeconomics as a way of managing unemployment and taming the business cycle. For the first time, economists could attend to a population and direct their policies at the economy as a whole. Indeed, the concept of unemployment only makes sense for a whole economy; it has no microeconomic analogue. In his General Theory, Keynes shows how governments can use fiscal policy to keep their unemployment rates within reasonable bounds, consistent with long-term economic growth and social stability. Government’s deficit spending is the distinctive technique of this regime of Foucauldian security. An economic stimulus is not intended to help any particular individuals – though some sectors certainly benefit more than others – but rather boost aggregate demand. Its target is the whole economy, the population. Indeed, classical economics did not admit the economy per se as an organic object, since it was seen as merely a large collection of individual, rational actors. Insofar as macroeconomic policy has this population as the target of its interventions, Keynes can be said to have invented the economy as an object.

It is easy to see where austerity fits in Foucault’s taxonomy: It is a disciplinary force exerted against free-spending governments. Just as the structures of school buildings make rambunctious children into docile bodies, pressure to embrace public austerity is an effort on behalf of international capital to restrain the free-spending tendencies of welfare states. This fiscal discipline, sold as a virtuous and commonsensical “pain after the party,” is intended to produce chastened governments, which maintain capital-friendly tax policies at the expense of social services and in the name of stability, predictability, and job creation. Newly streamlined corporations are again flush with cash but have not rehired the workers laid off during the worst of the financial crisis; even so, business leaders continue to argue for an emergency loosening of labor laws that would allow them to fire employees more cheaply.

Although these revisions to the modern welfare state’s social contract may seem draconian, they are hardly unprecedented. The IMF has been pushing public austerity and business-friendly labor reforms on financial crisis-plagued developing countries for decades under the banner of the “Washington Consensus.” Yet these stringent retrenchments, required as conditions on IMF rescue packages for countries from East Asia to Latin America to Latvia, have almost always exacerbated recessions. Indeed, the country that avoided the most damage in the 1997 East Asian financial crisis was Malaysia, which was condemned at the time for eschewing these familiar neoliberal fixes and setting up strict currency controls. Today’s massive foreign currency reserves in East Asian treasuries exist precisely so that these countries will never again have to turn to the IMF for another many-strings-attached bailout. The citizens of the global West are finally experiencing an economic pain all-too familiar to previous recipients of IMF bailouts. In all spheres of economic life, laissez faire prescriptions discipline states with the same old, capital-friendly mantra: “That government is best which governs least.”

Football, Finance, and Surprises

Category : Uncategorized · No Comments · by May 20th, 2014

Originally posted on 3QuarksDaily.

As the New Orleans Saints lined up to kick off the second half of Super Bowl XLIV, CBS Sports color commentator and former Super Bowl MVP Phil Simms was explaining why the Saints should have deferred getting the ball after winning the pregame coin toss. Simms suggested that the Saints, 4½-point underdogs to the Indianapolis Colts, would be in a better position were they not giving the ball to future Hall of Fame quarterback Peyton Manning, whose Colts already enjoyed a four-point lead and who had had 30 minutes to study the Saints’ defensive strategy. Simms had barely finished this thought when Saints punter Thomas Morstead surprised everyone – the 153.4 million television viewers, the 74,059 fans in attendance, and most importantly the Indianapolis Colts – with an onside kick. The ball went 15 yards, bounced off the facemask of an unprepared Colt, and was recovered by the Saints, who marched 58 yards down the field to score a touchdown and take their first lead of the game, 13-10. The Saints would go on to win the championship in an upset, 31-17.

Although Saints quarterback Drew Brees played an outstanding game and the defense was able to hold a dangerous Indianapolis team to only 17 points, Head Coach Sean Payton received the bulk of the credit for the win, in large part because of his daring call to open the second half. Onside kicks are considered risky plays and usually appear only when a team is desperate, near the end of a game. In fact, the Saints’ play, code-named “Ambush,” was the first onside kick attempted before the fourth quarter in Super Bowl history. And this is precisely why it worked. The Colts were completely surprised by Payton’s aggressive play call. Football is awash in historical statistics, and these probabilities guide coaches’ risk assessments and game planning. On that basis, Indianapolis Head Coach Jim Caldwell had no reason to prepare his team for an onside kick: the probability of the Saints’ ambush was zero (0 onside kicks ÷ 43 Super Bowl second halves). But if the ambush’s probability was zero, then how did it happen? The answer is that our common notion of probability – as a ratio of the frequency of a given event to the total number of events – is poorly suited to the psychology of decision making in advance of a one-time-only situation. And this problem is not confined to football. Indeed, the same misunderstanding of probability plagues mainstream economics, which is stuck in a mathematical rut best suited to modeling dice rolls.

Probability is a predictive tool; it helps decision makers confront the uncertainty of future events, armed with more than their guts. Both economists and football coaches use probabilistic reasoning to predict how others will act in certain situations. The former might predict that, faced with a promising investment opportunity and a low interest rate, entrepreneurs tend to invest, while the latter might anticipate time-consuming running plays from teams winning by a touchdown with four minutes left in a game. Both the economist and the coach would look up historical statistics, which they hope would provide insight into their subjects’ decision-making tendencies. And over the long run, these statistics would likely be quite good at predicting what people do most of the time. It would be foolish not to act in anticipation of these tendencies.

Indeed, there are many statisticians employed to do such things. In the lucrative, gambling-powered world of football analysis, for example, a company named AccuScore tries to predict the outcomes of NFL games and the performances of individual players with computational simulations run early in the week. Although their exact computational methods are proprietary secrets, they have roughly described the strategy behind their Monte Carlo simulation engine. Through fine-grained analysis of troves of historical statistics, AccuScore’s computers create mathematical equations to represent the upcoming game’s players and coaches. How often does a team pass the ball when it’s third down with four yards to go at their own thirty-yard line, with no team up by more than three points in the first quarter at an indoor stadium? When New York Jets running back LaDainian Tomlinson rushes up the middle, how often does he get past the middle linebacker and rush for more than eight yards? The probabilistic answers to these questions – and many others – become the parameters of the players’ and coaches’ equations, which AccuScore pits against each other on a numerical field. The computers then simulate the game, one play at a time, guided by a random number generator and the participants’ tendencies. Then they repeat the simulation 10,000 times and average the results.
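
To make the flavor of this approach concrete, here is a minimal sketch of a tendency-driven Monte Carlo simulator in Python. It is a deliberately crude stand-in, not AccuScore’s engine: the tendency numbers, the alternating-play structure, and the scoring rule are all invented for illustration.

    import random

    # Toy tendency-driven Monte Carlo football simulator. All numbers are
    # invented for illustration; AccuScore's real parameters and
    # play-by-play logic are proprietary.
    TENDENCIES = {
        "Colts":  {"pass_rate": 0.60, "pass": (7.2, 9.0), "run": (4.1, 3.0)},
        "Saints": {"pass_rate": 0.58, "pass": (7.5, 8.5), "run": (4.3, 3.2)},
    }

    def simulate_game(team_a, team_b, plays_per_team=60):
        """Alternate plays; each play draws yardage from the team's passing
        or rushing tendency, and 80 cumulative yards counts as a score."""
        score = {team_a: 0, team_b: 0}
        progress = {team_a: 0.0, team_b: 0.0}
        for i in range(2 * plays_per_team):
            team = team_a if i % 2 == 0 else team_b
            t = TENDENCIES[team]
            mean, sd = t["pass"] if random.random() < t["pass_rate"] else t["run"]
            progress[team] += random.gauss(mean, sd)
            if progress[team] >= 80:
                score[team] += 7
                progress[team] = 0.0
        return score

    def win_probability(team_a, team_b, n=10_000):
        """AccuScore-style answer: replay the game n times and average."""
        wins = 0
        for _ in range(n):
            s = simulate_game(team_a, team_b)
            wins += s[team_a] > s[team_b]
        return wins / n

    print(win_probability("Colts", "Saints"))

Crude as this sketch is, it captures the essential move: convert historical frequencies into tendency parameters, let a random number generator play the game, and average over 10,000 replays.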

According to AccuScore’s website, their predictions have an overall gambling accuracy of about 54%. This probabilistic strategy makes sense for its purpose, predicting the outcomes of games by analyzing the frequency with which subjects make certain decisions, but it does not at all resemble the thought process by which a coach or his opponent calls a play in the middle of a close game. In contrast to AccuScore’s simulations, the real football game is only played once. Had they played Super Bowl XLIV 10,000 times, the Colts’ normal kickoff-return formation would surely have been the right bet to make at the starts of the 10,000 second halves. But they only kicked it once, and the act of kicking it destroyed the possibility of it ever happening again. (For the moment, let’s ignore the chance that someone on the Saints committed a penalty, necessitating a redo.) Sean Payton’s aggressive call worked not because it gave the Saints the highest probability of success, but because the one time Morstead kicked it onside, he caught the Colts by surprise.

Economics must also grapple with the difference between these two interpretations of probability. When economists declare that markets are populated with rational agents, they must mathematically define that rationality, just as AccuScore defines players and coaches with tendency equations. The dominant strategy for defining economic agents’ rationality comes from Oskar Morgenstern and John von Neumann’s groundbreaking 1944 book, Theory of Games and Economic Behavior. In it, they propose assigning each market actor a utility function, which weights the payoffs of various possible actions with their probabilities of coming to pass. In constructing utility functions, neoclassical economists must assume that they have considered all of the relevant possibilities, which is another way of saying that the probabilities of all possible events included in the utility function add up to one. They then define the agent’s rational choice as the one that maximizes the expected value of her utility function. This method is the foundational concept of game theory and is used to predict how decision makers will act. Modeling a market then proceeds in roughly the same way that AccuScore models NFL games.
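
The mechanics are simple enough to sketch. In the toy calculation below, every probability and utility is invented; the point is only the structure: each action is a lottery over outcomes, each lottery’s probabilities must sum to one, and the “rational” action is whichever lottery maximizes expected utility.

    # Expected utility in the von Neumann-Morgenstern style. Payoffs and
    # probabilities are invented; note that each action's probabilities
    # must sum to one, which is exactly the assumption questioned below.
    actions = {
        "normal kickoff": [(0.98, -1.0), (0.02, 3.0)],  # (probability, utility)
        "onside kick":    [(0.26, 5.0), (0.74, -4.0)],
    }

    def expected_utility(lottery):
        assert abs(sum(p for p, _ in lottery) - 1.0) < 1e-9
        return sum(p * u for p, u in lottery)

    for action, lottery in actions.items():
        print(action, expected_utility(lottery))
    print("rational choice:", max(actions, key=lambda a: expected_utility(actions[a])))

With these made-up numbers, the expected-utility maximizer always orders the normal kickoff; on this account of rationality, the Ambush should never be called.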

However, generations of critics have argued that rational choice theory is psychologically unrealistic as a description of actual human decision-making. While one might be able to argue that it represents the optimal definition of rationality, it is nearly impossible to conceive of someone actually making this sort of calculation on the fly in an even remotely complex situation. In general, it is unrealistic to assume that people consider every single possible outcome of a decision, so that the probabilities of all these events can properly sum to 100%. If someone thinks of a new possible outcome, why should she consider any of the ones she’s already considered to be any less likely than they were before she thought of the new one? But more fundamentally, rational choice theory relies on the frequency ratio definition of probability, which we have seen is incoherent when applied to the circumstances of one-time-only decisions. The most important decisions we face (and thus, model) are unique. In these cases, when making a choice destroys the very possibility of anyone ever making that same choice again, the notion of probability as a historical frequency ratio is nonsensical.
[Figure: Shackle’s potential surprise graph]

There have been several attempts to construct a theory of probability that accurately describes the psychological process of making one of these self-destructive choices. One strand of thought, coming from the Keynesian economist G.L.S. Shackle, is particularly well suited to describing the psychology of making decisions in the face of uncertainty. In Shackle’s theory, the likelihood of an event coming to pass is no longer calculated as a frequency ratio over a set of possible outcomes, as the standard theory has it. Instead, he figures the likelihood of any particular outcome on its own terms, by asking a simple question: Based on what I know now, how surprised would I be if Y happened? Because the likelihood of each outcome is determined independently, their probabilities need not sum to one. That means thinking of a new possibility does not make any other less likely to happen. It also means that one can hold two or more mutually exclusive outcomes to be equally unsurprising, based on the information at hand. Indeed, most of the time, there will be a range of possible outcomes that are all judged to be equally unsurprising. (Shackle illustrated this with the graph above.) Thus, Shackle’s decision-making comes down to a comparison of the best possible unsurprising outcome to the worst possible unsurprising outcome. This process seems much closer to the psychology of forming expectations and making choices than trying to maximize a probability-weighted average of all possible outcomes in your head.
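
A parallel sketch shows how differently Shackle’s machinery works. The outcomes and surprise scores below are invented; the point is that likelihoods are assessed one outcome at a time, nothing needs to sum to one, and the decision turns on comparing the best and worst unsurprising outcomes, what Shackle called the focus gain and the focus loss.

    # Sketch of Shackle-style potential surprise (numbers invented).
    # Each outcome is judged on its own: 0.0 means it would not surprise
    # me at all, 1.0 means unimaginable. Scores need not sum to one.
    outcomes = {
        "investment pays off":    {"payoff":  10.0, "surprise": 0.10},
        "investment breaks even": {"payoff":   0.0, "surprise": 0.00},
        "firm goes bankrupt":     {"payoff": -15.0, "surprise": 0.15},
        "rival leapfrogs us":     {"payoff":  -5.0, "surprise": 0.60},
    }

    UNSURPRISING = 0.2  # threshold at or below which an outcome is "unsurprising"

    candidates = [o for o in outcomes.values() if o["surprise"] <= UNSURPRISING]
    focus_gain = max(o["payoff"] for o in candidates)
    focus_loss = min(o["payoff"] for o in candidates)
    print("focus gain:", focus_gain, "focus loss:", focus_loss)

Notice that thinking up a fifth outcome would change nothing about the four scores already assigned, and that the choice turns on weighing +10 against -15 rather than on any probability-weighted average.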

Shackle developed his potential surprise framework as a way to model individuals’ expectations when considering a capital investment. A firm facing a particular investment decision may never have those same choices again. If it spends, it could lose and potentially go bankrupt. If it saves, it might not get such an attractive offer in the future, or it may be outcompeted by others. In forming expectations about a potential investment, firms naturally compare the most optimistic reasonable scenario to the most pessimistic. But Shackle’s potential surprise theory can just as easily describe the psychology of a football coach calling plays. A coach aims to control the surprises on the field, employing strategies to anticipate his opponents’ moves and surprise them as much as possible. Indeed, former fighter pilot and current NFL statistics guru Brian Burke calculated that surprise is the biggest factor determining the success of onside kicks. Overall, onside kicks are successful (i.e. the kicking team recovers the ball) 26% of the time. Most teams only try them when they’re desperate, and when a team is trailing at the end of a game no one is surprised by an onside kick. But in other situations, when the opponents aren’t expecting them, teams recover about 60% of attempted onside kicks.

Neither the decision to call a football play nor the decision to make a capital investment is dominated by the calculation of probability-weighted historical statistics. Of course, considering what has worked and failed in the past is still smart practice – Shackle himself writes that it would be foolish to disregard probabilities calculated this way – but rational choice theory fails to depict the thought process of a decision maker facing a one-time-only choice with any psychological subtlety. To remember this, one need only pay attention to the fine print and the sped-up announcement at the end of the mutual fund advertisements at halftime: “Past performance does not guarantee future results.”

I know I've been absent, but I've been Quarking regularly, promise

Category : Uncategorized · No Comments · by Mar 24th, 2011

It’s been quite a while since I’ve written here, but that doesn’t necessarily mean that I’ve found a better way to express my ideas than the occasional essay on this blog. In fact, I’ve been writing a lot.

Last fall, I finished a year-long research project on the statistical foundations of Keynesian macroeconomics. It grew out of several strands of thought that came together after a fair amount of time spent in New York City’s great cathedral for thought, the 42nd Street Public Library. The project started as a final paper for Mark Blyth’s Foundations of Political Economy seminar, for which I had to write about the financial crisis using any of the theoretical frameworks we had developed over the semester. While reading Keynes’ General Theory, I noticed striking parallels between the theoretical relationship linking Keynes’ macroeconomic aggregates to individual market actors and the relationship linking bulk matter to the individual particles that compose it. At the time, I was finishing up my physics degree with a course on thermodynamics and statistical mechanics – two theories of matter on different scales – and researching the development of nineteenth century physics for my honors thesis. After many revisions, the paper was accepted by the peer-reviewed Journal of Philosophical Economics and is coming out in their May, 2011 issue.

Since October, I’ve been a regular contributor to 3QuarksDaily, writing a monthly “Monday Musings” column on philosophy, economics, and whatever else strikes my fancy. So far, I’ve discussed how to use probability to model decision making in a psychologically accurate way (focusing on football and finance), the dueling politics of fiscal austerity and stimulus spending, and the ontology implicit in BBC’s Planet Earth. I’ve also written a couple more: one on why credit rating agencies are systemically risky and another on David Harvey’s Marxist theory of capital overaccumulation.

From here on out, I’ll try to post on here whenever I’ve published something elsewhere, and perhaps put up a few original things as well. Thanks for reading.

Adventure Capital: Condos, Groupon and Big Pharma

Category : Uncategorized · No Comments · by Feb 21st, 2011

Originally posted on 3QuarksDaily.

The late economist Hyman Minsky wrote that after fortunes inflate on the back of a speculative bubble, and after investors’ irrational optimism and overvalued assets inevitably collapse, an economy enters a “period of revulsion,” when people remember that it’s risky to bet big on an uncertain future. Likewise, it’s always during the depths of a hangover that a drinker remembers how whiskey invites its own overconsumption and swears that the only way to avoid another descent into this purgatory is to never touch the stuff again. But after the fog lifts and with a clear head regained, he forgets the pain after the party and declares another Manhattan to be an eminently reasonable investment. Of course, the trick is to recall at just that moment how miserable you’ll be after another three. A pessimistic economist faces the same cyclical popularity as a teetotaling friend; a consoling voice the morning after becomes a buzz killer as soon as night falls again.

For economists focused on capitalism’s tendency to foment crisis, it’s important to make the most of investors’ revulsion. Indeed, if there’s ever a time for Marxists to find an eager audience for their theories of capitalist overaccumulation, it’s in the wake of a financial crisis. The moment is particularly ripe for David Harvey, a Marxist trained as a geographer, who has made a career of explaining why surplus capital has such an affinity for real estate and describing how overproduction regularly reconfigures the spaces in which we live. Both Ben Bernanke and a slew of Neo-Keynesians led by Paul Krugman have pointed to a “global savings glut” – originating in the current-account surpluses of net exporters such as China, Japan, and Germany and flowing to the bloated real estate markets of the United States and Western Europe – as the fundamental imbalance responsible for the latest boom and bust. To Harvey and his fellow Marxists, the global savings glut is not a historical fluke but an instance of an intrinsic tendency for capitalist economies to overproduce, and the great North Atlantic real estate bubble is but another temporary answer to their perpetual problem: What can absorb the great mass of overaccumulated capital?

Insofar as capital always seeks to realize a profit by selling commodities for more than they cost to produce, the economy always requires new inputs to account for the creation of this surplus value. As Benjamin Kunkel explains in his illuminating review of Harvey’s work, “the full cash value of today’s product can therefore be realized only with the assistance of money advanced against commodity values yet to be produced.” It is credit, which Marx includes in the broader category of “fictitious capital,” that permits “money values backed by tomorrow’s as-yet unproduced goods and services to be exchanged against those already produced today.” Of course, this forward-looking financing scheme requires that those future values are actually produced, so banks can be repaid and accumulation can continue. As Harvey writes,

A proper allocation of credit can ensure a quantitative balance between [today’s consumption and tomorrow’s production]. The gap between purchases and sales… can be bridged, and production can be harmonized with consumption to ensure balanced accumulation. Any increase in the flow of credit to housing construction, for example, is of little avail today without a parallel increase in the flow of mortgage finance to facilitate housing purchases. Credit can be used to accelerate production and consumption simultaneously.

In a steadily growing economy, haute finance expands credit to both consumers and producers, anticipating that the latter will sell enough tomorrow to cover the former’s spending today.

The danger, and for Harvey the inevitability, is that financiers profiting by underwriting both the demand and the supply sides of a growing economy will overindulge, eliciting the production of more commodities than can possibly be sold for a profit and the creation of more fictitious capital than can ever be backed by actual production. In the North Atlantic real estate bubble, as clear a case of overindulgence as there ever was, this meant the construction of more houses than could possibly be sold and the issuance of more mortgage debt than could possibly be repaid with workers’ long-stagnant wages. As a geographer, Harvey is interested in the physical results of this continual overproduction. The “spatio-temporal fix,” as he calls it, yields more than just unwanted condos; the necessary expansion of infrastructure that goes along with these binges – for example America’s post-war suburbanization, the global spread of identical glass and steel office towers, Dubai and Saudi Arabia’s booming ex nihilo oases, and China’s stimulus-fueled real estate bubble – fundamentally reshapes where and how humans live their lives.

As an economist, however, Harvey focuses on property values, long a sore subject for Marxist theory. If all value represents some quantity of real human labor expended, as Marx holds, then it doesn’t make sense that a plot of unimproved land can have a value, even though no labor has transformed it yet. In his 1982 Limits to Capital, Harvey proposes to treat ground rents, the value of a property beyond anything built on it, as a pure financial asset. “Like all such forms of fictitious capital,” Harvey writes, “what is traded is a claim on future revenues, which means a claim on future profits from the use of the land or, more directly, a claim on future labor.” Thus, property values can be considered fundamentally speculative, regardless of whether a given price seems appropriate or has clearly been inflated by irrational exuberance. Real estate, then, is the perfect capital sink to absorb the surplus value created by a broader speculative binge. As an intrinsically speculative asset, property can easily accrue the added value necessary to finance excess consumption elsewhere in the economy. As a natural resource, a piece of earth has the aura of a good always in limited supply, helping rationalize its rising price. As a physical place, land provides a site for building, allowing it to absorb a great deal of additional surplus labor and surplus capital before starting to look overvalued. The credit system permits capital to finance both the supply and demand of real estate on borrowed money, so long as property values continue to rise.
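
The logic here is just that of discounting a claim on future revenue, and a toy calculation makes the speculative character plain. All figures below are invented:

    # Land priced as a pure financial asset: the present value of a
    # claim on future rents. All numbers are invented for illustration.
    def land_value(rent, discount_rate, rent_growth, years=50):
        return sum(
            rent * (1 + rent_growth) ** t / (1 + discount_rate) ** t
            for t in range(1, years + 1)
        )

    print(round(land_value(10_000, 0.06, 0.02)))  # sober rent expectations
    print(round(land_value(10_000, 0.06, 0.05)))  # bubbly rent expectations

At a 6 percent discount rate, nudging expected rent growth from 2 percent to 5 percent nearly doubles the land’s “value” before a single brick is laid; the price moves on expectations alone.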

This treatment of property values is easily extended to another speculative landscape: the Internet. The way venture capital poured into not-yet-or-maybe-never-profitable dot-com startups in the late ‘90s is akin to how developers’ dollars rapidly inflated the value of the ground under Floridian swamps in the early ‘00s. Both pots of money were allocated in anticipation of the properties’ future productivity. The inflowing capital, whether it went to building e-commerce engines or McMansions, seemed to justify the rising property values. Credit financed years of spending (on servers and software engineers in one case and on roads, schools, and stainless steel kitchen appliances in the other) before the assets’ overvaluation became too obvious to ignore. Today, after overproduction has left a housing glut and a real estate market unfit for further investment, capital is returning to cyberspace. The gathering buzz surrounding the impending IPOs of Facebook and Groupon and a possible $10 billion purchase of still-profitless Twitter is based on a shared belief that social media will bring real future production, unlike last time. But extending Harvey’s analytical framework to digital properties illuminates how the Internet now serves the same systemic purpose as real estate: It is an absorbent destination for overaccumulated capital. Whether or not the market judges $15 billion a fair price for Groupon, Harvey’s work suggests viewing its capitalization as a fundamentally speculative investment in fictitious capital.

Marxists see the search for assets that can absorb the economy’s overaccumulated capital as the necessary papering over of a fundamental capitalist contradiction. As the coordinated lowering of capital controls has increased the fluidity of international financial markets, liquid capital can chase profits around the globe. The non-linearity of investment fads – think of the surging popularity of emerging markets, mortgage-backed securities, and dot-com stocks – makes this profit chase an intrinsically bubbly undertaking, and one can tell a compelling history of the macroeconomic business cycle by tracking the flows of capital among the three fundamental asset classes: securities, commodities, and real estate. The power of Harvey’s theory of property values lies in his analysis of the ways in which capital finances both the demand and supply sides of a speculative industry, allowing asset values to continue inflating until someone declares that things have gotten out of hand.

Consider, for instance, the financial dynamics of the American health care industry; the amount of capital and labor allocated to it is astounding. In 2008, the sector weighed in at a monstrous 16% of GDP, easily outstripping all other nations’ medical systems, according to the OECD. The second-most profligate country, France, spent just over 11% of GDP on health care. Unlike more centralized systems, the American medical industry is structured to incentivize the prescription of high-cost, cutting-edge therapies rather than to fund cheaper preventative care. Decentralized in the name of the free market, the existing American health insurance structure has defied fundamental economic logic, financing an ever-increasing demand for expensive medical services even as prices continue their seemingly interminable rise and the quality of care provided stagnates. Surgeon and New Yorker contributor Atul Gawande has made a career out of highlighting intuitive, cheap, and effective cost-control and quality-improving strategies, such as employing operating room checklists and dispatching teams of nurses and social workers to keep tabs on the highest-cost patients and reduce their emergency room admission rate. Despite their low cost and effectiveness, the fixes Gawande champions struggle to attract broad support from a medical industry that nets more from a bypass surgery and a life-long course of statins than from quarterly appointments with a nutritionist. The result is an intrinsically shortsighted health care system unable to tackle America’s stubborn and costly public health problems, such as obesity. Although many of the cost-saving components of Obamacare are intended to rationalize a permissive and inefficient insurance regulatory system, it remains to be seen whether the law will be able to sufficiently alter the existing incentive structure, even as it drives millions of new customers into the arms of private insurance companies.

Capital is doing just as well on the supply side. The biomedical technology sector is among America’s most successful, thanks in large part to a financing system that makes biotech a lucrative target for venture capital. Despite the massive sunk costs endemic to funding hit-or-miss biotech research – the industry has lost nearly $100 billion since 1976, according to an executive at Genentech – the potential upside to drug research is enormous thanks to a set of globally enforced intellectual property laws that guarantee a steady stream of profits for any drugs that gain FDA approval. Key industry victories, such as the provision of the Medicare Part D prescription drug bill that bans Medicare from using its large market share as leverage to bargain for lower prices, help maintain unnecessarily high drug prices. Internationally, economic rents derived from intellectual property claims, including pharmaceutical patents, are an increasingly important revenue source for the US, as IP-intensive industries accounted for approximately 60% of total US exports from 2000 to 2007. And domestic investors are protected from competition as well; even as foreign purchasers of Treasury debt fund this extravagant system, overseas capital faces protective restrictions on foreign direct investment in American biotech companies.

Although a medical system unable to contain administrative, provider, or drug costs may be good for financial capital, it is as unsustainable for the country at large as a housing bubble. The weight of insurance obligations is a leading drain on America’s international competitiveness, accelerating the decline of domestic manufacturing and initiating a race to the bottom in coverage that the new health law aims to arrest. The implications for labor are serious. If increases in health care costs continue to outpace GDP growth, workers’ income available for non-medical expenses will continue to shrink, medical expenses will remain the leading cause of personal bankruptcies (another arena in which capital has won substantial legislative victories), and the right will continue to cite generous benefits packages as justification to further erode collective bargaining rights.

Overall, Harvey’s work inhabits the familiar Marxist duality between trenchant economic analysis and leftist disgust. He describes financial capital as a class buoyed by a theoretical utopianism and emboldened by a practical disregard for macroeconomic consequences. During the housing bubble, it was “as if the banking community had retired into the penthouse of capitalism where they manufactured oodles of money by trading and leveraging among themselves without any mind whatsoever for what the working people living in the basement were doing.” While financing both sides of a Ponzi-like economy with privatized gains and socialized losses, capital seems as unconcerned with the massive misallocation of resources and erosion of competitiveness as H.G. Wells’ Eloi were with the subterranean world of the Morlocks. And while few pronounce medicine a bubble waiting to pop, it resembles our recent real estate overindulgence in both the mechanisms of its malignant growth and its potential to cripple America’s long-term economic health.

Should we fear fear itself?

Category : Uncategorized · No Comments · by Nov 29th, 2010

Originally posted on 3QuarksDaily.

People are worried about the Euro. As bad news flows out of Europe – persistent unemployment, popular discontent over painful austerity measures, and catastrophic bank losses tied to still-deflating real estate markets – international investors continue to cast doubt over the Euro-Zone’s short- and long-term stability. Fear of at least a partial disintegration of the monetary union is rampant. Indeed, Morgan Stanley recently released the results of a survey of 150 of its clients; while only 3 percent of the investors thought there was more than a 60 percent chance that the Euro-Zone would break up, three-quarters of the respondents thought there was some probability of a breakup. These statistics raise a double concern. First is the fear that this nightmare scenario will come to pass, an unprecedented event that could fatally wound investor confidence in the Euro, potentially eliminating its viability as a secure store of value. Second, one might fear this fear itself, as these investors’ worries might contribute to their own realization.

Financiers justify the distinctive double movement of the last several decades by arguing that markets are efficient. Neither the proliferation of capital markets nor the wearing away of regulations on them would be legitimate cause for concern if markets could be counted on to allocate capital to the areas of the economy that deserve it. Yet this period’s continual booms, busts, and crises provide a substantial and ever-increasing body of evidence that these supposedly rational capital markets are, in fact, anything but. As much as Florida’s decaying, uninhabited subdivisions attest to the dangers of irrational exuberance, Ireland’s swaths of unsold houses and imploding, too-big-to-fail banks attest to the power of expectations. They demonstrate that rather than allocating resources on the basis of soberly considered “economic fundamentals,” capital markets have a stubborn tendency to synthesize their own realities from the grist of investors’ expectations.

Consider how the now-familiar contagion of financial crisis replays itself in each of the PIIGS (Portugal, Italy, Ireland, Greece, Spain), adding to the uncertainty about the Euro-Zone’s continued solidarity. First, the hard slap of reality deflates a convenient and officially supported untruth, swamping the government’s budget with red ink. In Greece, it was Prime Minister George A. Papandreou’s acknowledgement that his predecessor had been hiding massive government obligations in complex financial instruments, cleverly designed by the whizzes at Goldman Sachs. Ireland’s difficulties stem from the painful popping of its massive real estate bubble, which hit its banking sector with losses so big they overwhelmed the Irish state’s aggressive bailout attempts. In both cases, deficits quickly piled up and investors started to worry that the governments might default on their sovereign debts.

This is when a new and dangerous set of self-reinforcing expectations took hold of the situation. To appease investors worried about the riskiness of their debts, the Greek and Irish governments were forced to offer higher yields on their bonds. Perversely, increasing the cost of servicing their sovereign debt further strained their budgets and made defaults more likely. This, in turn, made international investors more cautious about lending to these governments, necessitating further increases in bond yields. Worse, the more concerned investors become about any one of the PIIGS, the more likely it is that the contagion will spread to the other fragile countries gorging themselves at the trough of international capital markets.
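
The arithmetic of this spiral is easy to caricature. In the toy loop below, every parameter is invented and calibrated to nothing; it merely shows how a risk premium that responds to the debt burden can swamp even a government running a primary surplus:

    # A stylized doom loop (all parameters invented): perceived default
    # risk rises with the debt burden, the bond yield rises with the
    # risk, and the interest bill feeds back into next year's debt.
    debt = 0.90        # debt as a share of GDP
    SAFE_YIELD = 0.03  # yield demanded of a riskless borrower

    def perceived_risk(debt):
        return max(0.0, 0.5 * (debt - 0.6))  # invented functional form

    for year in range(1, 9):
        bond_yield = SAFE_YIELD + 0.20 * perceived_risk(debt)
        interest = bond_yield * debt  # debt rolled over at the new yield
        debt += interest - 0.02       # despite a 2%-of-GDP primary surplus
        print(f"year {year}: yield {bond_yield:.1%}, debt {debt:.0%} of GDP")

Each pass through the loop raises the yield, which raises the interest bill, which raises the debt, which raises the yield; nothing about the country’s underlying economy needs to change for its position to deteriorate.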

This self-reinforcing cycle was hardly unpredictable. Indeed, John Maynard Keynes’ General Theory of Employment, Interest, and Money is, at root, a treatise on the power our expectations have over anything like economic “reality.” Under ideal market conditions, there should be a diversity of expectations about any particular economic question. A mix of bulls and bears ensures that every seller can find a buyer without reducing his assets to fire-sale prices. Yet the patchwork of institutions with the power to influence investors does not always foster this necessary diversity of sentiment. While the International Monetary Fund and the European Central Bank designed massive rescue packages to shore up confidence in Ireland’s and Greece’s ability to repay their debts, other institutions function to exacerbate financial crises. For example, credit-rating agencies such as Standard & Poor’s and Moody’s have continually downgraded the sovereign debt of countries with troubled balance sheets, providing official reinforcement to the feedback loops threatening the PIIGS. Every rating downgrade unifies global expectations, encouraging investors to bet against these countries all at once, deepening the problem.

There is a dark irony to the credit-rating agencies’ conservative pronouncements as it was only a few years ago that they were bestowing their highest grades on now-toxic mortgage-backed securities. How is it that these ratings agencies calculated that the Republic of Ireland is a substantially riskier investment than a collection of no-money-down mortgages from Tampa? Credit-rating agencies play an important role in the lightly regulated global economy; experts at these companies are supposed to judge the long-term stability of various assets, from sovereign debt to collateralized debt obligations. Their judgments form the basis on which investment firms determine their exposures to risk. Although existing regulatory regimes assume that credit ratings are sober evaluations of assets’ fundamental strengths and weaknesses, in practice, raters are frequently caught up in the same illusions as traders, since they both subscribe to the same theoretical orthodoxies and use similar models. The result is profoundly destabilizing: Instead of pressuring traders to reconsider the Street’s conventional expectations, credit ratings “confirm” the validity of traders’ bets, both inflating bubbles with a false sense of security and violently popping them once feelings start to turn sour.

This myopic logic, which takes the prevailing expectations about an asset to be more important than its “real” strength, usually works, at least in the short term. Both open exchanges, such as the NYSE, and private financial companies’ “market making” activities increase the allure of certain assets by making them appear more liquid. The liquidity that normally functioning markets provide allows professional investors to restrict their attentions to the short term, decreasing the relevance of long-term growth potential or solvency. Keynes compares markets dominated by professional investors to

those newspaper competitions in which the competitors have to pick out the six prettiest faces from a hundred photographs, the prize being awarded to the competitor whose choice most nearly corresponds to the average preferences of the competitors as a whole; so that each competitor has to pick, not those faces which he himself finds prettiest, but those which he thinks likeliest to catch the fancy of the other competitors, all of whom are looking at the problem from the same point of view. It is not a case of choosing those which, to the best of one’s judgment, are really the prettiest, nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practice the fourth, fifth and higher degrees.

In supposedly liquid markets, the optimal investment strategies do move to these higher levels of analysis, further from a strategy based on fundamental economic value. Indeed, studies indicate that markets behave quite differently depending on how many degrees of expectations agents consider. In general, “it is not the case that the average expectation today of the average expectation tomorrow of future payoffs is equal to the average expectation of future payoffs.” Rather than stabilizing, these markets tend to accelerate; economic models suggest that asset bubbles and crashes are natural features of such self-reinforcing market dynamics.
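
The cleanest illustration of this result is the classic guessing-game version of Keynes’ contest, sketched below. It is a textbook toy, not the quoted study’s model: every player tries to guess two-thirds of the average guess, and each additional level of reasoning about the average’s expectation of the average moves the answer further from the naive anchor.

    # Keynes' beauty contest in its classic "guess 2/3 of the average"
    # form, a standard illustration of higher-order expectations.
    ANCHOR = 50.0  # the naive, "fundamentals" guess of a level-0 player
    P = 2 / 3      # every player targets P times the average guess

    def guess(level):
        """A level-k player best-responds to a crowd of level-(k-1) players."""
        return ANCHOR if level == 0 else P * guess(level - 1)

    for k in range(6):
        print(f"level {k}: guess = {guess(k):.1f}")

Each extra round of anticipating what the average expects the average to expect pulls the guess further from the anchor, which is precisely why the iterated average expectation need not equal the simple one.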

Despite the obvious risks of betting big on something as ephemeral as popular sentiment, traders make billions in management fees by convincing clients that they can consistently beat the gun. On the upsides of bubbles, this strategy easily pays off, usually beating the returns of investment strategies based on economic fundamentals. However, as soon as expectations turn against a particular asset, the process works in reverse and huge losses can wipe out years of hard-won gains. During these crashes, investors’ bearish expectations synchronize, causing liquidity to dry up as sellers drastically outnumber buyers. Indeed, Keynes maintains that “of the maxims of orthodox finance none, surely, is more anti-social than the fetish of liquidity, [since] it forgets that there is no such thing as liquidity of investment for the community as a whole.” Despite traders’ efforts to insulate themselves from risk with credit default swaps and other complicated hedging strategies, someone will be left holding the bag, full of near-worthless assets. If worries are widespread, rather than restricted to a particular security, this fear can spread from one asset to another as investors try to unwind their risky positions before their competitors. Thus, trying to “beat the gun” on the downside by anticipating others’ expectations of the average expectation about the future not only accelerates the fire-sale in progress but also spreads the contagion to other, unrelated assets. This is why the Portuguese government is watching the Irish situation so closely: Portugal knows it’s next in line.

In the wake of financial crises, it’s common for financiers to renounce the riskiest, most highly leveraged investment strategies in favor of something more “traditional.” Indeed, Gordon Murray, a long-time Wall Street veteran now dying of brain cancer, recently released a book condemning the entire concept of active money management. People would do better over the long term pinning their portfolios to an index like the S&P 500, he claims, than paying high-priced traders to try to beat the market. While some people burned by the recent crises may heed such advice for a time, history suggests that this conservatism will last only as long as the painful memories retain their sting. This moment has likely already passed. So long as traders can convince the holders of international capital that they can beat their competitors to the next big thing, one would be well served by recalling FDR’s first inaugural address. Then, as now, negative speculation threatened banks, markets, and political stability. Fear itself might not be the only thing we have to fear today, but it’s certainly one of the biggest.

Torture, Doctors, $

Category : Uncategorized · No Comments · by Apr 20th, 2009

Today, the BDH published my last column of the semester, which argued that the Political Theory Project acted irresponsibly when it paid John Yoo to speak at Brown in February. Though the column is a little late on the Brown side of things, the case(s) against Yoo and company have only gotten stronger with further Obama-powered memo releases.

Speaking of torture, I was working on an essay (On Defining Torture) in my writing seminar earlier this semester about the unintended consequences of using medical language to legally define torture. Though I thought I had some “good” material to work with back in early February, last week’s leaked, top-secret ICRC report was unbelievable:

Medical personnel were deeply involved in the abusive interrogation of terrorist suspects held overseas by the Central Intelligence Agency, including torture, and their participation was a “gross breach of medical ethics,” a long-secret report by the International Committee of the Red Cross concluded.

Based on statements by 14 prisoners who belonged to Al Qaeda and were moved to Guantánamo Bay, Cuba, in late 2006, Red Cross investigators concluded that medical professionals working for the C.I.A. monitored prisoners undergoing waterboarding, apparently to make sure they did not drown. Medical workers were also present when guards confined prisoners in small boxes, shackled their arms to the ceiling, kept them in frigid cells and slammed them repeatedly into walls, the report said.

Facilitating such practices, which the Red Cross described as torture, was a violation of medical ethics even if the medical workers’ intentions had been to prevent death or permanent injury, the report said. But it found that the medical professionals’ role was primarily to support the interrogators, not to protect the prisoners, and that the professionals had “condoned and participated in ill treatment.”

From my essay,

The situation is similar with regard to torture and interrogations; sections 2.067 and 2.068 of the AMA ethics code unequivocally prohibit physicians from participating in either [torture or executions]. The rules not only prevent doctors from providing material assistance to interrogators but also prohibit supplying or withholding their professional knowledge in the service of intelligence agents. Physicians may not even “monitor interrogations with the intention of intervening in the process, because this constitutes direct participation in interrogation.” These standards clearly oppose the kind of physician involvement that would be necessary were torture legally defined by medical standards of harm. Even ex post facto medical evaluations of previous interrogations would be problematic, since doctors’ participation would enable intelligence personnel to get away with harmful actions that don’t meet the bar for classification as torture. This is analogous to the existing ban on doctors pronouncing inmates dead on the execution table. Consequently, medical participation at any stage in the interrogation process would force doctors to either violate their professional codes of conduct or require groups such as the AMA to unreasonably weaken their ethical expectations….

Deploying medical knowledge in this situation reasserts a level of authority, organization, control, and professionalism that counteracts the frightening sense of the “War on Terror” as an abusive free-for-all. But the very characteristics of stability and respect that make medicine an attractive basis for defining legal categories also serve to conceal the acts themselves. Just as pancuronium bromide conceals the corporeal violence of execution, cloaking torture in the discourse of medicine bestows human rights abuses with a veneer of respectability. For people seduced by the allure of information extracted by waterboarding an “al Qaeda operative,” knowledge of medical supervision might be sufficient to excuse this otherwise objectionable practice. One can already imagine Limbaugh’s quip: “Liberals should stop complaining, these terrorists’ interrogations are conducted under the supervision of a doctor. That’s more than many Americans without health insurance can say about their own lives.”