Nick Werle's homepage



Mortgage fraud continues, even if no one is watching

Category: Uncategorized · No Comments · May 23rd, 2014

In September 2004, the FBI held a press conference to warn about the growing problem of mortgage fraud. “It has the potential to be an epidemic,” warned Chris Swecker, assistant director of the FBI’s Criminal Investigative Division. “We think we can prevent a problem that could have as much impact as the S&L crisis.” What might have seemed like an overblown publicity stunt at a time when the booming property market still looked healthy turned out to be far too modest. Exactly four years later, the US mortgage market brought the global financial system to the brink of collapse. Aggressively fighting mortgage fraud would not have stopped the housing bubble from inflating and bursting, but cutting down the volume of worthless debt could have reduced the pain.

As the waves of foreclosures mounted in the wake of the financial crisis, the federal government wanted to take a hard stand against financial crime. In May 2009, as the extent of the rot in the US housing market became clear, President Obama signed the Fraud Enforcement and Recovery Act, which expanded the federal government’s ability to prosecute fraud in previously unregulated sectors of the mortgage industry. It also authorized hundreds of millions of dollars for the Department of Justice to fight complex financial fraud.

At a press conference in October 2012, the Attorney General declared the initiative a success. His statement claimed that DOJ had charged 530 people with mortgage fraud-related crimes in cases that involved more than 73,000 homeowners and total losses estimated at over $1 billion.

But an internal audit of DOJ’s mortgage fraud efforts released by the Department’s Inspector General in March 2014 showed how little was actually done to combat mortgage fraud. The audit concludes that neither Congress nor DOJ treated mortgage fraud as a high priority, despite their public statements. It even found that the numbers quoted by the Attorney General in 2012 were wildly exaggerated. DOJ’s criminal investigations only snared 107 defendants in cases related to just $95 million in homeowner losses, 91% less than the Attorney General had publicly claimed. Worse, DOJ continued to cite the inflated numbers in press releases for 10 months after learning they were false.

Congressional inaction might help explain why DOJ’s real results didn’t live up to its aspirational numbers. Financial fraud investigations are complex and difficult to prove, which makes them expensive. In 2009, Congress “authorized” $165 million per year to fight fraud, but when budget time came around, it appropriated only 17% of that funding. In 2010, various DOJ agencies received a total of $34.8 million, with the bulk going to the FBI. In 2011, the FBI’s $20.2 million was the entire appropriation. It is impossible to bring complex fraud cases without teams of investigators, accountants, and attorneys, but the DOJ Criminal Division in Washington received enough money for only five new hires to investigate financial fraud.

But this is a problem of priorities, not just funding. Despite the additional funding and public statements, the FBI Criminal Investigative Division ranked complex financial crime as the lowest of six criminal threat categories and mortgage fraud as last among three subcategories. At the FBI field offices that auditors visited, mortgage fraud was listed as a low priority or not listed as a priority at all. These offices included New York, always a center of financial crime, and troubled housing markets such as Miami and Los Angeles. These results echo a 2005 audit, which found that reassigning hundreds of FBI agents from criminal investigation to counter-terrorism work had hurt the government’s ability to deter white collar crimes such as bank fraud and mortgage fraud.

In their defense, FBI officials argue that the end of the housing boom and the tightening of lending standards have cut down the rate of fraudulent mortgage purchases, with classic schemes involving straw buyers on the wane.

However, the threat has not disappeared; it has just morphed along with the times. More than 50,000 homes fell into foreclosure for the first time during January of this year, and total household debt is increasing again for the first time since the crisis, according to RealtyTrac and the Federal Reserve Bank of New York. Borrowers struggling with high debt are ripe targets for malicious debt consolidation, loan modification, and foreclosure rescue schemes, which promise to save homeowners from their creditors in exchange for an up-front fee. Of course, many of these companies do nothing to help.

The FBI reported that for the first time in many years, these foreclosure rescue schemes have surpassed mortgage origination fraud as the housing industry’s greatest criminal threat, but the audit found that few of these cases have been fully investigated. These scams typically represent smaller overall financial losses that fail to meet federal prosecutors’ thresholds. In response, FBI agents said they put these cases into “unaddressed work files,” while waiting for the dollar losses and fraudulent activity to mount.

While shutting down foreclosure rescue scams may not grab headlines with big numbers, these frauds prey on some of the most vulnerable Americans, pushing them deeper into debt and causing untold psychological damage as foreclosures proceed unabated. If prosecutorial thresholds are encouraging the FBI to let these fraudsters continue to steal from desperate homeowners, DOJ should abandon those thresholds.

And if the Department wants to go after some fish big enough to excite the press, perhaps it should reopen investigations into the big mortgage servicing companies. Although the five biggest mortgage servicing banks signed a $25 billion national settlement in 2012 and promised to fix systemic foreclosure abuses, evidence continues to emerge that they still wrongly foreclose on homeowners, refuse to allow customers to submit missing paperwork, fabricate loan documents in foreclosure proceedings, and deny loan modifications to deserving borrowers. In October, the New York State Attorney General filed a lawsuit accusing Bank of America and Wells Fargo of failing to live up to these customer-service promises. Where is the Department of Justice?

When the individually rational sums to the collectively insane

Category: Uncategorized · No Comments · May 22nd, 2014

Originally posted on 3QuarksDaily.

 

The most striking aspect of Isaac Asimov’s Foundation is the pacing of its narrative. The story, which tracks the fall of the Galactic Empire into what threatens to be a 30,000-year dark age, never follows characters for more than a few chapters. The narrative unfolds at a historical pace, a timescale beyond the range of normal human experience. While several short sections might follow one another with only hours in between, gaps of 50 or 100 years are common. The result is a narrative in which characters are never more than bit players; the book’s real focus is on the historical forces responsible for the rise and fall of planets. The thread holding this tale together is the utopian science of psychohistory, which combines psychology, sociology and statistics to calculate the probability that society as a whole will follow some given path in the future. The novel’s action follows the responses to a psychohistorical prediction of the Empire’s fall made by Hari Seldon, the inventor of the science, who argued by means of equations that the dark ages could be reduced to only a single millennium with the right series of choices. In comparing the science of psychohistory with the actual events that accompany the Galactic Empire’s fall, Asimov’s time-dilated narrative weaves together disparate theories of history and science articulated around the problem of predicting the future, the historical primacy of crises, and the irreducible difference between studying an individual and analyzing a society as a whole. In Asimov’s imagined science, however, we can trace the real logic of macroeconomics and begin to understand why Keynes could never produce such dramatic predictions.

The goal of Asimov’s psychohistory is always the prediction of future events, but these prognostications are different from the usual fictional presentiments in that they cannot determine exactly what will happen. It is a probabilistic science. Of course, this leaves open the possibility of psychohistorians trying to guide society toward the best possible future, as long as the population at large doesn’t know the predictions’ details. But more importantly, psychohistory’s probabilistic nature limits the scale on which its predictions are useful:

“Psychohistory dealt not with man, but with man-masses. It was the science of mobs; mobs in their billions. It could forecast reactions to stimuli with something of the accuracy that a lesser science could bring to the forecast of a rebound of a billiard ball. The reaction of one man could be forecast by no known mathematics; the reaction of a billion is something else again.”

Like all statistical tools, it relies on the law of large numbers as a way to get past the intrinsic randomness one faces in anticipating human behavior. Practically, this means that psychohistory can only predict events for a population at large; it has nothing to say about any particular individuals.
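
To make that reliance on the law of large numbers concrete, here is a toy simulation of my own (the 30% propensity is an arbitrary assumption, purely for illustration; none of this comes from Asimov): forecasting one individual’s choice is barely better than a guess, but the aggregate rate across a large population settles down to a stable, predictable number.

```python
# Illustrative sketch: individual choices are unpredictable, but their
# aggregate frequency converges as the population grows (law of large numbers).
import random

random.seed(42)

def one_person() -> bool:
    """A single, effectively unpredictable binary decision (assumed 30% propensity)."""
    return random.random() < 0.3

# Forecasting one person is close to a coin flip...
print("One individual chooses 'yes':", one_person())

# ...but the rate across a mob is stable and forecastable.
for n in (10, 1_000, 1_000_000):
    rate = sum(one_person() for _ in range(n)) / n
    print(f"Observed rate among {n:>9,} people: {rate:.3f}")
```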

Despite Foundation’s seemingly fantastic premise – the ability to know what will happen tens of thousands of years into the future – Asimov’s focus on this fundamental limit to psychohistory’s predictive power keeps the story firmly in the realm of science fiction. Psychohistory is, in a sense, an idealized form of macroeconomics, insofar as economists aim to predict and plan for the best possible future for society. However, the theoretical connection between these two “sciences” is more profound. John Maynard Keynes’ essential insight, which forms the epistemological core of macroeconomics, is the discontinuity between mathematical descriptions of the large and the small, the society and the individual. Indeed, all statistical sciences – genetics, epidemiology, and quantum mechanics, for example – face this same intrinsic limitation. While we have worked out many statistical laws identifying the genes responsible for congenital disease, the behaviors that raise the risk of spreading an infectious disease, and how electrons flow through superconductors, none of these sciences can say with any certainty whether a genotype will manifest during a person’s life, whether a patient will contract a disease, or where an electron will be at a given time.

This shared epistemological limit is not a coincidence. Keynes’ 1936 General Theory of Employment, Interest, and Money sought to explain why the Depression-era economy seemed incapable of providing jobs to the millions of unemployed workers, who were starving and clearly willing to work for any wage. Unlike his predecessors, Keynes approached this problem by analyzing the collective behavior of the laboring masses rather than the imagined bargaining strategies of a single employer hiring a single worker. My research into his mathematical work, published more than a decade before the General Theory, suggests that Keynes modeled macroeconomics, his new statistical theory of the economy-at-large, on thermodynamics, the modern explanation for the behavior of bulk matter. Since Keynesian macroeconomics starts on the aggregate level, rather than from a theory of a rational actor’s decision-making like Ricardian classical economics or neoclassical rational expectations theory, Keynes faced the challenge of linking the dynamics of the national economy to the psychology of the individuals that compose it. This is precisely the theoretical problem Keynes solved with the statistical methods of thermodynamics and statistical mechanics.

Thermodynamics describes how a quantity of gas changes as a whole, or how a chemical reaction will proceed through intermediate equilibria, by measuring temperature, pressure, and volume. It achieves this, however, without assuming anything at all about the small-scale structure of matter. Indeed, thermodynamics was developed before belief in the reality of atoms was widespread among physicists, and Einstein’s subsequent proof of their existence did nothing to change the science. Thus, the equations of thermodynamics give no information about the motions of individual gas molecules. An early attempt to bridge this gap, Bernoulli’s kinetic theory, assumed that a given combination of temperature, volume, and pressure determined that all the gas molecules were bouncing around the container at a uniform speed. Later, however, James Clerk Maxwell and Ludwig Boltzmann showed that this was impossible. Instead, they argued that a gas in a certain thermodynamic state is composed of an ensemble of molecules all moving in fundamentally random directions and at random speeds. In other words, even if one knows the exact thermodynamic state of a gas, it is impossible to determine the motion of an individual molecule, and vice versa. Statistical mechanics is the link between these two levels; it characterizes the probabilistic distributions of molecular velocities that correspond to gases in thermodynamic equilibria. The two fields are joined by one of science’s slipperiest terms, entropy, which measures the microscopic uncertainty intrinsic to any known macroscopic situation.
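
For readers who want to see that macro-micro link written down, the standard Maxwell-Boltzmann speed distribution and Boltzmann’s entropy formula (textbook results, quoted here only for illustration) make it explicit:

$$
f(v) \;=\; 4\pi \left(\frac{m}{2\pi k_B T}\right)^{3/2} v^{2}\, e^{-\frac{m v^{2}}{2 k_B T}}, \qquad\qquad S \;=\; k_B \ln W
$$

A single macroscopic temperature $T$ fixes only a probability distribution $f(v)$ over molecular speeds, never the speed of any particular molecule, and the entropy $S$ counts the number of microstates $W$ compatible with the known macrostate – exactly the “microscopic uncertainty” described above.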

In trying to work out the individual’s relationship to the macroeconomic aggregate, Keynes faced a similar mereological problem. Macroeconomics uses aggregate measurements – e.g. unemployment, GDP, and inflation – to describe the health of the economy overall and predict its future dynamics. Yet it is unclear how individual experience links up with such large-scale concepts. Whereas classical theory assumes that an economy of 300 million “makes decisions” identically to the total of 300 million individual choices, macroeconomics is premised on the idea that the aggregate economy acts as an “organic whole,” akin to a thermodynamic gas and irreducible to the arithmetic sum of rational individuals. Complex feedback loops, which might make one person’s rising costs become another’s disposable income and yet a third’s sales revenue, multiply the effects of investment, consumption, and staffing decisions through the economy. Furthermore, people don’t think and act identically; the same background macroeconomic “facts” incite various people to form drastically different, even contradictory, expectations about the future. A direct connection between aggregates and individuals is even further frustrated by the importance of the income distribution: An economy can experience growth – as measured by a rising real GDP – even as the majority of the actual people living in it lose purchasing power. Indeed, this has been the reality for the past several decades, as income inequality in America has exploded. Finally, the prevalence of bank runs, stock market crashes, and asset bubbles attests to the destructive power of rational irrationality, cases in which many individuals’ personally rational decisions (to withdraw their deposits from a shaky bank) sum to produce a collectively insane result (a bank failure).

These blockages, which prevent a simple micro-macro relationship in economics, posed a mathematical as well as a philosophical problem for Keynes. He was able to solve both aspects of this difficulty by importing the probabilistic approach that had joined the macro-scale science of thermodynamics to the particulate-level explanation of statistical mechanics. This solution is crystallized in two rate-determining macroeconomic quantities – the Marginal Propensity to Consume and the inducement to invest – that connect the aggregate dynamics of an economy to the psychologies and behaviors of individuals living within it. (For the details of this argument, please check out my recent article in the Journal of Philosophical Economics.) Following statistical mechanics, Keynesian macroeconomics forswears assumptions of strict rationality, even though the resulting uniformity would make the necessary mathematics more tractable.
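
As a concrete illustration of how one such quantity works (the standard textbook formulation, not the argument of the article itself), the Marginal Propensity to Consume is the fraction of an extra unit of aggregate income that households spend rather than save, and it fixes the multiplier that scales any change in spending up to a change in aggregate income:

$$
MPC \;=\; \frac{\Delta C}{\Delta Y}, \qquad\qquad \Delta Y \;=\; \frac{1}{1 - MPC}\,\Delta G .
$$

The MPC is a statistical property of the population as a whole – a distribution of spending behaviors – not a claim about how any particular household will behave, which is precisely the statistical-mechanical posture sketched above.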

Although these different theoretical relationships between economic parts and wholes might seem esoteric and hopelessly abstract, they have real, material stakes when incorporated into experts’ policy analyses. Consider the case of today’s stubbornly high unemployment. If the neoclassical theorists are right, and the macroeconomy is nothing more complex than millions of individual decisions, then the main problems are likely inflated wages and labor market friction. Their prescriptions: wage cuts, union givebacks, and labor market “reforms” that make it easier for employers to fire and hire workers. However, if the Keynesian analysis is correct and the macroeconomy is more than a simple sum of its parts, the problem is more structural. In this view, the economy is not operating at a level sufficient to utilize all of its unemployed factors of production, and so the government ought to stimulate aggregate demand with expansionary fiscal and monetary policy. This is the debate going on today in capitals across the world.

Asimov’s psychohistory is useful because it dramatizes the problems statistical sciences have in simultaneously predicting the future on both aggregate and particulate levels. His vision in Foundation of a utopian social science was deeper than a mere desire for large-scale fortune telling; he followed Keynes in extending the logic of modern physics:

“Because even Seldon’s advanced psychology was limited, it could not handle too many independent variables. He couldn’t work with individuals over any length of time, any more than you could apply the kinetic theory of gases to single molecules. He worked with mobs, populations of whole planets, and only blind mobs who do not possess foreknowledge of the results of their own actions.”

Indeed, it’s probably this final restriction that prevents macroeconomics from achieving the predictive power of psychohistory. Insofar as real people always strive to know the equations and understand the forecast, anticipatory recursion will always prevent the best predictions from coming perfectly true.

Competing to Live: On Planet Earth

Category: Uncategorized · No Comments · May 22nd, 2014

This was originally posted on 3QuarksDaily.

David Attenborough reserves a certain mournful tone for narrating death in the natural world. In the Jungles episode of BBC’s epic documentary series Planet Earth, we hear that voice, interspersed with the rich, crackling sound of splintering wood, as we see a massive rain forest tree collapse under its own weight after centuries of growth. Just as the tree’s last branches fall out of view through the canopy, Attenborough, in his reassuringly authentic British accent, opines: “the death of a forest giant is always saddening, but it has to happen if the forest is to remain healthy.” After the surrounding trees spring back into place, we descend to the rain forest floor, and enter a realm whose usual gloom has been suddenly washed away by the new hole in its leafy ceiling. Here we can see, with the help of Planet Earth’s signature time-lapse cinematography, how the flood of light that now reaches the forest floor triggers a race to the top by the unbelievable variety of plant life struggling to collect that valuable light. The narration explains how each species has its own strategy for besting its competitors. Vines climb up neighboring trees, sacrificing structural strength for rapid vertical growth. Broad-leaved pioneers such as macarangas are the clear winners at this early stage; their huge leaves provide them with enough energy to grow up to eight meters in a single year. But “the ultimate winners are the tortoises, the slow and steady hardwoods,” which will continue striving for their places in the light-drenched canopy for centuries to come.

The series’ unmatched capacity to bring the natural world to life, as it were, has made it both the premier wildlife documentary of its day and the most enjoyable toy for twenty-first century stoned college students. Time-lapse photography and stunning footage of impossibly rare animals transport us, as viewers, into virgin territory, a territory that operates according to its own natural laws, thus far spared from human interference. While the show’s inventive cinematography animates the natural world, Attenborough is able to give meaning to natural processes by articulating the concealed, organic logic that organizes life. Sped up, slowed down, zoomed in, or seen from above, Planet Earth explains nature’s apparent randomness by casting the world’s plants and animals as players in an epic struggle for survival. The planet’s breathtaking beauty – along with its inhabitants’ sometimes-bizarre bodies and behaviors – is the integrated result of countless relations between harsh climates, scarce resources, and living things competing to exist. But if this is the narrative of the natural world, does it accurately reflect an already existent reality? What artifacts can we find of this production of meaning about the world? Is there a difference between Nature and the natural world? And most importantly, where do we – as viewers, as humans, as people – fit into this story?

As one can see from the “Planet Earth Diaries” at the end of each episode, finding beautiful scenery is not enough to make a compelling wildlife series. The Planet Earth team struggled at each shoot to find innovative ways of animating worlds whose dynamism is not always clear on the standard time scale of human thought. As we enter the rain forest “hot house,” Attenborough notes that the jungle seems “virtually lifeless” despite the cacophony of insect, bird, frog, and monkey calls. In this episode, the obvious challenge is bringing the trees themselves to life. The team accomplishes this feat with breathtaking tracking shots that lift us from the darkness of the forest floor to the blazing light of the canopy. This one continuous motion not only shows the different worlds at each height but also portrays the forest environment as a living growing entity itself. Just as time-lapse cinematography shrinks an entire temperate growing season into a minute in order to show the spectacular vernal burst of life, the camera’s ascent through the rain forest evokes the hardwoods’ centuries-long climb to the sun.

Of course, it is easy to accuse Planet Earth of relying too heavily on anthropomorphism as a narrative technique. From pole to pole, Attenborough introduces us to (images of) animals whose thoughts seem strikingly logical and sometimes even emotional. But watching Planet Earth again, this time with unusually lucid attention to the storytelling, it seems to me that the content of what I will broadly call “anthropomorphism” does intellectual work beyond merely setting up characters and a plot. Beyond constructing beings worthy of viewers’ empathy, this narrative technique presents animals and plants as creatures possessing the power of intentionality. The organization of Planet Earth’s tightly structured world, ordered by the natural equivalent of rational self-interest, is starkly different from the overwhelming feeling one has when actually standing in the middle of a rain forest. Attenborough parses the real world’s infinite complexity and glosses the fundamental randomness of organic life with the intertwined logics of the local flora and fauna. To be fair, the series never exactly portrays this purposefulness as conscious, but it is certainly an important and pervasive tendency in Attenborough’s narration. Indeed, it may be this distinction between intention and consciousness that makes simple accusations of over-anthropomorphizing seem beside the point.

In order to understand this issue of intentionality, we should return to Jungles and revisit the new clearing in the rain forest. After descending from the action in the now-ruptured canopy, the camera tightens on the leaf litter, which covers the forest floor. Normally, Attenborough tells us, little of the sun’s energy reaches these depths, so now “the thirst for light triggers a race for a place in the sun.” The leaves, which each fill about a quarter of the screen, are pushed aside by seedlings madly bursting forth from the concealed soil. Thanks to the time-lapse cinematography, the young plants grow at an astounding rate. They move so quickly, in fact, that their unfurling leaves look like the weakly gesturing limbs of a newborn until they begin to shoot upward out of the frame.

Because one of the themes of this story of the rain forest is its unbelievable diversity, Attenborough shows us several plants with different strategies for securing a scarce position to collect valuable light. The climbers are surely the most interesting competitors; the authoritative British voice assures us that their “strategy looks chaotic, but there’s method in their madness.” Sped up, we can witness how “their growing tips circle like lassoes, searching for anchors for their spindly stems.” From this temporal perspective, it’s true: the growing tips do not look like mindless masses of sugar whose winding heliotropic growth is controlled by auxins. Instead, they appear to be plants intentionally employing a particular growth strategy that maximizes vertical reach while minimizing energy investment. This is confirmed when Attenborough shows us the climbers’ forethought: “they put coils in their tendrils so that if their support moves, they will stretch and not snap.”

Certainly, Attenborough’s narration is well served by this tactic. Explaining the ostensible reasons for certain plants’ unique growth and their comparative advantages over competitors in the jungle clearing turns this natural scene into a story with characters who win or lose depending on the viability of their natural abilities. This is far more compelling for the lay viewer than an explanation of the underlying biology. As the makers of a popular nature documentary, the Planet Earth team must have continually struggled with the challenge of making these stories gripping; surely the latter strategy would be quite a buzz killer for the stoned college student crowd, the series’ most devoted followers. So my criticism of this narrative strategy is not (intended to be) pointless fault finding but rather an exploration of its effects.

Given its task, i.e. depicting the beauty and complexity of the natural world, it is easy to lose sight of the role of storytelling in Planet Earth. Because the series presents itself as the product of a rigorous naturalism, it is easy to interpret the moments of action in the series as parts of plot lines already existing in the world, rather than as elements of a story told about the world. The narration seeks to fade into invisibility, leaving only Nature. But this Nature that Attenborough presents is an assemblage of characters and settings, conflicts and dénouements that work together to keep viewers enraptured. Its construction is the challenge facing any nature documentarian, and the Planet Earth team does this more effectively than anyone before them. The series is, in equal measures, a work of art and science, a provisional distinction that converges with the deployment of intentionality as a narrative strategy. Intentionality is not part of any specimen or fossil collected in the wild. It is manifest neither in the rain forest plants said to be striving for the light nor in the parasitic cordyceps fungi said to work as checks to maintain balance among insect species. Instead, it is a way of making sense of the natural world by connecting organisms to one another with dramatic links of cause and effect.

In its technical use, as a central tenet of continental ontology, “intentionality” is a frustratingly elusive term. Franz Brentano, who invented the term in the late nineteenth century, positioned it as a property that necessarily connected “mental phenomena” to their intentional objects. Later, Edmund Husserl argued that this relationship is a constitutive element of thought itself: “to have sense, or ‘to intend to’ something, is the fundamental characteristic of all consciousness.” Similarly, the father of French existentialism, Jean-Paul Sartre, thought of intentionality as coextensive with consciousness. But since consciousness itself is not really at issue here (I don’t think David Attenborough is really attributing strategic consciousness to the polar bear mother during the Ice Worlds episode), these definitions are not particularly helpful.

Instead, we should consider Martin Heidegger’s efforts to theorize Being, which he identified with the fundamental entity Dasein, in Being and Time, one of the more imposing and impenetrable tomes in modern philosophy (so please, pardon my language). Eschewing the technical term “intentionality,” which had already been firmly pegged to conscious thought, Heidegger works with the twin concepts of care and concern to denote the intentionality of Being. Heidegger writes that care, “as a primordial structural totality, lies ‘before’ every factical ‘attitude’ and ‘situation’ of Dasein.” This means that care can be described as a Being’s fundamental orientation towards the world. Ontologically, care is at the root of both willing and wishing; indeed, “in the phenomenon of willing, the underlying totality of care shows through.”

To what extent does this Heideggerian model of Being as care reflect the existence of the natural world as told by Planet Earth? Throughout the series, the emphasis on organisms’ strategies for survival creates a sense that they possess, or at least spontaneously enact, a will to live and multiply. Obviously, this drama plays itself out differently in each biome, but the general theme of organisms as engaged in unceasing competition is relatively constant. And this message of competition is important, for Attenborough explains how mutual competition is the motor of evolution and the source of nature’s astounding diversity of life. But this language, this notion of individual will and competition, is not unique to Planet Earth. Charles Darwin’s On the Origin of Species displays a similar understanding of the natural world as a place where organisms are constantly striving to live:

 

In looking at Nature, it is most necessary… never to forget that every single organic being around us may be said to be striving to the utmost to increase in numbers; that each lives by a struggle at some period of its life; that heavy destruction inevitably falls either on the young or the old, during each generation or at recurrent intervals. Lighten any check, mitigate the destruction ever so little, and the number of the species will almost instantaneously increase to any amount.

 

Throughout Darwin’s text there is a continual oscillation between his theories of Nature and scenes of survival from the natural world itself. He certainly wasn’t the first naturalist who tried to bring the vastness of the colonial world back home to England already arrayed in helpful categories, but On the Origin of Species betrays Darwin’s acute awareness of the importance of storytelling to his work. His project was one of sense making, not one of sense finding. His claim is not that “animals are striving to increase in numbers” but that everything “around us may be said to be striving to the utmost.” The text does not read like a dispassionate treatise on the way the world is. Instead, Darwin is suggesting that we may construct a narrative to explain a world in which we stand at the center. Without undoing Copernicus’ labor, which removed man from his privileged position at the center of the universe, On the Origin of Species presents a story of the world as we grasp it. Whether or not this acknowledgement of his explanation as merely one possibility was a scientist’s hedge against accusations of blasphemy is not important; what does matter is the attention Darwin gives to the problem of narrative, a difficulty seldom considered in scientific writing.

Again, I doubt that “anthropomorphism” sufficiently accounts for the way in which organisms are assigned a willing disposition. Neither Planet Earth nor Darwin asserts or even implies that this will to live and multiply is contained in some kind of consciousness or a particularly mindful relation to the world. Instead, might we say that these classics of British naturalism consider these diverse survival strategies to be spontaneous orientations towards the world? For both texts, is not this intrinsic will to live positioned before any particular circumstance, environment, or competition? Can we not, then, assign this continuous striving for existence to be a fundamental aspect of Being in this version of the natural world? That is: could an active struggle for existence be at the root of this naturalist conception of life itself?

Indeed, this fundamental will to survive is important to Heidegger as well. He argues that “the urge ‘to live’” is something fundamental to Dasein; it “is something ‘towards’ which one is impelled, and it brings the impulsion along with it of its own accord. It is ‘towards this at any price’.” Now, we can understand (or at least attempt to comprehend) how Dasein is not merely a person or an animal but a fundamental unit of Being. While it is true that each animal or plant could be said to “prefer” life over death, this is not how we see the logic of natural existence. When an enterprising fox is able to steal some goslings from the edge of a humongous colony of migratory birds, Attenborough does not bemoan the apparent loss of life as he does when lions kill a solitary elephant. This is precisely the regular “price” that the flock must pay to survive and multiply. We might even say that it is not even a loss of life, since the story of the geese takes the entire flock as the unit of life or Being with a common urge to live. And since this urge is “rooted ontologically in care,” we might also say that this flock has a common intentionality, a common Being.

Returning to Darwin might be helpful here. Recall that he is speaking about neither plants nor animals in the quotation above; his concern is not even generalized to organisms. Instead, Darwin is describing the struggle for existence that dominates the life of every “organic being.” I think we should take this distinction seriously. There is something in these narratives of the natural world that abstracts struggle from one defined by physical distinctions to a more philosophical level, which plays with conventional understandings of life. I read Darwin’s “organic being” as a unit of life whose size varies depending on climate, terrain, and energy availability, from a solitary arctic wolf to a thousands-strong herd of wildebeest. This unit is not given in the world itself but is, instead, an artifact of a particular understanding of it; the “organic being” is a figure born from Darwin’s attempts to tell a story about the world.

This fundamental urge to live should neither be forgotten nor left in theoretical isolation, for it “seeks to crowd out other possibilities” in the constant struggle for its own existence. Whether tracking the life of an individual tree or a flock of millions of geese, the natural world is a domain of unceasing competition in “ever-increasing circles of complexity.” This competition is the integrated total of countless Beings’ urges and intentions, and in this naturalist conception of the world it is a ubiquitous and powerful force. Attenborough does not mince words: “in the jungle there is competition for everything.”

Just as the rain forest clearing’s apparent lawlessness becomes comprehensible when sped up, we can recognize organisms’ continuous evolutions in the balanced state of nature. In this narrative, the illusion of static balance is ensured by natural competition, which couples together organisms in a complex web of coevolving relations. We might say that natural competition is both the motor of evolution (“generations of choosy female [birds of paradise] have driven the evolution of males’ remarkable displays”) and simultaneously its regulator (when insect populations grow, “parasites stop any one group of animal getting the upper hand”). This power is presented as nearly omnipotent in Nature. These texts credit it with producing both the natural world’s unmatched beauty and organic systems whose complexity and efficiency would be the envy of the best engineer.

For both Darwin and Attenborough, the dynamic of competition serves to balance the natural world and provide space for all of its Beings and their competing intentions. Though these struggles between Beings are unceasing, “in the long-run the forces are so nicely balanced, that the face of nature remains uniform for long periods of time.” But in fact, it is merely the face of nature that remains unchanged. In the rain forest, which we have seen has both high productivity and unceasing conflict, “competition for resources ensures that no one species dominates the jungle.” Reading further, however, we see that the apparent stasis of Darwin’s “state of nature” is actually a dynamic equilibrium, shaped and maintained by the competition between Beings’ struggles to survive. It is not that everything stays the same in the unspoiled natural landscape; it only appears this way on our familiar time scale.

For a narrative to be meaningful, it helps to have a traceable set of reasons for what happens. In Planet Earth, the recurring narrative of organic balance is powered, or explained, by intentionality. Attenborough’s presentation of Nature’s dynamic equilibrium as the spontaneous result of organic beings who compete according to their non-conscious self-interest recalls the logic of traditional economics, which credits the invisible hand of the market with this balancing act. The identity of these processes, these mechanisms for maintaining the world as it is and optimizing participants’ experiences, offers an opportunity to see more deeply into each and to understand how widely this notion of self-regulation is relied upon.

The idea that both free markets and unspoiled ecosystems are able to remain in productive balance seems to be the result of a belief that competition has the innate ability to order complex systems. International deregulation efforts, which have left “natural market forces” in charge of the global economy, speak to the strength of people’s faith in the idea of self-regulation. But this invisible hand is more than the basis for a particular economic theory. Just like the dialectic – Hegel’s idealist, self-generating process of reason through contradiction – this mechanism of competitive self-regulation is a deep philosophical belief in the way the world progresses. Insofar as naturalists, led by Darwin and narrated by Planet Earth, have used this idea as an overarching explanation for how Nature functions, it seems just as organic as the rain forest trees struggling for sunlight.

This ideological contact between the ecological and the economic might allow us to finally situate ourselves, humanity, in Planet Earth’s storytelling. We occupy a complicated position in the narrative. It is striking that Attenborough rarely mentions humanity, and we only see people when the cameras descend through labyrinthine caves deep into the planet. Yet at the end of several episodes, Attenborough warns viewers of these environments’ precarious positions. The appeal is most dire at the end of Jungles: “Rain forest diversity has come at a cost. It has made them the most finely balanced ecosystems in the world, only too easily upset and destroyed by that other great ape, the chimpanzee’s closest relative, ourselves.” Only a few minutes after viewers have seen their own humanity in the mirror of a marauding band of territory-hungry chimpanzees, this language is striking. It positions humanity not as an alien force superimposed on an independently existing natural world but as a part of the same precariously balanced system. The argument is so affective because it refuses to plead. Instead it suggests that we reconsider the boundaries we draw between systems we hope to keep in balance. Rather than seeing economics and ecology as two fundamentally separate, permanently walled-off disciplines, this attitude takes them as parallel projects working on different problems. Instead of defining the jungle as the wild and unthinkable state of nature, this naturalist approach seeks to fuse man’s understanding of himself with the complexities of Nature in order to ensure that Planet Earth never becomes a stunning monument to irrecoverable beauty.

To spend or not to spend: The austerity debate

Category: Uncategorized · No Comments · May 20th, 2014

Originally posted on 3QuarksDaily.

Public sector austerity has come back to the West in a big way. Governments throughout the European Union are wrestling against striking civil servants, a stagnant private sector, and an entrenched public welfare system to drastically reduce spending. The budget cuts are broad, and they run deep. Under pressure from global financial markets and the European Central Bank to reduce public deficits, Spain, Italy, Portugal, and Greece have issued “austere” budgets for the coming year that simultaneously raise taxes and slash government spending. David Cameron’s new Conservative government has violated its campaign pledge to spare Britain’s generous middle class subsidies in an attempt to close a budget gap that is among the world’s largest, at 11 percent of GDP. Supposedly confirming the wisdom of austerity, the financial press has trumpeted the re-election of Latvia’s center-right government, which passed an IMF-endorsed budget with austerity reductions equal to 6.2 percent of GDP. Prime Minister Valdis Dombrovskis won his “increased mandate” – “an inspiration for his colleagues in the EU” – against a backdrop of 20 percent unemployment and a cumulative economic contraction of 25 percent in 2008 and 2009, the most severe collapse in the world.

Latvian electoral politics notwithstanding, austerity has been a tough sell worldwide. Both the protests that broke out across Europe at the end of September and the general strikes mounted against Socialist governments in Portugal, Spain, and Greece attest to the resistance all governments face in cutting public spending. And opposition has not been confined to the streets. At a G20 summit in Washington DC on April 23, the finance ministers and central bank governors of the world’s 20 largest economies agreed that extraordinary levels of public spending should be maintained until “the recovery is firmly driven by the private sector and becomes more entrenched.” Indeed, Larry Summers, the departing Director of the White House National Economic Council, still argues that the United States must continue its policy of economic stimulus in the form of deficit spending on infrastructure rather than pull back public resources, lest it cede the small gains of the nascent recovery.

Yet the pressure to embrace austerity continues to mount on governments on both sides of the Atlantic, crowding out calls for further stimulus spending; the stimulus vs. austerity debate has heated up in both policy circles and academia. On one side are the Ricardians, who argue that austerity budgets will boost confidence, by signaling that the recovery has taken hold, and spur private investment, because capital will no longer fear future tax hikes to pay for today’s deficit spending. We hear this story coming from three major institutions: the European Central Bank, which regulates the 16 Eurozone countries; the International Monetary Fund, which provided lender of last resort bailouts for countries struggling to meet their international obligations; and the global financial markets, which penalize debtor countries by demanding ever higher interest rates to refinance sovereign debt.

The Keynesians are on the other side, arguing that governments must maintain their economic stimulus programs to help make up the difference between the internationally depressed levels of aggregate private demand and the level of economic activity necessary to support full employment. Their argument against austerity-induced gutting of social welfare programs goes beyond moral claims about equity. Government spending, especially in programs that target the bottom end of the income distribution, circulates through the economy, multiplying the job-creating effects of the initial public expenditure. Of course, the root of the current economic problems is an overabundance of debt – both public and private. But as international political economist Mark Blyth explains, it is dangerous for governments to try to clean up their balance sheets with austerity at the same time that the private sector is paying down its own debts from the housing boom instead of investing and hiring. Indeed, the US shed 95,000 jobs in September, after layoffs by local governments and the release of temporary Census workers cost 159,000 jobs. Until recently, the Obama Administration was the main proponent of the stimulus view, which is also supported by organized labor and hordes of protesting Europeans.
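
The multiplier logic behind that claim can be sketched in a few lines (a toy illustration of my own, with an assumed spending fraction; it is not drawn from Blyth or from any official estimate): each round of spending becomes someone else’s income, a fraction of which is spent again.

```python
# Toy sketch: an initial public outlay circulates through the economy in
# successive rounds, assuming households spend a fraction c of each extra dollar.
def total_spending(initial_outlay: float, c: float, rounds: int = 200) -> float:
    total, injection = 0.0, initial_outlay
    for _ in range(rounds):
        total += injection   # this round's spending becomes someone's income
        injection *= c       # a fraction c of that income is spent next round
    return total

print(total_spending(1.0, 0.8))   # ~5.0 dollars of spending per stimulus dollar
print(1.0 / (1.0 - 0.8))          # closed form of the same geometric series: 5.0
```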

Strangely, when the G20 finance ministers reconvened on June 5 in South Korea, their message had changed. Instead of encouraging countries to continue supporting the recovery, they announced that “countries with serious fiscal challenges need to accelerate the pace of consolidation,” and identified monetary policy as the best tool going forward. This, despite the fact that monetary policy levers are at the “zero bound” worldwide, allowing no room for further expansionary movement. So, why the sudden shift? How might we characterize the compulsion governments have to engage in painful belt tightening when their belts are circled around their necks?

The austerity vs. stimulus debate is not just a policy disagreement between social classes with opposing interests; it is a confrontation between two entirely distinct modes of governing, two different ways of conceiving of the state and the economy. Austerity is a quintessentially Classical prescription for economic imbalances, a direct descendant of the vertiginous deflationary adjustments countries were forced to stomach under the gold standard. Now, as then, financial power compels states to sacrifice the health of their domestic markets in order to preserve international credibility. Politicians would not make this trade without compulsion; anyone concerned with reelection will rightly worry about the havoc this fiscal discipline wreaks on his constituents. This should not suggest, however, that the only benefit of a policy to stimulate the economy with government spending is its ability to create short-term construction jobs. Properly administered, a Keynesian stimulus will keep unemployment rates manageable by propping up aggregate demand, but the real goal of government spending is to make the short-term economic picture look rosy enough to improve private expectations of the future. As government money filters through the economy, businesses can count on boosted demand for their products and will hire more workers, so private demand can gradually recover.

Michel Foucault, who is known for his studies of “governmentality,” developed a philosophical framework that helps distinguish these approaches to dealing with recession. In his 1978 lectures at the Collège de France, Security, Territory, Population, Foucault argues that there are three forms of power: juridical law, discipline, and security. Juridical law maintains order by establishing prohibitions and doling out punishment. Its model is a hanging, commanded by the sovereign to punish a subject who violated the law. Foucault is most famous for his theory of discipline, wrought from his meticulous studies of the techniques of power used in prisons, schools, barracks, asylums, and hospitals. A disciplinary institution aims for efficiency; it structures power relations so that the surveillance and transformation of individuals can proceed with the least possible expenditure of resources. Its ultimate goal is, in a sense, utopian: to forge subjects who have internalized the law and follow it by themselves. The model disciplinary institution is the panopticon, in which the prisoner must always behave as if someone is watching. Finally, a security apparatus handles problems through measurement. Recognizing that it is impossible to completely engineer away social ills, a security apparatus sees a problem as the result of a series of probable events and enters it into a calculation of cost. Rather than focusing its attention on the legal boundary between the permitted and the prohibited, a regime of security “establishes an average considered as optimal on the one hand, and, on the other, a bandwidth of the acceptable that must not be exceeded.” The benefits of any policy are weighed against the costs of implementing it. Instead of deploying mechanisms to transform deviant individuals into ideal subjects, the techniques of security act on a new object: the population. Its tools are statistics, which uniquely make meaning from uncertainty and direct power to most effectively manage a large ensemble. Its model is the modern management of epidemics. While these three categories did emerge through historical innovation, Foucault stresses that they do not represent distinct eras but rather alternative, and coexistent, ways that power organizes the social world.

Foucault deploys this framework on economic problems to show different ways of allocating resources to deal with the problem of scarcity. He begins with a juridical mode of resource allocation: price controls. For a long time, authorities attempted to control the food supply by instituting rigid price ceilings, intended to keep food affordable; regulations on food storage, intended to prevent hoarding from precipitating an artificial shortage; and export restrictions, meant to protect domestic supplies. Of course, in practice these price controls actually functioned to exacerbate food shortages, as the law prevented peasant farmers from charging enough to recoup their investments and plant enough grain the following year.

Foucault then studies the prescriptions of the Physiocrats, who advocated that governments reduce these restrictions and allow supply and demand to set prices according to the dynamics of the market. By allowing individuals to decide when and for how much to sell their grain, guided only by competition and informed by market prices, laissez faire policies leave the problem of managing scarcity to the decentralized decisions of many market actors, who sell their grain where high prices indicate it is most needed. Foucault shows how, historically, this shift to a new mode of governance alleviated the food shortages that had plagued Europe. But here, Foucault gets it wrong. He incorrectly classifies the Physiocrats’ free markets as a technique of security. Instead, laissez faire ought to be considered a disciplinary mechanism, since it aims to solve the problem of scarcity by conditioning individuals to make the “right choices” on their own about how much grain to grow and where to sell it.

Political economy first entered the realm of security when Keynes invented macroeconomics as a way of managing unemployment and taming the business cycle. For the first time, economists could attend to a population and direct their policies at the economy as a whole. Indeed, the concept of unemployment only makes sense for a whole economy; it has no microeconomic analogue. In his General Theory, Keynes shows how governments can use fiscal policy to keep their unemployment rates within reasonable bounds, consistent with long-term economic growth and social stability. Government deficit spending is the distinctive technique of this regime of Foucauldian security. An economic stimulus is not intended to help any particular individuals – though some sectors certainly benefit more than others – but rather to boost aggregate demand. Its target is the whole economy, the population. Indeed, classical economics did not admit the economy per se as an organic object, since it was seen as merely a large collection of individual, rational actors. Insofar as macroeconomic policy has this population as the target of its interventions, Keynes can be said to have invented the economy as an object.

It is easy to see where austerity fits in Foucault’s taxonomy: It is a disciplinary force exerted against free-spending governments. Just as the structures of school buildings make rambunctious children into docile bodies, pressure to embrace public austerity is an effort on behalf of international capital to restrain the free-spending tendencies of welfare states. This fiscal discipline, sold as a virtuous and commonsensical “pain after the party,” is intended to produce chastened governments, which maintain capital-friendly tax policies at the expense of social services and in the name of stability, predictability, and job creation. Even though newly streamlined corporations are again flush with cash, they have not rehired the workers laid off during the worst of the financial crisis, and business leaders continue to argue for an emergency loosening of labor laws that would allow them to fire employees more cheaply.

Although these revisions to the modern welfare state’s social contract may seem draconian, they are hardly unprecedented. The IMF has been pushing public austerity and business-friendly labor reforms on financial crisis-plagued developing countries for decades under the banner of the “Washington Consensus.” Yet these stringent retrenchments, required as conditions on IMF rescue packages for countries from East Asia to Latin America to Latvia, have almost always exacerbated recessions. Indeed, the country that avoided the most damage in the 1997 East Asian financial crisis was Malaysia, which was condemned at the time for eschewing these familiar neoliberal fixes and setting up strict currency controls. Today’s massive foreign currency reserves in East Asian treasuries exist precisely so that these countries will never again have to turn to the IMF for another many-strings-attached bailout. The citizens of the global West are finally experiencing an economic pain all too familiar to previous recipients of IMF bailouts. In all spheres of economic life, laissez faire prescriptions discipline states with the same old, capital-friendly mantra: “That government is best which governs least.”

Football, Finance, and Surprises

Category: Uncategorized · No Comments · May 20th, 2014

Originally posted on 3QuarksDaily.

As the New Orleans Saints lined up to kick off the second half of Super Bowl XLIV, CBS Sports color commentator and former Super Bowl MVP Phil Simms was explaining why the Saints should have deferred getting the ball after winning the pregame coin toss. Simms suggested that the Saints, 4½-point underdogs to the Indianapolis Colts, would be in a better position were they not giving the ball to future Hall of Fame quarterback Peyton Manning, who already enjoyed a four-point lead and had had 30 minutes to study the Saints’ defensive strategy. Simms had barely finished this thought when Saints punter Thomas Morstead surprised everyone – the 153.4 million television viewers, the 74,059 fans in attendance, and most importantly the Indianapolis Colts – with an onside kick. The ball went 15 yards, bounced off the facemask of an unprepared Colt, and was recovered by the Saints, who took possession and marched 58 yards down the field to score a touchdown and gain their first lead of the game, 13-10. The Saints would go on to win the championship in an upset, 31-17.

Although Saints quarterback Drew Brees played an outstanding game and the defense was able to hold a dangerous Indianapolis team to only 17 points, Head Coach Sean Payton received the bulk of the credit for the win, in large part because of his daring call to open the second half. Onside kicks are considered risky plays and usually appear only when a team is desperate, near the end of a game. In fact, the Saints’ play, code named “Ambush,” was the first onside kick attempted before the fourth quarter in Super Bowl history. And this is precisely why it worked. The Colts were completely surprised by Payton’s aggressive play call. Football is awash in historical statistics, and these probabilities guide coaches’ risk assessments and game planning. On that basis, didn’t Indianapolis Head Coach Jim Caldwell have zero reason to prepare his team for an onside kick, since the probability of the Saints’ ambush was zero (0 onside kicks ÷ 43 Super Bowl second halves)? But if the ambush’s probability was zero, then how did it happen? The answer is that our common notion of probability – as a ratio of the frequency of a given event to the total number of events – is poorly suited to the psychology of decision making in advance of a one-time-only situation. And this problem is not confined to football. Indeed, the same misunderstanding of probability plagues mainstream economics, which is stuck in a mathematical rut best suited to modeling dice rolls.
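
The trouble with the frequency ratio can be seen in miniature by comparing the naive estimate with even the simplest correction for never-before-seen events (an illustrative sketch of my own; Laplace’s rule of succession is a standard statistical device, not something the coaches or any analytics firm is said to use):

```python
# Illustrative sketch: a raw frequency estimate assigns probability zero to any
# event that has never happened; Laplace's rule of succession never does.
def frequency_estimate(successes: int, trials: int) -> float:
    """Naive relative frequency: successes / trials."""
    return successes / trials

def laplace_estimate(successes: int, trials: int) -> float:
    """Laplace's rule of succession: (successes + 1) / (trials + 2)."""
    return (successes + 1) / (trials + 2)

# 0 pre-fourth-quarter onside kicks in the 43 previous Super Bowl second halves
print(frequency_estimate(0, 43))  # 0.0    -> "it cannot happen"
print(laplace_estimate(0, 43))    # ~0.022 -> "rare, but entirely possible"
```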

Probability is a predictive tool; it helps decision makers confront the uncertainty of future events, armed with more than their guts. Both economists and football coaches use probabilistic reasoning to predict how others will act in certain situations. The former might predict that, faced with a promising investment opportunity and a low interest rate, entrepreneurs tend to invest, while the latter might anticipate time-consuming running plays from teams winning by a touchdown with four minutes left in a game. Both the economist and the coach would look up historical statistics, which they hope would provide insight into their subjects’ decision-making tendencies. And over the long run, these statistics would likely be quite good at predicting what people do most of the time. It would be foolish not to act in anticipation of these tendencies.

Indeed, there are many statisticians employed to do such things. In the lucrative, gambling-powered world of football analysis, for example, a company named AccuScore tries to predict the outcomes of NFL games and the performances of individual players by running computational simulations early in the week. Although their exact computational methods are proprietary secrets, they have roughly described the strategy behind their Monte Carlo simulation engine. Through fine-grained analysis of troves of historical statistics, AccuScore’s computers create mathematical equations to represent the upcoming game’s players and coaches. How often does a team pass the ball when it’s third down with four yards to go at their own thirty-yard line, with no team up by more than three points in the first quarter at an indoor stadium? When New York Jets running back LaDainian Tomlinson rushes up the middle, how often does he get past the middle linebacker and rush for more than eight yards? The probabilistic answers to these questions – and many others – become the parameters of the players’ and coaches’ equations, which AccuScore pits against each other on a numerical field. The computers then simulate the game, one play at a time, guided by a random number generator and the participants’ tendencies. Then they repeat the simulation 10,000 times and average the results.
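As a rough illustration of that procedure, here is a minimal Monte Carlo sketch in Python. The team names, possession counts, and “tendency” numbers are invented for the example and have nothing to do with AccuScore’s proprietary engine; the point is only the shape of the method: tendencies in, random draws possession by possession, thousands of repetitions, results averaged out.

```python
import random

# Toy "tendency" model: per-possession outcome probabilities for each team.
# All numbers are invented for illustration.
TEAM_TENDENCIES = {
    "Saints": {"touchdown": 0.22, "field_goal": 0.18},
    "Colts":  {"touchdown": 0.25, "field_goal": 0.16},
}

POSSESSIONS_PER_TEAM = 11  # assumed per-game figure for this sketch


def simulate_game(rng):
    """Simulate one game, one possession at a time; return (saints, colts) points."""
    scores = {"Saints": 0, "Colts": 0}
    for team, tendencies in TEAM_TENDENCIES.items():
        for _ in range(POSSESSIONS_PER_TEAM):
            roll = rng.random()
            if roll < tendencies["touchdown"]:
                scores[team] += 7
            elif roll < tendencies["touchdown"] + tendencies["field_goal"]:
                scores[team] += 3
            # otherwise: punt or turnover, no points
    return scores["Saints"], scores["Colts"]


def monte_carlo(n_sims=10_000, seed=0):
    """Repeat the simulation many times and average the results."""
    rng = random.Random(seed)
    saints_wins = saints_total = colts_total = 0
    for _ in range(n_sims):
        s, c = simulate_game(rng)
        saints_total += s
        colts_total += c
        saints_wins += s > c
    print(f"Average score: Saints {saints_total / n_sims:.1f} - Colts {colts_total / n_sims:.1f}")
    print(f"Saints win probability: {saints_wins / n_sims:.1%}")


if __name__ == "__main__":
    monte_carlo()
```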

According to AccuScore’s website, their predictions have an overall gambling accuracy of about 54%. This probabilistic strategy makes sense for its purpose, predicting the outcomes of games by analyzing the frequency with which subjects make certain decisions, but does not at all resemble the thought process by which a coach or his opponent calls a play in the middle of a close game. In contrast to AccuScore’s simulations, the real football game is only played once. Had they played Super Bowl XLIV 10,000 times, the Colts’ normal, kickoff return formation would surely have been the right bet to make at the starts of the 10,000 second halves. But they only kicked it once, and the act of kicking it destroyed the possibility of it ever happening again. (For the moment, let’s ignore the chance that someone on the Saints committed a penalty, necessitating a redo.) Sean Payton’s aggressive call worked, not because it gave the Saints the highest probability of success, but because the one time Morstead kicked it onside he caught the Colts by surprise.

Economics must also grapple with the difference between these two interpretations of probability. When economists declare that markets are populated with rational agents, they must mathematically define that rationality, just as AccuScore defines players and coaches with tendency equations. The dominant strategy for defining economic agents’ rationality comes from Oskar Morgenstern and John von Neumann’s groundbreaking 1944 book, Theory of Games and Economic Behavior. In it, they propose assigning each market actor a utility function, which weights the payoffs of various possible actions by their probabilities of coming to pass. In constructing utility functions, neoclassical economists must assume that they have considered all of the relevant possibilities, which is another way of saying that the probabilities of all possible events included in the utility function add up to one. They then define the agent’s rational choice as the one that maximizes the expected value of her utility function. This method is the foundational concept of game theory and is widely used to predict how decision makers will act. Modeling a market then proceeds in roughly the same way that AccuScore models NFL games.
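In generic textbook notation (not Morgenstern and von Neumann’s own symbols), the expected-utility rule amounts to:

\[
\mathbb{E}[U(a)] \;=\; \sum_{i=1}^{n} p_i \, u\!\left(x_i^{a}\right),
\qquad \sum_{i=1}^{n} p_i \;=\; 1,
\qquad a^{*} \;=\; \arg\max_a \, \mathbb{E}[U(a)],
\]

where the \(p_i\) are the probabilities of the \(n\) possible states of the world, \(u(x_i^{a})\) is the utility of the payoff that action \(a\) yields in state \(i\), and \(a^{*}\) is the choice the theory calls rational. The requirement that the \(p_i\) sum to one is exactly the assumption the next paragraph takes issue with.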

However, generations of critics have argued that rational choice theory is psychologically unrealistic as a description of actual human decision-making. One might be able to argue that it represents the optimal definition of rationality, but it is nearly impossible to imagine someone actually making this sort of calculation on the fly in even a remotely complex situation. In general, it is unrealistic to assume that people consider every single possible outcome of a decision, so that the probabilities of all these events can properly sum to 100%. If someone thinks of a new possible outcome, why should the outcomes she has already considered suddenly become any less likely than they were before she thought of the new one? More fundamentally, rational choice theory relies on the frequency-ratio definition of probability, which, as we have seen, is incoherent when applied to the circumstances of one-time-only decisions. The most important decisions we face (and thus model) are unique. In these cases, when making a choice destroys the very possibility of anyone ever making that same choice again, the notion of probability as a historical frequency ratio is nonsensical.
[Figure: Shackle’s potential surprise graph]

There have been several attempts to construct a theory of probability that accurately describes the psychological process of making one of these unrepeatable choices. One strand of thought, coming from the Keynesian economist G.L.S. Shackle, is particularly well suited to describing the psychology of deciding in the face of uncertainty. In Shackle’s theory, the likelihood of an event is no longer calculated as a ratio over a fixed set of possible outcomes, as the standard frequency theory has it. Instead, he gauges the likelihood of any particular outcome on its own terms, by asking a simple question: based on what I know now, how surprised would I be if Y happened? Because the likelihood of each outcome is determined independently, their likelihoods need not sum to one. That means thinking of a new possibility does not make any other possibility less likely to happen. It also means that one can hold two or more mutually exclusive outcomes to be equally unsurprising, based on the information at hand. Indeed, most of the time there will be a range of possible outcomes that are all judged to be equally unsurprising. (Shackle illustrated this with the graph above.) Thus, Shackle’s decision-making comes down to a comparison of the best possible unsurprising outcome with the worst possible unsurprising outcome. This process seems much closer to the psychology of forming expectations and making choices than trying to maximize a probability-weighted average of all possible outcomes in your head.
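One way to see the difference is a toy version of the comparison Shackle describes. The surprise scores, threshold, and payoffs below are invented for illustration, and Shackle’s own apparatus is considerably richer than this sketch.

```python
# A toy Shackle-style "potential surprise" comparison. Each candidate outcome
# gets a surprise score in [0, 1], judged on its own terms; the scores need
# not sum to anything. All numbers are invented.
SURPRISE_THRESHOLD = 0.3  # outcomes at or below this count as "unsurprising"


def appraise(outcomes):
    """outcomes: dict mapping payoff -> potential surprise score.
    Return (best, worst) payoffs among the unsurprising outcomes."""
    unsurprising = [payoff for payoff, surprise in outcomes.items()
                    if surprise <= SURPRISE_THRESHOLD]
    return max(unsurprising), min(unsurprising)


# Hypothetical appraisal of an investment: payoffs (in $m) versus how
# surprised the firm would be if each came to pass.
investment = {12.0: 0.1, 5.0: 0.0, -3.0: 0.2, -20.0: 0.8}
best, worst = appraise(investment)
print(f"Best unsurprising outcome: {best}, worst unsurprising outcome: {worst}")
# The decision then turns on weighing +12.0 against -3.0, not on a
# probability-weighted average over every conceivable possibility.
```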

Shackle developed his potential surprise framework as a way to model individuals’ expectations when considering a capital investment. A firm facing a particular investment decision may never face those same choices again. If it invests, it could lose its money and potentially go bankrupt. If it holds back, it might not get such an attractive opportunity in the future, or it may be outcompeted by others. In forming expectations about a potential investment, firms naturally compare the most optimistic reasonable scenario to the most pessimistic. But Shackle’s potential surprise theory can just as easily describe the psychology of a football coach calling plays. A coach aims to control the surprises on the field, employing strategies to anticipate his opponents’ moves while surprising them as much as possible. Indeed, former fighter pilot and current NFL statistics guru Brian Burke calculated that surprise is the biggest factor determining the success of onside kicks. Overall, onside kicks are successful (i.e., the kicking team recovers the ball) 26% of the time. Most teams only try them when they’re desperate, and when a team is trailing at the end of a game, no one is surprised by an onside kick. But in other situations, when the opponents aren’t expecting them, teams recover about 60% of attempted onside kicks.

Neither the decision to call a football play nor the decision to make a capital investment is dominated by the calculation of probability-weighted historical statistics. Of course, considering what has worked and failed in the past is still smart practice – Shackle himself writes that it would be foolish to disregard probabilities calculated this way – but rational choice theory fails to depict the thought process of a decision maker facing a one-time-only choice with any psychological subtlety. To remember this, one need only pay attention to the fine print and the sped-up announcement at the end of the mutual fund advertisements at halftime: “Past performance does not guarantee future results.”