PulseUX Blog

Theory, Analysis and Reviews on UX User Experience Research and Design


How new theories in human information processing explain the meltdown on Wall Street

Response to The New York Times article on Wall Street Risk by Joe Nocera (1/4/2009)

In The New York Times Sunday Magazine, Joe Nocera produced an important, well-researched and insightful column on how computer-based decision tools (VaR models) led Wall Street down the path to near ruin. However, Mr. Nocera missed an important, deeper point in the analysis of what actually happened on Wall Street: what allowed these seemingly intelligent executives to continue piling on massive levels of risk long after they should have known better?
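For readers who have not met the tool in question: a VaR model boils a portfolio's risk down to a single loss threshold, typically the loss that historical data suggest will be exceeded only one day in a hundred. The sketch below is a minimal illustration in Python (the return figures and window lengths are invented for the example, not drawn from Mr. Nocera's article), showing how a historical-simulation VaR is computed and how completely its answer depends on what the lookback window happens to contain.

    import numpy as np

    rng = np.random.default_rng(42)

    def historical_var(returns, confidence=0.99):
        # One-day historical-simulation VaR: the loss that daily
        # returns exceeded only (1 - confidence) of the time.
        return -np.percentile(returns, 100 * (1 - confidence))

    # Hypothetical daily portfolio returns: two years of calm markets...
    calm = rng.normal(loc=0.0005, scale=0.01, size=500)
    # ...versus the same calm stretch plus a handful of crisis days.
    crisis = np.concatenate([calm, rng.normal(loc=-0.05, scale=0.03, size=10)])

    print(f"VaR from calm window only : {historical_var(calm):.1%}")
    print(f"VaR including crisis days : {historical_var(crisis):.1%}")
    # A benign history yields a benign VaR; the model cannot warn
    # about losses it has never been shown.

The arithmetic is trivial; the danger is the framing. The number looks precise and authoritative whether or not the window contains the events that matter, and that number is exactly the feedback executives were consuming.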


The new science of decision-making isn’t

At the heart of the answer to this vexing question about the meltdown of Wall Street is a major shift in the underlying psychological theories of human decision-making. Contemporary theory in human error research has shifted entirely from the idea of “decision-making” to the concept of “sense-making”. The cognitive science behind this new way of visualizing how individuals, and more importantly entire institutions, assess risk is known generally as “Naturalistic Decision Making” or NDM. What this new view teaches is that there are no “points-in-time” that constitute rational decision triggers; rather, problems like RISK management on Wall Street are an accumulated series of EXPERIENCES that flow together to create situations filled with distortions such as “positive outcome bias”. We now know from significant research that these distortions make it nearly impossible for those directly involved in such situations to make intelligent (reasoned) decisions about actual RISK.
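A toy simulation makes the distortion concrete. In the hypothetical sketch below (the probabilities and window length are assumptions chosen for illustration, not empirical figures), a severe loss occurs on one percent of days, yet a manager whose sense of risk is built from a rolling window of recent experience will, more often than not, have seen no losses at all and will feel the risk to be zero.

    import numpy as np

    rng = np.random.default_rng(7)

    BLOWUP_PROB = 0.01   # true daily probability of a severe loss (assumed)
    WINDOW = 60          # days of recent experience used to judge risk

    # Simulate 10,000 managers, each with an independent run of daily
    # outcomes; True marks a blow-up day, False a profitable one.
    blowups = rng.random((10_000, WINDOW)) < BLOWUP_PROB

    # Each manager's experience-based risk estimate: blow-ups seen per day.
    estimates = blowups.mean(axis=1)

    print(f"True blow-up probability          : {BLOWUP_PROB:.1%}")
    print(f"Managers whose experience shows 0 : {np.mean(estimates == 0):.1%}")
    # With a 60-day window, roughly half of managers observe an unbroken
    # streak of positive outcomes, so their felt risk converges on zero.

No single day in the streak is processed irrationally; the distortion is a property of the accumulated experience itself, which is precisely the point NDM research makes.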

The intellectual concept that humans are sense-makers and not rigorously programmed decision-makers flies in the face of “Rationalist” decision-making theory, produced in large measure by economists who, on occasion, received Nobel prizes for proffering theories that humans are rational processors of choice sets leading to optimized, rational decisions. More recently, researchers involved in the design of complex systems that combine human intelligence with machine automation have come to see the rationalist view as unworkable, if not outright incorrect. There are significant real-world experiences that support this shift in human error management theory.

What the Space Shuttle disasters teach Wall Street

In complex, highly automated settings it is known that technology-based risk-assessment systems (i.e., VaR calculations) can and do have massively negative implications, as humans (in this case Wall Street executives) rely more and more on structured, computer-based feedback and less on their instincts about the state of the system. This is exactly what happened with the destruction of two Space Shuttles, and it is now at the center of why Wall Street continued to pile on more RISK.

Human information processing is not what it used to be

The answer to these extremely complex problems resides in a fundamental redistribution of the thinking about risk between the human participants in the system and the tools they employ to manage huge amounts of complex data. At the core of this new way of thinking is the realization that the human information processing system is far more capable than assumed, but also subject to context-specific biases that produce disasters like the meltdown on Wall Street and the destruction of two Space Shuttles.

For these new theories to produce reliable results, sense-making systems must fundamentally help the human capital in the system visualize risk at the highest levels of management. Many of the toxic asset classes conjured forth by Wall Street were so complex and ill-defined that they could not even be categorized, let alone visualized. It is exactly such cognitively impoverished EXPERIENCES that made sense-making impossible on Wall Street in recent years. Yet these may not be impossible problems to solve.

So, where is the solution?

An entirely new field of human-centered systems design is currently evolving, focused on solving these complex problems, which have nothing to do with decision-making and everything to do with sense-making. This leads to a rather simple maxim: humans make sense of situations; computers do not. This is NOT to say that computers are not a vital component of systems design, but it does say that computers must support human intelligence, not supplant it.

Charles L. Mauro
