1. Setting the Landscape
A long-expected but unpredictable earthquake has just struck the community. A few seconds before the seismic waves hit the city, the early warning system had alerted critical infrastructure and the population through the advanced communication network connected to widely distributed arrays of seismic monitoring stations. Schoolchildren and citizens, well trained in advance, have dropped and taken cover, turned off stoves and stopped delicate operations. In businesses, automated systems have opened elevator doors, shut down production lines and placed sensitive equipment in safe mode. Within seconds, power stations and grid facilities have been switched into a safe state to protect them from strong shaking. Emergency responders have started to prepare and prioritize response decisions. Through decentralized sensor networks coupled with crowd-based cell phone apps, decision-makers are immediately informed and continuously updated. Emergency response centers are directing the layered responses to avert negative consequences. Health teams are rushing to cater to the physical and psychological needs of the victims. Damaging repercussions such as fires, landslides and pollution are factored in, and suitable counter-measures are implemented. All this unfolds smoothly through a combination of well-informed decentralized units with autonomous decision-making responsibilities, integrated into a centralized command system that gathers information on the unfolding disaster, synthesizes understanding and prioritizes the deployment of experts, teams and equipment.
This ideal scenario epitomizes one of the axioms of management theory, which states that managers oversee other people by means of information. They receive information from different sources, process it, make a decision, and convey this decision to subordinates and other audiences. The quality of the information received about the real conditions of the external and internal environment determines the quality of decisions and, ultimately, the adequacy of an organization's response.
Unfortunately, the reality is often far from this idealization of management. Indeed, a widely held misconception is that, right after disasters, executives and government officials have comprehensive information about the important facets of the catastrophe, allowing them to make adequate response decisions. Regrettably, the truth is different: the quality of information in the hands of managers is often very poor, which translates into inadequate decisions after the disaster. In fact, this sad diagnosis extends to the amount and quality of information in the possession of managers before disasters, which poses an even more pressing question, namely the responsibility of misinformed managers for facilitating, promoting or even creating the calamity, a phenomenon that Lee Clarke has documented extensively.
This disparity between perception and reality is also manifested in most books on risk and crisis communication, which are generally concerned with what companies and organizations have to do right after a disaster and how they should react to a crisis, or with situations in which seemingly minor risk events produce extraordinary public concern and social and economic impacts, cascading across time, geography, and social institutions. Our interest is at the other end: when risks are underestimated and hidden.
In contrast to the emphasis on disaster communication developed by other works, the present book concentrates on the importance of a proper understanding and transmission of risk-related information within a company, an industry or a society before a disaster strikes, and on the problems associated with internal risk transmission right after accidents. Severe reputational, material and human losses may result from communicating to external audiences an incorrect understanding of the disaster in its first hours and days. Based on the analysis of past and ongoing accidents, our aim is therefore to complement existing materials on proper risk communication processes, focusing on (i) the causes and consequences and (ii) the nature of the mistakes that result from information gaps and concealment.
Professor Nancy G. Leveson (MIT) has masterfully summarized the critical need for a proper information flow, whose many deficiency types are dissected in the present book:
- "Flawed human decision making can result from incorrect information and inaccurate process models."
- "Proper decision making often requires knowledge about the timing and sequencing of events. Because of system complexity and built-in time delays due to sampling intervals, however, information about conditions or events is not always timely or even presented in the sequence in which the events actually occurred."
- "Enforcing safety constraints on system behavior requires that the information needed for decision making is available to the right people at the right time, whether during system development, operations, maintenance, or reengineering."
- "Safety-related decision making must be based on correct, complete, and up-to-date information."
- "Communication is critical. Communication channels, resolution processes, adjudication procedures must be created to handle expressions of technical conscience."
- "Risk perception is directly related to communication and feedback. The more and better the information we have about the potential causes of accidents in our system and the state of the controls implemented to prevent them, the more accurate will be our perception of risk."
Before joining ETH Zurich as a researcher in March 2013, the first author, Dmitry, had been consulting for over seven years on post-disaster crisis communication for some of the largest Russian companies and organizations (Gazprom, Gazprom-Neft, Russian Railways, the 2014 Winter Olympic Games in Sochi, RusHydro, EuroChem, Aeroflot, Russian Post, MegaFon, etc.). The second author, Didier, has been dismayed many times during his academic career by the divide between the standard post-mortem stories told about disasters, in particular spaceflight accidents and financial crises, and the understanding he has come to develop through his work on the failure of engineering structures and on financial bubbles and crashes. Moreover, the shock created by the 2011 Fukushima Daiichi disaster led him to conceive of a civil super-Apollo project in nuclear R&D on how to manage civil nuclear risks over the required time scales of tens to thousands, and perhaps even millions, of years, given the short span and unstable nature of human societies. When Dmitry contacted Didier to join him and hone his practical expertise through the quantitative engineering approach of ETH Zurich, it soon became clear to us that the roots of the Fukushima Daiichi disaster could be found in the unlearned lessons of the 1986 Chernobyl catastrophe, which itself had strong connections to the unlearned lessons of the 1979 Three Mile Island nuclear accident. During our investigations, we discovered important lessons on the management mistakes of such severe accidents that could be useful for the world industrial community and for policy makers.