Human Factors in Large-Scale Technological Systems’ Accidents: Three Mile Island, Bhopal, Chernobyl

[This article was published in 1991 in the journal Industrial Crisis Quarterly, Vol. 5, pp. 131-154.]
Najmedin Meshkati
Institute of Safety and Systems Management
University of Southern California
Los Angeles, California 90089-0021

Abstract

Key Words (Bibliographic Entry): Industrial Accidents, Safety, Nuclear Power Plants, Chemical Processing Plants, Human Error, Human Factors, Three Mile Island, Bhopal, Chernobyl.

Many serious accidents at large-scale technological systems with grave consequences, such as those at Three Mile Island (TMI), Bhopal, and Chernobyl, have primarily been attributed to 'operator error.' Further investigation, however, has revealed that most of these incidents were caused by a combination of factors whose roots can be found in the lack of human factors (micro- and macroergonomics) considerations. Relevant human factors considerations, the causes of human error, and commonalities of human factors problems in major disasters are briefly reviewed. It is concluded that system accidents are caused by the way the (system) parts — engineered and human — fit together and interact. Also, on many occasions, the error and the resultant failures are both an attribute and an effect of such factors as complicated operational processes, ineffective training, non-responsive managerial systems, non-adaptive organizational designs, haphazard response systems, and sudden environmental disturbances, rather than being their cause. Recommendations for the prevention of such accidents are provided.

I. Introduction

“The root cause of the Chernobyl accident, it is concluded, is to be found in the so-called human element….The lessons drawn from the Chernobyl accident are valuable for all reactor types.”

Summary Report on the Post-Accident Review Meeting on the Chernobyl Accident (1986, p. 76).

A common characteristic of many high-risk, large-scale technological systems, such as nuclear power plants and chemical processing installations, is the large amount of potentially hazardous material concentrated at a single site under the centralized control of a few operators. Catastrophic breakdowns of these systems pose serious threats not only to those within the plant, but also to the neighboring public, and even to the whole region and country. Attesting to this is the accident at the Chernobyl nuclear power plant in the Soviet Union in 1986. Chernobyl demonstrated, for the first time, that the effects of such a nuclear accident would not be localized, but rather would spill over into neighboring countries and have global consequences. The radioactive fallout resulting from Chernobyl was detected all over the world, from Finland to South Africa. The Europeans, in particular, in addition to serious health concerns, have had to deal with significant economic losses and serious, long-lasting environmental consequences. This phenomenon has been described most succinctly as "a nuclear accident anywhere is a nuclear accident everywhere."

Moreover, besides the serious and disastrous effects of major accidents at these facilities, the importance of this study, and of others of this nature, is further heightened by the proliferation of nuclear power plants in many (developing) countries.

According to the International Atomic Energy Agency (IAEA), as of December 31, 1989, there were some 436 nuclear power plants operating in 26 countries (12 of which went into operation only last year), and 96 new reactors under construction (IAEA Bulletin, 1/1990). Furthermore, an increasing number of developing countries are embracing nuclear power to gain greater economic independence and to achieve permanent relief from their worrisome balance-of-payments and foreign-debt burdens, which are aggravated by their continuously increasing need for imported energy. Attesting to this is Cuba’s aggressive nuclear program — two nuclear power plants under construction and two planned (IAEA Bulletin, 4/1987). According to The Economist (December 4, 1988), nuclear power will reduce Cuba’s heavy need for imported energy and its dependence on Soviet oil and economic aid ($6.8 billion in 1986). India, another developing country, has recently announced that 32 new nuclear reactors will be operating by the year 2000 (Insight, September 18, 1989). This emerging phenomenon among developing countries prompted the IAEA, in 1986, to establish the Senior Expert Group to assist developing countries in the promotion and financing of nuclear power programs (Bennett, 1987). Also, according to a study by the Division of Nuclear Power of the IAEA, developing countries’ present share of the world’s installed nuclear power plants is 7.1%. A total of 21 developing countries either have nuclear power plant(s) in operation or have plants in the construction or planning stages (Csik and Schenk, 1987). This number will certainly increase at a “modest rate” in the future. Based on a recent estimate by the IAEA, nuclear energy production will grow at an average of 2.8 to 3.9% per year worldwide from 1989 to 2005 (IAEA Newsbriefs, August/September, 1990). The estimated ranges of average annual growth rates of nuclear power production for developing countries in the Middle East and South Asia (combined) and in Latin America are 19.5-24.2 and 12.8-16.5 percent, respectively.

It is critical to realize that the Bhopal accident should not be considered an isolated event unique to, and only possible in, developing countries. The findings of an extensive comparative analysis presented at a U.N.-organized international conference demonstrated that, with the present safety precautions, this accident could happen at any comparable plant in any developed country, e.g., the Federal Republic of Germany (Uth, 1988). It could happen just as easily in the United States: “there is no question that Bhopal would happen in the U.S… You are dealing with such terribly dangerous chemicals that human failures or mechanical failures can be catastrophic. The potential is here and it could happen, maybe today, maybe 50 years from now” (cited in Weir, 1987, p. 123). Moreover, according to a 1985 report by the Congressional Research Service, about 75 percent of the U.S. population lives “in proximity to a chemical plant” (cited in Weir, 1987, p. 116). In fact, according to an expert with the Environmental Policy Institute, there have been “17 accidents in the U.S., where each of which released the Bhopal equivalent toxic gases… Only because the wind was blowing in the right direction, Bhopal did not happen here” (USA Today, August 2, 1989).

The underlying rationale and major objective of this article are to highlight and demonstrate the critical effects of human and organizational factors on the safety of hazardous, large-scale technological systems. This is done by analyzing three well-known accidents at such systems — Three Mile Island (TMI), Bhopal, and Chernobyl. Moreover, by integrating the common causes of these three accidents, a policy framework and/or guideline is suggested to facilitate adherence to the identified safety-ensuring factors. The specific needs of developing countries in the context of technology transfer are also addressed.

II. Grave Consequences of Hazardous, Large-scale Technological Systems’ Accidents

The aftermath of most hazardous, large-scale technological system accidents involves serious and long-lasting health and environmental consequences.

The physical and epidemiological consequences of the accident at the Three Mile Island nuclear power plant on March 28, 1979, are not yet fully known. However, according to many studies, residents in the vicinity of TMI exhibited elevated symptoms of stress (as measured by self-report, performance, and catecholamine levels) more than one year after the accident (Baum, Gatchel and Schaeffer, 1983). Baum’s (1988) study of TMI and other technological calamities, especially those involving toxic substances, suggested that they cause more severe or longer-lasting mental and emotional problems than do natural disasters of similarly tragic magnitude. The chronic stress induced by technological accidents and the associated emotional, hormonal (i.e., epinephrine, norepinephrine, and cortisol) and immunological changes may cause or exacerbate illness. Moreover, based on Davidson and Baum’s (1986) findings, symptoms of post-traumatic stress syndrome persisted among residents in the vicinity of TMI for as long as five years after the accident. The investigative work of Prince-Embury and Rooney (1988) following the restart of the TMI nuclear plant in 1985 also found that psychological symptoms of stress in the residents in the vicinity of TMI “remained chronically elevated” (p. 779). Furthermore, the plant cost only $700 million to build, but its cleanup after the accident required 400 workers working for 4.5 years at a cost of $970 million (New York Times, April 24, 1990).

The leak of methyl isocyanate (MIC) at the Union Carbide pesticide plant in Bhopal, India, on December 4, 1984, resulted in the deaths of approximately 3,800 people and the injury of more than 200,000 (New York Times, September 12, 1990). On average, two of the 200,000 people who were injured at the onset of this disaster die every day. Those who initially survived the gas “are continuing to suffer not only deterioration of their lungs, eyes and skin but also additional disorders that include ulcers, colitis, hysteria, neurosis and memory loss” (Los Angeles Times, March 13, 1989). Moreover, the MIC exposure has even affected the second generation; “mortality and abnormalities among children conceived and born long after the disaster to exposed mothers and fathers continue to be higher than among a selected control group of unexposed parents” (Ibid). Also, according to a recent report issued by the National Toxic Campaign Fund, laboratory tests found several toxic substances in the Bhopal environment, raising further questions about the additional long-term effects of the disaster (Jenkins, 1990).

Among the three foregoing accidents, Chernobyl has had the most widespread effects. According to recent charges, “the Chernobyl accident released at least 20 times more radiation than the Soviet government has admitted” (Time, November 13, 1989, p. 62). The immediate (short-run) aftermath of the 1986 Chernobyl nuclear power plant accident in the Soviet Union included: 300 deaths (New York Times, April 27, 1990); a $12.8 billion cost of disruption to the Soviet economy (New York Times, October 13, 1989); twice the normal rate of birth defects among those living in the vicinity of the plant (Los Angeles Times, March 27, 1989); thyroid glands of more than 150,000 people “seriously affected” by doses of radioactive iodine; rates of thyroid cancer 5 to 10 times higher for the 1.5 million people living in the affected areas; leukemia rates among children in some areas of the Ukraine 2 to 4 times the normal level, with one or two children a week dying of leukemia in a Minsk hospital compared with one or two a year before the accident; a death rate for people who have been working at the Chernobyl plant since the accident that is 10 times what it was before the accident (New York Times, April 28, 1990); one-fifth (1/5) of the republic of Byelorussia’s more than 10 million people having to be moved from areas contaminated by radiation, including 27 cities and more than 2,600 villages (Los Angeles Times, June 20, 1990); $26 billion allotted for the resettlement of the 200,000 people still living in the irradiated areas (New York Times, April 26, 1990); an estimated cost of $70 billion to evacuate people from all the contaminated areas (Los Angeles Times, June 20, 1990); and up to 200 years needed to “totally wipe out” the effects of the accident in the affected areas (Ibid).

The other grave, long-term environmental consequences of this accident are yet to be realized. For instance, due to Chernobyl’s contamination, about 2 million acres of land in Byelorussia and Ukraine cannot be exploited normally, and Byelorussia has lost 20% of its farmland (Insight, January 15, 1990). In March 1990, these facts led the Ukrainian Supreme Soviet, the republic’s legislature, to order the government to permanently close the remaining reactors at Chernobyl and to “decide on the stoppage of further development of additional nuclear power stations in Ukraine” (Los Angeles Times, March 3, 1990). The Ukrainian decision followed a similar move a year earlier in the southern republic of Armenia to close an atomic energy plant.

Moreover, most of the injuries inflicted on the people in such accidents, such as exposure to radiation and toxic gases, are very difficult and sometimes even impossible to cure. Most of the people exposed to high doses of radioactive material at Chernobyl have died of cancer. Even exotic medical techniques such as bone marrow transplants have not been effective in saving their lives (Gale and Hauser, 1988).

III. Human Factors

Today’s complex, large-scale technological systems place additional demands and new requirements on their human operators. These systems require human operators to constantly adapt to new and unforeseen system and environmental demands. Furthermore, there is no clear-cut distinction between system design and operation, since the operator has to match system properties to changing demands and operational conditions. In other words, according to Reason (1990), operators must be able to handle 'non-design' emergencies, because system designers cannot foresee all possible failure scenarios and are not able to provide automatic safety devices for every contingency. Therefore, it is highly important that the operator’s job, which involves the effortful and error-prone activities of problem-solving and decision-making at the workstation level, be facilitated by proper interface devices and supported by the needed organizational structure.

Thus, the role of the human operator responsible for such systems has changed from that of a manual, man-in-the-loop controller to that of a supervisory controller who oversees one or more computer controllers performing the routine, frequently occurring control functions. In supervisory control systems, the human operator’s role is primarily passive: monitoring changes in the system state (Mitchell, 1987). The operator’s passive role, however, changes to one of active involvement in cases of unexpected system events, emergencies, alerts, and/or system failures (Rasmussen and Rouse, 1981). Moreover, operators of these systems, faced with the system’s inherent opacity and decision uncertainty, usually work in a centralized location (e.g., a control room), sharing and exchanging data, collectively analyzing information, and making decisions.

III.1 Micro- and Macroergonomics

Human factors, also called ergonomics, is concerned with improving the productivity, health, safety, and comfort of people, and also with effective interaction between people, the technology they are using, and the environment in which both must operate. Human factors specialists call these collective sets “human-machine-environment systems.” Human factors at the micro level, microergonomics, focuses on the human-machine system level and is concerned with the design of individual control panels, visual displays, and workstations, and with the “ergonomic seats” now so familiar in car commercials. Included are studies of human body sizes (known as anthropometry), human skills, cognitive capacity, human decision-making, information processing, and error. Human factors at the macro level, macroergonomics, focuses on the overall people-technology system level and is concerned with the impact of technological systems on organizational, managerial, and personnel (sub-) systems.

The two major building blocks of any technological system are: (1) the physical, engineered components and (2) the people (human). Although not one of the fundamental building blocks, the organization (and its structure) is equally important, being perhaps analogous to the mortar — facilitating the interface, connecting and joining the blocks together. Thus, the stability, performance, and survival of technological systems, as well as their ability to tolerate environmental disturbances, depend upon the nature, formation, and interaction of the human, organizational, and engineered (technological) subsystems — a macroergonomical paradigm.

At Bhopal, macroergonomics would have been concerned with the performance of poorly trained Third World operators operating advanced technological systems designed by other humans with much different educational backgrounds and cultural and psycho-social attributes (Meshkati, 1989a). Micro- and macroergonomical approaches build upon each other and concentrate on the introduction, integration, and utilization of technology, and its interface with the end-user population. Their overall objective is to optimize the functions (i.e., safety and efficiency) of the intended technological system. This issue is eloquently stated by Rasmussen (1989, p. 1):

“The human factors problems of industrial safety in (operation of large-scale systems) not only includes the classical interface problems, but also problems such as the ability of designers to predict and supply the means to control the relevant disturbances to an acceptable degree of completeness, the ability of the operating staff to cope with the unforeseen and rare disturbances, and the ability of the organization in charge of operation to maintain an acceptable quality of risk management. The human factors problems of industrial safety have become a true cross-disciplinary issue.”

The payoffs of proactive incorporation of, and on-line compliance with, human factors considerations in complex, large-scale technological systems are reflected in economic and non-economic gains. They include improvement of functional effectiveness; higher equipment utilization and maintainability; enhancement of human welfare through reduction of accident potential, which could otherwise result in loss of life and property; and minimization of adverse environmental effects (e.g., discharge of hazardous material into the environment) due to haphazard and sub-optimal human-machine-environment interactions (Meshkati, 1988).

III.2 Human Factors of Large-Scale Technological System Transfer

It is contended that the safety and operation of large-scale technological systems become even more important for developing countries because of the transferred nature of the needed technologies. Additionally, most developing countries do not have the resources to build an infrastructure that makes industries safe. The consequence is that major technological accidents in developing countries lead to far more deaths and injuries than in industrialized countries (Shrivastava, 1987; Smets, 1985). In the last ten years, the accidents with the most fatalities have all occurred in technological systems located in developing countries (Shrivastava, Mitroff, Miller and Miglani, 1988). Incidentally, the most catastrophic industrial accident in history — the Bhopal disaster — occurred at a chemical plant using transferred technology in a developing country.

The need for extra attention to the safety-related issues of hazardous technological systems in developing countries has also been raised by Peter Thacher, the former deputy executive director of the United Nations Environmental Program (UNEP). He has stated, in the context of Bhopal, “it is obvious that some manufacturing processes are more dangerous in a developing country than a developed one…You have to assume that in developing country people will not be as careful in terms of inspection, quality control, and maintenance. And you must assume that if a problem occurs, it will be more difficult to cope with it” (cited in Bordewich, 1987, p. 30).

Human-organization-technology interactions in developing countries require additional considerations that are not necessarily needed when the intended technology is used in its country of origin (cf. Meshkati, 1989b). Primary human- and organization-related considerations affecting large-scale technological systems in developing countries include the local operator population’s needs, capabilities, and limitations. They also cover, for instance, physical and psychological characteristics and cultural and religious norms. Examples of human factors considerations in technology transfer include (Meshkati, 1989c):

Considering local user population’s attributes — e.g., anthropometric and perceptual characteristics, psycho-motor skills, and mental models which affect task performance, error, and the determination of efficient human-workstation interaction;

Considering physical environmental conditions affecting operators’ safety and performance — e.g., adjusting ventilation requirements based on the installation site’s down-draft wind, heat exchange characteristics, and climatic conditions;

Considering the effects of cultural and religious variables — e.g., adopting separate production schedules, quality control (QC), machine requirement planning (MRP), and adjusting shift duration for the holy month of Ramadan due to dawn to dusk fasting, for the production facilities operating in Islamic countries;

Employing more adaptive managerial and organizational factors — e.g., deciding the rigidity and flatness of the organizational structure; and

Determining the appropriate level of needed 'requisite variety' (cf. Meshkati, 1989d), finding the optimum span of control, designing feedback mechanisms, and determining the proper supervisory style based upon the identification of the local operators’ background, understanding, expectations, and tolerance for uncertainty.

III.3 ‘Human Error’ and Systems’ Accidents

It is said that human performance factors are the 'guts of every accident,' or, according to Cherns (1962), an accident is “an error with sad consequences” (p. 162). Declaring that the operators must have failed always helps to avoid a lot of “undesirable” problems. As suggested by Perrow (1986a), “finding that faulty designs were responsible would entail enormous shutdown and retrofitting costs, finding that management was responsible would threaten those in charge; but finding that operators were responsible preserves the system, with some soporific injunctions about better training” (p. 146).

Operators’ errors should be seen as the result of human variability, which is an integral element in human learning and adaptation (Rasmussen, 1985). This approach informs the classification of human errors (Rasmussen, Duncan, and Leplat, 1987); thus, human error occurrences are defined by the behavior of the total human-task system (Rasmussen, 1987). Frequently, the human-system mismatch will not be due to spontaneous, inherent human variability, but to events in the environment which act as precursors. Furthermore, in many instances, the working environment can aggravate the situation. Rasmussen (1986) has characterized this phenomenon as the “unkind work environment”: once an error is committed, it is not possible for the person to correct the effects of inappropriate variations in performance before they lead to unacceptable consequences, because the effects of the 'errors' are neither observable nor reversible. Finally, according to many case studies, a good portion of errors in complex human-machine systems, so-called design-induced errors, are forced upon the human operators.

III.4 Major Causes of `Human Error’ and the Resulting Accidents in Complex, Large-Scale Technological Systems

Performance, as well as the inherent accident potential, of complex technological systems is a function of the interactions of their engineered and human (i.e., workstation, personnel, and organizational) subsystems. Traditionally, many (technological) system failures implicated in serious accidents have been attributed to operators and their errors (e.g., Three Mile Island and Bhopal). However, this is a great over-simplification. Perrow (1984) has reckoned that while 60-80 percent of all accidents are officially attributed to operators, the real figure might be closer to only 30-40 percent.

On many occasions, the error and the resulting failures are both an attribute and an effect of such factors as complicated operational processes, ineffective training, non-responsive managerial systems, non-adaptive organizational designs, haphazard response systems, and sudden environmental disturbances, rather than being their cause (Baldissera, 1988; Meshkati, 1988, 1989, 1990; Reason, 1988). Generally, the solution to the problem of technological system safety has been defined as an engineering one (Perrow, 1986a). Nowadays, however, through many rigorous scientific and multidisciplinary investigations, it is known that system accidents are caused by the way the (system) parts — engineered and human — fit together and interact.

Moreover, the 'system’s environment,' which includes different operational milieus and their peculiar human subsystem-related factors, can exacerbate the vulnerability of such interactive subsystems and, if not proactively dealt with, can activate a chain reaction resulting in low system performance and efficiency, unsafe operations, higher accident potential, and even disasters. The Bhopal tragedy, for instance, was a typical example of such a vicious circle causing total system failure — an inherently faulty and unprepared human-organizational-technological system aggravated by the contextual factors of a developing country (India) (Shrivastava, 1987).

IV. Human Factors Problems in Large-Scale Technological Systems’ Accidents

There are two main categories of human factors causes of the three system accidents investigated in this work: a) lack of human factors considerations at the (system) design stage; and b) lack of human factors considerations at the (system) operating stage. Notwithstanding the overlapping domains and intertwined nature of these two stages, the former, using Reason’s (1990) characterization, refers primarily to 'latent errors' — adverse consequences that may lie dormant within the system for a long time, becoming evident only when they combine with other factors to breach the system’s defenses. In the context of this study, they include control room, workstation, and display/control panel design flaws causing confusion and leading to design-induced errors; problems associated with lack of foresight in operators’ workload estimation, leading to overload (and stress); inadequate training; and organizational rigidity and disarrayed managerial practices. The latter, which is associated with the performance of the front-line operators immediately before and during the accident, includes sources and variations of 'active errors' such as misjudgments, mistakes, and wrong-doings. The two-stage human factors causes of the three aforementioned accidents are further analyzed in the remainder of this section.

IV.1 Major Human Factors Causes of the Three Mile Island Accident

The Three Mile Island nuclear power plant accident is the most investigated accident in the history of the commercial nuclear industry; the title of the Comptroller General’s report to the Congress is “Three Mile Island: The Most Studied Nuclear Accident in History.” The following, however, is a summary account of only the most critical human factors causes of this accident [for further information and detailed analysis see Kemeny (1980), Perrow (1981 & 1984), and Rogovin (1980)].

The lack of human factors considerations at the design stage was most evident in TMI’s control room. It was poorly designed, with problems including: controls located far from the instrument displays that showed the condition of the system; cumbersome and inconsistent instruments that often looked identical and were placed side-by-side, but controlled widely differing functions; instrument readings that were difficult to read, obscured by glare or poor lighting, or actually hidden from the operators (many key indicators were located on the back wall of the control room, and many of these indicators were faulty or misleading); and contradictory systems of lights, levers, or knobs — pushing one lever up may have closed a valve, while pulling another lever down may have closed another. For instance, in the case of the pilot-operated relief valve (PORV; a pressure relief valve to release water from the core), when it stuck open, its indicator did not show whether the valve was actually open or closed. The indicator only showed the position of the operating switch that was supposed to open or close it. Moreover, there was no direct way or designated indicator to read the exact water level in the reactor core. This was partly responsible for one of the most significant errors by the operators: they cut back and failed to maintain the high pressure injection (HPI) system. Furthermore, the HPI throttle valves were operated from a front panel, while the HPI flow indicator was on a back panel and could not be read from the throttle valve operating position.
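As a purely illustrative aside (it is not part of the original analysis, and all names in it are hypothetical), the PORV indicator flaw described above can be sketched in a few lines of Python: the status lamp was driven by the commanded switch position rather than by a sensed valve position, so a stuck-open valve still read as closed.

    # Minimal sketch of the indicator design flaw discussed above; hypothetical names.
    class ReliefValve:
        def __init__(self):
            self.commanded_open = False   # what the operator's switch requests
            self.actually_open = False    # the true mechanical state of the valve

        def command_close(self):
            # The command is recorded, but a mechanical fault (a stuck valve)
            # can leave the true state unchanged.
            self.commanded_open = False

    def flawed_indicator(valve):
        # TMI-style display: reflects only the switch position.
        return "OPEN" if valve.commanded_open else "CLOSED"

    def state_based_indicator(valve):
        # Human factors alternative: display the sensed valve position instead.
        return "OPEN" if valve.actually_open else "CLOSED"

    valve = ReliefValve()
    valve.actually_open = True            # the valve has stuck open
    valve.command_close()                 # the operator commands it shut
    print(flawed_indicator(valve))        # prints CLOSED -- misleading
    print(state_based_indicator(valve))   # prints OPEN   -- reflects reality

The design lesson, as the TMI investigations noted, is that a display should report the sensed state of the system, not merely the state that was commanded.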

In the control room of TMI, there were three audible alarms sounding and more than 1,600 lights blinking at the time of the accident. The TMI operators literally had to turn off alarms and shut down the warning lights (Perrow, 1981).

The lack of proper operator training in general, and “stress training” in particular, was a major contributor to the TMI accident. It was a critical human factors consideration that should have been attended to at the design stage of TMI. The Comptroller General’s (1980) report to the U.S. Congress concluded: “Most training, including simulator training, was geared toward preparing the operator to run the plant during routine situations, instead of understanding or coping with the unexpected (emergencies)” (p. 25). According to Rogovin (1980), “other than being required to memorize a few emergency procedures, (TMI) reactor operators are not extensively trained to diagnose and cope with the unexpected equipment malfunction, serious transients (temporary electrical oscillations), events that cannot be easily understood.” In other words, the TMI operators were trained only to handle discrete events, not to deal with 'multiple-failure' accidents. Such accidents were not simulated in the training (Perrow, 1981).

Another aspect of the lack of human factors consideration at the design stage, namely organizational factors, played a major role in the TMI accident. Some typical (generic) problems were due to the hierarchical organizational structure, such as mismatches in response times at the different levels of the hierarchy and information overload (cf. Meshkati, 1991). In fact, according to Perrow (1984), “the dangerous accidents lie in the system, not in the components, and the inherent system accident potential can increase in a poorly-designed and managed organization” (p. 351). One of the recommendations of the President’s Commission (1979) investigating the TMI accident was that “to prevent nuclear accidents as serious as TMI, fundamental changes will be necessary in the organization, procedure and practice.”

Given the enormous human factors problems of TMI at the design stage, the problems at the operating stage may seem insignificant. There were, however, numerous instances of misjudgment by the operators. Finally, as also noted by Senders (1980), the Kemeny report (1980) made it plain that the causes of the TMI accident went far beyond the errors made by a few front-line operators: “while the major factor … was inappropriate operator action, many factors contributed to the action of the operators, such as deficiencies in their training, lack of clarity in their operating procedures, failure of organizations to learn proper lessons from previous incidents, and deficiencies in the design of the control room. These shortcomings are attributed to the utility [the Metropolitan Edison Company], to suppliers of equipment, and to the federal commission that regulates nuclear power.”

IV.2 Major Human Factors Causes of the Bhopal Accident

The lack of human factors considerations, at both the design and operating stages, played a very significant role in the accident at the Bhopal plant. The overall design and safety of the plant’s control room had many inherent human factors problems. The room, in fact, had not been designed on the basis of a thorough operator task analysis. A highly critical pressure gauge that should have indicated the buildup of MIC pressure around the relief valve was missing from the control room. It was located close to the valve assembly, somewhere on the plant site, and was supposed to be monitored manually, with no link to the control room or warning system (Chemical and Engineering News, February 11, 1985). Furthermore, according to Time magazine’s special report on the Bhopal accident (December 17, 1984, p. 25), “an important panel in the control room had been removed, perhaps for maintenance, thus preventing the leak from showing up on monitors.” Moreover, approximately half an hour after the start of the gas leak, MIC began to engulf the control room. Many of the operators, not having oxygen masks, could neither see nor breathe and had to run from their workstations. This happened at the most critical time in the plant’s life, when they were needed most.

More specifically, the problems with the visual displays — “gauges” and “meters” — at the Bhopal plant were addressed by almost all of the studies. The gauges were consistently either broken, malfunctioning, off the scale, giving wrong data, or considered “totally unreliable.”

According to the International Confederation of Free Trade Unions (ICFTU) report (1985), “broken gauges made it hard for the MIC operators to understand what was happening. In particular, the pressure indicator/control, temperature indicator and the level indicator for the MIC storage tanks had been malfunctioning for more than a year” (p. 9). Having broken gauges “(was) not unusual at the factory,” according to the Bhopal plant operators (New York Times, January 30, 1985).

Based on the Chemical and Engineering News analysis (February 11, 1985, p. 31), the pressure meters monitoring the leak from the MIC storage tank (number 610) gave abnormally low pressure readings: a pressure of 20 psi was given as 2 psi. About two hours before the MIC leak, the pressure gauge had risen from 3 to 10 psi. At first, operators thought that the pressure gauge was faulty, as was often the case at the Bhopal plant. Half an hour later, according to Agarwal, Merrifield, and Tandon (1985), operators started to detect an MIC leak, not through any gauge or sensor, but because their eyes started to tear. Also, the volume indicator for tank number 619, which was supposed to be empty and usable as a spare, “incorrectly read that it was 22 percent full” (Everest, 1986, p. 28). Furthermore, the temperature gauge of the MIC storage tanks, which sometimes went off the scale in the summer, could not be relied upon (New York Times, January 30, 1985). Moreover, the increase in temperature (of the MIC tank), according to the Union Carbide investigation (1985), was not signaled by the tank high-temperature alarm, since it had not been reset to a temperature above the storage temperature.

Of the two major safety devices at the Bhopal plant, one was the vent-gas scrubber, a system that was designed to pour caustic soda on the MIC so that it would decompose. (The other device was a flare tower that would ignite the gas and burn it off in the air harmlessly. This system was not operational because of a missing piece of pipe and other maintenance problems.) According to an engineering analysis conducted by Naschi (1987), the scrubber unit was not turned on until after the situation had gone out of control. Furthermore, a flow meter also “failed to indicate” that a flow of caustic soda had started (MacKenzie, 1985). The latter problem could have contributed to the overall failure of the vent-gas scrubber’s functional performance because, according to Bowonder, Kasperson, and Kasperson (1985), operators, not having the correct and required information, neglected to augment the flow of caustic soda required to neutralize the MIC.

In the MIC control room of the Bhopal plant, at the time of the accident, the operators were extremely overloaded and found it “virtually impossible to look after the 70-odd panels, indicators and console and keep a check on all the relevant parameters” (Agarwal et al., 1985, p. 8). The foregoing catalogue of problems may have led Krishnan (1987) to conclude that “the Bhopal tragedy was a pathetic example of how careless display control design can end up in a catastrophe” (p. 5).

The Bhopal plant’s rigid organizational structure, according to Kleindorfer and Kunreuther (1987), was one of the three primary causes of the accident. Moreover, the plant was plagued by labor relations problems and internal management disputes (ICFTU, 1985). For a period of fifteen years prior to the accident, the plant had been run by eight different managers (Shrivastava, 1987). Many of them came from different backgrounds with little or no relevant experience. The last managers, who served the plant at the time of the accident, had originally been transferred from a UCIL battery plant. This group was not fully trained in the hazards and appropriate operating procedures of the pesticide plant (ICFTU, 1985).

The discontinuity of the plant management, its authoritative and sometimes manipulative managerial styles, and the non-adaptive and unresponsive organizational system collectively contributed to the accident. The latter element, i.e., organizational rigidity, was primarily responsible for the failure to respond to, and take the necessary corrective courses of action on, the five reported major accidents that occurred at the plant between 1981 and 1984. This leads one to conclude that the catastrophic accident of 1984 was only the inevitable and natural by-product of this symptomatic behavior. This is in accordance with Mitroff’s (1988) thesis that “crises often occur because warning signals were not attended to” (p. 18). Moreover, the Bhopal plant’s organizational culture should also be held responsible for not heeding the many operator warnings regarding safety problems, such as the one after the October 1982 combined release of MIC, hydrochloric acid, and chloroform, which spread into the surrounding community. Bhopal’s monolithic organizational culture, as the plant’s operational milieu, only fostered the centralization of decision-making by rules and regulations or by standardization and hierarchy, both of which required high control and surveillance. This was diametrically different from Weick’s (1987) contention, which characterized organizational culture as a “source of reliability” and suggested that high system reliability can be achieved only by simultaneous centralization and decentralization (which allows and encourages operators’ discretion and input).

According to the ICFTU (1985), training was a major problem at the Bhopal plant, and many operators had been given little or no training about the safety and health hazards of MIC or other toxic substances in the plant. Language may also have contributed to the lack of understanding about MIC operations and hazards: all signs regarding operating and safety procedures were written in English, even though many of the operators spoke only Hindi.

The lack of human factors considerations at the operating stage at the Bhopal plant was reflected not only in emergency training but also in task-related training. The concern that operators did not have adequate task-related training was also raised in Union Carbide’s 1982 safety audit. This, of course, was partly due to a high turnover rate. Many key personnel were being released for independent operation without having gained sufficient understanding of safe operating procedures. There was also concern that the training relied too much on “rote memorization of steps” instead of an “understanding of the reasoning behind procedures” (Chemical and Engineering News, February 11, 1985). [This is practically a carbon copy of the training problems at TMI.]

IV.3 Major Human Factors Causes of the Chernobyl Accident

Although, as noted in the Introduction, the “human element” was acknowledged as the “root cause” of the Chernobyl accident, and it has also been stated that “the Chernobyl accident illustrated the critical contribution of the human factor in the nuclear safety” (Nuclear Safety Review for 1987, p. 43), a systematic human factors analysis of this event is unfortunately not available. The urgent need for, and criticality of, such a study is strongly emphasized by Vladimir M. Munipov (1990), the Deputy Director of the USSR Scientific and Research Institute of Industrial Design (VNIITE). Moreover, the (now deceased) Soviet academician Valeriy A. Legasov, who headed the Soviet delegation to the Post-Accident Review Meeting [organized by the International Atomic Energy Agency (IAEA) in August 1986], according to Munipov (1990), 'declared with great conviction':

“I advocate the respect for human engineering and sound man-machine interaction. This is a lesson that the Chernobyl taught us” (cited in Munipov, 1990, p. 10).

Lack of human factors considerations at the design stage is one of the primary causes of the Chernobyl accident. Attesting to this is Legasov’s statement that one of the “defects of the system was that the designers did not foresee the awkward and silly actions by the operators” (cited in Oberg, 1988, p. 256). He also attributed the accident’s cause to “human error and problems with the man-machine interface” (cited in Wilson, 1987, p. 1639). It is also reported that the Chernobyl accident happened because of: i) faults in the concept of the reactor (inherent safety not built in); ii) faults in the engineering implementation of that concept (insufficient safeguard systems); and iii) failure to understand the man-machine interface, “a colossal psychological mistake” (in the words of Legasov) [cited in a report by the United Kingdom Atomic Energy Authority, The Chernobyl Accident and its Consequences (1988), p. 5.47].

Additional findings of the above report indicated that: i) the shutdown system was inadequate in the event of the accident and might in fact have exacerbated it rather than terminated it; ii) there were no physical controls to prevent the staff from operating the reactor in its unstable regime or with safeguard systems seriously disabled or degraded; and iii) there were no fire drills and no adequate instrumentation and alarms to warn and alert the operators of the danger.

Although a thorough analysis of the design and operations of Chernobyl’s control room is not available, according to a published report in the prestigious scientific journal, Nature, entitled Coping with the Human Factor (4 September 1986), “the planning of tests (at Chernobyl) seems to have been in tune with the general sloppiness of the operation of the control room at the end of April” [(p. 25); emphasis added].

The lack of proper training, as well as deficiencies in the qualifications of operating personnel, was considered another contributing factor to this accident by all investigations, including the IAEA’s Nuclear Safety Review for 1987. The quality of training and re-training of personnel was also, if only implicitly, acknowledged as a critical factor by the Soviet Report on the Chernobyl Accident (1986).

Managerial and organizational factors also contributed heavily to the catastrophic events at Chernobyl. Apart from the design errors, the other cause of the Chernobyl accident in Wilson’s (1987) analysis was management error: “there were important admissions of management errors, as distinct from operator error” (p. 1639). Also, it is reported that there were deficiencies in the plant organization and management (Nuclear Safety Review 1989). The principal `managers’ who ran and conducted the test at Chernobyl which caused the accident “were electrical engineers from Moscow. The man in charge, an electrical engineer, was not a specialist in reactor plants” [cited in Reason (1990), p. 144]. “Neither the station’s managers nor the Ministry of Power and Electrification’s leadership had any concept of the necessary actions…There was a noticeable confusion even on minor matters” (Pravda, May 20, 1986).

According to the IAEA’s Summary Report on the Post-Accident Review Meeting on the Chernobyl Accident (1986), one of the main contributing factors to the Chernobyl accident was the operators’ potential misunderstanding of the physics characteristics of the reactor. This is corroborated by comments such as “the staff was insufficiently familiar with the special features of the technological processes in a nuclear reactor… They had also lost any feeling for the hazards involved” [cited in Reason (1990), p. 144]. In response to these shortcomings, the IAEA report recommended that careful attention be paid to the design of safety and control systems to enable the operators in the control room to 'understand' the problems encountered and to lead them to take the proper course of action(s).

The lack of human factors considerations at the operating stage was highlighted by operator error, which was also identified as one of the major causes of the Chernobyl accident. According to an official report prepared by a team of Soviet investigators, an extraordinary sequence of human errors turned some weaknesses in the reactor’s design into deadly flaws. Ramberg (1987), in analyzing the causes and implications of the Chernobyl accident, concluded that it had resulted from “gross operator incompetence — not entirely unlike that which resulted in the accident in 1979 at TMI” (p. 307). Six important safety devices were “deliberately” disconnected on the night of 25 April (Wilson, 1987, p. 1639), the most important of which, the Emergency Core Cooling System (ECCS), was made inoperative. In addition, the reactor was deliberately and improperly run below 20% power.

Finally, a recent report by the IAEA summarizes the lessons learned (primarily) from the Chernobyl accident as “the root causes of most safety significant events were found to be deficiencies in: plant organization and management; the feedback of operational experience, training and qualification, quality assurance in the maintenance and procedures, and the scope of the corrective actions” (Nuclear Safety Review 1989, p. D61).

IV.4 Commonalities of Human Factors Problems in Large-Scale Technological Systems’ Accidents

The comparison of TMI, Bhopal, and Chernobyl is not unprecedented. In the case of the former two, many authoritative analogies have already been made. The President of the World Resources Institute, James Speth (1984), in his statement at the hearing on the “Implications of the Industrial Disaster in Bhopal” before the Subcommittee on Asian and Pacific Affairs of the U.S. House of Representatives, argued that “it is likely that Bhopal will become the chemical industry’s Three Mile Island, an international symbol deeply imprinted on public consciousness.”

Regardless of the nature of the technology utilized, there are striking similarities and commonalities in the nature and magnitude of the causes of complex, large-scale technological system failures, such as Three Mile Island (TMI), Bhopal, and Chernobyl. Furthermore, it would not be spurious to state that the causes of these accidents are reminiscent of the causes of another past nuclear power plant accident — the accident in January 1961 at the SL1 (Stationary Low Power Reactor No. 1), located at the National Reactor Testing Station, Idaho Falls, Idaho. A quotation from the general conclusions as to the causes of this accident could almost exactly be applied to the TMI and Chernobyl cases [as such, one may argue that had it been heeded, these accidents could have been prevented]:

“Most accidents involve design errors, instrumentation errors, and operator or supervisor errors… The SL1 accident is an object lesson on all of these… There has been much discussion of this accident, its causes, and its lessons, but little attention has been paid to the human aspects of its causes… There is a tendency to look only at what happened, and to point out deficiencies in the system without understanding why they happened; why certain decisions were made as they were… Post-accident reviews should consider the situation and the pressures on personnel which existed before the accident” (Thompson, 1964, p. 681).

V. Conclusions and Recommendations

As demonstrated by Shrivastava et al. (1988), technological system crises, e.g., accidents, are caused by two sets of failures (and their interactions): a) failures in the system’s components (or subsystems) and their interactions; and b) failures in the system’s environmental factors. The former refers to a complex set of Human, Organizational, and Technological (HOT) factors (and their interactions) which lead to the triggering event for the accident. The latter, according to the authors, includes Regulatory, Infrastructural, and Preparedness (RIP) failures in the system’s environment. Although RIP factors are equally important, the emphasis of this work has been on HOT factors. As such, the following conclusions and recommendations address only the HOT-related issues.

As also suggested by Shrivastava et al. (1988), technological “organizations are simultaneously systems of production and of destruction” (p. 297). This fact becomes even more critical for hazardous large-scale systems such as the ones discussed in this work. These are risky systems, and risky systems are full of failures. Inevitably, these failures will interact in unexpected ways, defeat the system’s safety devices, and bring down the system. This is what Perrow (1984) has called a “normal accident.” Using Perrow’s characterization of this type of industrial accident, the Bhopal catastrophe, as well as TMI and Chernobyl, could each well be called a 'normal accident': normal in the sense that the accident emerged from the inherent characteristics of the respective system itself and, because of the existing serious micro- and macroergonomical problems at both the design and operating stages, could have been neither prevented nor avoided.

Many scholars, such as Goldman (1986), Wilson (1987), Oberg (1988), and particularly Munipov (1990), have implicated pre-Glasnost Soviet secrecy and the ignorance of TMI’s lessons as root causes of the Chernobyl accident. TMI, Bhopal, Chernobyl, the previously mentioned SL1, and numerous other accidents will always remind us of George Santayana’s dictum that those who ignore history are forced to relive it. Continuing to operate hazardous systems with secrecy, complacency, or ignorance, failing to heed the occasional warnings (incidents), and failing to adopt a proactive, integrated, and total systems approach to design, operations, safety control, and risk management in complex technological systems will force us to relive horrors like Chernobyl and tragedies like Bhopal. These accidents were not isolated cases, but only manifestations — the tip of the iceberg — of the negative effects resulting from the unfortunate and all-too-common lack of human factors considerations in the design and operation of major industrial facilities and process plants throughout the world. No matter what the nature or level of the technology, and regardless of the plant’s location — in industrialized or developing countries — human factors-related issues are universally important. Their absence always causes inefficiencies, problems, accidents, and the loss of property and lives. These and many other past major industrial accidents could have been prevented if the critical issue of complex, large-scale technology utilization were not plagued by sheer political, economic, bureaucratic, and/or technical tunnel vision. In fact, Perrow (1986b) contends that catastrophes are possible where community and regional interests are not mobilized or are overridden by national policy, and where 'supra-organizational goals,' such as the economic health of an industry, are deemed vital.

In light of the discussion presented in Section IV, the following is recommended for the design stage:

In order to ensure the relative safety of future, to-be-designed, large-scale technological systems such as chemical processing plants, nuclear power plants, and refineries, a holistic, totally integrated, and multidisciplinary approach to system design, construction, staffing, and operation, based on sound scientific studies and human factors guidelines, is recommended. Total System Design (TSD) constitutes such an approach. TSD, according to Baily (1989), is a developmental approach based on a series of clearly defined development stages. TSD, which has been used extensively for computer-based systems development at AT&T Bell Laboratories, implies that, from the inception of the plan, equal and adequate consideration should be given to all major system components (i.e., human, organizational, and technological). [The system development process, therefore, is partitioned into a series of meaningfully related groups of activities called stages, each of which contains a set of design activities and its accompanying human factors activities.]

Moreover, as was demonstrated by TMI, in addition to independent and isolated problems at the workstation (interface), job (task), and organizational (communication) levels, there was a serious lack of cohesive processes of data collection, integration, and coordination. Logically, information is gathered from the interfaces (at the workstation site), analyzed according to the operators’ stipulated job descriptions (at the job level), and passed through the organizational communication network (according to the organizational structure) to the appropriate team members responsible for decision making. Thus, this continuous process in the control room of large-scale technological systems needs a cohesive and integrated framework for: 1) information gathering from the interfaces (at the workstation site); 2) analysis according to the operators’ stipulated job descriptions (at the job level); and 3) passage through the organizational communication network (according to the organizational structure) to the appropriate team members responsible for decision making (Meshkati, 1991).
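Purely as an illustrative sketch, and not as part of Meshkati’s (1991) framework itself, this three-level flow can be outlined in a few lines of Python; every name, parameter, and threshold below is hypothetical and chosen only to make the workstation-job-organization sequence concrete.

    # Minimal sketch of the three-level information flow described above; hypothetical names.
    from dataclasses import dataclass

    @dataclass
    class Reading:
        workstation: str   # the interface the datum came from
        parameter: str
        value: float

    def gather(readings):
        # 1) Workstation (interface) level: collect the raw readings.
        return list(readings)

    def analyze(readings, is_abnormal):
        # 2) Job (task) level: screen readings against the operator's task criteria.
        return [f"{r.workstation}: {r.parameter} = {r.value}" for r in readings if is_abnormal(r)]

    def route(findings, chain):
        # 3) Organizational (communication) level: pass findings to the
        #    team members responsible for decision making.
        return {"recipients": chain, "findings": findings}

    readings = [Reading("panel_A", "tank_pressure_psi", 20.0),
                Reading("panel_B", "tank_temperature_C", 15.0)]
    findings = analyze(gather(readings),
                       lambda r: r.parameter == "tank_pressure_psi" and r.value > 10.0)
    print(route(findings, ["shift supervisor", "plant manager"]))

The point of the sketch is simply that the three levels form one continuous chain: a break at any link (a missing gauge, an unclear job description, a blocked communication path) interrupts the flow of information to the decision makers.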

The early participation of all related and needed disciplines, e.g., human factors, in technological system design and development is also strongly recommended. This mandates and encourages interdisciplinary dialogue among engineers, managers, and human factors and safety specialists. [The need for such a multidisciplinary approach to nuclear safety has also been emphasized recently in all of the articles in the Special Section on Human Factors in Nuclear News (June 1990), the publication of the American Nuclear Society.]

At the operating stage, and in the short run, efforts should immediately be initiated for the close examination of human operators’ physical and psychological needs, capabilities, and limitations in the contexts of the plant’s normal and emergency operation. This should be coupled with thorough analyses of critical workstations and their design features, job demands and operators’ mental workload (during normal as well as emergency situations), the emergency response system, organizational characteristics, training needs, supervisory systems, etc.

The above is a long-overdue action; however, it constitutes only a necessary step toward ensuring the safety of complex, large-scale technological systems. To make it sufficient, in the long run, we need much more commitment, communication, and cooperation among those who could make these systems safer — government and regulatory agencies, plant manufacturers and managers, unions, and the human factors and other concerned research communities. We need an overall paradigm shift in dealing with the safety and operation of complex technologies. We need more institutionalized interaction among all stakeholders in the public and private sectors. Above all, we need the genuine and real dedication of all parties, not rhetoric or public relations ploys, for this collective effort. As professed by the late Nobel physicist Richard Feynman (1986, p. F-1), in the context of another complex technological system’s accident, the Space Shuttle Challenger explosion:

For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.

References:

  • Agarwal, A., Merrifield, J., and Tandon, R. (1985). No Place to Run: Local Realities and Global Issues of the Bhopal Disaster. New Market, Tennessee: Highlander Research and Education Center.
  • Baily, R.W. (1989). Human performance engineering: Using human factors/ergonomics to achieve computer system usability (2nd ed.). Englewood Cliffs, New Jersey: Prentice Hall.
  • Baldissera, A. (1988). Incidenti Anormali: Una Critica Della Teoria Degli Incidenti Tecnologici di C. Perrow (Abnormal incidents: A critique of Perrow’s theory of technological incidents), `Reference Paper,’ Presented at the International Conference on Joint Design of Technology, Organization and People Growth, Venice, “Scuola Grande di San Rocco,” October 12-14.
  • Baum, A. (1988, April). Disasters, natural and otherwise. Psychology Today, 57-60.
  • Baum, A., Gatchel, R., and Schaeffer, M. (1983). Emotional, Behavioral, and physiological effects of chronic stress at Three Mile Island. Journal of Consulting and Clinical Psychology, 51, 656-672.
  • Bennett, L.L. (1987). Nuclear power programmes in developing countries: Promotion & Financing. IAEA Bulletin, 29(4), 37-40.
  • Bordewich, F. (1987, March). The lessons of Bhopal. The Atlantic, 30-33.
  • Bowonder, B., Kasperson, J.X., and Kasperson, R.E. (1985). Avoiding future Bhopals. Environment, 27(7), 6-37.
  • The Chernobyl accident and its consequences (1988). London: United Kingdom Atomic Energy Authority.
  • Cherns, A.B. (1962). In A.T. Welford (Ed.), Society: Problems and Methods of Study. London: Routledge and Kegan Paul.
  • The Comptroller General Report to the Congress of the United States (1980). Three Mile Island: The Most Studied Nuclear Accident in History. Washington, D.C.: The United States General Accounting Office.
  • Csik, B.J. and Schenk, K. (1987). Nuclear power in developing countries: Requirements & Constraints. IAEA Bulletin, 29(2), 39-42.
  • Davidson, L. and Baum, A. (1986). Chronic stress and posttraumatic stress disorder. Journal of Consulting and Clinical Psychology, 54, 303-308.
  • Everest, L. (1985). Beyond the poison cloud: Union Carbide’s Bhopal massacre. Chicago: Banner Press.
  • Feynman, R.P. (1986). Personal Observations on reliability of Shuttle. In the Report of the Presidential Commission on the Space Shuttle Challenger Accident (Vol. II). Washington, D.C.: U.S. Government Printing Office.
  • Gale, R.P. and Hauser, T. (1988). Final Warning: The Legacy of Chernobyl. New York: Warner Books.
  • Goldman, M.L. (1986, July). Keeping the cold war out of Chernobyl. Technology Review, 18-19.
  • International Confederation of Free Trade Unions (ICFTU) (1985). The Trade Union Report on Bhopal. Bruxelles, Belgium: ICFTU.
  • Jenkins, R. (1990, July 18-31). Bhopal: Five years later. In These Times, 12-13.
  • Kemeny, J. (1980). Saving the American democracy: The lessons of Three Mile Island. Technology Review, 83(7), June/July, 65-75.
  • Kleindorfer, P.R. and Kunreuther, H.C. (Eds.) (1987). Insuring and managing hazardous risks: From Seveso to Bhopal and beyond. New York: Springer.
  • Krishnan, U. (1987, June 2). Ergonomics/Human Factors: Engineering applications – a review. Paper presented at a lecture session at the Center for Ergonomics of Developing Countries (CEDC), Lulea University of Technology, Sweden, (citation is by the author’s permission).
  • MacKenzie, D. (1985, March 28). Design failings that caused Bhopal disaster. New Scientist, 3-4.
  • Meshkati, N. (1988). An integrative model for designing reliable technological organizations: The role of cultural variables. Invited position paper for the World Bank Workshop on Safety Control and Risk Management in Large-Scale Technological Operations, World Bank, Washington, D.C., October 18-20.
  • Meshkati, N. (1989a). An etiological investigation of micro- and macroergonomic factors in the Bhopal disaster: Lessons for industries of both industrialized and developing countries. International Journal of Industrial Ergonomics, 4, 161-175.
  • Meshkati, N. (1989b). Critical Issues in the Safe Transfer of Large-Scale Technological Systems to the Third World: An Analysis and Agenda for Research. Invited position paper for the World Bank Workshop in Risk Management (in Large-Scale Technological Operations), Organized by the World Bank and the Swedish Rescue Services Board, Karlstad, Sweden, November 6-11, 1989.
  • Meshkati, N. (1989c). Technology transfer to developing countries: a tripartite micro- and macro ergonomic analysis of human-organization-technology interfaces. International Journal of Industrial Ergonomics, 4, 101-115.
  • Meshkati, N. (1989d). Self-Organization, Requisite Variety, and Cultural Environment: Three Links of a Safety Chain to Harness Complex Technological Systems. Invited position paper for the World Bank Workshop in Risk Management (in Large-Scale Technological Operations), Organized by the World Bank and the Swedish Rescue Services Board, Karlstad, Sweden, November 6-11, 1989.
  • Meshkati, N. (1991). Integration of workstation, job, and team structure design in complex human-machine system: A framework. To be published in the International Journal of Industrial Ergonomics.
  • Mitchell, C.M. (1987). GT-MSOCC: A domain for research on human-computer interaction and decision aiding in supervisory control systems. IEEE Transactions on Systems, Man, and Cybernetics, 17(1), 553-572.
  • Mitroff, I.I. (1988, Winter). Crisis management: Cutting through the confusion. Sloan Management Review, 15-20.
  • Munipov, V.M. (1990). Human engineering analysis of the Chernobyl accident. Unpublished manuscript. Moscow: The USSR Scientific and Research Institute of Industrial Design (VNIITE).
  • Naschi, G. (1987). Engineering aspects of severe accidents, with reference to the Seveso, Mexico City, and Bhopal Cases. In P.R. Kleindorfer and H.C. Kunreuther (Eds.). Insuring and managing hazardous risks: From Seveso to Bhopal and beyond. New York: Springer-Verlag.
  • Nuclear Safety Review for 1987. Vienna, Austria: International Atomic Energy Agency.
  • Nuclear Safety Review 1989. Vienna, Austria: International Atomic Energy Agency.
  • Oberg, J.E. (1988). Uncovering Soviet disasters: Exploring the limits of Glasnost. New York: Random House.
  • Perrow, C. (1981). Normal accident at Three Mile Island. Society, July/August, 17-26.
  • Perrow, C. (1984). Normal accidents. New York: Basic Books, Inc.
  • Perrow, C. (1986a). Complex organizations: A critical essay (3rd ed.). New York: Random House.
  • Perrow, C. (1986b). The habit of courting disaster. The Nation, October 11, 1986.
  • President’s Commission on the Accident at Three Mile Island (1979). The need for change: The legacy of TMI. Report of the President’s Commission on the Accident at Three Mile Island.
  • Prince-Embury, S. and Rooney, J. (1988, December). Psychological symptoms of residents in the aftermath of the Three Mile Island nuclear accident and restart. The Journal of Social Psychology, 128(6), 779-790.
  • Ramberg, B. (1987, Winter). Learning from Chernobyl. Foreign Affairs, 304-328.
  • Rasmussen, J. (1985). Trends in human reliability analysis. Ergonomics, 28(8), 1185-1195.
  • Rasmussen, J. (1986). Information processing and human-machine interaction: An approach to cognitive engineering. New York: North-Holland.
  • Rasmussen, J. (1987). The definition of human error and a taxonomy for technical system design. In J. Rasmussen, K. Duncan, and J. Leplat (Eds.). New Technology and Human Error. New York: John Wiley & Sons.
  • Rasmussen, J. (1989). Human error and the problem of causality in analysis of accidents. Invited paper for Royal Society meeting on Human Factors in High Risk Situations, 28-29 June, 1989, London, England.
  • Rasmussen, J. and Rouse, W.B. (Eds.) (1981). Human detection and diagnosis of system failures. New York: Plenum.
  • Rasmussen, J., Duncan, K., and Leplat, J. (Eds.) (1987). New Technology and Human Error. New York: John Wiley & Sons.
  • Reason, J. (1988). Resident Pathogens and Risk Management. Invited position statement presented at the World Bank Workshop on Safety Control and Risk Management, Washington, D.C., October 18-21, 1988.
  • Reason, J. (1990). Human error. New York: Cambridge University Press.
  • Rogovin, M. (1980). Three Mile Island: A report to the commission and to the public, Vol. 1. Washington, D.C.: U.S. Nuclear Regulatory Commission.
  • Senders, J. (1980, April). Is there a cure for human error? Psychology Today, 52-62.
  • Shrivastava, P. (1987). Bhopal: Anatomy of a Crisis. Cambridge, MA: Ballinger Publishing Company.
  • Shrivastava, P., Mitroff, I.I., Miller, D., and Miglani, A. (1988). Understanding industrial crises. Journal of Management Studies, 25(4), 285-303.
  • Smets, H. (1985). Compensation for exceptional environmental damage caused by industrial activities. Paper presented at the Conference on Transportation, Storage and Disposal of Hazardous Materials, IIASA, Laxenburg, Austria, 1-5 July.
  • Soviet Report on the Chernobyl Accident (1986). Data prepared for the International Atomic Energy Agency Expert Conference, 25-29 August, 1986, Vienna, Austria, (translated from the Russian), Washington, D.C.: Department of Energy, NE-40, August 17, 1986.
  • Speth, J.G. (1984). Statement published in The Implications of the Industrial Disaster in Bhopal, India. Transcripts of the Hearing before the Subcommittee on Asian and Pacific Affairs of the Committee on Foreign Affairs, House of Representatives, Ninety-Eighth Congress, December 12, 1984, Washington, D.C.: U.S. Government Printing Office.
  • Summary Report on the Post-Accident Review Meeting on the Chernobyl Accident (1986). Vienna, Austria: International Atomic Energy Agency, Safety Series # 75-INSAG-1.
  • Thompson, T. (1964). Accidents and destructive tests. In T.J. Thompson and J.G. Beckerley (Eds.). The technology of nuclear reactor safety. Cambridge, MA: The MIT Press.
  • Union Carbide Corporation (1985). Bhopal Methyl Isocyanate incident investigation team report. Danbury, Connecticut: Union Carbide Corporation.
  • Uth, H.J. (1988). Can Bhopal Happen in the Federal Republic of Germany? (Some Aspects on Managing High Risk Industries). Paper presented at the International Conference on Industrial Risk Management and Clean Technologies, organized by the United Nations Industrial Development Organization (UNIDO) in cooperation with the International Association for Clean Technology (IACT), 13-17 November, 1988, Vienna, Austria.
  • Weick, K.E. (1987). Organizational culture as a source of high reliability. California Management Review, Vol. XXIX, no. 2, 112-126.
  • Weir, D. (1987). The Bhopal syndrome. San Francisco: Sierra Club Books.
  • Wilson, R. (1987, June 26). A visit to Chernobyl. Science, 236, 1636-1640.