Introduction
The earthquake that shut down a Virginia nuclear power plant on Aug. 23, cracking floor tiles and a containment building and shifting highly radioactive spent fuel storage casks, was more than twice as strong as the reactors were designed to withstand.
Hurricane Irene also struck with unexpected intensity, threatening nuclear plants along the East Coast and shutting down a Maryland reactor after metal siding blew into high-voltage lines on a transformer.
Earlier this year, other events considered improbable shook supposedly unshakeable nuclear plants: Japan’s quake and tsunami, which unleashed disaster at Fukushima; historic floods along the Missouri River; and unusually destructive tornadoes spinning through the South. One storm toppled transmission towers, knocking out power to the Browns Ferry nuclear plant in Alabama, which relied on emergency backup systems for five days.
Repeatedly, unanticipated events have tested nuclear power plants in new ways — and challenged the assumptions of those who operate them and oversee them for safety. Just how well-equipped are U.S. nuclear plants to handle the unexpected? How prepared are operators and regulators to forestall disaster in the face of a threat beyond their expectations?
Nobody knows for sure.
That’s because, according to nuclear operators and regulators, the unexpected is not supposed to happen. The agency overseeing the safety of nuclear power plants in the U.S., the Nuclear Regulatory Commission, requires safeguards only against events considered likely — not real-world outliers such as freak earthquakes or massive storms. A review by iWatch News of NRC records, along with interviews with regulators and industry insiders, shows that safety often depends on a set of assumptions and calculations that give utility companies wide latitude to decide how to secure reactors. The companies save money and can avoid unnecessary protections, but flawed assumptions, together with meek enforcement, can undermine reactor safety, endangering the public and the environment.
The NRC and nuclear industry enthusiastically have embraced the approach despite warning signs of its limitations – and concern that basing regulations on calculations of relative risk, rather than imposing a set of specific rules, could lead to unsafe outcomes. In a 2000 General Accounting Office survey of NRC staff, 60 percent of those responding predicted the new risk-informed regulatory approach “will reduce plant safety margins.”
Day-to-day operation and oversight of the nation’s 104 nuclear reactors rely heavily on assumptions and statistical calculations: The earthquake won’t be stronger than seismologists predict. Floodwaters won’t rise higher than hydrologists estimate. Tornadoes won’t knock out off-site power supplies on which nuclear plants depend. Switches and valves will work.
The unexpected won’t happen.
Such calculations deem it unlikely that terrorists will gain access to poorly secured spent reactor fuel storage facilities. Or that seemingly minor transmission line maintenance will accidentally trigger a blackout affecting millions in the West and forcing two reactors to shut down automatically – another unexpected event, just last week.
Regulation by predicting what’s probable has been evolving since 1975, when the NRC and reactor owners began moving from a traditional rulebook form of regulation — involving specific requirements, for instance, about equipment and procedures — toward what is known as probabilistic risk assessment, or PRA. Such “risk-based” regulation gives the companies running nuclear plants wider leeway in determining how they are going to operate safely. Reactor owners and government safety officials argue that the old “one-size-fits-all” rulebook approach is expensive and inefficient.
At the heart of the strategy are questions: How likely is something to go wrong? What would be the consequences?
Under probabilistic risk assessment, the goal is to reduce the odds of something going wrong to one-in-a-million. Achieving those odds, however, requires complex strings of assumptions that are reliable only if everything performs as planned.
“Combining the probability of an accident with its consequences gives us a measure of risk,” the NRC explains in documents outlining the approach. “For instance, the consequences of a large meteor striking your house would be devastating, but the risk is low because the probability of such an accident is very small.”
In other documents, the NRC acknowledges that life sometimes doesn’t go as planned. “The uncertainty in the result is partly because reality is more complex than any computer model, partly because modelers do not know everything, and partly because of chance.”
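In rough terms, the arithmetic behind that definition can be sketched in a few lines of code. The figures below are invented for illustration and come from no NRC model; the point is simply that a catastrophic but rare event can score as a smaller “risk” than a mundane, frequent one, and that the answer is only as good as the probability estimate fed into it.

```python
# Toy illustration of the NRC's "risk = probability x consequence" framing.
# All numbers here are invented for the example; none come from an NRC model.

def risk(probability_per_year: float, consequence: float) -> float:
    """Expected harm per year: likelihood of the event times its consequence."""
    return probability_per_year * consequence

# A devastating but vanishingly rare event -- the NRC's meteor-on-your-house example
meteor_strike = risk(probability_per_year=1e-9, consequence=1_000_000)

# A modest but far more frequent event
burst_pipe = risk(probability_per_year=0.01, consequence=5_000)

print(f"meteor strike: {meteor_strike:.4f} expected units of harm per year")  # 0.0010
print(f"burst pipe:    {burst_pipe:.1f} expected units of harm per year")     # 50.0
```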
As iWatch News reported in May, many of the nation’s reactors are expected to change from a rulebook-based fire code to risk-based fire regulations — a transition watchdogs say will weaken safety. Fires are the most likely threat to reactor safety, occurring at U.S. reactors an average of 10 times a year. This new approach to fire safety will be built around estimates of the likelihood of an event occurring, allowing reactor owners flexibility to respond accordingly.
Other industries also are shifting to risk-based approaches, among them chemical manufacturing, health care and commercial aviation. Modeling the most likely events allows regulators and companies to focus on problems with the greatest potential to cause harm.
The appeal: flexibility and lower costs for industry, noted Ulku Oktem, a senior research fellow at the Risk Management and Decision Processes Center at the University of Pennsylvania’s Wharton School, which studies low-probability, high-impact accidents.
The underlying weakness of the approach – faulty assumptions or underestimates – is coming into sharper focus.
“Government cannot foresee every possible probability,” said Oktem, a chemical engineer who has developed environmental health and safety programs for industry.
For instance, while computer models might be able to predict when a pump may wear out, they don’t do such a good job foretelling when a utility might be cutting corners on maintenance. Nor are they good at predicting human burnout or simple errors.
“Unlike a valve that does X or Y, fairly simple binary possibilities, humans are much more complicated,” said M.V. Ramana, a physicist at the Woodrow Wilson School of Public and International Affairs at Princeton who studies nuclear safety. How they respond, especially in an emergency, is much more difficult to estimate, he said.
The cascading series of events at the Fukushima reactors in March, which melted reactor cores and released large amounts of radiation, has renewed concerns in this country and abroad about the ability of reactors and their operators to handle the unexpected.
On Wednesday, the NRC’s commissioners will meet to discuss which recommendations from the agency’s Near-Term Japan Task Force should be implemented promptly. Among the task force’s recommendations is a re-evaluation of the regulatory balance between what is known as “defense-in-depth” (the old rulebook approach to regulations) and the “risk-informed” approach, in the aftermath of events at Fukushima in March.
“The Fukushima meltdown was a long-distance warning to the U.S. nuclear industry to bolster its safety systems,” said Rep. Edward Markey, D-Mass., a senior member of the House Energy and Commerce Committee. “The Virginia earthquake is now our local 911 call to stop delaying the implementation of stricter safety standards.”
The nuclear industry and regulators, responding to questions from iWatch News, downplayed concerns about the hazards of recent disasters and weaknesses in safety standards.
“Every single plant involved in the flooding, tornadoes, earthquake and hurricane either continued to operate safely or shut down safely,” said Scott Burnell of the NRC. “U.S. reactors are designed to withstand severe events and maintain their ability to keep the public safe, and they have done so.”
A similar assessment comes from the Nuclear Energy Institute, an industry group. Nuclear plants are “well designed, sturdily constructed and well operated by highly trained professionals to ensure that they can safely handle extreme events, regardless of the cause,” NEI’s Steven Kerekes told iWatch News. “Public health and safety has been maintained throughout this year’s challenges from nature, as historically has been the case.”
Numbers sometimes provide an incomplete picture, of course. One of the earliest, most worrisome signs of the pitfalls of safeguarding against what’s probable rather than what might happen occurred in 1979. Peter Bradford, then an NRC commissioner, requested estimates of the likelihood of a major accident at a U.S. nuclear plant. In a memo, the NRC’s Applied Statistics Branch predicted that it would take 400 reactor-operating years before a major accident occurred. In other words: It was exceedingly unlikely.
Nineteen days later, Three Mile Island had a meltdown.
Addressing the expected – and the unexpected
Assumptions underlie even the most routine of reactor operations: The electrical circuitry will work when it’s needed. The valves will open when they should. Reactor operators won’t make mistakes under pressure.
These predictions are reliable — as long as every contingency has been identified and real-life accident scenarios play out according to script.
The problem? Sometimes they don’t.
“Assuming anything is risky,” says David Lochbaum, director of the nuclear safety project of the Union of Concerned Scientists.
It’s especially risky at the nation’s reactors, which are aging, complex industrial facilities surrounding a core of deadly radioactive fuel. Many reactors already are running at power levels higher than they were originally designed for.
In coming years, with 20-year license extensions, many will be operating well beyond their engineered 40-year lifespans. Safety reviews to ensure that they can function for those additional decades have been cursory at best.
Now, new questions are arising, for instance about the casks used to store spent fuel. The assumption has been that the casks are the safest storage technology until a permanent repository can be built. The discovery that an earthquake could cause the concrete casks, which weigh 115 tons each, to move even slightly may require a re-evaluation of storage technology. According to the NRC’s Burnell, this is the first time the agency has received reports of an earthquake causing casks to move.
Or, as iWatch News reported last month, about the assumption that off-site power will be available when needed.
Before the transition to probabilistic risk assessment, the NRC relied on a safety strategy called “defense-in-depth,” a rulebook approach to designing and operating nuclear facilities that sets out detailed design and performance blueprints. As regulators believed then, “the key is creating multiple independent and redundant layers of defense to compensate for potential human and mechanical failures so that no single layer, no matter how robust, is exclusively relied upon.”
“This approach addresses the expected as well as the unexpected,” former NRC Chairman Nils J. Diaz explained at a 2004 homeland security summit. “It actually accommodates the possibility of failures.”
Critics argue that the changeover from rules to assumptions is eroding safety margins by giving too much latitude to plant owners and regulators. The requirements, they say, should be prescriptive, not discretionary.
“Prescriptive means you tell people exactly what they have to do,” said Victor Gilinsky, a former NRC commissioner and occasional critic of NRC oversight. “‘Risk-based’ means that you can do something different if you can show through probabilistic analysis that the overall chance of the system failing meets the basic standard.”
“In principle, if done by dispassionate analysts, they could be equivalent. In practice, the so-called risk-based approach is more flexible and more difficult to criticize.” As for the public getting a “dispassionate” analysis, Gilinsky adds: “Draw your own conclusions.”
Computer models now are used to calculate the likelihood of things going wrong at a reactor. If the computer says the chances of an electrical malfunction or a pipe leak are low, maintenance can drop down the list of priorities.
High risk, high priority; low risk, low priority. In theory, it makes sense because ranking risk allows reactor owners and regulators to focus attention on the most serious problems.
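The ranking itself is mechanically simple, as the sketch below suggests. The component names and the failure and consequence figures are hypothetical, not drawn from any plant’s risk model; the point is that whatever the model underestimates quietly slides down the maintenance list.

```python
# Hypothetical illustration of risk-ranked maintenance priorities.
# Component names, failure estimates and consequence scores are invented.

components = [
    # (name, estimated failures per year, consequence score on a 1-10 scale)
    ("emergency diesel generator",   0.05, 9),
    ("spent-fuel pool level sensor", 0.02, 6),
    ("residual heat removal valve",  0.01, 10),
    ("turbine-hall lighting panel",  0.05, 1),
]

# Rank by risk = estimated frequency x consequence; highest risk first.
ranked = sorted(components, key=lambda c: c[1] * c[2], reverse=True)

for name, frequency, consequence in ranked:
    print(f"{name:32s} risk score = {frequency * consequence:.2f}")
```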
“The danger is, you put too much importance in that [modeling] information,” said Princeton’s Ramana. “By emphasizing it too much, you might miss some classes of accidents.”
“The multiple problems with the probabilistic risk assessment method suggest that any conclusions about overall accident probabilities derived from its use are far from dependable,” Ramana wrote recently in an article published in the Bulletin of the Atomic Scientists.
During the changeover to the new risk-based strategy for fire protection, which will take several more years, inspectors’ enforcement authority will be limited, a fact NRC Chairman Gregory B. Jaczko called “more than disappointing — it is unacceptable.” In June, iWatch News reported that Jaczko scolded his colleagues in written comments for not dealing more aggressively with the threat of fire at the nation’s nuclear plants.
Reactor owners can develop their own probabilistic risk assessment data to seek changes in their operating licenses. According to the NRC, the system improves the regulatory process in three ways: “Foremost, through safety decision making enhanced by the use of PRA insights; through more efficient use of agency resources; and through a reduction in unnecessary burdens on licensees.”
Although the Nuclear Energy Institute, the industry’s trade group, has promoted the adoption of risk-based safety standards, the organization acknowledges the new approach does have its limits.
Referring to the new risk-based fire codes, Alexander Marion, NEI’s vice president for nuclear operations, told iWatch News the system “is not perfect.” “[N]o methodology of that sort is perfect and can 100 percent guarantee a perfect prediction of an event. But it gives you a reasonably good estimate of what could happen, and then you deal with the results of that evaluation.”
Critics argue that risk-based standards as currently applied by the nuclear industry and those who regulate it have become a “single-edged sword,” cutting in only one direction: Lowering margins of safety through relaxed regulations and enforcement.
“It’s a game they’re using to lower the regulatory burden,” said Edwin Lyman, a physicist with the Union of Concerned Scientists.
Nuclear safety experts say tough enforcement can make even a risk-based approach more reliable. For that to happen, regulators must stand ready to penalize safety plans that contain unrealistic assumptions or flawed emergency procedures.
“That’s crucial,” said Princeton’s Ramana. “Everything we know about nuclear power tells us it is crucial. You can never be too safe.”
But a review of NRC enforcement records indicates the agency is anything but tough.
In the 12 months ending June 30 of this year, the NRC handed out hundreds of safety citations for the nation’s 104 operating reactors. All but 13 citations were green, the lowest category, deemed to be of “very low safety significance.” (Specific details on security violations are not made public “to ensure that potentially useful information is not provided to a possible adversary,” according to the NRC.)
Of those 13, 11 were “white” (low to moderate safety significance), one was “yellow” (substantial safety significance) and one was “red” (high safety significance), issued in May to Browns Ferry for an incident last year involving a failed valve on a critical reactor cooling system.
That sends precisely the wrong message to reactor owners and operators, nuclear watchdogs argue. Too many violations are deemed to be low-risk, based on assumptions that nothing serious will happen. Before long, they say, everybody gets careless, relying on computer estimates and lenient regulators.
“The process is flawed,” said Lyman. “They’ve gone too far with it.”
The effects of weak policing
A discovery last January at a nuclear plant 90 miles southwest of Houston serves as an illustration of the pitfalls of a safety strategy based on assumptions. A worker at the South Texas Project, as the plant is known, discovered that 26 switches had not been tested, as required.
These switches were no ordinary devices; they are critical to allowing control room operators to remotely shut down the reactor. In a fire or other emergency, they allow operators forced to flee the control room to deploy a backup system to run the reactor from other parts of the plant.
A functioning remote shutdown system is mandatory under the plant’s technical specifications and is to be routinely tested to ensure it is working.
But, as the worker discovered, the 26 switches were missing from the surveillance checklist, meaning there was no readily accessible record of when they had been tested — or even when they last worked. The twin reactors began operations in 1988 and 1989.
Rather than confirm that the switches were operational at the start of each shift as required, a supervisor apparently just relied on prior reports, according to an NRC investigation.
“The failure to recognize that risk-significant equipment is in a potentially inoperable condition and, as such, may not be able to perform its specified safety function, would not be recognized and accounted for by operators,” the Nuclear Regulatory Commission noted in an inspection report issued May 9.
In other words, in an emergency, the equipment needed to bring the reactor to a safe shutdown might not work.
That would render worthless a risk-based safety plan that relied on an operable backup system. All the modeling involving the backup system could lead to the wrong conclusion.
Although the NRC determined that the failure to routinely test the switches was a violation of federal safety rules, it decided that the problem had “very low safety significance.” The South Texas violation was rated “green,” the NRC’s lowest category.
Another illustration of the perils of flawed assumptions involved the Browns Ferry nuclear plant in Alabama, operated by the Tennessee Valley Authority, which in May was hit with a “red” or “high safety significance” violation — one of just five red citations issued by the Nuclear Regulatory Commission over the past decade.
In that case, a decades-old valve on a residual heat removal system at Browns Ferry Unit 1 failed to open last October, meaning that one of the plant’s emergency cooling systems would not have functioned as designed. The last time the valve was known to have worked as required was in mid-March 2009.
As with the switches at South Texas, workers at Browns Ferry presumed the valve would open if called upon in an emergency.
“You just assume it will do its job,” TVA spokesman Ray Golden told the Associated Press in June.
Reactors too complex for figuring what’s likely?
Modeling what can go wrong entails mapping the many paths that lead to disaster, which in a nuclear plant can involve multiple, seemingly trivial events cascading toward a catastrophic outcome, such as a meltdown. Calculations of these scenarios are now the work of sophisticated computer programs.
Marino di Marzo, chairman of the department of fire protection engineering at the University of Maryland, is a consultant to the NRC and a proponent of PRAs. He described the effort of figuring out what is most likely to go wrong this way: “You create fault trees. You ask: What can fail?” And if it fails, what happens next, and next and next?
If a pump failed three times in the past 10 years and 50 reactors use the same pump, it is possible to calculate the likelihood of a particular failure happening. The goal, explained di Marzo, is to reduce the risk of that occurring to one in a million. That’s where adding backup systems or requiring alternative procedures fits in.
The goal is to reduce the modeled risk to that seemingly improbable level. “One in a million doesn’t happen, so you can afford [to take] that risk,” he says. “That is what informs the regulations.”
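Di Marzo’s arithmetic can be sketched as follows. The pump history is his hypothetical; the backup layers, and the crucial assumption that the failures are independent of one another, are added here purely for illustration.

```python
# Sketch of the failure-rate arithmetic di Marzo describes.
# The pump history (3 failures across 50 reactors over 10 years) is his
# hypothetical; the backup layers and the assumption that failures are
# independent are added for illustration only.

failures, reactors, years = 3, 50, 10

# Rough chance that the pump fails in any given reactor-year
pump_failure = failures / (reactors * years)        # 0.006

# A fault-tree "AND gate": the accident sequence requires the pump AND each
# backup to fail. If the failures really are independent, the probabilities multiply.
pump_plus_one_backup = pump_failure ** 2            # 3.6e-05
pump_plus_two_backups = pump_failure ** 3           # about 2.2e-07, below one in a million

print(f"pump alone:       {pump_failure:.1e}")
print(f"plus one backup:  {pump_plus_one_backup:.1e}")
print(f"plus two backups: {pump_plus_two_backups:.1e}")
```

The one-in-a-million result holds only if the failures truly are independent; a single event that disables the pump and its backups at the same time breaks the multiplication.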
But that assumes every possible scenario has been factored in — a goal that’s difficult if not impossible to achieve, said Ramana, offering Fukushima as Exhibit A. Nobody planned for an earthquake, tsunami and resulting loss of emergency power. What were the odds of such a sequence of events?
“Reactors are very complex beasts,” noted Ramana. “When you have such a complex thing, how can you be sure you’ve enumerated all the possible things that can go wrong? Usually we don’t get it right.”
Where data is scarce, regulators supplement calculations with knowledgeable opinion – what the industry calls “expert elicitation,” in which experts are asked to speculate.
“It’s a process of trying to generate statistically significant data from opinion,” said the Union of Concerned Scientists’ Edwin Lyman, who has participated in expert elicitations. He paraphrased one question: “What’s the chance somebody will push that lever in the wrong direction?” His response? “Oh, I’d guess 20 percent.” Presumably, that was factored into the computer model.
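Folding such a guess into the model is mechanically trivial, which is part of what troubles critics. The sketch below is purely illustrative: the 20 percent figure is Lyman’s paraphrased example, and the rest of the accident sequence and its probabilities are invented.

```python
# Illustrative only: how an elicited guess enters an event-sequence calculation.
# The 20 percent operator-error figure is Lyman's paraphrased example; the
# other numbers and the sequence itself are invented for this sketch.

valve_sticks_per_year = 1e-4        # initiating event, estimated from operating history
alarm_fails = 1e-3                  # chance the alarm then fails, from equipment test data
operator_pulls_wrong_lever = 0.20   # "expert elicitation": one expert's guess

# The elicited guess is multiplied in exactly like the measured rates, so the
# final number is only as trustworthy as the guess behind it.
sequence_frequency = valve_sticks_per_year * alarm_fails * operator_pulls_wrong_lever
print(f"estimated frequency of the sequence: {sequence_frequency:.1e} per year")  # 2.0e-08
```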
Lyman is especially concerned about the use of probabilistic risk assessment data to set technical specifications for new reactor designs the NRC is evaluating for possible licensing. Relatively little is known about how a new design will hold up in actual operating situations.
“A big part of PRA is validation,” says Lyman. “Can you do risk-informed regulation on a new reactor when it’s never been built or operated anywhere and the calculations are uncertain? We say no.”
For a variety of reasons, many safety experts acknowledged in interviews with iWatch News that underlying PRA data can be shaky.
“In the early years, there were no computers, so people had to use a simple back-of-the-envelope calculation,” explained physicist Ramana. “As a result, engineers had a much better intuitive understanding of how things function and they erred on the side of safety because they weren’t sure of their calculations. So, they’d say, let’s build an additional level of safety into our calculations.
“With computers, these things get compartmentalized. The people who write the [risk assessment] software aren’t nuclear engineers and don’t have a good idea of what they’re programming.”
If the unforeseen happens, how well-trained are control room personnel to respond to severe emergencies that could damage the reactor’s core?
The answer is not very reassuring.
On June 6, as part of its post-Fukushima assessment, the NRC released findings on the level of training and familiarity among control room personnel with what are known as Severe Accident Mitigation Guidelines. The voluntary guidelines, which were put in place in the late 1990s, are meant to help operators contain or reduce the impact of accidents that damage the reactor core. The guidelines are not subject to NRC inspection or enforcement.
In reality, they are the human override of a faulty computer model.
Each plant owner is expected “to perform an analysis designed to discover instances of particular vulnerability to core melt or unusually poor containment performance given a core-melt accident,” according to NRC recommendations. Based on that analysis, the guidelines become the game plan in an emergency that threatens a meltdown.
Only 61 percent of the 104 reactors periodically drill operators on the guidelines, according to the NRC. At a number of reactors, training on the guidelines may happen just once in six years. At some plants, guidelines have not been updated for a decade or more.
The NRC plans to incorporate the findings in its long-term review of possible changes in licensing and safety rules, including a review of the balance between the “defense-in-depth” and “risk-informed” approaches, according to the NRC’s Burnell.
As a result, NRC staff will spend 18 months reviewing that balance and make recommendations to the commission about any changes.