10 Years After Record Blackout, Is U.S. Any Better Prepared? (Op-Ed)

Mike Jacobs is a senior energy analyst for the Climate & Energy program of the Union of Concerned Scientists (UCS). This article is adapted from a post that originally appeared on the UCS blog, The Equation. Jacobs contributed this article to LiveScience's Expert Voices: Op-Ed & Insights.

Electricity grid operators knew hours before the Northeast power failure at 4 p.m. on August 14, 2003, that things were going badly. One called his wife, predicting accurately that he would have to work late, and another complained it was "not a good day in the neighborhood."

The largest blackout to hit North America left 50 million people without power and largely without communications, but some engineers knew that the blackout could have been prevented.

As the official report from the crisis makes clear, troubles were building up during the day with computers, communications and coordination. The August 2003 blackout was the culmination of control systems that were out of service, inflexible schedules at generators and a grid operator who was unable to require necessary flexibility from market-based electricity providers.

With three aging power plants shut down the day before, the conditions were ripe for trouble. When an overloaded power line sagged from excess heat and touched a tree limb south of Cleveland at about 2 p.m., it short-circuited. Computer, communications and coordination capabilities were insufficient to save the day and prevent the blackout that resulted two hours later.

Improving power-grid reliability

The 2003 blackout had many lessons, but for the industry and regulators, the big one was: Make grid-reliability rules mandatory and enforceable. But in addition to top-down reliability controls, regulators are now also accommodating innovations and flexibility that were needed back on that day in August 2003. These kinds of reforms also provide lower costs, easier adoption of renewable energy and greater reliability.

The system-wide blackouts that have hit large areas in the past demonstrate that region-wide systems generally lack adequate regional-scale coordination. Recent Federal Energy Regulatory Commission (FERC) orders address parochial boundaries that limit flexibility, and improve electricity transfers and cooperation across boundaries.

The FERC reforms, which increase flexibility and improve reliability, also improve the integration of renewable energy and make better use of efficiency and response to demand. A more diversified energy supply with more distributed power generation inherently helps reduce U.S. vulnerability to blackouts.

The greatest innovation in the management of the power grid in the past 10 to 15 years is the regional Independent System Operator, or ISO. The ISO coordinates grid planning and operations for the area served by its member companies. Generators and utilities interact through the ISO to coordinate and transact business. When mature, an ISO also consolidates otherwise fragmented practices over a wider area, creating immediate savings through shared reserves, and it aggregates and smooths the variability of wind energy.

Independent System Operators were not as mature in 2003 as they are today. Still, in the western United States (with the exception of California), ISOs do not exist and reforms have been incredibly slow.

Another promising development is a voluntary "energy imbalance market," or EIM. Either a comprehensive Independent System Operator or a more narrowly conceived automated imbalance market such as an EIM provides the much-needed close coordination between the power grid's wires and its generators. With modern communications and controls, operators in such systems can recognize unused flexibility within the power grid and make the power system more reliable, more economical and better suited to absorbing renewable energy.

As climate change makes conditions for power generation more challenging, and fossil-fired plants are affected by hotter weather and droughts, more flexibility and unanticipated energy trades between power providers will be needed to avoid blackouts.

Just in the past year, a change has been ordered that will increase reliability and flexibility in the power grid. FERC has ordered a change to an old practice, applying to both big ISOs and small utilities, that required energy transfers between grids to be scheduled in fixed, unchangeable one-hour blocks. This reduces the flexibility that may be available from a neighboring utility or from the generator supplying power. It also offers no way to address the steadily changing demand for power during the morning and evening rush hours on the grid, known as "ramps." In Order 764, FERC required that transmission schedules be changeable at 15-minute intervals, a rule designed to reduce the costs of integrating renewable energy.
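To make that difference concrete, here is a minimal, purely illustrative sketch (the demand figures are hypothetical, not drawn from any filing) comparing how a fixed one-hour interchange block and 15-minute schedules track a morning ramp; whatever the schedule fails to follow must be covered by regulation reserves or other flexible resources.

```python
# Illustrative sketch only: hourly vs. 15-minute interchange scheduling
# against a hypothetical morning ramp (all numbers are made up).

demand_mw = [1000, 1050, 1100, 1150]  # demand in each 15-minute interval of one hour

# One-hour block: a single flat schedule, here set to the hour's average demand.
hourly_schedule = sum(demand_mw) / len(demand_mw)
hourly_mismatch = sum(abs(d - hourly_schedule) for d in demand_mw)

# 15-minute scheduling: the schedule can be reset each interval,
# so it can follow the ramp far more closely.
quarter_hour_schedule = demand_mw  # assume each interval is scheduled to its forecast
quarter_mismatch = sum(abs(d - s) for d, s in zip(demand_mw, quarter_hour_schedule))

print(f"Mismatch with a one-hour block:    {hourly_mismatch:.0f} MW-intervals")
print(f"Mismatch with 15-minute schedules: {quarter_mismatch:.0f} MW-intervals")
# The gap left by the coarse hourly schedule is what regulation reserves
# (or, with intra-hour scheduling, simple schedule changes) must cover.
```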

Economists at FERC and in the nascent energy-storage industry also recognized that generators have little incentive to change their output when instructed to provide flexibility. The reliance on large, inflexible steam generators (typically coal and nuclear) has made the grid less adaptable.

To recognize superior performance in balancing supply and demand, FERC has adopted a new "Pay for Performance" compensation approach. This has drawn additional and faster response capabilities from existing power producers, customer-owned equipment and even new storage assets (such as flywheels and batteries).

While much of the attention and controversy about inter-regional cooperation in the electric utility sector is focused on long-term planning for new transmission, or the reliability of imported power, there are great improvements that the United States can make in the operation of the existing system. The nation can adapt controls and rules that recognize the benefits of coordination, greater information sharing and reduced costs.

Sometimes it takes lightning, or a blackout, to wake up and re-evaluate the way we have been doing things. The 2003 Northeast Blackout had that effect, though we are only halfway through the changes we know we need.

The specific needs of Europe and North America

What causes blackouts in North America and Europe is not what gets the most attention. The power grid systems, not a shortage of power plants, are the problem. Take a look at the 13 major power outages that have occurred across the globe over the years, and see that the problems we face are not because we aren't building enough power plants.

Only one of the outages, in July 2012 in India, was due to more electricity demand than could be supplied by existing resources. In the industrialized economies of North America and Europe, people more often lose power due to a subtle and difficult challenge: the electrical grid is prone to system failures and needs modernization.

For decades, the concern over power-grid reliability focused on ensuring that an adequate number of power plants were built. Even today, most of the policy attention, financial resources and advance planning are devoted to building enormous new power plants. This is a holdover from past decades when growth in electricity use was high and the time it took to build a power plant was growing. But when one looks at what has caused major blackouts, an insufficient number of power plants was a factor only in the India example, where people are still being added to the Age of Electricity and services gradually reach more communities.

In North America and Europe, we have a different set of concerns. Load growth is barely 1 percent per year, and governments have made significant investments in new generation and technologies to save energy and use renewable energy.

Still, every year the regulators and the utility industry make a number of announcements comparing the expected demand and the expected supply. In many states, this reporting is required by law. The numbers in these comparisons are easy math. When the numbers are reviewed, everyone feels assured that the power supply is large enough to meet demand, or that the investments are coming and the required bills for this assurance will be paid. Even Texas, with its energy crunch, has 150 new plants in the planning process.
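As a rough illustration of that "easy math" (the figures below are hypothetical, not taken from any actual regulatory filing), the comparison typically reduces to a reserve margin: how far expected supply exceeds expected peak demand.

```python
# Hypothetical resource-adequacy comparison of the kind described above.
expected_peak_demand_mw = 74_000  # forecast summer peak (made-up figure)
expected_capacity_mw = 84_000     # existing plus planned capacity (made-up figure)

reserve_margin = (expected_capacity_mw - expected_peak_demand_mw) / expected_peak_demand_mw
print(f"Reserve margin: {reserve_margin:.1%}")  # about 13.5% with these figures

# Planners compare this against a target margin (often around 15 percent).
# Note that the calculation says nothing about wires, coordination, or storms,
# which is why a comfortable margin does not by itself prevent blackouts.
```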

Unfortunately, it is unexpected disturbances, usually on the wires, that cause almost every blackout. Storms, droughts, and fires knock out whole sections of the system; control errors and flubbed operations trigger shutdowns; coordination failures cause overloads. Transmission reliability is much more complex than the adequacy of the generation fleet.

2013 versus 2003

The August 2003 Northeast Blackout resulted from a combination of key monitoring systems being offline, generators not responding as anticipated or requested, and then an overloaded line sagging from excess heat and short-circuiting to a tree. As was obvious to the experts, this blackout could have been prevented if grid-reliability rules, including tree trimming, had been mandatory and the system's needs for communications and cooperation had been enforceable.

While the attention of utilities and politicians has been on the largest power plants, the practices for running the system were neglected in 2003. Coordination between utilities, adoption of flexible schedules, and use of accurate forecasts allow the transmission system to work reliably. Responsibility had been divided by old territorial boundaries between utility companies, even as the system was becoming more regional.

The creation and strengthening of the regional Independent System Operators has brought great progress inside the regions they serve. However, the utility industry continues to struggle to improve power flows across boundaries, information sharing and cooperation. These reforms are vital to increasing reliability and lowering costs.

In the summary of 13 power outages below, listed in reverse chronological order, notice how the weather and the operations of the grid caused the blackouts. Coordination and better information, rather than more old-fashioned power plants, are what the record shows is needed for more reliable power-grid systems.

October 2012, Hurricane Sandy: Flooding damaged vulnerable equipment, and downed trees cut power to 8.2 million people in 17 states, the District of Columbia, and Canada, many for two weeks. The impacts from sea level rise and flooding are leading to a re-evaluation of local design criteria.

July 30 and 31, 2012, Northern India: High demand, inadequate supply coordination and transmission outages led to a repeating power system collapse that affected hundreds of millions of people across an area that is home to half of India's population. Four key transmission lines had been taken offline in the previous days. Mid-summer demand in the north exceeded local supply, making imports and transfers from the west vital. Excessive demand tripped a transmission line, and within seconds ten additional transmission lines tripped. The conditions and the failure repeated the following day. A review found poor coordination of outages and regional support agreements.

June 2012, Derecho: A windstorm damaged trees and equipment and cut power to approximately 4.2 million customers across 11 Midwest and Mid-Atlantic states and the District of Columbia. Widespread tree clearing and line restoration efforts in many cases took 7 to 10 days.

October 2011, Northeast U.S.: A record early snowstorm brought down trees and wires. Outage restoration could only follow the removal of snow and fallen trees. More than three million customers in Mid-Atlantic and New England states were without power, many for 10 days.

September 8, 2011, California-Arizona: The transmission failure was set up by Southern California's heavy dependence on power imports from Arizona, an ongoing problem. Hot weather after the end of the summer season, as determined by the power-grid engineering schedule, conflicted with generation and transmission outages planned for maintenance. Then two weaknesses — operations planning and real-time situational awareness — left operators vulnerable to a technician's mistake switching major equipment. This outage lasted 12 hours, affecting 2.7 million people.

August 28, 2003, London: Two cables failed, and a leaky transformer could not handle the resulting flows. A section of the city and its southern suburbs, totaling 250,000 customers, was without power from 6:30 p.m. to 7 p.m., when electricity providers arranged for alternate circuits.

August 14, 2003, Northeastern U.S. and Ontario: A transmission system failed for many reasons, all of them seen in major outages years before. Information was incomplete and misunderstood; inadequate tree trimming caused a short circuit; and operators lacked coordination. System imbalances and overloads seen early in the day were not corrected due to a lack of coordination enforcement. Fifty million people across eight states and Ontario were without power for up to four days.

June 25, 1998, Ontario and North-Central U.S.: A lightning storm in Minnesota initiated a transmission failure. A 345-kV line was struck by lightning, and the underlying lower-voltage lines overloaded. Soon, lightning struck a second 345-kV line. Cascading transmission line disconnections continued until the entire northern Midwest was separated from the Eastern power grid, forming three isolated "islands" with power. In the upper Midwest, Ontario, Manitoba and Saskatchewan, 52,000 people saw outages of up to 19 hours.

August 10, 1996, West Coast: Hot weather and inadequate tree trimming set up a transmission collapse. Through the afternoon, five power lines in Oregon and nearby Washington short-circuited on trees. This tripped off 13 hydro-power turbines operated by BPA at McNary Dam on the Columbia River. Blame fell on inadequate tree-trimming practices, improper operating studies and incorrect instructions to dispatchers. Approximately 7.5 million customers lost power in seven western U.S. states, two Canadian provinces and Baja California, Mexico, for periods ranging from several minutes to six hours.

July 2-3, 1996, West Coast: The transmission outage began when a 345-kV line in Idaho overheated and sagged into a tree. Then a protective device on a parallel transmission line incorrectly tripped. Other relays tripped two Wyoming coal plants. For 23 seconds the system remained in precarious balance, until a 230-kV line between Montana and Idaho tripped. Remedial action separated the system into five pre-engineered islands to minimize customer outages. Two million people in the U.S., Canada and Mexico lost power for minutes to hours.

December 22, 1982, West Coast: Over 5 million people in the West lost power after high winds knocked over a major 500-kV transmission tower. The tower fell into a parallel 500-kV line tower, and the failure mechanically cascaded and caused three additional towers to fail on each line. When those fell, they hit two 230-kV lines crossing under the 500-kV lines. From that point, coordination schemes failed, and communication problems delayed control instructions. Backup plans failed because the coordination devices were not set for such a severe disturbance. Data displayed to operators was unclear, preventing corrective actions.

July 13, 1977, New York City: Transmission failures were caused by a lightning strike that shut down lines and tripped the Indian Point No. 3 nuclear-power generating plant offline. When a second lightning strike caused the loss of two more 345-kV lines, New York City's last connection to the northwest was lost. Power surges, overloads and human error soon followed. Nine million people in New York City suffered outages, and looting, for up to 26 hours. Poor coordination, malfunctioning safety equipment and limited awareness of conditions contributed to the outage.

November 9, 1965, Northeast U.S. and Ontario: The transmission system failed due to a mistaken setting on a protective device near Niagara Falls. Improper coordination caused four more lines to disconnect. Imbalances continued to swing until power failed for 30 million people. The outage lasted up to 13 hours.

This article first appeared as "Not a Good Day in the Neighborhood" on the blog The Equation. The views expressed are those of the author and do not necessarily reflect the views of the publisher. This version of the article was originally published on Live Science.
