Energy combination under study

By Associated Press


A demonstration project along the Missouri River might be one result of a study on the possibility of integrating hydroelectric power and wind-generated power, according to Sen. John Thune.

The study by the Western Area Power Administration is expected to be released soon.

The idea comes from a wind-hydro project that was conducted on the Columbia River in Washington, where Thune said managers have successfully integrated the use of river dams and wind turbines. When the wind is strong, less hydropower is used; when the wind is weak, more hydropower is employed.
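The balancing rule described above can be sketched in a few lines. This is a hypothetical illustration only, not WAPA's or any operator's actual dispatch logic; the function name and all numbers are invented, and real hydro dispatch must also respect reservoir levels, environmental flow limits and ramp rates.

```python
# Toy dispatch sketch (illustrative only): hydro output is adjusted hour by
# hour to fill whatever demand the wind does not cover.

def hydro_dispatch(demand_mw, wind_mw, hydro_capacity_mw):
    """Return hourly hydro output: demand minus wind, clipped to [0, capacity]."""
    schedule = []
    for demand, wind in zip(demand_mw, wind_mw):
        hydro = min(max(demand - wind, 0), hydro_capacity_mw)
        schedule.append(hydro)
    return schedule

# Windy hour -> little hydro; calm hour -> hydro runs at or near capacity.
demand = [500, 500, 500]
wind = [400, 100, 0]
print(hydro_dispatch(demand, wind, hydro_capacity_mw=450))  # [100, 400, 450]
```

In the calm hour the hydro capacity cap binds, which is exactly the situation Thune describes later: when hydro generation is low, the shortfall has to be bought elsewhere at a premium.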

"When we learned about that, we thought it made a lot of sense for South Dakota because of our mainstem dams, and the fact that 56 percent of our energy in South Dakota is generated from that hydro source," said Thune, R-S.D. "Earlier this week, I discussed that study with WAPA and they informed me that the Missouri River is a potential location for coordination and they recommended we demonstrate a project along the Missouri."

WAPA is an agency within the U.S. Department of Energy that sells electricity to cities, tribes and various agencies. When it cannot meet demand through its traditional power sources, such as the dams along the Missouri River, it is forced to purchase power elsewhere.

Thune said WAPA has been urged to conduct such a study on the Missouri River, but the process was bogged down because "it became much more complicated than they anticipated."

Along with gauging the state's wind potential on an hour-by-hour basis, there is the ever-present issue of energy transmission, Thune said. Transmission has been an ongoing roadblock to wind energy projects in South Dakota.

"I think WAPA's concern has to do with the excess capacity that would be required to transmit energy generated by wind," Thune said. "And since they're the only show in town in some parts of South Dakota that has transmission, it's always a question of whether they will have to pass on additional costs to their ratepayers to open up their lines."

The senator said he hopes the release of the study shows that WAPA understands the possibilities that exist with wind power in South Dakota.

"I think the time when (a wind-hydro integration) will really work is in times when you have particularly low hydro generation," he said. "When they're not generating (power) on the dam, they have to buy it someplace else and they buy it at a premium. If you can complement hydro and wind, you hopefully can figure out how to drive prices down."

Related News

Independent power project announced by B.C. Hydro now in limbo

Siwash Creek Hydroelectric Project faces downsizing under a BC Hydro power purchase agreement, with run-of-river generation, high grid interconnection costs, First Nations partnership, and surplus electricity from Site C reshaping clean energy procurement.

 

Key Points

A downsized run-of-river plant in BC, co-owned by Kanaka Bar and Green Valley, selling power via a BC Hydro PPA.

✅ Approved at 500 kW under a BC Hydro clean-energy program

✅ Grid interconnection initially quoted at $2.1M

✅ Joint venture: Kanaka Bar and Green Valley Power

 

A small run-of-river hydroelectric project recently selected by B.C. Hydro for a power purchase agreement may no longer be financially viable.

The Siwash Creek project was originally conceived as a two-megawatt power plant by the original proponent Chad Peterson, who holds a 50-per-cent stake through Green Valley Power, with the Kanaka Bar Indian Band holding the other half.

The partners were asked by B.C. Hydro to trim the capacity back to one megawatt, but by the time the Crown corporation announced its approval, it agreed to only half that — 500 kilowatts — under its Standing Offer clean-energy program.

“Hydro wanted to charge us $2.1 million to connect to the grid, but then they said they could reduce it if we took a little trim on the project,” said Kanaka Bar Chief Patrick Michell.

The revenue stream for the band and Green Valley Power has been halved to about $250,000 a year. The original cost of running the $3.7-million plant, including financing, was projected to be $273,000 a year, according to the Kanaka Bar economic development plan.

“By our initial forecast, we will have to subsidize the loan for 20 years,” said Michell. “It doesn’t make any sense.”

The Kanaka Band has already invested $450,000 in feasibility, hydrology and engineering studies, with a similar investment from Green Valley.

Last March, B.C. Hydro announced it would pursue purchase agreements with five First Nations projects — including Siwash Creek — spanning hydro, solar and wind energy, as two new generating stations were being commissioned at the time. A purchase agreement allows proponents to sell electricity to B.C. Hydro at a set price.

However, at least ten other “shovel-ready” clean energy projects may be doomed while B.C. Hydro completes a review of its own operations and its place in the energy sector, including B.C.’s future power needs.

With the 1,100-megawatt Site C Dam planned for completion in 2024, B.C. Hydro now projects it will have a surplus of electricity until the early 2030s.

Even if British Columbians put 300,000 electric vehicles on the road over the next 12 years, the province will require only 300 megawatts of new capacity, the company said.

A long-term surplus could effectively halt all small-scale clean energy development, according to Clean Energy B.C.

“(B.C. Hydro) dropped their offer down to 500 kilowatts right around the time they announced their review,” said Michell. “So we filled out the paperwork at 500 kilowatts and (B.C. Hydro) got to make its announcement of five projects.”

In the next few weeks, Kanaka and Green Valley will discuss whether they can move forward with a new financial model or shelve the project, he said.

B.C. Hydro declined to comment on the rationale for downsizing Siwash Creek’s power purchase agreement.

The Kanaka Bar Band already successfully operates a 49.9-megawatt run-of-river plant on Kwoiek Creek in partnership with Innergex Renewable Energy.

 


Turning thermal energy into electricity

Near-Field Thermophotovoltaics captures radiated energy across a nanoscale gap, using thin-film photovoltaic cells and indium gallium arsenide to boost power density and efficiency, enabling compact Army portable power from emitters via radiative heat transfer.

 

Key Points

A nanoscale TPV method capturing near-field photons for higher power density at lower emitter temperatures.

✅ Nanoscale gap boosts radiative transfer and usable photon flux

✅ Thin-film InGaAs cells recycle sub-band-gap photons via reflector

✅ Achieved ~5 kW/m2 power density with higher efficiency

 

With the addition of sensors and enhanced communication tools, providing lightweight, portable power has become even more challenging. Army-funded research has demonstrated a new approach to turning thermal energy into electricity that could provide compact and efficient power for Soldiers on future battlefields.

Hot objects radiate light in the form of photons into their surroundings. The emitted photons can be captured by a photovoltaic cell and converted to useful electric energy. This approach to energy conversion is called far-field thermophotovoltaics, or FF-TPVs, and has been under development for many years; however, it suffers from low power density and therefore requires high operating temperatures of the emitter.

The research, conducted at the University of Michigan and published in Nature Communications, demonstrates a new approach, where the separation between the emitter and the photovoltaic cell is reduced to the nanoscale, enabling much greater power output than what is possible with FF-TPVs for the same emitter temperature.

This approach, which enables capture of energy that is otherwise trapped in the near field of the emitter, is called near-field thermophotovoltaics, or NF-TPV, and uses custom-built photovoltaic cells and emitter designs ideal for near-field operating conditions.

This technique exhibited a power density almost an order of magnitude higher than that of the best previously reported near-field TPV systems, while also operating at six times higher efficiency, paving the way for future near-field TPV applications, according to Dr. Edgar Meyhofer, professor of mechanical engineering, University of Michigan.

"The Army uses large amounts of power during deployments and battlefield operations, and it must be carried by the Soldier or a weight-constrained system," said Dr. Mike Waits, U.S. Army Combat Capabilities Development Command's Army Research Laboratory. "If successful, in the future near-field TPVs could serve as more compact and higher efficiency power sources for Soldiers, as these devices can function at lower operating temperatures than conventional TPVs."

The efficiency of a TPV device is characterized by how much of the total energy transfer between the emitter and the photovoltaic cell is used to excite electron-hole pairs in the photovoltaic cell. While increasing the temperature of the emitter increases the number of photons above the band gap of the cell, the number of sub-band-gap photons that can heat up the photovoltaic cell needs to be minimized.

"This was achieved by fabricating thin-film TPV cells with ultra-flat surfaces, and with a metal back reflector," said Dr. Stephen Forrest, professor of electrical and computer engineering, University of Michigan. "The photons above the band-gap of the cell are efficiently absorbed in the micron-thick semiconductor, while those below the band-gap are reflected back to the silicon emitter and recycled."

The team grew thin-film indium gallium arsenide photovoltaic cells on thick semiconductor substrates, then peeled off the very thin semiconductor active region of each cell and transferred it to a silicon substrate.

All these innovations in device design and experimental approach resulted in a novel near-field TPV system.

"The team has achieved a record ~5 kW/m2 power output, which is an order of magnitude larger than systems previously reported in the literature," said Dr. Pramod Reddy, professor of mechanical engineering, University of Michigan.

Researchers also performed state-of-the-art theoretical calculations to estimate the performance of the photovoltaic cell at each temperature and gap size, and showed good agreement between the experiments and the computational predictions.

"This current demonstration meets theoretical predictions of radiative heat transfer at the nanoscale, and directly shows the potential for developing future near-field TPV devices for Army applications in power and energy, communication and sensors," said Dr. Pani Varanasi, program manager, DEVCOM ARL that funded this work.

 


Ireland and France will connect their electricity grids - here's how

Celtic Interconnector, a subsea electricity link between Ireland and France, connects EU grids via a high-voltage submarine cable, boosting security of supply, renewable integration, and cross-border trade with 700 MW capacity by 2026.

 

Key Points

A 700 MW subsea link between Ireland and France, boosting security, enabling trade, and supporting renewables.

✅ Approx. 600 km subsea cable from East Cork to Brittany

✅ 700 MW capacity; powers about 450,000 homes

✅ Financed by EIB, banks, CEF; Siemens Energy and Nexans

 

France and Ireland signed contracts on Friday to advance the Celtic Interconnector, a subsea electricity link that will allow the exchange of electricity between the two EU countries. It will be the first interconnector between continental Europe and Ireland.

Representatives of Ireland’s electricity grid operator EirGrid and France’s grid operator RTE signed financial and technical agreements for the high-voltage submarine cable. The countries’ respective energy ministers witnessed the signing.

European commissioner for energy Kadri Simson said:

In the current energy market situation, marked by electricity price volatility, and the need to move away from imports of Russian fossil fuels, European energy infrastructure has become more important than ever.

The Celtic Interconnector is of paramount importance as it will end Ireland’s isolation from the Union’s power system and ensure a reliable high-capacity link, improving the security of electricity supply and supporting the development of renewables in both Ireland and France.

EirGrid and RTE signed €800 million ($827 million) worth of financing agreements with Barclays, BNP Paribas, Danske Bank, and the European Investment Bank.

In 2019, the project was awarded a Connecting Europe Facility (CEF) grant worth €530.7 million to support construction works. The CEF program also provided €8.3 million for the Celtic Interconnector’s feasibility study, initial design and pre-consultation.

Siemens Energy will build converter stations in both countries, and Paris-based global cable company Nexans will design and install a 575-km-long cable for the project.

The cable will run between East Cork, on Ireland’s southern coast, and northwestern France’s Brittany coast and will connect into substations at Knockraha in Ireland and La Martyre in France.

The Celtic Interconnector, which is expected to be operational by 2026, will be approximately 600 km (373 miles) long and have a capacity of 700 MW, enough to power 450,000 households.

 


Which of the cleaner states imports dirty electricity?

Hourly Electricity Emissions Tracking maps grid balancing areas, embodied emissions, and imports/exports, revealing carbon intensity shifts across PJM, ERCOT, and California ISO, and clarifying renewable energy versus coal impacts on health and climate.

 

Key Points

An hourly method tracing generation, flows, and embodied emissions to quantify carbon intensity across US balancing areas.

✅ Hourly traces of imports/exports and generation mix

✅ Consumption-based carbon intensity by balancing area

✅ Policy insights for renewables, coal, health costs

 

In the United States, electricity generation accounts for nearly 30% of our carbon emissions. Some states have responded by setting aggressive renewable energy standards; others are hoping to see coal propped up even as its economics get worse. Complicating matters further, many regional grids are integrated, meaning power generated in one location may be exported and used in a different state entirely.

Tracking these electricity exports is critical for understanding how to lower our national carbon emissions. In addition, power from a dirty source like coal has health and environmental impacts where it's produced, and those costs aren't always paid by the parties using the electricity. Unfortunately, getting reliable figures on how electricity is produced and where it's used is challenging, leaving some of the best estimates with a time resolution of only a month.

Now, three Stanford researchers—Jacques A. de Chalendar, John Taggart, and Sally M. Benson—have greatly improved on that standard, managing to track power generation and use on an hourly basis. They found that, of the 66 grid balancing areas within the United States, only three have carbon emissions equivalent to the national average, and that imports and exports of electricity vary both seasonally and daily. The net results can be substantial: imported electricity increases the carbon intensity of California's power by 20%.

Hour by hour
To figure out the US energy trading landscape, the researchers obtained 2016 data for grid features called balancing areas. The continental US has 66 of these, providing much better spatial resolution than the larger grid subdivisions. This doesn't cover everything—several balancing areas in Canada and Mexico are tied in to the US grid—and some balancing areas are much larger than others. The PJM grid, serving Pennsylvania, New Jersey, and Maryland, for example, is more than twice as large as Texas' ERCOT, even though Texas produces and consumes the most electricity in the US.

Despite these limitations, it's possible to get hourly figures on how much electricity was generated, what was used to produce it, and whether it was used locally or exported to another balancing area. Information on the generating sources allowed the researchers to attach an emissions figure to each unit of electricity produced. Coal, for example, produces double the emissions of natural gas, which in turn produces more than an order of magnitude more carbon dioxide than the manufacturing of solar, wind, or hydro facilities. These figures were turned into what the authors call "embodied emissions" that can be traced to where they're eventually used.

Similar figures were also generated for sulfur dioxide and nitrogen oxides. Released by the burning of fossil fuels, these can both influence the global climate and produce local health problems.
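As a rough illustration of how a consumption-based intensity might be computed from such figures, consider a toy balancing area that generates most of its own power but imports some coal-heavy electricity. All emission factors and flows below are hypothetical round numbers, not the paper's data, and the function name is invented for this sketch.

```python
# Sketch of a consumption-based "embodied emissions" calculation for a single
# balancing area: imports are attributed at the exporter's carbon intensity.

def consumption_intensity(local_gen, imports):
    """local_gen: {source: (MWh, kg_CO2_per_MWh)}; imports: [(MWh, kg_CO2_per_MWh)].
    Returns kg CO2 per MWh consumed."""
    mwh = sum(g for g, _ in local_gen.values()) + sum(g for g, _ in imports)
    kg = sum(g * f for g, f in local_gen.values()) + sum(g * f for g, f in imports)
    return kg / mwh

local = {"hydro": (800, 24), "gas": (200, 490)}   # mostly clean local mix
imported = [(250, 625)]                           # coal-heavy imports
print(round(consumption_intensity(local, imported)))  # 219
```

Even though imports are only a fifth of consumption here, they carry more embodied emissions than all local generation combined, which is the kind of asymmetry the study reports for hydro-rich regions.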

Huge variation
The results were striking. "The consumption-based carbon intensity of electricity varies by almost an order of magnitude across the different regions in the US electricity system," the authors conclude. The low is the Bonneville Power grid region, which is largely supplied by hydropower; it has typical emissions below 100 kg of carbon dioxide per megawatt-hour. The highest emissions come in the Ohio Valley Electric region, where emissions clear 900 kg/MWh. Only three regional grids match the overall grid emissions intensity, although those include the very large PJM, ERCOT, and Southern Co balancing areas.

Most of the low-emissions power that's exported comes from the Pacific Northwest's abundant hydropower, while the Rocky Mountains area exports electricity with the highest associated emissions. That leads to some striking asymmetries. Local generation in the hydro-rich Idaho Power Company has embodied emissions of only 71 kg/MWh, while its imports, coming primarily from Rocky Mountain states, have a carbon content of 625 kg/MWh.

The reliance on hydropower also makes the asymmetry seasonal. Local generation is highest in the spring as snow melts, but imports become a larger source outside this time of year. As solar and wind can also have pronounced seasonal shifts, similar changes will likely be seen as these become larger contributors to many of these regional grids. Similar things occur daily, as both demand and solar production (and, to a lesser extent, wind) have distinct daily profiles.

The Golden State
California's CISO provides another instructive case. Imports represent less than 30% of its total electric use in 2016, yet California electricity imports provided 40% of its embodied emissions. Some of these, however, come internally from California, provided by the Los Angeles Department of Water and Power. The state itself, however, has only had limited tracking of imported emissions, lumping many of its sources as "other," and has been exporting its energy policies to Western states in ways that shape regional markets.

Overall, the 2016 inventory provides only a snapshot of the US grid, as plenty of trends are rapidly changing the country's emissions profile, including the rise of renewables and the widespread adoption of efficiency measures. The method developed here can, however, allow for annual updates, providing a much better picture of trends. That could be quite valuable for tracking things like how the rapid rise in solar power is altering the daily production of clean power.

More significantly, it provides a basis for more informed policymaking. States that wish to promote low-emissions power can use the information here to either alter the source of their imports or to encourage the sites where they're produced to adopt more renewable power. And those states that are exporting electricity produced primarily through fossil fuels could ensure that the locations where the power is used pay a price that includes the health costs of its production.

 


Purdue: As Ransomware Attacks Increase, New Algorithm May Help Prevent Power Blackouts

Infrastructure Security Algorithm prioritizes cyber defense for power grids and critical infrastructure, mitigating ransomware, blackout risks, and cascading failures by guiding utilities, regulators, and cyber insurers on optimal security investment allocation.

 

Key Points

An algorithm that optimizes security spending to cut ransomware and blackout risks across critical infrastructure.

✅ Guides utilities on optimal security allocation

✅ Uses incentives to correct human risk biases

✅ Prioritizes assets to prevent cascading outages

 

Millions of people could suddenly lose electricity if a ransomware attack just slightly tweaked energy flows on the U.S. power grid.

No single power utility company has enough resources to protect the entire grid, but maybe all 3,000 of the grid's utilities could fill in the most crucial security gaps if there were a map showing where to prioritize their security investments.

Purdue University researchers have developed an algorithm to create that map. Using this tool, regulatory authorities or cyber insurance companies could establish a framework for protecting the U.S. power grid that guides the security investments of power utility companies to parts of the grid at greatest risk of causing a blackout if hacked.

Power grids are a type of critical infrastructure: any network, whether physical like water systems or virtual like health care record keeping, that is considered essential to a country's function and safety. The biggest ransomware attacks in history have happened in the past year, affecting most sectors of critical infrastructure in the U.S., such as grain distribution systems in the food and agriculture sector and the Colonial Pipeline, which carries fuel throughout the East Coast.

With this trend in mind, Purdue researchers evaluated the algorithm in the context of various types of critical infrastructure beyond the power sector. The goal is for the algorithm to help secure any large, complex infrastructure system against cyberattacks.

"Multiple companies own different parts of infrastructure. When ransomware hits, it affects lots of different pieces of technology owned by different providers, so that's what makes ransomware a problem at the state, national and even global level," said Saurabh Bagchi, a professor in the Elmore Family School of Electrical and Computer Engineering and Center for Education and Research in Information Assurance and Security at Purdue. "When you are investing security money on large-scale infrastructures, bad investment decisions can mean your power grid goes out, or your telecommunications network goes out for a few days."

Protecting infrastructure from hacks by improving security investment decisions

The researchers tested the algorithm in simulations of previously reported hacks to four infrastructure systems: a smart grid, industrial control system, e-commerce platform and web-based telecommunications network. They found that use of this algorithm results in the most optimal allocation of security investments for reducing the impact of a cyberattack.

The team's findings appear in a paper presented at this year's IEEE Symposium on Security and Privacy, the premier conference in the area of computer security. The team comprises Purdue professors Shreyas Sundaram and Timothy Cason and former PhD students Mustafa Abdallah and Daniel Woods.

"No one has an infinite security budget. You must decide how much to invest in each of your assets so that you gain a bump in the security of the overall system," Bagchi said.

The power grid, for example, is so interconnected that the security decisions of one power utility company can greatly affect the operations of other electrical plants. If the computers controlling one area's generators don't have adequate security protection, then a hack to those computers could disrupt energy flow to another area's generators, forcing them to shut down.

Since not all of the grid's utilities have the same security budget, it can be hard to ensure that critical points of entry to the grid's controls get the most investment in security protection.

The algorithm that Purdue researchers developed would incentivize each security decision maker to allocate security investments in a way that limits the cumulative damage a ransomware attack could cause. An attack on a single generator, for instance, would have less impact than an attack on the controls for a network of generators, so investment should flow toward protecting those shared controls rather than any single generator.
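The intuition above can be sketched as a simple impact-weighted budget split. To be clear, this is not the Purdue algorithm (which models incentives and human biases game-theoretically); it is a minimal hypothetical illustration of why shared, high-impact assets should attract more of a fixed security budget, with all asset names and numbers invented.

```python
# Hypothetical sketch: split a fixed security budget across assets in
# proportion to the estimated damage a successful attack on each would cause.

def allocate_budget(impacts, budget):
    """impacts: {asset: estimated attack impact}; returns {asset: dollars}."""
    total = sum(impacts.values())
    return {asset: budget * impact / total for asset, impact in impacts.items()}

# Shared network controls dominate the impact estimate, so they dominate
# the allocation; each lone generator gets a proportionally small share.
impacts = {"generator_A": 10, "generator_B": 10, "network_controls": 80}
print(allocate_budget(impacts, budget=1_000_000))
# {'generator_A': 100000.0, 'generator_B': 100000.0, 'network_controls': 800000.0}
```

The study's behavioral finding is that humans tend not to allocate this way, spreading investment evenly even when told which asset is most vulnerable, which is what the tax-or-payment incentive scheme described below is meant to correct.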

Building an algorithm that considers the effects of human behavior

Bagchi's research shows how to increase cybersecurity in ways that address the interconnected nature of critical infrastructure but don't require an overhaul of the entire infrastructure system to be implemented.

As director of Purdue's Center for Resilient Infrastructures, Systems, and Processes, Bagchi has worked with the U.S. Department of Defense, Northrop Grumman Corp., Intel Corp., Adobe Inc., Google LLC and IBM Corp. on adopting solutions from his research. Bagchi's work has revealed the advantages of establishing an automatic response to attacks, leading to key innovations against ransomware threats, such as more effective ways to make decisions about backing up data.

There's a compelling reason why incentivizing good security decisions would work, Bagchi said. He and his team designed the algorithm based on findings from the field of behavioral economics, which studies how people make decisions with money.

"Before our work, not much computer security research had been done on how behaviors and biases affect the best defense mechanisms in a system. That's partly because humans are terrible at evaluating risk and an algorithm doesn't have any human biases," Bagchi said. "But for any system of reasonable complexity, decisions about security investments are almost always made with humans in the loop. For our algorithm, we explicitly consider the fact that different participants in an infrastructure system have different biases."

To develop the algorithm, Bagchi's team started by playing a game. They ran a series of experiments analyzing how groups of students chose to protect fake assets with fake investments. As in past studies in behavioral economics, they found that most study participants guessed poorly which assets were the most valuable and should be protected from security attacks. Most study participants also tended to spread out their investments instead of allocating them to one asset even when they were told which asset is the most vulnerable to an attack.

Using these findings, the researchers designed an algorithm that could work two ways: Either security decision makers pay a tax or fine when they make decisions that are less than optimal for the overall security of the system, or security decision makers receive a payment for investing in the most optimal manner.

"Right now, fines are levied as a reactive measure if there is a security incident. Fines or taxes don't have any relationship to the security investments or data of the different operators in critical infrastructure," Bagchi said.

In the researchers' simulations of real-world infrastructure systems, the algorithm successfully minimized the likelihood of losing assets to an attack that would decrease the overall security of the infrastructure system.

Bagchi's research group is working to make the algorithm more scalable and able to adapt to an attacker who may make multiple attempts to hack into a system. The researchers' work on the algorithm is funded by the National Science Foundation, the Wabash Heartland Innovation Network and the Army Research Lab.

Cybersecurity is an area of focus through Purdue's Next Moves, a set of initiatives that works to address some of the greatest technology challenges facing the U.S. Purdue's cybersecurity experts offer insights and assistance to improve the protection of power plants, electrical grids and other critical infrastructure.

 


Its Electric Grid Under Strain, California Turns to Batteries

California Battery Storage is transforming grid reliability as distributed energy, solar-plus-storage, and demand response mitigate rolling blackouts, replace peaker plants, and supply flexible capacity during heat waves and evening peaks across utilities and homes.

 

Key Points

California Battery Storage uses distributed and utility batteries to stabilize power, shift solar, and curb blackouts.

✅ Supplies flexible capacity during peak demand and heat waves

✅ Enables demand response and replaces gas peaker plants

✅ Aggregated assets form virtual power plants for grid support

 

Last month as a heat wave slammed California, state regulators sent an email to a group of energy executives pleading for help to keep the lights on statewide. “Please consider this an urgent inquiry on behalf of the state,” the message said.

The manager of the state’s grid was struggling to increase the supply of electricity because power plants had unexpectedly shut down and demand was surging. The imbalance was forcing officials to order rolling blackouts across the state for the first time in nearly two decades.

What was unusual about the emails was whom they were sent to: people who managed thousands of batteries installed at utilities, businesses, government facilities and even homes. California officials were seeking the energy stored in those machines to help bail out a poorly managed grid and reduce the need for blackouts.

Many energy experts have predicted that batteries could turn homes and businesses into mini-power plants that are able to play a critical role in the electricity system. They could soak up excess power from solar panels and wind turbines and provide electricity in the evenings when the sun went down or after wildfires and hurricanes, which have grown more devastating because of climate change in recent years. Over the next decade, the argument went, large rows of batteries owned by utilities could start replacing power plants fueled by natural gas.

But that day appears to be closer than earlier thought, at least in California, which leads the country in energy storage. During the state’s recent electricity crisis, more than 30,000 batteries supplied as much power as a midsize natural gas plant. And experts say the machines, which range in size from large wall-mounted televisions to shipping containers, will become even more important because utilities, businesses and homeowners are investing billions of dollars in such devices.

“People are starting to realize energy storage isn’t just a project or two here or there, it’s a whole new approach to managing power,” said John Zahurancik, chief operating officer at Fluence, which makes large energy storage systems bought by utilities and large businesses. That’s a big difference from a few years ago, he said, when electricity storage was seen as a holy grail — “perfect, but unattainable.”

On Friday, Aug. 14, the first day California ordered rolling blackouts, Stem, an energy company based in the San Francisco Bay Area, delivered 50 megawatts — enough to power 20,000 homes — from batteries it had installed at businesses, local governments and other customers. Some of those devices were at the Orange County Sanitation District, which installed the batteries to reduce emissions by making it less reliant on natural gas when energy use peaks.
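A rough sanity check on the "50 megawatts — enough to power 20,000 homes" figure: the 50 MW number is from the article, while the assumed average household draw of about 2.5 kW during peak hours is a common rule of thumb, not something the article states.

```python
# Sanity check on "50 MW = 20,000 homes".
# Assumption (not from the article): an average home draws
# roughly 2.5 kW during peak hours.

MW_SUPPLIED = 50
AVG_HOME_KW = 2.5  # assumed average peak draw per home

# Convert megawatts to kilowatts, then divide by per-home draw.
homes_powered = MW_SUPPLIED * 1000 / AVG_HOME_KW
print(f"{homes_powered:.0f} homes")  # -> 20000 homes
```

Under that assumption the arithmetic matches the article's figure exactly; a lower per-home estimate would make the 50 MW stretch further.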

John Carrington, Stem’s chief executive, said his company would have provided even more electricity to the grid had it not been for state regulations that, among other things, prevent businesses from selling power from their batteries directly to other companies.

“We could have done two or three times more,” he said.

The California Independent System Operator, which manages about 80 percent of the state's grid, has blamed the rolling blackouts on a confluence of unfortunate events that limited supply: a gas plant abruptly went offline, a lack of wind stilled thousands of turbines, and power plants in other states couldn't export enough electricity. (On Thursday, the grid manager urged Californians to reduce electricity use over Labor Day weekend because temperatures are expected to be 10 to 20 degrees above normal.)

But in recent weeks it has become clear that California's grid managers also made mistakes last month — missteps reminiscent of the energy crisis of 2000 and 2001, when millions of homes went dark and wholesale electricity prices soared.

Grid managers did not contact Gov. Gavin Newsom's office until moments before they ordered a blackout on Aug. 14. Had they acted sooner, the governor could have called on homeowners and businesses to reduce electricity use, something he did two days later. He could have also called on the State Department of Water Resources to provide electricity from its hydroelectric plants.

Weather forecasters had warned about the heat wave for days. The grid operator could have developed a plan to harness the electricity in numerous batteries across the state that largely sat idle while grid managers and large utilities such as Pacific Gas & Electric scrounged around for more electricity.

That search culminated in frantic last-minute pleas from the California Public Utilities Commission to the California Solar and Storage Association. The commission asked the group to get its members to discharge batteries they managed for customers like the sanitation department into the grid. (Businesses and homeowners typically buy batteries with solar panels from companies like Stem and Sunrun, which manage the systems for their customers.)

“They were texting and emailing and calling us: ‘We need all of your battery customers giving us power,’” said Bernadette Del Chiaro, executive director of the solar and storage association. “It was in a very last-minute, herky-jerky way.”

At the time of the blackouts on Aug. 14, battery power to the electric grid climbed to a peak of about 147 megawatts, according to data from California I.S.O. After officials asked for more power the next day, that supply shot up to as much as 310 megawatts.

Had grid managers and regulators done a better job coordinating with battery managers, the devices could have supplied as much as 530 megawatts, Ms. Del Chiaro said. That supply would have exceeded the amount of electricity the grid lost when the natural gas plant, which grid managers have refused to identify, went offline.

Officials at California I.S.O. and the public utilities commission said they were working to determine the “root causes” of the crisis after the governor requested an investigation.

Grid managers and state officials have previously endorsed the use of batteries. The utilities commission last week approved a proposal by Southern California Edison, which serves five million customers, to add 770 megawatts of energy storage in the second half of 2021, more than doubling its battery capacity.

And Mr. Zahurancik’s company, Fluence, is building a 400 megawatt-hour battery system at the site of an older natural gas power plant at the Alamitos Energy Center in Long Beach. Regulators this week also approved a plan to extend the life of the power plant, which was scheduled to close at the end of the year, to support the grid.
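A megawatt-hour rating measures stored energy, not instantaneous power, so a system's duration depends on how fast it discharges. The 400 MWh figure is from the article; the 100 MW discharge rate below is a hypothetical assumption for illustration only.

```python
# Energy (MWh) vs. power (MW): how long a storage system can
# sustain a given output. 400 MWh is the article's figure; the
# 100 MW discharge rate is an assumed value, not stated there.

ENERGY_MWH = 400
DISCHARGE_MW = 100  # hypothetical discharge rate

hours = ENERGY_MWH / DISCHARGE_MW
print(f"{hours:.1f} hours at {DISCHARGE_MW} MW")  # -> 4.0 hours at 100 MW
```

Halving the assumed discharge rate would double the duration, which is why storage developers quote both power and energy ratings.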

But regulations have been slow to catch up with the rapidly developing battery technology.

Regulators and utilities have not answered many of the legal and logistical questions that have limited how batteries owned by homeowners and businesses are used. How should battery owners be compensated for the electricity they provide to the grid? Can grid managers or utilities force batteries to discharge even if homeowners or businesses want to keep them charged up for their own use during blackouts?

During the recent blackouts, Ms. Del Chiaro said, commercial and industrial battery owners like Stem's customers were compensated at rates similar to those paid to businesses for not using power during periods of high electricity demand. But residential customers were not paid and acted "altruistically," she said.

