Idaho Power: Equipment failure led to wildfire

By Associated Press


Investigators have concluded that an equipment failure on Idaho Power Co. electricity lines ignited a wildfire in which one woman died and nearly 20 homes were destroyed or damaged in a southeastern Boise neighborhood.

A defective piece of heated metal hardware "was a factor in this accidental and devastating fire," LaMont Keen, the utility's chief executive officer, told a news conference.

"To the extent we are found to be responsible," Keen said, "we will fulfill our obligations."

Several events played a role in the wildfire, including the loose piece of hot metal that fell from the utility pole to the ground and ignited the blaze, according to investigators. Also contributing were high winds and an additional electrical current caused when a tree fell into power lines shortly before the fire.

The Boise Fire Department, Idaho Power and the federal Bureau of Land Management participated in the investigation.

The fire was reported around 7 p.m. August 25 in a vacant field of sagebrush and dry grasses. Winds gusted up to 50 mph, and within two minutes the blaze had spread to a nearby ridge and roared toward a line of homes in a Boise subdivision, Fire Chief Dennis Doan said.

"This was a terrible accident caused by high winds," Doan said.

Fire officials have said 10 homes were destroyed.

After the blaze was brought under control, fire crews discovered the body of Mary Ellen Ryder in one of the destroyed houses. Ryder, 56, was a professor of English and linguistics at Boise State University, and Boise police have determined she was likely a victim of the wildfire.

Idaho Power owns the land beneath the utility line where the fire started. The piece of equipment that malfunctioned is called a connector and it was last inspected in 2006, said Lisa Grow, the company's vice president of delivery engineering and operations.

"These types of devices rarely fail," Grow said. "It's quite rare."

Idaho Power, the state's largest utility, routinely inspects power lines every three years, Grow said. "The loose connector would not have been visible even if an inspector had seen it the day before the fire," she said.

Earlier this year, fire crews helped residents in the Boise neighborhood where the fire occurred clear "defensible spaces" around their homes, urging them to cut down sagebrush and dry grasses.

But fire prevention measures proved futile against the fast-moving winds, which carried the fire to the line of homes overlooking the ridge and then onto houses across the street.

Related News

Report calls for major changes to operation of Nova Scotia's power grid

Nova Scotia Energy Modernization Act proposes an independent system operator, focused energy regulation, coal phase-out by 2030, renewable integration, transmission upgrades, and competitive market access to boost consumer trust and grid reliability across the province.

 

Key Points

Legislation to create an independent system operator and energy regulator, enabling coal phase-out and renewable integration.

✅ Transfers grid control from Nova Scotia Power to an ISO

✅ Establishes a focused energy regulator for multi-sector oversight

✅ Accelerates coal retirement, renewables build-out, and grid upgrades

 

Nova Scotia's electricity grid is headed for a significant overhaul, following a government announcement that will strip the current electric utility of its control over grid access. The move is part of a broader initiative to help the province achieve its ambitious energy objectives, including the cessation of coal usage by 2030.

The announcement came from Tory Rushton, the Minister of Natural Resources, who highlighted the recommendations of the Clean Electricity Task Force's report, which its authors say would make the electricity system more accountable to Nova Scotians. The report suggests the creation of two distinct entities: an independent system operator for energy system planning and a separate body for energy regulation.

Minister Rushton expressed the government's agreement with these recommendations, stating plans to introduce a new Energy Modernization Act in the next legislative session.

Under the proposed changes, Nova Scotia Power, a privately-owned entity, will retain its operational role but will relinquish control over the electricity grid. This responsibility will shift to an independent system operator, aiming to foster competitive practices essential for phasing out coal—currently a major source of the province’s electricity.

Additionally, the existing Utility and Review Board, which recently approved a 14% rate increase despite political opposition, will be renamed the Nova Scotia Regulatory and Appeals Board, reflecting a broader mandate beyond energy. Its electricity-related duties will be transferred to the newly proposed Nova Scotia Energy Board, which will oversee several energy sectors, including electricity, natural gas, and retail gasoline.

The task force, led by Alison Scott, a former deputy energy minister, and John MacIsaac, an ex-executive of Nalcor Energy, was established by the province in April 2023 to determine what the electrical system needs in order to meet Nova Scotia's environmental goals.

Minister Rushton praised the report for providing a clear direction towards achieving the province's 2030 environmental targets and beyond. He estimated that establishing the recommended bodies would take 18 months to two years, and promised job security for current employees of Nova Scotia Power and the Utility and Review Board throughout the transition.

The report advocates for the new system operator to improve consumer trust by distancing electricity system decisions from Nova Scotia Power's corporate interests. It also critiques the current breadth of the Utility and Review Board's mandate as overly extensive for addressing the energy transition's long-term requirements.

Nova Scotia Power's president, Peter Gregg, welcomed the recommendations, emphasizing their role in the province's shift towards renewable energy. He highlighted the importance of a focused energy regulator and a dedicated system operator in advancing the projects essential to reliable customer service.

The task force's 12 recommendations also include the requirement for Nova Scotia Power to submit an annual asset management plan for regulatory approval and to produce reports on vegetation and wood pole management. It suggests the government assess Ontario's hydro policies for potential adaptation in Nova Scotia and calls for upgrades to the transmission grid infrastructure, with projected costs detailed by Stantec.

Alison Scott remarked on the comparative expense of coal power against renewable sources like wind, suggesting that investments in the grid to support renewables would be economically beneficial in the long run.

 

Related News


All-electric home sports big windows, small footprint

Cold-Climate Heat Pumps deliver efficient heating and cooling for Northern B.C. Net Zero Ready homes, with air-source Mitsubishi H2i systems, triple-pane windows, blower door ACH 0.8, BC Hydro rebates, and CleanBC incentives.

 

Key Points

Electric air-source systems that heat and cool in subzero climates, cutting emissions and lowering energy costs.

✅ Net Zero Ready, Step Code 5, ACH 0.8 airtightness

✅ Operate efficiently to about -28 C with backup heat

✅ Eligible for BC Hydro and CleanBC rebates

 

Heat pump provides heating, cooling in northern B.C. home
It's a tradition at Vanderhoof-based Northern Homecraft that, on the day of the blower door test for a just-completed home, everyone who worked on the build gathers to watch it happen. And in the spring of 2021, on a dazzling piece of land overlooking the mouth of the Stuart River near Fort St. James, that day was a cause for celebration.

A new 3,400-square-foot home subjected to the blower door test – a diagnostic tool to determine how much air is entering or escaping from a home – was rated at just 0.8 air changes per hour (ACH). That helps make it a Net Zero Ready home, and BC Energy Step Code 5 compliant. It means heating the home takes about a third of the energy a typical similar-sized home in B.C. requires today.
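The blower door arithmetic behind that rating is simple: air changes per hour at the 50-pascal test pressure is the hourly fan flow divided by the house volume. A minimal sketch in Python, in which the fan flow and ceiling height are assumed for illustration – only the floor area and the 0.8 ACH figure come from the article:

```python
# ACH50: air changes per hour at 50 Pa blower-door test pressure.
# Fan flow and ceiling height below are hypothetical; the 3,400 sq ft
# floor area and the 0.8 ACH result are the article's figures.

def ach50(cfm50: float, volume_ft3: float) -> float:
    """Hourly blower-door airflow divided by house volume."""
    return cfm50 * 60.0 / volume_ft3

floor_area_ft2 = 3_400            # from the article
ceiling_height_ft = 9.0           # assumed average height
volume = floor_area_ft2 * ceiling_height_ft

# A fan flow of about 408 CFM at 50 Pa reproduces the reported rating:
print(f"{ach50(408, volume):.2f} ACH50")   # -> 0.80 ACH50
```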

From an energy-efficiency perspective, this is a home whose evident beauty is anything but skin deep.

"The home has lot of square footage of finished living space, and it also has a lot of glazing," says Northern Homecraft owner Shay Bulmer, referring to the home's large windows. "We had a lot of window space to deal with, as well as large vaulted open areas where you can only achieve so much additional insulation. There were a few things that the home had going against it as far as performance goes. There were challenges in keeping it comfortable year-round."


Well-insulated home ideal for heat pump option
Most homes in colder areas of B.C. lean on gas-fueled heating systems to deal with the often long, chilly winters. But with the arrival of cold climate heat pumps capable of providing heat efficiently when temperatures dip as low as -30°C, there's now a clean option for those homes.

Heat pumps are an increasingly popular option, both for new and existing homes, because they avoid the carbon emissions associated with fossil fuel use while also offering summer cooling.

The Fort St. James home, which was built with premium insulation, airtightness and energy efficiency in mind, made the decision to opt for a heat pump even easier. Still, the heat pump option took the home's owners Dexter and Cheryl Hodder by surprise. While their focus was on designing a home that took full advantage of views down to the river, the couple was under the distinct impression that heat pumps couldn't cut it in the chilly north.

"I wasn't really considering a heat pump, which I thought was only a good solution in a moderate climate," says Dexter, who as director of research and education for the John Prince Research Forest, studies wildlife and forestry interactions in north central B.C. "The specs on the heat pump indicate it would work down to -28°C, and I was skeptical of that. But it worked exactly to spec. It almost seems ridiculous to generate heat from outside air at those low temperatures, but it does."

 

Getting it right with support and rebates
Northern Homecraft took advantage of BC Hydro's Mechanical System Design Pilot program to ensure proper design, installation, and verification of the home's heat pump system.

Based on the home's specific location, size, and performance targets, they installed a ducted Mitsubishi H2i air-source heat pump system. Windows are triple-pane, double-coated, and a central feature of the home, while insulation specifications were R-40 deep-frame insulation in the exterior walls, R-80 insulation in the attic, and R-40 insulation in the vaulted ceilings.

The combination of the year-round benefits of heat pumps, their role in reducing fossil fuel emissions, and the availability of rebates is making the systems increasingly attractive in B.C.

BC Hydro offers home renovation rebates of up to $10,000 for energy-efficient upgrades to existing homes. Rebates are available for windows and doors, insulation, heat pumps, and heat pump hot water heaters. In partnership with CleanBC, rebates of up to $11,000 are also available – when combined with the federal Greener Homes program – for those switching from fossil fuel heating to an electric heat pump.


'Heat dome' pushes summer highs to 40°C
Cooling wasn't really a consideration for Dexter and Cheryl when they were living in a smaller bungalow shaded by trees. But they knew that with the big windows, vaulted ceiling in the living room, and an upstairs bedroom in the new home, there may come a time when they needed air conditioning.

That day arrived shortly after the home was built, as the infamous "heat dome" settled on B.C. and drove temperatures at Fort St. James to a dizzying 40°C.

"It was disgustingly hot, and I don't care if I never see that again here," says Hodder, with a laugh. "But the heat pump maintained the house really nicely throughout, at about 22 degrees. The whole house stayed cool. We just had to close the door to the upper bedroom so it wasn't really heating up during the day."

Hodder says he had to work with the heat pump manufacturer Mitsubishi a couple of times over that first year to fix a few issues with the system's controls. But he's confident that the building's tight, well-insulated envelope and the heat pump's backup electric heat, which kicks in when temperatures dip below -28°C, will make it the system-for-all-seasons it was designed to be.
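That backup behaviour reduces to a simple switchover rule: the heat pump carries the load down to its rated limit, and electric resistance heat takes over below it. A toy sketch in Python – the -28°C threshold is the article's figure, the function itself is illustrative:

```python
# Toy switchover rule for a cold-climate heat pump with electric backup.
# The -28 C limit is from the article; real controllers also stage the
# two sources and apply hysteresis to the changeover.

HEAT_PUMP_MIN_TEMP_C = -28.0

def active_heat_source(outdoor_temp_c: float) -> str:
    """Pick the heat source for a given outdoor temperature."""
    if outdoor_temp_c >= HEAT_PUMP_MIN_TEMP_C:
        return "heat pump"
    return "backup electric heat"

for temp_c in (5.0, -14.0, -27.0, -31.0):
    print(f"{temp_c:>6.1f} C -> {active_heat_source(temp_c)}")
```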

Even with the use of supplemental electric heating during the record chill of December and January, the home's energy costs weren't much higher than the mid-winter bills the couple used to pay in their smaller bungalow, which relied on a combination of gas-fired in-floor heating and electric baseboards.

Fort St. James is a former fur trading post located northwest of Prince George and a short drive north of Vanderhoof. Winters are cold and snowy, with average daily low temperatures in December and January of around -14°C.

"During the summer and into the fall, we were paying well less than $100 a month," says Hodder, looking back at electricity bills over the first year in the home. "And that's everything. We're only electric here, and we also had both of us working from home all last year."

 

Word of mouth making heat pumps popular in Fort St. James
While the size of the home presented new challenges for the builders, it's one of five Net Zero Ready or Net Zero homes – all equipped with some form of heat pump – that Northern Homecraft has built in Fort St. James.

The smallest of the homes is a two-bedroom, one-bathroom home that's just under 900 square feet. Northern Homecraft may be based in Vanderhoof, but it's the much smaller town of Fort St. James where they're making their mark with super-efficient homes. Net Zero Ready homes are up to 80% more efficient than homes built to the standard building code, and become Net Zero once renewable energy generation – usually in the form of photovoltaic solar – is installed.

"We were pretty proud that the first home we built in Fort St. James was the first single family Net Zero Ready home built in B.C.," says Northern Homecraft's Bulmer. "And I think it's kind of caught on in a smaller community where everyone talks to everyone."

 

Related News


EU Plans To Double Electricity's Share Of Energy Use By 2050

European Green Deal Electrification accelerates decarbonization via renewables, electric vehicles, heat pumps, and clean industry, backed by sustainable finance, EIB green lending, just transition funds, and energy taxation reform to phase out fossil fuels.

 

Key Points

An EU plan to replace fossil fuels with renewable electricity in transport, buildings, and industry, supported by green finance.

✅ Doubles electricity's share to cut CO2 and phase out fossil fuels.

✅ Drives EVs, heat pumps, and electrified industry via renewables.

✅ Funded by EIB lending, EU budget, and just transition support.

 

The European Union is preparing an ambitious plan to completely decarbonize by 2050. Increasing the share of electricity in Europe’s energy system – electricity that will increasingly come from renewable sources – will be at the center of this strategy, the new head of the European Commission’s energy department said yesterday.

This will mean more electric cars, electric heating and electric industry. The idea is that fossil fuels should no longer be a primary energy source, heating homes, warming food or powering cars. In the medium term they should be used only to generate electricity, which then powers these things, resulting in lower CO2 emissions.

“First assessments show we need to double the share of electricity in energy consumption by 2050,” Ditte Juul-Jørgensen said at an event in Brussels this week. “We’ve already seen an increase in the last decade, but we need to go further”.

Juul-Jørgensen, who started in her job as director-general of the commission’s energy department in August, has come to the role at a pivotal time for energy. The 2050 decarbonization proposal from the Commission, the EU’s executive branch, is expected to be approved next month by EU national leaders. A veto from Poland that has blocked adoption until now is likely to be overcome if Poland and other Eastern European countries are offered financial assistance from a “just transition fund”, according to EU sources.

Ursula von der Leyen, the incoming President of the Commission, has promised to unveil a “European Green Deal” in her first 100 days in office, designed to get the EU to its 2050 goal. Juul-Jørgensen will be working with the incoming EU Energy Commissioner, Kadri Simson, on designing this complex strategy. The overall aim will be to phase out fossil fuels and increase the use of electricity from green sources.

“This will be about how do we best make use of electricity to feed into other sectors,” Juul-Jørgensen said. “We need to think about transforming it into other sources, and how to best transport it.”

“But the biggest challenge from what I see today is that of investment and finance - the changes we have to make are very significant.”

 

Financing problems

The Commission is going to try to tackle the challenges of financing the energy transition with two tools: dedicated climate funding in the EU budget, and dedicated climate lending from the European Investment Bank.

“The EIB will play an increasing role in future. We hope to see agreement [with the EIB board] on that in the coming months so there’s a clear operator in the EIB to support the green transition. We’re looking at something around €400 billion a year.”

The Commission’s proposed dedicated climate spending in the next seven-year budget must still be approved by the 28 EU national governments. Juul-Jørgensen said there is unanimous agreement on the amount: 25% of the budget. But there is disagreement about how to determine what is green spending.

“A lot of work has been ongoing to ensure that when it comes to counting it reflects the reality of the investments,” she said. “We’re working on the taxonomy on sustainable finance - internally identifying sectors contributing to overall climate objectives.”

 

Electricity pact

Juul-Jørgensen was speaking at an event organized by the Electrification Alliance, a pact between nine industry organizations to lobby for electricity to be put at the heart of the European Green Deal. They signed a declaration at the event calling for a variety of measures to be included in the green deal, including a change to the EU’s energy taxation regime to incentivize a switch from fossil fuel to electricity consumption.

“Electrification is the most important solution to turn the vision of a fossil-free Europe into reality,” said Laurence Tubiana, CEO of the European Climate Foundation, one of the signatories, and co-architect of the Paris Agreement.

“We are determined to deliver, but we must be mindful of the different starting points and secure sufficient financing to ensure a fair transition”, said Magnus Hall, President of electricity industry association Eurelectric, another signatory.

The energy taxation issue has been particularly tricky for the EU, since any change in taxation rules requires the unanimous consent of all 28 EU countries. But experts say that current taxation structures are subsidizing fossil fuels and punishing electricity, and unless this is changed the European Green Deal can have little effect.

“Yes this issue will be addressed in the incoming commission once it takes up its function,” Juul-Jørgensen said in response to an audience question. “We all know the challenge - the unanimity requirement in the Council - and so I hope that member states will agree to the direction of work and the need to address energy taxation systems to make sure they’re consistent with the targets we’ve set ourselves.”

But some are concerned that the transformation envisioned by the green deal will have negative impacts on some of the most vulnerable members of society, including those who work in the fossil fuel sector.

This week the Centre on Regulation in Europe sent an open letter to Frans Timmermans, the Commission Vice President in charge of climate, warning that policymakers need to be mindful of distributional effects. These worries have been heightened by the yellow vest protests in France, which were sparked by French President Emmanuel Macron’s attempt to increase fuel taxes for non-electric cars.

“The effectiveness of climate action and sustainability policies will be challenged by increasing social and political pressures,” wrote Máximo Miccinilli, the center’s director for energy. “If not properly addressed, those will enhance further populist movements that undermine trust in governance and in the public institutions.”

Miccinilli suggests that more research be done into identifying, quantifying and addressing distributional effects before new policies are put in place to phase out fossil fuels. He proposes launching a new European Observatory for Distributional Effects of the Energy Transition to deal with this.

EU national leaders are expected to vote on the 2050 decarbonization target at a summit in Brussels on December 12, and von der Leyen will likely unveil her European Green Deal in March.

 

Related News


Was there another reason for electricity shutdowns in California?

PG&E Wind Shutdown and Renewable Reliability examines PSPS strategy, wildfire risk, transmission line exposure, wind turbine cut-out speeds, grid stability, and California's energy mix amid historic high-wind events and supply constraints across service areas.

 

Key Points

An overview of PG&E's PSPS decisions, wildfire mitigation, and how wind cut-out limits influence grid reliability.

✅ Wind turbines reach cut-out near 55 mph, reducing generation.

✅ PSPS mitigates ignition from damaged transmission infrastructure.

✅ Baseload diversity improves resilience during high-wind events.

 

According to the official, widely reported story, Pacific Gas & Electric (PG&E) initiated power shutoffs across substantial portions of its electric transmission system in northern California as a precautionary measure.

Citing high wind speeds they described as “historic,” the utility claims that if it didn’t turn off the grid, wind-caused damage to its infrastructure could start more wildfires.

Perhaps that’s true. Perhaps. This tale presumes that the folks who designed and maintain PG&E’s transmission system either were unaware of or ignored the need to design it to withstand severe weather events, and that the Federal Energy Regulatory Commission (FERC) and North American Electric Reliability Corp. (NERC) allowed the utility to do so.

Ignorance and incompetence happen, to be sure, but there’s much about this story that doesn’t smell right—and it’s disappointing that most journalists and elected officials are apparently accepting it without question.

Take, for example, this statement from a Fox News story about the Kincade Fires: “A PG&E meteorologist said it’s ‘likely that many trees will fall, branches will break,’ which could damage utility infrastructure and start a fire.”

Did you ever notice how utilities cut wide swaths of trees away when transmission lines pass through forests? There’s a reason for that: when trees fall and branches break, the grid can still function.

So, if badly designed and poorly maintained infrastructure isn’t the reason PG&E cut power to millions of Californians, what might have prompted them to do so? Could it be that PG&E’s heavy reliance on renewable energy means they don’t have the power to send when a “historic” weather event occurs?

 

Wind Speed Limits

The two most popular forms of renewable energy come with operating limitations. With solar power, the constraint is obvious: the availability of sunlight. One doesn’t generate solar power at night, and generation drops off as cloud cover increases during the day.

The main operating constraint of wind power is, of course, wind speed. At the low end of the scale, you need about a 6 or 7 miles-per-hour wind to get a turbine moving. This is called the “cut-in speed.” To generate maximum power, about a 30 mph wind is typically required. But if the wind speed is too high, the wind turbine will shut down. This is called the “cut-out speed,” and it’s about 55 miles per hour for most modern wind turbines.

It may seem odd that wind turbines have a cut-out speed, but there’s a very good reason for it. Each wind turbine rotor is connected to an electric generator housed in the turbine nacelle. The connection is made through a gearbox that is sized to turn the generator at the precise speed required to produce 60 Hertz AC power.

The blades of the wind turbine are airfoils, just like the wings of an airplane. Adjusting the pitch (angle) of the blades allows the rotor to maintain constant speed, which, in turn, allows the generator to maintain the constant speed it needs to safely deliver power to the grid. However, there’s a limit to blade pitch adjustment. When the wind is blowing so hard that pitch adjustment is no longer possible, the turbine shuts down. That’s the cut-out speed.
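Put together, those three speeds define a turbine's power curve. A minimal sketch in Python using the article's approximate figures – real curves are manufacturer-specific, and the cubic ramp below rated speed is a textbook simplification:

```python
# Idealized wind turbine power curve: zero below cut-in, a roughly cubic
# ramp up to rated speed, flat at rated power, and zero again at cut-out.
# Speeds are the article's approximate figures, in mph.

def output_fraction(wind_mph: float,
                    cut_in: float = 6.5,
                    rated: float = 30.0,
                    cut_out: float = 55.0) -> float:
    """Fraction of nameplate power produced at a given wind speed."""
    if wind_mph < cut_in or wind_mph >= cut_out:
        return 0.0   # too little wind, or shut down to protect the turbine
    if wind_mph >= rated:
        return 1.0   # blade pitch holds output at rated power
    # Below rated speed, available power scales roughly with wind speed cubed.
    return (wind_mph**3 - cut_in**3) / (rated**3 - cut_in**3)

for mph in (5, 15, 30, 50, 56):
    print(f"{mph:>3} mph -> {output_fraction(mph):.2f} of rated power")
# The 56 mph line prints 0.00: a "historic" wind event takes turbines offline.
```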

Now consider how California’s power generation profile has changed. According to Energy Information Administration data, the state generated 74.3 percent of its electricity from traditional sources—fossil fuels and nuclear—in 2001. Hydroelectric, geothermal, and biomass-generated power accounted for most of the remaining 25.7 percent, with wind and solar providing only 1.98 percent of the total.

By 2018, the state’s renewable portfolio had jumped to 43.8 percent of total generation, with wind and solar now accounting for 17.9 percent of the total. That’s a lot of power to depend on from inherently unreliable sources. Thus, it wouldn’t be at all surprising to learn that PG&E didn’t stop delivering power out of fear of starting fires, but because it knew it wouldn’t have power to deliver once high winds shut down all those wind turbines.
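The scale of that shift is worth spelling out. A two-line check in Python on the article's EIA figures:

```python
# Wind + solar share of California generation, as quoted from EIA data.
share_2001 = 1.98    # percent of total generation in 2001
share_2018 = 17.9    # percent of total generation in 2018

print(f"Wind + solar share grew {share_2018 / share_2001:.1f}x from 2001 to 2018.")
# -> Wind + solar share grew 9.0x from 2001 to 2018.
```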

 

Related News


Swiss Earthquake Service and ETH Zurich aim to make geothermal energy safer

Advanced Traffic Light System for Geothermal Safety models fracture growth and friction with rock physics, geophones, and supercomputers to predict induced seismicity during hydraulic stimulation, enabling real-time risk control for ETH Zurich and SED.

 

Key Points

ATLS uses rock physics, geophones, and HPC to forecast induced seismicity in real time during geothermal stimulation.

✅ Real-time seismic risk forecasts during hydraulic stimulation

✅ Uses rock physics, friction, and fracture modeling on HPC

✅ Supports ETH Zurich and SED field tests in Iceland and Bedretto

 

The Swiss Earthquake Service and ETH Zurich want to make geothermal energy safer, according to a news report from Switzerland earlier this month. This is to be made possible by new software, including machine learning, and the computing power of supercomputers. The first geothermal tests have already been carried out in Iceland, and more will follow in the Bedretto laboratory.

In areas with volcanic activity, the conditions for operating geothermal plants are ideal. In Iceland, the Hellisheidi power plant makes an important contribution to sustainable energy use.

Deep geothermal energy still has untapped potential, and it is a pillar of Switzerland's Energy Strategy 2050. While this practically inexhaustible energy source can be tapped with comparatively little effort in volcanically active areas along fault zones of the earth's crust, access elsewhere on the continents is often much more difficult and risky, because geology like Switzerland's makes sustainable energy production harder.

Improve the water permeability of the rock

On one hand, you have to drill four to five kilometers deep to reach the correspondingly heated layers of earth in Switzerland. It is only at this depth that temperatures between 160 and 180 degrees Celsius can be reached, which is necessary for an economically usable water cycle. On the other hand, the problem of low permeability arises with rock at these depths. “We need a permeability of at least 10 millidarcy, but you can typically only find a thousandth of this value at a depth of four to five kilometers,” says Thomas Driesner, professor at the Institute of Geochemistry and Petrology at ETH Zurich.

In order to improve the permeability, water is pumped into the subsurface, a process known as hydraulic stimulation, or "fracking". The water acts against friction: fracture surfaces shift against each other and stresses are released. This hydraulic stimulation widens fractures in the rock so that water can circulate in the hot crust. The fractures in the earth's crust originate from tectonic stresses, caused in Switzerland by the Adriatic plate, which moves northwards and presses against the Eurasian plate.

In addition to geothermal energy, the “Advanced Traffic Light System” could also be used in underground construction or in construction projects for the storage of carbon dioxide.

Quake due to water injection

The disadvantage of such hydraulic stimulation is the vibrations it causes, which are often too weak to be perceived without measuring instruments. But that was not the case with the geothermal projects in St. Gallen in 2013 and Basel in 2006. In Basel, a total of around 11,000 cubic meters of water was pumped into the borehole, causing the pressure to rise. Based on statistical surveys, the magnitudes 2.4 and 2.9 were defined as limit values for the maximum permitted magnitude of induced earthquakes; if these are reached, the water injection is stopped.

In Basel, however, a loud bang was followed by a series of tremors and, with a time delay, stronger earthquakes that startled residents. In both cities, earthquakes with a magnitude greater than 3 were recorded. Since then it has been clear that stopping the water injection when threshold values are reached does not by itself guarantee safety during stimulation.
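In code, that conventional traffic-light scheme is little more than a threshold check on the strongest event recorded so far. A minimal sketch in Python built on the Basel limit values quoted above – the two-tier actions are an illustrative reading of such schemes, not the SED's actual protocol:

```python
# Conventional seismic traffic-light logic for hydraulic stimulation.
# Thresholds are the Basel limit magnitudes quoted in the article;
# the tiered actions are illustrative assumptions.

def traffic_light(max_magnitude: float,
                  amber: float = 2.4,
                  red: float = 2.9) -> str:
    """Map the strongest recorded event so far to an injection decision."""
    if max_magnitude >= red:
        return "RED: stop water injection"
    if max_magnitude >= amber:
        return "AMBER: reduce injection rate"
    return "GREEN: continue stimulation"

for magnitude in (1.8, 2.5, 3.0):
    print(f"M{magnitude:.1f} -> {traffic_light(magnitude)}")
```

As Basel showed, the scheme is purely reactive: by the time the red threshold is crossed, stronger delayed earthquakes may already be unavoidable, which is exactly the gap the forecasting approach described below aims to close.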

Simulation during stimulation

The Swiss Seismological Service SED and ETH Zurich are now pursuing a new approach that can predict in real time, during a hydraulic stimulation, whether noticeable earthquakes are to be expected as it proceeds. This is made possible by the so-called “Advanced Traffic Light System” based on rock physics, software developed by the SED that carries out its analysis on a high-performance computer.

Geophones measure the ground vibrations around the borehole, which serve as indicators of the probability of noticeable earthquakes. The supercomputer then runs through millions of possible scenarios based on the number and type of fractures to be expected and the friction and stresses in the rock. Finally, the scenario that best reflects the subsurface can be filtered out.

Further tests in the mountain

However, the research still lacks a permanent test facility for the system, because faulty measurements must be eliminated and a fixed data format adhered to before the calculations can run on the supercomputer. The first tests were carried out in Iceland last year, with more to follow in the Bedretto geothermal laboratory in late summer. The aim is to find an optimum between increasing the permeability of the rock layers and an adequate water supply.

The new approach could make geothermal energy safer and ultimately help this energy source become more accepted. The researchers also see applications wherever artificially induced earthquakes can occur, such as in underground mining or in the underground storage of carbon dioxide.

 

Related News


Jolting the brain's circuits with electricity is moving from radical to almost mainstream therapy

Brain Stimulation is transforming neuromodulation, from TMS and DBS to closed-loop devices, targeting neural circuits for addiction, depression, Parkinson's, epilepsy, and chronic pain, powered by advanced imaging, AI analytics, and the NIH BRAIN Initiative.

 

Key Points

Brain stimulation uses pulses to modulate neural circuits, easing symptoms in depression, Parkinson's, and epilepsy.

✅ Noninvasive TMS and invasive DBS modulate specific brain circuits

✅ Closed loop systems adapt stimulation via real time biomarker detection

✅ Emerging uses: addiction, depression, Parkinson's, epilepsy, chronic pain

 

In June 2015, biology professor Colleen Hanlon went to a conference on drug dependence. As she met other researchers and wandered around a glitzy Phoenix resort’s conference rooms to learn about the latest work on therapies for drug and alcohol use disorders, she realized that out of the 730 posters, there were only two on brain stimulation as a potential treatment for addiction — both from her own lab at Wake Forest School of Medicine.

Just four years later, she would lead 76 researchers on four continents in writing a consensus article about brain stimulation as an innovative tool for addiction. And in 2020, the Food and Drug Administration approved a transcranial magnetic stimulation device to help patients quit smoking, a milestone for substance use disorders.

Brain stimulation is booming. Hanlon can attend entire conferences devoted to the study of what electrical currents do to the intricate networks of highways and backroads that make up the brain’s circuitry. This expanding field of research is slowly revealing truths of the brain: how it works, how it malfunctions, and how electrical impulses, precisely targeted and controlled, might be used to treat psychiatric and neurological disorders.

In the last half-dozen years, researchers have launched investigations into how different forms of neuromodulation affect addiction, depression, loss-of-control eating, tremor, chronic pain, obsessive compulsive disorder, Parkinson’s disease, epilepsy, and more. Early studies have shown subtle electrical jolts to certain brain regions could disrupt circuit abnormalities — the miscommunications — that are thought to underlie many brain diseases, and help ease symptoms that persist despite conventional treatments.

The National Institutes of Health’s massive BRAIN Initiative put circuits front and center, distributing $2.4 billion to researchers since 2013 to devise and use new tools to observe interactions between brain cells and circuits. That, in turn, has kindled interest from the private sector. Among the advances that have enhanced our understanding of how distant parts of the brain talk with one another are new imaging technology and the use of machine learning to interpret complex brain signals and analyze what happens when circuits go haywire.

Still, the field is in its infancy, and even therapies that have been approved for use in patients with, for example, Parkinson’s disease or epilepsy help only a minority of patients. “If it was the Bible, it would be the first chapter of Genesis,” said Michael Okun, executive director of the Norman Fixel Institute for Neurological Diseases at University of Florida Health.

As brain stimulation evolves, researchers face daunting hurdles, and not just scientific ones. How will brain stimulation become accessible to all the patients who need it, given how expensive and invasive some treatments are? Proving to the FDA that brain stimulation works, and does so safely, is complicated and expensive. Even with a swell of scientific momentum and an influx of funding, the agency has so far cleared brain stimulation for only a handful of limited conditions. Persuading insurers to cover the treatments is another challenge altogether. And outside the lab, researchers are debating nascent issues, such as the ethics of mind control, the privacy of a person’s brain data, and how to best involve patients in the study of the human brain’s far-flung regions.

Neurologist Martha Morrell is optimistic about the future of brain stimulation. She remembers the shocked reactions of her colleagues in 2004 when she left full-time teaching at Stanford (she still has a faculty appointment as a clinical professor of neurology) to direct clinical trials at NeuroPace, then a young company making neurostimulator systems to potentially treat epilepsy patients.

“When I started working on this, everybody thought I was insane,” said Morrell. Nearly 20 years in, she sees a parallel between the story of jolting the brain’s circuitry and that of early implantable cardiac devices, such as pacemakers and defibrillators, which initially “were used as a last option, where all other medications have failed.” Now, “the field of cardiology is very comfortable incorporating electrical therapy, device therapy, into routine care. And I think that’s really where we’re going with neurology as well.”


Reaching a ‘slope of enlightenment’
Parkinson’s is, in some ways, an elder in the world of modern brain stimulation, and it shows the potential as well as the limitations of the technology. Surgeons have been implanting electrodes deep in the brains of Parkinson’s patients since the late 1990s, and in people with more advanced disease since the early 2000s.

In that time, it’s gone through the “hype cycle,” said Okun, the national medical adviser to the Parkinson’s Foundation since 2006. Feverish excitement and overinflated expectations have given way to reality, bringing scientists to a “slope of enlightenment,” he said. They have found deep brain stimulation to be very helpful for some patients with Parkinson’s, rendering them almost symptom-free by calming the shaking and tremors that medications couldn’t. But it doesn’t stop the progression of the disease, or resolve some of the problems patients with advanced Parkinson’s have walking, talking, and thinking.

In 2015, the same year Hanlon found only her lab’s research on brain stimulation at the addiction conference, Kevin O’Neill watched one finger on his left hand start doing something “funky.” One finger twitched, then two, then his left arm started tingling and a feeling appeared in his right leg, like it was about to shake but wouldn’t — a tremor.

“I was assuming it was anxiety,” O’Neill, 62, told STAT. He had struggled with anxiety before, and he had endured a stressful year: a separation, selling his home, starting a new job at a law firm in California’s Bay Area. But a year after his symptoms first began, O’Neill was diagnosed with Parkinson’s.


Doctors prescribed him pills that promote the release of dopamine, to offset the death of brain cells that produce this messenger molecule in circuits that control movement. But he took them infrequently because he worried about insomnia as a side effect. Walking became difficult — “I had to kind of think my left leg into moving” — and the labor lawyer found it hard to give presentations and travel to clients’ offices.

A former actor with an outgoing personality, he developed social anxiety and didn’t tell his bosses about his diagnosis for three years, and wouldn’t have, if not for two workdays in summer 2018 when his tremors were severe and obvious.

O’Neill’s tremors are all but gone since he began deep brain stimulation last May, though his left arm shakes when he feels tense.

It was during that period that he learned about deep brain stimulation, at a support group for Parkinson’s patients. “I thought, ‘I will never let anybody fuss with my brain. I’m not going to be a candidate for that,’” he recalled. “It felt like mad scientist science fiction. Like, are you kidding me?”

But over time, the idea became less radical, as O’Neill spoke to DBS patients and doctors and did his own research, and as his symptoms worsened. He decided to go for it. Last May, doctors at the University of California, San Francisco surgically placed three metal leads into his brain, connected by thin cords to two implants in his chest, just near the clavicles. A month later, he went into the lab and researchers turned the device on.

“That was a revelation that day,” he said. “You immediately — literally, immediately — feel the efficacy of these things. … You go from fully symptomatic to non-symptomatic in seconds.”

When his nephew pulled up to the curb to pick him up, O’Neill started dancing, and his nephew teared up. The following day, O’Neill couldn’t wait to get out of bed and go out, even if it was just to pick up his car from the repair shop.

In the year since, O’Neill’s walking has gone from “awkward and painful” to much improved, and his tremors are all but gone. When he is extra frazzled, like while renovating and moving into his new house overlooking the hills of Marin County, he feels tense and his left arm shakes and he worries the DBS is “failing,” but generally he returns to a comfortable, tremor-free baseline.

O’Neill worried about the effects of DBS wearing off but, for now, he can think “in terms of decades, instead of years or months,” he recalled his neurologist telling him. “The fact that I can put away that worry was the big thing.”

He’s just one patient, though. The brain has regions that are mostly uniform across all people. The functions of those regions also tend to be the same. But researchers suspect that how brain regions interact with one another — who mingles with whom, and what conversation they have — and how those mixes and matches cause complex diseases varies from person to person. So brain stimulation looks different for each patient.

Each case of Parkinson’s manifests slightly differently, and that’s a bit of knowledge that applies to many other diseases, said Okun, who organized the nine-year-old Deep Brain Stimulation Think Tank, where leading researchers convene, review papers, and publish reports on the field’s progress each year.

“I think we’re all collectively coming to the realization that these diseases are not one-size-fits-all,” he said. “We have to really begin to rethink the entire infrastructure, the schema, the framework we start with.”

Brain stimulation is also used frequently to treat people with common forms of epilepsy, and has reduced the number of seizures or improved other symptoms in many patients. Researchers have also been able to collect high-quality data about what happens in the brain during a seizure — including identifying differences between epilepsy types. Still, only about 15% of patients are symptom-free after treatment, according to Robert Gross, a neurosurgery professor at Emory University in Atlanta.

“And that’s a critical difference for people with epilepsy. Because people who are symptom-free can drive,” which means they can get to a job in a place like Georgia, where there is little public transit, he said. So taking neuromodulation “from good to great,” is imperative, Gross said.


Renaissance for an ancient idea
Recent advances are bringing about what Gross sees as “almost a renaissance period” for brain stimulation, though the ideas that undergird the technology are millennia old. Neuromodulation goes back to at least ancient Egypt and Greece, when electrical shocks from a ray, called the “torpedo fish,” were recommended as a treatment for headache and gout. Over centuries, the fish zaps led to doctors burning holes into the brains of patients. Those “lesions” worked, somehow, but nobody could explain why they alleviated some patients’ symptoms, Okun said.

Perhaps the clearest predecessor to today’s technology is electroconvulsive therapy (ECT), which in a rudimentary and dangerous way began being used on patients with depression roughly 100 years ago, said Nolan Williams, director of the Brain Stimulation Lab at Stanford University.

More modern forms of brain stimulation came about in the United States in the mid-20th century. A common, noninvasive approach is transcranial magnetic stimulation, which involves placing an electromagnetic coil on the scalp to transmit a current into the outermost layer of the brain. Vagus nerve stimulation (VNS), used to treat epilepsy, zaps a nerve that contributes to some seizures.

The most invasive option, deep brain stimulation, involves implanting in the skull a device attached to electrodes embedded in deep brain regions, such as the amygdala, that can’t be reached with other stimulation devices. In 1997, the FDA gave its first green light to deep brain stimulation as a treatment for tremor, and then for Parkinson’s in 2002 and the movement disorder dystonia in 2003.

Even as these treatments were cleared for patients, though, what was happening in the brain remained elusive. But advanced imaging tools now let researchers peer into the brain and map out networks — a recent breakthrough that researchers say has propelled the field of brain stimulation forward as much as increased funding has. Imaging of both human brains and animal models has helped researchers identify the neuroanatomy of diseases, target brain regions with more specificity, and watch what was happening after electrical stimulation.

Another key step has been the shift from open-loop stimulation — a constant stream of electricity — to closed-loop stimulation that delivers targeted, brief jolts in response to a symptom trigger. To make use of the futuristic technology, labs need people to develop artificial intelligence tools to interpret the large data sets a brain implant generates, and to tailor devices based on that information.
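The open- versus closed-loop distinction is essentially whether stimulation is gated by the recorded signal. A minimal sketch in Python, with a bare threshold detector standing in for the real biomarker classifiers — an illustration only, not any device vendor's API:

```python
# Open-loop vs closed-loop stimulation, reduced to a toy decision rule.
# The "biomarker" here is just signal amplitude crossing a threshold;
# real closed-loop devices use far richer detectors.

def open_loop(n_samples: int) -> list[bool]:
    """Open loop: stimulate on every sample, regardless of the signal."""
    return [True] * n_samples

def closed_loop(signal: list[float], threshold: float) -> list[bool]:
    """Closed loop: stimulate only when the recorded signal crosses threshold."""
    return [abs(sample) > threshold for sample in signal]

recording = [0.1, 0.2, 1.7, 0.3, 2.1, 0.2]   # hypothetical biomarker trace
print(open_loop(len(recording)))              # pulses everywhere
print(closed_loop(recording, threshold=1.5))  # pulses only on the two excursions
```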

“We’ve needed to learn how to be data scientists,” Morrell said.

Affinity groups, like the NIH-funded Open Mind Consortium, have formed to fill that gap. Philip Starr, a neurosurgeon and developer of implantable brain devices at the University of California at San Francisco Health system, leads the effort to teach physicians how to program closed-loop devices, and works to create ethical standards for their use. “There’s been extraordinary innovation after 20 years of no innovation,” he said.

The BRAIN Initiative has been critical, several researchers told STAT. “It’s been a godsend to us,” Gross said. The NIH’s Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative was launched in 2013 during the Obama administration with a $50 million budget. BRAIN now spends over $500 million per year. Since its creation, BRAIN has given over 1,100 awards, according to NIH data. Part of the initiative’s purpose is to pair up researchers with medical technology companies that provide human-grade stimulation devices to the investigators. Nearly three dozen projects have been funded through the investigator-devicemaker partnership program and through one focused on new implantable devices for first-in-human use, according to Nick Langhals, who leads work on neurological disorders at the initiative.

The more BRAIN invests, the more research is spawned. “We learn more about what circuits are involved … which then feeds back into new and more innovative projects,” he said.

Many BRAIN projects are still in early stages, finishing enrollment or small feasibility studies, Langhals said. Over the next couple of years, scientists will begin to see some of the fruits of their labor, which could lead to larger clinical trials, or to companies developing more refined brain stimulation implants, Langhals said.

Money from the National Institute of Mental Health, as well as the NIH’s Helping to End Addiction Long-term (HEAL) initiative, has similarly sweetened the appeal of brain stimulation, both for researchers and industry. “A critical mass” of companies interested in neuromodulation technology has mushroomed where, for two decades, just a handful of companies stood, Starr said.

More and more, pharmaceutical and digital health companies are looking at brain stimulation devices “as possible products for their future,” said Linda Carpenter, director of the Butler Hospital TMS Clinic and Neuromodulation Research Facility.


‘Psychiatry 3.0’
The experience with using brain stimulation to stop tremors and seizures inspired psychiatrists to begin exploring its use as a potentially powerful therapy for healing, or even getting ahead of, mental illness.

In 2008, the FDA approved TMS for patients with major depression who had tried, and not gotten relief from, drug therapy. “That kind of opened the door for all of us,” said Hanlon, a professor and researcher at the Center for Research on Substance Use and Addiction at Wake Forest School of Medicine. The last decade saw a surge of research into how TMS could be used to reset malfunctioning brain circuits involved in anxiety, depression, obsessive-compulsive disorder, and other conditions.

“We’re certainly entering into what a lot of people are calling psychiatry 3.0,” Stanford’s Williams said. “Whereas the first iteration was Freud and all that business, the second one was the psychopharmacology boom, and this third one is this bit around circuits and stimulation.”

Drugs alleviate some patients’ symptoms while simultaneously failing to help many others, but psychopharmacology clearly showed “there’s definitely a biology to this problem,” Williams said — a biology that in some cases may be more amenable to a brain stimulation.

The exact mechanics of what happens between cells when brain circuits … well, short-circuit, are unclear. Researchers are getting closer to finding biomarkers that warn of an incoming depressive episode, or wave of anxiety, or loss of impulse control. Those brain signatures could be different for every patient. If researchers can find molecular biomarkers for psychiatric disorders — and find ways to preempt those symptoms by shocking particular brain regions — that would reshape the field, Williams said.

Not only would disease-specific markers help clinicians diagnose people, but they could help chip away at the stigma that paints mental illness as a personal or moral failing instead of a disease. That’s what happened for epilepsy in the 1960s, when scientific findings nudged the general public toward a deeper understanding of why seizures happen, and it’s “the same trajectory” Williams said he sees for depression.

His research at the Stanford lab also includes work on suicide, and obsessive-compulsive disorder, which the FDA said in 2018 could be treated using noninvasive TMS. Williams considers brain stimulation, with its instantaneity, to be a potential breakthrough for urgent psychiatric situations. Doctors know what to do when a patient is rushed into the emergency room with a heart attack or a stroke, but there is no immediate treatment for psychiatric emergencies, he said. Williams wonders: What if, in the future, a suicidal patient could receive TMS in the emergency room and be quickly pulled out of their depressive mental spiral?

Researchers are also actively investigating the brain biology of addiction. In August 2020, the FDA approved TMS for smoking cessation, the first such OK for a substance use disorder, which is “really exciting,” Hanlon said. Although there is some nuance when comparing substance use disorders, a primal mechanism generally defines addiction: the eternal competition between “top-down” executive control functions and “bottom-up” cravings. It’s the same process that is at work when one is deciding whether to eat another cookie or abstain — just exacerbated.

Hanlon is trying to figure out if the stop and go circuits are in the same place for all people, and whether neuromodulation should be used to strengthen top-down control or weaken bottom-up cravings. Just as brain stimulation can be used to disrupt cellular misfiring, it could also be a tool for reinforcing helpful brain functions, or for giving the addicted brain what it wants in order to curb substance use.

Evidence suggests many people with schizophrenia smoke cigarettes (a leading cause of early death for this population) because nicotine reduces the “hyperconnectivity” that characterizes the brains of people with the disease, said Heather Ward, a research fellow at Boston’s Beth Israel Deaconess Medical Center. She suspects TMS could mimic that effect, and therefore reduce cravings and some symptoms of the disease, and she hopes to prove that in a pilot study that is now enrolling patients.

If the scientific evidence proves out, clinicians say brain stimulation could be used alongside behavioral therapy and drug-based therapy to treat substance use disorders. “In the end, we’re going to need all three to help people stay sober,” Hanlon said. “We’re adding another tool to the physician’s toolbox.”

Decoding the mysteries of pain
A favorable outcome to the ongoing research, one that would fling the doors to brain stimulation wide open for patients with myriad disorders, is far from guaranteed. Chronic pain researchers know that firsthand.

Chronic pain, among the most mysterious and hard-to-study medical phenomena, was the first use for which the FDA approved deep brain stimulation, said Prasad Shirvalkar, an assistant professor of anesthesiology at UCSF. But when studies didn’t pan out after a year, the FDA retracted its approval.

Shirvalkar is working with Starr and neurosurgeon Edward Chang on a profoundly complex problem: “decoding pain in the brain states, which has never been done,” as Starr told STAT.

Part of the difficulty of studying pain is that there is no objective way to measure it. Much of what we know about pain is from rudimentary surveys that ask patients to rate how much they’re hurting, on a scale from zero to 10.

Using implantable brain stimulation devices, the researchers ask patients for a 0-to-10 rating of their pain while recording up-and-down cycles of activity in the brain. They then use machine learning to compare the two streams of information and see what brain activity correlates with a patient’s subjective pain experience. Implantable devices let researchers collect data over weeks and months, instead of basing findings on small snippets of information, allowing for a much richer analysis.
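A toy version of that correlation step in Python, with fabricated placeholder numbers standing in for the implant recordings and patient ratings — the published work uses weeks of data and far richer models than a straight line:

```python
# Correlate a recorded brain-activity feature with 0-10 pain ratings.
# All numbers below are illustrative placeholders, not study data.

import numpy as np

activity = np.array([0.2, 0.5, 0.9, 1.4, 1.8, 2.3])  # hypothetical band power
pain     = np.array([1.0, 2.0, 4.0, 6.0, 7.0, 9.0])  # patient's 0-10 ratings

slope, intercept = np.polyfit(activity, pain, deg=1)  # least-squares line
r = np.corrcoef(activity, pain)[0, 1]                 # Pearson correlation

print(f"pain ~ {slope:.2f} * activity + {intercept:.2f} (r = {r:.2f})")
```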

 
