Japan restarting mothballed hydro

By Industrial Info Resources


To help meet electric power demand, Japan's Ministry of Land, Infrastructure and Transport (MLIT) has announced that it will reopen five closed hydropower plants.

One of the stations belongs to East Japan Railway Company (JR East) and is located along the Shinano River in Niigata Prefecture. The remaining four (Shiobara, Shinanogawa, Kiyotsugawa-Yuzawa and Uonogawa-Ishiuchi) belong to Tokyo Electric Power Company Incorporated (TEPCO).

The Shiobara plant was shut down in 2007 for unlicensed operation. After the earthquake, however, the accrued penalties were lifted, and MLIT gave TEPCO special permission to bring the power station back into operation. TEPCO and JR East were also permitted to reduce river flow in order to generate power for electrical demand.

The five hydropower stations are expected to bring 360 megawatts of power online and will be able to provide electricity for at least 120,000 homes throughout TEPCO's service area.

TEPCO announced that it would be unable to provide sufficient electricity to its service area due to compromised electrical infrastructure and the loss of several key power stations, including the Fukushima Daiichi and Fukushima Daini nuclear power stations. TEPCO's service area includes Yamanashi and Shizuoka prefectures, as well as the Kanto region of Japan, which comprises Tokyo, Chiba, Kanagawa, Gunma, Tochigi, Saitama and Ibaraki prefectures.

As a result of the insufficient capacity, the company broke its service area into five groups that are currently undergoing scheduled power outages in specified time blocks. TEPCO typically supplies a daily power demand of about 40 gigawatts (GW) in these areas, but announced that it would be able to provide only approximately 20 GW, and it called on residents, businesses and industries alike to conserve power and reduce the need for outages.
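To make the scale of that shortfall concrete, here is a minimal sketch of how a rotating-outage scheme spreads curtailment. The 40 GW, 20 GW and five-group figures come from the article; the assumption of equal-sized groups with demand unchanged by conservation is purely illustrative.

```python
import math

# Illustrative rotating-outage arithmetic (assumes equal-sized groups and
# no voluntary demand reduction; both are simplifications).
TYPICAL_DEMAND_GW = 40.0   # cited typical daily demand
AVAILABLE_GW = 20.0        # cited available supply
N_GROUPS = 5               # number of outage groups announced

shortfall_gw = max(TYPICAL_DEMAND_GW - AVAILABLE_GW, 0.0)
shortfall_share = shortfall_gw / TYPICAL_DEMAND_GW

# With equal-sized groups, this many groups must be offline at any one time
# to keep load within available capacity (before conservation helps).
groups_offline = math.ceil(shortfall_share * N_GROUPS)

print(f"Shortfall: {shortfall_gw:.0f} GW ({shortfall_share:.0%} of typical demand)")
print(f"Roughly {groups_offline} of {N_GROUPS} groups offline at a time, "
      f"rotated through scheduled time blocks; conservation reduces this need.")
```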

Related News

Experts Advise Against Cutting Quebec's Energy Exports Amid U.S. Tariff War

Quebec Hydropower Export Retaliation examines using electricity exports to counter U.S. tariffs amid Canada-U.S. trade tensions, weighing clean energy supply, grid reliability, energy security, legal risks, and long-term market impacts.

 

Key Points

Using Quebec electricity exports as leverage against U.S. tariffs, and its economic, legal, and diplomatic consequences.

✅ Revenue loss for Quebec and higher costs for U.S. consumers

✅ Risk of legal disputes under trade and energy agreements

✅ Long-term erosion of market share and grid cooperation

 

As trade tensions between Canada and the United States continue to escalate, with electricity exports at risk according to recent reporting, discussions have intensified around potential Canadian responses to the imposition of U.S. tariffs. One of the proposals gaining attention is the idea of reducing or even halting the export of energy from Quebec to the U.S. This measure has been suggested by some as a potential countermeasure to retaliate against the tariffs. However, experts and industry leaders are urging caution, emphasizing that the consequences of such a decision could have significant economic and diplomatic repercussions for both Canada and the United States.

Quebec plays a critical role in energy trade, particularly in supplying hydroelectric power to the United States, especially to the northeastern states, including New York where tariffs may spike energy prices according to analysts, strengthening the case for stable cross-border flows. This energy trade is deeply embedded in the economic fabric of both regions. For Quebec, the export of hydroelectric power represents a crucial source of revenue, while for the U.S., it provides access to a steady and reliable supply of clean, renewable energy. This mutually beneficial relationship has been a cornerstone of trade between the two countries, promoting economic stability and environmental sustainability.

In the wake of recent U.S. tariffs on Canadian goods, some policymakers have considered using energy exports as leverage, echoing threats to cut U.S. electricity exports in earlier disputes, to retaliate against what is viewed as an unfair trade practice. The idea is to reduce or stop the flow of electricity to the U.S. as a way to strike back at the tariffs and potentially force a change in U.S. policy. On the surface, this approach may appear to offer a viable means of exerting pressure. However, experts warn that such a move would be fraught with significant risks, both economically and diplomatically.

First and foremost, Quebec's economy is heavily reliant on revenue from hydroelectric exports to the U.S. Any reduction in these energy sales could have serious consequences for the province's economic stability, potentially resulting in job losses and a decrease in investment. The hydroelectric power sector is a major contributor to Quebec's GDP, and recent events, including a tariff threat that delayed a green energy bill in Quebec, illustrate how trade tensions can ripple through the policy landscape; disrupting this source of income could harm the provincial economy.

Additionally, experts caution that reducing energy exports could have long-term ramifications on the energy relationship between Quebec and the northeastern U.S. These two regions have developed a strong and interconnected energy network over the years, and abruptly cutting off the flow of electricity could damage this vital partnership. Legal challenges could arise under existing trade agreements, and even as tariff threats boost support for Canadian energy projects among some stakeholders, the situation would grow more complex. Such a move could also undermine trust between the two parties, making future negotiations on energy and other trade issues more difficult.

Another potential consequence of halting energy exports is that U.S. states may seek alternative sources of energy, diminishing Quebec's market share in the long run. With U.S. demand for clean energy growing as the country transitions away from fossil fuels, and with several regions looking to Canada for green power, cutting off Quebec’s electricity could prompt U.S. states to invest in other forms of energy, including renewables or even nuclear power. This could have a lasting effect on Quebec's position in the U.S. energy market, making it harder for the province to regain its footing.

Moreover, reducing or ceasing energy exports could further exacerbate trade tensions, leading to even greater economic instability. The U.S. could retaliate by imposing additional tariffs on Canadian goods or taking other measures that would negatively impact Canada's economy. This could create a cycle of escalating trade barriers that would hurt both countries and undermine the broader North American trade relationship.

While the concept of using energy exports as a retaliatory tool may seem appealing to some, the experts' advice is clear: the potential economic and diplomatic costs of such a strategy outweigh the short-term benefits. Quebec’s role as an energy supplier to the U.S. is crucial to its own economy, and maintaining a stable, reliable energy trade relationship is essential for both parties. Rather than escalating tensions further, it may be more prudent for Canada and the U.S. to seek diplomatic solutions that preserve trade relations and minimize harm to their economies.

While the idea of using Quebec’s energy exports as leverage in response to U.S. tariffs may appear attractive on the surface, and despite polls showing support for tariffs on energy and minerals among Canadians, it carries significant risks. Experts emphasize the importance of maintaining a stable energy export strategy to protect Quebec’s economy and preserve positive diplomatic relations with the U.S. Both countries have much to lose from further escalating trade tensions, and a more measured approach is likely to yield better outcomes in the long run.

 

Related News


7 steps to make electricity systems more resilient to climate risks

Electricity System Climate Resilience underpins grid reliability amid heatwaves and drought, integrating solar, wind, hydropower, nuclear, storage, and demand response with efficient transmission, flexibility, and planning to secure power for homes, industry, and services.

 

Key Points

Power systems' capacity to endure extreme weather and integrate clean energy while maintaining reliability and flexibility.

✅ Grid hardening, transmission upgrades, and digital forecasting.

✅ Flexible low-carbon supply: hydropower, nuclear, storage.

✅ Demand response, efficient cooling, and regional integration.

 

Summer is just half done in the northern hemisphere and yet we are already seeing electricity systems around the world struggling to cope with the severe strain of heatwaves and low rainfall.

These challenges highlight the urgent need for strong and well-planned policies and investments to improve the security of our electricity systems, which supply power to homes, offices, factories, hospitals, schools and other fundamental parts of our economies and societies. This means making our electricity systems more resilient to the effects of global warming – and more efficient and flexible as they incorporate rising levels of solar and wind power (solar is now the cheapest electricity in history, according to the IEA) – which will be critical for reaching net-zero emissions in time to prevent even worse impacts from climate change.

A range of different countries, including the US, Canada and Iraq, have been hard hit by extreme weather recently in the form of unusually high temperatures. In North America, the heat soared to record levels in the Pacific Northwest. An electricity watchdog says that five US regions face elevated risks to the security of their electricity supplies this summer, underscoring US grid climate risks that could worsen, and that California’s risk level is even higher.

Heatwaves put pressure on electricity systems in multiple ways. They increase demand as people turn up air conditioning, driving higher US electricity bills for many households, and as some appliances work harder to maintain cool temperatures. At the same time, higher temperatures can also squeeze electricity supplies by reducing the efficiency and capacity of traditional thermal power plants, such as coal, natural gas and nuclear. Extreme heat can reduce the availability of water for cooling plants or transporting fuel, forcing operators to reduce their output. In some cases, it can result in power plants having to shut down, increasing the risk of outages. If the heat wave is spread over a wide geographic area, it also reduces the scope for one region to draw on spare capacity from its neighbours, since they have to devote their available resources to meeting local demand.
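As a rough illustration of that two-sided squeeze, the sketch below derates thermal capacity and inflates cooling demand as temperature rises and checks the remaining reserve margin. All figures and derating rates are hypothetical, chosen only to show the mechanism, not IEA or utility data.

```python
# Hypothetical illustration: a heatwave raises demand (air conditioning)
# while reducing thermal plant output. All numbers are invented.

def available_thermal_gw(nameplate_gw: float, temp_c: float,
                         derate_per_deg: float = 0.005, ref_temp_c: float = 25.0) -> float:
    """Derate thermal output by a fixed fraction per degree above a reference temperature."""
    excess = max(temp_c - ref_temp_c, 0.0)
    return nameplate_gw * max(1.0 - derate_per_deg * excess, 0.0)

def demand_gw(base_gw: float, temp_c: float,
              cooling_gw_per_deg: float = 0.4, ref_temp_c: float = 25.0) -> float:
    """Add cooling load for every degree above the reference temperature."""
    return base_gw + cooling_gw_per_deg * max(temp_c - ref_temp_c, 0.0)

nameplate_thermal, other_supply, base_demand = 30.0, 10.0, 35.0  # GW, hypothetical system
for temp in (25, 32, 38, 42):
    supply = available_thermal_gw(nameplate_thermal, temp) + other_supply
    load = demand_gw(base_demand, temp)
    margin = (supply - load) / load
    print(f"{temp} C: supply {supply:.1f} GW, demand {load:.1f} GW, reserve margin {margin:+.1%}")
```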

A recent heatwave in Texas forced the grid operator to call for customers to raise their thermostats’ temperatures to conserve energy. Power generating companies suffered outages at much higher rates than expected, providing an unwelcome reminder of February’s brutal cold snap when outages – primarily from natural gas power plants – left up to 5 million customers across the US without power over a period of four days.

At the same time, lower than average rainfall and prolonged dry weather conditions are raising concerns about hydropower’s electricity output in various parts of the world, including Brazil, China, India and North America. The risks that climate change brings in the form of droughts add to the challenges faced by hydropower, the world’s largest source of clean electricity, highlighting the importance of developing hydropower resources sustainably and ensuring projects are climate resilient.

The recent spate of heatwaves and unusually long dry spells are fresh warnings of what lies ahead as our climate continues to heat up: an increase in the scale and frequency of extreme weather events, which will cause greater impacts and strains on our energy infrastructure.

Heatwaves will increase the challenge of meeting electricity demand while also decarbonizing the electricity supply. Today, the amount of energy used for cooling spaces – such as homes, shops, offices and factories – is responsible for around 1 billion tonnes of global CO2 emissions. In particular, energy for cooling can have a major impact on peak periods of electricity demand, intensifying the stress on the system. Since worldwide energy demand for air conditioning could triple by 2050, these strains are set to grow unless governments introduce stronger policy measures to improve the energy efficiency of air conditioning units.

Electricity security is crucial for smooth energy transitions
Many countries around the world have announced ambitious targets for reaching net-zero emissions by the middle of this century and are seeking to step up their clean energy transitions. The IEA’s recent Global Roadmap to Net Zero by 2050 makes it clear that achieving this formidable goal will require much more electricity, much cleaner electricity and for that electricity to be used in far more parts of our economies than it is today. This means electricity reaching much deeper into sectors such as transport (e.g. EVs), buildings (e.g. heat pumps) and industry (e.g. electric-arc steel furnaces); in countries like New Zealand, electrification plans are accelerating these broader efforts. As clean electricity’s role in the economy expands and that of fossil fuels declines, secure supplies of electricity become ever-more important. This is why the climate resilience of the electricity sector must be a top priority in governments’ policy agendas.

Changing climate patterns and more frequent extreme weather events can hit all types of power generation sources. Hydropower resources typically suffer in hot and dry conditions, but so do nuclear and fossil fuel power plants. These sources currently help ensure electricity systems have the flexibility and capacity to integrate rising shares of solar and wind power, whose output can vary depending on the weather and the time of day or year.

As governments and utilities pursue the decarbonization of electricity systems, mainly through growing levels of solar and wind, and carbon-free electricity options, they need to ensure they have sufficiently robust and diverse sources of flexibility to ensure secure supplies, including in the event of extreme weather events. This means that the possible decommissioning of existing power generation assets requires careful assessments that take into account the importance of climate resilience.

Ensuring electricity security requires long-term planning and stronger policy action and investment
The IEA is committed to helping governments make well-informed decisions as they seek to build a clean and secure energy future. With this in mind, here are seven areas for action for ensuring electricity systems are as resilient as possible to climate risks:

1. Invest in electricity grids to make them more resilient to extreme weather. Spending today is far below the levels needed to double investment in cleaner, more electrified energy systems, particularly in emerging and developing economies. Economic recovery plans from the COVID-19 crisis offer clear opportunities for economies that have the resources to invest in enhancing grid infrastructure, but much greater international efforts are required to mobilize and channel the necessary spending in emerging and developing economies.

2. Improve the efficiency of cooling equipment. Cost-effective technology already exists in most markets to double or triple the efficiency of cooling equipment. Investing in higher efficiency could halve future energy demand and reduce investment and operating costs by $3 trillion between now and 2050. In advance of COP26, the Super-Efficient Equipment and Appliance Deployment (SEAD) initiative is encouraging countries to sign up to double the energy efficiency of equipment sold in their countries by 2030.

3. Enable the growth of flexible low-carbon power sources to support more solar and wind. These electricity generation sources include hydropower and nuclear, for countries who see a role for one or both of them in their energy transitions. Guaranteeing hydropower resilience in a warming climate will require sophisticated methods and tools – such as the ones implemented in Brazil – to calculate the necessary level of reserves and optimize management of reservoirs and hydropower output even in exceptional conditions. Batteries and other forms of storage, combined with solar or wind, can also provide important amounts of flexibility by storing power and releasing it when needed.

4. Increase other sources of electricity system flexibility. Demand-response and digital technologies can play an important role. The IEA estimates that only a small fraction of the huge potential for demand response in the buildings sector is actually tapped at the moment. New policies that combine digitalization with financial and behavioural incentives could unlock more flexibility. Regional integration of electricity systems across national borders can also increase access to flexible resources.

5. Expedite the development and deployment of new technologies for managing extreme weather threats. The capabilities of electricity utilities in forecasting and situation awareness should be enhanced with the support of the latest information and communication technologies.

6. Make climate resilience a central part of policy-making and system planning. The interconnected nature of recent extreme weather events reminds us that we need to account for many contingencies when planning resilient power systems. Climate resilience should be integral to policy-making by governments and power system planning by utilities and relevant industries, and debates over Canadian climate policy underscore how grid implications must be considered. According to the recent IEA report on climate resilience, only nine out of 38 IEA member and association countries include concrete actions on climate adaptation and resilience for every segment of electricity systems.

7. Strengthen international cooperation on electricity security. Electricity underpins vital services and basic needs, such as health systems, water supplies and other energy industries. Maintaining a secure electricity supply is thus of critical importance. The costs of doing nothing in the face of growing climate threats are becoming abundantly clear. The IEA is working with all countries in the IEA family, as well as others around the world, by providing unrivalled data, analysis and policy advice on electricity security issues. It is also bringing governments together at various levels to share experiences and best practices, and identify how to hasten the shift to cleaner and more resilient energy systems.


 

 

Related News


Federal net-zero electricity regulations will permit some natural gas power generation

Canada Clean Electricity Regulations allow flexible, technology-neutral pathways to a 2035 net-zero grid, permitting limited natural gas with carbon capture, strict emissions standards, and exemptions for emergencies and peak demand across provinces and territories.

 

Key Points

Federal draft rules for a 2035 net-zero grid, allowing limited gas with CCS under strict performance and compliance standards.

✅ Performance cap: 30 tCO2 per GWh annually for gas plants

✅ CCS must sequester 95% of emissions to comply

✅ Emergency and peak demand exemptions permitted

 

After facing pushback from Alberta and Saskatchewan, and amid looming power challenges nationwide, Canada's draft net-zero electricity regulations — released today — will permit some natural gas power generation. 

Environment Minister Steven Guilbeault released Ottawa's proposed Clean Electricity Regulations on Thursday.

Provinces and territories will have a minimum 75-day window to comment on the draft regulations. The final rules are intended to pave the way to a net-zero power grid in Canada, aligning with 2035 clean electricity goals established nationally. 

Calling the regulations "technology neutral," Guilbeault said the federal government believes there's enough flexibility to accommodate the different energy needs of Canada's diverse provinces and territories, including how Ontario is embracing clean power in its planning. 

"What we're talking about is not a fossil fuel-free grid by 2035; it's a net zero grid by 2035," Guilbeault said. 

"We understand there will be some fossil fuels remaining … but we're working to minimize those, and the fossil fuels that will be used in 2035 will have to comply with rigorous environmental and emission standards," he added. 

Some analysts argue that scrapping coal-fired electricity can be costly and ineffective, underscoring the trade-offs in transition planning.

While non-emitting sources of electricity — hydroelectricity, wind, solar and nuclear — should not have any issues complying with the regulations, natural gas plants will have to meet specific criteria.

Those operations, the government said, will need to emit the equivalent of 30 tonnes of carbon dioxide per gigawatt hour or less annually to help balance demand and emissions across the grid.

Federal officials said existing natural gas power plants could comply with that performance standard with the help of carbon capture and storage systems, which would be required to sequester 95 per cent of their emissions.

"In other words, it's achievable, and it is achievable by existing technology," said a government official speaking to reporters Thursday on background and not for attribution.

The regulations will also allow a certain level of natural gas power production without the need to capture emissions: the capture requirement will be waived during emergencies and peak periods when renewables cannot keep up with demand. 

Some newer plants might not have to comply with the rules until the 2040s, because the regulations apply to plants 20 years after they are commissioned, which dovetails with net-zero by 2050 commitments from electricity associations. 

The two-decade grace period does not apply to plants that open after the regulations are finalized, which is expected in 2025.

 

Related News


PG&E keeps nearly 60,000 Northern California customers in the dark to reduce wildfire risk

PG&E Public Safety Power Shutoff reduces wildfire risk during extreme winds, triggering de-energization across the North Bay and Sierra Foothills under red flag warnings, with safety inspections and staged restoration to improve grid resilience.

 

Key Points

A utility protocol to de-energize lines during extreme fire weather, reducing ignition risks and improving grid safety.

✅ Triggered by red flag warnings, humidity, wind, terrain

✅ Temporary de-energization of transmission and distribution lines

✅ Inspections precede phased restoration to minimize wildfire risk

 

PG&E purposefully shut off electricity to nearly 60,000 Northern California customers Sunday night, aiming to mitigate wildfire risks from power lines during extreme winds.

Pacific Gas and Electric planned to restore power to 70 percent of affected customers in the North Bay and Sierra Foothills late Monday night. The remainder were expected to have power back sometime Tuesday, as crews inspected lines for safety by helicopter, vehicle and on foot.

While it was the first time the company shut off power for public safety, PG&E announced its criteria and procedures for such an event in June, said spokesperson Paul Doherty. After wildfires devastated Northern California's wine country last October, he added, PG&E developed its community wildfire safety program division to make power grids and communities more resilient, and prepares for winter storm season through enhanced local response. 

Two sagging PG&E power lines caused one of those wildfires during heavy winds, killing four people and injuring a firefighter, the California Department of Forestry and Fire Protection determined earlier this month. Trees or tree branches hitting PG&E power lines started another four wildfires in October 2017. Altogether, the power company has been blamed for igniting 13 wildfires last year.

"We're adapting our electric system our operating practices to improve safety and reliability," Doherty said of the safety program. "That's really the bottom line for us."

Turning off power to so many customers was a "last resort given the extreme fire danger conditions these communities are experiencing," Pat Hogan, senior vice president of electric operations, said in a statement. Conditions that led the company to shut off power included the National Weather Service's red flag fire warnings, humidity levels, sustained winds, temperature, dry fuel and local terrain, Doherty said, amid possible rolling blackouts during grid strain.

The company de-energized more than 78 miles of transmission lines and more than 2,150 miles of distribution power lines Sunday night. Many schools in the area were closed Monday because of the planned power outage, highlighting unequal access to electricity across communities.

Late Saturday and early Sunday, PG&E warned 97,000 customers in 12 counties that the shutoff might go into effect. Through automated calls, texts and emails, the company encouraged customers to have drinking water, canned food, flashlights, prescriptions and baby supplies on hand.

Power was also turned off in Southern California on Monday.

San Diego Gas & Electric turned off service to about 360 customers near Cleveland National Forest, where multiple fires have scorched large swaths of land in recent years.

SDG&E has pre-emptively shut off power to customers in the past, most recently in December when 14,000 customers went without power.

Southern California Edison, the primary electric provider across Southern California — including Los Angeles — has a similar power shutoff program. As of Monday night, SCE had yet to turn off power in any of its service areas, a spokesperson told USA TODAY.

 

Related News


Jolting the brain's circuits with electricity is moving from radical to almost mainstream therapy

Brain Stimulation is transforming neuromodulation, from TMS and DBS to closed-loop devices, targeting neural circuits for addiction, depression, Parkinson's, epilepsy, and chronic pain, powered by advanced imaging, AI analytics, and the NIH BRAIN Initiative.

 

Key Points

Brain stimulation uses pulses to modulate neural circuits, easing symptoms in depression, Parkinson's, and epilepsy.

✅ Noninvasive TMS and invasive DBS modulate specific brain circuits

✅ Closed loop systems adapt stimulation via real time biomarker detection

✅ Emerging uses: addiction, depression, Parkinson's, epilepsy, chronic pain

 

In June 2015, biology professor Colleen Hanlon went to a conference on drug dependence. As she met other researchers and wandered around a glitzy Phoenix resort’s conference rooms to learn about the latest work on therapies for drug and alcohol use disorders, she realized that out of the 730 posters, there were only two on brain stimulation as a potential treatment for addiction — both from her own lab at Wake Forest School of Medicine.

Just four years later, she would lead 76 researchers on four continents in writing a consensus article about brain stimulation as an innovative tool for addiction. And in 2020, the Food and Drug Administration approved a transcranial magnetic stimulation device to help patients quit smoking, a milestone for substance use disorders.

Brain stimulation is booming. Hanlon can attend entire conferences devoted to the study of what electrical currents do—including how targeted stimulation can improve short-term memory in older adults—to the intricate networks of highways and backroads that make up the brain’s circuitry. This expanding field of research is slowly revealing truths of the brain: how it works, how it malfunctions, and how electrical impulses, precisely targeted and controlled, might be used to treat psychiatric and neurological disorders.

In the last half-dozen years, researchers have launched investigations into how different forms of neuromodulation affect addiction, depression, loss-of-control eating, tremor, chronic pain, obsessive compulsive disorder, Parkinson’s disease, epilepsy, and more. Early studies have shown subtle electrical jolts to certain brain regions could disrupt circuit abnormalities — the miscommunications — that are thought to underlie many brain diseases, and help ease symptoms that persist despite conventional treatments.

The National Institutes of Health’s massive BRAIN Initiative put circuits front and center, distributing $2.4 billion to researchers since 2013 to devise and use new tools to observe interactions between brain cells and circuits. That, in turn, has kindled interest from the private sector. Among the advances that have enhanced our understanding of how distant parts of the brain talk with one another are new imaging technology and the use of machine learning, much as utilities use AI to adapt to shifting electricity demand, to interpret complex brain signals and analyze what happens when circuits go haywire.

Still, the field is in its infancy, and even therapies that have been approved for use in patients with, for example, Parkinson’s disease or epilepsy, help only a minority of patients; in a world where electricity drives pandemic readiness, expectations can outpace evidence. “If it was the Bible, it would be the first chapter of Genesis,” said Michael Okun, executive director of the Norman Fixel Institute for Neurological Diseases at University of Florida Health.

As brain stimulation evolves, researchers face daunting hurdles, and not just scientific ones. How will brain stimulation become accessible to all the patients who need it, given how expensive and invasive some treatments are? Proving to the FDA that brain stimulation works, and does so safely, is complicated and expensive. Even with a swell of scientific momentum and an influx of funding, the agency has so far cleared brain stimulation for only a handful of limited conditions. Persuading insurers to cover the treatments is another challenge altogether. And outside the lab, researchers are debating nascent issues, such as the ethics of mind control, the privacy of a person’s brain data—concerns that echo efforts to develop algorithms to prevent blackouts during rising ransomware threats—and how to best involve patients in the study of the human brain’s far-flung regions.

Neurologist Martha Morrell is optimistic about the future of brain stimulation. She remembers the shocked reactions of her colleagues in 2004 when she left full-time teaching at Stanford (she still has a faculty appointment as a clinical professor of neurology) to direct clinical trials at NeuroPace, then a young company making neurostimulator systems to potentially treat epilepsy patients.

Related: Once a last resort, this pain therapy is getting a new life amid the opioid crisis
“When I started working on this, everybody thought I was insane,” said Morrell. Nearly 20 years in, she sees a parallel between the story of jolting the brain’s circuitry and that of early implantable cardiac devices, such as pacemakers and defibrillators, which initially “were used as a last option, where all other medications have failed.” Now, “the field of cardiology is very comfortable incorporating electrical therapy, device therapy, into routine care. And I think that’s really where we’re going with neurology as well.”


Reaching a ‘slope of enlightenment’
Parkinson’s is, in some ways, an elder in the world of modern brain stimulation, and it shows the potential as well as the limitations of the technology. Surgeons have been implanting electrodes deep in the brains of Parkinson’s patients since the late 1990s, and in people with more advanced disease since the early 2000s.

In that time, it’s gone through the “hype cycle,” said Okun, the national medical adviser to the Parkinson’s Foundation since 2006. Feverish excitement and overinflated expectations have given way to reality, bringing scientists to a “slope of enlightenment,” he said. They have found deep brain stimulation to be very helpful for some patients with Parkinson’s, rendering them almost symptom-free by calming the shaking and tremors that medications couldn’t. But it doesn’t stop the progression of the disease, or resolve some of the problems patients with advanced Parkinson’s have walking, talking, and thinking.

In 2015, the same year Hanlon found only her lab’s research on brain stimulation at the addiction conference, Kevin O’Neill watched one finger on his left hand start doing something “funky.” One finger twitched, then two, then his left arm started tingling and a feeling appeared in his right leg, like it was about to shake but wouldn’t — a tremor.

“I was assuming it was anxiety,” O’Neill, 62, told STAT. He had struggled with anxiety before, and he had endured a stressful year: a separation, selling his home, starting a new job at a law firm in California’s Bay Area. But a year after his symptoms first began, O’Neill was diagnosed with Parkinson’s.


Related: Psychiatric shock therapy, long controversial, may face fresh restrictions
Doctors prescribed him pills that promote the release of dopamine, to offset the death of brain cells that produce this messenger molecule in circuits that control movement. But he took them infrequently because he worried about insomnia as a side effect. Walking became difficult — “I had to kind of think my left leg into moving” — and the labor lawyer found it hard to give presentations and travel to clients’ offices.

A former actor with an outgoing personality, he developed social anxiety and didn’t tell his bosses about his diagnosis for three years, and wouldn’t have, if not for two workdays in summer 2018 when his tremors were severe and obvious.

O’Neill’s tremors are all but gone since he began deep brain stimulation last May, though his left arm shakes when he feels tense.

It was during that period that he learned about deep brain stimulation, at a support group for Parkinson’s patients. “I thought, ‘I will never let anybody fuss with my brain. I’m not going to be a candidate for that,’” he recalled. “It felt like mad scientist science fiction. Like, are you kidding me?”

But over time, the idea became less radical, as O’Neill spoke to DBS patients and doctors and did his own research, and as his symptoms worsened. He decided to go for it. Last May, doctors at the University of California, San Francisco surgically placed three metal leads into his brain, connected by thin cords to two implants in his chest, just near the clavicles. A month later, he went into the lab and researchers turned the device on.

“That was a revelation that day,” he said. “You immediately — literally, immediately — feel the efficacy of these things. … You go from fully symptomatic to non-symptomatic in seconds.”

When his nephew pulled up to the curb to pick him up, O’Neill started dancing, and his nephew teared up. The following day, O’Neill couldn’t wait to get out of bed and go out, even if it was just to pick up his car from the repair shop.

In the year since, O’Neill’s walking has gone from “awkward and painful” to much improved, and his tremors are all but gone. When he is extra frazzled, like while renovating and moving into his new house overlooking the hills of Marin County, he feels tense and his left arm shakes and he worries the DBS is “failing,” but generally he returns to a comfortable, tremor-free baseline.

O’Neill worried about the effects of DBS wearing off but, for now, he can think “in terms of decades, instead of years or months,” he recalled his neurologist telling him. “The fact that I can put away that worry was the big thing.”

He’s just one patient, though. The brain has regions that are mostly uniform across all people. The functions of those regions also tend to be the same. But researchers suspect that how brain regions interact with one another — who mingles with whom, and what conversation they have — and how those mixes and matches cause complex diseases varies from person to person. So brain stimulation looks different for each patient.

Related: New study revives a Mozart sonata as a potential epilepsy therapy
Each case of Parkinson’s manifests slightly differently, and that’s a bit of knowledge that applies to many other diseases, said Okun, who organized the nine-year-old Deep Brain Stimulation Think Tank, where leading researchers convene, review papers, and publish reports on the field’s progress each year.

“I think we’re all collectively coming to the realization that these diseases are not one-size-fits-all,” he said. “We have to really begin to rethink the entire infrastructure, the schema, the framework we start with.”

Brain stimulation is also used frequently to treat people with common forms of epilepsy, and has reduced the number of seizures or improved other symptoms in many patients. Researchers have also been able to collect high-quality data about what happens in the brain during a seizure — including identifying differences between epilepsy types. Still, only about 15% of patients are symptom-free after treatment, according to Robert Gross, a neurosurgery professor at Emory University in Atlanta.

“And that’s a critical difference for people with epilepsy. Because people who are symptom-free can drive,” which means they can get to a job in a place like Georgia, where there is little public transit, he said. So taking neuromodulation “from good to great,” is imperative, Gross said.


Renaissance for an ancient idea
Recent advances are bringing about what Gross sees as “almost a renaissance period” for brain stimulation, though the ideas that undergird the technology are millennia old. Neuromodulation goes back to at least ancient Egypt and Greece, when electrical shocks from a ray, called the “torpedo fish,” were recommended as a treatment for headache and gout. Over centuries, the fish zaps led to doctors burning holes into the brains of patients. Those “lesions” worked, somehow, but nobody could explain why they alleviated some patients’ symptoms, Okun said.

Perhaps the clearest predecessor to today’s technology is electroconvulsive therapy (ECT), which in a rudimentary and dangerous way began being used on patients with depression roughly 100 years ago, said Nolan Williams, director of the Brain Stimulation Lab at Stanford University.

Related: A new index measures the extent and depth of addiction stigma
More modern forms of brain stimulation came about in the United States in the mid-20th century. A common, noninvasive approach is transcranial magnetic stimulation, which involves placing an electromagnetic coil on the scalp to transmit a current into the outermost layer of the brain. Vagus nerve stimulation (VNS), used to treat epilepsy, zaps a nerve that contributes to some seizures.

The most invasive option, deep brain stimulation, involves implanting in the skull a device attached to electrodes embedded in deep brain regions, such as the amygdala, that can’t be reached with other stimulation devices. In 1997, the FDA gave its first green light to deep brain stimulation as a treatment for tremor, and then for Parkinson’s in 2002 and the movement disorder dystonia in 2003.

Even as these treatments were cleared for patients, though, what was happening in the brain remained elusive. But advanced imaging tools now let researchers peer into the brain and map out networks — a recent breakthrough that researchers say has propelled the field of brain stimulation forward as much as increased funding has, paralleling broader efforts to digitize analog electrical systems across industry. Imaging of both human brains and animal models has helped researchers identify the neuroanatomy of diseases, target brain regions with more specificity, and watch what was happening after electrical stimulation.

Another key step has been the shift from open-loop stimulation — a constant stream of electricity — to closed-loop stimulation that delivers targeted, brief jolts in response to a symptom trigger. To make use of the futuristic technology, labs need people to develop artificial intelligence tools, informed by advances in machine learning for the energy transition, to interpret large data sets a brain implant is generating, and to tailor devices based on that information.
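A minimal sketch of the closed-loop idea is shown below. It is entirely schematic: the biomarker signal, trigger threshold, and pulse length are invented placeholders, not any device's actual detection algorithm or stimulation parameters.

```python
# Schematic contrast between open-loop and closed-loop stimulation.
# Open loop: stimulate continuously. Closed loop: monitor a biomarker and
# deliver a brief burst only when it crosses a trigger threshold.
import random

THRESHOLD = 0.8      # hypothetical biomarker trigger level
PULSE_SAMPLES = 3    # hypothetical length of a brief stimulation burst

def read_biomarker() -> float:
    """Stand-in for a sensed neural feature (e.g. band power); random here."""
    return random.random()

def closed_loop(n_samples: int) -> int:
    """Return how many samples received stimulation under closed-loop control."""
    stimulated = 0
    pulse_remaining = 0
    for _ in range(n_samples):
        if pulse_remaining == 0 and read_biomarker() > THRESHOLD:
            pulse_remaining = PULSE_SAMPLES   # symptom trigger detected
        if pulse_remaining > 0:
            stimulated += 1                   # deliver a targeted, brief jolt
            pulse_remaining -= 1
    return stimulated

n = 1000
print(f"Open loop:   {n}/{n} samples stimulated")
print(f"Closed loop: {closed_loop(n)}/{n} samples stimulated")
```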

“We’ve needed to learn how to be data scientists,” Morrell said.

Affinity groups, like the NIH-funded Open Mind Consortium, have formed to fill that gap. Philip Starr, a neurosurgeon and developer of implantable brain devices at the University of California at San Francisco Health system, leads the effort to teach physicians how to program closed-loop devices, and works to create ethical standards for their use. “There’s been extraordinary innovation after 20 years of no innovation,” he said.

The BRAIN Initiative has been critical, several researchers told STAT. “It’s been a godsend to us,” Gross said. The NIH’s Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative was launched in 2013 during the Obama administration with a $50 million budget. BRAIN now spends over $500 million per year. Since its creation, BRAIN has given over 1,100 awards, according to NIH data. Part of the initiative’s purpose is to pair up researchers with medical technology companies that provide human-grade stimulation devices to the investigators. Nearly three dozen projects have been funded through the investigator-devicemaker partnership program and through one focused on new implantable devices for first-in-human use, according to Nick Langhals, who leads work on neurological disorders at the initiative.

The more BRAIN invests, the more research is spawned. “We learn more about what circuits are involved … which then feeds back into new and more innovative projects,” he said.

Many BRAIN projects are still in early stages, finishing enrollment or small feasibility studies, Langhals said. Over the next couple of years, scientists will begin to see some of the fruits of their labor, which could lead to larger clinical trials, or to companies developing more refined brain stimulation implants, Langhals said.

Money from the National Institute of Mental Health, as well as the NIH’s Helping to End Addiction Long-term (HEAL) initiative, has similarly sweetened the appeal of brain stimulation, both for researchers and industry. “A critical mass” of companies interested in neuromodulation technology has mushroomed where, for two decades, just a handful of companies stood, Starr said.

More and more, pharmaceutical and digital health companies are looking at brain stimulation devices “as possible products for their future,” said Linda Carpenter, director of the Butler Hospital TMS Clinic and Neuromodulation Research Facility.


‘Psychiatry 3.0’
The experience with using brain stimulation to stop tremors and seizures inspired psychiatrists to begin exploring its use as a potentially powerful therapy for healing, or even getting ahead of, mental illness.

In 2008, the FDA approved TMS for patients with major depression who had tried, and not gotten relief from, drug therapy. “That kind of opened the door for all of us,” said Hanlon, a professor and researcher at the Center for Research on Substance Use and Addiction at Wake Forest School of Medicine. The last decade saw a surge of research into how TMS could be used to reset malfunctioning brain circuits involved in anxiety, depression, obsessive-compulsive disorder, and other conditions.

“We’re certainly entering into what a lot of people are calling psychiatry 3.0,” Stanford’s Williams said. “Whereas the first iteration was Freud and all that business, the second one was the psychopharmacology boom, and this third one is this bit around circuits and stimulation.”

Drugs alleviate some patients’ symptoms while simultaneously failing to help many others, but psychopharmacology clearly showed “there’s definitely a biology to this problem,” Williams said — a biology that in some cases may be more amenable to brain stimulation.

Related: Largest psilocybin trial finds the psychedelic is effective in treating serious depression
The exact mechanics of what happens between cells when brain circuits … well, short-circuit, is unclear. Researchers are getting closer to finding biomarkers that warn of an incoming depressive episode, or wave of anxiety, or loss of impulse control. Those brain signatures could be different for every patient. If researchers can find molecular biomarkers for psychiatric disorders — and find ways to preempt those symptoms by shocking particular brain regions — that would reshape the field, Williams said.

Not only would disease-specific markers help clinicians diagnose people, but they could help chip away at the stigma that paints mental illness as a personal or moral failing instead of a disease. That’s what happened for epilepsy in the 1960s, when scientific findings nudged the general public toward a deeper understanding of why seizures happen, and it’s “the same trajectory” Williams said he sees for depression.

His research at the Stanford lab also includes work on suicide, and obsessive-compulsive disorder, which the FDA said in 2018 could be treated using noninvasive TMS. Williams considers brain stimulation, with its instantaneity, to be a potential breakthrough for urgent psychiatric situations. Doctors know what to do when a patient is rushed into the emergency room with a heart attack or a stroke, but there is no immediate treatment for psychiatric emergencies, he said. Williams wonders: What if, in the future, a suicidal patient could receive TMS in the emergency room and be quickly pulled out of their depressive mental spiral?

Researchers are also actively investigating the brain biology of addiction. In August 2020, the FDA approved TMS for smoking cessation, the first such OK for a substance use disorder, which is “really exciting,” Hanlon said. Although there is some nuance when comparing substance use disorders, a primal mechanism generally defines addiction: the eternal competition between “top-down” executive control functions and “bottom-up” cravings. It’s the same process that is at work when one is deciding whether to eat another cookie or abstain — just exacerbated.

Hanlon is trying to figure out if the stop and go circuits are in the same place for all people, and whether neuromodulation should be used to strengthen top-down control or weaken bottom-up cravings. Just as brain stimulation can be used to disrupt cellular misfiring, it could also be a tool for reinforcing helpful brain functions, or for giving the addicted brain what it wants in order to curb substance use.

Evidence suggests many people with schizophrenia smoke cigarettes (a leading cause of early death for this population) because nicotine reduces the “hyperconnectivity” that characterizes the brains of people with the disease, said Heather Ward, a research fellow at Boston’s Beth Israel Deaconess Medical Center. She suspects TMS could mimic that effect, and therefore reduce cravings and some symptoms of the disease, and she hopes to prove that in a pilot study that is now enrolling patients.

If the scientific evidence proves out, clinicians say brain stimulation could be used alongside behavioral therapy and drug-based therapy to treat substance use disorders. “In the end, we’re going to need all three to help people stay sober,” Hanlon said. “We’re adding another tool to the physician’s toolbox.”

Decoding the mysteries of pain
A favorable outcome to the ongoing research, one that would fling the doors to brain stimulation wide open for patients with myriad disorders, is far from guaranteed. Chronic pain researchers know that firsthand.

Chronic pain, among the most mysterious and hard-to-study medical phenomena, was the first use for which the FDA approved deep brain stimulation, said Prasad Shirvalkar, an assistant professor of anesthesiology at UCSF. But when studies didn’t pan out after a year, the FDA retracted its approval.

Shirvalkar is working with Starr and neurosurgeon Edward Chang on a profoundly complex problem: “decoding pain in the brain states, which has never been done,” as Starr told STAT.

Part of the difficulty of studying pain is that there is no objective way to measure it. Much of what we know about pain is from rudimentary surveys that ask patients to rate how much they’re hurting, on a scale from zero to 10.

Using implantable brain stimulation devices, the researchers ask patients for a 0-to-10 rating of their pain while recording up-and-down cycles of activity in the brain. They then use machine learning to compare the two streams of information and see what brain activity correlates with a patient’s subjective pain experience. Implantable devices let researchers collect data over weeks and months, instead of basing findings on small snippets of information, allowing for a much richer analysis.
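A toy version of that analysis might look like the sketch below: synthetic "neural features" are regressed against 0-to-10 pain ratings with cross-validation. Everything here is fabricated placeholder data and a generic model choice, not the UCSF team's actual features or methods.

```python
# Toy sketch: correlate recorded brain activity with 0-10 pain ratings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_features = 200, 16                  # e.g. band-power features per recording window

X = rng.normal(size=(n_samples, n_features))     # fake neural features
true_weights = rng.normal(size=n_features)       # hidden relationship for the toy data
pain = X @ true_weights + rng.normal(scale=2.0, size=n_samples)
pain = np.clip((pain - pain.min()) / (pain.max() - pain.min()) * 10, 0, 10)  # 0-10 scale

model = Ridge(alpha=1.0)                         # simple stand-in decoder
scores = cross_val_score(model, X, pain, cv=5, scoring="r2")
print(f"Cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```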

 

Related News


Setbacks at Hinkley Point C Challenge UK's Energy Blueprint

Hinkley Point C delays highlight EDF cost overruns, energy security risks, and wholesale power prices, complicating UK net zero plans, Sizewell C financing, and small modular reactor adoption across the grid.

 

Key Points

Delays at EDF's 3.2GW Hinkley Point C push operations to 2031, lift costs to £46bn, and risk pricier UK electricity.

✅ First unit may slip to 2031; second unit date unclear.

✅ LSEG sees 6% wholesale price impact in 2029-2032.

✅ Sizewell C replicates design; SMR contracts expected soon.

 

Vincent de Rivaz, former CEO of EDF, confidently announced in 2016 the commencement of the UK's first nuclear power station since the 1990s, Hinkley Point C. However, despite milestones such as the reactor roof installation, recent developments have belied this optimism. The French state-owned utility EDF recently disclosed further delays and cost overruns for the 3.2 gigawatt plant in Somerset.

These complications at Hinkley Point C, which is expected to power 6 million homes, have sparked new concerns about the UK's energy strategy and its ambition to decarbonize the grid by 2050.

The UK government's plan to achieve net zero by 2050 includes a significant role for nuclear energy, reflecting analyses that net-zero may not be possible without nuclear and aiming to increase capacity from the current 5.88GW to 24GW by mid-century.

Simon Virley, head of energy at KPMG in the UK, stressed the importance of nuclear energy in transitioning to a net zero power system, echoing industry calls for multiple new stations to meet climate goals. He pointed out that failing to build the necessary capacity could lead to increased reliance on gas.

Hinkley Point C is envisioned as the pioneer in a new wave of nuclear plants intended to augment and replace Britain's existing nuclear fleet, jointly managed by EDF and Centrica. Nuclear power contributed about 14 percent of the UK's electricity in 2022, even as Europe is losing nuclear power across the continent. However, with the planned closure of four out of five plants by March 2028 and rising electricity demand, there is concern about potential power price increases.

Rob Gross, director of the UK Energy Research Centre, emphasized the link between energy security and affordability, highlighting the risk of high electricity prices if reliance on expensive gas increases.

The first 1.6GW reactor at Hinkley Point C, initially set for operation in 2027, may now face delays until 2031, even after first reactor installation milestones were reported. The in-service date for the second unit remains uncertain, with project costs possibly reaching £46bn.

LSEG analysts predict that these delays could increase wholesale power prices by up to 6 percent between 2029 and 2032, assuming the second unit becomes operational in 2033.

Martin Young, an analyst at Investec, warned of the price implications of removing a large power station from the supply side.

In response to these delays, EDF is exploring the extension of its four oldest plants. Jerry Haller, EDF’s former decommissioning director, had previously expressed skepticism about extending the life of the advanced gas-cooled reactor fleet, but EDF has since indicated more positive inspection results. The company had already decided to keep the Heysham 1 and Hartlepool plants operational until at least 2026.

Nevertheless, the issues at Hinkley Point C raise doubts about the UK's ability to meet its 2050 nuclear build target of 24GW.

Previous delays at Hinkley were attributed to the COVID-19 pandemic, but EDF now cites engineering problems, similar to those experienced at other European power stations using the same technology.

The next major UK nuclear project, Sizewell C in Suffolk, will replicate Hinkley Point C's design, aligning with the UK's green industrial revolution agenda. EDF and the UK government are currently seeking external investment for the £20bn project.

Compared with Hinkley Point C, Sizewell C's financing model involves exposing billpayers to some risk of cost overruns. This, coupled with EDF's track record, could affect investor confidence.

Additionally, the UK government is supporting the development of small modular reactors, with contracts expected to be awarded later this year, even as China's nuclear program continues on a steady track.

 
