Re-Volted by policy

By Financial Post


When GM CEO Rick Wagoner unveiled the latest version of the plug-in Chevrolet Volt during the company’s 100th anniversary celebrations, he whistled the happy tune that current financial turmoil wouldn’t affect U.S. government loan guarantees to develop such “alternative” vehicles.

But Washington is suddenly looking strapped for cash, so the prospects for the $25-billion of cheap loans to the automotive industry that were approved last year as part of an energy bill are looking distinctly iffy. That is far from the Volt’s only problem.

“The Volt symbolizes GM’s commitment to the future,” declared Mr. Wagoner, “the kind of technological innovation that our industry needs to respond to today and tomorrow’s energy and environmental challenges.”

But what it really symbolizes is a desperate response to the power of radical environmentalism, the threat of draconian policies and the dubious desire for energy independence.

Ironically, Bob Lutz, GM’s vice-chairman and the man brought in to revitalize its car line-up, has claimed that the theory of man-made climate change is a “crock of shit.” However, he is a convert to energy independence, which is one hell of a burden to put on any company’s balance sheet.

Mr. Lutz demonstrated his bravery by appearing on The Colbert Report, where he admitted that the Volt wouldn’t lay rubber going from 0 to 60, but might get its owner laid with “no-makeup environmental types.”

Private pursuit of political or social objectives always tends to be a risky business, but the auto industry is being asked to help save the planet by performing just-in-time technological miracles.

The Volt, which GM announced two years ago, is being peddled as the great white hope of less gasoline-intensive driving (although perhaps such terminology is not entirely appropriate, since a recent article in The Atlantic referred to the Volt — tongue in cheek — as “the Barack Obama of automobiles — everyone’s hope for change”).

According to GM, the car will have a top speed of 160 km/h and a range of 64 km without using any gasoline. Then a gas engine will charge up — not to run the car, but to run its generator and charge its battery, which will continue to supply the car’s motive power.

There are a couple of gargantuan hurdles facing the Volt. One is that the technology to achieve the above-mentioned marvels doesn’t actually exist. Success still depends on advances in lithium-ion battery technology that cannot be guaranteed. In any sane world, the battery technology breakthrough would be made before you started building a car around it, but GM is using the ready, fire, aim approach.

The other problem is that even if the thousands of engineers toiling at GM and its component suppliers actually pull off this marvel, its retail cost is estimated at between $35,000 and $50,000.

Within GM, apparently, the Volt project is being compared to the Apollo moonshot, but presumably they don’t mean crushingly expensive and commercially pointless. GM cannot afford a no-expense-spared approach because it happens to be a profit-making corporation.

But invoking Apollo is perhaps useful in trying to pry loose those taxpayer funds.

Fighting man-made climate change is the new Earth-bound Apollo program, a rocket aimed at the world’s economy. But if governments want to dictate what people drive, then presumably they should cough up some of the cash.

After all, as David Paterson, GM Canada’s VP of corporate and environmental affairs, told the Post’s Nicholas Van Praet, “We’re literally reinventing automobiles by regulation.”

The North American industry was already reeling from its failure to respond to superior overseas competitors and from horrendous legacy costs, which have now largely been addressed, but it is hardly in sound shape.

Protecting the environment for future generations appears a legitimate if somewhat megalomaniacal objective, but this worthy sentiment has been hijacked by the UN’s sustainable development agenda, which amounts to the greatest attack on markets since Das Kapital.

As for the Volt’s prospects, we might remember that in 1999, DaimlerChrysler, as it then was, unveiled a sexy-looking soon-to-be-commercial vehicle, powered by a government-subsidized Ballard fuel cell, that reportedly went at 90 miles per hour for 280 miles. The problem was that the fuel cell (not the car, just the fuel cell) reportedly cost $35,000.

DaimlerChrysler’s then head, Jurgen Schrempp, nevertheless claimed that such fuel-cell-powered cars could be on the road by 2004.

You will notice that they are not.

Perhaps the worst of all outcomes would be if GM did pull off its Volt miracle, because that would encourage activists and governments to believe that all they have to do to control the economy in the public — not to mention the planetary — interest is to threaten and regulate. But the odds seem stacked against any such success.

Like its previous electric car, the EV1, which was a dud, the Volt faces an uphill battle, and electric motors have trouble with hills. Meanwhile we can be absolutely sure about one critical aspect of the Volt, which is perhaps its most bizarre feature: it won’t have the slightest impact on either the global environment or the geopolitics of oil.

Related News

Japan to host one of world's largest biomass power plants

eRex Biomass Power Plant will deliver 300 MW in Japan, offering stable baseload renewable energy, coal-cost parity, and feed-in tariff independence through economies of scale, efficient fuel procurement, and utility-scale operations supporting RE100 demand.

 

Key Points

A 300 MW Japan biomass project targeting coal-cost parity and FIT-free, stable baseload renewable power.

✅ 300 MW capacity; enough for about 700,000 households

✅ Aims to skip feed-in tariff via economies of scale

✅ Targets coal-cost parity with stable, dispatchable output

 

Power supplier eRex will build its largest biomass power plant to date in Japan, hoping the facility's scale will provide healthy margins and a means of skipping the government's feed-in tariff program, a strategy increasingly seen among renewable developers pursuing diverse energy sources.

The Tokyo-based electric company is in the process of selecting a location, most likely in eastern Japan. It aims to open the plant around 2024 or 2025, following a feasibility study. The facility will cost an estimated 90 billion yen ($812 million) and have an output of 300 megawatts -- enough to supply about 700,000 households. ERex may work with a regional utility or other partner.

The biggest biomass power plant operating in Japan currently has an output of 100 MW. With roughly triple that output, the new facility will rank among the world's largest, reflecting momentum toward 100% renewable energy globally that is shaping investment decisions.

Nearly all biomass power facilities in Japan sell their output through the government-mediated feed-in tariff program, which requires utilities to buy renewable energy at a fixed price. For large biomass plants that burn wood or agricultural waste, the rate is set at 21 yen per kilowatt-hour. But the program costs the Japanese public more than 2 trillion yen a year, and is said to hamper price competition.

ERex aims to forgo the feed-in tariff with its new plant by reaping economies of scale in operation and fuel procurement. The goal is to make the undertaking as economical as coal energy, which costs around 12 yen per kilowatt-hour, even as solar's rise in the U.S. underscores evolving benchmarks for competitive renewables.
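As a rough illustration of what coal-cost parity is worth here, the arithmetic below uses the article's figures of 21 yen/kWh (feed-in tariff) and 12 yen/kWh (coal); the 80% capacity factor is an assumed value for a baseload plant, not an eRex number.

```python
# Illustrative economics for the planned 300 MW biomass plant.
# The 21 and 12 yen/kWh rates come from the article; the 80% capacity
# factor is an assumption typical of baseload plants, not an eRex figure.
capacity_mw = 300
capacity_factor = 0.80
hours_per_year = 8760

annual_kwh = capacity_mw * 1000 * capacity_factor * hours_per_year
fit_rate, coal_rate = 21, 12  # yen per kWh

# Revenue gap between selling at the FIT rate and at coal parity
gap_billion_yen = annual_kwh * (fit_rate - coal_rate) / 1e9
print(f"Annual output: {annual_kwh / 1e9:.1f} billion kWh")
print(f"FIT-vs-coal revenue gap: {gap_billion_yen:.0f} billion yen/year")
```

On these assumptions the plant forgoes roughly 19 billion yen a year by pricing at coal parity instead of the tariff, which is the margin its economies of scale would have to recover.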

Much of the renewable energy available in Japan is solar power, which fluctuates widely according to weather conditions, though power prediction accuracy has improved at Japanese PV projects. Biomass plants, which use such materials as wood chips and palm kernel shells as fuel, offer a more stable alternative.

Demand for reliable sources of renewable energy is on the rise in the business world, as shown by the RE100 initiative, in which 100 of the world's biggest companies, such as Olympus, have announced their commitment to get 100% of their power from renewable sources. ERex's new facility may spur competition.

 

Related News


Southern California Edison Faces Lawsuits Over Role in California Wildfires

SCE Wildfire Lawsuits allege utility equipment and power lines sparked deadly Los Angeles blazes; investigations, inverse condemnation, and stricter utility regulations focus on liability, vegetation management, and wildfire safety amid Santa Ana winds.

 

Key Points

Residents sue SCE, alleging power lines ignited LA wildfires; seeking compensation under inverse condemnation.

✅ Videos cited show sparking lines near alleged ignition points.

✅ SCE denies wrongdoing; probes and inspections ongoing.

✅ Inverse condemnation may apply regardless of negligence.

 

In the aftermath of devastating wildfires in Los Angeles, residents have initiated legal action, similar to other mega-fire lawsuits underway in California, against Southern California Edison (SCE), alleging that the utility's equipment was responsible for sparking one of the most destructive fires. The fires have resulted in significant loss of life and property, prompting investigations into the causes and accountability of the involved parties.

The Fires and Their Impact

In early January 2025, Los Angeles experienced severe wildfires that ravaged neighborhoods, leading to the loss of at least 29 lives and the destruction of approximately 155 square kilometers of land. Areas such as Pacific Palisades and Altadena were among the hardest hit. The fires were exacerbated by arid conditions and strong Santa Ana winds, which contributed to their rapid spread and intensity.

Allegations Against Southern California Edison

Residents have filed lawsuits against SCE, asserting that the utility's equipment, particularly power lines, ignited the fires. Some plaintiffs have presented videos they claim show sparking power lines in the vicinity of the fire's origin. These legal actions seek to hold SCE accountable for the damages incurred, including property loss, personal injury, and emotional distress.

SCE's Response and Legal Context

Southern California Edison has denied any wrongdoing, stating that it has not detected any anomalies in its equipment that could have led to the fires. The utility has pledged to cooperate fully with investigations to determine the causes of the fires. California's legal framework, particularly the doctrine of "inverse condemnation," allows property owners to seek compensation from utilities for damages caused by public services, even without proof of negligence. This legal principle has been central in previous cases involving utility companies and wildfire damages, and similar allegations have arisen in other jurisdictions, such as an alleged faulty transformer case, highlighting shared risks.

Historical Context and Precedents

This situation is not unprecedented. In 2018, Pacific Gas and Electric (PG&E) faced similar allegations when its equipment was implicated in the Camp Fire, the deadliest wildfire in California's history. PG&E's equipment was found to have ignited the fire, and the company later pleaded guilty to criminal charges stemming from the blaze, leading to extensive litigation and financial repercussions for the company, while its bankruptcy plan won support from wildfire victims during restructuring. The case highlighted the significant risks utilities face regarding wildfire safety and the importance of maintaining infrastructure to prevent such disasters.

Implications for California's Utility Regulations

The current lawsuits against SCE underscore the ongoing challenges California faces in balancing utility operations with wildfire prevention, as regulators face calls for action amid rising electricity bills. The state has implemented stricter regulations and oversight, and lawmakers have moved to crack down on utility spending to mitigate wildfire risks associated with utility infrastructure. Utilities are now required to invest in enhanced safety measures, including equipment inspections, vegetation management, and the implementation of advanced technologies to detect and prevent potential fire hazards. These regulatory changes aim to reduce the incidence of utility-related wildfires and protect communities from future disasters.

The legal actions against Southern California Edison reflect the complex interplay between utility operations, public safety, and environmental stewardship. As investigations continue, the outcomes of these lawsuits may influence future policies and practices concerning utility infrastructure and wildfire prevention in California. The state remains committed to enhancing safety measures to protect its residents and natural resources from the devastating effects of wildfires.

 

Related News


Are Norwegian energy firms ‘best in class’ for environmental management?

CO2 Tax for UK Offshore Energy Efficiency can accelerate adoption of aero-derivative gas turbines, flare gas recovery, and combined cycle power, reducing emissions on platforms like Equinor's Mariner and supporting net zero goals.

 

Key Points

A carbon price pushing operators to adopt efficient turbines, flare recovery, and combined cycle to cut emissions.

✅ Aero-derivative turbines beat industrial units on efficiency

✅ Flare gas recovery cuts routine flaring and fuel waste

✅ Combined cycle raises efficiency and lowers emissions

 

By Tom Baxter

The recent Energy Voice article from the Equinor chairman concerning the Mariner project heralding a ‘significant point of reference’ for growth highlighted the energy efficiency achievements associated with the platform.

I view energy efficiency as a key enabler to net zero; alongside this, the UK must start large-scale storage to meet system needs. Energy efficiency is a topic I have been involved with for many years.

As part of my energy efficiency work, I investigated Norwegian practices and compared them with the UK.

There were many differences; here are three:


1. Power for offshore installations is usually supplied by gas turbines burning fuel gas from the oil and gas processing plant; even as the UK's offshore wind supply accelerates, installations convert that fuel to electricity or couple the gas turbine directly to a machine such as a gas compressor.

There are two main generic types of gas turbine – aero-derivative and industrial. As the name implies, aero-derivatives are aviation engines adapted for use in a static environment. Aero-derivative turbines are designed to be energy efficient, as that is very important for the aviation industry.

Not so with industrial type gas turbines; they are typically 5-10% less efficient than a comparable aero-derivative.

Industrial machines do have some advantages – they can be cheaper, require less frequent maintenance, they have a wide fuel composition tolerance and they can be procured within a shorter time frame.

My comparison showed that aero-derivative machines prevailed in Norway because of the energy efficiency advantages – not the case in the UK where there are many more offshore industrial gas turbines.

Tom Baxter is visiting professor of chemical engineering at Strathclyde University and a retired technical director at Genesis Oil and Gas Consultants


2. Offshore gas flaring is probably the most obvious source of inefficient use of energy with consequent greenhouse gas emissions.

On UK installations gas is always flared due to the design of the oil and gas processing plant.

Though not a large quantity of gas, a continuous flow of gas is routinely sent to flare from some of the process plant.

In addition the flare requires pilot flames to be maintained burning at all times and, while Europe explores electricity storage in gas pipes, a purge of hydrocarbon gas is introduced into the pipes to prevent unsafe air ingress that could lead to an explosive mixture.

On many Norwegian installations the flare system is designed differently. Flare gas recovery systems are deployed, which results in no flaring during continuous operations.

Flare gas recovery systems improve energy efficiency but they are costly and add additional operational complexity.


3. Returning to gas turbines, all UK offshore gas turbines are open cycle – gas is burned to produce energy and the very hot exhaust gases are vented to the atmosphere. Around 60-70% of the energy is lost in the exhaust gases.

Some UK fields use this hot gas as a heat source for some of the oil and gas treatment operations hence improving energy efficiency.

There is another option for gas turbines that will significantly improve energy efficiency: combined cycle. In parallel, plans for nuclear power under the green industrial revolution aim to decarbonise supply.

Here the exhaust gases from an open cycle machine are routed to a heat recovery boiler, which uses the exhaust heat to raise steam; the steam then drives a second turbine to generate supplementary electricity. It is the system used in most UK power stations, even as UK low-carbon generation stalled in 2019 across the grid.

Open cycle gas turbines are around 30-40% efficient whereas combined cycle turbines are typically 50-60%. Clearly deploying a combined cycle will result in a huge greenhouse gas saving.
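To put those ranges in perspective, a back-of-envelope comparison (taking 35% and 55% as illustrative midpoints of the efficiency ranges above, not field data) shows how much fuel, and hence CO2, combined cycle avoids per unit of electricity:

```python
# Illustrative fuel-use comparison for the efficiency ranges quoted above.
# The 35% and 55% figures are midpoints of those ranges, chosen for
# illustration only; they are not measurements from any specific field.
def fuel_energy_per_mwh(efficiency):
    """MWh of fuel energy burned per MWh of electricity generated."""
    return 1.0 / efficiency

open_cycle = fuel_energy_per_mwh(0.35)      # ~2.86 MWh fuel per MWh elec
combined_cycle = fuel_energy_per_mwh(0.55)  # ~1.82 MWh fuel per MWh elec

# Fuel burn, and therefore CO2, fall proportionally with efficiency gains
saving = 1 - combined_cycle / open_cycle
print(f"Fuel (and hence CO2) per MWh falls by about {saving:.0%}")
```

On those assumed midpoints, combined cycle burns roughly a third less gas for the same electrical output, which is the "huge greenhouse gas saving" the comparison implies.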

I have worked on the development of many UK oil and gas fields and combined cycle has rarely been considered.

The reason is that, despite the clear energy saving, they are too costly and complex to justify deploying offshore.

However that is not the case in Norway where combined cycle is used on Oseberg, Snorre and Eldfisk.

What makes Norwegian energy efficiency practices different from the UK's? The answer is clear: the Norwegian CO2 tax.

A tax that makes CO2 a significant part of offshore operating costs.

The consequence being that deploying energy efficient technology is much easier to justify in Norway when compared to the UK.

Do we need a CO2 tax in the UK to meet net zero? I am convinced we do. I am in good company: BP, Shell, ExxonMobil and Total are supporting a carbon tax.

Not without justification there has been much criticism of Labour’s recent oil tax plans, alongside proposals for state-owned electricity generation that aim to reshape the power market.

To my mind Labour’s laudable aims to tackle the Climate Emergency would be much better served by supporting a CO2 tax that complements the UK's coal-free energy record by strengthening renewable investment.

 

Related News


Minnesota 2050 carbon-free electricity plan gets first hearing

Minnesota Carbon-Free Power by 2050 aims to shift utilities to renewable energy, wind and solar, boosting efficiency while managing grid reliability, emissions, and costs under a clean energy mandate and statewide climate policy.

 

Key Points

A statewide goal to deliver 100% carbon-free power by 2050, prioritizing renewables, efficiency, and grid reliability.

✅ Targets 100% carbon-free electricity statewide by 2050

✅ Prioritizes wind, solar, and efficiency before fossil fuels

✅ Faces utility cost, reliability, and legislative challenges

 

Gov. Tim Walz's plan for Minnesota to get 100 percent of its electricity from carbon-free sources by 2050, similar to California's 100% carbon-free mandate in scope, was criticized Tuesday at its first legislative hearing, with representatives from some of the state's smaller utilities saying they can't meet that goal.

Commerce Commissioner Steve Kelley told the House climate committee that the Democratic governor's plan is ambitious. But he said the state's generating system is "aging and at a critical juncture," with plants that produce 70 percent of the state's electricity coming up for potential retirement over the next two decades. He said it will ensure that utilities replace them with wind, solar and other innovative sources, and increased energy efficiency, before turning to fossil fuels.

"Utilities will simply need to demonstrate why clean energy would not work whenever they propose to replace or add new generating capacity," he said.

Walz's plan, announced last week, seeks to build on the success of a 2007 law that required Minnesota utilities to get at least 25 percent of their electricity from renewable sources by 2025. The state largely achieved that goal in 2017 thanks to the growth of wind and solar power, and the topic of climate change has only grown hotter, with some proposals like a fully renewable grid by 2030 pushing even faster timelines, hence the new goal for 2050.

But Joel Johnson, a lobbyist for the Minnkota Power Cooperative, testified that the governor's plan is "misguided and unrealistic" even with new technology to capture carbon dioxide emissions from power plants. Johnson added that even the big utilities that have set goals of going carbon-free by mid-century, such as Minneapolis-based Xcel Energy, acknowledge they don't yet know how they'll hit the targets they have set.

 

Minnkota serves northwestern Minnesota and eastern North Dakota.

Tim Sullivan, president and CEO of the Wright-Hennepin Cooperative Electric Association in the Twin Cities area, said the plan is a "bad idea" for the 1.7 million state electric consumers served by cooperatives. He said Minnesota is a "minuscule contributor" to total global carbon emissions, even as the EU plans to double electricity use by 2050 to meet electrification demands.

"The bill would have a devastating impact on electric consumers," Sullivan said. "It represents, in our view, nothing short of a first-order threat to the safety and reliability of Minnesota's grid."

Isaac Orr is a policy fellow at the Minnesota-based conservative think tank, the Center for the American Experiment, which released a report critical of the plan Tuesday. Orr said all Minnesota households would face higher energy costs and it would harm energy-intensive industries such as mining, manufacturing and health care, while doing little to reduce global warming.

"This does not pass a proper cost-benefit analysis," he testified.

Environmental groups, including Conservation Minnesota and the Sierra Club, supported the proposal while acknowledging the challenges, noting that cleaning up electricity is critical to climate pledges in many jurisdictions.

"Our governor has called climate change an existential crisis," said Kevin Lee, director of the climate and energy program at the Minnesota Center for Environmental Advocacy. "This problem is the defining challenge of our time, and it can feel overwhelming."

Rep. Jean Wagenius, the committee chairwoman and Minneapolis Democrat who's held several hearings on the threats that climate change poses, said she expected to table the bill for further consideration after taking more testimony in the evening and would not hold a vote Tuesday.

While the bill has support in the Democratic-controlled House, it's not scheduled for action in the Republican-led Senate. Rep. Pat Garofalo, a Farmington Republican, quipped that it "has a worse chance of becoming law than me being named the starting quarterback for the Minnesota Vikings."

 

Related News


Toronto Power Outages Persist for Hundreds After Spring Storm

Toronto Hydro Storm Outages continue after strong winds and heavy rain, with crews restoring power, clearing debris and downed lines. Safety alerts and real-time updates guide affected neighborhoods via website and social media.

 

Key Points

Toronto Hydro Storm Outages are weather-related power cuts; crews restore service safely and share public updates.

✅ Crews prioritize areas with severe damage and limited access

✅ Report downed power lines; keep a safe distance

✅ Check website and social media for restoration updates

 

In the aftermath of a powerful spring storm that swept through Toronto on Tuesday, approximately 400 customers remain without power as of Sunday. The storm, which brought strong winds and heavy rain that caused severe flooding in some areas, led to significant damage across the city, including downed trees and power lines. Toronto Hydro crews have been working tirelessly to restore service, similar to efforts by Sudbury Hydro crews in Northern Ontario, focusing on areas with the most severe damage. While many customers have had their power restored, the remaining outages are concentrated in neighborhoods where access is challenging due to debris and fallen infrastructure.

Toronto Hydro has assured residents that restoration efforts are ongoing and that they are prioritizing safety and efficiency, in step with recovery from damaging storms in Ontario across the province. The utility company has urged residents to report any downed power lines and to avoid approaching them, as they may still be live and dangerous, and notes that utilities sometimes rely on mutual aid deployments to speed restoration in large-scale events. Additionally, Toronto Hydro has been providing updates through their website and social media channels, keeping the public informed about the status of power restoration in affected areas.

The storm's impact has also led to disruptions in other services, and power outages in London disrupted morning routines for thousands earlier in the week. Some public transportation routes experienced delays due to debris on tracks, and several schools in the affected areas were temporarily closed. City officials are coordinating with various agencies to address these issues and ensure that services return to normal as quickly as possible, even as Quebec contends with widespread power outages after severe windstorms.

Residents are advised to stay updated on the situation through official channels and to exercise caution when traveling in storm-affected areas. Toronto Hydro continues to work diligently to restore power to all customers and appreciates the public's patience during this challenging time, a challenge echoed when Texas utilities struggled to restore power during Hurricane Harvey.

 

Related News


Jolting the brain's circuits with electricity is moving from radical to almost mainstream therapy

Brain Stimulation is transforming neuromodulation, from TMS and DBS to closed loop devices, targeting neural circuits for addiction, depression, Parkinsons, epilepsy, and chronic pain, powered by advanced imaging, AI analytics, and the NIH BRAIN Initiative.

 

Key Points

Brain stimulation uses pulses to modulate neural circuits, easing symptoms in depression, Parkinsons, and epilepsy.

✅ Noninvasive TMS and invasive DBS modulate specific brain circuits

✅ Closed loop systems adapt stimulation via real time biomarker detection

✅ Emerging uses: addiction, depression, Parkinsons, epilepsy, chronic pain

 

In June 2015, biology professor Colleen Hanlon went to a conference on drug dependence. As she met other researchers and wandered around a glitzy Phoenix resort’s conference rooms to learn about the latest work on therapies for drug and alcohol use disorders, she realized that out of the 730 posters, there were only two on brain stimulation as a potential treatment for addiction — both from her own lab at Wake Forest School of Medicine.

Just four years later, she would lead 76 researchers on four continents in writing a consensus article about brain stimulation as an innovative tool for addiction. And in 2020, the Food and Drug Administration approved a transcranial magnetic stimulation device to help patients quit smoking, a milestone for substance use disorders.

Brain stimulation is booming. Hanlon can attend entire conferences devoted to the study of what electrical currents do—including how targeted stimulation can improve short-term memory in older adults—to the intricate networks of highways and backroads that make up the brain’s circuitry. This expanding field of research is slowly revealing truths of the brain: how it works, how it malfunctions, and how electrical impulses, precisely targeted and controlled, might be used to treat psychiatric and neurological disorders.

In the last half-dozen years, researchers have launched investigations into how different forms of neuromodulation affect addiction, depression, loss-of-control eating, tremor, chronic pain, obsessive compulsive disorder, Parkinson’s disease, epilepsy, and more. Early studies have shown subtle electrical jolts to certain brain regions could disrupt circuit abnormalities — the miscommunications — that are thought to underlie many brain diseases, and help ease symptoms that persist despite conventional treatments.

The National Institutes of Health’s massive BRAIN Initiative put circuits front and center, distributing $2.4 billion to researchers since 2013 to devise and use new tools to observe interactions between brain cells and circuits. That, in turn, has kindled interest from the private sector. Among the advances that have enhanced our understanding of how distant parts of the brain talk with one another are new imaging technology and the use of machine learning to interpret complex brain signals and analyze what happens when circuits go haywire.

Still, the field is in its infancy. Even therapies that have been approved for use in patients with, for example, Parkinson’s disease or epilepsy help only a minority of patients, and expectations can outpace evidence. “If it was the Bible, it would be the first chapter of Genesis,” said Michael Okun, executive director of the Norman Fixel Institute for Neurological Diseases at University of Florida Health.

As brain stimulation evolves, researchers face daunting hurdles, and not just scientific ones. How will brain stimulation become accessible to all the patients who need it, given how expensive and invasive some treatments are? Proving to the FDA that brain stimulation works, and does so safely, is complicated and expensive. Even with a swell of scientific momentum and an influx of funding, the agency has so far cleared brain stimulation for only a handful of limited conditions. Persuading insurers to cover the treatments is another challenge altogether. And outside the lab, researchers are debating nascent issues, such as the ethics of mind control, the privacy of a person’s brain data—concerns that echo efforts to develop algorithms to prevent blackouts during rising ransomware threats—and how to best involve patients in the study of the human brain’s far-flung regions.

Neurologist Martha Morrell is optimistic about the future of brain stimulation. She remembers the shocked reactions of her colleagues in 2004 when she left full-time teaching at Stanford (she still has a faculty appointment as a clinical professor of neurology) to direct clinical trials at NeuroPace, then a young company making neurostimulator systems to potentially treat epilepsy patients.

Related: Once a last resort, this pain therapy is getting a new life amid the opioid crisis
“When I started working on this, everybody thought I was insane,” said Morrell. Nearly 20 years in, she sees a parallel between the story of jolting the brain’s circuitry and that of early implantable cardiac devices, such as pacemakers and defibrillators, which initially “were used as a last option, where all other medications have failed.” Now, “the field of cardiology is very comfortable incorporating electrical therapy, device therapy, into routine care. And I think that’s really where we’re going with neurology as well.”


Reaching a ‘slope of enlightenment’
Parkinson’s is, in some ways, an elder in the world of modern brain stimulation, and it shows the potential as well as the limitations of the technology. Surgeons have been implanting electrodes deep in the brains of Parkinson’s patients since the late 1990s, and in people with more advanced disease since the early 2000s.

In that time, it’s gone through the “hype cycle,” said Okun, the national medical adviser to the Parkinson’s Foundation since 2006. Feverish excitement and overinflated expectations have given way to reality, bringing scientists to a “slope of enlightenment,” he said. They have found deep brain stimulation to be very helpful for some patients with Parkinson’s, rendering them almost symptom-free by calming the shaking and tremors that medications couldn’t. But it doesn’t stop the progression of the disease, or resolve some of the problems patients with advanced Parkinson’s have walking, talking, and thinking.

In 2015, the same year Hanlon found only her lab’s research on brain stimulation at the addiction conference, Kevin O’Neill watched one finger on his left hand start doing something “funky.” One finger twitched, then two, then his left arm started tingling and a feeling appeared in his right leg, like it was about to shake but wouldn’t — a tremor.

“I was assuming it was anxiety,” O’Neill, 62, told STAT. He had struggled with anxiety before, and he had endured a stressful year: a separation, selling his home, starting a new job at a law firm in California’s Bay Area. But a year after his symptoms first began, O’Neill was diagnosed with Parkinson’s.

Doctors prescribed him pills that promote the release of dopamine, to offset the death of brain cells that produce this messenger molecule in circuits that control movement. But he took them infrequently because he worried about insomnia as a side effect. Walking became difficult — “I had to kind of think my left leg into moving” — and the labor lawyer found it hard to give presentations and travel to clients’ offices.

A former actor with an outgoing personality, he developed social anxiety and didn’t tell his bosses about his diagnosis for three years, and wouldn’t have, if not for two workdays in summer 2018 when his tremors were severe and obvious.

O’Neill’s tremors are all but gone since he began deep brain stimulation last May, though his left arm shakes when he feels tense.

It was during that period that he learned about deep brain stimulation, at a support group for Parkinson’s patients. “I thought, ‘I will never let anybody fuss with my brain. I’m not going to be a candidate for that,’” he recalled. “It felt like mad scientist science fiction. Like, are you kidding me?”

But over time, the idea became less radical, as O’Neill spoke to DBS patients and doctors and did his own research, and as his symptoms worsened. He decided to go for it. Last May, doctors at the University of California, San Francisco surgically placed three metal leads into his brain, connected by thin cords to two implants in his chest, just near the clavicles. A month later, he went into the lab and researchers turned the device on.

“That was a revelation that day,” he said. “You immediately — literally, immediately — feel the efficacy of these things. … You go from fully symptomatic to non-symptomatic in seconds.”

When his nephew pulled up to the curb to pick him up, O’Neill started dancing, and his nephew teared up. The following day, O’Neill couldn’t wait to get out of bed and go out, even if it was just to pick up his car from the repair shop.

In the year since, O’Neill’s walking has gone from “awkward and painful” to much improved, and his tremors are all but gone. When he is extra frazzled, like while renovating and moving into his new house overlooking the hills of Marin County, he feels tense and his left arm shakes and he worries the DBS is “failing,” but generally he returns to a comfortable, tremor-free baseline.

O’Neill worried about the effects of DBS wearing off but, for now, he can think “in terms of decades, instead of years or months,” he recalled his neurologist telling him. “The fact that I can put away that worry was the big thing.”

He’s just one patient, though. The brain has regions that are mostly uniform across all people. The functions of those regions also tend to be the same. But researchers suspect that how brain regions interact with one another — who mingles with whom, and what conversation they have — and how those mixes and matches cause complex diseases varies from person to person. So brain stimulation looks different for each patient.

Each case of Parkinson’s manifests slightly differently, an insight that applies to many other diseases, said Okun, who organized the nine-year-old Deep Brain Stimulation Think Tank, where leading researchers convene, review papers, and publish reports on the field’s progress each year.

“I think we’re all collectively coming to the realization that these diseases are not one-size-fits-all,” he said. “We have to really begin to rethink the entire infrastructure, the schema, the framework we start with.”

Brain stimulation is also used frequently to treat people with common forms of epilepsy, and has reduced the number of seizures or improved other symptoms in many patients. Researchers have also been able to collect high-quality data about what happens in the brain during a seizure — including identifying differences between epilepsy types. Still, only about 15% of patients are symptom-free after treatment, according to Robert Gross, a neurosurgery professor at Emory University in Atlanta.

“And that’s a critical difference for people with epilepsy. Because people who are symptom-free can drive,” which means they can get to a job in a place like Georgia, where there is little public transit, he said. So taking neuromodulation “from good to great,” is imperative, Gross said.


Renaissance for an ancient idea
Recent advances are bringing about what Gross sees as “almost a renaissance period” for brain stimulation, though the ideas that undergird the technology are millennia old. Neuromodulation goes back to at least ancient Egypt and Greece, when electrical shocks from a ray, called the “torpedo fish,” were recommended as a treatment for headache and gout. Over centuries, the fish zaps led to doctors burning holes into the brains of patients. Those “lesions” worked, somehow, but nobody could explain why they alleviated some patients’ symptoms, Okun said.

Perhaps the clearest predecessor to today’s technology is electroconvulsive therapy (ECT), which in a rudimentary and dangerous way began being used on patients with depression roughly 100 years ago, said Nolan Williams, director of the Brain Stimulation Lab at Stanford University.

More modern forms of brain stimulation came about in the United States in the mid-20th century. A common, noninvasive approach is transcranial magnetic stimulation, which involves placing an electromagnetic coil on the scalp to transmit a current into the outermost layer of the brain. Vagus nerve stimulation (VNS), used to treat epilepsy, zaps a nerve that contributes to some seizures.

The most invasive option, deep brain stimulation, involves implanting in the skull a device attached to electrodes embedded in deep brain regions, such as the amygdala, that can’t be reached with other stimulation devices. In 1997, the FDA gave its first green light to deep brain stimulation as a treatment for tremor, and then for Parkinson’s in 2002 and the movement disorder dystonia in 2003.

Even as these treatments were cleared for patients, though, what was happening in the brain remained elusive. But advanced imaging tools now let researchers peer into the brain and map out networks — a recent breakthrough that researchers say has propelled the field of brain stimulation forward as much as increased funding has. Imaging of both human brains and animal models has helped researchers identify the neuroanatomy of diseases, target brain regions with more specificity, and watch what was happening after electrical stimulation.

Another key step has been the shift from open-loop stimulation — a constant stream of electricity — to closed-loop stimulation that delivers targeted, brief jolts in response to a symptom trigger. To make use of this technology, labs need people to develop artificial intelligence tools to interpret the large data sets a brain implant generates, and to tailor devices based on that information.
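The open-loop versus closed-loop distinction can be pictured in a few lines of code. This is a toy sketch only: the function names, signal values, and trigger threshold are invented for illustration and are not part of any real device’s software.

```python
# Toy contrast between open-loop and closed-loop stimulation logic.
# "samples" stands in for a stream of recorded biomarker values;
# the 0.8 threshold is an arbitrary illustrative trigger level.

def open_loop_stimulate(samples):
    """Open loop: deliver pulses constantly, regardless of what the brain is doing."""
    return [True for _ in samples]

def closed_loop_stimulate(samples, threshold=0.8):
    """Closed loop: deliver a brief jolt only when the biomarker crosses a trigger."""
    return [s > threshold for s in samples]

signal = [0.2, 0.5, 0.9, 0.3, 0.85]
print(open_loop_stimulate(signal))    # stimulates on every sample
print(closed_loop_stimulate(signal))  # stimulates only on the two threshold crossings
```

The closed-loop version responds only when the (hypothetical) symptom signature appears, which is why it demands the data-science skills Morrell describes below.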

“We’ve needed to learn how to be data scientists,” Morrell said.

Affinity groups, like the NIH-funded Open Mind Consortium, have formed to fill that gap. Philip Starr, a neurosurgeon and developer of implantable brain devices at the University of California at San Francisco Health system, leads the effort to teach physicians how to program closed-loop devices, and works to create ethical standards for their use. “There’s been extraordinary innovation after 20 years of no innovation,” he said.

The BRAIN Initiative has been critical, several researchers told STAT. “It’s been a godsend to us,” Gross said. The NIH’s Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative was launched in 2013 during the Obama administration with a $50 million budget. BRAIN now spends over $500 million per year. Since its creation, BRAIN has given over 1,100 awards, according to NIH data. Part of the initiative’s purpose is to pair up researchers with medical technology companies that provide human-grade stimulation devices to the investigators. Nearly three dozen projects have been funded through the investigator-devicemaker partnership program and through one focused on new implantable devices for first-in-human use, according to Nick Langhals, who leads work on neurological disorders at the initiative.

The more BRAIN invests, the more research is spawned. “We learn more about what circuits are involved … which then feeds back into new and more innovative projects,” he said.

Many BRAIN projects are still in early stages, finishing enrollment or small feasibility studies, Langhals said. Over the next couple of years, scientists will begin to see some of the fruits of their labor, which could lead to larger clinical trials, or to companies developing more refined brain stimulation implants, Langhals said.

Money from the National Institute of Mental Health, as well as the NIH’s Helping to End Addiction Long-term (HEAL) initiative, has similarly sweetened the appeal of brain stimulation, both for researchers and industry. “A critical mass” of companies interested in neuromodulation technology has mushroomed where, for two decades, just a handful of companies stood, Starr said.

More and more, pharmaceutical and digital health companies are looking at brain stimulation devices “as possible products for their future,” said Linda Carpenter, director of the Butler Hospital TMS Clinic and Neuromodulation Research Facility.


‘Psychiatry 3.0’
The experience with using brain stimulation to stop tremors and seizures inspired psychiatrists to begin exploring its use as a potentially powerful therapy for healing, or even getting ahead of, mental illness.

In 2008, the FDA approved TMS for patients with major depression who had tried, and not gotten relief from, drug therapy. “That kind of opened the door for all of us,” said Hanlon, a professor and researcher at the Center for Research on Substance Use and Addiction at Wake Forest School of Medicine. The last decade saw a surge of research into how TMS could be used to reset malfunctioning brain circuits involved in anxiety, depression, obsessive-compulsive disorder, and other conditions.

“We’re certainly entering into what a lot of people are calling psychiatry 3.0,” Stanford’s Williams said. “Whereas the first iteration was Freud and all that business, the second one was the psychopharmacology boom, and this third one is this bit around circuits and stimulation.”

Drugs alleviate some patients’ symptoms while failing to help many others, but psychopharmacology clearly showed “there’s definitely a biology to this problem,” Williams said — a biology that in some cases may be more amenable to brain stimulation.

The exact mechanics of what happens between cells when brain circuits … well, short-circuit, are unclear. Researchers are getting closer to finding biomarkers that warn of an incoming depressive episode, or wave of anxiety, or loss of impulse control. Those brain signatures could be different for every patient. If researchers can find molecular biomarkers for psychiatric disorders — and find ways to preempt those symptoms by shocking particular brain regions — that would reshape the field, Williams said.

Not only would disease-specific markers help clinicians diagnose people, but they could help chip away at the stigma that paints mental illness as a personal or moral failing instead of a disease. That’s what happened for epilepsy in the 1960s, when scientific findings nudged the general public toward a deeper understanding of why seizures happen, and it’s “the same trajectory” Williams said he sees for depression.

His research at the Stanford lab also includes work on suicide, and obsessive-compulsive disorder, which the FDA said in 2018 could be treated using noninvasive TMS. Williams considers brain stimulation, with its instantaneity, to be a potential breakthrough for urgent psychiatric situations. Doctors know what to do when a patient is rushed into the emergency room with a heart attack or a stroke, but there is no immediate treatment for psychiatric emergencies, he said. Williams wonders: What if, in the future, a suicidal patient could receive TMS in the emergency room and be quickly pulled out of their depressive mental spiral?

Researchers are also actively investigating the brain biology of addiction. In August 2020, the FDA approved TMS for smoking cessation, the first such OK for a substance use disorder, which is “really exciting,” Hanlon said. Although there is some nuance when comparing substance use disorders, a primal mechanism generally defines addiction: the eternal competition between “top-down” executive control functions and “bottom-up” cravings. It’s the same process that is at work when one is deciding whether to eat another cookie or abstain — just exacerbated.

Hanlon is trying to figure out if the stop and go circuits are in the same place for all people, and whether neuromodulation should be used to strengthen top-down control or weaken bottom-up cravings. Just as brain stimulation can be used to disrupt cellular misfiring, it could also be a tool for reinforcing helpful brain functions, or for giving the addicted brain what it wants in order to curb substance use.

Evidence suggests many people with schizophrenia smoke cigarettes (a leading cause of early death for this population) because nicotine reduces the “hyperconnectivity” that characterizes the brains of people with the disease, said Heather Ward, a research fellow at Boston’s Beth Israel Deaconess Medical Center. She suspects TMS could mimic that effect, and therefore reduce cravings and some symptoms of the disease, and she hopes to prove that in a pilot study that is now enrolling patients.

If the scientific evidence proves out, clinicians say brain stimulation could be used alongside behavioral therapy and drug-based therapy to treat substance use disorders. “In the end, we’re going to need all three to help people stay sober,” Hanlon said. “We’re adding another tool to the physician’s toolbox.”

Decoding the mysteries of pain
A favorable outcome to the ongoing research, one that would fling the doors to brain stimulation wide open for patients with myriad disorders, is far from guaranteed. Chronic pain researchers know that firsthand.

Chronic pain, among the most mysterious and hard-to-study medical phenomena, was the first use for which the FDA approved deep brain stimulation, said Prasad Shirvalkar, an assistant professor of anesthesiology at UCSF. But when studies didn’t pan out after a year, the FDA retracted its approval.

Shirvalkar is working with Starr and neurosurgeon Edward Chang on a profoundly complex problem: “decoding pain states in the brain, which has never been done,” as Starr told STAT.

Part of the difficulty of studying pain is that there is no objective way to measure it. Much of what we know about pain is from rudimentary surveys that ask patients to rate how much they’re hurting, on a scale from zero to 10.

Using implantable brain stimulation devices, the researchers ask patients for a 0-to-10 rating of their pain while recording up-and-down cycles of activity in the brain. They then use machine learning to compare the two streams of information and see what brain activity correlates with a patient’s subjective pain experience. Implantable devices let researchers collect data over weeks and months, instead of basing findings on small snippets of information, allowing for a much richer analysis.
