DTE Energy to issue RFP for energy-efficiency programs

By DTE Energy


DTE Energy will issue a Request for Proposal for a package of energy-efficiency programs whose vendor contracts are expiring.

The RFP reflects the success of DTE's current programs and the company's intent to continue them. The company will seek two-year contracts for programs that include appliance recycling; heating, ventilation and air conditioning; home performance with insulation and windows; and lighting and appliances.

The scope of work includes achieving energy savings within the prescribed program budget, marketing, retail program management, rebate processing, recycling of appliances and management of trade ally networks. The work also will include conducting outreach events to educate customers about the impact of their energy usage and further their energy-efficiency journey.

Customers who participated in DTE's energy-efficiency programs in 2014, the most recent year for which statistics are available, will save $585 million over the lifetime of their energy-saving upgrades.

DTE Energy's energy-efficiency programs also will result in $4.5 billion in customer savings and other economic benefits through 2029.

The residential programs also include the DTE Insight app to help customers track and manage real-time energy use.

DTE Energy is a Detroit-based diversified energy company involved in the development and management of energy-related businesses and services nationwide.

Related News

Some old dams are being given a new power: generating clean electricity

Hydroelectric retrofits for unpowered dams leverage turbines to add renewable capacity, bolster grid reliability, and enable low-impact energy storage, supporting U.S. and Canada decarbonization goals with lower costs, minimal habitat disruption, and climate resilience.

 

Key Points

They add turbines to existing dams to make clean power, stabilize the grid, and offer low-impact storage at lower cost.

✅ Lower capex than new dams; minimal habitat disruption

✅ Adds firming and storage to support wind and solar

✅ New low-head turbines unlock more retrofit sites

 

As countries race to get their power grids off fossil fuels to fight climate change, there's a big push in the U.S. to upgrade dams built for purposes such as water management or navigation with a feature they never had before — hydroelectric turbines. 

And the strategy is being used in parts of Canada, too, with growing interest in hydropower from Canada supplying New York and New England.

The U.S. Energy Information Administration says only three per cent of 90,000 U.S. dams currently generate electricity. A 2012 report from the U.S. Department of Energy found that those dams have 12,000 megawatts (MW) of potential hydroelectric generation capacity. (According to the National Hydropower Association, 1 MW can power 750 to 1,000 homes. That means 12,000 MW should be able to power more than nine million homes.)
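The homes estimate in that parenthetical can be checked directly. A minimal sketch, using only the National Hydropower Association rule of thumb quoted above (750 to 1,000 homes per megawatt):

```python
# Rough capacity-to-homes conversion using the National Hydropower
# Association's rule of thumb of 750 to 1,000 homes per megawatt.
def homes_powered(capacity_mw, homes_per_mw_low=750, homes_per_mw_high=1000):
    """Return the (low, high) range of homes a given capacity could serve."""
    return capacity_mw * homes_per_mw_low, capacity_mw * homes_per_mw_high

# 12,000 MW of potential at unpowered U.S. dams, per the 2012 DOE report.
low, high = homes_powered(12_000)
print(f"{low:,} to {high:,} homes")  # 9,000,000 to 12,000,000 homes
```

The low end of the range, nine million homes, is the figure the article cites.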

As of May 2019, there were projects planned to convert 32 unpowered dams to add 330 MW to the grid over the next several years.

One that was recently completed was the Red Rock Hydroelectric Project, a 60-year-old flood control dam on the Des Moines River in Iowa that was retrofitted in 2014 to generate 36.4 MW at normal reservoir levels, and up to 55 MW at high reservoir levels and flows. It started feeding power to the grid this spring, and is expected to generate enough annually to supply power to 18,000 homes.

It's an approach that advocates say can convert more of the grid from fossil fuels to clean energy, often with a lower cost and environmental impact than building new dams.

Hydroelectric facilities can also be used for energy storage, complementing intermittent clean energy sources such as wind and solar with pumped storage to help maintain a more reliable, resilient grid.

The Nature Conservancy and the World Wildlife Fund are two environmental groups that oppose new hydro dams because they can block fish migration, harm water quality, damage surrounding ecosystems and release methane and CO2; in some regions, such as Western Canada, drought has also reduced hydropower output as reservoirs run low. But both groups say adding turbines to non-powered dams can be part of a shift toward low-impact hydro projects that can support expansion of solar and wind power.

Paul Norris, president of the Ontario Waterpower Association, said there's typically widespread community support for such projects in his province amid ongoing debate over whether Ontario is embracing clean power in its future plans. "Any time that you can better use existing assets, I think that's a good thing."

New turbine technology means water doesn't need to fall from as great a height to generate power, providing opportunities at sites that weren't commercially viable in the past, Norris said, with recent investments such as new turbines in Manitoba showing what is possible.

In Ontario, about 1,000 unpowered dams are owned by various levels of government. "With the appropriate policy framework, many of these assets have the potential to be retrofitted for small hydro," Norris wrote in a letter to Ontario's Independent Electricity System Operator this year as part of a discussion on small-scale local energy generation resources.

He told CBC that several such projects are already in operation, such as a 950 kW retrofit of the McLeod Dam at the Moira River in Belleville, Ont., in 2008. 

Four hydro stations were going to be added during dam refurbishment on the Trent-Severn Waterway, but they were among 758 renewable energy projects cancelled by Premier Doug Ford's government after his election in 2018, a move examined in an analysis of Ontario's dirtier electricity outlook and its implications.

Patrick Bateman, senior vice-president of Waterpower Canada, said such dam retrofit projects are uncommon in most provinces. "I don't see it being a large part of the future electricity generation capacity."

He said there has been less movement on retrofitting unpowered dams in Canada compared to the U.S., because:

There are a lot more opportunities in Canada to refurbish large, existing hydro-generating stations to boost capacity on a bigger scale.

There's less growth in demand for clean energy, because more of Canada's grid is already non-carbon-emitting (80 per cent) compared to the U.S. (40 per cent).

Even so, Norris thinks Canadians should be looking at all opportunities and options when it comes to transitioning the grid away from fossil fuels, including retrofitting non-powered dams, especially as a recent report highlights Canada's looming power problem over the coming decades.

"If we're going to be serious about addressing the inevitable challenges associated with climate change targets and net zero, it really is an all-of-the-above approach."

 


BC Hydro completes major milestone on Site C transmission line work

Site C 500 kV transmission lines strengthen the BC Hydro grid, linking the new substation and Peace Canyon via a 75 kilometre right-of-way to deliver clean energy, with 400 towers built and both circuits energized.

 

Key Points

High-voltage lines connecting Site C substation to the BC Hydro grid, delivering clean energy via Peace Canyon.

✅ Two 75 km circuits between Site C and Peace Canyon

✅ Connect new 500 kV substation to BC Hydro grid

✅ Over 400 towers built along existing right-of-way

 

The second and final 500 kilovolt, 75 kilometre transmission line on the Site C project, which has faced stability questions in recent years, has been completed and energized.

With this milestone, the work to connect the new Site C substation to the BC Hydro grid is complete, amid treaty rights litigation that has at times shaped schedules. Once the Site C project begins generating electricity, the transmission lines will help deliver clean energy to the rest of the province, much as the Maritime Link did when first power flowed between Newfoundland and Nova Scotia.

The two 75 kilometre transmission lines run along an existing right-of-way between Site C and the Peace Canyon generating station, a route that has drawn concerns from some northern communities. The project's first 500 kilovolt, 75 kilometre transmission line and the Site C substation were both completed and energized in the fall of 2020.

BC Hydro awarded the Site C transmission line construction contract to Allteck Line Contractors Inc. (now Allteck Limited Partnership) in 2018. Since construction started on this part of the project in summer 2018, crews have built more than 400 towers and strung lines over a total of 150 kilometres, even as other interties, like the Manitoba-Minnesota line, have faced scheduling uncertainty.

The two transmission lines are a major component of the Site C project, comparable in scale to initiatives such as the New England Clean Power Link. The transmission work also includes the new 500 kilovolt substation and the expansion of the existing Peace Canyon 500 kilovolt gas-insulated switchgear to incorporate the two new 500 kilovolt transmission line terminals.

Work to complete three other 500 kilovolt transmission lines, which will span one kilometre between the Site C generating station and the Site C substation, is still underway. This work is expected to be complete in 2023.

 


Why the promise of nuclear fusion is no longer a pipe dream

ITER Nuclear Fusion advances tokamak magnetic confinement, heating deuterium-tritium plasma with superconducting magnets, targeting net energy gain, tritium breeding, and steam-turbine power, while complementing laser inertial confinement milestones for grid-scale electricity and 2025 startup goals.

 

Key Points

ITER Nuclear Fusion is a tokamak project confining D-T plasma with magnets to achieve net energy gain and clean power.

✅ Tokamak magnetic confinement with high-temp superconducting coils

✅ Deuterium-tritium fuel cycle with on-site tritium breeding

✅ Targets net energy gain and grid-scale, low-carbon electricity

 

It sounds like the stuff of dreams: a virtually limitless source of energy that doesn’t produce greenhouse gases or radioactive waste. That’s the promise of nuclear fusion, often described as the holy grail of clean energy by proponents, which for decades has been nothing more than a fantasy due to insurmountable technical challenges. But things are heating up in what has turned into a race to create what amounts to an artificial sun here on Earth, one that can provide power for our kettles, cars and light bulbs.

Today’s nuclear power plants create electricity through nuclear fission, in which atoms are split, with next-gen nuclear power exploring smaller, cheaper, safer designs that remain distinct from fusion. Nuclear fusion, however, involves combining atomic nuclei to release energy. It’s the same reaction that’s taking place at the Sun’s core. But overcoming the natural repulsion between atomic nuclei and maintaining the right conditions for fusion to occur isn’t straightforward. And doing so in a way that produces more energy than the reaction consumes has been beyond the grasp of the finest minds in physics for decades.

But perhaps not for much longer. Some major technical challenges have been overcome in the past few years and governments around the world have been pouring money into fusion power research as part of a broader green industrial revolution under way in several regions. There are also over 20 private ventures in the UK, US, Europe, China and Australia vying to be the first to make fusion energy production a reality.

“People are saying, ‘If it really is the ultimate solution, let’s find out whether it works or not,’” says Dr Tim Luce, head of science and operation at the International Thermonuclear Experimental Reactor (ITER), being built in southeast France. ITER is the biggest throw of the fusion dice yet.

Its $22bn (£15.9bn) build cost is being met by governments representing two-thirds of the world’s population, including the EU, the US, China and Russia, at a time when Europe is losing nuclear power and needs energy. When it’s fired up in 2025, it’ll be the world’s largest fusion reactor. If it works, ITER will transform fusion power from the stuff of dreams into a viable energy source.


Constructing a nuclear fusion reactor
ITER will be a tokamak reactor – thought to be the best hope for fusion power. Inside a tokamak, a gas, often a hydrogen isotope called deuterium, is subjected to intense heat and pressure, forcing electrons out of the atoms. This creates a plasma – a superheated, ionised gas – that has to be contained by intense magnetic fields.

The containment is vital, as no material on Earth could withstand the intense heat (100,000,000°C and above) that the plasma has to reach so that fusion can begin. It’s close to 10 times the heat at the Sun’s core, and temperatures like that are needed in a tokamak because the gravitational pressure within the Sun can’t be recreated.

When atomic nuclei do start to fuse, vast amounts of energy are released. While the experimental reactors currently in operation release that energy as heat, in a fusion reactor power plant, the heat would be used to produce steam that would drive turbines to generate electricity, even as some envision nuclear beyond electricity for industrial heat and fuels.

Tokamaks aren’t the only fusion reactors being tried. Another type of reactor uses lasers to heat and compress a hydrogen fuel to initiate fusion. In August 2021, one such device at the National Ignition Facility, at the Lawrence Livermore National Laboratory in California, generated 1.35 megajoules of energy. This record-breaking figure brings fusion power a step closer to net energy gain, but most hopes are still pinned on tokamak reactors rather than lasers.

In June 2021, China’s Experimental Advanced Superconducting Tokamak (EAST) reactor maintained a plasma for 101 seconds at 120,000,000°C. Before that, the record was 20 seconds. Ultimately, a fusion reactor would need to sustain the plasma indefinitely – or at least for eight-hour ‘pulses’ during periods of peak electricity demand.

A real game-changer for tokamaks has been the magnets used to produce the magnetic field. “We know how to make magnets that generate a very high magnetic field from copper or other kinds of metal, but you would pay a fortune for the electricity. It wouldn’t be a net energy gain from the plant,” says Luce.


One route for nuclear fusion is to use atoms of deuterium and tritium, both isotopes of hydrogen. They fuse under incredible heat and pressure, and the resulting products release energy as heat


The solution is to use high-temperature, superconducting magnets made from superconducting wire, or ‘tape’, that has no electrical resistance. These magnets can create intense magnetic fields and don’t lose energy as heat.

“High temperature superconductivity has been known about for 35 years. But the manufacturing capability to make tape in the lengths that would be required to make a reasonable fusion coil has just recently been developed,” says Luce. One of ITER’s magnets, the central solenoid, will produce a field of 13 tesla – 280,000 times Earth’s magnetic field.
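The 280,000× comparison is easy to sanity-check: dividing the solenoid's 13 tesla by that ratio implies an assumed Earth-surface field of roughly 46 microtesla, which sits comfortably within the real-world range of about 25 to 65 µT. A quick sketch:

```python
# Back out the Earth-field value implied by the "280,000 times" comparison.
solenoid_field_tesla = 13.0
ratio_to_earth_field = 280_000

implied_earth_field_microtesla = solenoid_field_tesla / ratio_to_earth_field * 1e6
print(round(implied_earth_field_microtesla, 1))  # 46.4 µT
```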

The inner walls of ITER’s vacuum vessel, where the fusion will occur, will be lined with beryllium, a metal that won’t contaminate the plasma much if they touch. At the bottom is the divertor that will keep the temperature inside the reactor under control.

“The heat load on the divertor can be as large as in a rocket nozzle,” says Luce. “Rocket nozzles work because you can get into orbit within minutes and in space it’s really cold.” In a fusion reactor, a divertor would need to withstand this heat indefinitely and at ITER they’ll be testing one made out of tungsten.

Meanwhile, in the US, the National Spherical Torus Experiment – Upgrade (NSTX-U) fusion reactor will be fired up in the autumn of 2022, while efforts in advanced fission such as a mini-reactor design are also progressing. One of its priorities will be to see whether lining the reactor with lithium helps to keep the plasma stable.


Choosing a fuel
Instead of just using deuterium as the fusion fuel, ITER will use deuterium mixed with tritium, another hydrogen isotope. The deuterium-tritium blend offers the best chance of getting significantly more power out than is put in. Proponents of fusion power say one reason the technology is safe is that the fuel needs to be constantly fed into the reactor to keep fusion happening, making a runaway reaction impossible.

Deuterium can be extracted from seawater, so there’s a virtually limitless supply of it. But only 20kg of tritium are thought to exist worldwide, so fusion power plants will have to produce it (ITER will develop technology to ‘breed’ tritium). While some radioactive waste will be produced in a fusion plant, it’ll have a lifetime of around 100 years, rather than the thousands of years from fission.

At the time of writing in September, researchers at the Joint European Torus (JET) fusion reactor in Oxfordshire were due to start their deuterium-tritium fusion reactions. “JET will help ITER prepare a choice of machine parameters to optimise the fusion power,” says Dr Joelle Mailloux, one of the scientific programme leaders at JET. These parameters will include finding the best combination of deuterium and tritium, and establishing how the current is increased in the magnets before fusion starts.

The groundwork laid down at JET should accelerate ITER’s efforts to accomplish net energy gain. ITER will produce ‘first plasma’ in December 2025 and be cranked up to full power over the following decade. Its plasma temperature will reach 150,000,000°C and its target is to produce 500 megawatts of fusion power for every 50 megawatts of input heating power.
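ITER's target of 500 megawatts of fusion power for 50 megawatts of input heating corresponds to a fusion gain factor, usually written Q, of 10. A minimal sketch of that calculation:

```python
def fusion_gain(fusion_power_mw, heating_power_mw):
    """Fusion gain Q: fusion power produced per unit of input heating power.

    Q > 1 means the reaction releases more energy than was injected to heat
    the plasma (net energy gain at the plasma level).
    """
    return fusion_power_mw / heating_power_mw

q_iter = fusion_gain(500, 50)  # ITER's stated target
print(q_iter)  # 10.0
```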

“If ITER is successful, it’ll eliminate most, if not all, doubts about the science and liberate money for technology development,” says Luce. That technology development will be demonstration fusion power plants that actually produce electricity, where advanced reactors can build on decades of expertise. “ITER is opening the door and saying, yeah, this works – the science is there.”

 


Altmaier's new electricity forecast: the main driver is e-mobility

Germany 2030 Electricity Demand Forecast projects 658 TWh, driven by e-mobility, heat pumps, and green hydrogen. BMWi and BDEW see higher renewables, onshore wind, photovoltaics, and faster grid expansion to meet climate targets.

 

Key Points

A BMWi outlook to 658 TWh by 2030, led by e-mobility, plus demand from heat pumps, green hydrogen, and industry.

✅ Transport adds ~70 TWh; cars take 44 TWh by 2030

✅ Heat pumps add 35 TWh; green hydrogen needs ~20 TWh

✅ BDEW urges 70% renewables and faster grid expansion

 

Gross electricity consumption in Germany will increase from 595 terawatt hours (TWh) in 2018 to 658 TWh in 2030. That is an increase of eleven percent. This emerges from the detailed analysis of the development of electricity demand that the Federal Ministry of Economics (BMWi) published on Tuesday. The main driver of the increase is therefore the transport sector. According to the paper, increased electric mobility in particular contributes 68 TWh to the increase, in line with rising EV power demand trends across markets. Around 44 TWh of this should be for cars, 7 TWh for light commercial vehicles and 17 TWh for heavy trucks. If the electricity consumption for buses and two-wheelers is added, this results in electricity consumption for e-mobility of around 70 TWh.
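The transport-sector figures above add up as stated, and the headline growth rate follows from the 2018 and 2030 totals. A quick consistency check of the BMWi numbers:

```python
# E-mobility electricity demand in 2030 per the BMWi analysis (TWh).
transport_twh = {"cars": 44, "light_commercial_vehicles": 7, "heavy_trucks": 17}
subtotal = sum(transport_twh.values())
print(subtotal)  # 68 TWh; buses and two-wheelers bring it to roughly 70 TWh

# Overall growth: 595 TWh (2018) to 658 TWh (2030).
growth_pct = (658 - 595) / 595 * 100
print(round(growth_pct, 1))  # 10.6, i.e. roughly the eleven percent cited
```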

According to the BMWi's analysis, the number of purely battery-powered vehicles will increase to 16 million by 2030, reflecting the global electric car market momentum, plus 2.2 million plug-in hybrids. In 2018 there were only around 100,000 electric cars, with an associated electricity consumption estimated at 0.3 TWh, and plug-in mileage in 2021 highlighted the rapid uptake elsewhere. For heat pumps, the researchers predict an increase in demand of 35 TWh, to around 42 TWh. They estimate the electricity consumption for producing around 12.5 TWh of green hydrogen in 2030 at just under 20 TWh. Demand at battery factories and data centers will increase by 13 TWh compared to 2018. In data centers, more efficient hardware keeps consumption from rising further despite advancing digitization.

The updated figures are based on ongoing scenario calculations by Prognos, in which the market researchers took into account the goals of the Climate Protection Act for 2030 and the wider European electrification push for decarbonization. In the preliminary estimate presented by Federal Economics Minister Peter Altmaier (CDU) in July, a range of 645 to 665 TWh was determined for gross electricity consumption in 2030. Previously, Altmaier officially said that electricity demand in this country would remain constant for the next ten years. In June, Chancellor Angela Merkel (CDU) called for an expanded forecast that would have to include trends in e-mobility adoption within a decade and the Internet of Things, for example.

Higher electricity demand
The Federal Association of Energy and Water Management (BDEW) is assuming an even higher electricity demand of around 700 TWh in nine years. In any case, a higher share of renewable energies in electricity generation of 70 percent by 2030 is necessary in order to be able to achieve the climate targets and to address electricity price volatility risks. The expansion paths urgently need to be increased and obstacles removed. This could mean around 100 gigawatts (GW) for onshore wind turbines, 11 GW for biomass and at least 150 GW for photovoltaics by 2030. Faster network expansion and renovation will also become even more urgent, as electric cars challenge grids in many regions.
 

 


LNG powered with electricity could be boon for B.C.'s independent power producers

B.C. LNG Electrification embeds clean hydro and wind power into low-emission liquefied natural gas, cutting carbon intensity, enabling coal displacement in Asia, and opening grid-scale demand for independent power producers and ITMO-based climate accounting.

 

Key Points

Powering LNG with clean electricity cuts carbon intensity, displaces coal, and grows demand for B.C.'s clean power.

✅ Electric-drive LNG cuts emissions intensity by up to 80%.

✅ Creates major grid load, boosting B.C. independent power producers.

✅ Enables ITMO crediting when coal displacement is verified.

 

B.C. has abundant clean power – if only there was a way to ship those electrons across the sea to help coal-dependent countries reduce their emissions. Even regionally, an Alberta–B.C. grid link could help move surplus power domestically.

Natural gas that is liquefied using clean hydro and wind power and then exported would be, in a sense, a way of embedding B.C.’s low emission electricity in another form of energy, and, alongside the Canada–Germany clean energy pact, part of a broader export strategy.

The increased demand that could come from an LNG industry – especially one that moves toward greater electrification and, as the IEA net-zero electricity report notes, broader system demand – poses some potentially big opportunities for B.C.'s clean energy independent power sector, as those attending the Clean Energy Association of BC's annual Generate conference heard recently.

At a session on LNG electrification, delegates were told that LNG produced in B.C. with electricity could have some significant environmental benefits.

Given how much power an LNG plant that uses electric drive consumes, an electrified LNG industry could also pose some significant opportunities for independent power producers – a sector that had the wind taken out of its sails with the sanctioning of the Site C dam project.

Only one LNG plant being built in B.C. – Woodfibre LNG – will use electric drive to produce LNG, although the companies behind Kitimat LNG have changed their original design plans and now plan to use electric drive as well.

Even small LNG plants that use electric drive require a lot of power.

“We’re talking about a lot of power, since it’s one of the biggest consumers you can connect to a grid,” said Sven Demmig, head of project development for Siemens.

Most LNG plants still burn natural gas to drive the liquefaction process – a choice that intersects with climate policy and electricity grids in Canada. They typically generate 0.35 tonnes of CO2e per tonne of LNG produced.

Because it will use electric drive, LNG produced by Woodfibre LNG will have an emissions intensity that is 80% less than LNG produced in the Gulf of Mexico, said Woodfibre president David Keane.

In B.C., the benchmark for GHG intensities for LNG plants has been set at 0.16 tonnes of CO2e per tonne of LNG. Above that, LNG producers would need to pay higher carbon taxes than those that are below the benchmark.

The LNG Canada plant has an intensity of 0.15 tonnes of CO2e per tonne of LNG. Woodfibre LNG will have an emissions intensity of just 0.059, thanks to electric drive.

“So we will be significantly less than any operating facility in the world,” Keane said.
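The intensity figures quoted above can be compared directly. A sketch, using the 0.35 tonnes CO2e per tonne of LNG cited earlier as the typical gas-fired figure (the article's own 80% comparison uses Gulf of Mexico plants as its baseline, so the numbers differ slightly):

```python
def intensity_reduction_pct(baseline, electrified):
    """Percent reduction in emissions intensity (t CO2e per t LNG)."""
    return (baseline - electrified) / baseline * 100

typical_gas_drive = 0.35   # conventional plant burning gas for liquefaction
bc_benchmark = 0.16        # B.C. carbon-tax benchmark for LNG plants
woodfibre = 0.059          # Woodfibre LNG with electric drive

print(round(intensity_reduction_pct(typical_gas_drive, woodfibre)))  # 83
print(woodfibre < bc_benchmark)  # True: well under the benchmark
```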

Keane said Sinopec has recently estimated that it expects China’s demand for natural gas to grow by 82% by 2030.

“So China will, in fact, get its gas supply,” Keane said. “The question is: where will that supply come from?

“For every tonne of LNG that’s being produced today in the United States – and every tonne of LNG that we’re not producing in Canada – we’re seeing about 10 million tonnes of carbon leakage every single year.”

The first Canadian company to produce LNG that ended up in China is FortisBC. Small independent operators have been buying LNG from FortisBC’s Tilbury Island plant and shipping it to China in ISO containers on container ships.

David Bennett, director of communications for FortisBC, said those shipments are traced to industries in China that are, indeed, using LNG instead of coal power now.

“We know where those shipping containers are going,” he said. “They’re actually going to displace coal in factories in China.”

Verifying what the LNG is used for is important, if Canadian producers want to claim any kind of climate credit. LNG shipped to Japan or South Korea to displace nuclear power, for example, would actually result in a net increase in GHGs. But used to displace coal, the emissions reductions can be significant, since natural gas produces about half the CO2 that coal does.

The problem for LNG producers here is B.C.’s emissions reduction targets as they stand today. Even LNG produced with electricity will produce some GHGs. And LNG that could dramatically reduce GHGs in other countries, if it displaces coal power, does not count in B.C.’s carbon accounting.

Under the Paris Agreement, countries agree to set their own reduction targets but don’t typically get to claim any reductions that might result outside their own country; for Canada, cleaning up its own electricity remains critical to meeting climate pledges.

Canada is exploring the use of Internationally Transferred Mitigation Outcomes (ITMOs) under the Paris Agreement to allow Canada to claim some of the GHG reductions that result in other countries, like China, through the export of Canadian LNG.

“For example, if I were producing 4 million tonnes of greenhouse gas emissions in B.C. and I was selling 100% of my LNG to China, and I can verify that they’re replacing coal…they would have a reduction of about 60 million tonnes of greenhouse gas emissions,” Keane said.

“So if they’re buying 4 million tonnes of emissions from us, under these ITMOs, then they have a net reduction of 56 million tonnes, and we’d have a net increase of zero.”
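Keane's arithmetic can be laid out explicitly. A sketch of the accounting he describes, not an official ITMO crediting formula (the rules are still undecided, as the article notes):

```python
def itmo_net_accounting(reduction_abroad_mt, production_emissions_mt):
    """Net emissions change under the ITMO-style split Keane describes (Mt CO2e).

    The exporter claims credits equal to its production emissions, netting
    its own increase to zero; the importer keeps the remaining reduction.
    """
    importer_net_reduction = reduction_abroad_mt - production_emissions_mt
    exporter_net_increase = production_emissions_mt - production_emissions_mt
    return importer_net_reduction, exporter_net_increase

# 60 Mt of verified coal displacement in China, 4 Mt emitted producing the LNG.
print(itmo_net_accounting(60, 4))  # (56, 0)
```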

But even if China and Canada agreed to such a trading arrangement, the United Nations still hasn’t decided just how the rules around ITMOs will work.

 


Jolting the brain's circuits with electricity is moving from radical to almost mainstream therapy

Brain Stimulation is transforming neuromodulation, from TMS and DBS to closed loop devices, targeting neural circuits for addiction, depression, Parkinsons, epilepsy, and chronic pain, powered by advanced imaging, AI analytics, and the NIH BRAIN Initiative.

 

Key Points

Brain stimulation uses pulses to modulate neural circuits, easing symptoms in depression, Parkinsons, and epilepsy.

✅ Noninvasive TMS and invasive DBS modulate specific brain circuits

✅ Closed loop systems adapt stimulation via real time biomarker detection

✅ Emerging uses: addiction, depression, Parkinsons, epilepsy, chronic pain

 

In June 2015, biology professor Colleen Hanlon went to a conference on drug dependence. As she met other researchers and wandered around a glitzy Phoenix resort’s conference rooms to learn about the latest work on therapies for drug and alcohol use disorders, she realized that out of the 730 posters, there were only two on brain stimulation as a potential treatment for addiction — both from her own lab at Wake Forest School of Medicine.

Just four years later, she would lead 76 researchers on four continents in writing a consensus article about brain stimulation as an innovative tool for addiction. And in 2020, the Food and Drug Administration approved a transcranial magnetic stimulation device to help patients quit smoking, a milestone for substance use disorders.

Brain stimulation is booming. Hanlon can attend entire conferences devoted to the study of what electrical currents do—including how targeted stimulation can improve short-term memory in older adults—to the intricate networks of highways and backroads that make up the brain’s circuitry. This expanding field of research is slowly revealing truths of the brain: how it works, how it malfunctions, and how electrical impulses, precisely targeted and controlled, might be used to treat psychiatric and neurological disorders.

In the last half-dozen years, researchers have launched investigations into how different forms of neuromodulation affect addiction, depression, loss-of-control eating, tremor, chronic pain, obsessive compulsive disorder, Parkinson’s disease, epilepsy, and more. Early studies have shown subtle electrical jolts to certain brain regions could disrupt circuit abnormalities — the miscommunications — that are thought to underlie many brain diseases, and help ease symptoms that persist despite conventional treatments.

The National Institutes of Health’s massive BRAIN Initiative put circuits front and center, distributing $2.4 billion to researchers since 2013 to devise and use new tools to observe interactions between brain cells and circuits. That, in turn, has kindled interest from the private sector. Among the advances that have enhanced our understanding of how distant parts of the brain talk with one another are new imaging technology and the use of machine learning, much as utilities use AI to adapt to shifting electricity demand, to interpret complex brain signals and analyze what happens when circuits go haywire.

Still, the field is in its infancy. Even therapies that have been approved for use in patients with, for example, Parkinson’s disease or epilepsy help only a minority of patients, and expectations can outpace evidence. “If it was the Bible, it would be the first chapter of Genesis,” said Michael Okun, executive director of the Norman Fixel Institute for Neurological Diseases at University of Florida Health.

As brain stimulation evolves, researchers face daunting hurdles, and not just scientific ones. How will brain stimulation become accessible to all the patients who need it, given how expensive and invasive some treatments are? Proving to the FDA that brain stimulation works, and does so safely, is complicated and expensive. Even with a swell of scientific momentum and an influx of funding, the agency has so far cleared brain stimulation for only a handful of limited conditions. Persuading insurers to cover the treatments is another challenge altogether. And outside the lab, researchers are debating nascent issues, such as the ethics of mind control, the privacy of a person's brain data, and how best to involve patients in the study of the human brain's far-flung regions.

Neurologist Martha Morrell is optimistic about the future of brain stimulation. She remembers the shocked reactions of her colleagues in 2004 when she left full-time teaching at Stanford (she still has a faculty appointment as a clinical professor of neurology) to direct clinical trials at NeuroPace, then a young company making neurostimulator systems to potentially treat epilepsy patients.

Related: Once a last resort, this pain therapy is getting a new life amid the opioid crisis
“When I started working on this, everybody thought I was insane,” said Morrell. Nearly 20 years in, she sees a parallel between the story of jolting the brain’s circuitry and that of early implantable cardiac devices, such as pacemakers and defibrillators, which initially “were used as a last option, where all other medications have failed.” Now, “the field of cardiology is very comfortable incorporating electrical therapy, device therapy, into routine care. And I think that’s really where we’re going with neurology as well.”


Reaching a ‘slope of enlightenment’
Parkinson’s is, in some ways, an elder in the world of modern brain stimulation, and it shows the potential as well as the limitations of the technology. Surgeons have been implanting electrodes deep in the brains of Parkinson’s patients since the late 1990s, and in people with more advanced disease since the early 2000s.

In that time, it’s gone through the “hype cycle,” said Okun, the national medical adviser to the Parkinson’s Foundation since 2006. Feverish excitement and overinflated expectations have given way to reality, bringing scientists to a “slope of enlightenment,” he said. They have found deep brain stimulation to be very helpful for some patients with Parkinson’s, rendering them almost symptom-free by calming the shaking and tremors that medications couldn’t. But it doesn’t stop the progression of the disease, or resolve some of the problems patients with advanced Parkinson’s have walking, talking, and thinking.

In 2015, the same year Hanlon found only her lab’s research on brain stimulation at the addiction conference, Kevin O’Neill watched one finger on his left hand start doing something “funky.” One finger twitched, then two, then his left arm started tingling and a feeling appeared in his right leg, like it was about to shake but wouldn’t — a tremor.

“I was assuming it was anxiety,” O’Neill, 62, told STAT. He had struggled with anxiety before, and he had endured a stressful year: a separation, selling his home, starting a new job at a law firm in California’s Bay Area. But a year after his symptoms first began, O’Neill was diagnosed with Parkinson’s.

Related: Psychiatric shock therapy, long controversial, may face fresh restrictions
Doctors prescribed him pills that promote the release of dopamine, to offset the death of brain cells that produce this messenger molecule in circuits that control movement. But he took them infrequently because he worried about insomnia as a side effect. Walking became difficult — “I had to kind of think my left leg into moving” — and the labor lawyer found it hard to give presentations and travel to clients’ offices.

A former actor with an outgoing personality, he developed social anxiety and didn’t tell his bosses about his diagnosis for three years, and wouldn’t have, if not for two workdays in summer 2018 when his tremors were severe and obvious.

O’Neill’s tremors are all but gone since he began deep brain stimulation last May, though his left arm shakes when he feels tense.

It was during that period that he learned about deep brain stimulation, at a support group for Parkinson’s patients. “I thought, ‘I will never let anybody fuss with my brain. I’m not going to be a candidate for that,’” he recalled. “It felt like mad scientist science fiction. Like, are you kidding me?”

But over time, the idea became less radical, as O’Neill spoke to DBS patients and doctors and did his own research, and as his symptoms worsened. He decided to go for it. Last May, doctors at the University of California, San Francisco surgically placed three metal leads into his brain, connected by thin cords to two implants in his chest, just near the clavicles. A month later, he went into the lab and researchers turned the device on.

“That was a revelation that day,” he said. “You immediately — literally, immediately — feel the efficacy of these things. … You go from fully symptomatic to non-symptomatic in seconds.”

When his nephew pulled up to the curb to pick him up, O’Neill started dancing, and his nephew teared up. The following day, O’Neill couldn’t wait to get out of bed and go out, even if it was just to pick up his car from the repair shop.

In the year since, O’Neill’s walking has gone from “awkward and painful” to much improved, and his tremors are all but gone. When he is extra frazzled, like while renovating and moving into his new house overlooking the hills of Marin County, he feels tense and his left arm shakes and he worries the DBS is “failing,” but generally he returns to a comfortable, tremor-free baseline.

O’Neill worried about the effects of DBS wearing off but, for now, he can think “in terms of decades, instead of years or months,” he recalled his neurologist telling him. “The fact that I can put away that worry was the big thing.”

He’s just one patient, though. The brain has regions that are mostly uniform across all people. The functions of those regions also tend to be the same. But researchers suspect that how brain regions interact with one another — who mingles with whom, and what conversation they have — and how those mixes and matches cause complex diseases varies from person to person. So brain stimulation looks different for each patient.

Related: New study revives a Mozart sonata as a potential epilepsy therapy
Each case of Parkinson’s manifests slightly differently, and that’s a bit of knowledge that applies to many other diseases, said Okun, who organized the nine-year-old Deep Brain Stimulation Think Tank, where leading researchers convene, review papers, and publish reports on the field’s progress each year.

“I think we’re all collectively coming to the realization that these diseases are not one-size-fits-all,” he said. “We have to really begin to rethink the entire infrastructure, the schema, the framework we start with.”

Brain stimulation is also used frequently to treat people with common forms of epilepsy, and has reduced the number of seizures or improved other symptoms in many patients. Researchers have also been able to collect high-quality data about what happens in the brain during a seizure — including identifying differences between epilepsy types. Still, only about 15% of patients are symptom-free after treatment, according to Robert Gross, a neurosurgery professor at Emory University in Atlanta.

“And that’s a critical difference for people with epilepsy. Because people who are symptom-free can drive,” which means they can get to a job in a place like Georgia, where there is little public transit, he said. So taking neuromodulation “from good to great,” is imperative, Gross said.


Renaissance for an ancient idea
Recent advances are bringing about what Gross sees as "almost a renaissance period" for brain stimulation, though the ideas that undergird the technology are millennia old. Neuromodulation goes back to at least ancient Egypt and Greece, when electrical shocks from a ray, called the "torpedo fish," were recommended as a treatment for headache and gout. Over centuries, the fish zaps led to doctors burning holes into the brains of patients. Those "lesions" worked, somehow, but nobody could explain why they alleviated some patients' symptoms, Okun said.

Perhaps the clearest predecessor to today’s technology is electroconvulsive therapy (ECT), which in a rudimentary and dangerous way began being used on patients with depression roughly 100 years ago, said Nolan Williams, director of the Brain Stimulation Lab at Stanford University.

Related: A new index measures the extent and depth of addiction stigma
More modern forms of brain stimulation came about in the United States in the mid-20th century. A common, noninvasive approach is transcranial magnetic stimulation, which involves placing an electromagnetic coil on the scalp to transmit a current into the outermost layer of the brain. Vagus nerve stimulation (VNS), used to treat epilepsy, zaps a nerve that contributes to some seizures.

The most invasive option, deep brain stimulation, involves implanting in the skull a device attached to electrodes embedded in deep brain regions, such as the amygdala, that can’t be reached with other stimulation devices. In 1997, the FDA gave its first green light to deep brain stimulation as a treatment for tremor, and then for Parkinson’s in 2002 and the movement disorder dystonia in 2003.

Even as these treatments were cleared for patients, though, what was happening in the brain remained elusive. But advanced imaging tools now let researchers peer into the brain and map out networks — a recent breakthrough that researchers say has propelled the field of brain stimulation forward as much as increased funding has. Imaging of both human brains and animal models has helped researchers identify the neuroanatomy of diseases, target brain regions with more specificity, and watch what was happening after electrical stimulation.

Another key step has been the shift from open-loop stimulation — a constant stream of electricity — to closed-loop stimulation that delivers targeted, brief jolts in response to a symptom trigger. To make use of the technology, labs need people to develop artificial intelligence tools to interpret the large data sets a brain implant generates, and to tailor devices based on that information.
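The contrast between the two modes can be sketched in a few lines. This is a toy illustration, not a real device API: `deliver_pulse` and the threshold-crossing "biomarker" are hypothetical stand-ins for the detection logic an actual implant would run.

```python
# Toy contrast between open-loop and closed-loop stimulation.
# Names (deliver_pulse, threshold) are illustrative, not a device API.

def open_loop(signal, deliver_pulse):
    """Open loop: stimulate continuously, regardless of brain activity."""
    return [deliver_pulse() for _ in signal]

def closed_loop(signal, deliver_pulse, threshold):
    """Closed loop: stimulate only when a symptom-linked biomarker
    crosses a threshold."""
    pulses = []
    for sample in signal:
        if sample > threshold:  # biomarker detected in this sample
            pulses.append(deliver_pulse())
    return pulses

# A hypothetical biomarker trace: only two samples exceed the threshold.
trace = [0.1, 0.9, 0.2, 0.8, 0.3]
pulse = lambda: 1
print(len(open_loop(trace, pulse)))         # prints 5 (every sample)
print(len(closed_loop(trace, pulse, 0.5)))  # prints 2 (triggered only)
```

The closed-loop version delivers far fewer pulses, which is the practical appeal: less stimulation, targeted at the moments a symptom actually emerges.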

“We’ve needed to learn how to be data scientists,” Morrell said.

Affinity groups, like the NIH-funded Open Mind Consortium, have formed to fill that gap. Philip Starr, a neurosurgeon and developer of implantable brain devices at the University of California at San Francisco Health system, leads the effort to teach physicians how to program closed-loop devices, and works to create ethical standards for their use. “There’s been extraordinary innovation after 20 years of no innovation,” he said.

The BRAIN Initiative has been critical, several researchers told STAT. “It’s been a godsend to us,” Gross said. The NIH’s Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative was launched in 2013 during the Obama administration with a $50 million budget. BRAIN now spends over $500 million per year. Since its creation, BRAIN has given over 1,100 awards, according to NIH data. Part of the initiative’s purpose is to pair up researchers with medical technology companies that provide human-grade stimulation devices to the investigators. Nearly three dozen projects have been funded through the investigator-devicemaker partnership program and through one focused on new implantable devices for first-in-human use, according to Nick Langhals, who leads work on neurological disorders at the initiative.

The more BRAIN invests, the more research is spawned. “We learn more about what circuits are involved … which then feeds back into new and more innovative projects,” he said.

Many BRAIN projects are still in early stages, finishing enrollment or small feasibility studies, Langhals said. Over the next couple of years, scientists will begin to see some of the fruits of their labor, which could lead to larger clinical trials, or to companies developing more refined brain stimulation implants, Langhals said.

Money from the National Institute of Mental Health, as well as the NIH's Helping to End Addiction Long-term (HEAL) initiative, has similarly sweetened the appeal of brain stimulation, both for researchers and industry. "A critical mass" of companies interested in neuromodulation technology has mushroomed where, for two decades, just a handful of companies stood, Starr said.

More and more, pharmaceutical and digital health companies are looking at brain stimulation devices “as possible products for their future,” said Linda Carpenter, director of the Butler Hospital TMS Clinic and Neuromodulation Research Facility.


‘Psychiatry 3.0’
The experience with using brain stimulation to stop tremors and seizures inspired psychiatrists to begin exploring its use as a potentially powerful therapy for healing, or even getting ahead of, mental illness.

In 2008, the FDA approved TMS for patients with major depression who had tried, and not gotten relief from, drug therapy. “That kind of opened the door for all of us,” said Hanlon, a professor and researcher at the Center for Research on Substance Use and Addiction at Wake Forest School of Medicine. The last decade saw a surge of research into how TMS could be used to reset malfunctioning brain circuits involved in anxiety, depression, obsessive-compulsive disorder, and other conditions.

“We’re certainly entering into what a lot of people are calling psychiatry 3.0,” Stanford’s Williams said. “Whereas the first iteration was Freud and all that business, the second one was the psychopharmacology boom, and this third one is this bit around circuits and stimulation.”

Drugs alleviate some patients' symptoms while simultaneously failing to help many others, but psychopharmacology clearly showed "there's definitely a biology to this problem," Williams said — a biology that in some cases may be more amenable to brain stimulation.

Related: Largest psilocybin trial finds the psychedelic is effective in treating serious depression
The exact mechanics of what happens between cells when brain circuits … well, short-circuit, are unclear. Researchers are getting closer to finding biomarkers that warn of an incoming depressive episode, or wave of anxiety, or loss of impulse control. Those brain signatures could be different for every patient. If researchers can find molecular biomarkers for psychiatric disorders — and find ways to preempt those symptoms by shocking particular brain regions — that would reshape the field, Williams said.

Not only would disease-specific markers help clinicians diagnose people, but they could help chip away at the stigma that paints mental illness as a personal or moral failing instead of a disease. That’s what happened for epilepsy in the 1960s, when scientific findings nudged the general public toward a deeper understanding of why seizures happen, and it’s “the same trajectory” Williams said he sees for depression.

His research at the Stanford lab also includes work on suicide, and obsessive-compulsive disorder, which the FDA said in 2018 could be treated using noninvasive TMS. Williams considers brain stimulation, with its instantaneity, to be a potential breakthrough for urgent psychiatric situations. Doctors know what to do when a patient is rushed into the emergency room with a heart attack or a stroke, but there is no immediate treatment for psychiatric emergencies, he said. Williams wonders: What if, in the future, a suicidal patient could receive TMS in the emergency room and be quickly pulled out of their depressive mental spiral?

Researchers are also actively investigating the brain biology of addiction. In August 2020, the FDA approved TMS for smoking cessation, the first such OK for a substance use disorder, which is “really exciting,” Hanlon said. Although there is some nuance when comparing substance use disorders, a primal mechanism generally defines addiction: the eternal competition between “top-down” executive control functions and “bottom-up” cravings. It’s the same process that is at work when one is deciding whether to eat another cookie or abstain — just exacerbated.

Hanlon is trying to figure out if the stop and go circuits are in the same place for all people, and whether neuromodulation should be used to strengthen top-down control or weaken bottom-up cravings. Just as brain stimulation can be used to disrupt cellular misfiring, it could also be a tool for reinforcing helpful brain functions, or for giving the addicted brain what it wants in order to curb substance use.

Evidence suggests many people with schizophrenia smoke cigarettes (a leading cause of early death for this population) because nicotine reduces the “hyperconnectivity” that characterizes the brains of people with the disease, said Heather Ward, a research fellow at Boston’s Beth Israel Deaconess Medical Center. She suspects TMS could mimic that effect, and therefore reduce cravings and some symptoms of the disease, and she hopes to prove that in a pilot study that is now enrolling patients.

If the scientific evidence proves out, clinicians say brain stimulation could be used alongside behavioral therapy and drug-based therapy to treat substance use disorders. “In the end, we’re going to need all three to help people stay sober,” Hanlon said. “We’re adding another tool to the physician’s toolbox.”

Decoding the mysteries of pain
A favorable outcome to the ongoing research, one that would fling the doors to brain stimulation wide open for patients with myriad disorders, is far from guaranteed. Chronic pain researchers know that firsthand.

Chronic pain, among the most mysterious and hard-to-study medical phenomena, was the first use for which the FDA approved deep brain stimulation, said Prasad Shirvalkar, an assistant professor of anesthesiology at UCSF. But when studies didn’t pan out after a year, the FDA retracted its approval.

Shirvalkar is working with Starr and neurosurgeon Edward Chang on a profoundly complex problem: “decoding pain in the brain states, which has never been done,” as Starr told STAT.

Part of the difficulty of studying pain is that there is no objective way to measure it. Much of what we know about pain is from rudimentary surveys that ask patients to rate how much they’re hurting, on a scale from zero to 10.

Using implantable brain stimulation devices, the researchers ask patients for a 0-to-10 rating of their pain while recording up-and-down cycles of activity in the brain. They then use machine learning to compare the two streams of information and see what brain activity correlates with a patient’s subjective pain experience. Implantable devices let researchers collect data over weeks and months, instead of basing findings on small snippets of information, allowing for a much richer analysis.
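The core of that analysis is a correlation problem: does some feature of recorded brain activity track the patient's 0-to-10 pain rating? A minimal sketch, with entirely hypothetical data and a simple least-squares fit standing in for the richer machine-learning models the researchers actually use:

```python
# Hedged sketch: correlate a recorded brain-activity feature with
# self-reported 0-10 pain scores via ordinary least squares.
# The feature values and ratings below are invented for illustration.

def fit_line(x, y):
    """Least-squares slope and intercept for y ~ slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Hypothetical per-session data: one band-power feature vs. pain rating.
power = [0.2, 0.4, 0.6, 0.8]
pain  = [2.0, 4.0, 6.0, 8.0]

slope, intercept = fit_line(power, pain)
predict = lambda p: slope * p + intercept
print(round(predict(0.5), 1))  # → 5.0
```

In practice the recordings span many channels and weeks of data, which is why the long-duration implants matter: a model fit on months of paired recordings and ratings is far more trustworthy than one fit on a few clinic visits.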
