Tories mum on nuclear waste proposal

By Toronto Star


The Conservative government has yet to decide whether Canada will show up at a major international conference that could result in Canada accepting, storing and refining spent nuclear fuel.

The Global Nuclear Energy Partnership, an initiative launched by U.S. President George W. Bush last year, meets in Vienna on Sunday, and Canada's participation – or lack thereof – remains a closely guarded secret.

A spokeswoman for Prime Minister Stephen Harper referred questions about the nuclear group to Foreign Affairs Minister Maxime Bernier's staff.

And with only three days before the event, a spokeswoman for Bernier said no decision had been made.

"We are fully assessing possible future implications prior to making any decision on joining the Global Nuclear Energy Partnership," said Isabelle Fontaine, the minister's director of communications.

Queries to Natural Resources Minister Gary Lunn, who has responsibility for nuclear issues, went unanswered.

Opposition parties said the continuing silence was unacceptable on the eve of an event that could have profound environmental consequences in this country.

"They're just hoping the issue will go away, the interest will die down and they do whatever they want in secret," said Liberal natural resources critic Mark Holland.

"If they're refusing to talk, you have to ask why. This is a huge decision – whether we're going to take part, whether we're going to allow nuclear waste to be imported from other countries."

The initiative proposes that nuclear energy-using countries and uranium-exporting countries form a new association.

The intent of this new nuclear club would be to promote and safeguard the industry. But at the heart of the plan is a proposal that all used nuclear fuel be repatriated to the original uranium exporting country for disposal.

Canada is the world's largest uranium exporter.

The continuing silence means only one thing to New Democrat MP Nathan Cullen: the decision to take part has already been made.

"These guys are in this negotiation and they're terrified to bring this back to the Canadian people because they will be slapped silly," he said.

Australia, another huge uranium exporter, has been engaged in a loud, angry debate about the partnership and Cullen said Harper's Conservatives, who hold a minority in the Commons, have watched that situation with alarm.

The initiative also proposes to find ways to reprocess spent nuclear fuel – an idea that has prompted France and Japan to sign on. Both of those countries are researching sodium-cooled fast reactors, which are meant to make nuclear energy production more efficient.

Bernier said Canada would make a decision on whether to participate within "a matter of days."

One of the biggest criticisms of the initiative is that it forces participating countries to develop uranium enrichment or reprocessing technology right away.

Before Canada makes a decision whether to join the partnership, there should be open public consultation, Liberal Leader Stéphane Dion has said.

Environmental groups are against Canada becoming involved in the organization and accepting spent fuel for refinement.

"That's all we need – a plant to reprocess uranium in Canada," said John Bennett of ClimateforChange.ca

"It's an exceedingly dirty process and it creates more high-level wastes than we're already producing in own reactors, and when you start to reprocess the chance of the material getting into the wrong hands increases."

Bennett has been a longtime opponent of nuclear power.

Talking points prepared for the Conservatives in 2006 show there is enthusiasm within the government for the Global Nuclear Energy Partnership and that Canadian officials have already held talks with U.S. counterparts on becoming involved.

Related News

IEA: Asia set to use half of world's electricity by 2025

Asia Electricity Consumption 2025 highlights an IEA forecast of surging global power demand led by China, lagging access in Africa, rising renewables and nuclear output, stable emissions, and weather-dependent grids needing flexibility and electrification.

 

Key Points

An IEA forecast that Asia will use half of global power by 2025, led by China, as renewables and nuclear drive supply.

✅ Asia to use half of global electricity; China leads growth

✅ Africa just 3% consumption despite rapid population growth

✅ Renewables, nuclear expand; grids must boost flexibility

 

Asia will for the first time use half of the world’s electricity by 2025, even as global power demand keeps rising and Africa continues to consume far less than its share of the global population, according to a new forecast released Wednesday by the International Energy Agency.

Much of Asia’s electricity use will be in China, a nation of 1.4 billion people whose share of global electricity consumption will rise from a quarter in 2015 to a third by the middle of this decade, the Paris-based body said.

“China will be consuming more electricity than the European Union, United States and India combined,” said Keisuke Sadamori, the IEA’s director of energy markets and security.

By contrast, Africa — home to almost a fifth of the world’s nearly 8 billion inhabitants — will account for just 3% of global electricity consumption in 2025.

“This and the rapidly growing population mean there is still a massive need for increased electrification in Africa,” said Sadamori.

The IEA’s annual report predicts that low-emissions sources will account for much of the growth in global electricity supply over the coming three years, including nuclear power and renewables such as wind and solar. This will prevent a significant rise in greenhouse gas emissions from the power sector, it said.

Scientists say sharp cuts in all sources of emissions are needed as soon as possible to keep average global temperatures from rising 1.5 degrees Celsius (2.7 Fahrenheit) above pre-industrial levels. That target, laid down in the 2015 Paris climate accord, appears increasingly doubtful as temperatures have already increased by more than 1.1 C since the reference period.

One hope for meeting the goal is a wholesale shift away from fossil fuels such as coal, gas and oil toward low-carbon sources of energy. But while some regions are reducing their use of coal and gas for electricity production, in others electricity demand and coal use are soaring, the IEA said.

The 134-page report also warned that surging electricity demand and supply are becoming increasingly weather dependent, a problem it urged policymakers to address.

“In addition to drought in Europe, there were heat waves in India (last year),” said Sadamori. “Similarly, central and eastern China were hit by heatwaves and drought. The United States also saw severe winter storms in December, and all those events put massive strain on the power systems of these regions.”

“As the clean energy transition gathers pace, the impact of weather events on electricity demand will intensify due to the increased electrification of heating, while the share of weather-dependent renewables will continue to grow in the generation mix,” the IEA said. “In such a world, increasing the flexibility of power systems while ensuring security of supply and resilience of networks will be crucial.”

 

Related News


Longer, more frequent outages afflict the U.S. power grid as states fail to prepare for climate change

Power Grid Climate Resilience demands storm hardening, underground power lines, microgrids, batteries, and renewable energy as regulators and utilities confront climate change, sea level rise, and extreme weather to reduce outages and protect vulnerable communities.

 

Key Points

It is the grid capacity to resist and recover from climate hazards using buried lines, microgrids, and batteries.

✅ Underground lines reduce wind outages and wildfire ignition risk.

✅ Microgrids with solar and batteries sustain critical services.

✅ Regulators balance cost, resilience, equity, and reliability.

 

Every time a storm lashes the Carolina coast, the power lines on Tonye Gray’s street go down, cutting her lights and air conditioning. After Hurricane Florence in 2018, Gray went three days with no way to refrigerate medicine for her multiple sclerosis or pump the floodwater out of her basement.

“Florence was hell,” said Gray, 61, a marketing account manager and Wilmington native who finds herself increasingly frustrated by the city’s vulnerability.

“We’ve had storms long enough in Wilmington and this particular area that all power lines should have been underground by now. We know we’re going to get hit.”

Across the nation, severe weather fueled by climate change is pushing aging electrical systems past their limits, often with deadly results. Last year, amid increasing nationwide blackouts, the average American home endured more than eight hours without power, according to the U.S. Energy Information Administration — more than double the outage time five years ago.

This year alone, a wave of abnormally severe winter storms caused a disastrous power failure in Texas, leaving millions of homes in the dark, sometimes for days, and at least 200 dead. Power outages caused by Hurricane Ida contributed to at least 14 deaths in Louisiana, as some of the poorest parts of the state suffered through weeks of 90-degree heat without air conditioning.

As storms grow fiercer and more frequent, environmental groups are pushing states to completely reimagine the electrical grid, incorporating more grid-scale batteries, renewable energy sources and localized systems known as “microgrids,” which they say could reduce the incidence of wide-scale outages. Utility companies have proposed their own storm-proofing measures, including burying power lines underground.

But state regulators largely have rejected these ideas, citing pressure to keep energy rates affordable. Of $15.7 billion in grid improvements under consideration last year, regulators approved only $3.4 billion, about one-fifth, according to a national survey by the NC Clean Energy Technology Center.

After a weather disaster, “everybody’s standing around saying, ‘Why didn’t you spend more to keep the lights on?’ ” Ted Thomas, chairman of the Arkansas Public Service Commission, said in an interview with The Washington Post. “But when you try to spend more when the system is working, it’s a tough sell.”

A major impediment is the failure by state regulators and the utility industry to consider the consequences of a more volatile climate — and to come up with better tools to prepare for it. For example, a Berkeley Lab study last year of outages caused by major weather events in six states found that neither state officials nor utility executives attempted to calculate the social and economic costs of longer and more frequent outages, such as food spoilage, business closures, supply chain disruptions and medical problems.

“There is no question that climatic changes are happening that directly affect the operation of the power grid,” said Justin Gundlach, a senior attorney at the Institute for Policy Integrity, a think tank at New York University Law School. “What you still haven’t seen … is a [state] commission saying: 'Isn’t climate the through line in all of this? Let’s examine it in an open-ended way. Let’s figure out where the information takes us and make some decisions.’ ”

In interviews, several state commissioners acknowledged that failure.

“Our electric grid was not built to handle the storms that are coming this next century,” said Tremaine L. Phillips, a commissioner on the Michigan Public Service Commission, which in August held an emergency meeting to discuss the problem of power outages. “We need to come up with a broader set of metrics in order to better understand the success of future improvements.”

Five disasters in four years
The need is especially urgent in North Carolina, which has declared a federal disaster from a hurricane or tropical storm five times in the past four years. Among them was Hurricane Florence, which brought torrential rain, catastrophic flooding and the state’s worst outage in over a decade in September 2018.

More than 1 million residents were left disconnected from refrigerators, air conditioners, ventilators and other essential machines, some for up to two weeks. Elderly residents dependent on oxygen were evacuated from nursing homes. Relief teams flew medical supplies to hospitals cut off by flooded roads. Desperate people facing closed stores and rotting food looted a Wilmington Family Dollar.

“I have PTSD from Hurricane Florence, not because of the actual storm but the aftermath,” said Evelyn Bryant, a community organizer who took part in the Wilmington response.

The storm reignited debate over a $13 billion proposal by Duke Energy, one of the largest power companies in the nation, to reinforce the state’s power grid. A few months earlier, the state had rejected Duke’s request for full repayment of those costs, determining that protecting the grid against weather is a normal part of doing business and not eligible for the type of reimbursement the company had sought.

After Florence, Duke offered a smaller, $2.5 billion plan, along with the argument that severe weather events are one of seven “megatrends” (including cyberthreats and population growth) that require greater investment, according to a PowerPoint presentation included in testimony to the state. The company owns the two largest utilities in North Carolina, Duke Energy Carolinas and Duke Energy Progress.

Vote Solar, a nonprofit climate advocacy group, objected to Duke’s plan, saying the utility had failed to study the risks of climate impacts. Duke’s flood maps, for example, had not been updated to reflect the latest projections for sea level rise, they said. In testimony, Vote Solar claimed Duke was using environmental trends to justify investments “it had already decided to pursue.”

The United States is one of the few countries where regulated utilities are usually guaranteed a rate of return on capital investments, even as studies show the U.S. experiences more blackouts than much of the developed world. That business model incentivizes spending regardless of how well it solves problems for customers, and it inspires skepticism. Ric O’Connell, executive director of GridLab, a nonprofit group that assists state and regional policymakers on electrical grid issues, said utilities in many states “are waving their hands and saying hurricanes” to justify spending that would do little to improve climate resilience.


Duke Energy spokesman Jeff Brooks acknowledged that the company had not conducted a climate risk study but pointed out that this type of analysis is still relatively new for the industry. He said Duke’s grid improvement plan “inherently was designed to think about future needs,” including reinforced substations with walls that rise several feet above the previous high watermark for flooding, and partly relied on federal flood maps to determine which stations are at most risk.

Brooks said Duke is not using weather events to justify routine projects, noting that the company had spent more than a year meeting with community stakeholders and using their feedback to make significant changes to its grid improvement plan.

This year, the North Carolina Utilities Commission finally approved a set of grid improvements that will cost customers $1.2 billion. But the commission reserved the right to deny Duke reimbursement of those costs if it cannot prove they are prudent and reasonable. The commission’s general counsel, Sam Watson, declined to discuss the decision, saying the commission can comment on specific cases only in public orders.

The utility is now burying power lines in “several neighborhoods across the state” that are most vulnerable to wide-scale outages, Brooks said. It is also fitting aboveground power lines with “self-healing” technology, a network of sensors that diverts electricity away from equipment failures to minimize the number of customers affected by an outage.

As part of a settlement with Vote Solar, Duke Energy last year agreed to work with state officials and local leaders to further evaluate the potential impacts of climate change, a process that Brooks said is expected to take two to three years.

High costs create hurdles
The debate in North Carolina is being echoed in states across the nation, where burying power lines has emerged as one of the most common proposals for insulating the grid from high winds, fires and flooding. But opponents have balked at the cost, which can run in the millions of dollars per mile.

In California, for example, Pacific Gas & Electric wants to bury 10,000 miles of power lines, both to make the grid more resilient and to reduce the risk of sparking wildfires. Its power equipment has contributed to multiple deadly wildfires in the past decade, including the 2018 Camp Fire that killed at least 85 people.

PG&E’s proposal has drawn scorn from critics, including San Jose Mayor Sam Liccardo, who say it would be too slow and expensive. But Patricia Poppe, the company’s CEO, told reporters that doing nothing would cost California even more in lost lives and property while struggling to keep the lights on during wildfires. The plan has yet to be submitted to the state, but Terrie Prosper, a spokeswoman for the California Public Utilities Commission, said the commission has supported underground lines as a wildfire mitigation strategy.

Another oft-floated solution is microgrids, small electrical systems that provide power to a single neighborhood, university or medical center. Most of the time, they are connected to a larger utility system. But in the event of an outage, microgrids can operate on their own, with the aid of solar energy stored in batteries.

In Florida, regulators recently approved a four-year microgrid pilot project, but the technology remains expensive and unproven. In Maryland, regulators in 2016 rejected a plan to spend about $16 million for two microgrids in Baltimore, in part because the local utility made no attempt to quantify “the tangible benefits to its customer base.”


In Texas, where officials have largely abandoned state regulation in favor of the free market, the results have been no more encouraging. Without the requirements that exist elsewhere for building extra capacity for times of high demand or stress, the state was ill-equipped to handle an abnormal deep freeze in February that knocked out power to 4 million customers for days.

Since then, Berkshire Hathaway Energy and Starwood Energy Group each proposed spending $8 billion to build new power plants to provide backup capacity, with guaranteed returns on the investment of 9 percent, but the Texas legislature has not acted on either plan.

New York is one of the few states where regulators have assessed the risks of climate change and pushed utilities to invest in solutions. After 800,000 New Yorkers lost power for 10 days in 2012 in the wake of Hurricane Sandy, state regulators ordered utility giant Con Edison to evaluate the state’s vulnerability to weather events.

The resulting report, which estimated climate risks could cost the company as much as $5.2 billion by 2050, gave ConEd data to inform its investments in storm hardening measures, including new storm walls and submersible equipment in areas at risk of flooding.

Meanwhile, the New York Public Service Commission has aggressively enforced requirements that utility companies keep the lights on during big storms, fining utility providers nearly $190 million for violations including inadequate staffing during Tropical Storm Isaias in 2020.

“At the end of the day, we do not want New Yorkers to be at the mercy of outdated infrastructure,” said Rory M. Christian, who last month was appointed chair of the New York commission.

The price of inaction
In North Carolina, as Duke Energy slowly works to harden the grid, some are pursuing other means of fostering climate-resilient communities.

Beth Schrader, the recovery and resilience director for New Hanover County, which includes Wilmington, said some of the people who went the longest without power after Florence had no vehicles, no access to nearby grocery stores and no means of getting to relief centers set up around the city.

For example, Quanesha Mullins, a 37-year-old mother of three, went eight days without power in her housing project on Wilmington’s east side. Her family got by on food from the Red Cross and walked a mile to charge their phones at McDonald’s. With no air conditioning, they slept with the windows open in a neighborhood with a history of violent crime.

Schrader is working with researchers at the University of North Carolina at Charlotte to estimate the cost of helping people like Mullins. The researchers estimate that it would have cost about $572,000 to provide shelter, meals and emergency food stamp benefits to 100 families for two weeks, said Robert Cox, an engineering professor who researches power systems at UNC-Charlotte.

Such calculations could help spur local governments to do more to help vulnerable communities, for example by providing “resilience outposts” with backup power generators, heating or cooling rooms, Internet access and other resources, Schrader said. But they also are intended to show the costs of failing to shore up the grid.

“The regulators need to be moved along,” Cox said.

In the meantime, Tonye Gray finds herself worrying about what happens when the next storm hits. While Duke Energy says it is burying power lines in the most outage-prone areas, she has yet to see its yellow-vested crews turn up in her neighborhood.

“We feel,” she said, “that we’re at the end of the line.”

 

Related News


Wind power making gains as competitive source of electricity

Canada Wind Energy Costs are plunging as renewable energy auctions, CfD contracts, and efficient turbines drive prices to 2-4 cents/kWh across Alberta and Saskatchewan, outcompeting grid power via competitive bidding and improved capacity factors.

 

Key Points

Averaging 2-4 cents/kWh via auctions, CfD support, and bigger turbines, wind is now cost-competitive across Canada.

✅ Alberta CfD bids as low as 3.9 cents/kWh.

✅ Turbine outputs rose from 1 MW to 3.3 MW per tower.

✅ Competitive auctions cut costs ~70% over nine years.

 

It's taken a decade of technological improvement and a new competitive bidding process for electrical generation contracts, but wind may have finally come into its own as one of the cheapest ways to create power.

Ten years ago, Ontario was developing new wind power projects at a cost of 28 cents per kilowatt hour (kWh), the kind of above-market rate that the U.K., Portugal and other countries were offering to try to kick-start development of renewables. 

Now some wind companies say they've brought generation costs down to between 2 and 4 cents — something that appeals to provinces that are looking to significantly increase their renewable energy deployment plans.

The cost of electricity varies across Canada, by province and time of day, from an average of 6.5 cents per kWh in Quebec to as much as 15 cents in Halifax.

Capital Power, an Edmonton-based company, recently won a contract for the 298.8-megawatt (MW) Whitla wind project near Medicine Hat, Alta., with a bid of 3.9 cents per kWh. That price covers capital costs, transmission and connection to the grid, as well as the cost of building the project.

Jerry Bellikka, director of government relations, said Capital Power has been building wind projects for a decade, in the U.S., Alberta, B.C. and other provinces. In that time the price of wind generation equipment has been declining continually, while the efficiency of wind turbines increases.

 

Increased efficiency

"It used to be one tower was 1 MW; now each turbine generates 3.3 MW. There's more electricity generated per tower than several years ago," he said.

One wild card for Whitla may be steel prices, because the U.S. and Canada have slapped tariffs on one another's steel and aluminum products. Whitla's towers are set to come from Colorado, and many of the smaller components from China.

 


"We haven't yet taken delivery of the steel. It remains to be seen if we are affected by the tariffs." Belikka said.

Another company had owned the site and had several years of meteorological data, including wind speeds at various heights on the site, which is in a part of southern Alberta known for its strong winds.

But the choice of site was also dependent on the municipality, with rural Forty Mile County eager for the development, Bellikka said.

 

Alberta aims for 30% electricity from wind by 2030

Alberta wants 30 per cent of its electricity to come from renewable sources by 2030 and, as an energy powerhouse, is encouraging that with a guaranteed pricing mechanism in what is otherwise a market-bidding process.

While the cost of generating energy for the Alberta Electric System Operator (AESO) fluctuates hourly and can be a lot higher when there is high demand, the winners of the renewable energy contracts are guaranteed their fixed-bid price.

The average pool price of electricity last year in Alberta was 5 cents per kWh; in boom times it rose to closer to 8 cents. But if the price rises that high after the wind farm is operating, the renewable generator won't get it, instead rebating anything over 3.9 cents back to the government.

On the other hand, if the average or pool price is a low 2 cents per kWh, the province will top up the generator's return to 3.9 cents.

This contract-for-differences (CfD) payment mechanism has been tested in renewable contracts in the U.K. and other jurisdictions, including some U.S. states, according to AESO.
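
To make the arithmetic concrete, here is a minimal sketch of that two-sided settlement in Python. It is an illustration only: the 3.9-cents-per-kWh strike price is the Whitla bid cited above, while the pool prices and the 1,000 MWh volume are hypothetical numbers chosen to show the rebate and top-up cases.

```python
# Minimal sketch of the two-sided contract-for-differences settlement
# described above. Illustrative only: the 3.9 cents/kWh strike is the
# Whitla bid; pool prices and volume are hypothetical.

def cfd_settlement(pool_price_cents: float, strike_cents: float, mwh: float) -> float:
    """Payment in dollars from the province to the generator.

    A negative result means the generator rebates the excess back,
    as when the Alberta pool price rises above the fixed bid.
    """
    # 1 cent/kWh equals $10/MWh, hence the factor of 10.
    return (strike_cents - pool_price_cents) * 10.0 * mwh


strike = 3.9  # cents/kWh, the winning Whitla bid

# Boom conditions: pool price near 8 cents/kWh, generator rebates the excess.
print(cfd_settlement(pool_price_cents=8.0, strike_cents=strike, mwh=1000))

# Slack conditions: pool price near 2 cents/kWh, province tops up to 3.9 cents.
print(cfd_settlement(pool_price_cents=2.0, strike_cents=strike, mwh=1000))
```

On these assumed numbers, 1,000 MWh sold at an 8-cent pool price returns roughly $41,000 to the province, while the same volume at a 2-cent pool price draws a top-up of roughly $19,000.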

 

Competitive bidding in Saskatchewan

In Saskatchewan, the plan is to double renewable electricity to 50 per cent of generation capacity by 2030, using an open bidding system between private-sector generators and publicly owned SaskPower.

In bidding last year on a renewable contract, 15 renewable power developers submitted bids, with an average price of 4.2 cents per kWh.

One low bidder was Potentia, with a proposal for a 200 MW project that should provide electricity for 90,000 homes in the province at less than 3 cents per kWh, according to Robert Hornung of the Canadian Wind Energy Association.

"The cost of wind energy has fallen 70 per cent in the last nine years," he says. "In the last decade, more wind energy has been built than any other form of electricity."

Ontario remains the leading user of wind with 4,902 MW of wind generation as of December 2017, most of that capacity built under a system that offered an above-market price for renewable power, put in place by the previous Liberal government.

In June of last year, the new Conservative government of Doug Ford halted more than 700 renewable-energy projects, one of them a wind farm that is sitting half-built.

The feed-in tariff system that offered a higher rate to early builders of renewable generation ended in 2016, but early contracts with guaranteed prices could last up to 20 years.

Hornung says Ontario now has an excess of generating capacity, as it went on building when the 2008-9 bust cut market consumption dramatically.

But he insists wind can compete in the open market, offering low prices for generation when Ontario needs new capacity.

"I expect there will be competitive processes put in place. I'm quite confident wind projects will continue to go ahead. We're well positioned to do that."

 

Related News


BC Hydro launches program to help coronavirus-affected customers with their bills

BC Hydro COVID-19 Bill Relief provides payment deferrals, no-penalty payment plans, Crisis Fund grants up to $600, and utility bill assistance as customers face pandemic layoffs, social distancing, and increased home power usage.

 

Key Points

A BC Hydro program offering bill deferrals, no-penalty plans, and up to $600 Crisis Fund grants during COVID-19.

✅ Defer payments or set no-penalty payment plans

✅ Apply for up to $600 Customer Crisis Fund grants

✅ Measures to ensure reliable power and remote customer service

 

BC Hydro is implementing a program, including bill relief measures, to help people pay their bills if they’re affected by the novel coronavirus.

The Crown corporation says British Columbians are facing a variety of financial pressures related to the COVID-19 pandemic as some workplaces close or reduce staffing levels.

BC Hydro said it also expects increased power usage as more people stay home amid health officials’ requests that people take social distancing measures.

Under the new program, customers will be able to defer bill payments or arrange a payment plan with no penalty.

BC Hydro says some customers could also be eligible for grants of up to $600 under its Customer Crisis Fund if they are facing power disconnection due to job loss, illness or the loss of a family member.

The company says it has taken precautions to keep power running by isolating key facilities, including its control centre, and by increasing its cleaning schedule.

It has also closed its walk-in customer service desks to reduce risk from face-to-face contact and suspended all non-essential business travel, public meetings and site tours.

 

Related News


Jolting the brain's circuits with electricity is moving from radical to almost mainstream therapy

Brain Stimulation is transforming neuromodulation, from TMS and DBS to closed loop devices, targeting neural circuits for addiction, depression, Parkinsons, epilepsy, and chronic pain, powered by advanced imaging, AI analytics, and the NIH BRAIN Initiative.

 

Key Points

Brain stimulation uses pulses to modulate neural circuits, easing symptoms in depression, Parkinsons, and epilepsy.

✅ Noninvasive TMS and invasive DBS modulate specific brain circuits

✅ Closed loop systems adapt stimulation via real time biomarker detection

✅ Emerging uses: addiction, depression, Parkinsons, epilepsy, chronic pain

 

In June 2015, biology professor Colleen Hanlon went to a conference on drug dependence. As she met other researchers and wandered around a glitzy Phoenix resort’s conference rooms to learn about the latest work on therapies for drug and alcohol use disorders, she realized that out of the 730 posters, there were only two on brain stimulation as a potential treatment for addiction — both from her own lab at Wake Forest School of Medicine.

Just four years later, she would lead 76 researchers on four continents in writing a consensus article about brain stimulation as an innovative tool for addiction. And in 2020, the Food and Drug Administration approved a transcranial magnetic stimulation device to help patients quit smoking, a milestone for substance use disorders.

Brain stimulation is booming. Hanlon can attend entire conferences devoted to the study of what electrical currents do to the intricate networks of highways and backroads that make up the brain’s circuitry. This expanding field of research is slowly revealing truths of the brain: how it works, how it malfunctions, and how electrical impulses, precisely targeted and controlled, might be used to treat psychiatric and neurological disorders.

In the last half-dozen years, researchers have launched investigations into how different forms of neuromodulation affect addiction, depression, loss-of-control eating, tremor, chronic pain, obsessive compulsive disorder, Parkinson’s disease, epilepsy, and more. Early studies have shown subtle electrical jolts to certain brain regions could disrupt circuit abnormalities — the miscommunications — that are thought to underlie many brain diseases, and help ease symptoms that persist despite conventional treatments.

The National Institutes of Health’s massive BRAIN Initiative put circuits front and center, distributing $2.4 billion to researchers since 2013 to devise and use new tools to observe interactions between brain cells and circuits. That, in turn, has kindled interest from the private sector. Among the advances that have enhanced our understanding of how distant parts of the brain talk with one another are new imaging technology and the use of machine learning to interpret complex brain signals and analyze what happens when circuits go haywire.

Still, the field is in its infancy, and even therapies that have been approved for use in patients with, for example, Parkinson’s disease or epilepsy help only a minority of patients, and expectations can outpace evidence. “If it was the Bible, it would be the first chapter of Genesis,” said Michael Okun, executive director of the Norman Fixel Institute for Neurological Diseases at University of Florida Health.

As brain stimulation evolves, researchers face daunting hurdles, and not just scientific ones. How will brain stimulation become accessible to all the patients who need it, given how expensive and invasive some treatments are? Proving to the FDA that brain stimulation works, and does so safely, is complicated and expensive. Even with a swell of scientific momentum and an influx of funding, the agency has so far cleared brain stimulation for only a handful of limited conditions. Persuading insurers to cover the treatments is another challenge altogether. And outside the lab, researchers are debating nascent issues, such as the ethics of mind control, the privacy of a person’s brain data, and how to best involve patients in the study of the human brain’s far-flung regions.

Neurologist Martha Morrell is optimistic about the future of brain stimulation. She remembers the shocked reactions of her colleagues in 2004 when she left full-time teaching at Stanford (she still has a faculty appointment as a clinical professor of neurology) to direct clinical trials at NeuroPace, then a young company making neurostimulator systems to potentially treat epilepsy patients.

“When I started working on this, everybody thought I was insane,” said Morrell. Nearly 20 years in, she sees a parallel between the story of jolting the brain’s circuitry and that of early implantable cardiac devices, such as pacemakers and defibrillators, which initially “were used as a last option, where all other medications have failed.” Now, “the field of cardiology is very comfortable incorporating electrical therapy, device therapy, into routine care. And I think that’s really where we’re going with neurology as well.”


Reaching a ‘slope of enlightenment’
Parkinson’s is, in some ways, an elder in the world of modern brain stimulation, and it shows the potential as well as the limitations of the technology. Surgeons have been implanting electrodes deep in the brains of Parkinson’s patients since the late 1990s, and in people with more advanced disease since the early 2000s.

In that time, it’s gone through the “hype cycle,” said Okun, the national medical adviser to the Parkinson’s Foundation since 2006. Feverish excitement and overinflated expectations have given way to reality, bringing scientists to a “slope of enlightenment,” he said. They have found deep brain stimulation to be very helpful for some patients with Parkinson’s, rendering them almost symptom-free by calming the shaking and tremors that medications couldn’t. But it doesn’t stop the progression of the disease, or resolve some of the problems patients with advanced Parkinson’s have walking, talking, and thinking.

In 2015, the same year Hanlon found only her lab’s research on brain stimulation at the addiction conference, Kevin O’Neill watched one finger on his left hand start doing something “funky.” One finger twitched, then two, then his left arm started tingling and a feeling appeared in his right leg, like it was about to shake but wouldn’t — a tremor.

“I was assuming it was anxiety,” O’Neill, 62, told STAT. He had struggled with anxiety before, and he had endured a stressful year: a separation, selling his home, starting a new job at a law firm in California’s Bay Area. But a year after his symptoms first began, O’Neill was diagnosed with Parkinson’s.


Doctors prescribed him pills that promote the release of dopamine, to offset the death of brain cells that produce this messenger molecule in circuits that control movement. But he took them infrequently because he worried about insomnia as a side effect. Walking became difficult — “I had to kind of think my left leg into moving” — and the labor lawyer found it hard to give presentations and travel to clients’ offices.

A former actor with an outgoing personality, he developed social anxiety and didn’t tell his bosses about his diagnosis for three years, and wouldn’t have, if not for two workdays in summer 2018 when his tremors were severe and obvious.


It was during that period that he learned about deep brain stimulation, at a support group for Parkinson’s patients. “I thought, ‘I will never let anybody fuss with my brain. I’m not going to be a candidate for that,’” he recalled. “It felt like mad scientist science fiction. Like, are you kidding me?”

But over time, the idea became less radical, as O’Neill spoke to DBS patients and doctors and did his own research, and as his symptoms worsened. He decided to go for it. Last May, doctors at the University of California, San Francisco surgically placed three metal leads into his brain, connected by thin cords to two implants in his chest, just near the clavicles. A month later, he went into the lab and researchers turned the device on.

“That was a revelation that day,” he said. “You immediately — literally, immediately — feel the efficacy of these things. … You go from fully symptomatic to non-symptomatic in seconds.”

When his nephew pulled up to the curb to pick him up, O’Neill started dancing, and his nephew teared up. The following day, O’Neill couldn’t wait to get out of bed and go out, even if it was just to pick up his car from the repair shop.

In the year since, O’Neill’s walking has gone from “awkward and painful” to much improved, and his tremors are all but gone. When he is extra frazzled, like while renovating and moving into his new house overlooking the hills of Marin County, he feels tense and his left arm shakes and he worries the DBS is “failing,” but generally he returns to a comfortable, tremor-free baseline.

O’Neill worried about the effects of DBS wearing off but, for now, he can think “in terms of decades, instead of years or months,” he recalled his neurologist telling him. “The fact that I can put away that worry was the big thing.”

He’s just one patient, though. The brain has regions that are mostly uniform across all people. The functions of those regions also tend to be the same. But researchers suspect that how brain regions interact with one another — who mingles with whom, and what conversation they have — and how those mixes and matches cause complex diseases varies from person to person. So brain stimulation looks different for each patient.

Each case of Parkinson’s manifests slightly differently, and that’s a bit of knowledge that applies to many other diseases, said Okun, who organized the nine-year-old Deep Brain Stimulation Think Tank, where leading researchers convene, review papers, and publish reports on the field’s progress each year.

“I think we’re all collectively coming to the realization that these diseases are not one-size-fits-all,” he said. “We have to really begin to rethink the entire infrastructure, the schema, the framework we start with.”

Brain stimulation is also used frequently to treat people with common forms of epilepsy, and has reduced the number of seizures or improved other symptoms in many patients. Researchers have also been able to collect high-quality data about what happens in the brain during a seizure — including identifying differences between epilepsy types. Still, only about 15% of patients are symptom-free after treatment, according to Robert Gross, a neurosurgery professor at Emory University in Atlanta.

“And that’s a critical difference for people with epilepsy. Because people who are symptom-free can drive,” which means they can get to a job in a place like Georgia, where there is little public transit, he said. So taking neuromodulation “from good to great,” is imperative, Gross said.


Renaissance for an ancient idea
Recent advances are bringing about what Gross sees as “almost a renaissance period” for brain stimulation, though the ideas that undergird the technology are millennia old. Neuromodulation goes back to at least ancient Egypt and Greece, when electrical shocks from a ray, called the “torpedo fish,” were recommended as a treatment for headache and gout. Over centuries, the fish zaps led to doctors burning holes into the brains of patients. Those “lesions” worked, somehow, but nobody could explain why they alleviated some patients’ symptoms, Okun said.

Perhaps the clearest predecessor to today’s technology is electroconvulsive therapy (ECT), which in a rudimentary and dangerous way began being used on patients with depression roughly 100 years ago, said Nolan Williams, director of the Brain Stimulation Lab at Stanford University.

More modern forms of brain stimulation came about in the United States in the mid-20th century. A common, noninvasive approach is transcranial magnetic stimulation, which involves placing an electromagnetic coil on the scalp to transmit a current into the outermost layer of the brain. Vagus nerve stimulation (VNS), used to treat epilepsy, zaps a nerve that contributes to some seizures.

The most invasive option, deep brain stimulation, involves implanting in the skull a device attached to electrodes embedded in deep brain regions, such as the amygdala, that can’t be reached with other stimulation devices. In 1997, the FDA gave its first green light to deep brain stimulation as a treatment for tremor, and then for Parkinson’s in 2002 and the movement disorder dystonia in 2003.

Even as these treatments were cleared for patients, though, what was happening in the brain remained elusive. But advanced imaging tools now let researchers peer into the brain and map out networks — a recent breakthrough that researchers say has propelled the field of brain stimulation forward as much as increased funding has. Imaging of both human brains and animal models has helped researchers identify the neuroanatomy of diseases, target brain regions with more specificity, and watch what was happening after electrical stimulation.

Another key step has been the shift from open-loop stimulation — a constant stream of electricity — to closed-loop stimulation that delivers targeted, brief jolts in response to a symptom trigger. To make use of the futuristic technology, labs need people to develop artificial intelligence tools to interpret the large data sets a brain implant generates, and to tailor devices based on that information.
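
The contrast between the two modes can be pictured as a simple control loop. The sketch below is purely conceptual: the biomarker read-out, the threshold and the pulse routine are hypothetical placeholders for illustration, not any device maker's actual interface.

```python
# Conceptual sketch of open-loop vs. closed-loop stimulation as contrasted
# above. The biomarker read-out, threshold and pulse routine are hypothetical
# placeholders, not a real implant's API.
import random

def read_biomarker() -> float:
    """Stand-in for a signal feature extracted from the implant's recordings."""
    return random.random()

def deliver_pulse() -> None:
    print("brief stimulation pulse delivered")

def open_loop(n_steps: int) -> None:
    """Open loop: stimulation runs continuously, regardless of brain state."""
    for _ in range(n_steps):
        deliver_pulse()

def closed_loop(n_steps: int, threshold: float = 0.8) -> None:
    """Closed loop: stimulate only when the monitored biomarker crosses
    a symptom-related threshold."""
    for _ in range(n_steps):
        if read_biomarker() > threshold:
            deliver_pulse()

open_loop(3)     # always fires
closed_loop(10)  # fires only on threshold crossings
```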

“We’ve needed to learn how to be data scientists,” Morrell said.

Affinity groups, like the NIH-funded Open Mind Consortium, have formed to fill that gap. Philip Starr, a neurosurgeon and developer of implantable brain devices at the University of California at San Francisco Health system, leads the effort to teach physicians how to program closed-loop devices, and works to create ethical standards for their use. “There’s been extraordinary innovation after 20 years of no innovation,” he said.

The BRAIN Initiative has been critical, several researchers told STAT. “It’s been a godsend to us,” Gross said. The NIH’s Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative was launched in 2013 during the Obama administration with a $50 million budget. BRAIN now spends over $500 million per year. Since its creation, BRAIN has given over 1,100 awards, according to NIH data. Part of the initiative’s purpose is to pair up researchers with medical technology companies that provide human-grade stimulation devices to the investigators. Nearly three dozen projects have been funded through the investigator-devicemaker partnership program and through one focused on new implantable devices for first-in-human use, according to Nick Langhals, who leads work on neurological disorders at the initiative.

The more BRAIN invests, the more research is spawned. “We learn more about what circuits are involved … which then feeds back into new and more innovative projects,” he said.

Many BRAIN projects are still in early stages, finishing enrollment or small feasibility studies, Langhals said. Over the next couple of years, scientists will begin to see some of the fruits of their labor, which could lead to larger clinical trials, or to companies developing more refined brain stimulation implants, Langhals said.

Money from the National Institute of Mental Health, as well as the NIH’s Helping to End Addiction Long-term (HEAL) initiative, has similarly sweetened the appeal of brain stimulation, both for researchers and industry. “A critical mass” of companies interested in neuromodulation technology has mushroomed where, for two decades, just a handful of companies stood, Starr said.

More and more, pharmaceutical and digital health companies are looking at brain stimulation devices “as possible products for their future,” said Linda Carpenter, director of the Butler Hospital TMS Clinic and Neuromodulation Research Facility.


‘Psychiatry 3.0’
The experience with using brain stimulation to stop tremors and seizures inspired psychiatrists to begin exploring its use as a potentially powerful therapy for healing, or even getting ahead of, mental illness.

In 2008, the FDA approved TMS for patients with major depression who had tried, and not gotten relief from, drug therapy. “That kind of opened the door for all of us,” said Hanlon, a professor and researcher at the Center for Research on Substance Use and Addiction at Wake Forest School of Medicine. The last decade saw a surge of research into how TMS could be used to reset malfunctioning brain circuits involved in anxiety, depression, obsessive-compulsive disorder, and other conditions.

“We’re certainly entering into what a lot of people are calling psychiatry 3.0,” Stanford’s Williams said. “Whereas the first iteration was Freud and all that business, the second one was the psychopharmacology boom, and this third one is this bit around circuits and stimulation.”

Drugs alleviate some patients’ symptoms while simultaneously failing to help many others, but psychopharmacology clearly showed “there’s definitely a biology to this problem,” Williams said — a biology that in some cases may be more amenable to brain stimulation.

The exact mechanics of what happens between cells when brain circuits … well, short-circuit, is unclear. Researchers are getting closer to finding biomarkers that warn of an incoming depressive episode, or wave of anxiety, or loss of impulse control. Those brain signatures could be different for every patient. If researchers can find molecular biomarkers for psychiatric disorders — and find ways to preempt those symptoms by shocking particular brain regions — that would reshape the field, Williams said.

Not only would disease-specific markers help clinicians diagnose people, but they could help chip away at the stigma that paints mental illness as a personal or moral failing instead of a disease. That’s what happened for epilepsy in the 1960s, when scientific findings nudged the general public toward a deeper understanding of why seizures happen, and it’s “the same trajectory” Williams said he sees for depression.

His research at the Stanford lab also includes work on suicide, and obsessive-compulsive disorder, which the FDA said in 2018 could be treated using noninvasive TMS. Williams considers brain stimulation, with its instantaneity, to be a potential breakthrough for urgent psychiatric situations. Doctors know what to do when a patient is rushed into the emergency room with a heart attack or a stroke, but there is no immediate treatment for psychiatric emergencies, he said. Williams wonders: What if, in the future, a suicidal patient could receive TMS in the emergency room and be quickly pulled out of their depressive mental spiral?

Researchers are also actively investigating the brain biology of addiction. In August 2020, the FDA approved TMS for smoking cessation, the first such OK for a substance use disorder, which is “really exciting,” Hanlon said. Although there is some nuance when comparing substance use disorders, a primal mechanism generally defines addiction: the eternal competition between “top-down” executive control functions and “bottom-up” cravings. It’s the same process that is at work when one is deciding whether to eat another cookie or abstain — just exacerbated.

Hanlon is trying to figure out if the stop and go circuits are in the same place for all people, and whether neuromodulation should be used to strengthen top-down control or weaken bottom-up cravings. Just as brain stimulation can be used to disrupt cellular misfiring, it could also be a tool for reinforcing helpful brain functions, or for giving the addicted brain what it wants in order to curb substance use.

Evidence suggests many people with schizophrenia smoke cigarettes (a leading cause of early death for this population) because nicotine reduces the “hyperconnectivity” that characterizes the brains of people with the disease, said Heather Ward, a research fellow at Boston’s Beth Israel Deaconess Medical Center. She suspects TMS could mimic that effect, and therefore reduce cravings and some symptoms of the disease, and she hopes to prove that in a pilot study that is now enrolling patients.

If the scientific evidence proves out, clinicians say brain stimulation could be used alongside behavioral therapy and drug-based therapy to treat substance use disorders. “In the end, we’re going to need all three to help people stay sober,” Hanlon said. “We’re adding another tool to the physician’s toolbox.”

Decoding the mysteries of pain
A favorable outcome to the ongoing research, one that would fling the doors to brain stimulation wide open for patients with myriad disorders, is far from guaranteed. Chronic pain researchers know that firsthand.

Chronic pain, among the most mysterious and hard-to-study medical phenomena, was the first use for which the FDA approved deep brain stimulation, said Prasad Shirvalkar, an assistant professor of anesthesiology at UCSF. But when studies didn’t pan out after a year, the FDA retracted its approval.

Shirvalkar is working with Starr and neurosurgeon Edward Chang on a profoundly complex problem: “decoding pain in the brain states, which has never been done,” as Starr told STAT.

Part of the difficulty of studying pain is that there is no objective way to measure it. Much of what we know about pain is from rudimentary surveys that ask patients to rate how much they’re hurting, on a scale from zero to 10.

Using implantable brain stimulation devices, the researchers ask patients for a 0-to-10 rating of their pain while recording up-and-down cycles of activity in the brain. They then use machine learning to compare the two streams of information and see what brain activity correlates with a patient’s subjective pain experience. Implantable devices let researchers collect data over weeks and months, instead of basing findings on small snippets of information, allowing for a much richer analysis.
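
As a rough illustration of that kind of analysis, the sketch below lines up an invented brain-activity feature with invented 0-to-10 pain ratings and checks how strongly the two streams track each other. Real studies rely on far richer models and weeks of recordings; the feature, data and fit here are toy stand-ins.

```python
# Toy sketch of the pain-decoding idea described above: compare a stream of
# self-reported 0-to-10 pain ratings with a recorded brain-activity feature
# and measure how well they track each other. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical brain-activity feature, sampled once per pain survey.
neural_feature = rng.normal(size=200)

# Hypothetical pain ratings loosely driven by that feature, clipped to 0-10.
pain_scores = np.clip(5 + 2 * neural_feature + rng.normal(scale=1.5, size=200), 0, 10)

# How strongly do the two streams co-vary?
r = np.corrcoef(neural_feature, pain_scores)[0, 1]
print(f"correlation between feature and reported pain: {r:.2f}")

# The simplest possible "decoder": a least-squares line from feature to pain.
slope, intercept = np.polyfit(neural_feature, pain_scores, deg=1)
print(f"predicted pain at feature value 1.0: {slope * 1.0 + intercept:.1f}")
```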

 

Related News


UK's Energy Transition Stalled by Supply Delays

UK Clean Energy Supply Chain Delays are slowing decarbonization as transformer lead times, grid infrastructure bottlenecks, and battery storage contractors raise costs and risk 2030 targets despite manufacturing expansions by Siemens Energy and GE Vernova.

 

Key Points

Labor and equipment bottlenecks delay transformers and grid upgrades, risking the UK's 2030 clean power target.

✅ Transformer lead times doubled or tripled, raising project costs

✅ Grid infrastructure and battery storage contractors in short supply

✅ Firms expand capacity cautiously amid uncertain demand signals

 

The United Kingdom's ambitious plans to transition to clean energy are encountering significant obstacles due to prolonged delays in obtaining essential equipment such as transformers and other electrical components. These supply chain challenges are impeding the nation's progress toward decarbonizing its power sector by 2030.

Supply Chain Challenges

The global surge in demand for renewable energy infrastructure, including large-scale storage solutions, has led to extended lead times for critical components. For example, Statera Energy's storage plant in Thurrock experienced a 16-month delay for transformers from Siemens Energy. Such delays threaten the UK's goal to decarbonize power supplies by 2030.

Economic Implications

These supply chain constraints have doubled or tripled lead times over the past decade, raising costs and straining the energy transition. Despite efforts to expand manufacturing capacity by companies like GE Vernova, Hitachi Energy, and Siemens Energy, the sector remains cautious about overinvesting without predictable demand, and setbacks at Hinkley Point C have reinforced concerns about delivery risks.

Workforce and Manufacturing Capacity

Additionally, there is a limited number of companies capable of constructing and maintaining battery sites, adding to the challenges. These issues underscore the necessity for new factories and a trained workforce to support the electrification plans and meet the 2030 targets.

Government Initiatives

In response to these challenges, the UK government is exploring various strategies to bolster domestic manufacturing capabilities and streamline supply chains. Investments in infrastructure and workforce development are being considered to mitigate the impact of global supply chain disruptions and advance the UK's green industrial revolution.

The UK's energy transition is at a critical juncture, with supply chain delays posing substantial risks to its decarbonization goals. Addressing these challenges will require coordinated efforts between the government, industry stakeholders and international partners to ensure a sustainable and timely shift to clean energy.

 
