Kentucky PSC looking for more reliability data

By Kentucky Public Service Commission


Electric utilities in Kentucky are being told to provide the Kentucky Public Service Commission (PSC) with more detailed information about the reliability of their distribution systems.

In a recent order, the PSC also requires electric utilities to submit plans detailing how they manage vegetation along distribution system power lines.

“The information we are requiring from the electric utilities will enable the PSC to determine in the coming years whether there is a need for standards for both electric service reliability and vegetation management,” PSC Chairman Mark David Goss said. “We cannot, nor should we, set standards until we know how well the utilities are doing.”

This order marks the end of an investigation opened by the PSC in December 2006. The investigation stemmed from recommendations made in 2005 in a PSC report on Kentucky's electric infrastructure. That report noted that utilities are not required to and do not report reliability data in a standard way. Utilities are also not subject to vegetation management standards.

All regulated electric distribution utilities in Kentucky participated in the proceeding. They provided the PSC with information on how they measure and report reliability and their vegetation management practices.

In written testimony and at a hearing, utilities also provided their views on requiring uniform reliability reporting, setting standards for reliability and vegetation management and requiring vegetation management plans.

The PSC determined that a reliability standard is not needed at this time, in part because there is “no broad evidence of inadequate service or sufficient comparative information” to support such a standard, the PSC said in the order.

Furthermore, considerable differences in “geography, customer density, age of infrastructure, past operating practices, and other factors” can lead to differing expected reliability levels among utilities, the PSC said. A uniform standard might be too lenient for some utilities but unreasonable for others. But utilities should report their reliability data in a uniform manner, the PSC said.

This order sets out the following reporting requirements, which take effect immediately:

  • Each reliability index should be calculated for at least the five calendar years preceding the filing of the annual report, which is due by April 1 of each year.
  • Each reliability index should be calculated for the utility's entire system.
  • Utilities are to record outages and their duration.
  • Reports must include an analysis of the causes of outages in the previous year and how much each cause contributed to outages overall.
  • Utilities are to identify the 10 worst-performing circuits for each outage index and identify the predominant cause of the reliability problems on each circuit.
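Two of the indices the electric industry commonly uses for this kind of reporting are SAIFI and SAIDI, defined in IEEE Std 1366. The order does not prescribe any particular code, and the numbers below are made up, but a minimal sketch shows how such indices fall out of an outage log:

```python
# Illustrative only: compute two standard IEEE 1366 reliability indices
# from a simple outage log. Figures are hypothetical.

from dataclasses import dataclass

@dataclass
class Outage:
    customers_affected: int
    duration_minutes: float

def saifi(outages, total_customers):
    """System Average Interruption Frequency Index:
    total customer interruptions / total customers served."""
    return sum(o.customers_affected for o in outages) / total_customers

def saidi(outages, total_customers):
    """System Average Interruption Duration Index:
    total customer-minutes of interruption / total customers served."""
    return sum(o.customers_affected * o.duration_minutes
               for o in outages) / total_customers

log = [Outage(1200, 90.0), Outage(300, 45.0)]
print(saifi(log, 50_000))  # 0.03 interruptions per customer
print(saidi(log, 50_000))  # 2.43 minutes of interruption per customer
```

Calculating each index over the utility's entire system, as the order requires, amounts to feeding the full year's outage log and total customer count into formulas like these.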

“This action by the PSC is just the first step in a careful process of evaluating how well the jurisdictional electric utilities in Kentucky are meeting the statutory requirement that they provide their customers with reasonable continuity of service,” Goss said.

“Kentucky’s electric utilities do not appear to have any widespread reliability issues,” he said. “These reporting requirements will serve to identify any emerging problems and enable the PSC to take the necessary steps to have them corrected.”

The vegetation management plans that utilities must file are to include information on how often rights-of-way are cleared, how reliability data are used in setting vegetation management practices and how a utility judges the effectiveness of those practices.

In assessing vegetation management, the PSC noted that the need to keep vegetation away from power lines often conflicts with the desire of property owners to minimize tree trimming. In opposing a uniform vegetation management standard, utilities said that they need flexibility in order to accommodate property owners on a case-by-case basis.

The PSC agreed, but noted that an evaluation of reliability data over the next several years might indicate the need for a vegetation management requirement in order to improve reliability. However, the PSC found that formal vegetation management plans meeting certain minimum requirements are necessary and ordered utilities to submit such plans by the end of this year. While most utilities already have internal plans, they are not currently required to be filed with the commission.

Utilities are required to report reliability using specific methodologies and indices that are standard in the electric industry.


B.C. politicians must focus more on phasing out fossil fuels, report says

BC Fossil Fuel Phase-Out outlines a just transition to a green economy, meeting climate targets by mid-century through carbon budgets, ending subsidies for fracking, capping production, and investing in renewable energy, remediation, and resilient infrastructure.

 

Key Points

A strategic plan to wind down oil and gas, end subsidies, and achieve climate targets with a just transition in BC.

✅ End new leases, phase out subsidies, cap fossil production

✅ Carbon budgets and timelines to meet mid-century climate targets

✅ Just transition: income supports, retraining, site remediation jobs

 

Politicians in British Columbia aren't focused enough on phasing out fossil fuel industries, a new report says.

The report, authored by the left-leaning Canadian Centre for Policy Alternatives, says the province must move away from fossil fuel industries by mid-century in order to meet its climate targets, but adds that the B.C. government is ill-prepared to transition to a green economy.

"We are totally moving in the wrong direction," said economist Marc Lee, one of the authors of the report, on The Early Edition Wednesday. 

He said most of the emphasis of B.C. government policy has been on reducing emissions from transportation and buildings, while the province continues to subsidize fossil fuel extraction, such as the fracking projects Lee said should be phased out.

“What we are putting on the table is politically unthinkable right now,” said Lee, adding that last month's provincial budget called for a 26 per cent increase in gas production over the next three years.

He said B.C. needs to start thinking instead about how it's going to wind down its dependence on fossil fuel industries.

 

'Greener' job transition needed
The report said the provincial government's continued interest in expanding production and exporting fossil fuels suggests little political will to think about a plan to move away from them.

It suggests the threat of major job losses in those industries is contributing to the political inaction, but cited several examples of ways governments can help move workers into greener jobs.

Lee said early retirement provisions or income replacement for transitioning workers are options to consider.

"We actually have seen a lot of real-world policy around transition starting to happen, including in Alberta, which brought in a whole transition package for coal workers producing coal for electricity generation," Lee said.

Lee also said well-paying jobs could be created by, for example, remediating old coal mines and gas wells and building green infrastructure and renewable electricity projects in affected areas.

The report also calls for a moratorium on new fossil fuel leases and ending fossil fuel subsidies, as well as creating carbon budgets and fossil fuel production limits.

"Change is coming," said Lee. "We need to get out ahead of it."

 


New England's solar growth is creating tension over who pays for grid upgrades

New England Solar Interconnection Costs highlight distributed generation strains, transmission charges, distribution upgrades, and DAF fees as National Grid maps hosting capacity, driving queue delays and FERC disputes in Rhode Island and Massachusetts.

 

Key Points

Rising upfront grid upgrade and DAF charges for distributed solar in RI and MA, including some transmission costs.

✅ Upfront grid upgrades shifted to project developers

✅ DAF and transmission charges increase per MW costs

✅ Queue delays tied to hosting capacity and cluster studies

 

Solar developers in Rhode Island and Massachusetts say soaring charges to interconnect with the electric grid are threatening the viability of projects. 

As more large-scale solar projects line up for connections, developers are being charged upfront for the full cost of the infrastructure upgrades required, a long-common practice that they say is now becoming untenable. 

“It is a huge issue that reflects an under-invested grid that is not ready for the volume of distributed generation that we’re seeing and that we need, particularly solar,” said Jeremy McDiarmid, vice president for policy and government affairs at the Northeast Clean Energy Council, a nonprofit business organization. 

Connecting solar and wind systems to the grid often requires upgrades to the distribution system to prevent problems such as voltage fluctuations. Costs can vary considerably from place to place, depending on the amount of distributed generation coming online and the level of capacity planning by regulators, said David Feldman, a senior financial analyst at the National Renewable Energy Laboratory.

“Certainly the Northeast often has more distribution challenges than much of the rest of the country just because it’s more populous and often the infrastructure is older,” he said. “But it’s not unique to the Northeast — in the Midwest, for example, there’s a significant amount of wind projects in the queues and significant delays.”

In Rhode Island and Massachusetts, where strong incentive programs are driving solar development, the level of solar coming online is “exposing the under-investment in the distribution system that is causing these massive costs that National Grid is assigning to particular projects or particular groups of projects,” McDiarmid said. “It is going to be a limiting factor for how much clean energy we can develop and bring online.”

Frank Epps, chief executive officer at Energy Development Partners, has been developing solar projects in Rhode Island since 2010. In that time, he said, interconnection charges on his projects have grown from about $80,000-$120,000 per megawatt to more than $400,000 per megawatt. He attributed the increase to a lack of investment in the distribution network by National Grid over the last decade.

He and other developers say the utility is now adding further to their costs by passing along not just the cost of improving the distribution system — the equivalent of the city street of the grid that brings power directly to customers — but also costs for modifying the transmission system — the interstate highway that moves bulk power over long distances to substations. 

Solar developers who are only requesting to hook into the distribution system, and not applying for transmission service, say they should not be charged for those additional upgrades under state interconnection rules unless they are properly authorized under the federal law that governs the transmission system. 

A Rhode Island solar and wind developer filed a complaint with the Federal Energy Regulatory Commission in February over transmission system improvement charges for its four proposed solar projects. Green Development said National Grid subsidiaries Narragansett Electric and New England Power Company want to charge the company more than $500,000 a year in operating and maintenance expenses assessed as so-called direct assignment facility charges. 

“This amount nearly doubles the interconnection costs associated with the projects,” which total 38.4 megawatts in North Smithfield, the company says in its complaint. “Crucially, these charges are linked to recovering costs associated with providing transmission service — even though no such transmission service is being provided to Green Development.”

But Ted Kresse, a spokesperson for National Grid, said the direct assignment facility, or DAF, construct has been in place for decades and has been applied to any customer affecting the need for transmission upgrades.

“It is the result of the high penetration and continued high volume of distributed generation interconnections that has recently prompted the need for transmission upgrades, and subsequently the pass-through of the associated DAF charges,” he said. 

Several complaints before the Rhode Island Public Utilities Commission object to these DAF and other transmission charges.

One petition for dispute resolution concerns four solar projects totaling 40 MW being developed by Energy Development Partners in a former gravel pit in North Kingstown. Brown University has agreed to purchase the power. 

The developer signed interconnection service agreements with Narragansett Electric in 2019 requiring payment of $21.6 million for costs associated with connecting the projects at a new Wickford Junction substation. Last summer, Narragansett sought to replace those agreements with new ones that reclassified a portion of the costs as transmission-level costs, through New England Power, National Grid’s transmission subsidiary.

That shift would result in additional operational and maintenance charges of $835,000 per year for the estimated 35-year life of the projects, the complaint says.
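Simple, undiscounted arithmetic on the figures cited in the petition suggests the scale of that shift; this is an illustrative back-of-the-envelope sketch, not a financial analysis:

```python
# Rough, undiscounted totals for the North Kingstown projects, using the
# figures reported in the petition. Simple arithmetic, not a financial model.
upfront_interconnection = 21_600_000   # 2019 interconnection service agreements
annual_daf = 835_000                   # added O&M charge per year
project_life_years = 35                # estimated project life

lifetime_daf = annual_daf * project_life_years
total = upfront_interconnection + lifetime_daf
print(f"Lifetime DAF charges: ${lifetime_daf:,}")  # $29,225,000
print(f"Upfront plus DAF:     ${total:,}")         # $50,825,000
```

Even ignoring discounting and escalation, the reclassified charges would exceed the original $21.6 million upfront payment over the life of the projects.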

“This came as a complete shock to us,” Epps said. “We’re not just paying for the maintenance of a new substation. We are paying a share of the total cost that the system owner has to own and operate the transmission system. So all of the sudden, it makes it even tougher for distributed energy resources to be viable.”

In its response to the petition, National Grid argues that the charges are justified because the solar projects will require transmission-level upgrades at the new substation. The company argues that the developer should be responsible for the costs rather than ratepayers, “who are already supporting renewable energy development through their electric rates.”

Seth Handy, one of the lawyers representing Green Development in the FERC complaint, argues that putting transmission system costs on distribution assets is unfair because the distributed resources are “actually reducing the need to move electricity long distances. We’ve been fighting these fights a long time over the underestimating of the value of distributed energy in reducing system costs.”

Handy is also representing the Episcopal Diocese of Rhode Island before the state Supreme Court in its appeal of an April 2020 public utilities commission order upholding similar charges for a proposed 2.2-megawatt solar project at the diocese’s conference center and camp in Glocester. 

Todd Bianco, principal policy associate at the utilities commission, said neither he nor the chairperson can comment on the pending dockets contesting these charges. But he noted that some of these issues are under discussion in another docket examining National Grid’s standards for connecting distributed generation. Among the proposals being considered is the appointment of an independent ombudsperson to resolve interconnection disputes. 

Separately, legislation pending before the Rhode Island General Assembly would remove responsibility for administering the interconnection of renewable energy from utilities, and put it under the authority of the Rhode Island Infrastructure Bank, a financing agency.

Handy, who recently testified in support of the bill, said he believes National Grid has too many conflicting interests to administer interconnection charges in a timely, transparent and fair fashion. In particular, he noted the company's interests in expanding natural gas infrastructure. 

“There are all kinds of economic interests that they have that conflict with our state policy to provide lower-cost renewable energy and more secure energy solutions,” Handy said.

In testimony submitted to the House Committee on Corporations opposing the legislation, National Grid said such powers are well beyond the purpose and scope of the infrastructure bank. And it cited figures showing Rhode Island is third in the country for the most installed solar per square mile (behind New Jersey and Massachusetts).

Nadav Enbar, program manager at the Electric Power Research Institute, a nonprofit research organization for the utility industry, said interconnection delays and higher costs are becoming more common due to “the incredible uptake” in distributed renewable energy, particularly solar.

That’s impacting hosting capacity: the room available on a circuit to connect additional resources without compromising reliability and safety. 

“As hosting capacity is being reduced, it’s causing an increasing number of situations where utilities need to study their systems to guarantee interconnection without compromising their systems,” he said. “And that is the reason why you’re starting to see some delays, and it has translated into some greater costs because of the need for upgrades to infrastructure.”
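Real hosting-capacity studies involve detailed power-flow modeling of voltage, thermal, and protection constraints, but the basic bookkeeping Enbar describes can be sketched with hypothetical numbers:

```python
# Toy illustration (hypothetical numbers, not a utility's actual method):
# hosting capacity as the headroom a circuit has for new distributed
# generation before upgrades are triggered.

def remaining_hosting_capacity_mw(circuit_limit_mw, interconnected_mw,
                                  queued_mw=0.0):
    """Headroom left once existing and queued DG are netted against a
    single circuit limit. Real studies model many constraints, not one."""
    return max(0.0, circuit_limit_mw - interconnected_mw - queued_mw)

print(remaining_hosting_capacity_mw(10.0, 6.5, 2.0))  # 1.5 MW of headroom
print(remaining_hosting_capacity_mw(10.0, 9.0, 2.0))  # 0.0 -> study/upgrade
```

When the headroom reaches zero, the next project in the queue triggers the system studies and upgrade costs described above.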

The cost depends on the age or absence of infrastructure, projected load growth, the number of renewable energy projects in the queue, and other factors, he said. As utilities come under increasing pressure to meet state renewable goals, some (including National Grid) are beginning to publish hosting capacity maps that give developers and policymakers detailed information about the amount of distributed energy that can be accommodated at various locations on the grid, he said. 

In addition, the coming availability of high-tech “smart inverters” should help ease some of these problems because they provide the grid with more flexibility when it comes to connecting and communicating with distributed energy resources, Enbar said. 

In Massachusetts, the Department of Public Utilities has opened a docket to explore ways to better plan for and share the cost of upgrading distribution infrastructure to accommodate solar and other renewable energy sources. National Grid has been conducting “cluster studies” there that attempt to analyze the transmission impacts of a group of solar projects and the corresponding interconnection cost to each developer.

Kresse, of National Grid, said the company favors cost-sharing methodologies under consideration that would “provide a pathway to spread cost over the total enabled capacity from the upgrade, as opposed to spreading the cost over only those customers in the queue today.” 

Solar developers want regulators to take an even broader approach that factors in how the deployment of renewables and the resulting infrastructure upgrades benefit not just the interconnecting generator, but all customers. 

“Right now, if your project is the one that causes a multimillion-dollar upgrade, you are assigned that cost even though that upgrade is going to benefit a lot of other projects, as well as make the grid stronger,” said McDiarmid, of the clean energy council. “What we’re asking for is a way of allocating those costs among a variety of developers, as well as to the grid itself, meaning ratepayers. There’s a societal benefit to increasing the modernization of the grid, and improving the resilience of the grid.”

In the meantime, BlueHub Capital, a Boston-based solar developer focused on serving affordable housing developments, recently learned from National Grid that, as part of one of the area studies, it will be required to pay $5.8 million in transmission and distribution upgrades to interconnect a 2-megawatt solar-plus-storage project approved for a brownfield site in Gardner, Massachusetts. 

According to testimony submitted to the department, the sum is supposed to be paid within the next year, even though the project will have to wait to be interconnected until April 2027, when a new transmission line is completed. In addition, BlueHub will be responsible for DAF charges totaling $3.4 million over the 20-year life of the project. 

“We’re being asked to pay a fortune to provide solar that the state wants,” said DeWitt Jones, BlueHub’s president. “It’s so expensive that the upgrades are driving everyone out of the interconnection queue. The costs stay the same, but they fall on fewer projects. We need a process of grid design and modernization to guide this.”

 


Michigan Public Service Commission grants Consumers Energy request for more wind generation

Consumers Energy Wind Expansion gains MPSC approval in Michigan, adding up to 525 MW of wind power, including Gratiot Farms, while solar capacity requests face delays over cost projections under the renewable portfolio standard targets.

 

Key Points

A regulatory-approved plan enabling Consumers Energy to add 525 MW of wind while solar additions await cost review.

✅ MPSC approves up to 525 MW in new wind projects

✅ Gratiot Farms purchase allowed before May 1

✅ Solar request delayed over high cost projections

 

Consumers Energy Co.’s efforts to expand its renewable offerings gained some traction this week when the Michigan Public Service Commission (MPSC) approved a request for additional wind generation capacity.

Consumers had argued that both more wind and solar facilities are needed to meet the state’s renewable portfolio standard, which was expanded in 2016 to encompass 12.5 percent of the retail power of each Michigan electric provider. That figure continues to rise under the law, reaching 15 percent in 2021. However, Consumers’ request for additional solar facilities was delayed due to what the Commission labeled unrealistically high cost projections.

Consumers will be able to add as much as 525 megawatts of new wind projects, including two proposed 175-megawatt wind projects slated to begin operation this year and next. Consumers has also been allowed to purchase the Gratiot Farms Wind Project before May 1.

The MPSC said a final determination would be made on Consumers’ solar requests during a decision in April. Consumers had sought an additional 100 megawatts of solar facilities, hoping to get them online sometime in 2024 and 2025.

 


Jolting the brain's circuits with electricity is moving from radical to almost mainstream therapy

Brain Stimulation is transforming neuromodulation, from TMS and DBS to closed-loop devices, targeting neural circuits for addiction, depression, Parkinson’s, epilepsy, and chronic pain, powered by advanced imaging, AI analytics, and the NIH BRAIN Initiative.

 

Key Points

Brain stimulation uses electrical pulses to modulate neural circuits, easing symptoms in depression, Parkinson’s, and epilepsy.

✅ Noninvasive TMS and invasive DBS modulate specific brain circuits

✅ Closed-loop systems adapt stimulation via real-time biomarker detection

✅ Emerging uses: addiction, depression, Parkinson’s, epilepsy, chronic pain

 

In June 2015, biology professor Colleen Hanlon went to a conference on drug dependence. As she met other researchers and wandered around a glitzy Phoenix resort’s conference rooms to learn about the latest work on therapies for drug and alcohol use disorders, she realized that out of the 730 posters, there were only two on brain stimulation as a potential treatment for addiction — both from her own lab at Wake Forest School of Medicine.

Just four years later, she would lead 76 researchers on four continents in writing a consensus article about brain stimulation as an innovative tool for addiction. And in 2020, the Food and Drug Administration approved a transcranial magnetic stimulation device to help patients quit smoking, a milestone for substance use disorders.

Brain stimulation is booming. Hanlon can attend entire conferences devoted to the study of what electrical currents do to the intricate networks of highways and backroads that make up the brain’s circuitry. This expanding field of research is slowly revealing truths of the brain: how it works, how it malfunctions, and how electrical impulses, precisely targeted and controlled, might be used to treat psychiatric and neurological disorders.

In the last half-dozen years, researchers have launched investigations into how different forms of neuromodulation affect addiction, depression, loss-of-control eating, tremor, chronic pain, obsessive compulsive disorder, Parkinson’s disease, epilepsy, and more. Early studies have shown subtle electrical jolts to certain brain regions could disrupt circuit abnormalities — the miscommunications — that are thought to underlie many brain diseases, and help ease symptoms that persist despite conventional treatments.

The National Institutes of Health’s massive BRAIN Initiative put circuits front and center, distributing $2.4 billion to researchers since 2013 to devise and use new tools to observe interactions between brain cells and circuits. That, in turn, has kindled interest from the private sector. Among the advances that have enhanced our understanding of how distant parts of the brain talk with one another are new imaging technology and the use of machine learning to interpret complex brain signals and analyze what happens when circuits go haywire.

Still, the field is in its infancy. Even therapies that have been approved for use in patients with, for example, Parkinson’s disease or epilepsy help only a minority of patients, and expectations can outpace evidence. “If it was the Bible, it would be the first chapter of Genesis,” said Michael Okun, executive director of the Norman Fixel Institute for Neurological Diseases at University of Florida Health.

As brain stimulation evolves, researchers face daunting hurdles, and not just scientific ones. How will brain stimulation become accessible to all the patients who need it, given how expensive and invasive some treatments are? Proving to the FDA that brain stimulation works, and does so safely, is complicated and expensive. Even with a swell of scientific momentum and an influx of funding, the agency has so far cleared brain stimulation for only a handful of limited conditions. Persuading insurers to cover the treatments is another challenge altogether. And outside the lab, researchers are debating nascent issues, such as the ethics of mind control, the privacy of a person’s brain data, and how to best involve patients in the study of the human brain’s far-flung regions.

Neurologist Martha Morrell is optimistic about the future of brain stimulation. She remembers the shocked reactions of her colleagues in 2004 when she left full-time teaching at Stanford (she still has a faculty appointment as a clinical professor of neurology) to direct clinical trials at NeuroPace, then a young company making neurostimulator systems to potentially treat epilepsy patients.

“When I started working on this, everybody thought I was insane,” said Morrell. Nearly 20 years in, she sees a parallel between the story of jolting the brain’s circuitry and that of early implantable cardiac devices, such as pacemakers and defibrillators, which initially “were used as a last option, where all other medications have failed.” Now, “the field of cardiology is very comfortable incorporating electrical therapy, device therapy, into routine care. And I think that’s really where we’re going with neurology as well.”


Reaching a ‘slope of enlightenment’
Parkinson’s is, in some ways, an elder in the world of modern brain stimulation, and it shows the potential as well as the limitations of the technology. Surgeons have been implanting electrodes deep in the brains of Parkinson’s patients since the late 1990s, and in people with more advanced disease since the early 2000s.

In that time, it’s gone through the “hype cycle,” said Okun, the national medical adviser to the Parkinson’s Foundation since 2006. Feverish excitement and overinflated expectations have given way to reality, bringing scientists to a “slope of enlightenment,” he said. They have found deep brain stimulation to be very helpful for some patients with Parkinson’s, rendering them almost symptom-free by calming the shaking and tremors that medications couldn’t. But it doesn’t stop the progression of the disease, or resolve some of the problems patients with advanced Parkinson’s have walking, talking, and thinking.

In 2015, the same year Hanlon found only her lab’s research on brain stimulation at the addiction conference, Kevin O’Neill watched one finger on his left hand start doing something “funky.” One finger twitched, then two, then his left arm started tingling and a feeling appeared in his right leg, like it was about to shake but wouldn’t — a tremor.

“I was assuming it was anxiety,” O’Neill, 62, told STAT. He had struggled with anxiety before, and he had endured a stressful year: a separation, selling his home, starting a new job at a law firm in California’s Bay Area. But a year after his symptoms first began, O’Neill was diagnosed with Parkinson’s.


Doctors prescribed him pills that promote the release of dopamine, to offset the death of brain cells that produce this messenger molecule in circuits that control movement. But he took them infrequently because he worried about insomnia as a side effect. Walking became difficult — “I had to kind of think my left leg into moving” — and the labor lawyer found it hard to give presentations and travel to clients’ offices.

A former actor with an outgoing personality, he developed social anxiety and didn’t tell his bosses about his diagnosis for three years, and wouldn’t have, if not for two workdays in summer 2018 when his tremors were severe and obvious.

O’Neill’s tremors are all but gone since he began deep brain stimulation last May, though his left arm shakes when he feels tense.

It was during that period that he learned about deep brain stimulation, at a support group for Parkinson’s patients. “I thought, ‘I will never let anybody fuss with my brain. I’m not going to be a candidate for that,’” he recalled. “It felt like mad scientist science fiction. Like, are you kidding me?”

But over time, the idea became less radical, as O’Neill spoke to DBS patients and doctors and did his own research, and as his symptoms worsened. He decided to go for it. Last May, doctors at the University of California, San Francisco surgically placed three metal leads into his brain, connected by thin cords to two implants in his chest, just near the clavicles. A month later, he went into the lab and researchers turned the device on.

“That was a revelation that day,” he said. “You immediately — literally, immediately — feel the efficacy of these things. … You go from fully symptomatic to non-symptomatic in seconds.”

When his nephew pulled up to the curb to pick him up, O’Neill started dancing, and his nephew teared up. The following day, O’Neill couldn’t wait to get out of bed and go out, even if it was just to pick up his car from the repair shop.

In the year since, O’Neill’s walking has gone from “awkward and painful” to much improved, and his tremors are all but gone. When he is extra frazzled, like while renovating and moving into his new house overlooking the hills of Marin County, he feels tense and his left arm shakes and he worries the DBS is “failing,” but generally he returns to a comfortable, tremor-free baseline.

O’Neill worried about the effects of DBS wearing off but, for now, he can think “in terms of decades, instead of years or months,” he recalled his neurologist telling him. “The fact that I can put away that worry was the big thing.”

He’s just one patient, though. The brain has regions that are mostly uniform across all people. The functions of those regions also tend to be the same. But researchers suspect that how brain regions interact with one another — who mingles with whom, and what conversation they have — and how those mixes and matches cause complex diseases varies from person to person. So brain stimulation looks different for each patient.

Each case of Parkinson’s manifests slightly differently, an insight that applies to many other diseases, said Okun, who organized the nine-year-old Deep Brain Stimulation Think Tank, where leading researchers convene, review papers, and publish reports on the field’s progress each year.

“I think we’re all collectively coming to the realization that these diseases are not one-size-fits-all,” he said. “We have to really begin to rethink the entire infrastructure, the schema, the framework we start with.”

Brain stimulation is also used frequently to treat people with common forms of epilepsy, and has reduced the number of seizures or improved other symptoms in many patients. Researchers have also been able to collect high-quality data about what happens in the brain during a seizure — including identifying differences between epilepsy types. Still, only about 15% of patients are symptom-free after treatment, according to Robert Gross, a neurosurgery professor at Emory University in Atlanta.

“And that’s a critical difference for people with epilepsy. Because people who are symptom-free can drive,” which means they can get to a job in a place like Georgia, where there is little public transit, he said. So taking neuromodulation “from good to great” is imperative, Gross said.


Renaissance for an ancient idea
Recent advances are bringing about what Gross sees as “almost a renaissance period” for brain stimulation, though the ideas that undergird the technology are millennia old. Neuromodulation goes back to at least ancient Egypt and Greece, when electrical shocks from a ray, called the “torpedo fish,” were recommended as a treatment for headache and gout. Over centuries, the fish zaps led to doctors burning holes into the brains of patients. Those “lesions” worked, somehow, but nobody could explain why they alleviated some patients’ symptoms, Okun said.

Perhaps the clearest predecessor to today’s technology is electroconvulsive therapy (ECT), which in a rudimentary and dangerous way began being used on patients with depression roughly 100 years ago, said Nolan Williams, director of the Brain Stimulation Lab at Stanford University.

More modern forms of brain stimulation came about in the United States in the mid-20th century. A common, noninvasive approach is transcranial magnetic stimulation, which involves placing an electromagnetic coil on the scalp to transmit a current into the outermost layer of the brain. Vagus nerve stimulation (VNS), used to treat epilepsy, zaps a nerve that contributes to some seizures.

The most invasive option, deep brain stimulation, involves implanting in the skull a device attached to electrodes embedded in deep brain regions, such as the amygdala, that can’t be reached with other stimulation devices. In 1997, the FDA gave its first green light to deep brain stimulation as a treatment for tremor, and then for Parkinson’s in 2002 and the movement disorder dystonia in 2003.

Even as these treatments were cleared for patients, though, what was happening in the brain remained elusive. But advanced imaging tools now let researchers peer into the brain and map out networks — a recent breakthrough that researchers say has propelled the field of brain stimulation forward as much as increased funding has. Imaging of both human brains and animal models has helped researchers identify the neuroanatomy of diseases, target brain regions with more specificity, and watch what was happening after electrical stimulation.

Another key step has been the shift from open-loop stimulation — a constant stream of electricity — to closed-loop stimulation that delivers targeted, brief jolts in response to a symptom trigger. To make use of the futuristic technology, labs need people to develop artificial intelligence tools to interpret the large data sets a brain implant generates, and to tailor devices based on that information.
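
As a rough illustration only (not any device maker's actual control logic; the signal values and threshold here are invented), the open-loop versus closed-loop distinction can be sketched in a few lines of Python:

```python
# Open-loop: stimulate on every sample, regardless of what the brain is doing.
# Closed-loop: stimulate only when a detector flags a symptom signature
# (here, a simple amplitude threshold stands in for the real biomarker).

def open_loop(samples):
    """Constant stimulation: one pulse per sample."""
    return [True for _ in samples]

def closed_loop(samples, threshold=0.8):
    """Targeted stimulation: pulse only when the signal crosses the threshold."""
    return [abs(s) > threshold for s in samples]

signal = [0.1, 0.3, 0.95, 0.2, 1.4, 0.05]
print(sum(open_loop(signal)))    # 6 pulses: always on
print(sum(closed_loop(signal)))  # 2 pulses: only the two flagged samples
```

The point of the closed-loop approach, as the article describes, is that the device intervenes only when needed, which is also why it produces the large recorded data sets that require machine-learning tools to interpret.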

“We’ve needed to learn how to be data scientists,” Morrell said.

Affinity groups, like the NIH-funded Open Mind Consortium, have formed to fill that gap. Philip Starr, a neurosurgeon and developer of implantable brain devices at the University of California at San Francisco Health system, leads the effort to teach physicians how to program closed-loop devices, and works to create ethical standards for their use. “There’s been extraordinary innovation after 20 years of no innovation,” he said.

The BRAIN Initiative has been critical, several researchers told STAT. “It’s been a godsend to us,” Gross said. The NIH’s Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative was launched in 2013 during the Obama administration with a $50 million budget. BRAIN now spends over $500 million per year. Since its creation, BRAIN has given over 1,100 awards, according to NIH data. Part of the initiative’s purpose is to pair up researchers with medical technology companies that provide human-grade stimulation devices to the investigators. Nearly three dozen projects have been funded through the investigator-devicemaker partnership program and through one focused on new implantable devices for first-in-human use, according to Nick Langhals, who leads work on neurological disorders at the initiative.

The more BRAIN invests, the more research is spawned. “We learn more about what circuits are involved … which then feeds back into new and more innovative projects,” he said.

Many BRAIN projects are still in early stages, finishing enrollment or small feasibility studies, Langhals said. Over the next couple of years, scientists will begin to see some of the fruits of their labor, which could lead to larger clinical trials, or to companies developing more refined brain stimulation implants, Langhals said.

Money from the National Institute of Mental Health, as well as the NIH’s Helping to End Addiction Long-term (HEAL) initiative, has similarly sweetened the appeal of brain stimulation, both for researchers and industry. “A critical mass” of companies interested in neuromodulation technology has mushroomed where, for two decades, just a handful of companies stood, Starr said.

More and more, pharmaceutical and digital health companies are looking at brain stimulation devices “as possible products for their future,” said Linda Carpenter, director of the Butler Hospital TMS Clinic and Neuromodulation Research Facility.


‘Psychiatry 3.0’
The experience with using brain stimulation to stop tremors and seizures inspired psychiatrists to begin exploring its use as a potentially powerful therapy for healing, or even getting ahead of, mental illness.

In 2008, the FDA approved TMS for patients with major depression who had tried, and not gotten relief from, drug therapy. “That kind of opened the door for all of us,” said Hanlon, a professor and researcher at the Center for Research on Substance Use and Addiction at Wake Forest School of Medicine. The last decade saw a surge of research into how TMS could be used to reset malfunctioning brain circuits involved in anxiety, depression, obsessive-compulsive disorder, and other conditions.

“We’re certainly entering into what a lot of people are calling psychiatry 3.0,” Stanford’s Williams said. “Whereas the first iteration was Freud and all that business, the second one was the psychopharmacology boom, and this third one is this bit around circuits and stimulation.”

Drugs alleviate some patients’ symptoms while simultaneously failing to help many others, but psychopharmacology clearly showed “there’s definitely a biology to this problem,” Williams said — a biology that in some cases may be more amenable to brain stimulation.

The exact mechanics of what happens between cells when brain circuits … well, short-circuit, are unclear. Researchers are getting closer to finding biomarkers that warn of an incoming depressive episode, or wave of anxiety, or loss of impulse control. Those brain signatures could be different for every patient. If researchers can find molecular biomarkers for psychiatric disorders — and find ways to preempt those symptoms by shocking particular brain regions — that would reshape the field, Williams said.

Not only would disease-specific markers help clinicians diagnose people, but they could help chip away at the stigma that paints mental illness as a personal or moral failing instead of a disease. That’s what happened for epilepsy in the 1960s, when scientific findings nudged the general public toward a deeper understanding of why seizures happen, and it’s “the same trajectory” Williams said he sees for depression.

His research at the Stanford lab also includes work on suicide, and obsessive-compulsive disorder, which the FDA said in 2018 could be treated using noninvasive TMS. Williams considers brain stimulation, with its instantaneity, to be a potential breakthrough for urgent psychiatric situations. Doctors know what to do when a patient is rushed into the emergency room with a heart attack or a stroke, but there is no immediate treatment for psychiatric emergencies, he said. Williams wonders: What if, in the future, a suicidal patient could receive TMS in the emergency room and be quickly pulled out of their depressive mental spiral?

Researchers are also actively investigating the brain biology of addiction. In August 2020, the FDA approved TMS for smoking cessation, the first such OK for a substance use disorder, which is “really exciting,” Hanlon said. Although there is some nuance when comparing substance use disorders, a primal mechanism generally defines addiction: the eternal competition between “top-down” executive control functions and “bottom-up” cravings. It’s the same process that is at work when one is deciding whether to eat another cookie or abstain — just exacerbated.

Hanlon is trying to figure out if the stop and go circuits are in the same place for all people, and whether neuromodulation should be used to strengthen top-down control or weaken bottom-up cravings. Just as brain stimulation can be used to disrupt cellular misfiring, it could also be a tool for reinforcing helpful brain functions, or for giving the addicted brain what it wants in order to curb substance use.

Evidence suggests many people with schizophrenia smoke cigarettes (a leading cause of early death for this population) because nicotine reduces the “hyperconnectivity” that characterizes the brains of people with the disease, said Heather Ward, a research fellow at Boston’s Beth Israel Deaconess Medical Center. She suspects TMS could mimic that effect, and therefore reduce cravings and some symptoms of the disease, and she hopes to prove that in a pilot study that is now enrolling patients.

If the scientific evidence proves out, clinicians say brain stimulation could be used alongside behavioral therapy and drug-based therapy to treat substance use disorders. “In the end, we’re going to need all three to help people stay sober,” Hanlon said. “We’re adding another tool to the physician’s toolbox.”

Decoding the mysteries of pain
A favorable outcome to the ongoing research, one that would fling the doors to brain stimulation wide open for patients with myriad disorders, is far from guaranteed. Chronic pain researchers know that firsthand.

Chronic pain, among the most mysterious and hard-to-study medical phenomena, was the first use for which the FDA approved deep brain stimulation, said Prasad Shirvalkar, an assistant professor of anesthesiology at UCSF. But when studies didn’t pan out after a year, the FDA retracted its approval.

Shirvalkar is working with Starr and neurosurgeon Edward Chang on a profoundly complex problem: “decoding pain in the brain states, which has never been done,” as Starr told STAT.

Part of the difficulty of studying pain is that there is no objective way to measure it. Much of what we know about pain is from rudimentary surveys that ask patients to rate how much they’re hurting, on a scale from zero to 10.

Using implantable brain stimulation devices, the researchers ask patients for a 0-to-10 rating of their pain while recording up-and-down cycles of activity in the brain. They then use machine learning to compare the two streams of information and see what brain activity correlates with a patient’s subjective pain experience. Implantable devices let researchers collect data over weeks and months, instead of basing findings on small snippets of information, allowing for a much richer analysis.
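
The core of that analysis, correlating a recorded brain-activity feature with the patient's subjective 0-to-10 ratings, can be sketched with synthetic numbers (this is an illustration of the idea, not the UCSF team's actual pipeline, and all data values here are made up):

```python
# Does a recorded brain-activity feature track reported pain? A simple
# Pearson correlation over paired samples is the most basic version of
# the comparison the researchers describe.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical weeks of recordings: feature amplitude vs. reported pain.
feature = [0.2, 0.5, 0.9, 0.4, 0.7, 1.0]
pain    = [1,   3,   8,   2,   6,   9]
print(round(pearson_r(feature, pain), 2))  # prints 0.99: a candidate biomarker
```

In practice the real analysis uses machine learning over many recorded channels rather than a single correlation, but the underlying question is the same: which measurable brain signals rise and fall with the patient's pain.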

 

Related News


Utilities commission changes community choice exit fees; what happens now in San Diego?

The CPUC's increase to the PCIA exit fee affects utilities, San Diego ratepayers, renewable energy procurement, and cost allocation, while providing regulatory certainty for Community Choice Aggregation programs and clean energy goals.

 

Key Points

The CPUC approved a change raising the PCIA exit fees that CCA customers pay to utilities, aiming to balance cost shifts and customer equity.

✅ PCIA rises from about 2.5c to roughly 4.25c per kWh in San Diego

✅ Aims to reduce cost shifts and protect non-CCA customers

✅ Offers regulatory certainty for CCA launches and clean energy goals

 

The California Public Utilities Commission approved an increase on the exit fees charged to customers who take part in Community Choice Aggregation -- government-run alternatives to traditional utilities like San Diego Gas & Electric.

After reviewing two competing exit fee proposals, all five commissioners voted Thursday in favor of an adjustment that many CCA advocates predicted could hamper the growth of the community choice movement.

But minutes after the vote was announced, one of the leading voices in favor of the city of San Diego establishing its own CCA said the decision was good news because it provides some regulatory certainty.

"For us in San Diego, it's a green light to move forward with community choice," said Nicole Capretz, executive director of the Climate Action Campaign. "For us, it's let's go, let's launch and let's give families a choice. We no longer have to wait."

Under the CCA model, utilities still maintain transmission and distribution lines (poles and wires, etc.) and handle customer billing. But officials in a given local government entity make the final decisions about what kind of power sources are purchased.

Once a CCA is formed, its customers must pay an exit fee -- called a Power Charge Indifference Adjustment -- to the legacy utility serving that particular region. The fee is included in customers' monthly bills.

The fee is required to offset the costs of the investments utilities made over the years for things like natural gas power plants, renewable energy facilities and other infrastructure.

Utilities argue if the exit fee is set too low, it does not fairly compensate them for their investments; if it's too high, CCAs complain it reduces the financial incentive for their potential customers.

The Public Utilities Commission chose to adopt a proposal that some said was more favorable to utilities, leading to complaints from CCA boosters.

"We see this will really throw sand in the gears in our ability to do things that can move us toward (climate change) goals," Jim Parks, a staff member of Valley Clean Energy, a CCA based in Davis, said before the vote.

Commissioner Carla Peterman, who authored the proposal that passed, said she supports CCAs but stressed the commission has a "legal obligation" to make sure increased costs are not shouldered by "customers who do not, or cannot, join a CCA. Today's proposal ensures a more level playing field between customers."

As for what the vote means for the exit fee in San Diego, Peterman's office earlier in the week estimated the charge would rise from 2.5 cents a kilowatt-hour to about 4.25 cents.
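
To put the rate change in household terms, a back-of-the-envelope calculation works (the 500 kWh monthly usage below is a made-up illustration; only the 2.5-cent and 4.25-cent figures come from the article):

```python
# Monthly exit fee in dollars at a given PCIA rate (cents per kWh).
def monthly_exit_fee(kwh, cents_per_kwh):
    return kwh * cents_per_kwh / 100  # convert cents to dollars

usage = 500  # hypothetical monthly kWh for a household
old = monthly_exit_fee(usage, 2.5)   # 12.50 dollars under the old rate
new = monthly_exit_fee(usage, 4.25)  # 21.25 dollars under the new rate
print(old, new, new - old)  # prints 12.5 21.25 8.75
```

Under these illustrative assumptions, the increase would add roughly $8.75 a month to the exit fee portion of such a customer's bill.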

The Clear the Air Coalition, a San Diego County group critical of CCAs, said the newly established exit fee -- which goes into effect starting next year -- is "a step in the right direction."

But the group, which includes the San Diego Regional Chamber of Commerce, the San Diego County Taxpayers Association and lobbyists for Sempra Energy (the parent company of SDG&E), repeated concerns it has brought up before.

"If the city of San Diego decides to get into the energy business this decision means ratepayers in National City, Chula Vista, Carlsbad, Imperial Beach, La Mesa, El Cajon and all other neighboring communities would see higher energy bills, and San Diego taxpayers would be faced with mounting debt," coalition spokesman Tony Manolatos said in an email.

CCA supporters say community choice is critical to meeting the pledge Mayor Kevin Faulconer made in adopting the city's Climate Action Plan, which mandates that 100 percent of the city's electricity come from renewable sources by 2035.

Now attention turns to Faulconer, who promised to make a decision on bringing a CCA proposal to the San Diego City Council only after the utilities commission made its decision.

A Faulconer spokesman said Thursday afternoon that the vote "provides the clarity we've been waiting for to move forward" but did not offer a specific timetable.

"We're on schedule to reach Mayor Faulconer's goal of choosing a pathway that achieves our renewable energy goals while also protecting ratepayers, and the mayor looks forward to making his recommendation in the next few weeks," said Craig Gustafson, a Faulconer spokesman, in an email.

A feasibility study released last year predicted a CCA in San Diego has the potential to deliver cheaper rates over time than SDG&E's current service, while providing as much as 50 percent renewable energy by 2023 and 80 percent by 2027.

"The city has already figured out we are still capable of launching a program, having competitive, affordable rates and finally offering families a choice as to who their energy provider is," said Capretz, who helped draft an initial blueprint of the climate plan as a city staffer.

SDG&E has come to the city with a counterproposal that offers 100 percent renewables by 2035.

Thus far, the utility has produced a rough outline for a "tariff" program that would charge ratepayers the cost of delivering more clean sources of energy over time.

Some council members have expressed frustration more specifics have not been sketched out.

SDG&E officials said they will take the new exit fee into account as they go forward with their counterproposal to the city council.

Speaking in general about the utility commission's decision, SDG&E spokeswoman Helen Gao called it "a victory for our customers, as it minimizes the cost shifts that they have been burdened with under the existing fee formula.

"As commissioners noted in rendering their decision, reforming the (exit fee) addresses a customer-to-customer equity issue and has nothing to do with increasing profits for investor-owned utilities," Gao said in an email.

 

Related News


UK Lockdown knocks daily electricity demand by 10 per cent

Electricity demand in Britain during the lockdown is around 10 percent lower, as industrial consumers scale back. National Grid reports later morning peaks and continues balancing system frequency and voltage to maintain grid stability.

 

Key Points

Measured drop in UK power use, later morning peaks, and grid actions to keep frequency and voltage within safe limits.

✅ Daily demand about 10 percent lower since lockdown.

✅ Morning peak down nearly 18 percent and occurs later.

✅ National Grid balances frequency and voltage using flexible resources.

 

Daily electricity demand in Britain is around 10% lower than before the country went into lockdown last week due to the coronavirus outbreak, data from grid operator National Grid showed on Tuesday.

The fall is largely due to big industrial consumers using less power across sectors, the operator said.

Last week, Prime Minister Boris Johnson ordered Britons to stay at home to halt the spread of the virus, imposing curbs on everyday life without precedent in peacetime.

Morning peak demand has fallen by nearly 18% compared to before the lockdown was introduced, and the morning peak now comes later than usual because people are getting up later and at more spread-out times, with fewer travelling to work and school, National Grid said.

Even though less power is needed overall, the operator still has to manage demand for electricity, as well as peaks, and keep the frequency and voltage of the system at safe levels.

Last August, a blackout cut power to one million customers and caused transport chaos when the almost simultaneous loss of output from two generators, triggered by a lightning strike, caused the frequency of the system to drop below normal levels.

National Grid said it can use a number of tools to manage the frequency, such as working with flexible generators to reduce output or drawing on storage providers to increase demand.

 
