Report claims new coal plant will lead to higher rates

By The Virginian-Pilot


An environmental coalition has launched the first serious attack on Old Dominion Electric Cooperative's proposed coal-fired power plant in Surry County.

A report, released by the Wise Energy for Virginia Coalition, which includes five environmental groups, argues that consumers who receive power from Old Dominion will pay more for electricity from the coal plant than they would with the use of renewable energy and efficiency programs.

The environmental coalition commissioned the study by Synapse Energy Economics Inc., an energy research and consulting firm based in Cambridge, Mass. The report concluded that rising construction expenses, economic uncertainty and the costs of controlling carbon dioxide emissions will lead to unnecessarily high electricity rates for consumers who depend on Old Dominion for their power.

The cooperative has proposed to build what would be the state's largest coal plant in the town of Dendron, about 50 miles west of Norfolk. Cypress Creek Power Station would cost up to $6 billion and generate as much as 1,500 megawatts of electricity.
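For scale, the article's upper-end figures imply a capital cost of roughly $4,000 per kilowatt of capacity. This is a back-of-envelope calculation from the numbers above, not a figure from the report:

```python
# Back-of-envelope implied capital cost for Cypress Creek,
# using the upper-end figures cited in the article.
cost_dollars = 6_000_000_000   # up to $6 billion
capacity_kw = 1_500 * 1_000    # up to 1,500 MW, expressed in kW

cost_per_kw = cost_dollars / capacity_kw
print(f"~${cost_per_kw:,.0f} per kW of capacity")  # ~$4,000 per kW
```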

"The confluence of factors described in this report make it unlikely that investment in a new coal-fired facility at this time of regulatory uncertainty and increasing costs will be the lowest-cost option for customers," the report's authors wrote. "This is especially true given the project's $6 billion estimated construction cost, the likely costs of complying with federal regulation of CO2 emissions, potential structural changes in the natural gas market leading to lower prices, both current and long-term, and the availability of low cost energy efficiency."

Old Dominion is owned by 11 mostly rural electricity cooperatives in Virginia, Maryland and Delaware. Those cooperatives buy electricity from Old Dominion and deliver it to their members, including those in A&N Electric Cooperative's territory on the Eastern Shore and in Community Electric Cooperative's area west of Hampton Roads.

The environmental groups are hoping the economic argument will convince Old Dominion and other utility companies to stop relying on a fossil fuel that causes pollution and depends on mining practices that damage the landscape. Based on the report's findings, they argue that a combination of energy-efficiency efforts, wind turbines, biomass energy from wood materials and cleaner natural gas-fired plants would equal Cypress Creek's output but would cost ratepayers 1.7 cents to 4.5 cents less per kilowatt hour.
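To put the claimed 1.7-to-4.5-cents-per-kilowatt-hour spread in household terms, a rough calculation helps; the 1,000 kWh-per-month usage figure below is a hypothetical assumption, not one from the report:

```python
# Rough annual-savings illustration of the report's claimed range.
# The 1,000 kWh/month household usage is a hypothetical assumption.
monthly_kwh = 1000
low_per_kwh, high_per_kwh = 0.017, 0.045  # $/kWh, from the report's range

low = monthly_kwh * 12 * low_per_kwh
high = monthly_kwh * 12 * high_per_kwh
print(f"Illustrative annual savings per household: ${low:.0f} to ${high:.0f}")
```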

"Electric utilities are stuck in an old way of thinking," said Tom Cormons, who directs the Virginia office of Appalachian Voices, a member of the Wise Energy coalition. "It's useful to make a financial case, which to a growing extent reflects the environmental reality."

Old Dominion officials have argued that coal is the best source for generating enough baseload power - electricity that is constantly available - to meet growing demand in its territory. Renewable options such as wind and solar power are not reliable, they have said, and energy-efficiency programs cannot reduce usage enough to offset growing demand.

"It is still the least-expensive way for us to make sure we have the electricity we need and at an affordable price," said Jeb Hockman, Old Dominion's spokesman.

The technology to capture and contain carbon dioxide emissions remains far from developed, leaving Old Dominion without a viable method to reduce the 14.6 million tons of carbon dioxide each year that it projects the plant will produce. With the expected implementation of President Barack Obama's "cap-and-trade" program, which would limit carbon emissions and require companies to pay for their pollution, Old Dominion and its customers would end up with a hefty tab, the report said.

The report also questioned the cooperative's assessment of growing demand and the need for a plant of the proposed size of Cypress Creek.

Old Dominion has factored the costs of cap-and-trade requirements and carbon controls into its projections for Cypress Creek, Hockman said. "We still feel like it's very economically feasible and makes a lot of sense."

Related News

Jolting the brain's circuits with electricity is moving from radical to almost mainstream therapy

Brain stimulation is transforming neuromodulation, from TMS and DBS to closed-loop devices, targeting neural circuits for addiction, depression, Parkinson's disease, epilepsy, and chronic pain, powered by advanced imaging, AI analytics, and the NIH BRAIN Initiative.

 

Key Points

Brain stimulation uses pulses to modulate neural circuits, easing symptoms in depression, Parkinson's disease, and epilepsy.

✅ Noninvasive TMS and invasive DBS modulate specific brain circuits

✅ Closed-loop systems adapt stimulation via real-time biomarker detection

✅ Emerging uses: addiction, depression, Parkinson's disease, epilepsy, chronic pain

 

In June 2015, biology professor Colleen Hanlon went to a conference on drug dependence. As she met other researchers and wandered around a glitzy Phoenix resort’s conference rooms to learn about the latest work on therapies for drug and alcohol use disorders, she realized that out of the 730 posters, there were only two on brain stimulation as a potential treatment for addiction — both from her own lab at Wake Forest School of Medicine.

Just four years later, she would lead 76 researchers on four continents in writing a consensus article about brain stimulation as an innovative tool for addiction. And in 2020, the Food and Drug Administration approved a transcranial magnetic stimulation device to help patients quit smoking, a milestone for substance use disorders.

Brain stimulation is booming. Hanlon can attend entire conferences devoted to the study of what electrical currents do—including how targeted stimulation can improve short-term memory in older adults—to the intricate networks of highways and backroads that make up the brain’s circuitry. This expanding field of research is slowly revealing truths of the brain: how it works, how it malfunctions, and how electrical impulses, precisely targeted and controlled, might be used to treat psychiatric and neurological disorders.

In the last half-dozen years, researchers have launched investigations into how different forms of neuromodulation affect addiction, depression, loss-of-control eating, tremor, chronic pain, obsessive compulsive disorder, Parkinson’s disease, epilepsy, and more. Early studies have shown subtle electrical jolts to certain brain regions could disrupt circuit abnormalities — the miscommunications — that are thought to underlie many brain diseases, and help ease symptoms that persist despite conventional treatments.

The National Institutes of Health's massive BRAIN Initiative put circuits front and center, distributing $2.4 billion to researchers since 2013 to devise and use new tools to observe interactions between brain cells and circuits. That, in turn, has kindled interest from the private sector. Among the advances that have enhanced our understanding of how distant parts of the brain talk with one another are new imaging technology and the use of machine learning to interpret complex brain signals and analyze what happens when circuits go haywire.

Still, the field is in its infancy; even therapies that have been approved for use in patients with, for example, Parkinson's disease or epilepsy help only a minority of patients, and expectations can outpace evidence. "If it was the Bible, it would be the first chapter of Genesis," said Michael Okun, executive director of the Norman Fixel Institute for Neurological Diseases at University of Florida Health.

As brain stimulation evolves, researchers face daunting hurdles, and not just scientific ones. How will brain stimulation become accessible to all the patients who need it, given how expensive and invasive some treatments are? Proving to the FDA that brain stimulation works, and does so safely, is complicated and expensive. Even with a swell of scientific momentum and an influx of funding, the agency has so far cleared brain stimulation for only a handful of limited conditions. Persuading insurers to cover the treatments is another challenge altogether. And outside the lab, researchers are debating nascent issues, such as the ethics of mind control, the privacy of a person's brain data, and how to best involve patients in the study of the human brain's far-flung regions.

Neurologist Martha Morrell is optimistic about the future of brain stimulation. She remembers the shocked reactions of her colleagues in 2004 when she left full-time teaching at Stanford (she still has a faculty appointment as a clinical professor of neurology) to direct clinical trials at NeuroPace, then a young company making neurostimulator systems to potentially treat epilepsy patients.

Related: Once a last resort, this pain therapy is getting a new life amid the opioid crisis
“When I started working on this, everybody thought I was insane,” said Morrell. Nearly 20 years in, she sees a parallel between the story of jolting the brain’s circuitry and that of early implantable cardiac devices, such as pacemakers and defibrillators, which initially “were used as a last option, where all other medications have failed.” Now, “the field of cardiology is very comfortable incorporating electrical therapy, device therapy, into routine care. And I think that’s really where we’re going with neurology as well.”


Reaching a ‘slope of enlightenment’
Parkinson’s is, in some ways, an elder in the world of modern brain stimulation, and it shows the potential as well as the limitations of the technology. Surgeons have been implanting electrodes deep in the brains of Parkinson’s patients since the late 1990s, and in people with more advanced disease since the early 2000s.

In that time, it’s gone through the “hype cycle,” said Okun, the national medical adviser to the Parkinson’s Foundation since 2006. Feverish excitement and overinflated expectations have given way to reality, bringing scientists to a “slope of enlightenment,” he said. They have found deep brain stimulation to be very helpful for some patients with Parkinson’s, rendering them almost symptom-free by calming the shaking and tremors that medications couldn’t. But it doesn’t stop the progression of the disease, or resolve some of the problems patients with advanced Parkinson’s have walking, talking, and thinking.

In 2015, the same year Hanlon found only her lab’s research on brain stimulation at the addiction conference, Kevin O’Neill watched one finger on his left hand start doing something “funky.” One finger twitched, then two, then his left arm started tingling and a feeling appeared in his right leg, like it was about to shake but wouldn’t — a tremor.

“I was assuming it was anxiety,” O’Neill, 62, told STAT. He had struggled with anxiety before, and he had endured a stressful year: a separation, selling his home, starting a new job at a law firm in California’s Bay Area. But a year after his symptoms first began, O’Neill was diagnosed with Parkinson’s.


Related: Psychiatric shock therapy, long controversial, may face fresh restrictions
Doctors prescribed him pills that promote the release of dopamine, to offset the death of brain cells that produce this messenger molecule in circuits that control movement. But he took them infrequently because he worried about insomnia as a side effect. Walking became difficult — “I had to kind of think my left leg into moving” — and the labor lawyer found it hard to give presentations and travel to clients’ offices.

A former actor with an outgoing personality, he developed social anxiety and didn’t tell his bosses about his diagnosis for three years, and wouldn’t have, if not for two workdays in summer 2018 when his tremors were severe and obvious.

O’Neill’s tremors are all but gone since he began deep brain stimulation last May, though his left arm shakes when he feels tense.

It was during that period that he learned about deep brain stimulation, at a support group for Parkinson’s patients. “I thought, ‘I will never let anybody fuss with my brain. I’m not going to be a candidate for that,’” he recalled. “It felt like mad scientist science fiction. Like, are you kidding me?”

But over time, the idea became less radical, as O’Neill spoke to DBS patients and doctors and did his own research, and as his symptoms worsened. He decided to go for it. Last May, doctors at the University of California, San Francisco surgically placed three metal leads into his brain, connected by thin cords to two implants in his chest, just near the clavicles. A month later, he went into the lab and researchers turned the device on.

“That was a revelation that day,” he said. “You immediately — literally, immediately — feel the efficacy of these things. … You go from fully symptomatic to non-symptomatic in seconds.”

When his nephew pulled up to the curb to pick him up, O’Neill started dancing, and his nephew teared up. The following day, O’Neill couldn’t wait to get out of bed and go out, even if it was just to pick up his car from the repair shop.

In the year since, O’Neill’s walking has gone from “awkward and painful” to much improved, and his tremors are all but gone. When he is extra frazzled, like while renovating and moving into his new house overlooking the hills of Marin County, he feels tense and his left arm shakes and he worries the DBS is “failing,” but generally he returns to a comfortable, tremor-free baseline.

O’Neill worried about the effects of DBS wearing off but, for now, he can think “in terms of decades, instead of years or months,” he recalled his neurologist telling him. “The fact that I can put away that worry was the big thing.”

He’s just one patient, though. The brain has regions that are mostly uniform across all people. The functions of those regions also tend to be the same. But researchers suspect that how brain regions interact with one another — who mingles with whom, and what conversation they have — and how those mixes and matches cause complex diseases varies from person to person. So brain stimulation looks different for each patient.

Related: New study revives a Mozart sonata as a potential epilepsy therapy
Each case of Parkinson’s manifests slightly differently, and that’s a bit of knowledge that applies to many other diseases, said Okun, who organized the nine-year-old Deep Brain Stimulation Think Tank, where leading researchers convene, review papers, and publish reports on the field’s progress each year.

“I think we’re all collectively coming to the realization that these diseases are not one-size-fits-all,” he said. “We have to really begin to rethink the entire infrastructure, the schema, the framework we start with.”

Brain stimulation is also used frequently to treat people with common forms of epilepsy, and has reduced the number of seizures or improved other symptoms in many patients. Researchers have also been able to collect high-quality data about what happens in the brain during a seizure — including identifying differences between epilepsy types. Still, only about 15% of patients are symptom-free after treatment, according to Robert Gross, a neurosurgery professor at Emory University in Atlanta.

“And that’s a critical difference for people with epilepsy. Because people who are symptom-free can drive,” which means they can get to a job in a place like Georgia, where there is little public transit, he said. So taking neuromodulation “from good to great,” is imperative, Gross said.


Renaissance for an ancient idea
Recent advances are bringing about what Gross sees as "almost a renaissance period" for brain stimulation, though the ideas that undergird the technology are millennia old. Neuromodulation goes back to at least ancient Egypt and Greece, when electrical shocks from a ray, called the "torpedo fish," were recommended as a treatment for headache and gout. Over centuries, the fish zaps led to doctors burning holes into the brains of patients. Those "lesions" worked, somehow, but nobody could explain why they alleviated some patients' symptoms, Okun said.

Perhaps the clearest predecessor to today’s technology is electroconvulsive therapy (ECT), which in a rudimentary and dangerous way began being used on patients with depression roughly 100 years ago, said Nolan Williams, director of the Brain Stimulation Lab at Stanford University.

Related: A new index measures the extent and depth of addiction stigma
More modern forms of brain stimulation came about in the United States in the mid-20th century. A common, noninvasive approach is transcranial magnetic stimulation, which involves placing an electromagnetic coil on the scalp to transmit a current into the outermost layer of the brain. Vagus nerve stimulation (VNS), used to treat epilepsy, zaps a nerve that contributes to some seizures.

The most invasive option, deep brain stimulation, involves implanting in the skull a device attached to electrodes embedded in deep brain regions, such as the amygdala, that can’t be reached with other stimulation devices. In 1997, the FDA gave its first green light to deep brain stimulation as a treatment for tremor, and then for Parkinson’s in 2002 and the movement disorder dystonia in 2003.

Even as these treatments were cleared for patients, though, what was happening in the brain remained elusive. But advanced imaging tools now let researchers peer into the brain and map out networks — a recent breakthrough that researchers say has propelled the field of brain stimulation forward as much as increased funding has. Imaging of both human brains and animal models has helped researchers identify the neuroanatomy of diseases, target brain regions with more specificity, and watch what was happening after electrical stimulation.

Another key step has been the shift from open-loop stimulation — a constant stream of electricity — to closed-loop stimulation that delivers targeted, brief jolts in response to a symptom trigger. To make use of the futuristic technology, labs need people to develop artificial intelligence tools to interpret the large data sets a brain implant generates, and to tailor devices based on that information.
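The open- versus closed-loop distinction can be sketched in a few lines: rather than stimulating continuously, a closed-loop device fires only when a detector flags a biomarker. The simple threshold detector below is a toy stand-in for the machine-learning classifiers the article describes; the signal values and threshold are invented for illustration:

```python
# Toy contrast between open-loop and closed-loop stimulation schedules.
# The biomarker readings and threshold are invented for illustration.

def open_loop(signal):
    """Constant stimulation: fire on every sample."""
    return [True for _ in signal]

def closed_loop(signal, threshold=0.8):
    """Responsive stimulation: fire only when the biomarker crosses a threshold."""
    return [sample > threshold for sample in signal]

biomarker = [0.2, 0.5, 0.9, 0.3, 0.85, 0.1]  # hypothetical readings
print(sum(open_loop(biomarker)), "pulses open-loop")      # fires on all 6 samples
print(sum(closed_loop(biomarker)), "pulses closed-loop")  # fires on 2 samples
```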

“We’ve needed to learn how to be data scientists,” Morrell said.

Affinity groups, like the NIH-funded Open Mind Consortium, have formed to fill that gap. Philip Starr, a neurosurgeon and developer of implantable brain devices at the University of California at San Francisco Health system, leads the effort to teach physicians how to program closed-loop devices, and works to create ethical standards for their use. “There’s been extraordinary innovation after 20 years of no innovation,” he said.

The BRAIN Initiative has been critical, several researchers told STAT. “It’s been a godsend to us,” Gross said. The NIH’s Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative was launched in 2013 during the Obama administration with a $50 million budget. BRAIN now spends over $500 million per year. Since its creation, BRAIN has given over 1,100 awards, according to NIH data. Part of the initiative’s purpose is to pair up researchers with medical technology companies that provide human-grade stimulation devices to the investigators. Nearly three dozen projects have been funded through the investigator-devicemaker partnership program and through one focused on new implantable devices for first-in-human use, according to Nick Langhals, who leads work on neurological disorders at the initiative.

The more BRAIN invests, the more research is spawned. “We learn more about what circuits are involved … which then feeds back into new and more innovative projects,” he said.

Many BRAIN projects are still in early stages, finishing enrollment or small feasibility studies, Langhals said. Over the next couple of years, scientists will begin to see some of the fruits of their labor, which could lead to larger clinical trials, or to companies developing more refined brain stimulation implants, Langhals said.

Money from the National Institute of Mental Health, as well as the NIH's Helping to End Addiction Long-term (HEAL) initiative, has similarly sweetened the appeal of brain stimulation, both for researchers and industry. "A critical mass" of companies interested in neuromodulation technology has mushroomed where, for two decades, just a handful of companies stood, Starr said.

More and more, pharmaceutical and digital health companies are looking at brain stimulation devices “as possible products for their future,” said Linda Carpenter, director of the Butler Hospital TMS Clinic and Neuromodulation Research Facility.


‘Psychiatry 3.0’
The experience with using brain stimulation to stop tremors and seizures inspired psychiatrists to begin exploring its use as a potentially powerful therapy for healing, or even getting ahead of, mental illness.

In 2008, the FDA approved TMS for patients with major depression who had tried, and not gotten relief from, drug therapy. “That kind of opened the door for all of us,” said Hanlon, a professor and researcher at the Center for Research on Substance Use and Addiction at Wake Forest School of Medicine. The last decade saw a surge of research into how TMS could be used to reset malfunctioning brain circuits involved in anxiety, depression, obsessive-compulsive disorder, and other conditions.

“We’re certainly entering into what a lot of people are calling psychiatry 3.0,” Stanford’s Williams said. “Whereas the first iteration was Freud and all that business, the second one was the psychopharmacology boom, and this third one is this bit around circuits and stimulation.”

Drugs alleviate some patients' symptoms while simultaneously failing to help many others, but psychopharmacology clearly showed "there's definitely a biology to this problem," Williams said — a biology that in some cases may be more amenable to brain stimulation.

Related: Largest psilocybin trial finds the psychedelic is effective in treating serious depression
The exact mechanics of what happens between cells when brain circuits … well, short-circuit, is unclear. Researchers are getting closer to finding biomarkers that warn of an incoming depressive episode, or wave of anxiety, or loss of impulse control. Those brain signatures could be different for every patient. If researchers can find molecular biomarkers for psychiatric disorders — and find ways to preempt those symptoms by shocking particular brain regions — that would reshape the field, Williams said.

Not only would disease-specific markers help clinicians diagnose people, but they could help chip away at the stigma that paints mental illness as a personal or moral failing instead of a disease. That’s what happened for epilepsy in the 1960s, when scientific findings nudged the general public toward a deeper understanding of why seizures happen, and it’s “the same trajectory” Williams said he sees for depression.

His research at the Stanford lab also includes work on suicide, and obsessive-compulsive disorder, which the FDA said in 2018 could be treated using noninvasive TMS. Williams considers brain stimulation, with its instantaneity, to be a potential breakthrough for urgent psychiatric situations. Doctors know what to do when a patient is rushed into the emergency room with a heart attack or a stroke, but there is no immediate treatment for psychiatric emergencies, he said. Williams wonders: What if, in the future, a suicidal patient could receive TMS in the emergency room and be quickly pulled out of their depressive mental spiral?

Researchers are also actively investigating the brain biology of addiction. In August 2020, the FDA approved TMS for smoking cessation, the first such OK for a substance use disorder, which is “really exciting,” Hanlon said. Although there is some nuance when comparing substance use disorders, a primal mechanism generally defines addiction: the eternal competition between “top-down” executive control functions and “bottom-up” cravings. It’s the same process that is at work when one is deciding whether to eat another cookie or abstain — just exacerbated.

Hanlon is trying to figure out if the stop and go circuits are in the same place for all people, and whether neuromodulation should be used to strengthen top-down control or weaken bottom-up cravings. Just as brain stimulation can be used to disrupt cellular misfiring, it could also be a tool for reinforcing helpful brain functions, or for giving the addicted brain what it wants in order to curb substance use.

Evidence suggests many people with schizophrenia smoke cigarettes (a leading cause of early death for this population) because nicotine reduces the “hyperconnectivity” that characterizes the brains of people with the disease, said Heather Ward, a research fellow at Boston’s Beth Israel Deaconess Medical Center. She suspects TMS could mimic that effect, and therefore reduce cravings and some symptoms of the disease, and she hopes to prove that in a pilot study that is now enrolling patients.

If the scientific evidence proves out, clinicians say brain stimulation could be used alongside behavioral therapy and drug-based therapy to treat substance use disorders. “In the end, we’re going to need all three to help people stay sober,” Hanlon said. “We’re adding another tool to the physician’s toolbox.”

Decoding the mysteries of pain
A favorable outcome to the ongoing research, one that would fling the doors to brain stimulation wide open for patients with myriad disorders, is far from guaranteed. Chronic pain researchers know that firsthand.

Chronic pain, among the most mysterious and hard-to-study medical phenomena, was the first use for which the FDA approved deep brain stimulation, said Prasad Shirvalkar, an assistant professor of anesthesiology at UCSF. But when studies didn’t pan out after a year, the FDA retracted its approval.

Shirvalkar is working with Starr and neurosurgeon Edward Chang on a profoundly complex problem: “decoding pain in the brain states, which has never been done,” as Starr told STAT.

Part of the difficulty of studying pain is that there is no objective way to measure it. Much of what we know about pain is from rudimentary surveys that ask patients to rate how much they’re hurting, on a scale from zero to 10.

Using implantable brain stimulation devices, the researchers ask patients for a 0-to-10 rating of their pain while recording up-and-down cycles of activity in the brain. They then use machine learning to compare the two streams of information and see what brain activity correlates with a patient’s subjective pain experience. Implantable devices let researchers collect data over weeks and months, instead of basing findings on small snippets of information, allowing for a much richer analysis.
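A minimal sketch of that correlation step, using plain Pearson correlation in place of the researchers' machine-learning pipeline; all feature values and pain scores below are invented for illustration:

```python
# Correlate a recorded brain-activity feature with self-reported 0-10 pain
# scores using Pearson's r; a toy stand-in for the article's ML analysis.
# All feature values and pain ratings below are invented.
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

brain_feature = [0.1, 0.4, 0.35, 0.8, 0.6, 0.9]  # hypothetical neural signal power
pain_scores = [1, 3, 3, 8, 6, 9]                 # patient's 0-10 pain ratings

r = pearson_r(brain_feature, pain_scores)
print(f"feature vs. reported pain: r = {r:.2f}")
```

In the real studies, weeks of continuous recordings and many candidate features would replace these six points, which is why the article stresses the value of long-term implantable recording.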

 

Related News


California Blackouts reveal lapses in power supply

California Electricity Reliability covers grid resilience amid heat waves, rolling blackouts, renewable energy integration, resource adequacy, battery storage, natural gas peakers, ISO oversight, and peak demand management to keep homes, businesses, and industry powered.

 

Key Points

Dependable California power delivery despite heat waves, peak demand, and the challenges of integrating renewables into the grid.

✅ Rolling blackouts revealed gaps in resource adequacy.

✅ Early evening solar drop requires fast ramping and storage.

✅ Agencies pledge planning reforms and flexible backup supply.

 

One hallmark of an advanced society is a reliable supply of electrical energy for residential, commercial and industrial consumers. Uncertainty that California electricity will be there when we need it undermines social cohesion and economic progress, as demonstrated by the travails of poor nations with erratic energy supplies.

California got a small dose of that syndrome in mid-August when a record heat wave struck the state and utilities were ordered to impose rolling blackouts to protect the grid from melting down under heavy air conditioning demands.

Gov. Gavin Newsom quickly demanded that the three overseers of electrical service to most of the state - the Public Utilities Commission, the Energy Commission and the California Independent System Operator - explain what went wrong.

"These blackouts, which occurred without prior warning or enough time for preparation, are unacceptable and unbefitting of the nation's largest and most innovative state," Newsom wrote. "This cannot stand. California residents and businesses deserve better from their government."

Initially, there was some finger-pointing among the three entities. The blackouts had been ordered by the California Independent System Operator, which manages the grid, and its president, Steve Berberich, said he had warned the Public Utilities Commission about the potential supply shortfall facing the state.

"We have indicated in filing after filing after filing that the resource adequacy program was broken and needed to be fixed," he said. "The situation we are in could have been avoided."

However, as political heat increased, the three agencies hung together and produced a joint report that admitted to lapses of supply planning and grid management and promised steps to avoid a repeat next summer.

"The existing resource planning processes are not designed to fully address an extreme heat storm like the one experienced in mid August," their report said. "In transitioning to a reliable, clean and affordable resource mix, resource planning targets have not kept pace to lead to sufficient resources that can be relied upon to meet demand in the early evening hours. This makes balancing demand and supply more challenging."

Although California's grid had experienced greater heat-related demands in previous years, most notably 2006, managers then could draw standby power from natural gas-fired plants and import juice from other Western states when necessary.

Since then, the state has shut down a number of gas-fired plants and become more reliant on renewable but less reliable sources such as windmills and solar panels.

August's air conditioning demand peaked just as output from solar panels was declining with the setting of the sun and grid managers couldn't tap enough electrons from other sources to close the gap.
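That evening gap can be sketched as a "net load" calculation: demand stays high while solar output falls with the sun, and other sources must cover the difference. All hourly values below are invented for illustration, not actual California grid data:

```python
# Toy evening "net load" calculation: demand minus solar output.
# All hourly values (in GW) are invented for illustration.
hours = [17, 18, 19, 20]
demand = [46.0, 47.0, 46.5, 45.0]  # hypothetical total demand
solar = [8.0, 5.0, 1.5, 0.0]       # hypothetical solar output fading at sunset

net_load = [d - s for d, s in zip(demand, solar)]
for h, n in zip(hours, net_load):
    print(f"{h}:00 net load: {n:.1f} GW")  # the gap other sources must fill
```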

While the shift to renewables didn't, in and of itself, cause the blackouts, the outages proved the need for a bigger cushion of backup generation or power storage in batteries or some other technology. The Public Utilities Commission, as Berberich suggested, has been somewhat lax in ordering development of backup supply.

In the aftermath of the blackouts, the state Water Resources Control Board, no doubt with direction from Newsom's office, postponed planned shutdowns of more coastal plants, which would have reduced supply flexibility even more.

Shifting to 100% renewable electricity, the state's eventual goal, while maintaining reliability will not get any easier. The state's last nuclear plant, Diablo Canyon, is ticketed for closure and demand will increase as California eliminates gasoline- and diesel-powered vehicles in favor of "zero emission vehicles" as part of its climate policies push and phases out natural gas in homes and businesses.

Politicians such as Newsom and the legislators in last week's blackout hearing may endorse a carbon-free future in theory, but they know they'll pay a political price if the lights don't come on when Californians flip the switch.

 


Setbacks at Hinkley Point C Challenge UK's Energy Blueprint

Hinkley Point C delays highlight EDF cost overruns, energy security risks, and wholesale power prices, complicating UK net zero plans, Sizewell C financing, and small modular reactor adoption across the grid.

 

Key Points

Delays at EDF's 3.2GW Hinkley Point C push operations to 2031, lift costs to £46bn, and risk pricier UK electricity.

✅ First unit may slip to 2031; second unit date unclear.

✅ LSEG sees 6% wholesale price impact in 2029-2032.

✅ Sizewell C replicates design; SMR contracts expected soon.

 

Vincent de Rivaz, former CEO of EDF, confidently announced in 2016 the commencement of the UK's first nuclear power station since the 1990s, Hinkley Point C. However, despite milestones such as the reactor roof installation, recent developments have belied this optimism. The French state-owned utility EDF recently disclosed further delays and cost overruns for the 3.2 gigawatt plant in Somerset.

These complications at Hinkley Point C, which is expected to power 6 million homes, have sparked new concerns about the UK's energy strategy and its ambition to decarbonize the grid by 2050.

The UK government's plan to achieve net zero by 2050 includes a significant role for nuclear energy, reflecting analyses that net zero may not be achievable without it, and aims to increase capacity from the current 5.88GW to 24GW by mid-century.

Simon Virley, head of energy at KPMG in the UK, stressed the importance of nuclear energy in transitioning to a net zero power system, echoing industry calls for multiple new stations to meet climate goals. He pointed out that failing to build the necessary capacity could lead to increased reliance on gas.

Hinkley Point C is envisioned as the pioneer in a new wave of nuclear plants intended to augment and replace Britain's existing fleet, which is jointly managed by EDF and Centrica. Nuclear power contributed about 14 percent of the UK's electricity in 2022. However, with four of the five existing plants scheduled to close by March 2028 and electricity demand rising, there is concern about potential power price increases.

Rob Gross, director of the UK Energy Research Centre, emphasized the link between energy security and affordability, highlighting the risk of high electricity prices if reliance on expensive gas increases.

The first 1.6GW reactor at Hinkley Point C, initially set for operation in 2027, may now be delayed until 2031. The in-service date for the second unit remains uncertain, and project costs could reach £46bn.

LSEG analysts predict that these delays could increase wholesale power prices by up to 6 percent between 2029 and 2032, assuming the second unit becomes operational in 2033.

Martin Young, an analyst at Investec, warned of the price implications of removing a large power station from the supply side.

In response to these delays, EDF is exploring the extension of its four oldest plants. Jerry Haller, EDF’s former decommissioning director, had previously expressed skepticism about extending the life of the advanced gas-cooled reactor fleet, but EDF has since indicated more positive inspection results. The company had already decided to keep the Heysham 1 and Hartlepool plants operational until at least 2026.

Nevertheless, the issues at Hinkley Point C raise doubts about the UK's ability to meet its 2050 nuclear build target of 24GW.

Previous delays at Hinkley were attributed to the COVID-19 pandemic, but EDF now cites engineering problems, similar to those experienced at other European power stations using the same technology.

The next major UK nuclear project, Sizewell C in Suffolk, will replicate Hinkley Point C's design. EDF and the UK government are currently seeking external investment for the £20bn project.

Compared with Hinkley Point C, Sizewell C's financing model involves exposing billpayers to some risk of cost overruns. This, coupled with EDF's track record, could affect investor confidence.

Additionally, the UK government is supporting the development of small modular reactors, with contracts expected to be awarded later this year.

 


B.C. government freezes provincial electricity rates

BC Hydro Rate Freeze delivers immediate relief on electricity rates in British Columbia, reversing a planned 3% hike, as BCUC oversight, a utility review, and Site C project debates shape provincial energy policy.

 

Key Points

A one-year provincial policy halting BC Hydro electricity rate hikes while a utility review finds cost savings.

✅ Freeze replaces planned 3% hike approved by BCUC.

✅ Government to conduct comprehensive BC Hydro review.

✅ Critics warn $150M revenue loss impacts capital projects.

 

British Columbia's NDP government has announced it will freeze BC Hydro rates effective immediately, fulfilling a key election promise.

Energy, Mines and Petroleum Resources Minister Michelle Mungall says hydro rates have gone up by more than 24 per cent in the last four years and by more than 70 per cent since 2001.

"After years of escalating electricity costs, British Columbians deserve a break on their bills," Mungall said in a news release.

BC Hydro had been approved by the B.C. Utilities Commission to increase the rate by three per cent next year, but Mungall said it will pull back its request in order to comply with the freeze.

In the meantime, the government says it will undertake a comprehensive review of the utility meant to identify cost-saving measures for customers.

The Liberal critic, Tracy Redies, says the one-year rate freeze is going to cost BC Hydro, calling it a distraction from bigger issues, including the future of the Site C project and the oversight of BC Hydro's fund surplus.

"A one year rate freeze costs Hydro $150 million," Redies said. "That means there's $150 million less to invest in capital projects and other investments that the utility needs to make."

"This is putting off decisions that should be made today to the future."

Recommendations from the review — including possible new rates — will be implemented starting in April 2019.

 


Louisiana power grid needs 'complete rebuild' after Hurricane Laura, restoration to take weeks

Louisiana Grid Rebuild After Hurricane Laura will overhaul transmission lines and distribution networks in Lake Charles, as Entergy restores power after catastrophic outages, replacing poles, transformers, and spans to stabilize critical electric infrastructure.

 

Key Points

Entergy's project replacing transmission and distribution in Lake Charles to restore power after the Cat 4 storm

✅ 1,000+ transmission structures and 6,637 poles damaged

✅ Entergy targets first energized line into Lake Charles in 2 weeks

✅ Full rebuild of Calcasieu and Cameron lines will take weeks

 

The main power utility for southwest Louisiana will need to "rebuild" the region's grid after Hurricane Laura blasted the region with 150 mph winds last week, top officials said.

The Category 4 hurricane made landfall last Thursday just south of Lake Charles near Cameron, damaging or destroying thousands of electric poles and leaving "catastrophic damages" to the transmission system for southwest Louisiana.

“This is not a restoration," Entergy Louisiana president and CEO Phillip May said in a statement. "It’s almost a complete rebuild of our transmission and distribution system that serves Calcasieu and Cameron parishes.”

According to Entergy, all nine transmission lines that deliver power into the Lake Charles area are currently out of service due to storm damage to multiple structures and spans of wire.

The transmission system is a critical component in the delivery of power to customers’ homes, according to the company.

Of those structures impacted, many were damaged "beyond repair" and require complete replacement.

Broken electrical poles are seen in Holly Beach, La., in the aftermath of Hurricane Laura, Saturday, Aug. 29, 2020. (AP Photo/Gerald Herbert)

Entergy said the damage in southwest Louisiana includes 1,000 transmission structures, 6,637 broken poles, 2,926 transformers and 338 miles of downed distribution wire.

Some 8,300 workers are now in the area working to rebuild the transmission lines, but Entergy said it will be about two to three weeks before power is available to customers in the Lake Charles area.

"Restoring power will take longer to customers in inaccessible areas of the region," the company said. "While not impacting the expected restoration of service to residential customers, initial estimates are it will take weeks to rebuild all transmission lines in Calcasieu and Cameron parishes."

Entergy Louisiana expects to energize the first of its transmission lines into Lake Charles in two weeks.

“We understand going without power for this extended period will be challenging, and this is not the news customers want to hear. But we have thousands of workers dedicated to rebuilding our grid as quickly as they safely can to return some normalcy to our customers’ lives,” May said.

According to power outage tracking website poweroutage.us, over 164,000 customers remained without service in Louisiana as of Thursday morning.

On Wednesday, the Edison Electric Institute, the association of investor-owned electric companies in the U.S., said in a statement to FOX Business that electricity had been restored to approximately 737,000 customers, or 75% of those impacted by the storm across Louisiana, eastern Texas, Mississippi, and Arkansas.

At least 29,000 workers from 29 states, the District of Columbia and Canada are working to restore power in the region, according to the Electricity Subsector Coordinating Council (ESCC), which is coordinating efforts from government and power industry.

“The transmission loss in Louisiana is significant, with more than 1,000 transmission structures damaged or destroyed by the storm," Department of Energy (DOE) Deputy Secretary Mark Menezes said in a statement. "Rebuilding the transmission system is essential to the overall restoration effort and will take weeks given the massive scale and complexity of the work. We will continue to coordinate closely to ensure the full capabilities of the industry and government are marshaled to rebuild this critical infrastructure as quickly as possible.”

At least 17 deaths in Louisiana have been attributed to the storm, more than half of them from carbon monoxide poisoning caused by the unsafe operation of generators. Two additional deaths were verified on Wednesday in Beauregard Parish, which health officials said were due to heat-related illness following the storm.

 


Clocks are running slow across Europe because of an argument over who pays the electricity bill

European Grid Frequency Clock Slowdown has made appliance clocks run minutes behind as AC frequency drifts on the 50 Hz electricity grid, driven by a Kosovo-Serbia billing dispute and ENTSO-E monitored supply-demand imbalance.

 

Key Points

An EU-wide timing error where 50 Hz AC deviations slow appliance clocks due to Kosovo-Serbia grid imbalances.

✅ Clocks drifted up to six minutes across interconnected Europe

✅ Cause: unpaid power in N. Kosovo, contested by Serbia

✅ ENTSO-E reported 50 Hz deviations from supply-demand mismatch

 

Over the past couple of months, Europeans have noticed time slipping away from them. It’s not just their imaginations: all across the continent, clocks built into home appliances like ovens, microwaves, and coffee makers have been running up to six minutes slow. The unlikely cause? A dispute between Kosovo and Serbia over who pays the electricity bill.

To make sense of all this, you need to know that the clocks in many household devices use the frequency of electricity to keep time. Electric power is delivered to our homes in the form of an alternating current, where the direction of the flow of electricity switches back and forth many times a second. (How this system came to be established is complex, but the advantage is that it allows electricity to be transmitted efficiently.) In Europe, this frequency is 50 Hertz — meaning a current that alternates 50 times a second. In America, it’s 60 Hz.

Since the 1930s, manufacturers have taken advantage of this feature to keep time. Each clock needs a metronome — something with a consistent rhythm that helps space out each second — and an alternating current provides one, saving the cost of extra components. Customers simply set the time on their oven or microwave once, and the frequency keeps it precise.

At least, that’s the theory. But because this timekeeping method is reliant on electrical frequency, when the frequency changes, so do the clocks. That is what has been happening in Europe.
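The arithmetic behind the drift is straightforward: a mains-synchronous clock counts AC cycles and assumes every 50 of them equal one second, so any sustained deviation from 50 Hz accumulates as clock error. A minimal sketch (the 49.9965 Hz figure below is an illustrative average chosen to reproduce a roughly six-minute loss over two months, not ENTSO-E's published number):

```python
NOMINAL_HZ = 50.0  # European grid target frequency

def clock_drift_seconds(actual_hz: float, nominal_hz: float, elapsed_s: float) -> float:
    """Seconds gained (+) or lost (-) by a mains-synchronous clock.

    The clock counts actual_hz * elapsed_s cycles and divides by
    nominal_hz to display time, so its error grows linearly with
    the relative frequency deviation.
    """
    return elapsed_s * (actual_hz - nominal_hz) / nominal_hz

# Roughly two months at a slightly low average frequency
two_months_s = 60 * 24 * 3600
drift = clock_drift_seconds(49.9965, NOMINAL_HZ, two_months_s)
print(f"{drift / 60:.1f} minutes")  # → -6.0 minutes (the clock runs slow)
```

Note how tiny the deviation is: 3.5 millihertz, or 0.007 percent, is enough to lose six minutes over sixty days, which is why grid operators normally correct the average frequency so tightly.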

The news was announced this week by ENTSO-E, the agency that oversees the single, huge electricity grid connecting 25 European countries. It said that variations in the frequency of the AC, caused by imbalances between supply and demand on the grid, have been messing with the clocks. The imbalance is itself caused by a political argument between Serbia and Kosovo. “This is a very sensitive dispute that materializes in the energy issues,” Susanne Nies, a spokesperson for ENTSO-E, told The Verge.

Essentially, after Kosovo declared independence from Serbia in 2008, there were long negotiations over custody of utilities like telecoms and electricity infrastructure. As part of the ongoing agreements (Serbia still does not recognize Kosovo as a sovereign state), four Serb-majority districts in the north of Kosovo stopped paying for electricity. Kosovo initially covered this by charging the rest of the country more, but last December, it decided it had had enough and stopped paying. This led to an imbalance: the Kosovan districts were still using electricity, but no one was paying to put it on the grid.

This might sound weird, but it’s because electricity grids work on a system of supply and demand. As Stewart Larque of the UK’s National Grid explains, you want to keep the amount of electricity going onto the grid from power stations equal to the amount being taken off by homes and businesses. “Think of it like driving a car up a hill at a constant speed,” Larque told The Verge. “You need to carefully balance acceleration with gravity.” (The UK itself has not been affected by these variations because it runs its own grid.)

 

“THEY ARE FREE-RIDING ON THE SYSTEM.”

This balancing act is hugely complex and requires constant monitoring of supply and demand, along with communication between electricity companies across Europe. The dispute between Kosovo and Serbia, though, has put this system out of whack, as the two governments have refused to acknowledge what the other is doing.

“The Serbians [in Kosovo] have, according to our sources, not been paying for their electricity. So they are free-riding on the system,” says Nies.

The dispute came to a temporary resolution on Tuesday, when the Kosovan government stepped up to the plate and agreed to pay a fee of €1 million for the electricity used by the Serb-majority municipalities. “It is a temporary decision but as such saves our network functionality,” said Kosovo’s prime minister Ramush Haradinaj. In the longer term, though, a new agreement will need to be reached.

There have been rumors that the increase in demand from northern Kosovo was caused by cryptocurrency miners moving into the area to take advantage of the free electricity. But according to ENTSO-E, this is not the case. “It is absolutely unrelated to cryptocurrency,” Nies told The Verge. “There’s a lot of speculation about this, and it’s absolutely unrelated.” Representatives of Serbia’s power operator, EMS, refused to answer questions on this.

For now, “Kosovo is in balance again,” says Nies. “They are producing enough [electricity] to supply the population. The next step is to take the system back to normal, which will take several weeks.” In other words, time will return to normal for Europeans — if they remember to change their clocks.

 
