Audits aimed toward energy efficiency

By Southeast Farm Press


Agriculture Secretary Tom Vilsack has announced an initiative designed to help agricultural producers transition to more energy-efficient operations. The initiative will fund individual on-farm energy audits designed to save both money and energy once fully implemented.

"Reducing energy use on America's farms and ranches will not only help our agricultural producers become more profitable, but also help the United States become more energy independent," said Vilsack. "Through this initiative, producers will be able to receive individual on-farm energy audit evaluations and assistance with implementation of energy conservation and efficiency measures."

Approximately 1,000 on-farm energy audit evaluations in 29 states will be funded by $2 million through the Environmental Quality Incentives Program (EQIP) in fiscal year 2010. The energy audits will be individually tailored to cover each farm's primary energy uses, such as milk cooling, irrigation pumping, heating and cooling of livestock production facilities, manure collection and transfer, grain drying, and similar common on-farm activities.

Participating states include: Alabama, Arizona, Arkansas, California, Colorado, Connecticut, Florida, Georgia, Idaho, Louisiana, Maine, Maryland, Massachusetts, Mississippi, Nevada, New Hampshire, New Mexico, New York, Oklahoma, Pennsylvania, Rhode Island, South Dakota, Tennessee, Texas, Utah, Vermont, Virginia, West Virginia, and Wisconsin.

Implementation will occur in stages, beginning with the short-term goal of providing the on-farm energy audits to help identify how operations can become more energy efficient. Longer-term goals involve developing agricultural energy management plans for cost-effective implementation of the recommendations in the on-farm energy audits. More information about agricultural energy management plans is available at www.nrcs.usda.gov/programs/eqip/cap.html.

The 2008 farm bill provides authority to use EQIP financial assistance funds to pay for practices and conservation activities involving the development of an Agricultural Energy Management Plan (AgEMP) appropriate for the eligible land of a program participant. The Farm Bill statute allows EQIP payments of up to 75 percent of the estimated incurred cost of developing an AgEMP that meets agency standards and requirements. Eligible producers in the states listed above may apply for an AgEMP at their local NRCS office. EQIP payments are made directly to program participants for development of an AgEMP by a certified Technical Service Provider (TSP); see http://techreg.usda.gov/CustLocateTSP.aspx.
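As a back-of-the-envelope illustration of the 75 percent cost-share cap described above (a hypothetical calculation for orientation only, not NRCS's actual payment formula, which is set by program rules):

```python
def eqip_agemp_payment(estimated_cost: float, share: float = 0.75) -> float:
    """Illustrative EQIP cost-share: payment of up to 75% of the
    estimated incurred cost of developing an AgEMP.
    Hypothetical sketch; actual NRCS rates and caps are set by program rules."""
    if estimated_cost < 0:
        raise ValueError("estimated cost must be non-negative")
    return round(share * estimated_cost, 2)

# A $2,000 plan would be eligible for up to $1,500 in assistance.
print(eqip_agemp_payment(2000))  # 1500.0
```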

Information about how to apply for an AgEMP is available at www.nrcs.usda.gov/programs/eqip/EQIP_signup/2009_signup/index.html; click on the state where the property is located. Dairy, beef, poultry, swine, and other agricultural operations are included in this energy efficiency initiative. USDA's Natural Resources Conservation Service, in partnership with USDA Rural Development, will implement the agricultural energy conservation and efficiency initiative.

For information about other NRCS conservation programs, visit www.nrcs.usda.gov or the nearest USDA Service Center in your area.

This year marks the 75th anniversary of NRCS helping people help the land. Since its inception, the NRCS conservation delivery system has advanced a unique partnership with state and local governments and private landowners, delivering conservation based on specific local needs while accommodating state and national interests.

Related News

Coal, Business Interests Support EPA in Legal Challenge to Affordable Clean Energy Rule

The Affordable Clean Energy rule lawsuit pits the EPA and its coal industry allies against health groups before the D.C. Circuit, in a dispute over the Clean Power Plan repeal, greenhouse gas emissions standards, climate change, public health, and state authority.

 

Key Points

A legal fight over EPA's ACE rule and CPP repeal, weighing emissions policy, state authority, climate, and public health.

✅ Challenges repeal of Clean Power Plan and adoption of ACE.

✅ EPA backed by coal, utilities; health groups seek stricter limits.

✅ D.C. Circuit to review emissions authority and state roles.

 

The largest trade association representing coal interests in the country has joined other business and electric utility groups in siding with the EPA in a lawsuit challenging the Trump administration's repeal of the Clean Power Plan.

The suit -- filed by the American Lung Association and the American Public Health Association -- seeks to force the U.S. Environmental Protection Agency to drop a new rule-making process that critics claim would allow higher levels of greenhouse gas emissions, further contributing to the climate crisis and negatively impacting public health.

The new rule, which the Trump administration calls the "Affordable Clean Energy rule" (ACE), "would replace the 2015 Clean Power Plan, which EPA has proposed to repeal because it exceeded EPA's authority. The Clean Power Plan was stayed by the U.S. Supreme Court and has never gone into effect," according to an EPA statement.


America's Power -- formerly the American Coalition for Clean Coal Electricity -- the U.S. Chamber of Commerce, the National Mining Association, and the National Rural Electric Cooperative Association have filed motions seeking to join the lawsuit. The U.S. Court of Appeals for the District of Columbia Circuit has not yet responded to the motion.


"In this rule, the EPA has accomplished what eluded the prior administration: providing a clear, legal pathway to reduce emissions while preserving states' authority over their own grids," Hal Quinn, president and chief executive officer of the mining association, said when the new rule was released last month. "ACE replaces a proposal that was so extreme that the Supreme Court issued an unprecedented stay of the proposal, having recognized the economic havoc the mere suggestion of such overreach was causing in the nation's power grid."


The trade and business groups have argued that the Clean Power Plan, set by the Obama administration, was an overreach of federal power. Finalized in 2015, the plan was President Obama's signature climate change policy, designed to meet U.S. commitments under the Paris climate agreement. It would have set state limits on emissions from existing power plants while giving wide latitude for meeting goals, such as allowing plant operators to switch from coal to other generating sources to meet targets.

Former EPA Administrator Scott Pruitt argued that the rule exceeded federal statutory limits by imposing "outside the fence" regulations on coal-fired plants instead of regulating "inside the fence" operations that can improve efficiency.

The Clean Power Plan set a goal of reducing carbon emissions from power generators by 32 percent by the year 2030. An analysis from the Rhodium Group found that had states taken full advantage of the CPP's flexibility, emissions would have been reduced by as much as 72 million metric tons per year on average. Still, even absent federal mandates, the group noted that states are taking it upon themselves to enact emission-reducing plans based on market forces.

In its motion, America's Power argues the EPA "acknowledged that the [Best System of Emission Reduction] for a source category must be 'limited to measures that can be implemented ... by the sources themselves.'" If plants couldn't take such measures, compliance with the new rule would require owners or operators to buy emission rate credits, increasing investment in electricity from gas-fired or renewable sources. The resulting rise in operating costs, combined with federal efforts to shift power generation to other energy sources, would eventually force coal-fired plants out of business.


"While we are confident that EPA will prevail in the courts, we also want to help EPA defend the new rule against others who prefer extreme regulation," said Michelle Bloodworth, president and CEO of America's Power.

"Extreme regulation" to one group is environmental and health protections to another, though.

Howard A. Learner, executive director of the Environmental Law & Policy Center of the Midwest, defended the Clean Power Plan in an opinion piece published in June.

"The Midwest still produces more electricity from coal plants than any other region of the country, and Midwesterners bear the full range of pollution harms to public health, the Great Lakes, and overall environmental quality," Learner wrote. "The new [Affordable Clean Energy] Rule is a misguided policy, moves our nation backward in solving climate change problems, and misses opportunities for economic growth and innovation in the global shift to renewable energy. If not reversed by the courts, as it should be, the next administration will have the challenge of doing the right thing for public health, the climate and our clean energy future."

When it initially filed its lawsuit against the Trump administration's Affordable Clean Energy Rule, the American Lung Association accused the EPA of "abdicat[ing] its legal duties and obligations to protect public health." It also referred to the new rule as "dangerous."

 

Related News


Jolting the brain's circuits with electricity is moving from radical to almost mainstream therapy

Brain stimulation is transforming neuromodulation, from TMS and DBS to closed-loop devices, targeting neural circuits for addiction, depression, Parkinson's, epilepsy, and chronic pain, powered by advanced imaging, AI analytics, and the NIH BRAIN Initiative.

 

Key Points

Brain stimulation uses electrical pulses to modulate neural circuits, easing symptoms in depression, Parkinson's, and epilepsy.

✅ Noninvasive TMS and invasive DBS modulate specific brain circuits

✅ Closed-loop systems adapt stimulation via real-time biomarker detection

✅ Emerging uses: addiction, depression, Parkinson's, epilepsy, chronic pain

 

In June 2015, biology professor Colleen Hanlon went to a conference on drug dependence. As she met other researchers and wandered around a glitzy Phoenix resort’s conference rooms to learn about the latest work on therapies for drug and alcohol use disorders, she realized that out of the 730 posters, there were only two on brain stimulation as a potential treatment for addiction — both from her own lab at Wake Forest School of Medicine.

Just four years later, she would lead 76 researchers on four continents in writing a consensus article about brain stimulation as an innovative tool for addiction. And in 2020, the Food and Drug Administration approved a transcranial magnetic stimulation device to help patients quit smoking, a milestone for substance use disorders.

Brain stimulation is booming. Hanlon can attend entire conferences devoted to the study of what electrical currents do—including how targeted stimulation can improve short-term memory in older adults—to the intricate networks of highways and backroads that make up the brain’s circuitry. This expanding field of research is slowly revealing truths of the brain: how it works, how it malfunctions, and how electrical impulses, precisely targeted and controlled, might be used to treat psychiatric and neurological disorders.

In the last half-dozen years, researchers have launched investigations into how different forms of neuromodulation affect addiction, depression, loss-of-control eating, tremor, chronic pain, obsessive compulsive disorder, Parkinson’s disease, epilepsy, and more. Early studies have shown subtle electrical jolts to certain brain regions could disrupt circuit abnormalities — the miscommunications — that are thought to underlie many brain diseases, and help ease symptoms that persist despite conventional treatments.

The National Institutes of Health’s massive BRAIN Initiative put circuits front and center, distributing $2.4 billion to researchers since 2013 to devise and use new tools to observe interactions between brain cells and circuits. That, in turn, has kindled interest from the private sector. Among the advances that have enhanced our understanding of how distant parts of the brain talk with one another are new imaging technology and the use of machine learning to interpret complex brain signals and analyze what happens when circuits go haywire.

Still, the field is in its infancy, and even therapies that have been approved for use in patients with, for example, Parkinson’s disease or epilepsy help only a minority of patients, and expectations can outpace evidence. “If it was the Bible, it would be the first chapter of Genesis,” said Michael Okun, executive director of the Norman Fixel Institute for Neurological Diseases at University of Florida Health.

As brain stimulation evolves, researchers face daunting hurdles, and not just scientific ones. How will brain stimulation become accessible to all the patients who need it, given how expensive and invasive some treatments are? Proving to the FDA that brain stimulation works, and does so safely, is complicated and expensive. Even with a swell of scientific momentum and an influx of funding, the agency has so far cleared brain stimulation for only a handful of conditions. Persuading insurers to cover the treatments is another challenge altogether. And outside the lab, researchers are debating nascent issues, such as the ethics of mind control, the privacy of a person's brain data, and how best to involve patients in the study of the human brain's far-flung regions.

Neurologist Martha Morrell is optimistic about the future of brain stimulation. She remembers the shocked reactions of her colleagues in 2004 when she left full-time teaching at Stanford (she still has a faculty appointment as a clinical professor of neurology) to direct clinical trials at NeuroPace, then a young company making neurostimulator systems to potentially treat epilepsy patients.

“When I started working on this, everybody thought I was insane,” said Morrell. Nearly 20 years in, she sees a parallel between the story of jolting the brain’s circuitry and that of early implantable cardiac devices, such as pacemakers and defibrillators, which initially “were used as a last option, where all other medications have failed.” Now, “the field of cardiology is very comfortable incorporating electrical therapy, device therapy, into routine care. And I think that’s really where we’re going with neurology as well.”


Reaching a ‘slope of enlightenment’
Parkinson’s is, in some ways, an elder in the world of modern brain stimulation, and it shows the potential as well as the limitations of the technology. Surgeons have been implanting electrodes deep in the brains of Parkinson’s patients since the late 1990s, and in people with more advanced disease since the early 2000s.

In that time, it’s gone through the “hype cycle,” said Okun, the national medical adviser to the Parkinson’s Foundation since 2006. Feverish excitement and overinflated expectations have given way to reality, bringing scientists to a “slope of enlightenment,” he said. They have found deep brain stimulation to be very helpful for some patients with Parkinson’s, rendering them almost symptom-free by calming the shaking and tremors that medications couldn’t. But it doesn’t stop the progression of the disease, or resolve some of the problems patients with advanced Parkinson’s have walking, talking, and thinking.

In 2015, the same year Hanlon found only her lab’s research on brain stimulation at the addiction conference, Kevin O’Neill watched one finger on his left hand start doing something “funky.” One finger twitched, then two, then his left arm started tingling and a feeling appeared in his right leg, like it was about to shake but wouldn’t — a tremor.

“I was assuming it was anxiety,” O’Neill, 62, told STAT. He had struggled with anxiety before, and he had endured a stressful year: a separation, selling his home, starting a new job at a law firm in California’s Bay Area. But a year after his symptoms first began, O’Neill was diagnosed with Parkinson’s.


Doctors prescribed him pills that promote the release of dopamine, to offset the death of brain cells that produce this messenger molecule in circuits that control movement. But he took them infrequently because he worried about insomnia as a side effect. Walking became difficult — “I had to kind of think my left leg into moving” — and the labor lawyer found it hard to give presentations and travel to clients’ offices.

A former actor with an outgoing personality, he developed social anxiety and didn’t tell his bosses about his diagnosis for three years, and wouldn’t have, if not for two workdays in summer 2018 when his tremors were severe and obvious.


It was during that period that he learned about deep brain stimulation, at a support group for Parkinson’s patients. “I thought, ‘I will never let anybody fuss with my brain. I’m not going to be a candidate for that,’” he recalled. “It felt like mad scientist science fiction. Like, are you kidding me?”

But over time, the idea became less radical, as O’Neill spoke to DBS patients and doctors and did his own research, and as his symptoms worsened. He decided to go for it. Last May, doctors at the University of California, San Francisco surgically placed three metal leads into his brain, connected by thin cords to two implants in his chest, just near the clavicles. A month later, he went into the lab and researchers turned the device on.

“That was a revelation that day,” he said. “You immediately — literally, immediately — feel the efficacy of these things. … You go from fully symptomatic to non-symptomatic in seconds.”

When his nephew pulled up to the curb to pick him up, O’Neill started dancing, and his nephew teared up. The following day, O’Neill couldn’t wait to get out of bed and go out, even if it was just to pick up his car from the repair shop.

In the year since, O’Neill’s walking has gone from “awkward and painful” to much improved, and his tremors are all but gone. When he is extra frazzled, like while renovating and moving into his new house overlooking the hills of Marin County, he feels tense and his left arm shakes and he worries the DBS is “failing,” but generally he returns to a comfortable, tremor-free baseline.

O’Neill worried about the effects of DBS wearing off but, for now, he can think “in terms of decades, instead of years or months,” he recalled his neurologist telling him. “The fact that I can put away that worry was the big thing.”

He’s just one patient, though. The brain has regions that are mostly uniform across all people. The functions of those regions also tend to be the same. But researchers suspect that how brain regions interact with one another — who mingles with whom, and what conversation they have — and how those mixes and matches cause complex diseases varies from person to person. So brain stimulation looks different for each patient.

Each case of Parkinson’s manifests slightly differently, and that’s a bit of knowledge that applies to many other diseases, said Okun, who organized the nine-year-old Deep Brain Stimulation Think Tank, where leading researchers convene, review papers, and publish reports on the field’s progress each year.

“I think we’re all collectively coming to the realization that these diseases are not one-size-fits-all,” he said. “We have to really begin to rethink the entire infrastructure, the schema, the framework we start with.”

Brain stimulation is also used frequently to treat people with common forms of epilepsy, and has reduced the number of seizures or improved other symptoms in many patients. Researchers have also been able to collect high-quality data about what happens in the brain during a seizure — including identifying differences between epilepsy types. Still, only about 15% of patients are symptom-free after treatment, according to Robert Gross, a neurosurgery professor at Emory University in Atlanta.

“And that’s a critical difference for people with epilepsy. Because people who are symptom-free can drive,” which means they can get to a job in a place like Georgia, where there is little public transit, he said. So taking neuromodulation “from good to great” is imperative, Gross said.


Renaissance for an ancient idea
Recent advances are bringing about what Gross sees as “almost a renaissance period” for brain stimulation, though the ideas that undergird the technology are millennia old. Neuromodulation goes back to at least ancient Egypt and Greece, when electrical shocks from a ray called the “torpedo fish” were recommended as a treatment for headache and gout. Over centuries, the fish zaps led to doctors burning holes into the brains of patients. Those “lesions” worked, somehow, but nobody could explain why they alleviated some patients’ symptoms, Okun said.

Perhaps the clearest predecessor to today’s technology is electroconvulsive therapy (ECT), which in a rudimentary and dangerous way began being used on patients with depression roughly 100 years ago, said Nolan Williams, director of the Brain Stimulation Lab at Stanford University.

More modern forms of brain stimulation came about in the United States in the mid-20th century. A common, noninvasive approach is transcranial magnetic stimulation, which involves placing an electromagnetic coil on the scalp to transmit a current into the outermost layer of the brain. Vagus nerve stimulation (VNS), used to treat epilepsy, zaps a nerve that contributes to some seizures.

The most invasive option, deep brain stimulation, involves implanting in the skull a device attached to electrodes embedded in deep brain regions, such as the amygdala, that can’t be reached with other stimulation devices. In 1997, the FDA gave its first green light to deep brain stimulation as a treatment for tremor, and then for Parkinson’s in 2002 and the movement disorder dystonia in 2003.

Even as these treatments were cleared for patients, though, what was happening in the brain remained elusive. But advanced imaging tools now let researchers peer into the brain and map out networks — a recent breakthrough that researchers say has propelled the field of brain stimulation forward as much as increased funding has. Imaging of both human brains and animal models has helped researchers identify the neuroanatomy of diseases, target brain regions with more specificity, and watch what happens after electrical stimulation.

Another key step has been the shift from open-loop stimulation — a constant stream of electricity — to closed-loop stimulation that delivers targeted, brief jolts in response to a symptom trigger. To make use of this technology, labs need people to develop artificial intelligence tools to interpret the large data sets a brain implant generates, and to tailor devices based on that information.
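The open-loop versus closed-loop distinction can be sketched as a toy controller (purely illustrative; real devices rely on proprietary, clinically validated detection algorithms, and the threshold and biomarker here are hypothetical):

```python
def closed_loop_stimulate(biomarker_stream, threshold=0.8):
    """Toy closed-loop controller: deliver a brief pulse only when a
    monitored biomarker crosses a threshold (the 'symptom trigger'),
    rather than stimulating continuously as an open-loop device would.
    Illustrative sketch only; values are hypothetical."""
    pulses = []
    for t, value in enumerate(biomarker_stream):
        if value >= threshold:  # trigger detected at sample t
            pulses.append(t)    # fire a targeted, brief jolt
    return pulses

# Pulses fire only at the samples where the biomarker spikes.
print(closed_loop_stimulate([0.1, 0.9, 0.2, 0.85, 0.3]))  # [1, 3]
```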

“We’ve needed to learn how to be data scientists,” Morrell said.

Affinity groups, like the NIH-funded Open Mind Consortium, have formed to fill that gap. Philip Starr, a neurosurgeon and developer of implantable brain devices at the University of California at San Francisco Health system, leads the effort to teach physicians how to program closed-loop devices, and works to create ethical standards for their use. “There’s been extraordinary innovation after 20 years of no innovation,” he said.

The BRAIN Initiative has been critical, several researchers told STAT. “It’s been a godsend to us,” Gross said. The NIH’s Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative was launched in 2013 during the Obama administration with a $50 million budget. BRAIN now spends over $500 million per year. Since its creation, BRAIN has given over 1,100 awards, according to NIH data. Part of the initiative’s purpose is to pair up researchers with medical technology companies that provide human-grade stimulation devices to the investigators. Nearly three dozen projects have been funded through the investigator-devicemaker partnership program and through one focused on new implantable devices for first-in-human use, according to Nick Langhals, who leads work on neurological disorders at the initiative.

The more BRAIN invests, the more research is spawned. “We learn more about what circuits are involved … which then feeds back into new and more innovative projects,” he said.

Many BRAIN projects are still in early stages, finishing enrollment or small feasibility studies, Langhals said. Over the next couple of years, scientists will begin to see some of the fruits of their labor, which could lead to larger clinical trials, or to companies developing more refined brain stimulation implants, Langhals said.

Money from the National Institute of Mental Health, as well as the NIH’s Helping to End Addiction Long-term (HEAL) initiative, has similarly sweetened the appeal of brain stimulation, both for researchers and industry. “A critical mass” of companies interested in neuromodulation technology has mushroomed where, for two decades, just a handful of companies stood, Starr said.

More and more, pharmaceutical and digital health companies are looking at brain stimulation devices “as possible products for their future,” said Linda Carpenter, director of the Butler Hospital TMS Clinic and Neuromodulation Research Facility.


‘Psychiatry 3.0’
The experience with using brain stimulation to stop tremors and seizures inspired psychiatrists to begin exploring its use as a potentially powerful therapy for healing, or even getting ahead of, mental illness.

In 2008, the FDA approved TMS for patients with major depression who had tried, and not gotten relief from, drug therapy. “That kind of opened the door for all of us,” said Hanlon, a professor and researcher at the Center for Research on Substance Use and Addiction at Wake Forest School of Medicine. The last decade saw a surge of research into how TMS could be used to reset malfunctioning brain circuits involved in anxiety, depression, obsessive-compulsive disorder, and other conditions.

“We’re certainly entering into what a lot of people are calling psychiatry 3.0,” Stanford’s Williams said. “Whereas the first iteration was Freud and all that business, the second one was the psychopharmacology boom, and this third one is this bit around circuits and stimulation.”

Drugs alleviate some patients’ symptoms while failing to help many others, but psychopharmacology clearly showed “there’s definitely a biology to this problem,” Williams said — a biology that in some cases may be more amenable to brain stimulation.

The exact mechanics of what happens between cells when brain circuits … well, short-circuit, is unclear. Researchers are getting closer to finding biomarkers that warn of an incoming depressive episode, or wave of anxiety, or loss of impulse control. Those brain signatures could be different for every patient. If researchers can find molecular biomarkers for psychiatric disorders — and find ways to preempt those symptoms by shocking particular brain regions — that would reshape the field, Williams said.

Not only would disease-specific markers help clinicians diagnose people, but they could help chip away at the stigma that paints mental illness as a personal or moral failing instead of a disease. That’s what happened for epilepsy in the 1960s, when scientific findings nudged the general public toward a deeper understanding of why seizures happen, and it’s “the same trajectory” Williams said he sees for depression.

His research at the Stanford lab also includes work on suicide, and obsessive-compulsive disorder, which the FDA said in 2018 could be treated using noninvasive TMS. Williams considers brain stimulation, with its instantaneity, to be a potential breakthrough for urgent psychiatric situations. Doctors know what to do when a patient is rushed into the emergency room with a heart attack or a stroke, but there is no immediate treatment for psychiatric emergencies, he said. Williams wonders: What if, in the future, a suicidal patient could receive TMS in the emergency room and be quickly pulled out of their depressive mental spiral?

Researchers are also actively investigating the brain biology of addiction. In August 2020, the FDA approved TMS for smoking cessation, the first such OK for a substance use disorder, which is “really exciting,” Hanlon said. Although there is some nuance when comparing substance use disorders, a primal mechanism generally defines addiction: the eternal competition between “top-down” executive control functions and “bottom-up” cravings. It’s the same process that is at work when one is deciding whether to eat another cookie or abstain — just exacerbated.

Hanlon is trying to figure out if the stop and go circuits are in the same place for all people, and whether neuromodulation should be used to strengthen top-down control or weaken bottom-up cravings. Just as brain stimulation can be used to disrupt cellular misfiring, it could also be a tool for reinforcing helpful brain functions, or for giving the addicted brain what it wants in order to curb substance use.

Evidence suggests many people with schizophrenia smoke cigarettes (a leading cause of early death for this population) because nicotine reduces the “hyperconnectivity” that characterizes the brains of people with the disease, said Heather Ward, a research fellow at Boston’s Beth Israel Deaconess Medical Center. She suspects TMS could mimic that effect, and therefore reduce cravings and some symptoms of the disease, and she hopes to prove that in a pilot study that is now enrolling patients.

If the scientific evidence proves out, clinicians say brain stimulation could be used alongside behavioral therapy and drug-based therapy to treat substance use disorders. “In the end, we’re going to need all three to help people stay sober,” Hanlon said. “We’re adding another tool to the physician’s toolbox.”

Decoding the mysteries of pain
A favorable outcome to the ongoing research, one that would fling the doors to brain stimulation wide open for patients with myriad disorders, is far from guaranteed. Chronic pain researchers know that firsthand.

Chronic pain, among the most mysterious and hard-to-study medical phenomena, was the first use for which the FDA approved deep brain stimulation, said Prasad Shirvalkar, an assistant professor of anesthesiology at UCSF. But when studies didn’t pan out after a year, the FDA retracted its approval.

Shirvalkar is working with Starr and neurosurgeon Edward Chang on a profoundly complex problem: “decoding pain in the brain states, which has never been done,” as Starr told STAT.

Part of the difficulty of studying pain is that there is no objective way to measure it. Much of what we know about pain is from rudimentary surveys that ask patients to rate how much they’re hurting, on a scale from zero to 10.

Using implantable brain stimulation devices, the researchers ask patients for a 0-to-10 rating of their pain while recording up-and-down cycles of activity in the brain. They then use machine learning to compare the two streams of information and see what brain activity correlates with a patient’s subjective pain experience. Implantable devices let researchers collect data over weeks and months, instead of basing findings on small snippets of information, allowing for a much richer analysis.
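The core of this analysis, correlating a subjective 0-to-10 pain rating with a simultaneously recorded neural signal, can be sketched in a few lines of Python. Everything here is synthetic and illustrative: the "neural" feature is simulated, and real studies use far richer recordings and machine-learning models rather than a single correlation coefficient.

```python
import random

random.seed(0)  # deterministic synthetic data

# Synthetic stand-ins: self-reported pain scores (0-10) and a noisy
# "neural feature" loosely tracking them. Real studies record actual
# brain activity over weeks and fit far richer models than this.
n = 200
pain = [random.uniform(0, 10) for _ in range(n)]
neural = [0.8 * p + random.gauss(0, 1.5) for p in pain]

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(f"pain vs. neural feature: r = {pearson_r(pain, neural):.2f}")
```

The payoff of long-term implantable recording is exactly this kind of paired data at scale: weeks of observations make the correlation estimate, and any model built on it, far more trustworthy than a handful of clinic visits would.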

 


Was there another reason for electricity shutdowns in California?


According to the official, widely reported story, Pacific Gas & Electric (PG&E) initiated power shutoffs across substantial portions of its electric transmission system in northern California as a precautionary measure.

Citing wind speeds it described as “historic,” the utility claimed that if it didn’t turn off the grid, wind-caused damage to its infrastructure could start more wildfires.

Perhaps that’s true. Perhaps. But this tale presumes that the folks who designed and maintain PG&E’s transmission system either didn’t know about or ignored the need to design it to withstand severe weather events, and that the Federal Energy Regulatory Commission (FERC) and the North American Electric Reliability Corp. (NERC) allowed the utility to do so.

Ignorance and incompetence happen, to be sure, but there’s much about this story that doesn’t smell right—and it’s disappointing that most journalists and elected officials are apparently accepting it without question.

Take, for example, this statement from a Fox News story about the Kincade Fire: “A PG&E meteorologist said it’s ‘likely that many trees will fall, branches will break,’ which could damage utility infrastructure and start a fire.”

Did you ever notice how utilities cut wide swaths of trees away when transmission lines pass through forests? There’s a reason for that: When trees fall and branches break, the grid can still function.

So, if badly designed and poorly maintained infrastructure isn’t the reason PG&E cut power to millions of Californians, what might have prompted it to do so? Could it be that PG&E’s heavy reliance on renewable energy means it doesn’t have power to send when a “historic” weather event occurs?

 

Wind Speed Limits

The two most popular forms of renewable energy come with inherent operating limitations. With solar power, the constraint is obvious: the availability of sunlight. One doesn’t generate solar power at night, and generation drops off as cloud cover increases during the day.

The main operating constraint of wind power is, of course, wind speed. At the low end of the scale, you need about a 6 or 7 mile-per-hour wind to get a turbine moving. This is called the “cut-in speed.” To generate maximum power, about a 30 mph wind is typically required. But if the wind speed is too high, the wind turbine will shut down. This is called the “cut-out speed,” and it’s about 55 miles per hour for most modern wind turbines.

It may seem odd that wind turbines have a cut-out speed, but there’s a very good reason for it. Each wind turbine rotor is connected to an electric generator housed in the turbine nacelle. The connection is made through a gearbox that is sized to turn the generator at the precise speed required to produce 60 Hertz AC power.

The blades of the wind turbine are airfoils, just like the wings of an airplane. Adjusting the pitch (angle) of the blades allows the rotor to maintain constant speed, which, in turn, allows the generator to maintain the constant speed it needs to safely deliver power to the grid. However, there’s a limit to blade pitch adjustment. When the wind is blowing so hard that pitch adjustment is no longer possible, the turbine shuts down. That’s the cut-out speed.
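The behavior described above—no output below the cut-in speed, a steep climb toward rated power, pitch-regulated constant output up to the cut-out speed, then a hard shutdown—can be sketched as a simple power-curve function. The default numbers below come from the article’s round figures, not from any specific turbine’s datasheet.

```python
# Illustrative wind-turbine power curve (speeds in mph from the article;
# the 2,000 kW rated output is an assumed example, not a real model).
def turbine_power_kw(wind_mph, cut_in=6.5, rated_speed=30.0,
                     cut_out=55.0, rated_kw=2000.0):
    """Return approximate output in kW for a given wind speed in mph."""
    if wind_mph < cut_in or wind_mph >= cut_out:
        return 0.0          # rotor parked: too little wind, or shut down
    if wind_mph >= rated_speed:
        return rated_kw     # blade pitching holds output at rated power
    # Between cut-in and rated speed, output rises roughly with the
    # cube of wind speed.
    frac = (wind_mph**3 - cut_in**3) / (rated_speed**3 - cut_in**3)
    return rated_kw * frac

for mph in (5, 20, 40, 60):
    print(f"{mph} mph -> {turbine_power_kw(mph):.0f} kW")
```

Note the cliff at the cut-out speed: a “historic” wind event doesn’t merely reduce wind output, it can zero it out across an entire region at once.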

Now consider how California’s power generation profile has changed. According to Energy Information Administration data, in 2001 the state generated 74.3 percent of its electricity from traditional sources—fossil fuels and nuclear. Hydroelectric, geothermal, and biomass-generated power accounted for most of the remaining 25.7 percent, with wind and solar providing only 1.98 percent of the total.

By 2018, the state’s renewable portfolio had jumped to 43.8 percent of total generation, with wind and solar alone accounting for 17.9 percent. That’s a lot of power to depend on from inherently unreliable sources. Thus, it wouldn’t be at all surprising to learn that PG&E stopped delivering power not out of fear of starting fires, but because it knew it wouldn’t have power to deliver once high winds shut down all those wind turbines.

 


Nord Stream: Norway and Denmark tighten energy infrastructure security after gas pipeline 'attack'


Norway and Denmark will increase security and surveillance around their energy infrastructure sites after the alleged sabotage of Russia’s Nord Stream gas pipelines in the Baltic Sea.

Major leaks struck two underwater natural gas pipelines running from Russia to Germany, with experts reporting that explosions rattled the Baltic Sea beforehand.

Norway, an oil-rich nation and Europe’s biggest supplier of gas, will strengthen security at its land and offshore installations, the country’s energy minister said.

The Scandinavian country's Petroleum Safety Authority also urged vigilance on Monday after unidentified drones were seen flying near Norway's offshore oil and gas platforms.

"The PSA has received a number of warnings/notifications from operator companies on the Norwegian Continental Shelf concerning the observation of unidentified drones/aircraft close to offshore facilities," the agency said in a statement.

"Cases where drones have infringed the safety zone around facilities are now being investigated by the Norwegian police."

Meanwhile, Denmark will increase security across its energy sector after the Nord Stream incident, a spokesperson for gas transmission operator Energinet told Upstream.

The Danish Maritime Agency has also imposed an exclusion zone of five nautical miles around the leaks, warning ships that they could lose buoyancy and that there is a risk of the escaping gas igniting "above the water and in the air."

Denmark's defence minister said there was no cause for security concerns in the Baltic Sea region.

"Russia has a significant military presence in the Baltic Sea region and we expect them to continue their sabre-rattling," Morten Bodskov said in a statement.

Video taken by a Danish military plane on Tuesday afternoon showed the extent of one of the gas pipeline leaks, with the surface of the Baltic bubbling up as gas escaped.

Meanwhile police in Sweden have opened a criminal investigation into "gross sabotage" of the Nord Stream 1 and Nord Stream 2 pipelines, and Sweden's crisis management unit was activated to monitor the situation. The unit brings together representatives from different government agencies. 

Swedish Foreign Minister Ann Linde had a call with her Danish counterpart Jeppe Kofod on Tuesday evening, and the pair also spoke with Norwegian Foreign Minister Anniken Huitfeldt on Wednesday. Kofod said there should be a "clear and unambiguous EU statement about the explosions in the Baltic Sea."

"Focus now on uncovering exactly what has happened - and why. Any sabotage against European energy infrastructure will be met with a robust and coordinated response," said Kofod. 

 


Two-thirds of the U.S. is at risk of power outages this summer


The Department of Energy recently warned that two-thirds of the U.S. is at risk of losing power this summer. It’s an increasingly common refrain: Homeowners want to be less reliant on the aging power grid and don’t want to be at the mercy of electric utilities due to rising energy costs and dwindling faith in the power grid’s reliability.

And it makes sense. While the inflated price of eggs and butter made headlines earlier this year, electricity prices quietly increased at twice the rate of overall inflation in 2022, and homeowners have taken notice. In fact, according to Aurora Solar’s Industry Snapshot, 62% expect energy prices to continue to rise.

Homeowners aren’t just frustrated that electricity is pricey when they need it; they’re also worried it won’t be available at all when they feel most vulnerable. Nearly half (48%) of homeowners are concerned about power outages stemming from weather events, followed closely by outages due to cyberattacks on the power grid.

These concerns around reliability and cost are creating a deep lack of confidence in the power grid. Yet, despite these growing concerns, homeowners are increasingly using electricity to displace other fuel sources.

The electrification of everything
From electric heat pumps to electric stoves and clothes dryers, homeowners are accelerating the electrification of their homes. Perhaps the most exciting example is electric vehicle (EV) adoption and the need for home charging. With major vehicle makers committing to ambitious electric vehicle targets and even going all-electric in the future, EVs are primed to make an even bigger splash in the years to come.

The by-product of this electrification movement is, of course, higher electric bills from increased consumption. Homeowners also risk paying more for every unit of energy they use if they’re part of a tiered utility pricing structure, where customers are charged progressively higher rates as their total consumption rises. Many new electric vehicle owners don’t realize this until they are deep into purchasing their new vehicle, or even when they open that first electric bill after the car is in their driveway.
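The effect of tiered rates on an EV owner’s bill can be illustrated with a short calculation. The tier boundaries and prices below are hypothetical, made up for illustration, and do not reflect any utility’s actual tariff.

```python
# Hypothetical tiered rate schedule: (kWh cap for the tier, $/kWh).
# These numbers are invented for illustration only.
TIERS = [(400, 0.15), (800, 0.20), (float("inf"), 0.27)]

def tiered_bill(kwh, tiers=TIERS):
    """Bill monthly usage, charging each successive block at a higher rate."""
    bill, prev_cap = 0.0, 0
    for cap, rate in tiers:
        if kwh <= prev_cap:
            break
        bill += (min(kwh, cap) - prev_cap) * rate
        prev_cap = cap
    return round(bill, 2)

base = tiered_bill(600)      # household before the EV
with_ev = tiered_bill(900)   # same household adding ~300 kWh of charging
print(base, with_ev, round((with_ev - base) / 300, 3))
```

Under these made-up rates, the EV’s extra 300 kWh costs about 22 cents per kWh at the margin, noticeably more than the roughly 17 cents per kWh the household averaged before—exactly the surprise described above.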

Sure, this electrification movement can feel counterintuitive given the power grid concerns. But it’s actually the first step toward energy independence.

Balancing conflicting movements
The fact is that electrification is moving forward quickly, even among homeowners who are concerned about electricity prices and power grid reliability. This has the potential to lead to even more discontent with electric utilities and growing anxiety over access to electricity in extreme situations. There is a third trend, though, that can help reconcile these two conflicting movements: the growth of solar.

The popularity of solar is likely higher than you think: Nearly 77% of homeowners either have solar panels on their homes or are interested in purchasing them. The Aurora Solar Industry Snapshot report also showed a nearly 40% year-over-year increase in residential solar projects across the U.S. in 2022, aligning with the Solar Energy Industries Association’s (SEIA) Solar Market Insight Report, which found, “Residential solar had a record year [in 2022] with nearly 6 GWdc of installations, representing 40% growth over 2021.”

It makes sense that finding ways to tamp down—even eliminate—the growing bills caused by home electrification is accelerating interest in solar, and residential solar installers are seeing this firsthand. The link between EVs and solar is a great proof point: Almost 80% of solar professionals said EV adoption often drives new interest in solar.

 


Can the Electricity Industry Seize Its Resilience Moment?


When operators in Duke Energy's control room in Raleigh, North Carolina, wait for a hurricane, the mood is often calm in the hours leading up to the storm.

“Things are usually fairly quiet before the activity starts,” said Mark Goettsch, the systems operations manager at Duke. “We’re anxiously awaiting the first operation and the first event. Once that begins, you get into storm mode.”

Then begins a “frenzied pace” that can last for days — like when Hurricane Florence parked over Duke’s service territory in September.

When an event like Florence hits, all eyes are on transmission and distribution. Where it’s available, Duke uses remote switching to reconnect customers quickly. As outages mount, the utility forecasts and balances its generation with electricity demand.

The control center’s four to six operators work 12-hour shifts, while nearby staff members field thousands of calls and alarms on the system. After it’s over, “we still hold our breath a little bit to make sure we’ve operated everything correctly,” said Goettsch. Damage assessment and rebuilding can only begin once a storm passes.

That cycle is becoming increasingly common in utility service areas like Duke's.

A slate of natural disasters that reads like a roll call — Willa, Michael, Harvey, Irma, Maria, Florence and Thomas — has forced a serious conversation about resiliency. And though Goettsch has heard a lot about resiliency as a “hot topic” at industry events and meetings, those conversations are only now entering Duke’s control room.

Resilience discussions come and go in the energy industry. Storms like Hurricanes Sandy and Matthew can spur a nationwide focus on resiliency, but change is largely concentrated in the local areas that experienced the disaster. After a few news cycles, the topic fades into the background.

However, experts agree that resilience is becoming much more important to year-round utility planning and operations. It's not a fad.

“If you look at the whole ecosystem of utilities and vendors, there’s a sense that there needs to be a more resilient grid,” said Miki Deric, Accenture’s managing director of utilities, transmission and distribution for North America. “Even if they don’t necessarily agree on everything, they are all working with the same objective.”

Can renewables meet the challenge?

After Hurricane Florence, The Intercept reported on coal ash basins washed out by the storm’s overwhelming waters. In advance of that storm, Duke shut down one nuclear plant to protect it from high winds. The Washington Post also recently reported on a slowly leaking oil spill, which could surpass Deepwater Horizon in size, caused by Hurricane Ivan in 2004.

Clean energy boosters have seized on those vulnerabilities. They say solar and wind, which don’t rely on access to fuel and can often generate power immediately after a storm, provide resilience that other electricity sources do not.

“Clearly, logistics becomes a big issue on fossil plants, much more than renewable,” said Bruce Levy, CEO and president at BMR Energy, which owns and operates clean energy projects in the Caribbean and Latin America. “The ancillaries around it — the fuel delivery, fuel storage, water in, water out — are all as susceptible to damage as a renewable plant.”

Duke, however, dismissed the notion that one generation type could beat out another in a serious storm.

“I don’t think any generation source is immune,” said Duke spokesperson Randy Wheeless. “We’ve always been a big supporter of a balanced energy mix. That’s going to include nuclear and natural gas and solar and renewables as well. We do that because not every day is a good day for each generation source.”

In regard to performance, Wade Schauer, director of Americas Power & Renewables Research at Wood Mackenzie, said the situation is “complex.” According to him, output of solar and wind during a storm depends heavily on the event and its location.

While comprehensive data on generation performance is sparse, Schauer said stormy weather might knock out 25 percent of coal and gas generation while cutting 95 percent of output from renewables. Ahead of last year’s “bomb cyclone” in New England, WoodMac data shows that wind dropped to less than 1 percent of the supply mix.

“When it comes to resiliency, ‘average performance’ doesn't cut it,” said Schauer.

In the future, he said high winds could impact all U.S. offshore wind farms, since projects are slated for a small geographic area in the Northeast. He also pointed to anecdotal instances of solar arrays in New England taken out by feet of snow. During Florence, North Carolina’s wind farms escaped the highest winds and continued producing electricity throughout. Cloud cover, on the other hand, pushed solar production below average levels.

After Florence passed, Duke reported that most of its solar came back online quickly, although four of its utility-owned facilities remained offline for weeks afterward. Only one was out because of storm damage; the other three remained offline due to substation interconnection issues.

“Solar performed pretty well,” said Wheeless. “But did it come out unscathed? No.”

According to installer reports, solar systems fared relatively well in recent storms. But the industry has also highlighted potential improvements. Following Hurricanes Maria and Irma, the Federal Emergency Management Agency published guidelines for installing and maintaining storm-resistant solar arrays. The document recommended steps such as annual checks for bolt tightness and using microinverters rather than string inverters.

Rocky Mountain Institute (RMI) also assembled a guide for retrofitting and constructing new installations. It described attributes of solar systems that survived storms, like lateral racking supports, and those that failed, like undersized and under-torqued bolts.

“The hurricanes, as much as no one liked them, [were] a real learning experience for folks in our industry,” said BMR’s Levy. “We saw what worked, and what didn’t.”          

Facing the "800-pound gorilla" on the grid

Advocates believe wind, solar, batteries, and microgrids offer the most promise because they rely less on transmitting electricity long distances.

Most extreme weather outages arise from transmission and distribution problems, not generation issues. Schauer at WoodMac called storm damage to T&D the “800-pound gorilla.”

“I'd be surprised if a single customer power outage was due to generators being offline, especially since loads were so low due to mild temperatures and people leaving the area ahead of the storm,” he said of Hurricane Florence. “Instead, it was wind [and] tree damage to power lines and blown transformers.”

 
