Google working on smart plug-in hybrid charging

By Reuters


Google Inc is in the early stages of looking at ways to write software that would fully integrate plug-in hybrid vehicles into the power grid, minimize strain on the grid and help utilities manage vehicle charging load.

"We are doing some preliminary work," said Dan Reicher, Google's director of Climate Change and Energy Initiatives. "We have begun some work on smart charging of electric vehicles and how you would integrate large numbers of electric vehicles into the grid successfully."

"We have done a little bit of work on the software side looking at how you would write a computer code to manage this sort of charging infrastructure," he said in an interview on the sidelines of an industry conference.

Google, known for its Internet search engine, in 2007 announced a program to test Toyota Prius and Ford Escape gasoline-electric hybrid vehicles that were converted to rechargeable plug-in hybrids that run mostly on electricity.

One of the experimental technologies that was being tested by the Web search giant allowed parked plug-ins to transfer stored energy back to the electric grid, opening a potential back-up source of power for the system in peak hours.

Google has pushed ahead in addressing climate change issues as a philanthropic effort through its Google.org arm.

Reicher said Google has been testing its fleet of plug-in hybrids "pretty intensely" for the last couple of years.

"One of the great things about plug-ins is this great opportunity for the first time to finally have a storage technology," he said.

Reicher said the company is trying to figure out how to manage the impact of having millions of future electric vehicle owners plugging in their vehicles at the same time.

"We got to be careful how we manage these things," he said. "On a hot day in July when 5 million Californians come home, you don't want them all plugging in at the same moment."

Reicher laid out a scenario in which power utilities, during times of high demand, could turn the charging of electric vehicles on or off. The owners of these vehicles, having agreed to such an arrangement, would get a credit from the utility in return.

"The grid operators may well be indifferent to either putting 500 megawatts of new generation on or taking 500 megawatts off," he said. "The beauty of plug-in vehicles is that with the right software behind them, you could manage their charging."
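The charging software Reicher describes isn't detailed in the article, but the basic idea behind it, admitting some vehicles now, deferring the rest until the grid has headroom, and crediting the deferred owners, can be sketched in a few lines. Everything below (function name, kilowatt figures) is an illustrative assumption, not Google's actual code:

```python
# Toy demand-response charging scheduler: stagger plug-in vehicle charging
# so the combined load never exceeds what the grid operator allows right now.
# All figures are illustrative, not from the article.

def schedule_charging(vehicles_kw, grid_headroom_kw):
    """Greedily admit vehicles until this hour's headroom is used up.

    vehicles_kw: charging demand (kW) of each plugged-in vehicle.
    grid_headroom_kw: spare capacity the utility offers for this window.
    Returns (charging_now, deferred) lists of vehicle demands.
    """
    charging_now, deferred = [], []
    used = 0.0
    for demand in sorted(vehicles_kw):  # admit the smallest loads first
        if used + demand <= grid_headroom_kw:
            charging_now.append(demand)
            used += demand
        else:
            deferred.append(demand)  # waits for off-peak; owner gets a credit
    return charging_now, deferred

# Five cars come home at once, but the local feeder only has 10 kW spare.
now, later = schedule_charging([3.3, 6.6, 3.3, 7.2, 1.4], 10.0)
print(now)    # vehicles charging immediately
print(later)  # vehicles deferred until demand falls
```

A real utility scheduler would also weigh departure times and tariff signals, but the core constraint, never letting aggregate charging exceed the grid's headroom, is the same.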

Apart from plug-in hybrids, Google also is working on other green technologies such as developing its own new mirror technology that could reduce the cost of building solar thermal plants by a quarter or more, and looking at gas turbines that would run on solar power rather than natural gas.

The often-quirky company also said in late 2007 that it would invest in companies and do research of its own to produce affordable renewable energy — at a price less than burning coal — within a few years, casting the move as a philanthropic effort to address climate change.

Related News

UK must start construction of large-scale storage or fail to meet net zero targets.

UK hydrogen storage caverns could provide long-duration, low-carbon electricity balancing, storing surplus wind and solar power as green hydrogen in salt formations to enhance grid reliability and energy security on the way to the 2035 and 2050 net zero targets.

 

Key Points

Salt caverns storing green hydrogen could balance wind and solar output, stabilizing a low-carbon UK grid.

✅ Stores surplus wind and solar as green hydrogen in salt caverns

✅ Enables long-duration, low-carbon grid balancing and security

✅ Complements wind and solar; reduces dependence on flexible CCS

 

The U.K. government must kick-start the construction of large-scale hydrogen storage facilities if it is to meet its pledge that all electricity will come from low-carbon electricity sources by 2035 and reach legally binding net zero targets by 2050, according to a report by the Royal Society.

The report, "Large-scale electricity storage," published Sep. 8, examines a wide variety of ways to store surplus wind and solar generated electricity—including green hydrogen, advanced compressed air energy storage (ACAES), ammonia, and heat—which will be needed when Great Britain's electricity generation is dominated by volatile wind and solar power.

It concludes that large scale electricity storage is essential to mitigate variations in wind and sunshine, particularly long-term variations in the wind, and to keep the nation's lights on. Storing most of the surplus as hydrogen, in salt caverns, would be the cheapest way of doing this.

The report, based on 37 years of weather data, finds that in 2050 up to 100 terawatt-hours (TWh) of storage will be needed, capable of meeting around a quarter of the U.K.'s current annual electricity demand. This would be equivalent to more than 5,000 Dinorwig pumped-hydroelectric plants. Storage on this scale, which would require up to 90 clusters of 10 caverns, is not possible with batteries or pumped hydro.
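A quick back-of-the-envelope check shows the report's figures hang together; the roughly 9 GWh assumed here for Dinorwig's storage capacity is a commonly cited value, not a figure from the article:

```python
# Sanity-check the Royal Society report's storage figures.
storage_twh = 100                       # up to 100 TWh of storage by 2050
clusters, caverns_per_cluster = 90, 10
caverns = clusters * caverns_per_cluster           # total salt caverns
per_cavern_gwh = storage_twh * 1_000 / caverns     # implied size of each

dinorwig_gwh = 9   # assumed capacity of the Dinorwig pumped-hydro plant
dinorwig_equiv = storage_twh * 1_000 / dinorwig_gwh

print(caverns)                 # 900
print(round(per_cavern_gwh))   # 111 GWh per cavern
print(round(dinorwig_equiv))   # 11111 -- comfortably "more than 5,000"
```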

Storage requirements on this scale are not currently foreseen by the government. Work on constructing these caverns should begin immediately if the government is to have any chance of meeting its net zero targets, the report states.

Sir Chris Llewellyn Smith FRS, lead author of the report, said, "The need for long-term storage has been seriously underestimated. Demand for electricity is expected to double by 2050 with the electrification of heat, transport, and industrial processing, as well as increases in the use of air conditioning, economic growth, and changes in population.

"It will mainly be met by wind and solar generation. They are the cheapest forms of low-carbon electricity generation, but are volatile—wind varies on a decadal timescale, so will have to be complemented by large scale supply from energy storage or other sources."

The only other large-scale low-carbon sources are nuclear power, gas with carbon capture and storage (CCS), and bioenergy with or without CCS (BECCS). While nuclear and gas with CCS are expected to play a role, they are expensive, especially if operated flexibly.

Sir Peter Bruce, vice president of the Royal Society, said, "Ensuring our future electricity supply remains reliable and resilient will be crucial for our future prosperity and well-being. An electricity system with significant wind and solar generation is likely to offer the lowest cost electricity but it is essential to have large-scale energy stores that can be accessed quickly to ensure Great Britain's energy security and sovereignty."

Combining hydrogen with ACAES, or other forms of storage that are more efficient than hydrogen, could lower the average cost of electricity overall, and would lower the required level of wind power and solar supply.

There are currently three hydrogen storage caverns in the U.K., which have been in use since 1972, and the British Geological Survey has identified the geology for ample storage capacity in Cheshire, Wessex and East Yorkshire. Appropriate, novel business models and market structures will be needed to encourage construction of the large number of additional caverns that will be needed, the report says.

Sir Chris observes that, although nuclear, hydro and other sources are likely to play a role, Britain could in principle be powered solely by wind and solar, supported by hydrogen and by some small-scale storage, provided for example by batteries, that can respond rapidly to stabilize the grid. While the cost of electricity would be higher than in the last decade, it should be much lower than in 2022, he adds.

 

Related News


National Grid and SSE to use electrical transformers to heat homes

Grid transformer waste-heat recovery would turn substations into neighbourhood boilers, supplying district heating via heat networks and helping National Grid and SSE cut emissions, improve energy efficiency and advance net zero decarbonisation.

 

Key Points

Grid Transformer Waste Heat Recovery captures substation heat for district heating, cutting emissions and gas use.

✅ Captures waste heat from National Grid transformers

✅ Feeds SSE district heat networks for nearby homes

✅ Cuts carbon, improves efficiency, aligns with net zero

 

Thousands of homes could soon be warmed by the heat from giant electricity grid transformers for the first time as part of new plans to harness “waste heat” and cut carbon emissions from home heating.

Trials are due to begin on how to capture the heat generated by transmission network transformers, owned by National Grid, to provide home heating for households connected to district heating networks operated by SSE.

Currently, hot air is vented from the giant substations to help cool the transformers that help to control the electricity running through National Grid’s high-voltage transmission lines.

However, if the trial succeeds, about 1,300 National Grid substations could soon act as neighbourhood “boilers”, piping water heated by the substations into nearby heating networks, and on into the thousands of homes that use SSE’s services.

“Electric power transformers generate huge amounts of heat as a byproduct when electricity flows through them. At the moment, this heat is just vented directly into the atmosphere and wasted,” said Nathan Sanders, the managing director of SSE Energy Solutions.

“This groundbreaking project aims to capture that waste heat and effectively turn transformers into community ‘boilers’ that serve local heat networks with a low- or even zero-carbon alternative to fossil-fuel-powered heat sources such as gas boilers,” Sanders added.

Alexander Yanushkevich, National Grid’s innovation manager, said the scheme was “essential to achieve net zero” and a “great example of how, taking a whole-system approach, the UK can lead the way in helping accelerate decarbonisation”.

The energy companies believe the scheme could initially reduce heat network carbon emissions by more than 40% compared with fossil gas systems. Once the UK’s electricity system is zero carbon, the heating solution could play a big role in helping the UK meet its climate targets.

The first trials have begun at National Grid’s specially designed testing site at Deeside in Wales to establish how the waste heat could be used in district heating networks. Once complete, the intellectual property will be shared with smaller regional electricity network owners, which may choose to roll out schemes in their areas.

Tim O’Reilly, the head of strategy at National Grid, said: “We have 1,300 transmission transformers, but there’s no reason why you couldn’t apply this technology to smaller electricity network transformers, too.”

Once the trials are complete, National Grid and SSE will have a better idea of how many homes could be warmed using the heat generated by electricity network substations, O’Reilly said.

“The heavier the [electricity] load, which typically reaches a peak at around teatime, the more heat energy the transformer will be able to produce. So it fits quite nicely to when people require heat in the evenings,” he added.

Other projects designed to capture waste heat to use in district heating schemes include trapping the heat generated on the Northern line of London’s tube network to warm homes in Islington, and harnessing the geothermal heat from disused mines for district heating networks in Durham.

Only between 2% and 3% of the UK is connected to a district heating network, but more networks are expected to emerge in the years ahead as the UK tries to reduce the carbon emissions from homes.

 

Related News


Electricity restored to 75 percent of customers in Puerto Rico

Puerto Rico's power restoration advances as PREPA, FEMA and the Army Corps rebuild the grid after Hurricane Maria; 75% of customers have power, amid a privatization debate, fallout from the Whitefish contract and a continuing island-wide boil-water advisory.

 

Key Points

Effort to rebuild Puerto Rico's grid and restore power, led by PREPA with FEMA support after Hurricane Maria.

✅ 75.35% of customers have power; 90.8% grid generating

✅ PREPA, FEMA, and Army Corps lead restoration work

✅ Privatization debate, Whitefish contract scrutiny

 

Nearly six months after Hurricane Maria devastated Puerto Rico, electricity has been restored to 75 percent of customers on the island, according to its utility company.

The Puerto Rico Electric Power Authority said Sunday that 75.35 percent of customers now have electricity. It added that 90.8 percent of the electrical grid, already anemic even before the Sept. 20 storm barrelled through the island, is generating power again.

Thousands of power restoration personnel from the Puerto Rico Electric Power Authority (PREPA), the Federal Emergency Management Agency (FEMA), industry workers from the mainland, and the Army Corps of Engineers have made marked progress in recent weeks.

Despite this, 65 people remain in shelters, and an island-wide boil-water advisory is still in effect even though almost 100 percent of Puerto Ricans have access to drinking water, local government records show.

The issue of power became controversial after Puerto Rico Gov. Ricardo Rossello recently announced plans to privatize PREPA, after the utility chose to award a $300 million power restoration contract to Whitefish, a Montana-based company with only a few staffers, rather than put it through the mutual-aid network of public utilities usually called upon to coordinate power restoration after major disasters.

That contract was nixed and Whitefish stopped working in Puerto Rico after FEMA raised "significant concerns" over the procurement process.

 

Related News


Why the promise of nuclear fusion is no longer a pipe dream

ITER aims to advance tokamak magnetic confinement, heating deuterium-tritium plasma with superconducting magnets and targeting net energy gain, tritium breeding and steam-turbine power generation, complementing laser inertial-confinement milestones on the path to grid-scale electricity after its planned 2025 startup.

 

Key Points

ITER Nuclear Fusion is a tokamak project confining D-T plasma with magnets to achieve net energy gain and clean power.

✅ Tokamak magnetic confinement with high-temp superconducting coils

✅ Deuterium-tritium fuel cycle with on-site tritium breeding

✅ Targets net energy gain and grid-scale, low-carbon electricity

 

It sounds like the stuff of dreams: a virtually limitless source of energy that doesn’t produce greenhouse gases or radioactive waste. That’s the promise of nuclear fusion, often described as the holy grail of clean energy by proponents, which for decades has been nothing more than a fantasy due to insurmountable technical challenges. But things are heating up in what has turned into a race to create what amounts to an artificial sun here on Earth, one that can provide power for our kettles, cars and light bulbs.

Today’s nuclear power plants create electricity through nuclear fission, in which atoms are split. Nuclear fusion, however, involves combining atomic nuclei to release energy. It’s the same reaction that’s taking place at the Sun’s core. But overcoming the natural repulsion between atomic nuclei and maintaining the right conditions for fusion to occur isn’t straightforward. And doing so in a way that produces more energy than the reaction consumes has been beyond the grasp of the finest minds in physics for decades.

But perhaps not for much longer. Some major technical challenges have been overcome in the past few years and governments around the world have been pouring money into fusion power research as part of a broader green industrial revolution under way in several regions. There are also over 20 private ventures in the UK, US, Europe, China and Australia vying to be the first to make fusion energy production a reality.

“People are saying, ‘If it really is the ultimate solution, let’s find out whether it works or not,’” says Dr Tim Luce, head of science and operation at the International Thermonuclear Experimental Reactor (ITER), being built in southeast France. ITER is the biggest throw of the fusion dice yet.

Its $22bn (£15.9bn) build cost is being met by the governments of two-thirds of the world’s population, including the EU, the US, China and Russia, and when it’s fired up in 2025 it’ll be the world’s largest fusion reactor. If it works, ITER will transform fusion power from the stuff of dreams into a viable energy source.


Constructing a nuclear fusion reactor
ITER will be a tokamak reactor – thought to be the best hope for fusion power. Inside a tokamak, a gas, often a hydrogen isotope called deuterium, is subjected to intense heat and pressure, forcing electrons out of the atoms. This creates a plasma – a superheated, ionised gas – that has to be contained by intense magnetic fields.

The containment is vital, as no material on Earth could withstand the intense heat (100,000,000°C and above) that the plasma has to reach so that fusion can begin. It’s close to 10 times the heat at the Sun’s core, and temperatures like that are needed in a tokamak because the gravitational pressure within the Sun can’t be recreated.

When atomic nuclei do start to fuse, vast amounts of energy are released. While the experimental reactors currently in operation release that energy as heat, in a fusion power plant the heat would be used to produce steam to drive turbines and generate electricity.

Tokamaks aren’t the only fusion reactors being tried. Another type of reactor uses lasers to heat and compress a hydrogen fuel to initiate fusion. In August 2021, one such device at the National Ignition Facility, at the Lawrence Livermore National Laboratory in California, generated 1.35 megajoules of energy. This record-breaking figure brings fusion power a step closer to net energy gain, but most hopes are still pinned on tokamak reactors rather than lasers.

In June 2021, China’s Experimental Advanced Superconducting Tokamak (EAST) reactor maintained a plasma for 101 seconds at 120,000,000°C. Before that, the record was 20 seconds. Ultimately, a fusion reactor would need to sustain the plasma indefinitely – or at least for eight-hour ‘pulses’ during periods of peak electricity demand.

A real game-changer for tokamaks has been the magnets used to produce the magnetic field. “We know how to make magnets that generate a very high magnetic field from copper or other kinds of metal, but you would pay a fortune for the electricity. It wouldn’t be a net energy gain from the plant,” says Luce.


One route for nuclear fusion is to use atoms of deuterium and tritium, both isotopes of hydrogen. They fuse under incredible heat and pressure, and the resulting products release energy as heat


The solution is to use high-temperature, superconducting magnets made from superconducting wire, or ‘tape’, that has no electrical resistance. These magnets can create intense magnetic fields and don’t lose energy as heat.

“High temperature superconductivity has been known about for 35 years. But the manufacturing capability to make tape in the lengths that would be required to make a reasonable fusion coil has just recently been developed,” says Luce. One of ITER’s magnets, the central solenoid, will produce a field of 13 tesla – 280,000 times Earth’s magnetic field.

The inner walls of ITER’s vacuum vessel, where the fusion will occur, will be lined with beryllium, a metal that won’t contaminate the plasma much if the two touch. At the bottom is the divertor, which will keep the temperature inside the reactor under control.

“The heat load on the divertor can be as large as in a rocket nozzle,” says Luce. “Rocket nozzles work because you can get into orbit within minutes and in space it’s really cold.” In a fusion reactor, a divertor would need to withstand this heat indefinitely and at ITER they’ll be testing one made out of tungsten.

Meanwhile, in the US, the National Spherical Torus Experiment – Upgrade (NSTX-U) fusion reactor will be fired up in the autumn of 2022. One of its priorities will be to see whether lining the reactor with lithium helps to keep the plasma stable.


Choosing a fuel
Instead of just using deuterium as the fusion fuel, ITER will use deuterium mixed with tritium, another hydrogen isotope. The deuterium-tritium blend offers the best chance of getting significantly more power out than is put in. Proponents of fusion power say one reason the technology is safe is that the fuel needs to be constantly fed into the reactor to keep fusion happening, making a runaway reaction impossible.

Deuterium can be extracted from seawater, so there’s a virtually limitless supply of it. But only 20kg of tritium are thought to exist worldwide, so fusion power plants will have to produce it (ITER will develop technology to ‘breed’ tritium). While some radioactive waste will be produced in a fusion plant, it’ll have a lifetime of around 100 years, rather than the thousands of years from fission.

At the time of writing in September, researchers at the Joint European Torus (JET) fusion reactor in Oxfordshire were due to start their deuterium-tritium fusion reactions. “JET will help ITER prepare a choice of machine parameters to optimise the fusion power,” says Dr Joelle Mailloux, one of the scientific programme leaders at JET. These parameters will include finding the best combination of deuterium and tritium, and establishing how the current is increased in the magnets before fusion starts.

The groundwork laid down at JET should accelerate ITER’s efforts to accomplish net energy gain. ITER will produce ‘first plasma’ in December 2025 and be cranked up to full power over the following decade. Its plasma temperature will reach 150,000,000°C and its target is to produce 500 megawatts of fusion power for every 50 megawatts of input heating power.
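Those two figures define ITER's target fusion gain, conventionally written Q: the ratio of fusion power produced to heating power supplied. A gain above 1 means net energy from the plasma:

```python
# ITER's target fusion gain Q, from the figures quoted in the article.
fusion_power_mw = 500   # target fusion power output
heating_power_mw = 50   # input heating power
q = fusion_power_mw / heating_power_mw
print(q)  # 10.0 -- ten times more fusion power out than heating power in
```

For comparison, no tokamak has yet sustained Q above 1, which is why a demonstrated gain of 10 would settle the scientific question.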

“If ITER is successful, it’ll eliminate most, if not all, doubts about the science and liberate money for technology development,” says Luce. That technology development will be demonstration fusion power plants that actually produce electricity. “ITER is opening the door and saying, yeah, this works – the science is there.”

 

Related News


Jolting the brain's circuits with electricity is moving from radical to almost mainstream therapy

Brain stimulation is transforming neuromodulation, from TMS and DBS to closed-loop devices, targeting neural circuits in addiction, depression, Parkinson's disease, epilepsy and chronic pain, powered by advanced imaging, AI analytics and the NIH BRAIN Initiative.

 

Key Points

Brain stimulation uses electrical pulses to modulate neural circuits, easing symptoms in depression, Parkinson's disease, and epilepsy.

✅ Noninvasive TMS and invasive DBS modulate specific brain circuits

✅ Closed-loop systems adapt stimulation via real-time biomarker detection

✅ Emerging uses: addiction, depression, Parkinson's, epilepsy, chronic pain

 

In June 2015, biology professor Colleen Hanlon went to a conference on drug dependence. As she met other researchers and wandered around a glitzy Phoenix resort’s conference rooms to learn about the latest work on therapies for drug and alcohol use disorders, she realized that out of the 730 posters, there were only two on brain stimulation as a potential treatment for addiction — both from her own lab at Wake Forest School of Medicine.

Just four years later, she would lead 76 researchers on four continents in writing a consensus article about brain stimulation as an innovative tool for addiction. And in 2020, the Food and Drug Administration approved a transcranial magnetic stimulation device to help patients quit smoking, a milestone for substance use disorders.

Brain stimulation is booming. Hanlon can attend entire conferences devoted to the study of what electrical currents do to the intricate networks of highways and backroads that make up the brain’s circuitry. This expanding field of research is slowly revealing truths of the brain: how it works, how it malfunctions, and how electrical impulses, precisely targeted and controlled, might be used to treat psychiatric and neurological disorders.

In the last half-dozen years, researchers have launched investigations into how different forms of neuromodulation affect addiction, depression, loss-of-control eating, tremor, chronic pain, obsessive compulsive disorder, Parkinson’s disease, epilepsy, and more. Early studies have shown subtle electrical jolts to certain brain regions could disrupt circuit abnormalities — the miscommunications — that are thought to underlie many brain diseases, and help ease symptoms that persist despite conventional treatments.

The National Institutes of Health’s massive BRAIN Initiative put circuits front and center, distributing $2.4 billion to researchers since 2013 to devise and use new tools to observe interactions between brain cells and circuits. That, in turn, has kindled interest from the private sector. Among the advances that have enhanced our understanding of how distant parts of the brain talk with one another are new imaging technology and the use of machine learning to interpret complex brain signals and analyze what happens when circuits go haywire.

Still, the field is in its infancy, and even therapies that have been approved for use in patients with, for example, Parkinson’s disease or epilepsy help only a minority of patients. “If it was the Bible, it would be the first chapter of Genesis,” said Michael Okun, executive director of the Norman Fixel Institute for Neurological Diseases at University of Florida Health.

As brain stimulation evolves, researchers face daunting hurdles, and not just scientific ones. How will brain stimulation become accessible to all the patients who need it, given how expensive and invasive some treatments are? Proving to the FDA that brain stimulation works, and does so safely, is complicated and expensive. Even with a swell of scientific momentum and an influx of funding, the agency has so far cleared brain stimulation for only a handful of conditions. Persuading insurers to cover the treatments is another challenge altogether. And outside the lab, researchers are debating nascent issues, such as the ethics of mind control, the privacy of a person’s brain data, and how best to involve patients in the study of the human brain’s far-flung regions.

Neurologist Martha Morrell is optimistic about the future of brain stimulation. She remembers the shocked reactions of her colleagues in 2004 when she left full-time teaching at Stanford (she still has a faculty appointment as a clinical professor of neurology) to direct clinical trials at NeuroPace, then a young company making neurostimulator systems to potentially treat epilepsy patients.

“When I started working on this, everybody thought I was insane,” said Morrell. Nearly 20 years in, she sees a parallel between the story of jolting the brain’s circuitry and that of early implantable cardiac devices, such as pacemakers and defibrillators, which initially “were used as a last option, where all other medications have failed.” Now, “the field of cardiology is very comfortable incorporating electrical therapy, device therapy, into routine care. And I think that’s really where we’re going with neurology as well.”


Reaching a ‘slope of enlightenment’
Parkinson’s is, in some ways, an elder in the world of modern brain stimulation, and it shows the potential as well as the limitations of the technology. Surgeons have been implanting electrodes deep in the brains of Parkinson’s patients since the late 1990s, and in people with more advanced disease since the early 2000s.

In that time, it’s gone through the “hype cycle,” said Okun, the national medical adviser to the Parkinson’s Foundation since 2006. Feverish excitement and overinflated expectations have given way to reality, bringing scientists to a “slope of enlightenment,” he said. They have found deep brain stimulation to be very helpful for some patients with Parkinson’s, rendering them almost symptom-free by calming the shaking and tremors that medications couldn’t. But it doesn’t stop the progression of the disease, or resolve some of the problems patients with advanced Parkinson’s have walking, talking, and thinking.

In 2015, the same year Hanlon found only her lab’s research on brain stimulation at the addiction conference, Kevin O’Neill watched one finger on his left hand start doing something “funky.” One finger twitched, then two, then his left arm started tingling and a feeling appeared in his right leg, like it was about to shake but wouldn’t — a tremor.

“I was assuming it was anxiety,” O’Neill, 62, told STAT. He had struggled with anxiety before, and he had endured a stressful year: a separation, selling his home, starting a new job at a law firm in California’s Bay Area. But a year after his symptoms first began, O’Neill was diagnosed with Parkinson’s.


Doctors prescribed him pills that promote the release of dopamine, to offset the death of brain cells that produce this messenger molecule in circuits that control movement. But he took them infrequently because he worried about insomnia as a side effect. Walking became difficult — “I had to kind of think my left leg into moving” — and the labor lawyer found it hard to give presentations and travel to clients’ offices.

A former actor with an outgoing personality, he developed social anxiety and didn’t tell his bosses about his diagnosis for three years, and wouldn’t have, if not for two workdays in summer 2018 when his tremors were severe and obvious.

O’Neill’s tremors are all but gone since he began deep brain stimulation last May, though his left arm shakes when he feels tense.

It was during that period that he learned about deep brain stimulation, at a support group for Parkinson’s patients. “I thought, ‘I will never let anybody fuss with my brain. I’m not going to be a candidate for that,’” he recalled. “It felt like mad scientist science fiction. Like, are you kidding me?”

But over time, the idea became less radical, as O’Neill spoke to DBS patients and doctors and did his own research, and as his symptoms worsened. He decided to go for it. Last May, doctors at the University of California, San Francisco surgically placed three metal leads into his brain, connected by thin cords to two implants in his chest, just near the clavicles. A month later, he went into the lab and researchers turned the device on.

“That was a revelation that day,” he said. “You immediately — literally, immediately — feel the efficacy of these things. … You go from fully symptomatic to non-symptomatic in seconds.”

When his nephew pulled up to the curb to pick him up, O’Neill started dancing, and his nephew teared up. The following day, O’Neill couldn’t wait to get out of bed and go out, even if it was just to pick up his car from the repair shop.

In the year since, O’Neill’s walking has gone from “awkward and painful” to much improved, and his tremors are all but gone. When he is extra frazzled, like while renovating and moving into his new house overlooking the hills of Marin County, he feels tense and his left arm shakes and he worries the DBS is “failing,” but generally he returns to a comfortable, tremor-free baseline.

O’Neill worried about the effects of DBS wearing off but, for now, he can think “in terms of decades, instead of years or months,” he recalled his neurologist telling him. “The fact that I can put away that worry was the big thing.”

He’s just one patient, though. The brain has regions that are mostly uniform across all people. The functions of those regions also tend to be the same. But researchers suspect that how brain regions interact with one another — who mingles with whom, and what conversation they have — and how those mixes and matches cause complex diseases varies from person to person. So brain stimulation looks different for each patient.

Each case of Parkinson’s manifests slightly differently, a lesson that applies to many other diseases as well, said Okun, who organized the nine-year-old Deep Brain Stimulation Think Tank, where leading researchers convene, review papers, and publish reports on the field’s progress each year.

“I think we’re all collectively coming to the realization that these diseases are not one-size-fits-all,” he said. “We have to really begin to rethink the entire infrastructure, the schema, the framework we start with.”

Brain stimulation is also used frequently to treat people with common forms of epilepsy, and has reduced the number of seizures or improved other symptoms in many patients. Researchers have also been able to collect high-quality data about what happens in the brain during a seizure — including identifying differences between epilepsy types. Still, only about 15% of patients are symptom-free after treatment, according to Robert Gross, a neurosurgery professor at Emory University in Atlanta.

“And that’s a critical difference for people with epilepsy. Because people who are symptom-free can drive,” which means they can get to a job in a place like Georgia, where there is little public transit, he said. So taking neuromodulation “from good to great,” is imperative, Gross said.


Renaissance for an ancient idea
Recent advances are bringing about what Gross sees as “almost a renaissance period” for brain stimulation, though the ideas that undergird the technology are millennia old. Neuromodulation goes back to at least ancient Egypt and Greece, when electrical shocks from a ray, called the “torpedo fish,” were recommended as a treatment for headache and gout. Over centuries, the fish zaps gave way to doctors burning holes into the brains of patients. Those “lesions” worked, somehow, but nobody could explain why they alleviated some patients’ symptoms, Okun said.

Perhaps the clearest predecessor to today’s technology is electroconvulsive therapy (ECT), which in a rudimentary and dangerous way began being used on patients with depression roughly 100 years ago, said Nolan Williams, director of the Brain Stimulation Lab at Stanford University.

More modern forms of brain stimulation came about in the United States in the mid-20th century. A common, noninvasive approach is transcranial magnetic stimulation, which involves placing an electromagnetic coil on the scalp to transmit a current into the outermost layer of the brain. Vagus nerve stimulation (VNS), used to treat epilepsy, zaps a nerve that contributes to some seizures.

The most invasive option, deep brain stimulation, involves implanting in the skull a device attached to electrodes embedded in deep brain regions, such as the amygdala, that can’t be reached with other stimulation devices. In 1997, the FDA gave its first green light to deep brain stimulation as a treatment for tremor, and then for Parkinson’s in 2002 and the movement disorder dystonia in 2003.

Even as these treatments were cleared for patients, though, what was happening in the brain remained elusive. But advanced imaging tools now let researchers peer into the brain and map out networks — a recent breakthrough that researchers say has propelled the field of brain stimulation forward as much as increased funding has. Imaging of both human brains and animal models has helped researchers identify the neuroanatomy of diseases, target brain regions with more specificity, and watch what happens after electrical stimulation.

Another key step has been the shift from open-loop stimulation — a constant stream of electricity — to closed-loop stimulation that delivers targeted, brief jolts in response to a symptom trigger. To make use of this technology, labs need people to develop artificial intelligence tools that can interpret the large data sets a brain implant generates, and to tailor devices based on that information.
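The open-loop versus closed-loop distinction can be sketched in a few lines of code. This is an illustrative toy, not any device's actual firmware: `detect_biomarker` is a hypothetical stand-in for whatever symptom signature a real implant is programmed to recognize, and the threshold is arbitrary.

```python
import numpy as np

def detect_biomarker(window: np.ndarray, threshold: float = 2.0) -> bool:
    """Flag a window of recorded signal whose mean power crosses a threshold."""
    return float(np.mean(window ** 2)) > threshold

def closed_loop_step(window: np.ndarray, stimulate) -> bool:
    """One controller cycle: stimulate only when the biomarker fires."""
    if detect_biomarker(window):
        stimulate()
        return True
    return False

# An open-loop device, by contrast, would call stimulate() on every cycle,
# regardless of what the recording shows.
quiet = np.zeros(100)                                # low-power signal
active = np.random.default_rng(0).normal(0, 3, 100)  # high-power signal

pulses = [closed_loop_step(w, stimulate=lambda: None) for w in (quiet, active)]
print(pulses)  # [False, True]: a jolt is delivered only on the trigger
```

The hard part in practice is the first function: learning, per patient, which recorded signature actually predicts a symptom — which is why the labs need data scientists.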

“We’ve needed to learn how to be data scientists,” Morrell said.

Affinity groups, like the NIH-funded Open Mind Consortium, have formed to fill that gap. Philip Starr, a neurosurgeon and developer of implantable brain devices at the University of California at San Francisco Health system, leads the effort to teach physicians how to program closed-loop devices, and works to create ethical standards for their use. “There’s been extraordinary innovation after 20 years of no innovation,” he said.

The BRAIN Initiative has been critical, several researchers told STAT. “It’s been a godsend to us,” Gross said. The NIH’s Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative was launched in 2013 during the Obama administration with a $50 million budget. BRAIN now spends over $500 million per year. Since its creation, BRAIN has given over 1,100 awards, according to NIH data. Part of the initiative’s purpose is to pair up researchers with medical technology companies that provide human-grade stimulation devices to the investigators. Nearly three dozen projects have been funded through the investigator-devicemaker partnership program and through one focused on new implantable devices for first-in-human use, according to Nick Langhals, who leads work on neurological disorders at the initiative.

The more BRAIN invests, the more research is spawned. “We learn more about what circuits are involved … which then feeds back into new and more innovative projects,” he said.

Many BRAIN projects are still in early stages, finishing enrollment or small feasibility studies, Langhals said. Over the next couple of years, scientists will begin to see some of the fruits of their labor, which could lead to larger clinical trials, or to companies developing more refined brain stimulation implants, Langhals said.

Money from the National Institute of Mental Health, as well as the NIH’s Helping to End Addiction Long-term (HEAL) initiative, has similarly sweetened the appeal of brain stimulation, both for researchers and industry. “A critical mass” of companies interested in neuromodulation technology has mushroomed where, for two decades, just a handful of companies stood, Starr said.

More and more, pharmaceutical and digital health companies are looking at brain stimulation devices “as possible products for their future,” said Linda Carpenter, director of the Butler Hospital TMS Clinic and Neuromodulation Research Facility.


‘Psychiatry 3.0’
The experience with using brain stimulation to stop tremors and seizures inspired psychiatrists to begin exploring its use as a potentially powerful therapy for healing, or even getting ahead of, mental illness.

In 2008, the FDA approved TMS for patients with major depression who had tried, and not gotten relief from, drug therapy. “That kind of opened the door for all of us,” said Hanlon, a professor and researcher at the Center for Research on Substance Use and Addiction at Wake Forest School of Medicine. The last decade saw a surge of research into how TMS could be used to reset malfunctioning brain circuits involved in anxiety, depression, obsessive-compulsive disorder, and other conditions.

“We’re certainly entering into what a lot of people are calling psychiatry 3.0,” Stanford’s Williams said. “Whereas the first iteration was Freud and all that business, the second one was the psychopharmacology boom, and this third one is this bit around circuits and stimulation.”

Drugs alleviate some patients’ symptoms while simultaneously failing to help many others, but psychopharmacology clearly showed “there’s definitely a biology to this problem,” Williams said — a biology that in some cases may be more amenable to a brain stimulation.

The exact mechanics of what happens between cells when brain circuits … well, short-circuit, is unclear. Researchers are getting closer to finding biomarkers that warn of an incoming depressive episode, or wave of anxiety, or loss of impulse control. Those brain signatures could be different for every patient. If researchers can find molecular biomarkers for psychiatric disorders — and find ways to preempt those symptoms by shocking particular brain regions — that would reshape the field, Williams said.

Not only would disease-specific markers help clinicians diagnose people, but they could help chip away at the stigma that paints mental illness as a personal or moral failing instead of a disease. That’s what happened for epilepsy in the 1960s, when scientific findings nudged the general public toward a deeper understanding of why seizures happen, and it’s “the same trajectory” Williams said he sees for depression.

His research at the Stanford lab also includes work on suicide, and obsessive-compulsive disorder, which the FDA said in 2018 could be treated using noninvasive TMS. Williams considers brain stimulation, with its instantaneity, to be a potential breakthrough for urgent psychiatric situations. Doctors know what to do when a patient is rushed into the emergency room with a heart attack or a stroke, but there is no immediate treatment for psychiatric emergencies, he said. Williams wonders: What if, in the future, a suicidal patient could receive TMS in the emergency room and be quickly pulled out of their depressive mental spiral?

Researchers are also actively investigating the brain biology of addiction. In August 2020, the FDA approved TMS for smoking cessation, the first such OK for a substance use disorder, which is “really exciting,” Hanlon said. Although there is some nuance when comparing substance use disorders, a primal mechanism generally defines addiction: the eternal competition between “top-down” executive control functions and “bottom-up” cravings. It’s the same process that is at work when one is deciding whether to eat another cookie or abstain — just exacerbated.

Hanlon is trying to figure out if the stop and go circuits are in the same place for all people, and whether neuromodulation should be used to strengthen top-down control or weaken bottom-up cravings. Just as brain stimulation can be used to disrupt cellular misfiring, it could also be a tool for reinforcing helpful brain functions, or for giving the addicted brain what it wants in order to curb substance use.

Evidence suggests many people with schizophrenia smoke cigarettes (a leading cause of early death for this population) because nicotine reduces the “hyperconnectivity” that characterizes the brains of people with the disease, said Heather Ward, a research fellow at Boston’s Beth Israel Deaconess Medical Center. She suspects TMS could mimic that effect, and therefore reduce cravings and some symptoms of the disease, and she hopes to prove that in a pilot study that is now enrolling patients.

If the scientific evidence proves out, clinicians say brain stimulation could be used alongside behavioral therapy and drug-based therapy to treat substance use disorders. “In the end, we’re going to need all three to help people stay sober,” Hanlon said. “We’re adding another tool to the physician’s toolbox.”

Decoding the mysteries of pain
A favorable outcome to the ongoing research, one that would fling the doors to brain stimulation wide open for patients with myriad disorders, is far from guaranteed. Chronic pain researchers know that firsthand.

Chronic pain, among the most mysterious and hard-to-study medical phenomena, was the first use for which the FDA approved deep brain stimulation, said Prasad Shirvalkar, an assistant professor of anesthesiology at UCSF. But when studies didn’t pan out after a year, the FDA retracted its approval.

Shirvalkar is working with Starr and neurosurgeon Edward Chang on a profoundly complex problem: “decoding pain in the brain states, which has never been done,” as Starr told STAT.

Part of the difficulty of studying pain is that there is no objective way to measure it. Much of what we know about pain is from rudimentary surveys that ask patients to rate how much they’re hurting, on a scale from zero to 10.

Using implantable brain stimulation devices, the researchers ask patients for a 0-to-10 rating of their pain while recording up-and-down cycles of activity in the brain. They then use machine learning to compare the two streams of information and see what brain activity correlates with a patient’s subjective pain experience. Implantable devices let researchers collect data over weeks and months, instead of basing findings on small snippets of information, allowing for a much richer analysis.
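The pairing of the two streams can be pictured as a simple correlation analysis. The sketch below uses fabricated numbers purely for illustration, not the researchers' actual pipeline: `neural_power` stands in for a feature extracted from weeks of implant recordings, and `pain_scores` for the patient's 0-to-10 self-reports over the same period.

```python
import numpy as np

# Fabricated data: 200 paired observations of a neural feature and a
# simultaneous 0-to-10 pain self-report (noisy, clipped to the scale).
rng = np.random.default_rng(42)
neural_power = rng.uniform(0.0, 1.0, 200)
pain_scores = np.clip(10 * neural_power + rng.normal(0.0, 1.0, 200), 0, 10)

# Pearson correlation between the neural feature and the subjective ratings:
# a feature that tracks the pain experience well will score close to 1.
r = np.corrcoef(neural_power, pain_scores)[0, 1]
print(f"correlation: {r:.2f}")
```

Months of continuous recording give many such paired samples per candidate feature, which is what makes the analysis far richer than one built on small snippets of data.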

 

Scottish North Sea wind farm to resume construction after Covid-19 stoppage

Neart Na Gaoithe (NnG) Offshore Wind Farm, owned by EDF Renewables and Irish firm ESB, stopped construction in March.

Project boss Matthias Haag announced last night that the 54-turbine wind farm would restart construction this week.

Located off Scotland’s east coast, the project was awarded a Contract for Difference (CfD) in 2015 and will look to generate enough energy to power 375,000 homes.

It is expected to create around 500 jobs and deliver £540 million to the local economy.

Mr Haag, NnG project director, said the wind farm build would resume with a small, staggered workforce return in line with social distancing rules.

He added: “Initially, we will only have a few people on site to put in place control measures so the rest of the team can start work safely later that week.

“Once that’s happened we will have a reduced workforce on site, including essential supervisory staff.

“The arrangements we have put in place will be under regular review as we continue to closely monitor Covid-19 and follow the Scottish Government’s guidance.”

NnG wind farm was due to begin full offshore construction in June 2020 before the Covid-19 outbreak.

EDF Renewables sold half of the NnG project to Irish firm ESB in November last year.

The initial payment was understood to be around £50 million.

 
