Toronto aims for zero emissions

By Toronto Star


Toronto's planned Lower Don Lands development will seek to achieve zero emissions of greenhouse gases – a goal announced at an environmental conference in Seoul, South Korea, chaired by Mayor David Miller.

The site east of downtown is one of 16 projects around the world that will receive assistance from a program founded by former U.S. president Bill Clinton, Miller told Toronto media during a video conference from Seoul.

The Clinton Climate Initiative is a partner with the Miller-led C40 Cities Climate Leadership Group, which meets every two years to discuss ways to combat global warming.

The Clinton program will provide advice on how the Lower Don Lands can generate solar and geothermal energy on-site to supply neighbourhoods to be built around the rejuvenated mouth of the Don River, Miller said.

Miller said he would like to see the development actually produce more clean energy than it needs, and feed the excess into the electricity grid.

"It's not about money," he said.

"It's about technical expertise. It's a partnership between C40, Toronto, Waterfront Toronto and the Clinton Climate Initiative."

The C40 group, whose cities are home to about 600 million people, was set up in 2005 by former London mayor Ken Livingstone.

Since then, it has become an important forum for sharing ideas about how to reduce emissions, Miller said.

"You need state-of-the-art knowledge to do that. That's the important part of the C40, transferring knowledge and expertise."

C40 set up a 10-city working group to study ways to make it easier to charge electric vehicles.

"It's early stages yet, but there are some real issues for cities, particularly in the infrastructure for electric vehicles. How do you create a set of charging stations? Where do you put them? How do you regulate that?

"Are there ways cities can work together to say, ‘This is the standard we're going to have,’ whether its Los Angeles, London or Seoul? Could we set the same standard, thereby bring down its cost, and make it far more likely that the electric vehicles will be successful?"

The group plans to attend the next international climate change summit this December in Copenhagen. It will emphasize that cities must be at the table because, while they occupy only two per cent of the land, they produce 80 per cent of the greenhouse gases, the mayor said.

Closer to home, Miller said the city has received advice from C40 on retrofitting public housing projects.

Wrapping older apartment buildings with a new layer of insulation and other retrofits could reduce the city's total emissions of greenhouse gases by some 3 to 5 per cent, Miller said.

Related News

Toshiba, Tohoku Electric Power and Iwatani start development of large H2 energy system

Fukushima Hydrogen Energy System leverages a 10,000 kW H2 production hub for grid balancing, demand response, and renewable integration, delivering hydrogen supply across Tohoku while supporting storage, forecasting, and flexible power management.

 

Key Points

A 10,000 kW H2 project in Namie for grid balancing, renewable integration, and regional hydrogen supply.

✅ 10,000 kW H2 production hub in Namie, Fukushima

✅ Balances renewable-heavy grids via demand response

✅ Supported by NEDO; partners Toshiba, Tohoku Electric, Iwatani

 

Toshiba Corporation, Tohoku Electric Power Co. and Iwatani Corporation have announced they will construct and operate a large-scale hydrogen (H2) energy system in Japan, based on a 10,000 kilowatt-class H2 production facility that reflects advances in PEM hydrogen R&D worldwide.

The system, which will be built in Namie-Cho, Fukushima, will use H2 to offset grid loads and deliver H2 to locations in Tohoku and beyond, and will seek to demonstrate the advantages of H2 as a solution for grid balancing and as a gas supply; complementary approaches such as power-to-gas storage in Europe demonstrate broader storage options.

The project has won a positive evaluation from Japan’s New Energy and Industrial Technology Development Organisation (NEDO), along with its continued support for the transition to the technical demonstration phase. The practical effectiveness of the large-scale system will be determined by verification testing in financial year 2020.

The main objectives of the partners are to promote expanded use of renewable energy in the electricity grid, using the hydrogen facility to balance supply and demand and to manage loads, and to realise a new control system that optimises H2 production and supply using demand forecasting for H2.
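For illustration only, here is a minimal sketch of the kind of dispatch logic such a control system implies: run the electrolyzer harder when renewable output is in surplus, capped by the hydrogen demand forecast and the plant's rated capacity. The function name, the energy-per-kilogram figure, and all numbers are assumptions for illustration, not the partners' actual design.

```python
# Illustrative sketch only: a toy dispatch rule for an electrolyzer used as a
# flexible load. Thresholds, capacities, and names are assumptions, not the
# partners' actual control design.

ELECTROLYZER_CAPACITY_KW = 10_000  # per the announced 10,000 kW class facility

def electrolyzer_setpoint_kw(renewable_surplus_kw: float,
                             h2_demand_forecast_kg: float,
                             kwh_per_kg_h2: float = 55.0) -> float:
    """Choose how hard to run the electrolyzer this hour.

    Run harder when the grid has surplus renewable output (absorbing it as a
    flexible load), but never produce more H2 than the demand forecast needs.
    """
    # Power needed to just meet the forecast H2 demand this hour
    demand_driven_kw = h2_demand_forecast_kg * kwh_per_kg_h2
    # Prefer to soak up surplus renewables, capped by demand and plant size
    target_kw = min(max(renewable_surplus_kw, 0.0), demand_driven_kw)
    return min(target_kw, ELECTROLYZER_CAPACITY_KW)

# Surplus hour: run near full output; deficit hour: back off to zero
print(electrolyzer_setpoint_kw(renewable_surplus_kw=8_000, h2_demand_forecast_kg=200))
print(electrolyzer_setpoint_kw(renewable_surplus_kw=-2_000, h2_demand_forecast_kg=200))
```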

Hiroyuki Ota, General Manager of Toshiba’s Energy Systems and Solutions Company, said, “Through this project, Toshiba will continue to provide comprehensive H2 solutions, encompassing all processes from the production to utilisation of hydrogen.”

Manager of Tohoku Electric Power Co., Ltd, Mitsuhiro Matsumoto, added, “We will study how to use H2 energy systems to stabilize electricity grids with the aim of increasing the use of renewable energy and contributing to Fukushima.”

Moriyuki Fujimoto, General Manager of Iwatani Corporation, commented, “Iwatani considers that this project will contribute to the early establishment of an H2 economy, drawing on our experience in the transportation, storage and supply of industrial H2, and in the construction and operation of H2 stations.”

Japan’s Ministry of Economy, Trade and Industry’s ‘Long-term Energy Supply and Demand Outlook’ targets increasing the share of renewable energy in Japan’s overall power generation mix from 10.7% in 2013 to 22-24% by 2030. Since output from renewable energy sources is intermittent and fluctuates widely with the weather and season, grid management requires another compensatory power source, as highlighted by a near-blackout event in Japan. The large hydrogen energy system is expected to provide a solution for grids with a high penetration of renewables.

 

Related News


Michigan Public Service Commission grants Consumers Energy request for more wind generation

Consumers Energy Wind Expansion gains MPSC approval in Michigan, adding up to 525 MW of wind power, including Gratiot Farms, while solar capacity requests face delays over cost projections under the renewable portfolio standard targets.

 

Key Points

A regulatory-approved plan enabling Consumers Energy to add 525 MW of wind while solar additions await cost review.

✅ MPSC approves up to 525 MW in new wind projects

✅ Gratiot Farms purchase allowed before May 1

✅ Solar request delayed over high cost projections

 

Consumers Energy Co.’s efforts to expand its renewable offerings gained some traction this week when the Michigan Public Service Commission (MPSC) approved a request for additional wind generation capacity.

Consumers had argued that both more wind and solar facilities are needed to meet the state’s renewable portfolio standard, which was expanded in 2016 to encompass 12.5 percent of the retail power of each Michigan electric provider. That figure will continue to rise under the law, reaching 15 percent in 2021. However, Consumers’ request for additional solar facilities was delayed at this time because of what the Commission labeled unrealistically high cost projections.

Consumers will be able to add as much as 525 megawatts of new wind projects, including two proposed 175-megawatt wind projects slated to begin operation this year and next. Consumers has also been allowed to purchase the Gratiot Farms Wind Project before May 1.

The MPSC said a final determination would be made on Consumers’ solar requests during a decision in April. Consumers had sought an additional 100 megawatts of solar facilities, hoping to get them online sometime in 2024 and 2025.

 

Related News


Jolting the brain's circuits with electricity is moving from radical to almost mainstream therapy

Brain Stimulation is transforming neuromodulation, from TMS and DBS to closed-loop devices, targeting neural circuits for addiction, depression, Parkinson's, epilepsy, and chronic pain, powered by advanced imaging, AI analytics, and the NIH BRAIN Initiative.

 

Key Points

Brain stimulation uses pulses to modulate neural circuits, easing symptoms in depression, Parkinson's disease, and epilepsy.

✅ Noninvasive TMS and invasive DBS modulate specific brain circuits

✅ Closed-loop systems adapt stimulation via real-time biomarker detection

✅ Emerging uses: addiction, depression, Parkinson's, epilepsy, chronic pain

 

In June 2015, biology professor Colleen Hanlon went to a conference on drug dependence. As she met other researchers and wandered around a glitzy Phoenix resort’s conference rooms to learn about the latest work on therapies for drug and alcohol use disorders, she realized that out of the 730 posters, there were only two on brain stimulation as a potential treatment for addiction — both from her own lab at Wake Forest School of Medicine.

Just four years later, she would lead 76 researchers on four continents in writing a consensus article about brain stimulation as an innovative tool for addiction. And in 2020, the Food and Drug Administration approved a transcranial magnetic stimulation device to help patients quit smoking, a milestone for substance use disorders.

Brain stimulation is booming. Hanlon can attend entire conferences devoted to the study of what electrical currents do—including how targeted stimulation can improve short-term memory in older adults—to the intricate networks of highways and backroads that make up the brain’s circuitry. This expanding field of research is slowly revealing truths of the brain: how it works, how it malfunctions, and how electrical impulses, precisely targeted and controlled, might be used to treat psychiatric and neurological disorders.

In the last half-dozen years, researchers have launched investigations into how different forms of neuromodulation affect addiction, depression, loss-of-control eating, tremor, chronic pain, obsessive compulsive disorder, Parkinson’s disease, epilepsy, and more. Early studies have shown subtle electrical jolts to certain brain regions could disrupt circuit abnormalities — the miscommunications — that are thought to underlie many brain diseases, and help ease symptoms that persist despite conventional treatments.

The National Institute of Health’s massive BRAIN Initiative put circuits front and center, distributing $2.4 billion to researchers since 2013 to devise and use new tools to observe interactions between brain cells and circuits. That, in turn, has kindled interest from the private sector. Among the advances that have enhanced our understanding of how distant parts of the brain talk with one another are new imaging technology and the use of machine learning, much as utilities use AI to adapt to shifting electricity demand, to interpret complex brain signals and analyze what happens when circuits go haywire.

Still, the field is in its infancy. Even therapies that have been approved for use in patients with, for example, Parkinson’s disease or epilepsy help only a minority of patients, and expectations can outpace evidence. “If it was the Bible, it would be the first chapter of Genesis,” said Michael Okun, executive director of the Norman Fixel Institute for Neurological Diseases at University of Florida Health.

As brain stimulation evolves, researchers face daunting hurdles, and not just scientific ones. How will brain stimulation become accessible to all the patients who need it, given how expensive and invasive some treatments are? Proving to the FDA that brain stimulation works, and does so safely, is complicated and expensive. Even with a swell of scientific momentum and an influx of funding, the agency has so far cleared brain stimulation for only a handful of limited conditions. Persuading insurers to cover the treatments is another challenge altogether. And outside the lab, researchers are debating nascent issues, such as the ethics of mind control, the privacy of a person’s brain data, and how best to involve patients in the study of the human brain’s far-flung regions.

Neurologist Martha Morrell is optimistic about the future of brain stimulation. She remembers the shocked reactions of her colleagues in 2004 when she left full-time teaching at Stanford (she still has a faculty appointment as a clinical professor of neurology) to direct clinical trials at NeuroPace, then a young company making neurostimulator systems to potentially treat epilepsy patients.

“When I started working on this, everybody thought I was insane,” said Morrell. Nearly 20 years in, she sees a parallel between the story of jolting the brain’s circuitry and that of early implantable cardiac devices, such as pacemakers and defibrillators, which initially “were used as a last option, where all other medications have failed.” Now, “the field of cardiology is very comfortable incorporating electrical therapy, device therapy, into routine care. And I think that’s really where we’re going with neurology as well.”


Reaching a ‘slope of enlightenment’
Parkinson’s is, in some ways, an elder in the world of modern brain stimulation, and it shows the potential as well as the limitations of the technology. Surgeons have been implanting electrodes deep in the brains of Parkinson’s patients since the late 1990s, and in people with more advanced disease since the early 2000s.

In that time, it’s gone through the “hype cycle,” said Okun, the national medical adviser to the Parkinson’s Foundation since 2006. Feverish excitement and overinflated expectations have given way to reality, bringing scientists to a “slope of enlightenment,” he said. They have found deep brain stimulation to be very helpful for some patients with Parkinson’s, rendering them almost symptom-free by calming the shaking and tremors that medications couldn’t. But it doesn’t stop the progression of the disease, or resolve some of the problems patients with advanced Parkinson’s have walking, talking, and thinking.

In 2015, the same year Hanlon found only her lab’s research on brain stimulation at the addiction conference, Kevin O’Neill watched one finger on his left hand start doing something “funky.” One finger twitched, then two, then his left arm started tingling and a feeling appeared in his right leg, like it was about to shake but wouldn’t — a tremor.

“I was assuming it was anxiety,” O’Neill, 62, told STAT. He had struggled with anxiety before, and he had endured a stressful year: a separation, selling his home, starting a new job at a law firm in California’s Bay Area. But a year after his symptoms first began, O’Neill was diagnosed with Parkinson’s.


Doctors prescribed him pills that promote the release of dopamine, to offset the death of brain cells that produce this messenger molecule in circuits that control movement. But he took them infrequently because he worried about insomnia as a side effect. Walking became difficult — “I had to kind of think my left leg into moving” — and the labor lawyer found it hard to give presentations and travel to clients’ offices.

A former actor with an outgoing personality, he developed social anxiety and didn’t tell his bosses about his diagnosis for three years, and wouldn’t have, if not for two workdays in summer 2018 when his tremors were severe and obvious.

O’Neill’s tremors are all but gone since he began deep brain stimulation last May, though his left arm shakes when he feels tense.

It was during that period that he learned about deep brain stimulation, at a support group for Parkinson’s patients. “I thought, ‘I will never let anybody fuss with my brain. I’m not going to be a candidate for that,’” he recalled. “It felt like mad scientist science fiction. Like, are you kidding me?”

But over time, the idea became less radical, as O’Neill spoke to DBS patients and doctors and did his own research, and as his symptoms worsened. He decided to go for it. Last May, doctors at the University of California, San Francisco surgically placed three metal leads into his brain, connected by thin cords to two implants in his chest, just near the clavicles. A month later, he went into the lab and researchers turned the device on.

“That was a revelation that day,” he said. “You immediately — literally, immediately — feel the efficacy of these things. … You go from fully symptomatic to non-symptomatic in seconds.”

When his nephew pulled up to the curb to pick him up, O’Neill started dancing, and his nephew teared up. The following day, O’Neill couldn’t wait to get out of bed and go out, even if it was just to pick up his car from the repair shop.

In the year since, O’Neill’s walking has gone from “awkward and painful” to much improved, and his tremors are all but gone. When he is extra frazzled, like while renovating and moving into his new house overlooking the hills of Marin County, he feels tense and his left arm shakes and he worries the DBS is “failing,” but generally he returns to a comfortable, tremor-free baseline.

O’Neill worried about the effects of DBS wearing off but, for now, he can think “in terms of decades, instead of years or months,” he recalled his neurologist telling him. “The fact that I can put away that worry was the big thing.”

He’s just one patient, though. The brain has regions that are mostly uniform across all people. The functions of those regions also tend to be the same. But researchers suspect that how brain regions interact with one another — who mingles with whom, and what conversation they have — and how those mixes and matches cause complex diseases varies from person to person. So brain stimulation looks different for each patient.

Each case of Parkinson’s manifests slightly differently, and that’s a bit of knowledge that applies to many other diseases, said Okun, who organized the nine-year-old Deep Brain Stimulation Think Tank, where leading researchers convene, review papers, and publish reports on the field’s progress each year.

“I think we’re all collectively coming to the realization that these diseases are not one-size-fits-all,” he said. “We have to really begin to rethink the entire infrastructure, the schema, the framework we start with.”

Brain stimulation is also used frequently to treat people with common forms of epilepsy, and has reduced the number of seizures or improved other symptoms in many patients. Researchers have also been able to collect high-quality data about what happens in the brain during a seizure — including identifying differences between epilepsy types. Still, only about 15% of patients are symptom-free after treatment, according to Robert Gross, a neurosurgery professor at Emory University in Atlanta.

“And that’s a critical difference for people with epilepsy. Because people who are symptom-free can drive,” which means they can get to a job in a place like Georgia, where there is little public transit, he said. So taking neuromodulation “from good to great,” is imperative, Gross said.


Renaissance for an ancient idea
Recent advances are bringing about what Gross sees as “almost a renaissance period” for brain stimulation, though the ideas that undergird the technology are millennia old. Neuromodulation goes back to at least ancient Egypt and Greece, when electrical shocks from a ray, called the “torpedo fish,” were recommended as a treatment for headache and gout. Over centuries, the fish zaps led to doctors burning holes into the brains of patients. Those “lesions” worked, somehow, but nobody could explain why they alleviated some patients’ symptoms, Okun said.

Perhaps the clearest predecessor to today’s technology is electroconvulsive therapy (ECT), which in a rudimentary and dangerous way began being used on patients with depression roughly 100 years ago, said Nolan Williams, director of the Brain Stimulation Lab at Stanford University.

More modern forms of brain stimulation came about in the United States in the mid-20th century. A common, noninvasive approach is transcranial magnetic stimulation, which involves placing an electromagnetic coil on the scalp to transmit a current into the outermost layer of the brain. Vagus nerve stimulation (VNS), used to treat epilepsy, zaps a nerve that contributes to some seizures.

The most invasive option, deep brain stimulation, involves implanting in the skull a device attached to electrodes embedded in deep brain regions, such as the amygdala, that can’t be reached with other stimulation devices. In 1997, the FDA gave its first green light to deep brain stimulation as a treatment for tremor, and then for Parkinson’s in 2002 and the movement disorder dystonia in 2003.

Even as these treatments were cleared for patients, though, what was happening in the brain remained elusive. But advanced imaging tools now let researchers peer into the brain and map out networks, a recent breakthrough that researchers say has propelled the field of brain stimulation forward as much as increased funding has. Imaging of both human brains and animal models has helped researchers identify the neuroanatomy of diseases, target brain regions with more specificity, and watch what was happening after electrical stimulation.

Another key step has been the shift from open-loop stimulation, a constant stream of electricity, to closed-loop stimulation that delivers targeted, brief jolts in response to a symptom trigger. To make use of the futuristic technology, labs need people to develop artificial intelligence tools to interpret the large data sets a brain implant generates, and to tailor devices based on that information.

“We’ve needed to learn how to be data scientists,” Morrell said.

Affinity groups, like the NIH-funded Open Mind Consortium, have formed to fill that gap. Philip Starr, a neurosurgeon and developer of implantable brain devices at the University of California at San Francisco Health system, leads the effort to teach physicians how to program closed-loop devices, and works to create ethical standards for their use. “There’s been extraordinary innovation after 20 years of no innovation,” he said.

The BRAIN Initiative has been critical, several researchers told STAT. “It’s been a godsend to us,” Gross said. The NIH’s Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative was launched in 2013 during the Obama administration with a $50 million budget. BRAIN now spends over $500 million per year. Since its creation, BRAIN has given over 1,100 awards, according to NIH data. Part of the initiative’s purpose is to pair up researchers with medical technology companies that provide human-grade stimulation devices to the investigators. Nearly three dozen projects have been funded through the investigator-devicemaker partnership program and through one focused on new implantable devices for first-in-human use, according to Nick Langhals, who leads work on neurological disorders at the initiative.

The more BRAIN invests, the more research is spawned. “We learn more about what circuits are involved … which then feeds back into new and more innovative projects,” he said.

Many BRAIN projects are still in early stages, finishing enrollment or small feasibility studies, Langhals said. Over the next couple of years, scientists will begin to see some of the fruits of their labor, which could lead to larger clinical trials, or to companies developing more refined brain stimulation implants, Langhals said.

Money from the National Institute of Mental Health, as well as the NIH’s Helping to End Addiction Long-term (HEAL) initiative, has similarly sweetened the appeal of brain stimulation, both for researchers and industry. “A critical mass” of companies interested in neuromodulation technology has mushroomed where, for two decades, just a handful of companies stood, Starr said.

More and more, pharmaceutical and digital health companies are looking at brain stimulation devices “as possible products for their future,” said Linda Carpenter, director of the Butler Hospital TMS Clinic and Neuromodulation Research Facility.


‘Psychiatry 3.0’
The experience with using brain stimulation to stop tremors and seizures inspired psychiatrists to begin exploring its use as a potentially powerful therapy for healing, or even getting ahead of, mental illness.

In 2008, the FDA approved TMS for patients with major depression who had tried, and not gotten relief from, drug therapy. “That kind of opened the door for all of us,” said Hanlon, a professor and researcher at the Center for Research on Substance Use and Addiction at Wake Forest School of Medicine. The last decade saw a surge of research into how TMS could be used to reset malfunctioning brain circuits involved in anxiety, depression, obsessive-compulsive disorder, and other conditions.

“We’re certainly entering into what a lot of people are calling psychiatry 3.0,” Stanford’s Williams said. “Whereas the first iteration was Freud and all that business, the second one was the psychopharmacology boom, and this third one is this bit around circuits and stimulation.”

Drugs alleviate some patients’ symptoms while simultaneously failing to help many others, but psychopharmacology clearly showed “there’s definitely a biology to this problem,” Williams said — a biology that in some cases may be more amenable to brain stimulation.

The exact mechanics of what happens between cells when brain circuits … well, short-circuit, is unclear. Researchers are getting closer to finding biomarkers that warn of an incoming depressive episode, or wave of anxiety, or loss of impulse control. Those brain signatures could be different for every patient. If researchers can find molecular biomarkers for psychiatric disorders — and find ways to preempt those symptoms by shocking particular brain regions — that would reshape the field, Williams said.

Not only would disease-specific markers help clinicians diagnose people, but they could help chip away at the stigma that paints mental illness as a personal or moral failing instead of a disease. That’s what happened for epilepsy in the 1960s, when scientific findings nudged the general public toward a deeper understanding of why seizures happen, and it’s “the same trajectory” Williams said he sees for depression.

His research at the Stanford lab also includes work on suicide, and obsessive-compulsive disorder, which the FDA said in 2018 could be treated using noninvasive TMS. Williams considers brain stimulation, with its instantaneity, to be a potential breakthrough for urgent psychiatric situations. Doctors know what to do when a patient is rushed into the emergency room with a heart attack or a stroke, but there is no immediate treatment for psychiatric emergencies, he said. Williams wonders: What if, in the future, a suicidal patient could receive TMS in the emergency room and be quickly pulled out of their depressive mental spiral?

Researchers are also actively investigating the brain biology of addiction. In August 2020, the FDA approved TMS for smoking cessation, the first such OK for a substance use disorder, which is “really exciting,” Hanlon said. Although there is some nuance when comparing substance use disorders, a primal mechanism generally defines addiction: the eternal competition between “top-down” executive control functions and “bottom-up” cravings. It’s the same process that is at work when one is deciding whether to eat another cookie or abstain — just exacerbated.

Hanlon is trying to figure out if the stop and go circuits are in the same place for all people, and whether neuromodulation should be used to strengthen top-down control or weaken bottom-up cravings. Just as brain stimulation can be used to disrupt cellular misfiring, it could also be a tool for reinforcing helpful brain functions, or for giving the addicted brain what it wants in order to curb substance use.

Evidence suggests many people with schizophrenia smoke cigarettes (a leading cause of early death for this population) because nicotine reduces the “hyperconnectivity” that characterizes the brains of people with the disease, said Heather Ward, a research fellow at Boston’s Beth Israel Deaconess Medical Center. She suspects TMS could mimic that effect, and therefore reduce cravings and some symptoms of the disease, and she hopes to prove that in a pilot study that is now enrolling patients.

If the scientific evidence proves out, clinicians say brain stimulation could be used alongside behavioral therapy and drug-based therapy to treat substance use disorders. “In the end, we’re going to need all three to help people stay sober,” Hanlon said. “We’re adding another tool to the physician’s toolbox.”

Decoding the mysteries of pain
A favorable outcome to the ongoing research, one that would fling the doors to brain stimulation wide open for patients with myriad disorders, is far from guaranteed. Chronic pain researchers know that firsthand.

Chronic pain, among the most mysterious and hard-to-study medical phenomena, was the first use for which the FDA approved deep brain stimulation, said Prasad Shirvalkar, an assistant professor of anesthesiology at UCSF. But when studies didn’t pan out after a year, the FDA retracted its approval.

Shirvalkar is working with Starr and neurosurgeon Edward Chang on a profoundly complex problem: “decoding pain in the brain states, which has never been done,” as Starr told STAT.

Part of the difficulty of studying pain is that there is no objective way to measure it. Much of what we know about pain is from rudimentary surveys that ask patients to rate how much they’re hurting, on a scale from zero to 10.

Using implantable brain stimulation devices, the researchers ask patients for a 0-to-10 rating of their pain while recording up-and-down cycles of activity in the brain. They then use machine learning to compare the two streams of information and see what brain activity correlates with a patient’s subjective pain experience. Implantable devices let researchers collect data over weeks and months, instead of basing findings on small snippets of information, allowing for a much richer analysis.
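A minimal sketch of the kind of analysis described here, assuming a single neural feature and a simple linear model; the real pipeline uses far richer recordings and machine-learning methods, and the data below are synthetic.

```python
# Minimal sketch of the analysis described above: line up a recorded neural
# feature with the patient's 0-10 pain ratings and fit a simple model to see
# how well the brain signal tracks the subjective reports. The feature, model
# choice, and synthetic data are illustrative assumptions, not the UCSF pipeline.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy data: one neural feature (e.g., band power in a pain-related region)
# sampled whenever the patient logged a pain score.
neural_feature = rng.uniform(0.0, 1.0, size=200)
pain_score = np.clip(10 * neural_feature + rng.normal(0, 1.5, size=200), 0, 10)

model = LinearRegression().fit(neural_feature.reshape(-1, 1), pain_score)
r2 = model.score(neural_feature.reshape(-1, 1), pain_score)
print(f"Variance in reported pain explained by the neural feature: R^2 = {r2:.2f}")
```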

 

Related News


Was there another reason for electricity shutdowns in California?

PG&E Wind Shutdown and Renewable Reliability examines PSPS strategy, wildfire risk, transmission line exposure, wind turbine cut-out speeds, grid stability, and California's energy mix amid historic high-wind events and supply constraints across service areas.

 

Key Points

An overview of PG&E's PSPS decisions, wildfire mitigation, and how wind cut-out limits influence grid reliability.

✅ Wind turbines reach cut-out near 55 mph, reducing generation.

✅ PSPS mitigates ignition from damaged transmission infrastructure.

✅ Baseload diversity improves resilience during high-wind events.

 

According to the official, widely reported story, Pacific Gas & Electric (PG&E) initiated power shutoffs across substantial portions of its electric transmission system in northern California as a precautionary measure.

Citing high wind speeds they described as “historic,” the utility claims that if it didn’t turn off the grid, wind-caused damage to its infrastructure could start more wildfires.

Perhaps that’s true. Perhaps. This tale presumes that the folks who designed and maintain PG&E’s transmission system are unaware of or ignored the need to design it to withstand severe weather events, and that the Federal Energy Regulatory Commission (FERC) and North American Electric Reliability Corp. (NERC) allowed the utility to do so.

Ignorance and incompetence happen, to be sure, but there’s much about this story that doesn’t smell right, and it’s disappointing that most journalists and elected officials are apparently accepting it without question.

Take, for example, this statement from a Fox News story about the Kincade Fires: “A PG&E meteorologist said it’s ‘likely that many trees will fall, branches will break,’ which could damage utility infrastructure and start a fire.”

Did you ever notice how utilities cut wide swaths of trees away when transmission lines pass through forests? There’s a reason for that: when trees fall and branches break, the grid can still function.

So, if badly designed and poorly maintained infrastructure isn’t the reason PG&E cut power to millions of Californians, what might have prompted them to do so? Could it be that PG&E’s heavy reliance on renewable energy means it doesn’t have the power to send when a “historic” weather event occurs?

 

Wind Speed Limits

The two most popular forms of renewable energy come with operating limitations. With solar power, the constraint is obvious: the availability of sunlight. One doesn’t generate solar power at night, and generation drops off as cloud cover increases during the day.

The main operating constraint of wind power is, of course, wind speed. At the low end of the scale, you need about a 6 or 7 miles-per-hour wind to get a turbine moving. This is called the “cut-in speed.” To generate maximum power, about a 30 mph wind is typically required. But if the wind speed is too high, the wind turbine will shut down. This is called the “cut-out speed,” and it’s about 55 miles per hour for most modern wind turbines.

It may seem odd that wind turbines have a cut-out speed, but there’s a very good reason for it. Each wind turbine rotor is connected to an electric generator housed in the turbine nacelle. The connection is made through a gearbox that is sized to turn the generator at the precise speed required to produce 60 Hertz AC power.

The blades of the wind turbine are airfoils, just like the wings of an airplane. Adjusting the pitch (angle) of the blades allows the rotor to maintain constant speed, which, in turn, allows the generator to maintain the constant speed it needs to safely deliver power to the grid. However, there’s a limit to blade pitch adjustment. When the wind is blowing so hard that pitch adjustment is no longer possible, the turbine shuts down. That’s the cut-out speed.
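For illustration, here is a toy power-curve function built from the rough figures quoted above (cut-in around 7 mph, rated output around 30 mph, cut-out around 55 mph). The cubic ramp between cut-in and rated speed is a common textbook simplification, not manufacturer data.

```python
# Toy wind-turbine power curve using the rough speeds quoted above. The cubic
# region below rated speed is a common simplification, not manufacturer data.

def turbine_output_fraction(wind_mph: float,
                            cut_in: float = 7.0,
                            rated: float = 30.0,
                            cut_out: float = 55.0) -> float:
    """Fraction of rated power produced at a given wind speed."""
    if wind_mph < cut_in or wind_mph >= cut_out:
        return 0.0                       # too slow to turn, or shut down for safety
    if wind_mph >= rated:
        return 1.0                       # blade pitch holds output at rated power
    # Between cut-in and rated speed, output rises roughly with the cube of wind speed
    return ((wind_mph - cut_in) / (rated - cut_in)) ** 3

for mph in (5, 15, 30, 50, 60):
    print(f"{mph:>2} mph -> {turbine_output_fraction(mph):.2f} of rated power")
```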

Now consider how California’s power generation profile has changed. According to Energy Information Administration data, the state generated 74.3 percent of its electricity from traditional sources (fossil fuels and nuclear) in 2001. Hydroelectric, geothermal, and biomass-generated power accounted for most of the remaining 25.7 percent, with wind and solar providing only 1.98 percent of the total.

By 2018, the state’s renewable portfolio had jumped to 43.8 percent of total generation, with wind and solar now accounting for 17.9 percent of the total. That’s a lot of power to depend on from inherently unreliable sources. Thus, it wouldn’t be at all surprising to learn that PG&E didn’t stop delivering power out of fear of starting fires, but because it knew it wouldn’t have power to deliver once high winds shut down all those wind turbines.

 

Related News


Which of the cleaner states imports dirty electricity?

Hourly Electricity Emissions Tracking maps grid balancing areas, embodied emissions, and imports/exports, revealing carbon intensity shifts across PJM, ERCOT, and California ISO, and clarifying renewable energy versus coal impacts on health and climate.

 

Key Points

An hourly method tracing generation, flows, and embodied emissions to quantify carbon intensity across US balancing areas.

✅ Hourly traces of imports/exports and generation mix

✅ Consumption-based carbon intensity by balancing area

✅ Policy insights for renewables, coal, health costs

 

In the United States, electricity generation accounts for nearly 30% of our carbon emissions. Some states have responded to that by setting aggressive renewable energy standards; others are hoping to see coal propped up even as its economics get worse. Complicating matters further is the fact that many regional grids are integrated, meaning power generated in one location may be exported and used in a different state entirely.

Tracking these electricity exports is critical for understanding how to lower our national carbon emissions. In addition, power from a dirty source like coal has health and environmental impacts where it's produced, and the costs of these aren't always paid by the parties using the electricity. Unfortunately, getting reliable figures on how electricity is produced and where it's used is challenging, leaving some of the best estimates with a time resolution of only a month.

Now, three Stanford researchers—Jacques A. de Chalendar, John Taggart, and Sally M. Benson—have greatly improved on that standard, managing to track power generation and use on an hourly basis. The researchers found that, of the 66 grid balancing areas within the United States, only three have carbon emissions equivalent to our national average, and that imports and exports of electricity show both seasonal and daily changes. de Chalendar et al. found that the net results can be substantial, with imported electricity increasing California's emissions per unit of power by 20%.

Hour by hour
To figure out the US energy trading landscape, the researchers obtained 2016 data for grid features called balancing areas. The continental US has 66 of these, providing much better spatial resolution than the larger grid subdivisions. This doesn't cover everything—several balancing areas in Canada and Mexico are tied in to the US grid—and some of these balancing areas are much larger than others. The PJM grid, serving Pennsylvania, New Jersey, and Maryland, for example, is more than twice as large as Texas' ERCOT, which covers the state that produces and consumes the most electricity in the US.

Despite these limitations, it's possible to get hourly figures on how much electricity was generated, what was used to produce it, and whether it was used locally or exported to another balancing area. Information on the generating sources allowed the researchers to attach an emissions figure to each unit of electricity produced. Coal, for example, produces double the emissions of natural gas, which in turn produces more than an order of magnitude more carbon dioxide than the manufacturing of solar, wind, or hydro facilities. These figures were turned into what the authors call "embodied emissions" that can be traced to where they're eventually used.
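A minimal sketch of the consumption-based accounting idea: assign an emissions factor to each generation source, then attribute the "embodied emissions" of imports to the area that consumes them. The factors and flow numbers below are illustrative placeholders, not the study's data.

```python
# Minimal sketch of consumption-based ("embodied") emissions accounting for one
# hour in one balancing area. Emissions factors and flows are illustrative
# placeholders, not the study's actual data.

EMISSIONS_KG_PER_MWH = {"coal": 1000, "gas": 500, "hydro": 10, "wind": 10, "solar": 10}

def consumption_intensity(local_gen_mwh: dict, imports: list) -> float:
    """kg CO2 per MWh consumed, counting imports at the exporter's intensity.

    imports: list of (mwh, exporter_intensity_kg_per_mwh) tuples.
    """
    local_mwh = sum(local_gen_mwh.values())
    local_kg = sum(mwh * EMISSIONS_KG_PER_MWH[src] for src, mwh in local_gen_mwh.items())
    import_mwh = sum(mwh for mwh, _ in imports)
    import_kg = sum(mwh * intensity for mwh, intensity in imports)
    return (local_kg + import_kg) / (local_mwh + import_mwh)

# A hydro-rich area importing coal-heavy power looks far dirtier on a
# consumption basis than on a production basis (compare the Idaho example below).
local = {"hydro": 900, "gas": 100}          # MWh generated locally this hour
imports = [(300, 625)]                      # 300 MWh imported at 625 kg/MWh
print(round(consumption_intensity(local, imports), 1))   # kg CO2 per MWh consumed
```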

Similar figures were also generated for sulfur dioxide and nitrogen oxides. Released by the burning of fossil fuels, these can both influence the global climate and produce local health problems.

Huge variation
The results were striking. "The consumption-based carbon intensity of electricity varies by almost an order of magnitude across the different regions in the US electricity system," the authors conclude. The low is the Bonneville Power grid region, which is largely supplied by hydropower; it has typical emissions below 100 kg of carbon dioxide per megawatt-hour. The highest emissions come in the Ohio Valley Electric region, where emissions clear 900 kg/MWh. Only three regional grids match the overall grid emissions intensity, although that group includes the very large PJM, ERCOT, and Southern Co balancing areas.

Most of the low-emissions power that's exported comes from the Pacific Northwest's abundant hydropower, while the Rocky Mountains area exports electricity with the highest associated emissions. That leads to some striking asymmetries. Local generation in the hydro-rich Idaho Power Company has embodied emissions of only 71 kg/MWh, while its imports, coming primarily from Rocky Mountain states, have a carbon content of 625 kg/MWh.

The reliance on hydropower also makes the asymmetry seasonal. Local generation is highest in the spring as snow melts, but imports become a larger source outside this time of year. As solar and wind can also have pronounced seasonal shifts, similar changes will likely be seen as these become larger contributors to many of these regional grids. Similar things occur daily, as both demand and solar production (and, to a lesser extent, wind) have distinct daily profiles.

The Golden State
California's CISO provides another instructive case. Imports represented less than 30% of its total electricity use in 2016, yet they provided 40% of its embodied emissions. Some of these imports, however, come from within California, supplied by the Los Angeles Department of Water and Power. The state itself has only had limited tracking of imported emissions, lumping many of its sources together as "other."

Overall, the 2016 inventory provides a narrow picture of the US grid, as plenty of trends are rapidly changing the country's emissions profile, including the rise of renewables and the widespread adoption of efficiency measures. The method developed here can, however, allow for annual updates, providing us with a much better picture of those trends. That could be quite valuable for tracking things like how the rapid rise in solar power is altering the daily production of clean power.

More significantly, it provides a basis for more informed policymaking. States that wish to promote low-emissions power can use the information here to either alter the source of their imports or to encourage the sites where they're produced to adopt more renewable power. And those states that are exporting electricity produced primarily through fossil fuels could ensure that the locations where the power is used pay a price that includes the health costs of its production.

 

Related News


Court reinstates constitutional challenge to Ontario's hefty ‘global adjustment’ electricity charge

Ontario Global Adjustment Charge faces constitutional scrutiny as a regulatory charge vs tax; Court of Appeal revives case over electricity pricing, feed-in tariff contracts, IESO policy, and hydro rate impacts on consumers and industry.

 

Key Points

A provincial electricity fee funding generator contracts, now central to a court fight over tax versus regulatory charge.

✅ Funds gap between market price and contracted generator rates

✅ At issue: regulatory charge vs tax under constitutional law

✅ Linked to feed-in tariff, IESO policy, and hydro rate hikes

 

Ontario’s court of appeal has decided that a constitutional challenge of a steep provincial electricity charge should get its day in court, overturning a lower-court judgment that had dismissed the legal bid.

Hamilton, Ont.-based National Steel Car Ltd. launched the challenge in 2017, saying Ontario’s so-called global adjustment charge was unconstitutional because it is a tax — not a valid regulatory charge — that was not passed by the legislature.

The global adjustment funds the difference between the province’s hourly electricity price and the price guaranteed under contracts to power generators. It is “the component that covers the cost of building new electricity infrastructure in the province, maintaining existing resources, as well as providing conservation and demand management programs,” the province’s Independent Electricity System Operator says.
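As a back-of-the-envelope sketch of how such a charge arises, the example below pools the gap between contracted generator prices and the hourly market price and recovers it from consumers per kilowatt-hour. The prices, volumes, and flat per-kWh allocation are illustrative assumptions, not the IESO's actual methodology.

```python
# Back-of-the-envelope sketch of how a global-adjustment-style charge arises:
# pool the gap between each generator's contracted price and the hourly market
# price, then recover it from consumers per kWh. All numbers and the flat
# per-kWh allocation are illustrative, not the IESO's actual methodology.

market_price_per_mwh = 20.0   # hypothetical hourly Ontario energy price, $/MWh

# (contracted price $/MWh, contracted output MWh) for a few hypothetical generators
contracts = [(135.0, 500), (80.0, 2000), (480.0, 50)]

shortfall = sum((price - market_price_per_mwh) * mwh for price, mwh in contracts)

total_consumption_mwh = 10_000
ga_cents_per_kwh = shortfall / total_consumption_mwh / 1000 * 100

print(f"Pooled shortfall this hour: ${shortfall:,.0f}")
print(f"Global-adjustment-style charge: {ga_cents_per_kwh:.2f} cents/kWh")
```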

However, the global adjustment now makes up most of the commodity portion of a household electricity bill, and its costs have ballooned.

Ontario’s auditor general said in 2015 that global adjustment fees had increased from $650 million in 2006 to more than $7 billion in 2014. She added that consumers would pay $133 billion in global adjustment fees from 2015 to 2032, after having already paid $37 billion from 2006 to 2014.

National Steel Car, which manufactures steel rail cars, said its global adjustment costs went from $207,260 in 2008 to almost $3.4 million in 2016, according to an Ontario Court of Appeal decision released on Wednesday.

The company claimed the global adjustment was a tax because one of its components funds electricity procurement contracts under a “feed-in tariff” program, or FIT, which National Steel Car called “the main culprit behind the dramatic price increases for electricity,” the decision said.

Ontario’s auditor general said the FIT program “paid excessive prices to renewable energy generators.” The program has been ended, but contracts awarded under it remain in place.


National Steel Car claimed the FIT program “was actually designed to accomplish social goals unrelated to the generation of electricity,” such as helping rural and Indigenous communities, and was therefore a tax in service of those policy goals.

“The appellant submits that the Policy Goals can be achieved by Ontario in several ways, just not through the electricity pricing formula,” the decision said.

National Steel Car also argued the global adjustment violated a provincial law that requires the government to hold a referendum for new taxes.

“The appellant’s principal claim is that the Global Adjustment was a ‘colourable attempt to disguise a tax as a regulatory charge with the purpose of funding the costs of the Policy Goals,’” the decision said. “The appellant pressed this argument before the motion judge and before this court. The motion judge did not directly or adequately address it.”

The Ontario government applied to have the challenge thrown out for having “no reasonable cause of action,” and a Superior Court judge did so in 2018, saying the global adjustment is not a tax.

National Steel Car appealed the decision, and the decision published Wednesday allowed the appeal, set aside the lower-court judgment, and sent the case back to Superior Court, where it could get a full hearing.

“The appellant’s claim is sufficiently plausible on the evidentiary record it put forward that the applications should not have been dismissed on a pleadings motion before the development of a full record,” wrote Justice Peter D. Lauwers. “It is not plain, obvious and beyond doubt that the Global Adjustment, and particularly the challenged component, is properly characterized as a valid regulatory charge and not as an impermissible tax.”

Jerome Morse of Morse Shannon LLP, one of National Steel Car’s lawyers, said the Ontario government would now have 60 days to decide whether to seek permission to appeal to the Supreme Court of Canada.

“What the court has basically said is, ‘this is a plausible argument, here are the reasons why it’s plausible, there was no answer to this,’” Morse told the Financial Post.

Ontario and the IESO had supported the lower-court decision, but there has been a change in government since the challenge was first launched, with Progressive Conservative Doug Ford replacing Liberal Premier Kathleen Wynne. Before losing the 2018 election, the Liberals had launched a plan aimed at addressing hydro costs, the main thrust of which was to refinance global adjustment costs.

Wednesday’s decision states that “Ontario’s counsel advised the court that the current Ontario government ‘does not agree with the former government’s electricity procurement policy (since-repealed).’

“The government’s view is that: ‘The solution does not lie with the courts, but instead in the political arena with political actors,’” it adds.

A spokesperson for Ontario Energy Minister Greg Rickford said in an email that they are reviewing the decision but “as this matter is in the appeal period, it would be inappropriate to comment.” 

Ontario had also requested a stay of the matter so that a regulator, the Ontario Energy Board, could weigh in.

“However, Ontario only sought this relief from the motion judge in the alternative, and given the motion judge’s ultimate decision, she did not rule on the stay,” the decision said. “It would be premature for this court to rule on the issue, although it seems incongruous for Ontario to argue that the Superior Court is the convenient forum in which to seek to dismiss the applications as meritless, but that it is not the convenient forum for assessing the merits of the applications.”

National Steel Car’s challenge bears a resemblance to the constitutional challenges launched by Ontario and other provinces over the federal government’s carbon tax, but Justice Lauwers wrote “that the federal legislative scheme under consideration in those cases is distinctly different from the legislation at issue in this appeal.”

“Nothing in those decisions impacts this appeal,” the judge added.
 

 
