Niagara Tunnel nearly complete

By Toronto Star


Formally, it’s called the Niagara Tunnel.

Energy minister Brad Duguid calls it a project that will deliver cleaner, greener power to Ontario families.

Critics call it a $985 million project that’s now coming in at $1.6 billion.

Ontario Power Generation boss Tom Mitchell calls it a “top of the scale” engineering project.

But no one who has seen it calls it a simple hole in the ground.

It’s a 10.2 kilometre tunnel bored through the Niagara escarpment, carrying water from above Niagara Falls to the Sir Adam Beck generating station in Queenston at a rate of 500 cubic metres a second.

And now, the main tunnel is nearly done.

Big Becky, the 4,000-tonne drilling behemoth that has chewed her way through solid rock since 2006, is 9.5 kilometres from the starting point, less than a kilometre from the finish.

She should break through in April, just above the Falls.

Marko Sobota sat happily at the controls, grinding forward at up to 1.8 metres an hour, driving the 14-metre diameter tunnel ever closer to the goal.

“I’m just a local guy,” says Sobota happily. He was part of the crew that helped assemble Big Becky in 2005, and graduated to learning how to operate the machine.

“It’s great to get a job 10 minutes from home. And to be on a project this huge. It’s unbelievable.”

The additional water flowing through the Beck generating station will boost its annual output by 1.6 billion kilowatt hours, up from the current 12 billion.
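As a quick check on those figures, the extra 1.6 billion kilowatt hours amounts to roughly a 13% boost over the station's current annual output; a minimal sketch of the arithmetic:

```python
# Quick check on the article's figures for the Beck generating station.
current_output_bkwh = 12.0  # current annual output, billion kWh
added_output_bkwh = 1.6     # additional output from the new tunnel

increase = added_output_bkwh / current_output_bkwh
print(f"{increase:.1%}")  # -> 13.3%
```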

But it’s a huge job. Boring the tunnel means moving 1.7 million cubic metres of solid rock.

It dips as low as 150 metres below ground to pass under a subterranean gorge that is invisible from the surface but, not being made of solid rock, is unsuitable for tunneling.

“If anyone tells you doing green energy is easy, bring him down here,” says Mitchell, standing in the cavernous space as Big Becky throbs, rumbles and vibrates with a deafening cacophony.

The project found that out the hard way.

Dipping around the underground obstacle forced the project to re-route the tunnel, driving the cost higher than expected.

Mitchell says it was the right decision in the circumstances. And when power starts to flow as a result of the project, probably in 2013, he’ll have to persuade the Ontario Energy Board that the added cost is justified, in order to build the cost into the rates that Ontario Power charges for electricity.

But that’s still a couple of years off because there’s plenty of work to be done on the tunnel, even when Becky finishes her work this spring.

The whole tunnel must be lined with smooth concrete: In effect, Mitchell says it amounts to drilling a hole and then building a pipe inside it.

The more perfectly round the pipe, and the smoother its walls, the more energy is transmitted to the turbines in the generating station.

But in the run-up to this fall’s provincial election, with the Conservatives sniping at the Liberals over high energy bills, engineering and technology sometimes take a back seat to politics.

Touring the tunnel with reporters, Duguid takes time out to slam Conservative leader Tim Hudak and New Democratic Party leader Andrea Horwath for showing insufficient enthusiasm for the Liberals’ green, and often pricey, energy projects.

Hudak, says Duguid, is “trying to hoodwink Ontarians to think we can build that cleaner, modern energy system for free.”

In October, voters get to decide whether he’s right.

Related News

Only one in 10 utility firms prioritise renewable electricity – global study

Utility renewable investment gap: an Oxford study in Nature Energy finds most electric utilities favor fossil fuels over the clean energy transition, expanding coal and gas and risking stranded assets and missed climate targets despite global decarbonization commitments.

 

Key Points

Most utilities grow fossil capacity over renewables, slowing decarbonization and jeopardizing climate goals.

✅ Only 10% expand renewables faster than coal and gas growth

✅ 60% still add fossil plants; 15% actively cut coal and gas

✅ Risks: stranded assets, missed climate targets, policy backlash

 

Only one in 10 of the world’s electric utility companies are prioritising clean energy investment over growing their capacity of fossil fuel power plants, according to research from the University of Oxford.

The study of more than 3,000 utilities found most remain heavily invested in fossil fuels despite international efforts to reduce greenhouse gas emissions, and some are actively expanding their portfolio of polluting power plants.

The majority of the utility companies, many of which are state owned, have made little change to their generation portfolio in recent years.

Only 10% of the companies in the study, published in the research journal Nature Energy, are expanding their renewable energy capacity at a faster rate than their gas- or coal-fired capacity.

Of the companies prioritising renewable energy growth, 60% have not stopped concurrently expanding their fossil fuel portfolio and only 15% of these companies are actively reducing their gas and coal capacity.

Galina Alova, the author of the report, said the research highlighted “a worrying gap between what is needed” to tackle the climate crisis and “what actions are being taken by the utility sector”.

The report found 10% of utilities were favouring growth in gas-fired power plants. This cluster is dominated by US utilities, eager to take advantage of the country’s shale gas reserves, followed by Russia and Germany.

Only 2% of utilities are actively growing their coal-fired power capacity ahead of renewables or gas. This cluster is dominated by Chinese utilities – which alone contributed more than 60% of coal-focused companies – followed by India and Vietnam.

The report found the majority of companies prioritising renewable energy were clustered in Europe. Many of the industry’s biggest players are investing in low-carbon energy and green technologies to replace their ageing fossil fuel power plants.


In the UK, coal plants are shutting at pace ahead of the government’s 2025 ban on coal-fired power, in part because the UK’s domestic carbon tax on power plants makes them uneconomic to run.

“Although there have been a few high-profile examples of individual electric utilities investing in renewables, this study shows that overall, the sector is making the transition to clean energy slowly or not at all,” Alova said.

“Utilities’ continued investment in fossil fuels leaves them at risk of stranded assets – where power plants will need to be retired early – and undermines global efforts to tackle climate change.”
 

 

Related News


More red ink at Manitoba Hydro as need for new power generation looms

Manitoba NDP Energy Financing Strategy outlines public ownership of renewables, halts private wind farms, stabilizes hydroelectric rates, and addresses Manitoba Hydro deficits amid drought, export revenue declines, and rising demand for grid reliability.

 

Key Points

A plan to fund public renewables, pause private wind, and stabilize Manitoba Hydro rates, improving utility finances.

✅ Public ownership favored over private wind contracts

✅ Focus on rate freeze and Manitoba Hydro debt management

✅ Addresses drought impacts, export revenue declines, rising demand

 

Manitoba's NDP administration has declared its intention to formulate a strategy for financing new energy ventures, following a decision to halt the development of additional private-sector wind farms and to extend a pause on new cryptocurrency connections amid grid pressures. This plan will accompany efforts to stabilize hydroelectric rates and manage the financial obligations of the province's state-operated energy company.

Finance Minister Adrien Sala, overseeing Manitoba Hydro, shared these insights during a legislative committee meeting on Thursday, emphasizing the government's desire for future energy expansions to remain under public ownership and expressing trust in Manitoba Hydro's governance to realize these goals.

This announcement was concurrent with Manitoba Hydro unveiling increased financial losses in its latest quarterly report. The utility anticipates a $190-million deficit for the fiscal year ending in March, marking a $29 million increase from its previous forecast and a significant deviation from an initial $450 million profit expectation announced last spring. Contributing factors to this financial downturn include reduced hydroelectric power generation due to drought conditions, diminished export revenues, and a mild fall season impacting heating demand.

The recent financial update aligns with a period of significant changes at Manitoba Hydro, initiated by the NDP government's board overhaul following its victory over the former Progressive Conservative administration in the October 3 election, and comes as wind projects are scrapped in Alberta across the broader Canadian energy landscape.

Subsequently, the NDP-aligned board discharged CEO Jay Grewal, who had advocated integrating wind energy from third-party sources to promptly address the province's escalating energy requirements. Grewal's approach, though not unprecedented, sought to offer a quicker, more cost-efficient alternative to constructing new Manitoba Hydro dams, highlighting an imminent energy production shortfall projected for as early as 2029.

The opposition Progressive Conservatives have criticized the NDP for dismissing the wind power initiative without presenting an alternate solution, and have emphasized the urgency of addressing the predicted energy gap.

In response, Sala said the government is in the early stages of policy formulation, and criticized the previous administration for its inaction on enhancing generation capacity during its tenure.

Manitoba Hydro has named Hal Turner as acting CEO while it searches for Grewal's successor. Turner informed the committee that the utility is still deliberating on its approach to new energy production and is exploring ways to curb rising demand.

Expressing optimism about collaborating with the new board, Turner is confident in finding a viable strategy to fulfill Manitoba's energy needs in a safe and affordable manner.

Additionally, the NDP's campaign pledge to freeze consumer rates for a year remains a priority, with Sala committing to implement this freeze before the next provincial election slated for 2027.

 

Related News


How ‘Virtual Power Plants’ Will Change The Future Of Electricity

Virtual Power Plants orchestrate distributed energy resources like rooftop solar, home batteries, and EVs to deliver grid services, demand response, peak shaving, and resilience, lowering costs while enhancing reliability across wholesale markets and local networks.

 

Key Points

Virtual Power Plants aggregate solar and batteries to provide grid services, cut peak costs, and boost reliability.

✅ Aggregates DERs via cloud to bid into wholesale markets

✅ Reduces peak demand, defers costly grid upgrades

✅ Enhances resilience vs outages, cyber risks, and wildfires

 

If “virtual” meetings can allow companies to gather without anyone being in the office, then remotely distributed solar panels and batteries can harness energy and act as “virtual power plants.” It is simply the orchestration of millions of dispersed assets within a smarter electricity infrastructure to manage the supply of electricity — power that can be redirected back to the grid and distributed to homes and businesses. 

The ultimate goal is to revamp the energy landscape, making it cleaner and more reliable. By using onsite generation such as rooftop solar and smart solar inverters in combination with battery storage, those services can reduce the network’s overall cost by deferring expensive infrastructure upgrades and by reducing the need to purchase cost-prohibitive peak power. 

“We expect virtual power plants, including aggregated home solar and batteries, to become more common and more impactful for energy consumers throughout the country in the coming years,” says Michael Sachdev, chief product officer for Sunrun Inc., a rooftop solar company, in an interview. “The growth of home solar and batteries will be most apparent in places where households have an immediate need for backup power, as they do in California, where grid reliability pressures have led utilities to turn off the electricity to reduce wildfire risk.”

Home battery adoption, such as Tesla Powerwall systems, is becoming commonplace in Hawaii and in New England, he adds, because those distributed assets are improving the efficiency of the electrical network. It is a trend that is reshaping the country’s energy generation and delivery system by relying more on clean onsite generation and less on fossil fuels.

Sunrun has recently formed a business partnership with AutoGrid, which will manage Sunrun’s fleet of rechargeable batteries. It is a cloud-based system that allows Sunrun to work with utilities to dispatch its “storage fleet” to optimize the economic results. AutoGrid compiles the data and makes AI-driven forecasts that enable it to pinpoint potential trouble spots. 

But a distributed energy system, or a virtual power plant, would have 200,000 subsystems: 200,000 batteries of 5 kilowatts each are the equivalent of one power plant with a capacity of 1,000 megawatts.
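The aggregation arithmetic behind that equivalence can be sketched directly, using only the figures quoted in the article:

```python
# How many small home batteries match one large power plant,
# per the figures quoted in the article.
battery_kw = 5           # power rating of one home battery (kW)
plant_mw = 1_000         # capacity of the plant being matched (MW)

batteries_needed = plant_mw * 1_000 // battery_kw  # convert MW to kW, then divide
print(batteries_needed)  # -> 200000
```

Note this matches nameplate power only; matching a plant's energy output would also depend on how long the batteries can sustain that discharge.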

“A virtual power plant acts as a generator,” says Amit Narayan, chief executive officer of AutoGrid, in an interview. “It is one of the top five innovations of the decade. If you look at Sunrun, 60% of every solar system it sells in the Bay Area is getting attached to a battery. The value proposition comes when you can aggregate these batteries and market them as a generation unit. The pool of individual assets may improve over time. But when you add these up, it is better than a large-scale plant. It is like going from mainframe computers to laptops.”

The AutoGrid executive goes on to say that centralized systems are less reliable than distributed resources. While one battery could falter, 200,000 of them that operate from remote locations will prove to be more durable, able to withstand cyber attacks and wildfires. Sunrun’s Sachdev adds that the ability to store energy in batteries and to move it to the grid on demand creates value not just for homes and businesses but also for the network as a whole.

The good news is that the trend worldwide is to make it easier for smaller distributed assets to get the same regulatory treatment as power plants. System operators have been obligated to call up those power supplies that are the most cost-effective and that can be easily dispatched. But now regulators are giving virtual power plants composed of solar and batteries the same treatment.

In the United States, for example, the Federal Energy Regulatory Commission issued an order in 2018 that allows storage resources to participate in wholesale markets — where electricity is bought directly from generators before selling that power to homes and businesses. Under the ruling, virtual power plants are paid the same as traditional power suppliers. A federal appeals court this month upheld the commission’s order, saying that it had the right to ensure “technological advances in energy storage are fully realized in the marketplace.” 

“In the past, we have used back-up generators,” notes AutoGrid’s Narayan. “As we move toward more automation, we are opening up the market to small assets such as battery storage and electric vehicles. As we deploy more of these assets, there will be increasing opportunities for virtual power plants.” 

Virtual power plants have the potential to change the energy horizon by harnessing locally-produced solar power and redistributing that to where it is most needed — all facilitated by cloud-based software that has a full panoramic view. At the same time, those smaller distributed assets can add more reliability and give consumers greater peace-of-mind — a dynamic that does, indeed, beef-up America’s generation and delivery network.

 

Related News


Offshore wind is set to become a $1 trillion business

Offshore wind power accelerates low-carbon electrification, leveraging floating turbines, high capacity factors, HVDC transmission, and hydrogen production to decarbonize grids, cut CO2, and deliver competitive, reliable renewable energy near demand centers.

 

Key Points

Offshore wind power uses offshore turbines to deliver low-carbon electricity with high capacity factors and falling costs.

✅ Sea-based wind farms with 40-50% capacity factors

✅ Floating turbines unlock deep-water, far-shore resources

✅ Enables hydrogen production and strengthens grid reliability

 

The need for affordable low-carbon technologies is greater than ever

Global energy-related CO2 emissions reached a historic high in 2018, driven by an increase in coal use in the power sector. Despite impressive gains for renewables, fossil fuels still account for nearly two-thirds of electricity generation, the same share as 20 years ago. There are signs of a shift, with increasing pledges to decarbonise economies and tackle air pollution, but action needs to accelerate to meet sustainable energy goals. As electrification of the global energy system continues, the need for clean and affordable low-carbon technologies to produce this electricity is more pressing than ever. This World Energy Outlook special report offers a deep dive on a technology that today has a total capacity of 23 GW (80% of it in Europe) and accounts for only 0.3% of global electricity generation, but has the potential to become a mainstay of the world's power supply. The report provides the most comprehensive analysis to date of the global outlook for offshore wind, its contributions to electricity systems and its role in clean energy transitions.

 

The offshore wind market has been gaining momentum

The global offshore wind market grew nearly 30% per year between 2010 and 2018, benefitting from rapid technology improvements. Over the next five years, about 150 new offshore wind projects are scheduled to be completed around the world, pointing to an increasing role for offshore wind in power supplies. Europe has fostered the technology's development, led by the United Kingdom, Germany and Denmark. The United Kingdom and Germany currently have the largest offshore wind capacity in operation, while Denmark produced 15% of its electricity from offshore wind in 2018. China added more capacity than any other country in 2018.

 

The untapped potential of offshore wind is vast

The best offshore wind sites could supply more than the total amount of electricity consumed worldwide today. And that would involve tapping only the sites close to shores. The IEA initiated a new geospatial analysis for this report to assess offshore wind technical potential country by country. The analysis was based on the latest global weather data on wind speed and quality while factoring in the newest turbine designs. Offshore wind's technical potential is 36 000 TWh per year for installations in water less than 60 metres deep and within 60 km from shore. Global electricity demand is currently 23 000 TWh. Moving further from shore and into deeper waters, floating turbines could unlock enough potential to meet the world's total electricity demand 11 times over in 2040. Our new geospatial analysis indicates that offshore wind alone could meet electricity demand several times over in a number of countries, including in Europe, the United States and Japan. The industry is adapting various floating foundation technologies that have already been proven in the oil and gas sector. The first projects are under development and look to prove the feasibility and cost-effectiveness of floating offshore wind technologies.
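A quick check of the multiple implied by those near-shore figures (both numbers as quoted above):

```python
# Near-shore technical potential vs. current global electricity demand,
# using the IEA figures quoted in the text.
near_shore_potential_twh = 36_000  # <60 m depth, <60 km from shore (TWh/yr)
global_demand_twh = 23_000         # current global demand (TWh/yr)

print(f"{near_shore_potential_twh / global_demand_twh:.2f}x")  # -> 1.57x
```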

 

Offshore wind's attributes are very promising for power systems

New offshore wind projects have capacity factors of 40-50%, as larger turbines and other technology improvements are helping to make the most of available wind resources. At these levels, offshore wind matches the capacity factors of gas- and coal-fired power plants in some regions – though offshore wind is not available at all times. Its capacity factors exceed those of onshore wind and are about double those of solar PV. Offshore wind output varies according to the strength of the wind, but its hourly variability is lower than that of solar PV. Offshore wind typically fluctuates within a narrower band, up to 20% from hour to hour, than solar PV, which varies up to 40%.
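A capacity factor converts to annual energy as capacity multiplied by hours in a year multiplied by CF. A minimal sketch, assuming a hypothetical 1 GW project at the 40-50% range quoted above:

```python
# Annual energy implied by a capacity factor.
# The 1 GW project size is an assumption for illustration only.
HOURS_PER_YEAR = 8_760
capacity_gw = 1.0

for cf in (0.40, 0.50):
    energy_twh = capacity_gw * HOURS_PER_YEAR * cf / 1_000  # GWh -> TWh
    print(f"CF {cf:.0%}: {energy_twh:.2f} TWh/yr")
```

By the same arithmetic, technologies with roughly half the capacity factor, such as solar PV in this comparison, yield roughly half the annual energy per installed gigawatt.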

Offshore wind's high capacity factors and lower variability make its system value comparable to baseload technologies, placing it in a category of its own – a variable baseload technology. Offshore wind can generate electricity during all hours of the day and tends to produce more electricity in winter months in Europe, the United States and China, as well as during the monsoon season in India. These characteristics mean that offshore wind's system value is generally higher than that of its onshore counterpart and more stable over time than that of solar PV. Offshore wind also contributes to electricity security: with its high availability and seasonality patterns, it is able to make a stronger contribution to system needs than other variable renewables. In doing so, offshore wind contributes to reducing CO2 and air pollutant emissions while also lowering the need for investment in dispatchable power plants. Offshore wind also has the advantage of avoiding many land use and social acceptance issues that other variable renewables are facing.

 

Offshore wind is on track to be a competitive source of electricity

Offshore wind is set to be competitive with fossil fuels within the next decade, as well as with other renewables including solar PV. The cost of offshore wind is declining and is set to fall further. Financing costs account for 35% to 50% of overall generation cost, and supportive policy frameworks are now enabling projects to secure low cost financing in Europe, with zero-subsidy tenders being awarded. Technology costs are also falling. The levelised cost of electricity produced by offshore wind is projected to decline by nearly 60% by 2040. Combined with its relatively high value to the system, this will make offshore wind one of the most competitive sources of electricity. In Europe, recent auctions indicate that offshore wind will soon beat new natural gas-fired capacity on cost and be on a par with solar PV and onshore wind. In China, offshore wind is set to become competitive with new coal-fired capacity around 2030 and be on par with solar PV and onshore wind. In the United States, recent project proposals indicate that offshore wind will soon be an affordable option, with potential to serve demand centres along the country's east coast.

Innovation is delivering deep cost reductions in offshore wind, and transmission costs will become increasingly important. The average upfront cost to build a 1 gigawatt offshore wind project, including transmission, was over $4 billion in 2018, but the cost is set to drop by more than 40% over the next decade. This overall decline is driven by a 60% reduction in the costs of turbines, foundations and their installation. Transmission accounts for around one-quarter of total offshore wind costs today, but its share in total costs is set to increase to about one-half as new projects move further from shore. Innovation in transmission, for example through work to expand the limits of direct current technologies, will be essential to support new projects without raising their overall costs.
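Those cost figures hang together: cutting the turbine, foundation and installation share (about three-quarters of today's cost) by 60% while holding transmission flat produces both the "more than 40%" overall drop and the rise in transmission's share toward one-half. A sketch under those stated assumptions:

```python
# Consistency check on the quoted cost figures for a 1 GW project.
total_bn = 4.0              # ~$4bn total in 2018, including transmission
transmission_share = 0.25   # transmission ~ one-quarter of total cost today
turbine_cut = 0.60          # projected cut to turbines, foundations, installation

transmission_bn = total_bn * transmission_share
rest_bn = total_bn * (1 - transmission_share)
future_total_bn = transmission_bn + rest_bn * (1 - turbine_cut)

print(f"overall drop: {1 - future_total_bn / total_bn:.0%}")           # -> 45%
print(f"transmission share: {transmission_bn / future_total_bn:.0%}")  # -> 45%
```

A 45% share rounds to the "about one-half" the text cites; in practice transmission costs would also shift as projects move further offshore.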

 

Offshore wind is set to become a $1 trillion business

Offshore wind power capacity is set to increase by at least 15-fold worldwide by 2040, becoming a $1 trillion business. Under current investment plans and policies, the global offshore wind market is set to expand by 13% per year, passing 20 GW of additions per year by 2030. This will require capital spending of $840 billion over the next two decades, almost matching that for natural gas-fired or coal-fired capacity. Achieving global climate and sustainability goals would require faster growth: capacity additions would need to approach 40 GW per year in the 2030s, pushing cumulative investment to over $1.2 trillion.
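A rough consistency check on those market-size figures (all values as quoted above):

```python
# Rough check on the article's offshore wind market-size figures.
current_gw = 23      # installed offshore wind capacity today
growth_factor = 15   # "at least 15-fold" growth by 2040
spend_bn = 840       # capital spending over two decades ($bn)

print(current_gw * growth_factor)  # -> 345 (GW by 2040, at the low end)
print(spend_bn / 20)               # -> 42.0 ($bn per year on average)
```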

The promising outlook for offshore wind is underpinned by policy support in an increasing number of regions. Several European North Seas countries – including the United Kingdom, Germany, the Netherlands and Denmark – have policy targets supporting offshore wind. Although a relative newcomer to the technology, China is quickly building up its offshore wind industry, aiming to develop a project pipeline of 10 GW by 2020. In the United States, state-level targets and federal incentives are set to kick-start the U.S. offshore wind surge in the coming years. Additionally, policy targets are in place and projects under development in Korea, Japan, Chinese Taipei and Viet Nam.

 The synergies between offshore wind and offshore oil and gas activities provide new market opportunities. Since offshore energy operations share technologies and elements of their supply chains, oil and gas companies started investing in offshore wind projects many years ago. We estimate that about 40% of the full lifetime costs of an offshore wind project, including construction and maintenance, have significant synergies with the offshore oil and gas sector. That translates into a market opportunity of $400 billion or more in Europe and China over the next two decades. The construction of foundations and subsea structures offers potential crossover business, as do practices related to the maintenance and inspection of platforms. In addition to these opportunities, offshore oil and gas platforms require electricity that is often supplied by gas turbines or diesel engines, but that could be provided by nearby wind farms, thereby reducing CO2 emissions, air pollutants and costs.

 

Offshore wind can accelerate clean energy transitions

Offshore wind can help drive energy transitions by decarbonising electricity and by producing low-carbon fuels. Over the next two decades, its expansion could avoid between 5 billion and 7 billion tonnes of CO2 emissions from the power sector globally, while also reducing air pollution and enhancing energy security by reducing reliance on imported fuels. The European Union is poised to continue leading the offshore wind industry in support of its climate goals: its offshore wind capacity is set to increase by at least fourfold by 2030. This growth puts offshore wind on track to become the European Union's largest source of electricity in the 2040s. Beyond electricity, offshore wind's high capacity factors and falling costs make it a good match to produce low-carbon hydrogen, a versatile product that could help decarbonise the buildings sector and some of the hardest to abate activities in industry and transport. For example, a 1 gigawatt offshore wind project could produce enough low-carbon hydrogen to heat about 250 000 homes. Rising demand for low-carbon hydrogen could also dramatically increase the market potential for offshore wind. Europe is looking to develop offshore "hubs" for producing electricity and clean hydrogen from offshore wind.

 

It's not all smooth sailing

Offshore wind faces several challenges that could slow its growth in established and emerging markets, but policy makers and regulators can clear the path ahead. Developing efficient supply chains is crucial for the offshore wind industry to deliver low-cost projects. Doing so is likely to call for multibillion-dollar investments in ever-larger support vessels and construction equipment. Such investment is especially difficult in the face of uncertainty. Governments can facilitate investment of this kind by establishing a long-term vision for offshore wind and by drawing on U.K. policy lessons to define the measures to be taken to help make that vision a reality. Long-term clarity would also enable effective system integration of offshore wind, including system planning to ensure reliability during periods of low wind availability.

The success of offshore wind depends on developing onshore grid infrastructure. Whether the responsibility for developing offshore transmission lies with project developers or transmission system operators, regulations should encourage efficient planning and design practices that support the long-term vision for offshore wind. Those regulations should recognise that the development of onshore grid infrastructure is essential to the efficient integration of power production from offshore wind. Without appropriate grid reinforcements and expansion, there is a risk of large amounts of offshore wind power going unused, and opportunities for further expansion could be stifled. Development could also be slowed by marine planning practices, regulations for awarding development rights and public acceptance issues.

The future of offshore wind looks bright but hinges on the right policies

The outlook for offshore wind is very positive as efforts to decarbonise and reduce local pollution accelerate. While offshore wind provides just 0.3% of global electricity supply today, it has vast potential around the world and an important role to play in the broader energy system. Offshore wind can drive down CO2 emissions and air pollutants from electricity generation. It can also do so in other sectors through the production of clean hydrogen and related fuels. The high system value of offshore wind offers advantages that make a strong case for its role alongside other renewables and low-carbon technologies. Government policies will continue to play a critical role in the future of offshore wind and the overall pace of clean energy transitions around the world.

 

Related News


Group to create Canadian cyber standards for electricity sector IoT devices

Canadian Industrial IoT Cybersecurity Standards aim to unify device security for utilities, smart grids, SCADA, and OT systems, aligning with NERC CIP, enabling certification, trust marks, compliance testing, and safer energy sector deployments.

 

Key Points

National standards to secure industrial IoT for utilities and grids, enabling certification and NERC CIP alignment.

✅ Aligns with NERC CIP and NIST frameworks for energy sector security

✅ Defines certification, testing tools, and a trusted device repository

✅ Enhances OT, SCADA, and smart grid resilience against cyber threats

 

The Canadian energy sector has been buying Internet-connected sensors for monitoring a range of activities in generating plants, distribution networks facing harsh weather risks and home smart meters for several years. However, so far industrial IoT device makers have been creating their own security standards for devices, leaving energy producers and utilities at their mercy.

The industry hopes to change that by creating national cybersecurity standards for industrial IoT devices, with the goal of improving its ability to predict, prevent, respond to and recover from cyber threats, such as emerging ransomware attacks across the grid.

To help, the federal government today announced an $818,000 grant to support a CIO Strategy Council project to oversee the setting of standards.

In an interview, council executive director Keith Jansa said the money will support a three-year effort that will include a set of cross-country meetings with industry, government, academics and interest groups to create the standards, tools to test devices against those standards, and the development of a repository of IoT-safe devices that companies can consult before making purchases.
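The tooling Jansa describes — testing devices against the standards and consulting a repository before purchase — can be pictured with a toy sketch. Everything here (the requirement names, the report format, the pass rule) is hypothetical, since the actual Canadian standard has not yet been written:

```python
# Hypothetical sketch of a standards-body test tool scoring an IoT device
# against a checklist of security requirements. The requirement names and
# the pass rule are illustrative, not the real (unpublished) standard.

REQUIREMENTS = {
    "unique_per_device_credentials": True,   # mandatory: no shared default passwords
    "signed_firmware_updates": True,         # mandatory: updates verified before install
    "encrypted_transport": True,             # mandatory: e.g., TLS on all network links
    "remote_attestation": False,             # optional in this sketch
}

def evaluate(device_report: dict) -> tuple[bool, list[str]]:
    """Return (compliant, list of failed mandatory requirements)."""
    failures = [
        name for name, mandatory in REQUIREMENTS.items()
        if mandatory and not device_report.get(name, False)
    ]
    return (len(failures) == 0, failures)

# A device that ships with shared default passwords fails the mandatory check:
report = {
    "unique_per_device_credentials": False,
    "signed_firmware_updates": True,
    "encrypted_transport": True,
}
ok, failed = evaluate(report)
print(ok, failed)  # → False ['unique_per_device_credentials']
```

A "trust mark" of the kind Jansa mentions would, in effect, certify that a device passed every mandatory check, and the repository would list devices whose `evaluate` result came back clean.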

“The challenge is there are a number of these devices that will be coming online over the next few years,” Jansa said. “IoT devices are designed for convenience and not for security, so how do you ensure that a technology an electricity utility secures is in fact safeguarded against cyber threats? Currently, there is no associated trust mark or certification that gives confidence associated with these devices.”

He also said the council will work with the North American Electric Reliability Corporation (NERC), which sets North American-wide utility safety procedural standards and informs efforts on protecting the power grid across jurisdictions. The industrial IoT standards will be product standards.

According to Robert Wong, vice-president and CIO of Toronto Hydro, all the big provincial utilities are required to adhere to NERC CIP standards, which have requirements for both cyber and physical security. Ontario is different from most provinces in that it has local distribution companies — like Toronto Hydro — which buy electricity in bulk and resell it to customers. These LDCs don’t own or operate critical infrastructure and therefore don’t have to follow the NERC CIP standards.

Electricity is considered around the world as one of a country’s critical national infrastructure. Threats to the grid can be used for ransom or by a country for political pressure. Ukraine had its power network knocked offline in 2015 and 2016 by what were believed to be Russian-linked attackers operating against utilities.

All the big provincial utilities operate “critical infrastructure” and are required to adhere to NERC CIP (critical infrastructure protection) standards, which have requirements for both cyber and physical security. They are audited regularly for compliance and can face hefty fines if they fail to meet the requirements. The LDCs in Ontario don’t own or operate “critical infrastructure” and therefore are not required to adopt NERC CIP standards (at least for now).

The CIO Strategy Council is a forum for chief information officers that is helping set standards in a number of areas. In January it announced a partnership with the Internet Society’s Canada Chapter to create standards of practice for IoT security for consumer devices. As part of the federal government’s updated national cybersecurity strategy it is also developing a national cybersecurity standard for small and medium-sized businesses. That strategy would allow SMBs to advertise to customers that they meet minimum security requirements.

“The security of Canadians and our critical infrastructure is paramount,” federal minister of natural resources Seamus O’Regan said in a statement with today’s announcement. “Cyber attacks are becoming more common and dangerous. That’s why we are supporting this innovative project to protect the Canadian electricity sector.”

The announcement was welcomed by Robert Wong, Toronto Hydro’s vice-president and CIO. “Any additional investment towards strengthening the safeguards against cyberattacks to Canada’s critical infrastructure is definitely good news.  From the perspective of the electricity sector, the convergence of IT and OT (operational technology) has been happening for some time now as the traditional electricity grid has been transforming into a Smart Grid with the introduction of smart meters, SCADA systems, electronic sensors and monitors, smart relays, intelligent automated switching capabilities, distributed energy resources, and storage technologies (batteries, flywheels, compressed air, etc.).

“In my experience, many OT device and system manufacturers and vendors are still lagging the traditional IT vendors in incorporating Security by Design philosophies and effective security features into their products. This, in turn, creates greater risks and challenges for utilities in protecting their critical infrastructure and ensuring a reliable supply of electricity to their customers.”

The Ontario Energy Board, which regulates the industry in the province, has led an initiative for all utilities to adopt the National Institute of Standards and Technology (NIST) Cybersecurity Framework, along with the ES-C2M2 maturity and Privacy by Design models, he noted. Toronto Hydro has been managing its cybersecurity practice in adherence to these standards, he said.

“Other jurisdictions, such as Israel, have invested heavily at a national level in developing their cybersecurity capabilities and are seen as global leaders. I am confident that, given the availability of talent, capabilities and resources in Canada (especially around the GTA), if we get strong support and leadership at the federal level we can emerge as a leader in this area as well.”

 


Jolting the brain's circuits with electricity is moving from radical to almost mainstream therapy

Brain stimulation is transforming neuromodulation, from TMS and DBS to closed-loop devices, targeting neural circuits for addiction, depression, Parkinson's, epilepsy, and chronic pain, powered by advanced imaging, AI analytics, and the NIH BRAIN Initiative.

 

Key Points

Brain stimulation uses pulses to modulate neural circuits, easing symptoms in depression, Parkinson's, and epilepsy.

✅ Noninvasive TMS and invasive DBS modulate specific brain circuits

✅ Closed-loop systems adapt stimulation via real-time biomarker detection

✅ Emerging uses: addiction, depression, Parkinson's, epilepsy, chronic pain

 

In June 2015, biology professor Colleen Hanlon went to a conference on drug dependence. As she met other researchers and wandered around a glitzy Phoenix resort’s conference rooms to learn about the latest work on therapies for drug and alcohol use disorders, she realized that out of the 730 posters, there were only two on brain stimulation as a potential treatment for addiction — both from her own lab at Wake Forest School of Medicine.

Just four years later, she would lead 76 researchers on four continents in writing a consensus article about brain stimulation as an innovative tool for addiction. And in 2020, the Food and Drug Administration approved a transcranial magnetic stimulation device to help patients quit smoking, a milestone for substance use disorders.

Brain stimulation is booming. Hanlon can attend entire conferences devoted to the study of what electrical currents do—including how targeted stimulation can improve short-term memory in older adults—to the intricate networks of highways and backroads that make up the brain’s circuitry. This expanding field of research is slowly revealing truths of the brain: how it works, how it malfunctions, and how electrical impulses, precisely targeted and controlled, might be used to treat psychiatric and neurological disorders.

In the last half-dozen years, researchers have launched investigations into how different forms of neuromodulation affect addiction, depression, loss-of-control eating, tremor, chronic pain, obsessive compulsive disorder, Parkinson’s disease, epilepsy, and more. Early studies have shown subtle electrical jolts to certain brain regions could disrupt circuit abnormalities — the miscommunications — that are thought to underlie many brain diseases, and help ease symptoms that persist despite conventional treatments.

The National Institutes of Health’s massive BRAIN Initiative put circuits front and center, distributing $2.4 billion to researchers since 2013 to devise and use new tools to observe interactions between brain cells and circuits. That, in turn, has kindled interest from the private sector. Among the advances that have enhanced our understanding of how distant parts of the brain talk with one another are new imaging technology and the use of machine learning to interpret complex brain signals and analyze what happens when circuits go haywire.

Still, the field is in its infancy, and even therapies that have been approved for use in patients with, for example, Parkinson’s disease or epilepsy help only a minority of patients, and expectations can outpace evidence. “If it was the Bible, it would be the first chapter of Genesis,” said Michael Okun, executive director of the Norman Fixel Institute for Neurological Diseases at University of Florida Health.

As brain stimulation evolves, researchers face daunting hurdles, and not just scientific ones. How will brain stimulation become accessible to all the patients who need it, given how expensive and invasive some treatments are? Proving to the FDA that brain stimulation works, and does so safely, is complicated and expensive. Even with a swell of scientific momentum and an influx of funding, the agency has so far cleared brain stimulation for only a handful of limited conditions. Persuading insurers to cover the treatments is another challenge altogether. And outside the lab, researchers are debating nascent issues, such as the ethics of mind control, the privacy of a person’s brain data, and how best to involve patients in the study of the human brain’s far-flung regions.

Neurologist Martha Morrell is optimistic about the future of brain stimulation. She remembers the shocked reactions of her colleagues in 2004 when she left full-time teaching at Stanford (she still has a faculty appointment as a clinical professor of neurology) to direct clinical trials at NeuroPace, then a young company making neurostimulator systems to potentially treat epilepsy patients.

Related: Once a last resort, this pain therapy is getting a new life amid the opioid crisis
“When I started working on this, everybody thought I was insane,” said Morrell. Nearly 20 years in, she sees a parallel between the story of jolting the brain’s circuitry and that of early implantable cardiac devices, such as pacemakers and defibrillators, which initially “were used as a last option, where all other medications have failed.” Now, “the field of cardiology is very comfortable incorporating electrical therapy, device therapy, into routine care. And I think that’s really where we’re going with neurology as well.”


Reaching a ‘slope of enlightenment’
Parkinson’s is, in some ways, an elder in the world of modern brain stimulation, and it shows the potential as well as the limitations of the technology. Surgeons have been implanting electrodes deep in the brains of Parkinson’s patients since the late 1990s, and in people with more advanced disease since the early 2000s.

In that time, it’s gone through the “hype cycle,” said Okun, the national medical adviser to the Parkinson’s Foundation since 2006. Feverish excitement and overinflated expectations have given way to reality, bringing scientists to a “slope of enlightenment,” he said. They have found deep brain stimulation to be very helpful for some patients with Parkinson’s, rendering them almost symptom-free by calming the shaking and tremors that medications couldn’t. But it doesn’t stop the progression of the disease, or resolve some of the problems patients with advanced Parkinson’s have walking, talking, and thinking.

In 2015, the same year Hanlon found only her lab’s research on brain stimulation at the addiction conference, Kevin O’Neill watched one finger on his left hand start doing something “funky.” One finger twitched, then two, then his left arm started tingling and a feeling appeared in his right leg, like it was about to shake but wouldn’t — a tremor.

“I was assuming it was anxiety,” O’Neill, 62, told STAT. He had struggled with anxiety before, and he had endured a stressful year: a separation, selling his home, starting a new job at a law firm in California’s Bay Area. But a year after his symptoms first began, O’Neill was diagnosed with Parkinson’s.

Related: Psychiatric shock therapy, long controversial, may face fresh restrictions
Doctors prescribed him pills that promote the release of dopamine, to offset the death of brain cells that produce this messenger molecule in circuits that control movement. But he took them infrequently because he worried about insomnia as a side effect. Walking became difficult — “I had to kind of think my left leg into moving” — and the labor lawyer found it hard to give presentations and travel to clients’ offices.

A former actor with an outgoing personality, he developed social anxiety and didn’t tell his bosses about his diagnosis for three years, and wouldn’t have, if not for two workdays in summer 2018 when his tremors were severe and obvious.

O’Neill’s tremors are all but gone since he began deep brain stimulation last May, though his left arm shakes when he feels tense.

It was during that period that he learned about deep brain stimulation, at a support group for Parkinson’s patients. “I thought, ‘I will never let anybody fuss with my brain. I’m not going to be a candidate for that,’” he recalled. “It felt like mad scientist science fiction. Like, are you kidding me?”

But over time, the idea became less radical, as O’Neill spoke to DBS patients and doctors and did his own research, and as his symptoms worsened. He decided to go for it. Last May, doctors at the University of California, San Francisco surgically placed three metal leads into his brain, connected by thin cords to two implants in his chest, just near the clavicles. A month later, he went into the lab and researchers turned the device on.

“That was a revelation that day,” he said. “You immediately — literally, immediately — feel the efficacy of these things. … You go from fully symptomatic to non-symptomatic in seconds.”

When his nephew pulled up to the curb to pick him up, O’Neill started dancing, and his nephew teared up. The following day, O’Neill couldn’t wait to get out of bed and go out, even if it was just to pick up his car from the repair shop.

In the year since, O’Neill’s walking has gone from “awkward and painful” to much improved, and his tremors are all but gone. When he is extra frazzled, like while renovating and moving into his new house overlooking the hills of Marin County, he feels tense and his left arm shakes and he worries the DBS is “failing,” but generally he returns to a comfortable, tremor-free baseline.

O’Neill worried about the effects of DBS wearing off but, for now, he can think “in terms of decades, instead of years or months,” he recalled his neurologist telling him. “The fact that I can put away that worry was the big thing.”

He’s just one patient, though. The brain has regions that are mostly uniform across all people. The functions of those regions also tend to be the same. But researchers suspect that how brain regions interact with one another — who mingles with whom, and what conversation they have — and how those mixes and matches cause complex diseases varies from person to person. So brain stimulation looks different for each patient.

Related: New study revives a Mozart sonata as a potential epilepsy therapy
Each case of Parkinson’s manifests slightly differently, and that’s a bit of knowledge that applies to many other diseases, said Okun, who organized the nine-year-old Deep Brain Stimulation Think Tank, where leading researchers convene, review papers, and publish reports on the field’s progress each year.

“I think we’re all collectively coming to the realization that these diseases are not one-size-fits-all,” he said. “We have to really begin to rethink the entire infrastructure, the schema, the framework we start with.”

Brain stimulation is also used frequently to treat people with common forms of epilepsy, and has reduced the number of seizures or improved other symptoms in many patients. Researchers have also been able to collect high-quality data about what happens in the brain during a seizure — including identifying differences between epilepsy types. Still, only about 15% of patients are symptom-free after treatment, according to Robert Gross, a neurosurgery professor at Emory University in Atlanta.

“And that’s a critical difference for people with epilepsy. Because people who are symptom-free can drive,” which means they can get to a job in a place like Georgia, where there is little public transit, he said. So taking neuromodulation “from good to great,” is imperative, Gross said.


Renaissance for an ancient idea
Recent advances are bringing about what Gross sees as “almost a renaissance period” for brain stimulation, though the ideas that undergird the technology are millennia old. Neuromodulation goes back to at least ancient Egypt and Greece, when electrical shocks from a ray, called the “torpedo fish,” were recommended as a treatment for headache and gout. Over the centuries, the fish zaps gave way to doctors burning holes into the brains of patients. Those “lesions” worked, somehow, but nobody could explain why they alleviated some patients’ symptoms, Okun said.

Perhaps the clearest predecessor to today’s technology is electroconvulsive therapy (ECT), which in a rudimentary and dangerous way began being used on patients with depression roughly 100 years ago, said Nolan Williams, director of the Brain Stimulation Lab at Stanford University.

Related: A new index measures the extent and depth of addiction stigma
More modern forms of brain stimulation came about in the United States in the mid-20th century. A common, noninvasive approach is transcranial magnetic stimulation, which involves placing an electromagnetic coil on the scalp to transmit a current into the outermost layer of the brain. Vagus nerve stimulation (VNS), used to treat epilepsy, zaps a nerve that contributes to some seizures.

The most invasive option, deep brain stimulation, involves implanting in the skull a device attached to electrodes embedded in deep brain regions, such as the amygdala, that can’t be reached with other stimulation devices. In 1997, the FDA gave its first green light to deep brain stimulation as a treatment for tremor, and then for Parkinson’s in 2002 and the movement disorder dystonia in 2003.

Even as these treatments were cleared for patients, though, what was happening in the brain remained elusive. But advanced imaging tools now let researchers peer into the brain and map out networks — a recent breakthrough that researchers say has propelled the field of brain stimulation forward as much as increased funding has. Imaging of both human brains and animal models has helped researchers identify the neuroanatomy of diseases, target brain regions with more specificity, and watch what happens after electrical stimulation.

Another key step has been the shift from open-loop stimulation — a constant stream of electricity — to closed-loop stimulation that delivers targeted, brief jolts in response to a symptom trigger. To make use of the technology, labs need people to develop artificial intelligence tools to interpret the large data sets a brain implant generates, and to tailor devices based on that information.

“We’ve needed to learn how to be data scientists,” Morrell said.
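The open-loop versus closed-loop distinction described above can be sketched in a few lines. This is an illustrative toy, assuming a single scalar biomarker and a fixed threshold; real closed-loop devices use learned, patient-specific detectors:

```python
# Toy contrast between open-loop and closed-loop stimulation schedules.
# The biomarker trace, threshold, and time units are illustrative
# placeholders, not values from any real device.

def open_loop_schedule(duration_s: int, period_s: int = 1) -> list[int]:
    """Open loop: stimulate on a fixed schedule, regardless of brain state."""
    return list(range(0, duration_s, period_s))

def closed_loop_schedule(biomarker: list[float], threshold: float) -> list[int]:
    """Closed loop: stimulate only at time steps where the recorded
    biomarker crosses a symptom-predicting threshold."""
    return [t for t, value in enumerate(biomarker) if value > threshold]

# Simulated biomarker trace: mostly quiet, with two brief abnormal episodes.
trace = [0.1, 0.2, 0.9, 0.1, 0.1, 0.8, 0.95, 0.2]
print(open_loop_schedule(8))             # → [0, 1, 2, 3, 4, 5, 6, 7]
print(closed_loop_schedule(trace, 0.5))  # → [2, 5, 6]
```

The closed-loop version delivers far fewer pulses, which is the point: stimulation arrives only when the detector flags trouble.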

Affinity groups, like the NIH-funded Open Mind Consortium, have formed to fill that gap. Philip Starr, a neurosurgeon and developer of implantable brain devices at the University of California at San Francisco Health system, leads the effort to teach physicians how to program closed-loop devices, and works to create ethical standards for their use. “There’s been extraordinary innovation after 20 years of no innovation,” he said.

The BRAIN Initiative has been critical, several researchers told STAT. “It’s been a godsend to us,” Gross said. The NIH’s Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative was launched in 2013 during the Obama administration with a $50 million budget. BRAIN now spends over $500 million per year. Since its creation, BRAIN has given over 1,100 awards, according to NIH data. Part of the initiative’s purpose is to pair up researchers with medical technology companies that provide human-grade stimulation devices to the investigators. Nearly three dozen projects have been funded through the investigator-devicemaker partnership program and through one focused on new implantable devices for first-in-human use, according to Nick Langhals, who leads work on neurological disorders at the initiative.

The more BRAIN invests, the more research is spawned. “We learn more about what circuits are involved … which then feeds back into new and more innovative projects,” he said.

Many BRAIN projects are still in early stages, finishing enrollment or small feasibility studies, Langhals said. Over the next couple of years, scientists will begin to see some of the fruits of their labor, which could lead to larger clinical trials, or to companies developing more refined brain stimulation implants, Langhals said.

Money from the National Institute of Mental Health, as well as the NIH’s Helping to End Addiction Long-term (HEAL) initiative, has similarly sweetened the appeal of brain stimulation, both for researchers and industry. “A critical mass” of companies interested in neuromodulation technology has mushroomed where, for two decades, just a handful of companies stood, Starr said.

More and more, pharmaceutical and digital health companies are looking at brain stimulation devices “as possible products for their future,” said Linda Carpenter, director of the Butler Hospital TMS Clinic and Neuromodulation Research Facility.


‘Psychiatry 3.0’
The experience with using brain stimulation to stop tremors and seizures inspired psychiatrists to begin exploring its use as a potentially powerful therapy for healing, or even getting ahead of, mental illness.

In 2008, the FDA approved TMS for patients with major depression who had tried, and not gotten relief from, drug therapy. “That kind of opened the door for all of us,” said Hanlon, a professor and researcher at the Center for Research on Substance Use and Addiction at Wake Forest School of Medicine. The last decade saw a surge of research into how TMS could be used to reset malfunctioning brain circuits involved in anxiety, depression, obsessive-compulsive disorder, and other conditions.

“We’re certainly entering into what a lot of people are calling psychiatry 3.0,” Stanford’s Williams said. “Whereas the first iteration was Freud and all that business, the second one was the psychopharmacology boom, and this third one is this bit around circuits and stimulation.”

Drugs alleviate some patients’ symptoms while simultaneously failing to help many others, but psychopharmacology clearly showed “there’s definitely a biology to this problem,” Williams said — a biology that in some cases may be more amenable to a brain stimulation.

Related: Largest psilocybin trial finds the psychedelic is effective in treating serious depression
The exact mechanics of what happens between cells when brain circuits … well, short-circuit, is unclear. Researchers are getting closer to finding biomarkers that warn of an incoming depressive episode, or wave of anxiety, or loss of impulse control. Those brain signatures could be different for every patient. If researchers can find molecular biomarkers for psychiatric disorders — and find ways to preempt those symptoms by shocking particular brain regions — that would reshape the field, Williams said.

Not only would disease-specific markers help clinicians diagnose people, but they could help chip away at the stigma that paints mental illness as a personal or moral failing instead of a disease. That’s what happened for epilepsy in the 1960s, when scientific findings nudged the general public toward a deeper understanding of why seizures happen, and it’s “the same trajectory” Williams said he sees for depression.

His research at the Stanford lab also includes work on suicide, and obsessive-compulsive disorder, which the FDA said in 2018 could be treated using noninvasive TMS. Williams considers brain stimulation, with its instantaneity, to be a potential breakthrough for urgent psychiatric situations. Doctors know what to do when a patient is rushed into the emergency room with a heart attack or a stroke, but there is no immediate treatment for psychiatric emergencies, he said. Williams wonders: What if, in the future, a suicidal patient could receive TMS in the emergency room and be quickly pulled out of their depressive mental spiral?

Researchers are also actively investigating the brain biology of addiction. In August 2020, the FDA approved TMS for smoking cessation, the first such OK for a substance use disorder, which is “really exciting,” Hanlon said. Although there is some nuance when comparing substance use disorders, a primal mechanism generally defines addiction: the eternal competition between “top-down” executive control functions and “bottom-up” cravings. It’s the same process that is at work when one is deciding whether to eat another cookie or abstain — just exacerbated.

Hanlon is trying to figure out if the stop and go circuits are in the same place for all people, and whether neuromodulation should be used to strengthen top-down control or weaken bottom-up cravings. Just as brain stimulation can be used to disrupt cellular misfiring, it could also be a tool for reinforcing helpful brain functions, or for giving the addicted brain what it wants in order to curb substance use.

Evidence suggests many people with schizophrenia smoke cigarettes (a leading cause of early death for this population) because nicotine reduces the “hyperconnectivity” that characterizes the brains of people with the disease, said Heather Ward, a research fellow at Boston’s Beth Israel Deaconess Medical Center. She suspects TMS could mimic that effect, and therefore reduce cravings and some symptoms of the disease, and she hopes to prove that in a pilot study that is now enrolling patients.

If the scientific evidence proves out, clinicians say brain stimulation could be used alongside behavioral therapy and drug-based therapy to treat substance use disorders. “In the end, we’re going to need all three to help people stay sober,” Hanlon said. “We’re adding another tool to the physician’s toolbox.”

Decoding the mysteries of pain
A favorable outcome to the ongoing research, one that would fling the doors to brain stimulation wide open for patients with myriad disorders, is far from guaranteed. Chronic pain researchers know that firsthand.

Chronic pain, among the most mysterious and hard-to-study medical phenomena, was the first use for which the FDA approved deep brain stimulation, said Prasad Shirvalkar, an assistant professor of anesthesiology at UCSF. But when studies didn’t pan out after a year, the FDA retracted its approval.

Shirvalkar is working with Starr and neurosurgeon Edward Chang on a profoundly complex problem: “decoding pain in the brain states, which has never been done,” as Starr told STAT.

Part of the difficulty of studying pain is that there is no objective way to measure it. Much of what we know about pain is from rudimentary surveys that ask patients to rate how much they’re hurting, on a scale from zero to 10.

Using implantable brain stimulation devices, the researchers ask patients for a 0-to-10 rating of their pain while recording up-and-down cycles of activity in the brain. They then use machine learning to compare the two streams of information and see what brain activity correlates with a patient’s subjective pain experience. Implantable devices let researchers collect data over weeks and months, instead of basing findings on small snippets of information, allowing for a much richer analysis.
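As a toy version of that correlation step, one can fit a line between a single recorded feature and the subjective ratings. The data and the "band power" feature here are fabricated for illustration; real analyses use far richer features and proper machine-learning models:

```python
# Toy sketch of correlating a recorded brain-activity feature with a
# patient's 0-to-10 pain ratings. All numbers below are fabricated.

def fit_line(x: list[float], y: list[float]) -> tuple[float, float]:
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Fabricated recordings: a band-power feature tracking reported pain.
band_power  = [0.2, 0.5, 0.9, 1.4, 1.8, 2.3]
pain_rating = [1.0, 2.0, 4.0, 6.0, 7.0, 9.0]

slope, intercept = fit_line(band_power, pain_rating)
predicted = slope * 1.0 + intercept  # model's rating at band power 1.0
```

A positive slope would suggest the feature rises with subjective pain; the long recordings that implants allow make such fits far more trustworthy than ones based on brief lab sessions.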

 
