AREVA aligns with Microsoft for smarter grid solutions

By Electricity Forum


AREVA's Transmission and Distribution (T&D) division announced the extension of its three-year collaboration with Microsoft to include the development of Smarter Grid Management solutions, helping the worldwide power industry efficiently and reliably meet future global demand for power.

AREVA and Microsoft's joint efforts have already led to the delivery of key capabilities to AREVA customers. Through this new agreement, both companies will work together towards improving cyber security and integration of AREVA's applications with Microsoft Office tools and enterprise business processes. AREVA's T&D division sees Microsoft technology as a strategic facilitator for the development and deployment of smart grid solutions.

AREVA's T&D Automation division is a recognized leader in the Smart Grid application field, with multiple solutions already in use by major utilities worldwide and a number one position in Energy Management Systems in numerous regions, including North America, India and the Middle East. The company's initiatives in the smart grid arena are delivering tools and applications for smarter dispatch, integrated distribution management, and demand response. As a result, power companies will see fewer and shorter outages, and faster service restoration times.

New Smarter Grid capabilities will include the ability to manage distributed generation and a variety of renewable sources of energy. AREVA demonstrated an Integrated Distribution Management solution based on the Microsoft platform at major exhibitions in India and the US (GridTech and DistribuTECH) earlier this year.

"The combination of AREVA and Microsoft technologies will help utilities address the need to better manage the growing around-the-clock global demand for energy," said Brian Scott, Vice President, Microsoft Worldwide Industry. "We share a vision of the innovation needed to create the smart energy ecosystem that will enhance both transmission and distribution grid management and improve operational efficiency across the utility value chain."

"The AREVA Smarter Grid solution will help transform the current structure of the electric grid into a more intelligent system that is reliable, stable and energy-efficient. Partnering with Microsoft, and indeed integrating their best-in-class platform into the heart of our offer, is a key element of our effort to make smarter grids a reality," said Jean-Michel Cornille, Executive Vice President, AREVA T&D Automation.

Related News

BloombergNEF: World offshore wind costs 'drop 32 per cent'

Global Renewable LCOE Trends reveal offshore wind costs down 32%, with 10MW turbines, lower CAPEX and OPEX, and parity for solar PV and onshore wind in Europe, China, and California, per BloombergNEF analysis.

 

Key Points

Benchmarks showing falling LCOE for offshore wind, onshore wind, and solar PV, driven by larger turbines and lower CAPEX

✅ Offshore wind LCOE $78/MWh; $53-64/MWh in DK/NL excl. transmission

✅ Onshore wind $47/MWh; solar PV $51/MWh, best $26-36/MWh

✅ Cost drivers: 10MW turbines, lower CAPEX/OPEX, weak China demand

 

World offshore wind costs have fallen 32% from just a year ago and 12% compared with the first half of 2019, according to analysis from BloombergNEF (BNEF).

In its latest Levelized Cost of Electricity (LCOE) Update, BloombergNEF said its current global benchmark LCOE estimate for offshore wind is $78 a megawatt-hour.
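
For context, a levelized cost spreads a project's lifetime costs over its lifetime generation, with both discounted back to today. The short sketch below illustrates that arithmetic only; the inputs (capital cost, operating cost, capacity factor, discount rate) are hypothetical round numbers, not BloombergNEF's data or model.

def lcoe(capex_per_mw, opex_per_mw_yr, capacity_factor, lifetime_yrs, discount_rate):
    """Levelized cost in $/MWh: discounted lifetime costs / discounted lifetime MWh."""
    annual_mwh = 8760 * capacity_factor      # MWh generated per MW each year
    costs = capex_per_mw                     # year-0 capital spend
    energy = 0.0
    for year in range(1, lifetime_yrs + 1):
        d = (1 + discount_rate) ** year      # discount factor for that year
        costs += opex_per_mw_yr / d
        energy += annual_mwh / d
    return costs / energy

# Hypothetical offshore wind project: $3.0m/MW CAPEX, $70k/MW-yr OPEX,
# 45% capacity factor, 25-year life, 7% discount rate -> roughly $83/MWh
print(round(lcoe(3_000_000, 70_000, 0.45, 25, 0.07), 1))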

“New offshore wind projects throughout Europe, including the UK's build-out, now deploy turbines with power ratings up to 10MW, unlocking CAPEX and OPEX savings,” BloombergNEF said.

In Denmark and the Netherlands, it expects the most recent projects financed to achieve $53-64/MWh excluding transmission.

New solar and onshore wind projects have reached parity with average wholesale power prices in California and parts of Europe, while in China levelised costs are below the benchmark average regulated coal price, according to BloombergNEF.

The company's global benchmark levelized cost figures for onshore wind and PV projects financed in the last six months are $47 and $51 a megawatt-hour respectively, down 6% and 11% compared with the first half of 2019, underscoring that renewables are now the cheapest new electricity option in many regions.

BloombergNEF said for wind this is mainly down to a fall in the price of turbines – 7% lower on average globally compared with the end of 2018.

In China, the world’s largest solar market, the CAPEX of utility-scale PV plants has dropped 11% in the last six months, reaching $0.57m per MW.

“Weak demand for new plants in China has left developers and engineering, procurement and construction firms eager for business, and this has put pressure on CAPEX,” BloombergNEF said.

It added that the cheapest PV projects financed recently – in India, Chile and Australia – are estimated to achieve an LCOE of $27-36/MWh, assuming competitive returns for their equity investors.

Best-in-class onshore wind farms in Brazil, India, Mexico and Texas can reach levelized costs as low as $26-31/MWh already, the research said.

Programs such as the World Bank wind program are helping developing countries accelerate wind deployment as costs continue to drop.

BloombergNEF associate in the energy economics team Tifenn Brandily said: “This is a three-stage process. In phase one, new solar and wind get cheaper than new gas and coal plants on a cost-of-energy basis.

“In phase two, renewables reach parity with power prices. In phase three, they become even cheaper than running existing thermal plants.

“Our analysis shows that phase one has now been reached for two-thirds of the global population.

“Phase two started with California, China and parts of Europe. We expect phase three to be reached on a global scale by 2030.

“As this all plays out, thermal power plants will increasingly be relegated to a balancing role, looking for opportunities to generate when the sun doesn’t shine or the wind doesn’t blow.”

 


Smart grid and system improvements help avoid more than 500,000 outages over the summer

ComEd Smart Grid Reliability drives outage reduction across Illinois, leveraging smart switches, grid modernization, and peak demand programs to keep customers powered, improve power quality, and enhance energy savings during extreme weather and severe storms.

 

Key Points

ComEd's smart grid performance, cutting outages and improving power quality to enhance reliability and customer savings.

✅ Smart switches reroute power to avoid customer interruptions

✅ Fewer outages during extreme weather across northern Illinois

✅ Peak Time Savings rewards for reduced peak demand usage

 

While the summer of 2019 set records for heat and brought severe storms, ComEd customers stayed cool thanks to record-setting reliability during the season. Smart grid investments over the last seven years helped set records in key reliability measurements, including outage-frequency metrics, and smart switches that reroute power around potential problem areas avoided more than 538,000 customer interruptions from June to August.

"In a summer where we were challenged by extreme weather, we saw our smart grid investments and our people continue to deliver the highest levels of reliability, backed by extensive disaster planning across utilities, for the families and businesses we serve," said Joe Dominguez, CEO of ComEd. "We're proud to deliver the most affordable, cleanest and, as we demonstrated this summer, most reliable energy to our customers. I want to thank our 6,000 employees who work around the clock in often challenging conditions to power our communities."

ComEd has avoided more than 13 million customer interruptions since 2012, due in part to smart grid and system improvements. The avoided outages have resulted in $2.4 billion in estimated savings to society. In addition to keeping energy flowing for residents, strong power reliability continues to help persuade industrial and commercial companies to expand in northern Illinois and Chicago. The GridWise Alliance recently recognized Illinois as the No. 2 state in the nation for its smart grid implementation.

"Our smart grid investments has vastly improved the infrastructure of our system," said Terry Donnelly, ComEd president and chief operating officer. "We review the system and our operations continually to make sure we're investing in areas that benefit the greatest number of customers, and to prepare for public-health emergencies as well. On a daily basis and during storms or to reduce wildfire risk when necessary, our customers are seeing fewer and fewer interruptions to their lives and businesses."

ComEd customers also set records for energy savings this summer. Through its Peak Time Savings program and other energy-efficiency programs offered by utilities, ComEd empowered nearly 300,000 families and individuals to lower their bills by a total of more than $4 million this summer for voluntarily reducing their energy use during times of peak demand. Since the Peak Time Savings program launched in 2015, participating customers have earned a total of more than $10 million in bill credits.

 


Purdue: As Ransomware Attacks Increase, New Algorithm May Help Prevent Power Blackouts

Infrastructure Security Algorithm prioritizes cyber defense for power grids and critical infrastructure, mitigating ransomware, blackout risks, and cascading failures by guiding utilities, regulators, and cyber insurers on optimal security investment allocation.

 

Key Points

An algorithm that optimizes security spending to cut ransomware and blackout risks across critical infrastructure.

✅ Guides utilities on optimal security allocation

✅ Uses incentives to correct human risk biases

✅ Prioritizes assets to prevent cascading outages

 

Millions of people could suddenly lose electricity if a ransomware attack just slightly tweaked energy flow onto the U.S. power grid, as past US utility intrusions have shown.

No single power utility company has enough resources to protect the entire grid, but maybe all 3,000 of the grid's utilities could fill in the most crucial security gaps if there were a map showing where to prioritize their security investments.

Purdue University researchers have developed an algorithm to create that map. Using this tool, regulatory authorities or cyber insurance companies could establish a framework for protecting the U.S. power grid that guides the security investments of power utility companies to parts of the grid at greatest risk of causing a blackout if hacked.

Power grids are a type of critical infrastructure, which is any network - whether physical like water systems or virtual like health care record keeping - considered essential to a country's function and safety. The biggest ransomware attacks in history have happened in the past year, affecting most sectors of U.S. critical infrastructure, from grain distribution systems in the food and agriculture sector to the Colonial Pipeline, which carries fuel throughout the East Coast, and prompting increased military preparation for grid hacks.

With this trend in mind, Purdue researchers evaluated the algorithm in the context of various types of critical infrastructure in addition to the power sector, including electricity-sector IoT devices that interface with grid operations. The goal is that the algorithm would help secure any large and complex infrastructure system against cyberattacks.

"Multiple companies own different parts of infrastructure. When ransomware hits, it affects lots of different pieces of technology owned by different providers, so that's what makes ransomware a problem at the state, national and even global level," said Saurabh Bagchi, a professor in the Elmore Family School of Electrical and Computer Engineering and Center for Education and Research in Information Assurance and Security at Purdue. "When you are investing security money on large-scale infrastructures, bad investment decisions can mean your power grid goes out, or your telecommunications network goes out for a few days."

Protecting infrastructure from hacks by improving security investment decisions

The researchers tested the algorithm in simulations of previously reported hacks to four infrastructure systems: a smart grid, industrial control system, e-commerce platform and web-based telecommunications network. They found that using the algorithm results in the optimal allocation of security investments for reducing the impact of a cyberattack.

The team's findings appear in a paper presented at this year's IEEE Symposium on Security and Privacy, the premier conference in the area of computer security. The team comprises Purdue professors Shreyas Sundaram and Timothy Cason and former PhD students Mustafa Abdallah and Daniel Woods.

"No one has an infinite security budget. You must decide how much to invest in each of your assets so that you gain a bump in the security of the overall system," Bagchi said.

The power grid, for example, is so interconnected that the security decisions of one power utility company can greatly impact the operations of other electrical plants. If the computers controlling one area's generators don't have adequate security protection, as seen when Russian hackers accessed control rooms at U.S. utilities, then a hack to those computers would disrupt energy flow to another area's generators, forcing them to shut down.

Since not all of the grid's utilities have the same security budget, it can be hard to ensure that critical points of entry to the grid's controls get the most investment in security protection.

The algorithm that Purdue researchers developed would incentivize each security decision maker to allocate security investments in a way that limits the cumulative damage a ransomware attack could cause. An attack on a single generator, for instance, would have less impact than an attack on the controls for a network of generators, which sophisticated grid-disruption malware can target at scale, so the algorithm directs investment toward protecting those shared controls rather than any single generator.

Building an algorithm that considers the effects of human behavior

Bagchi's research shows how to increase cybersecurity in ways that address the interconnected nature of critical infrastructure but don't require an overhaul of the entire infrastructure system to be implemented.

As director of Purdue's Center for Resilient Infrastructures, Systems, and Processes, Bagchi has worked with the U.S. Department of Defense, Northrop Grumman Corp., Intel Corp., Adobe Inc., Google LLC and IBM Corp. on adopting solutions from his research. Bagchi's work has revealed the advantages of establishing an automatic response to attacks, and analyses such as Symantec's Dragonfly report have highlighted energy-sector risks, spurring key innovations against ransomware threats, such as more effective ways to make decisions about backing up data.

There's a compelling reason why incentivizing good security decisions would work, Bagchi said. He and his team designed the algorithm based on findings from the field of behavioral economics, which studies how people make decisions with money.

"Before our work, not much computer security research had been done on how behaviors and biases affect the best defense mechanisms in a system. That's partly because humans are terrible at evaluating risk and an algorithm doesn't have any human biases," Bagchi said. "But for any system of reasonable complexity, decisions about security investments are almost always made with humans in the loop. For our algorithm, we explicitly consider the fact that different participants in an infrastructure system have different biases."

To develop the algorithm, Bagchi's team started by playing a game. They ran a series of experiments analyzing how groups of students chose to protect fake assets with fake investments. As in past studies in behavioral economics, they found that most study participants guessed poorly which assets were the most valuable and should be protected from security attacks. Most study participants also tended to spread out their investments instead of allocating them to one asset even when they were told which asset is the most vulnerable to an attack.

Using these findings, the researchers designed an algorithm that could work two ways: Either security decision makers pay a tax or fine when they make decisions that are less than optimal for the overall security of the system, or security decision makers receive a payment for investing in the most optimal manner.
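
As a rough illustration of how such a mechanism could operate (a hypothetical sketch, not the Purdue team's published algorithm), one could compare an operator's chosen security spend against a system-optimal allocation, charge a fine that grows with the misallocation, and pay a small reward when the two match:

def settlement(chosen, optimal, rate=0.5):
    """Hypothetical tax/subsidy: positive value = fine owed, negative = payment received."""
    gap = sum(abs(chosen[a] - optimal[a]) for a in optimal)   # total misallocated spend
    if gap == 0:
        return -0.05 * sum(optimal.values())                  # small reward for an optimal allocation
    return rate * gap                                         # fine scales with the misallocation

# Hypothetical operator that over-protects one generator and under-protects shared controls
chosen  = {"generator_A": 40.0, "network_controls": 10.0}
optimal = {"generator_A": 15.0, "network_controls": 35.0}
print(settlement(chosen, optimal))   # 25.0 -> fine for leaving the shared controls exposed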

"Right now, fines are levied as a reactive measure if there is a security incident. Fines or taxes don't have any relationship to the security investments or data of the different operators in critical infrastructure," Bagchi said.

In the researchers' simulations of real-world infrastructure systems, the algorithm successfully minimized the likelihood of losing assets to an attack that would decrease the overall security of the infrastructure system.

Bagchi's research group is working to make the algorithm more scalable and able to adapt to an attacker who may make multiple attempts to hack into a system. The researchers' work on the algorithm is funded by the National Science Foundation, the Wabash Heartland Innovation Network and the Army Research Lab.

Cybersecurity is an area of focus through Purdue's Next Moves, a set of initiatives that works to address some of the greatest technology challenges facing the U.S. Purdue's cybersecurity experts offer insights and assistance to improve the protection of power plants, electrical grids and other critical infrastructure.

 


Financial update from N.L. energy corp. reflects pandemic's impact

Nalcor Energy Pandemic Loss underscores Muskrat Falls delays, hydroelectric risks, oil price shocks, and COVID-19 impacts, affecting ratepayers, provincial debt, timelines, and software commissioning for the Churchill River project and Atlantic Canada subsea transmission.

 

Key Points

A $171M Q1 2020 downturn linked to COVID-19, oil price collapse, and Muskrat Falls delays impacting schedules and costs.

✅ Q1 2020 profit swing: +$92M to -$171M amid oil price crash

✅ Muskrat Falls timeline slips; cost may reach $13.1B

✅ Software, workforce, COVID-19 constraints slow commissioning

 

Newfoundland and Labrador's Crown energy corporation reported a pandemic-related loss for the first quarter of 2020 on Tuesday, along with further complications to the beleaguered Muskrat Falls hydroelectric project.

Nalcor Energy recorded a loss of $171 million in the first quarter of 2020, compared with a $92 million profit in the same period last year, due in part to falling oil prices during the COVID-19 pandemic.

The company released its financial statements for 2019 and the first quarter of 2020 on Tuesday, and officials discussed the numbers in a livestreamed presentation that detailed the impact of the global health crisis on the company's operations.

The loss in the first quarter was caused by lower profits from electricity sales and a drop in oil prices due to the pandemic and other global events, company officials said.

The novel coronavirus also added to the troubles plaguing the Muskrat Falls hydroelectric dam on Labrador's Churchill River, amid Quebec-N.L. energy tensions that long predate the pandemic.

Work at the remote site stopped in March over concerns about spreading the virus. Operations have been resuming slowly, with a reduced workforce tackling the remaining jobs.

Officials with Nalcor said it will likely be another year before the megaproject is complete.

CEO Stan Marshall estimates the months of delays could bring the total cost to $13.1 billion including financing, up from the previous estimate of $12.7 billion -- though the total impact of the coronavirus on the project's price tag has yet to be determined.

"If we're going to shut down again, all of that's wrong," Marshall said. "But otherwise, we can just carry on and we'll have a good idea of the productivity level. I'm hoping that by September we'll have a more definitive number here."

The 824-megawatt hydroelectric dam will eventually send power to Newfoundland, and later Nova Scotia, through subsea cables, even as Nova Scotia boosts wind and solar in its energy mix.

It has seen costs essentially double since it was approved in 2012, and faced significant delays even before pandemic-forced shutdowns in North America and around the world this spring.

Cost and schedule overruns were the subject of a sweeping inquiry that held hearings last year, while broader generation choices like biomass use have drawn scrutiny as well.

The commissioner's report faulted previous governments for failing to protect residents by proceeding with the project no matter what, and for placing trust in Nalcor executives who "frequently" concealed information about schedule, cost and related risks.

Some of the latest delays have come from challenges with the development of software required to run the transmission link between Labrador and Newfoundland, where winter reliability issues have been flagged in reports.

The software is still being worked out, Marshall said Tuesday, and the four units at the dam will come online gradually over the next year.

"It's not an all or nothing thing," Marshall said of the final work stages.
Nalcor's financial snapshot follows a bleak fiscal update from the province this month. The Liberal government reported a net debt of $14.2 billion and a deficit of more than $1.1 billion, even as a recent Churchill Falls deal promised new revenues for the province, citing challenges from pandemic-related closures and oil production shutdowns.

Finance Minister Tom Osborne said at the time that help from Ottawa will be necessary to get the province's finances back on track.

Muskrat Falls represents about one-third of the province's debt, and is set to produce more power than the province of about half a million people requires. Anticipated rate increases due to the ballooning costs and questions about Muskrat Falls benefits have posed a significant political challenge for the provincial government.

Ottawa has agreed to work with Newfoundland and Labrador on a rewrite of the project's financial structure, scrapping the format agreed upon in past federal-provincial loan agreements in order to ease the burden on ratepayers, while some argue independent planning would better safeguard ratepayers.

Marshall, a former Fortis CEO who was brought in to lead Nalcor in 2016, has called the project a "boondoggle" and committed to seeing it completed within four years. Though that plan has been disrupted by the pandemic, Marshall said the end is in sight.

"I'm looking forward to a year from now. And I hope to be gone," Marshall said.

 


Carbon capture: How can we remove CO2 from the atmosphere?

CO2 Removal Technologies address climate change via negative emissions, including carbon capture, reforestation, soil carbon, biochar, BECCS, DAC, and mineralization, helping meet Paris Agreement targets while managing costs, land use, and infrastructure demands.

 

Key Points

Methods to extract or sequester atmospheric CO2, combining natural and engineered approaches to limit warming.

✅ Includes reforestation, soil carbon, biochar, BECCS, DAC, mineralization

✅ Balances climate goals with costs, land, energy, and infrastructure

✅ Key to Paris Agreement targets under 1.5-2.0 °C warming

 

The world is, on average, 1.1 degrees Celsius warmer today than it was in 1850. If this trend continues, our planet will be 2 – 3 degrees hotter by the end of this century, according to the Intergovernmental Panel on Climate Change (IPCC).

The main reason for this temperature rise is higher levels of atmospheric carbon dioxide, which cause the atmosphere to trap heat radiating from the Earth into space. Since 1850, the proportion of CO2 in the air has increased from 0.029% to 0.041% (288 ppm to 414 ppm), with record greenhouse gas concentrations documented in recent years.

This is directly related to the burning of coal, oil and gas, which were created from forests, plankton and plants over millions of years. Back then, they stored CO2 and kept it out of the atmosphere, but as fossil fuels are burned, that CO2 is released. Other contributing factors include industrialized agriculture and slash-and-burn land clearing techniques, and emissions from SF6 in electrical equipment are also concerning today.

Over the past 50 years, more than 1200 billion tons of CO2 have been emitted into the planet's atmosphere — 36.6 billion tons in 2018 alone, though global emissions flatlined in 2019 before rising again. As a result, the global average temperature has risen by 0.8 degrees in just half a century.


Keeping atmospheric CO2 to a minimum
In 2015, the world came together to sign the Paris Climate Agreement which set the goal of limiting global temperature rise to well below 2 degrees — 1.5 degrees, if possible.

The agreement limits the amount of CO2 that can be released into the atmosphere, providing a benchmark for the global energy transition now underway. According to the IPCC, if a maximum of around 300 billion tons were emitted, there would be a 50% chance of limiting global temperature rise to 1.5 degrees. If CO2 emissions remain the same, however, the CO2 'budget' would be used up in just seven years.
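
The seven-year figure is simple division: the remaining budget divided by roughly constant annual emissions. The snippet below restates that arithmetic; the 42-billion-ton annual figure is an assumption for illustration (the article cites 36.6 billion tons from fossil fuels alone in 2018), not an IPCC input.

remaining_budget_gt = 300   # billion tons of CO2 left for a 50% chance at 1.5 degrees (IPCC, cited above)
annual_emissions_gt = 42    # billion tons per year, assumed to stay flat (illustrative estimate)
print(remaining_budget_gt / annual_emissions_gt)   # ~7.1 years until the budget is used up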

According to the IPCC's report on the 1.5 degree target, negative emissions are also necessary to achieve the climate targets.


Using reforestation to remove CO2
One planned measure to stop too much CO2 from being released into the atmosphere is reforestation. According to studies, 3.6 billion tons of CO2 — around 10% of current CO2 emissions — could be saved every year during the growth phase. However, a study by researchers at the Swiss Federal Institute of Technology, ETH Zurich, stresses that achieving this would require the use of land areas equivalent in size to the entire US.

Young trees at a reforestation project in Africa: reforestation can capture CO2 at scale, but it would require a large amount of land.


More humus in the soil
Humus in the soil stores a lot of carbon. But this is being released through the industrialization of agriculture. The amount of humus in the soil can be increased by using catch crops and plants with deep roots as well as by working harvest remnants back into the ground and avoiding deep plowing. According to a study by the German Institute for International and Security Affairs (SWP) on using targeted CO2 extraction as a part of EU climate policy, between two and five billion tons of CO2 could be saved with a global build-up of humus reserves.


Biochar shows promise
Some scientists see biochar as a promising technology for keeping CO2 out of the atmosphere. Biochar is created when organic material is heated and pressurized in a zero or very low-oxygen environment. In powdered form, the biochar is then spread on arable land where it acts as a fertilizer. This also increases the amount of carbon content in the soil. According to the same study from the SWP, global application of this technology could save between 0.5 and two billion tons of CO2 every year.


Storing CO2 in the ground
Storing CO2 deep in the Earth is already well-known and practiced on Norway's oil fields, for example. However, the process is still controversial, as storing CO2 underground can lead to earthquakes and leakage in the long-term. A different method is currently being practiced in Iceland, in which CO2 is sequestered into porous basalt rock to be mineralized into stone. Both methods still require more research, however, with new DOE funding supporting carbon capture, utilization, and storage.

Capturing CO2 to be held underground relies on chemical processes that extract the gas directly from the ambient air, and some researchers are exploring CO2-to-electricity concepts for utilization. This method is known as direct air capture (DAC) and is already practiced in other parts of Europe. As there is no limit to the amount of CO2 that can be captured, it is considered to have great potential. However, the main disadvantage is the cost — currently around €550 ($650) per ton. Some scientists believe that mass production of DAC systems could bring prices down to €50 per ton by 2050. It is already considered a key technology for future climate protection.

The inside of a carbon capture facility in the Netherlands: such facilities are still very expensive and take up a huge amount of space.

Another way of extracting CO2 from the air is via biomass. Plants grow and are burned in a power plant to produce electricity. CO2 is then extracted from the exhaust gas of the power plant and stored deep in the Earth, with new U.S. power plant rules poised to test such carbon capture approaches.

The big problem with this technology, known as bio-energy carbon capture and storage (BECCS), is the huge amount of space required. According to Felix Creutzig from the Mercator Institute on Global Commons and Climate Change (MCC) in Berlin, it will therefore only play "a minor role" among CO2 removal technologies.


CO2 bound by rock minerals
In this process, carbonate and silicate rocks are mined, ground and scattered on agricultural land or on the surface water of the ocean, where they collect CO2 over a period of years. According to researchers, by the middle of this century it would be possible to capture two to four billion tons of CO2 every year using this technique. The main challenges are the quantities of stone required and the need to build the necessary infrastructure; concrete deployment plans have not yet been researched in detail.


Not an option: Fertilizing the sea with iron
The idea is to use iron to fertilize the ocean, thereby increasing its nutrient content, which would allow plankton to grow stronger and capture more CO2. However, both the process and its possible side effects are very controversial. "This is rarely treated as a serious option in research," conclude SWP study authors Oliver Geden and Felix Schenuit.

 


Why the promise of nuclear fusion is no longer a pipe dream

ITER Nuclear Fusion advances tokamak magnetic confinement, heating deuterium-tritium plasma with superconducting magnets, targeting net energy gain, tritium breeding, and steam-turbine power, while complementing laser inertial confinement milestones for grid-scale electricity and 2025 startup goals.

 

Key Points

ITER Nuclear Fusion is a tokamak project confining D-T plasma with magnets to achieve net energy gain and clean power.

✅ Tokamak magnetic confinement with high-temp superconducting coils

✅ Deuterium-tritium fuel cycle with on-site tritium breeding

✅ Targets net energy gain and grid-scale, low-carbon electricity

 

It sounds like the stuff of dreams: a virtually limitless source of energy that doesn’t produce greenhouse gases or radioactive waste. That’s the promise of nuclear fusion, often described as the holy grail of clean energy by proponents, which for decades has been nothing more than a fantasy due to insurmountable technical challenges. But things are heating up in what has turned into a race to create what amounts to an artificial sun here on Earth, one that can provide power for our kettles, cars and light bulbs.

Today’s nuclear power plants create electricity through nuclear fission, in which atoms are split, with next-gen nuclear power exploring smaller, cheaper, safer designs that remain distinct from fusion. Nuclear fusion, however, involves combining atomic nuclei to release energy. It’s the same reaction that’s taking place at the Sun’s core. But overcoming the natural repulsion between atomic nuclei and maintaining the right conditions for fusion to occur isn’t straightforward. And doing so in a way that produces more energy than the reaction consumes has been beyond the grasp of the finest minds in physics for decades.

But perhaps not for much longer. Some major technical challenges have been overcome in the past few years and governments around the world have been pouring money into fusion power research as part of a broader green industrial revolution under way in several regions. There are also over 20 private ventures in the UK, US, Europe, China and Australia vying to be the first to make fusion energy production a reality.

“People are saying, ‘If it really is the ultimate solution, let’s find out whether it works or not,’” says Dr Tim Luce, head of science and operation at the International Thermonuclear Experimental Reactor (ITER), being built in southeast France. ITER is the biggest throw of the fusion dice yet.

Its $22bn (£15.9bn) build cost is being met by the governments of two-thirds of the world’s population, including the EU, the US, China and Russia, at a time when Europe is losing nuclear power and needs energy. When it’s fired up in 2025, it’ll be the world’s largest fusion reactor. If it works, ITER will transform fusion power from being the stuff of dreams into a viable energy source.


Constructing a nuclear fusion reactor
ITER will be a tokamak reactor – thought to be the best hope for fusion power. Inside a tokamak, a gas, often a hydrogen isotope called deuterium, is subjected to intense heat and pressure, forcing electrons out of the atoms. This creates a plasma – a superheated, ionised gas – that has to be contained by intense magnetic fields.

The containment is vital, as no material on Earth could withstand the intense heat (100,000,000°C and above) that the plasma has to reach so that fusion can begin. It’s close to 10 times the heat at the Sun’s core, and temperatures like that are needed in a tokamak because the gravitational pressure within the Sun can’t be recreated.

When atomic nuclei do start to fuse, vast amounts of energy are released. While the experimental reactors currently in operation release that energy as heat, in a fusion reactor power plant, the heat would be used to produce steam that would drive turbines to generate electricity, even as some envision nuclear beyond electricity for industrial heat and fuels.

Tokamaks aren’t the only fusion reactors being tried. Another type of reactor uses lasers to heat and compress a hydrogen fuel to initiate fusion. In August 2021, one such device at the National Ignition Facility, at the Lawrence Livermore National Laboratory in California, generated 1.35 megajoules of energy. This record-breaking figure brings fusion power a step closer to net energy gain, but most hopes are still pinned on tokamak reactors rather than lasers.

In June 2021, China’s Experimental Advanced Superconducting Tokamak (EAST) reactor maintained a plasma for 101 seconds at 120,000,000°C. Before that, the record was 20 seconds. Ultimately, a fusion reactor would need to sustain the plasma indefinitely – or at least for eight-hour ‘pulses’ during periods of peak electricity demand.

A real game-changer for tokamaks has been the magnets used to produce the magnetic field. “We know how to make magnets that generate a very high magnetic field from copper or other kinds of metal, but you would pay a fortune for the electricity. It wouldn’t be a net energy gain from the plant,” says Luce.


One route for nuclear fusion is to use atoms of deuterium and tritium, both isotopes of hydrogen. They fuse under incredible heat and pressure, and the resulting products release energy as heat


The solution is to use high-temperature, superconducting magnets made from superconducting wire, or ‘tape’, that has no electrical resistance. These magnets can create intense magnetic fields and don’t lose energy as heat.

“High temperature superconductivity has been known about for 35 years. But the manufacturing capability to make tape in the lengths that would be required to make a reasonable fusion coil has just recently been developed,” says Luce. One of ITER’s magnets, the central solenoid, will produce a field of 13 tesla – 280,000 times Earth’s magnetic field.
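
That comparison holds if Earth's surface field is taken as roughly 46 microtesla, a typical mid-latitude value assumed here for illustration rather than a figure from ITER:

central_solenoid_tesla = 13.0     # field strength quoted for ITER's central solenoid
earth_field_tesla = 46e-6         # assumed typical surface field strength (illustrative)
print(round(central_solenoid_tesla / earth_field_tesla))   # ~282,600, i.e. roughly 280,000 times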

The inner walls of ITER’s vacuum vessel, where the fusion will occur, will be lined with beryllium, a metal that won’t contaminate the plasma much if the two come into contact. At the bottom is the divertor, which will keep the temperature inside the reactor under control.

“The heat load on the divertor can be as large as in a rocket nozzle,” says Luce. “Rocket nozzles work because you can get into orbit within minutes and in space it’s really cold.” In a fusion reactor, a divertor would need to withstand this heat indefinitely and at ITER they’ll be testing one made out of tungsten.

Meanwhile, in the US, the National Spherical Torus Experiment – Upgrade (NSTX-U) fusion reactor will be fired up in the autumn of 2022, while efforts in advanced fission such as a mini-reactor design are also progressing. One of its priorities will be to see whether lining the reactor with lithium helps to keep the plasma stable.


Choosing a fuel
Instead of just using deuterium as the fusion fuel, ITER will use deuterium mixed with tritium, another hydrogen isotope. The deuterium-tritium blend offers the best chance of getting significantly more power out than is put in. Proponents of fusion power say one reason the technology is safe is that the fuel needs to be constantly fed into the reactor to keep fusion happening, making a runaway reaction impossible.

Deuterium can be extracted from seawater, so there’s a virtually limitless supply of it. But only 20kg of tritium are thought to exist worldwide, so fusion power plants will have to produce it (ITER will develop technology to ‘breed’ tritium). While some radioactive waste will be produced in a fusion plant, it’ll have a lifetime of around 100 years, rather than the thousands of years from fission.

At the time of writing in September, researchers at the Joint European Torus (JET) fusion reactor in Oxfordshire were due to start their deuterium-tritium fusion reactions. “JET will help ITER prepare a choice of machine parameters to optimise the fusion power,” says Dr Joelle Mailloux, one of the scientific programme leaders at JET. These parameters will include finding the best combination of deuterium and tritium, and establishing how the current is increased in the magnets before fusion starts.

The groundwork laid down at JET should accelerate ITER’s efforts to accomplish net energy gain. ITER will produce ‘first plasma’ in December 2025 and be cranked up to full power over the following decade. Its plasma temperature will reach 150,000,000°C and its target is to produce 500 megawatts of fusion power for every 50 megawatts of input heating power.
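
Those last two figures are usually expressed as a fusion gain factor, conventionally written Q; the snippet below simply restates the article's target numbers.

fusion_power_mw = 500     # fusion power ITER aims to produce
heating_power_mw = 50     # external heating power supplied to the plasma
print(fusion_power_mw / heating_power_mw)   # Q = 10: ten units of fusion power per unit of heating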

“If ITER is successful, it’ll eliminate most, if not all, doubts about the science and liberate money for technology development,” says Luce. That technology development will be demonstration fusion power plants that actually produce electricity, where advanced reactors can build on decades of expertise. “ITER is opening the door and saying, yeah, this works – the science is there.”

 

