Power transmission complex, costly

By Knight Ridder Tribune


Perhaps the crowd for the open house at PPL Corp.'s Lake Wallenpaupack hydroelectric dam was an indication. There were old and young people alike, but few in between, suggesting most folks had found other things to do at this lakeside town on an overcast summer day.

That illustrates the public's attitude toward electricity production. Consumers want it on demand and as cheaply as possible, but only an inquisitive few care where it comes from or how it is transmitted. Attuned or indifferent, consumers will get bigger electric bills in 2010, when prices in Pennsylvania jump perhaps 35 percent or more.

Electricity companies say the jump is an inevitable catch-up to open-market prices that rose during the rate-cap years. Alert customers might be confused.

Wait, they might ask, haven't bills already gone up? Haven't companies already received rate increases? Indeed, they have because only the generation rate, which accounts for about 40 percent of the total cost, was capped.

Utilities have been free to request and receive increases to transmission and distribution rates. This year alone, local provider PPL Electric Utilities has requested increases of about 10 percent. About half of that has already been granted by the state Public Utilities Commission.

Demand changes constantly, creating spikes that the industry must handle or risk blackouts. It's not cost effective for generating companies to always generate peak-period amounts, so the industry has created a two-tiered system to handle that demand. The first tier is large, efficient power plants that create vast amounts of power.

The plants are slow to turn on, but run almost continually, forming the backbone of the supply. The second tier includes small, often automated "peaking" plants that usually run on expensive fuel, but can be turned on almost instantly. They run when demand exceeds the large plants' output.

"The philosophy is entirely different," PPL spokesman George Lewis said. "Those plants are for the most part sitting in wait. If they get the call... they'd better be able to start." Electricity is sold by generating companies in an open market through a brokerage organization to utility companies. The brokerage solicits bids from the generating companies, and accepts the lowest ones first.

Generating companies, however, are all paid the highest accepted bid. So when expensive peaking units must be used, utilities pay a premium price for every megawatt. The rate caps, which began in 1996, have kept customers from feeling that effect. Also at issue is where plants are located. Because large plants often face opposition, they often are built where few people live.
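The market mechanism described here, in which the cheapest bids clear first but every accepted generator is paid the marginal price, can be sketched in a few lines. The generators, bid prices, and demand below are invented for illustration, not actual market data:

```python
# Illustrative sketch of uniform-price market clearing: the broker accepts
# the cheapest bids first, but every accepted generator is paid the price
# of the most expensive bid needed to meet demand. All figures are invented.

def clear_market(bids, demand_mw):
    """bids: list of (generator, price_per_mwh, capacity_mw) tuples."""
    accepted = []
    remaining = demand_mw
    for name, price, capacity in sorted(bids, key=lambda b: b[1]):
        if remaining <= 0:
            break
        take = min(capacity, remaining)
        accepted.append((name, take))
        remaining -= take
        clearing_price = price  # price of the last (marginal) unit accepted
    return accepted, clearing_price

bids = [
    ("baseload_nuclear", 20.0, 800),   # cheap, large, slow to start
    ("baseload_coal",    35.0, 600),
    ("peaker_gas",      120.0, 200),   # expensive fuel, fast start
]
accepted, price = clear_market(bids, demand_mw=1500)
```

With 1,500 MW of demand, the expensive peaker must run and sets the price, so even the cheapest baseload plant collects $120 per megawatt-hour. That is why peak periods are so costly for utilities.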

Pennsylvania exported 70 billion kilowatt-hours in 2006, more than any other state, according to Doug Biden of the Electric Power Generation Association. All that power gets shipped through high-voltage lines, which themselves often receive opposition.

"I would say it's more difficult to build a transmission line than to build anything else just because of its length," state Consumer Advocate Sonny Popowsky said. They're also somewhat unreliable.

In August 2003, electricity-laden lines in Ohio sagged into a tree, sparking a blackout that crippled the Northeast. And PPL estimates energy lost to heat while traveling on the lines accounts for 9 percent of use. To control the price shocks that come with relying on one fuel, the industry plans to use a variety of fuels in the future. Politicians are pushing eco-friendly sources.

But such plants must go where the fuel is, requiring more transmission lines.

Furthermore, renewable plants' power output per unit of land is dwarfed by that of larger, more established plants. An example: The Susquehanna nuclear plant near Berwick can produce about six times more power per acre than the Bear Creek wind park. Renewable energy accounts for only 3 percent of total generation in the state, according to Biden.

Legislation might soon make renewable sources more appealing, though. Renewable energy quotas have been passed in some states, including Pennsylvania. More stringent carbon dioxide emissions caps would put heavy economic restrictions on coal plants. Many officials believe it comes down to personal responsibility.

"We have a society problem where we demand electricity" while complaining about its drawbacks, said Terry Williamson, a spokesman for the brokerage organization PJM Interconnection. "But you've gotta have the juice somewhere."

Related News

Does Providing Electricity To The Poor Reduce Poverty? Maybe Not

A study of rural electrification's impact on poverty examines energy access, grid connections, and reliability, testing economic-development claims with randomized trials; the findings show minimal gains without appliances, reliable supply, and complementary services such as education and job creation.

 

Key Points

Study of household grid connections showing modest poverty impact without reliable power and appliances.

✅ Randomized grid connections showed no short-term income gains.

✅ Low reliability and few appliances limited electricity use.

✅ Complementary investments in jobs, education, health may be needed.

 

The head of Swedfund, the development finance group, recently summarized a widely held belief: “Access to reliable electricity drives development and is essential for job creation, women’s empowerment and combating poverty.” This view has been the driving force behind a number of efforts to provide electricity to the 1.1 billion people around the world living in energy poverty, such as India's village electrification initiatives in recent years.

But does electricity really help lift households out of poverty? My co-authors and I set out to answer this question. We designed an experiment in which we first identified a sample of “under grid” households in Western Kenya—structures that were located close to but not connected to a grid. These households were then randomly divided into treatment and control groups. In the treatment group, we worked closely with the rural electrification agency to connect the households to the grid for free or at various discounts. In the control group, we made no changes. After eighteen months, we surveyed people from both groups and collected data on an assortment of outcomes, including whether they were employed outside of subsistence agriculture (the most common type of work in the region) and how many assets they owned. We even gave children basic tests, as a frequent assertion is that electricity helps children perform better in school since they are able to study at night.

When we analyzed the data, we found no differences between the treatment and control groups. The rural electrification agency had spent more than $1,000 to connect each household. Yet eighteen months later, the households we connected seemed to be no better off. Even the children’s test scores were more or less the same. The results of our experiment were discouraging, and at odds with the popular view that supplying households with access to electricity will drive economic development. Lifting people out of poverty may require a more comprehensive approach that ensures electricity is not only affordable but also reliable, usable, and available to the whole community, paired with other important investments.
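The core comparison behind such a randomized experiment is just a difference in mean outcomes between the two groups. A toy sketch with invented numbers, not the study's actual data or estimator:

```python
# Toy sketch of the treatment-vs-control comparison described above.
# Outcome values (e.g., a household asset index) are invented.
import statistics

treatment = [4.1, 3.8, 5.0, 4.4, 3.9, 4.6]  # connected households
control   = [4.0, 4.2, 4.5, 3.7, 4.3, 4.1]  # unconnected households

# Because assignment was random, the difference in means estimates the
# causal effect of a grid connection; a value near zero matches the
# finding of no detectable short-term impact.
effect = statistics.mean(treatment) - statistics.mean(control)
print(round(effect, 2))
```

A real analysis would also report a standard error or confidence interval, since a small difference in means can easily arise by chance.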

For instance, in many low-income countries, the grid has frequent blackouts and maintenance problems, making electricity unreliable, as seen in Nigeria's electricity crisis in recent years. Even if the grid were reliable, poor households may not be able to afford the appliances that would allow for more than just lighting and cell phone charging. In our data, households barely bought any appliances, and they used just 3 kilowatt-hours per month. Compare that to the U.S. average of about 900 kilowatt-hours per month.

There are also other factors to consider. After all, correlation does not equal causation. There is no doubt that the 1.1 billion people without power are the world’s poorest citizens. But this is not the only challenge they face. The poor may also lack running water, basic sanitation, consistent food supplies, quality education, sufficient health care, political influence, and a host of other factors that may be harder to measure but are no less important to well-being. Prioritizing investments in some of these other factors may lead to higher immediate returns. Previous work by one of my co-authors, for example, shows substantial economic gains from government spending on treatment for intestinal worms in children.

It’s possible that our results don’t generalize. They certainly don’t apply to enhancing electricity services for non-residential customers, like factories, hospitals, and schools. Perhaps the households we studied in Western Kenya are particularly poor (although measures of well-being suggest they are comparable to rural households across Sub-Saharan Africa) or politically disenfranchised. Perhaps if we had waited longer, or if we had electrified an entire region, the household impacts we measured would have been much greater. But others who have studied this question have found similar results. One study, also conducted in Western Kenya, found that subsidizing solar lamps helped families save on kerosene, but did not lead children to study more. Another study found that installing solar-powered microgrids in Indian villages resulted in no socioeconomic benefits.

 

Related News


Nearly $1 Trillion in Investments Estimated by 2030 as Power Sector Transitions to a More Decarbonized and Flexible System

Distributed Energy Resources (DER) are surging as solar PV, battery storage, and demand response decarbonize power, cut costs, and boost grid resilience for utilities, ESCOs, and C&I customers through 2030.

 

Key Points

DER are small-scale, grid-connected assets like solar PV, storage, and demand response that deliver flexible power.

✅ Investments in DER to rise 75% by 2030; $846B in assets, $285B in storage.

✅ Residential solar PV: 49.3% of spend; C&I solar PV: 38.9% by 2030.

✅ Drivers: favorable policy, falling costs, high demand charges, decarbonization.

 

Frost & Sullivan's recent analysis, Growth Opportunities in Distributed Energy, Forecast to 2030, finds that the rate of annual investment in distributed energy resources (DER) will increase by 75% by 2030, with the market set for a decade of high growth. Favorable regulations, declining project and technology costs, and high electricity and demand charges are key factors driving investments in DER across the globe. The COVID-19 pandemic will reduce investment levels in the short term, but the market will recover. Throughout the decade, $846 billion will be invested in DER, supported by a further $285 billion invested in battery storage.

"The DER business model will play an increasingly pivotal role in the global power mix as part of a wider effort to decarbonize the sector," said Maria Benintende, Senior Energy Analyst at Frost & Sullivan. "Additionally, solar photovoltaic (PV) will dominate throughout the decade. Residential solar PV will account for 49.3% of total investment ($419 billion), with commercial and industrial solar PV accounting for a further 38.9% ($330 billion)."

Benintende added: "In developing economies, DER offers a chance to bridge the electricity supply gap that still exists in a number of country markets. Further, in developed markets, DER is a key part of the transition to a cleaner and more resilient energy system."

DER offers significant revenue growth prospects for all key market participants, including:

  • Technology original equipment manufacturers (OEMs): Offer flexible after-sales support, including digital solutions such as asset integrity and optimization services for their installed base.
  • System integrators and installers: Target household customers and provide efficient and trustworthy solutions with flexible financial models.
  • Energy service companies (ESCOs): ESCOs should focus on adding DER deployments to expand and enhance their traditional role of providing energy savings and demand-side management services to customers.

  • Utility companies: Deployment of DER can create new revenue streams for utility companies, from real-time and flexibility markets.

Growth Opportunities in Distributed Energy, Forecast to 2030 is the latest addition to Frost & Sullivan's Energy and Environment research and analyses available through the Frost & Sullivan Leadership Council, which helps organizations identify a continuous flow of growth opportunities to succeed in an unpredictable future.

 

Related News


Duke Energy will spend US$25bn to modernise its US grid

Duke Energy Clean Energy Strategy targets smart grid upgrades, wind and solar expansion, efficient gas, and high-reliability nuclear, cutting CO2, boosting decarbonization, and advancing energy efficiency and reliability for the Carolinas.

 

Key Points

A plan investing in smart grids, renewables, gas, and nuclear to cut CO2 and enhance reliability and efficiency by 2030.

✅ US$25bn smart grid upgrades; US$11bn renewables and gas

✅ 40% CO2 reduction and >80% low-/zero-carbon generation by 2030

✅ 2017 nuclear fleet 95.64% capacity factor; ~90 TWh carbon-free

 

The US power group Duke Energy plans to invest US$25bn on grid modernization over the 2017-2026 period, including the implementation of smart grid technologies to cope with the development of renewable energies, along with US$11bn on the expansion of renewable (wind and solar) and gas-fired power generation capacities.

The company will modernize its fleet and expects more than 80% of its power generation mix to come from zero- and lower-CO2-emitting sources by 2030. Its current strategy focuses on cutting CO2 emissions by 40% by 2030. Duke Energy will also promote energy efficiency and expects cumulative energy savings - based on the expansion of existing programmes - to grow to 22 TWh by 2030, i.e. the equivalent of the annual usage of 1.8 million households.
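The household equivalence quoted above can be checked directly from the article's own figures:

```python
# Sanity check of the stated equivalence: 22 TWh of cumulative savings
# versus the annual usage of 1.8 million households.
savings_kwh = 22e9             # 22 TWh expressed in kWh
households = 1.8e6
per_household = savings_kwh / households
print(round(per_household))    # roughly 12,000 kWh per household per year
```

The implied figure of about 12,200 kWh per household per year is in the ballpark of typical annual U.S. residential usage, so the comparison holds together.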


Duke Energy’s 11 nuclear generating units posted strong operating performance in 2017, providing the Carolinas with nearly 90 billion kilowatt-hours of carbon-free electricity – enough to power more than 7 million homes.


“Much of our 2017 success is due to our focus on safety and work efficiencies identified by our nuclear employees, along with ongoing emphasis on planning and executing refueling outages to increase our fleet’s availability for producing electricity,” said Preston Gillespie, Duke Energy chief nuclear officer.

Some of the nuclear fleet’s 2017 accomplishments include:

  • The 11 units achieved a combined capacity factor of 95.64 percent, second only to the fleet’s 2016 record of 95.72 percent, marking the 19th consecutive year of attaining a 90-plus percent capacity factor (a measure of reliability).
  • The two units at Catawba Nuclear Station produced more than 19 billion kilowatt-hours of electricity, and the single unit at Harris Nuclear Plant generated more than 8 billion kilowatt-hours, both setting 12-month records.
  • Brunswick Nuclear Plant unit 2 achieved a record operating run.
  • Both McGuire Nuclear Station units completed their shortest refueling outages ever and unit 1 recorded its longest operating run.
  • Oconee Nuclear Station unit 2 achieved a fleet record operating run.
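Capacity factor, cited in the first bullet above, is simply actual generation divided by what the plant would produce running at full rated power for the whole period. A minimal illustration with an invented plant size:

```python
# Capacity factor = actual generation / (rated capacity x hours in period).
# The 1,000 MW plant and its annual output below are invented for illustration.
def capacity_factor(generation_mwh, capacity_mw, hours=8760):
    return generation_mwh / (capacity_mw * hours)

# A hypothetical 1,000 MW unit producing 8.4 TWh over a year:
cf = capacity_factor(8.4e6, 1000)
print(f"{cf:.2%}")
```

A fleet-wide figure like Duke's 95.64 percent means the units were, in effect, delivering almost their full rated output around the clock for the entire year.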

The Robinson Nuclear Plant team completed the station’s 30th refueling outage, which included a main generator stator replacement and other life-extension activities, well ahead of schedule.

“Our nuclear employees are committed to providing reliable, clean electricity every day for our Carolinas customers,” added Gillespie. “We are very proud of our team’s 2017 accomplishments and continue to look for additional opportunities to further enhance operations.”

 

 

Related News


Minnesota 2050 carbon-free electricity plan gets first hearing

Minnesota Carbon-Free Power by 2050 aims to shift utilities to renewable energy, wind and solar, boosting efficiency while managing grid reliability, emissions, and costs under a clean energy mandate and statewide climate policy.

 

Key Points

A statewide goal to deliver 100% carbon-free power by 2050, prioritizing renewables, efficiency, and grid reliability.

✅ Targets 100% carbon-free electricity statewide by 2050

✅ Prioritizes wind, solar, and efficiency before fossil fuels

✅ Faces utility cost, reliability, and legislative challenges

 

Gov. Tim Walz's plan for Minnesota to get 100 percent of its electricity from carbon-free sources by 2050 was criticized Tuesday at its first legislative hearing, with representatives from some of the state's smaller utilities saying they can't meet that goal.

Commerce Commissioner Steve Kelley told the House climate committee that the Democratic governor's plan is ambitious. But he said the state's generating system is "aging and at a critical juncture," with plants that produce 70 percent of the state's electricity coming up for potential retirement over the next two decades. He said it will ensure that utilities replace them with wind, solar and other innovative sources, and increased energy efficiency, before turning to fossil fuels.

"Utilities will simply need to demonstrate why clean energy would not work whenever they propose to replace or add new generating capacity," he said.

Walz's plan, announced last week, seeks to build on the success of a 2007 law that required Minnesota utilities to get at least 25 percent of their electricity from renewable sources by 2025. The state largely achieved that goal in 2017, thanks to the growth of wind and solar power; hence the new, more ambitious target for 2050.

But Joel Johnson, a lobbyist for the Minnkota Power Cooperative, testified that the governor's plan is "misguided and unrealistic" even with new technology to capture carbon dioxide emissions from power plants. Johnson added that even the big utilities that have set goals of going carbon-free by mid-century, such as Minneapolis-based Xcel Energy, acknowledge they don't yet know how they'll hit those targets.

 

Minnkota serves northwestern Minnesota and eastern North Dakota.

Tim Sullivan, president and CEO of the Wright-Hennepin Cooperative Electric Association in the Twin Cities area, said the plan is a "bad idea" for the 1.7 million state electric consumers served by cooperatives. He said Minnesota is a "minuscule contributor" to total global carbon emissions.

"The bill would have a devastating impact on electric consumers," Sullivan said. "It represents, in our view, nothing short of a first-order threat to the safety and reliability of Minnesota's grid."

Isaac Orr is a policy fellow at the Minnesota-based conservative think tank, the Center for the American Experiment, which released a report critical of the plan Tuesday. Orr said all Minnesota households would face higher energy costs and it would harm energy-intensive industries such as mining, manufacturing and health care, while doing little to reduce global warming.

"This does not pass a proper cost-benefit analysis," he testified.

Environmental groups, including Conservation Minnesota and the Sierra Club, supported the proposal while acknowledging the challenges, noting that cleaning up electricity is critical to climate pledges in many jurisdictions.

"Our governor has called climate change an existential crisis," said Kevin Lee, director of the climate and energy program at the Minnesota Center for Environmental Advocacy. "This problem is the defining challenge of our time, and it can feel overwhelming."

Rep. Jean Wagenius, the committee chairwoman and Minneapolis Democrat who's held several hearings on the threats that climate change poses, said she expected to table the bill for further consideration after taking more testimony in the evening and would not hold a vote Tuesday.

While the bill has support in the Democratic-controlled House, it's not scheduled for action in the Republican-led Senate. Rep. Pat Garofalo, a Farmington Republican, quipped that it "has a worse chance of becoming law than me being named the starting quarterback for the Minnesota Vikings."

 

Related News


Grid coordination opens road for electric vehicle flexibility

Smart EV Charging orchestrates vehicle-to-grid (V2G), demand response, and fast charging to balance the power grid, integrating renewables, electrolyzers for hydrogen, and megawatt chargers for fleets with advanced control and co-optimization.

 

Key Points

Smart EV charging coordinates EV load to stabilize the grid, cut peaks, and integrate renewable energy efficiently.

✅ Reduces peak demand via coordinated, flexible load control

✅ Enables V2G services with renewables and battery storage

✅ Supports megawatt fast charging for heavy-duty fleets

 

As electric vehicle (EV) sales continue to rev up in the United States, the power grid is in parallel contending with the greatest transformation in its 100-year history: the large-scale integration of renewable energy and power electronic devices. The expected expansion of EVs will shift those challenges into high gear, causing cities to face gigawatt-scale growth in electricity demand and higher amounts of variable energy.

Coordinating large numbers of EVs with the power system presents a highly complex challenge. EVs introduce variable electrical loads that are highly dependent on customer behavior. Electrified transportation involves co-optimization with other energy systems, like natural gas and bulk battery storage. It could involve fleets of automated ride-hailing EVs and lead to hybrid-energy truck stops that provide hydrogen and fast charging to heavy-duty vehicles.

Those changes will all test the limits of grid integration, but the National Renewable Energy Laboratory (NREL) sees opportunity at the intersection of energy systems and transportation. With powerful resources for simulating and evaluating complex systems, several NREL projects are determining the coordination required for fast charging, balancing electrical supply and demand, and efficient use of all energy assets.


Smart and Not-So-Smart Control
To appreciate the value of coordinated EV charging, it is helpful to imagine the opposite scenario.

"Our first question is how much benefit or burden the super simple, uncoordinated approach to electric vehicle charging offers the grid," said Andrew Meintz, the researcher leading NREL's Electric Vehicle Grid Integration team, as well as the RECHARGE project for smart EV charging. "Then we compare that to the 'whiz-bang,' everything-is-connected approach. We want to know the difference in value."

In the "super simple" approach, Meintz explained, battery-powered electric vehicles grow in market share without any evolution in vehicle charging coordination. Picture every employee at your workplace driving home at 5 p.m. and charging their vehicle. That is the grid's equivalent of going 0 to 100 mph, and if it does not wreck the system, it is at least very expensive. According to NREL's Electrification Futures Study, a comprehensive analysis of the impacts of widespread electrification across all U.S. economic sectors, in 2050 EVs could contribute to a 33% increase in energy use during peak electrical demand, the costly intervals when energy reserves are procured. In duck curve parlance, EVs will further strain the duck's neck.

The Optimization and Control Lab's Electric Vehicle Grid Integration bays allow researchers to determine how advanced high power chargers can be added safely and effectively to the grid, with the potential to explore how to combine buildings and EV charging. Credit: Dennis Schroeder, NREL
Meintz's "whiz-bang" approach instead imagines EV control strategies that are deliberate and serve to smooth, rather than intensify, the upcoming demand for electricity. It means managing both when and where vehicles charge to create flexible load on the grid.

At NREL, smart strategies to dispatch vehicles for optimal charging are being developed for both the grid edge, where consumers and energy users connect to the grid, as in RECHARGE, and the entire distribution system, as in the GEMINI-XFC project. Both projects, funded by the U.S. Department of Energy's (DOE's) Vehicle Technologies Office, lean on advanced capabilities at NREL's Energy Systems Integration Facility to simulate future energy systems.

At the grid edge, EVs can be co-optimized with distributed energy resources—small-scale generation or storage technologies—the subject of a partnership with Eaton that brought industry perspectives to bear on coordinated management of EV fleets.

At the larger-system level, the GEMINI-XFC project has extended EV optimization scenarios to the city scale—the San Francisco Bay Area, to be specific.

"GEMINI-XFC involves the highest-ever-fidelity modeling of transportation and the grid," said NREL Research Manager of Grid-Connected Energy Systems Bryan Palmintier.

"We're combining future transportation scenarios with a large metro area co-simulation—millions of simulated customers and a realistic distribution system model—to find the best approaches to vehicles helping the grid."

GEMINI-XFC and RECHARGE can foresee future electrification scenarios and then insert controls that reduce grid congestion or offset peak demand, for example. Charging EVs involves a sort of shell game, where loads are continually moved among charging stations to accommodate grid demand.
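The "shell game" of coordinated charging can be pictured with a toy scheduler that drops each charging session into the currently least-loaded hour. All loads and session sizes below are invented, not NREL's models:

```python
# Toy sketch of coordinated ("smart") charging: each vehicle's session is
# placed in the hour where grid load is currently lowest, flattening the
# aggregate curve. Hourly loads and session sizes are invented.
base_load = [60, 55, 50, 70, 90, 85]  # MW in each of six hours
ev_sessions = [10] * 6                # six EVs, 10 MW-h of charging each

load = list(base_load)
for demand in ev_sessions:
    hour = load.index(min(load))      # pick the currently least-loaded hour
    load[hour] += demand

print(max(base_load), max(load))
```

In this toy case the six sessions add 60 MW-h of energy without raising the original 90 MW peak, whereas naive charging, with everyone plugging in during the same hour, would have pushed the peak to 150 MW.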

But for heavy-duty vehicles, the load is harder to hide. Electrified truck fleets will hit the road soon, creating charging needs that translate to megawatts of localized demand. No amount of rerouting can avoid the requirements of charging heavy-duty vehicles or other instances of extreme fast charging (XFC). To address this challenge, NREL is working with industry and other national laboratories to study and demonstrate the technological buildout necessary to achieve 1+ MW charging stations capable of fast charging at very high energy levels for medium- and heavy-duty vehicles.

To reach such a scale, NREL is also considering new power conversion hardware based on advanced materials like wide-bandgap semiconductors, as well as new controllers and algorithms that are uniquely suited for fleets of charge-hungry vehicles. The challenge to integrate 1+ MW charging is also pushing NREL research to higher power: Upcoming capabilities will look at many-megawatt systems that tie in the support of other energy sectors.


Renewable In-Roads for Hydrogen

At NREL, the drive toward larger charging demands is being met with larger research capabilities. The announcement of ARIES opens the door to energy systems integration research at a scale 10-times greater than current capabilities: 20 MW, up from 2 MW. Critically, it presents an opportunity to understand how mobility with high energy demands can be co-optimized with other utility-scale assets to benefit grid stability.

"If you've got a grid humming along with a steady load, then a truck requires 500 kW or more of power, it could create a large disruption for the grid," said Keith Wipke, the laboratory program manager for fuel cells and hydrogen technologies at NREL.

Such a high power demand could be partially served by battery storage systems. Or it could be hidden entirely with hydrogen production. Wipke's program, with support from the DOE's Hydrogen and Fuel Cell Technologies Office, has been performing studies into how electrolyzers—devices that use electricity to break water into hydrogen and oxygen—could offset the grid impacts of XFC. These efforts are also closely aligned with DOE's H2@Scale vision for affordable and effective hydrogen use across multiple sectors, including heavy-duty transportation, power generation, and metals manufacturing, among others.

"We're simulating electrolyzers that can match the charging load of heavy-duty battery electric vehicles. When fast charging begins, the electrolyzers are ramped down. When fast charging ends, the electrolyzers are ramped back up," Wipke said. "If done smoothly, the utility doesn't even know it's happening."
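The coordination Wipke describes amounts to holding the site's combined load flat: ramp the electrolyzer down one-for-one as the charger ramps up. A minimal sketch with invented power levels, not NREL's actual control scheme:

```python
# Sketch of the coordination described above: when a fast charger draws
# power, an on-site electrolyzer ramps down by the same amount so the
# combined load seen by the utility stays flat. Power levels are invented.
ELECTROLYZER_MAX_KW = 600  # hypothetical electrolyzer rating

def electrolyzer_setpoint(charger_load_kw):
    # Ramp down one-for-one with charging demand, never below zero.
    return max(0, ELECTROLYZER_MAX_KW - charger_load_kw)

for charger_kw in [0, 250, 500]:
    total = charger_kw + electrolyzer_setpoint(charger_kw)
    print(charger_kw, total)  # combined load holds at 600 kW
```

As long as charging demand stays within the electrolyzer's headroom, the utility sees a constant load, and the energy diverted away from hydrogen production is simply made up after the charging event ends.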

NREL Researchers Rishabh Jain, Kazunori Nagasawa, and Jen Kurtz are working on how grid integration of electrolyzers—devices that use electricity to break water into hydrogen and oxygen—could offset the grid impacts of extreme fast-charging. Credit: National Renewable Energy Laboratory
As electrolyzers harness the cheap electrons from off-demand periods, a significant amount of hydrogen can be produced on site. That creates a natural energy pathway from discount electricity into a fuel. It is no wonder, then, that several well-known transportation and fuel companies have recently initiated a multimillion-dollar partnership with NREL to advance heavy-duty hydrogen vehicle technologies.

"The logistics of expanding electric charging infrastructure from 50 kW for a single demonstration battery electric truck to 5,000 kW for a fleet of 100 could present challenges," Wipke said. "Hydrogen scales very nicely; you're basically bringing hydrogen to a fueling station or producing it on site, but either way the hydrogen fueling events are decoupled in time from hydrogen production, providing benefits to the grid."

The long driving range and fast refuel times—including a DOE target of achieving 10-minutes refuel for a truck—have already made hydrogen the standout solution for applications in warehouse forklifts. Further, NREL is finding that distributed electrolyzers can simultaneously produce hydrogen and improve voltage conditions, which can add much-needed stability to a grid that is accommodating more energy from variable resources.

Those examples that co-optimize mobility with the grid, using diverse technologies, are encouraging NREL and its partners to pursue a new scale of systems integration. Several forward-thinking projects are reimagining urban mobility as a mix of energy solutions that integrate the relative strengths of transportation technologies, which complement each other to fill important gaps in grid reliability.


The Future of Urban Mobility
What will electrified transportation look like at high penetrations? A few NREL projects offer some perspective. Among the most experimental, NREL is helping the city of Denver develop a smart community, integrated with electrified mobility and featuring automated charging and vehicle dispatch.

On another path to advanced mobility, Los Angeles has embarked on a plan to modernize its electricity system infrastructure, reflecting California's goals of a 100% renewable energy supply by 2045 along with aggressive electrification targets for buildings and vehicles. Through the Los Angeles 100% Renewable Energy Study, the city is currently working with NREL to assess the full-scale impacts of the transition in a detailed analysis that integrates diverse capabilities across the laboratory.

The transition would include the Port of Long Beach, one of the busiest container ports in the United States.

At the port, NREL is applying the same sort of scenario forecasting and controls evaluation as in its other projects to find the optimal mix of technologies for both grid stability and a reliable quality of service: hydrogen fuel-cell and battery EVs, battery storage systems, on-site renewable generation, and tight coordination among all of them.

"Hydrogen at ports makes sense for the same reason as trucks: Marine applications have big power and energy demands," Wipke said. "But it's really the synergies between diverse technologies—the existing infrastructure for EVs and the flexibility of bulk battery systems—that will truly make the transition to high renewable energy possible."

Like the Port of Long Beach, transportation hubs across the nation are adapting to a complex environment of new mobility solutions. Airports and public transit stations move passengers, goods, and services at volumes unmatched anywhere else. With the transition to digitally connected electric mobility changing how airports plan for the future, NREL projects such as Athena are using the power of high-performance computing to demonstrate how these hubs can maximize the value of passenger and freight mobility per unit of energy, time, and cost.

The growth in complexity for transportation hubs has only begun, however. Looking ahead, fleets of ride-sharing EVs, automated vehicles, and automated ride-sharing EV fleets could pose the largest mobility-management challenge yet.


A Self-Driving Power Grid
To understand the full impact of future mobility-service providers, NREL developed the HIVE (Highly Integrated Vehicle Ecosystem) simulation framework. HIVE combines factors related to serving mobility needs and grid operations—such as a customer's willingness to carpool or delay travel, and potentially time-variable costs of recharging—and simulates the outcome in an integrated environment.
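To give a flavor of the factors such a framework weighs, the toy model below scores whether a rider accepts a pooled, delayed trip given a price discount. This is a sketch under assumed names and thresholds, not HIVE's actual inputs or API.

```python
import random

# Toy rider model (illustrative; the real HIVE framework differs).
# Each rider has a willingness to carpool and a tolerance for delay; the
# fleet offers a fare discount for pooled rides.

def accepts_pooled_ride(willingness_to_pool: float,
                        pool_discount: float,
                        delay_minutes: float,
                        max_delay_minutes: float) -> bool:
    """Rider pools if the delay is tolerable and the discount outweighs
    their reluctance to share (a made-up linear trade-off)."""
    if delay_minutes > max_delay_minutes:
        return False
    return pool_discount >= (1.0 - willingness_to_pool) * 0.5

# Simulate a handful of riders with random preferences.
random.seed(0)
for _ in range(5):
    w, delay = random.random(), random.uniform(0, 15)
    ok = accepts_pooled_ride(w, pool_discount=0.3,
                             delay_minutes=delay, max_delay_minutes=10.0)
    print(f"willingness={w:.2f} delay={delay:4.1f} min -> pooled={ok}")
```

Scaling decisions like this one across thousands of riders and vehicles, with time-varying electricity prices in the loop, is the kind of integrated outcome the simulation environment evaluates.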

"Our question is, how do you optimize the management of a fleet whose primary purpose is to provide rides and improve that fleet's dispatch and charging?" said Eric Wood, an NREL vehicle systems engineer.

HIVE was developed as part of NREL's Autonomous Energy Systems research to optimize the control of automated vehicle fleets: in other words, the routing and dispatch of automated electric vehicles.

The project imagines how price signals could influence dispatch algorithms. Consider one customer booking a commute through a ride-hailing app. Out of the fleet of vehicles nearby—variously charged and continually changing locations—which one should pick up the customer?

Now consider the movements of thousands of passengers in a city and thousands of vehicles providing transportation services. Between the number of agents, the moment-to-moment changes in energy supply and demand, and the broad diversity in vendor technologies, "we're playing with a lot of parameters," Wood said.
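A minimal version of that dispatch question can be written as a scoring rule over the nearby fleet: prefer a close vehicle with enough charge, and avoid sending a depleted one when electricity is expensive. The fields and weights below are assumptions for illustration, not the project's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    vehicle_id: str
    distance_km: float   # distance to the waiting customer
    soc: float           # state of charge, 0..1
    energy_price: float  # current $/kWh if it must recharge soon

def dispatch_score(v: Vehicle) -> float:
    """Lower is better: distance plus a penalty for dispatching a
    low-charge vehicle when recharging is costly (illustrative weights)."""
    recharge_penalty = (1.0 - v.soc) * v.energy_price * 10.0
    return v.distance_km + recharge_penalty

fleet = [
    Vehicle("ev-1", distance_km=1.2, soc=0.90, energy_price=0.12),
    Vehicle("ev-2", distance_km=0.4, soc=0.15, energy_price=0.30),  # closest, but nearly empty
    Vehicle("ev-3", distance_km=2.5, soc=0.80, energy_price=0.12),
]
best = min(fleet, key=dispatch_score)
print("dispatch:", best.vehicle_id)
```

Note that the closest vehicle is not chosen here: the price-aware penalty routes the ride to a better-charged vehicle slightly farther away, which is exactly the kind of trade-off a price signal can steer.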

But cutting through all the complexity, and in the midst of massive simulations, the end goal for vehicle-to-grid integration is consistent:

"The motivation for our work is that there are forecasts for significant load on the grid from the electrification of transportation," Wood said. "We want to ensure that this load is safely and effectively integrated, while meeting the expectations and needs of passengers."

The Port of Long Beach uses a mix of hydrogen fuel-cell and battery EVs, battery storage systems, and on-site renewable generation, all tightly coordinated. Credit: National Renewable Energy Laboratory
True Replacement without Caveats

Electric vehicles are not necessarily helpful to the grid, but they can be. As EVs become established in the transportation sector, NREL is studying how to smooth out any bumps that electrified mobility could cause on the grid and amplify any benefits to commuters and industry.

"It all comes down to load flexibility," Meintz said. "We're trying to decide how to optimally dispatch vehicle charging to meet quality-of-service considerations, while also minimizing charging costs."
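The load flexibility Meintz describes can be illustrated with a greedy scheduler: charge during the cheapest hours that still deliver the energy the vehicle needs before departure. The prices and vehicle parameters below are made up for the sketch; real dispatch would also respect demand charges and charger constraints.

```python
import math

def schedule_charging(prices, energy_needed_kwh, charger_kw):
    """Pick the cheapest hours before departure to deliver the energy.
    prices: $/kWh for each hour from plug-in until departure.
    Returns the sorted hour indices in which to charge (greedy sketch)."""
    hours_needed = math.ceil(energy_needed_kwh / charger_kw)
    cheapest = sorted(range(len(prices)), key=lambda h: prices[h])[:hours_needed]
    return sorted(cheapest)

# Hypothetical overnight prices: plug in at hour 0, depart after hour 7.
prices = [0.30, 0.28, 0.10, 0.08, 0.09, 0.12, 0.25, 0.32]
hours = schedule_charging(prices, energy_needed_kwh=28, charger_kw=10)
cost = sum(prices[h] * 10 for h in hours)
print("charge during hours", hours, f"for ${cost:.2f}")
```

The quality-of-service constraint is implicit: only hours before departure are candidates, so the vehicle is always full when the driver needs it, just charged at the cheapest feasible times.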

 


Why the promise of nuclear fusion is no longer a pipe dream


 

Key Points

ITER Nuclear Fusion is a tokamak project confining D-T plasma with magnets to achieve net energy gain and clean power.

✅ Tokamak magnetic confinement with high-temp superconducting coils

✅ Deuterium-tritium fuel cycle with on-site tritium breeding

✅ Targets net energy gain and grid-scale, low-carbon electricity

 

It sounds like the stuff of dreams: a virtually limitless source of energy that doesn’t produce greenhouse gases or radioactive waste. That’s the promise of nuclear fusion, often described as the holy grail of clean energy by proponents, which for decades has been nothing more than a fantasy due to insurmountable technical challenges. But things are heating up in what has turned into a race to create what amounts to an artificial sun here on Earth, one that can provide power for our kettles, cars and light bulbs.

Today’s nuclear power plants create electricity through nuclear fission, in which atoms are split. Nuclear fusion, however, involves combining atomic nuclei to release energy. It’s the same reaction that takes place at the Sun’s core. But overcoming the natural repulsion between atomic nuclei and maintaining the right conditions for fusion to occur isn’t straightforward. And doing so in a way that produces more energy than the reaction consumes has been beyond the grasp of the finest minds in physics for decades.

But perhaps not for much longer. Some major technical challenges have been overcome in the past few years and governments around the world have been pouring money into fusion power research as part of a broader green industrial revolution under way in several regions. There are also over 20 private ventures in the UK, US, Europe, China and Australia vying to be the first to make fusion energy production a reality.

“People are saying, ‘If it really is the ultimate solution, let’s find out whether it works or not,’” says Dr Tim Luce, head of science and operation at the International Thermonuclear Experimental Reactor (ITER), being built in southeast France. ITER is the biggest throw of the fusion dice yet.

Its $22bn (£15.9bn) build cost is being met by governments representing two-thirds of the world’s population, including the EU, the US, China and Russia. When it’s fired up in 2025, it’ll be the world’s largest fusion reactor. If it works, ITER will transform fusion power from the stuff of dreams into a viable energy source.


Constructing a nuclear fusion reactor
ITER will be a tokamak reactor – thought to be the best hope for fusion power. Inside a tokamak, a gas, often a hydrogen isotope called deuterium, is subjected to intense heat and pressure, forcing electrons out of the atoms. This creates a plasma – a superheated, ionised gas – that has to be contained by intense magnetic fields.

The containment is vital, as no material on Earth could withstand the intense heat (100,000,000°C and above) that the plasma has to reach so that fusion can begin. It’s close to 10 times the heat at the Sun’s core, and temperatures like that are needed in a tokamak because the gravitational pressure within the Sun can’t be recreated.

When atomic nuclei do start to fuse, vast amounts of energy are released. While the experimental reactors currently in operation release that energy as heat, in a fusion power plant the heat would be used to produce steam to drive turbines and generate electricity.

Tokamaks aren’t the only fusion reactors being tried. Another type of reactor uses lasers to heat and compress a hydrogen fuel to initiate fusion. In August 2021, one such device at the National Ignition Facility, at the Lawrence Livermore National Laboratory in California, generated 1.35 megajoules of energy. This record-breaking figure brings fusion power a step closer to net energy gain, but most hopes are still pinned on tokamak reactors rather than lasers.

In June 2021, China’s Experimental Advanced Superconducting Tokamak (EAST) reactor maintained a plasma for 101 seconds at 120,000,000°C. Before that, the record was 20 seconds. Ultimately, a fusion reactor would need to sustain the plasma indefinitely – or at least for eight-hour ‘pulses’ during periods of peak electricity demand.

A real game-changer for tokamaks has been the magnets used to produce the magnetic field. “We know how to make magnets that generate a very high magnetic field from copper or other kinds of metal, but you would pay a fortune for the electricity. It wouldn’t be a net energy gain from the plant,” says Luce.


One route for nuclear fusion is to use atoms of deuterium and tritium, both isotopes of hydrogen. They fuse under incredible heat and pressure, and the resulting products release energy as heat


The solution is to use high-temperature, superconducting magnets made from superconducting wire, or ‘tape’, that has no electrical resistance. These magnets can create intense magnetic fields and don’t lose energy as heat.

“High temperature superconductivity has been known about for 35 years. But the manufacturing capability to make tape in the lengths that would be required to make a reasonable fusion coil has just recently been developed,” says Luce. One of ITER’s magnets, the central solenoid, will produce a field of 13 tesla – 280,000 times Earth’s magnetic field.
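That comparison can be sanity-checked in one line, taking Earth's average surface field as roughly 46 microtesla (an assumed approximate value; the real field varies from about 25 to 65 µT by location):

```python
# Quick check of the central solenoid figure against Earth's magnetic field.
central_solenoid_tesla = 13.0
earth_field_tesla = 46e-6  # assumed average surface value
ratio = central_solenoid_tesla / earth_field_tesla
print(f"about {ratio:,.0f}x Earth's field")
```

The ratio lands on the order of 280,000, consistent with the figure quoted above.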

The inner walls of ITER’s vacuum vessel, where the fusion will occur, will be lined with beryllium, a metal that won’t contaminate the plasma much if the two come into contact. At the bottom is the divertor, which will keep the temperature inside the reactor under control.

“The heat load on the divertor can be as large as in a rocket nozzle,” says Luce. “Rocket nozzles work because you can get into orbit within minutes and in space it’s really cold.” In a fusion reactor, a divertor would need to withstand this heat indefinitely and at ITER they’ll be testing one made out of tungsten.

Meanwhile, in the US, the National Spherical Torus Experiment – Upgrade (NSTX-U) fusion reactor will be fired up in the autumn of 2022. One of its priorities will be to see whether lining the reactor with lithium helps to keep the plasma stable.


Choosing a fuel
Instead of just using deuterium as the fusion fuel, ITER will use deuterium mixed with tritium, another hydrogen isotope. The deuterium-tritium blend offers the best chance of getting significantly more power out than is put in. Proponents of fusion power say one reason the technology is safe is that the fuel needs to be constantly fed into the reactor to keep fusion happening, making a runaway reaction impossible.

Deuterium can be extracted from seawater, so there’s a virtually limitless supply of it. But only 20kg of tritium are thought to exist worldwide, so fusion power plants will have to produce it (ITER will develop technology to ‘breed’ tritium). While some radioactive waste will be produced in a fusion plant, it’ll have a lifetime of around 100 years, rather than the thousands of years from fission.

At the time of writing in September, researchers at the Joint European Torus (JET) fusion reactor in Oxfordshire were due to start their deuterium-tritium fusion reactions. “JET will help ITER prepare a choice of machine parameters to optimise the fusion power,” says Dr Joelle Mailloux, one of the scientific programme leaders at JET. These parameters will include finding the best combination of deuterium and tritium, and establishing how the current is increased in the magnets before fusion starts.

The groundwork laid down at JET should accelerate ITER’s efforts to accomplish net energy gain. ITER will produce ‘first plasma’ in December 2025 and be cranked up to full power over the following decade. Its plasma temperature will reach 150,000,000°C and its target is to produce 500 megawatts of fusion power for every 50 megawatts of input heating power.

“If ITER is successful, it’ll eliminate most, if not all, doubts about the science and liberate money for technology development,” says Luce. That technology development will be demonstration fusion power plants that actually produce electricity. “ITER is opening the door and saying, yeah, this works – the science is there.”

 

