First Solar exploring new panels in Silicon Valley

By Reuters


First Solar Inc, the world's largest thin-film solar panel producer, has set up a Silicon Valley lab for a thin-film technology with the potential for higher efficiency at a lower cost, sources said.

A big move toward panels based on CIGS, or copper indium gallium diselenide, technology would mark a major shift for First Solar into an area pioneered by others. It would also lend support to critics who see limited room to improve the efficiency of the company's current cadmium telluride panels.

The company has said its cad-tel panels can exceed 11 percent efficiency in converting sunlight to electricity, and it aims to improve that further.

First Solar acknowledged it has a small research and development unit in Silicon Valley but declined to comment further.

The Arizona-based company made a name for itself with thin-film panels that are less efficient than standard panels based on computer processor-style silicon semiconductors, but are much cheaper to produce per watt of output.

CIGS offers the hope of cheap thinfilm production costs with efficiency near the best of silicon solar cells, which can be as high as 20 percent.

First Solar, which has never discussed its Silicon Valley operations publicly, has had a research and development unit there for a couple of years, five industry sources with knowledge of the operation said.

The company is looking at low-cost processes to manufacture CIGS panels, one of the people said, though two other sources said the CIGS research appears to be in its initial stages.

Most First Solar operations are outside California, with its corporate headquarters in Tempe, Arizona, and factories in Ohio and Malaysia. The Silicon Valley unit, however, would benefit from the abundant engineering and semiconductor-related talent available in the San Francisco Bay Area.

In lab conditions, CIGS has matched the efficiency of silicon cells. The U.S. Department of Energy's National Renewable Energy Laboratory set an efficiency record of 19.9 percent for CIGS, nearing the record for crystalline silicon cells, but no company has come close to that in the factory.

First Solar Chief Financial Officer Jens Meyerhoff, when asked about new technologies at an industry conference last month, said the company monitors emerging technologies closely but hasn't seen major competitive threats.

"First Solar has an in-house effort that deals with assessment and evaluation of alternate material sets and technologies," Meyerhoff added.

The company has never publicly said it is doing research on CIGS-based solar panels.

Silicon Valley is home to many startups that are looking to commercialize CIGS-based panels, including MiaSole, Nanosolar and Solyndra.

CIGS has caught the attention of researchers and companies because it has the potential to match the photovoltaic (PV) efficiency of crystalline silicon cells.

"If you can do it, it's probably the cheapest PV solution out there," said Travis Bradford, director of the Prometheus Institute for Sustainable Development.

Ascent Solar, a publicly traded CIGS-based solar company, has said it has achieved efficiency as high as 11.7 percent for its modules.

Sales of CIGS panels are tiny compared to First Solar's sales of cad-tel panels, or those made by large competitors Suntech Power Holdings and SunPower Corp.

The cost of building CIGS cells is expected to fall to 50 cents per watt in the coming years, less than half the level of current silicon cells.

Related News

Energize America: Invest in a smarter electricity infrastructure

Smart Grid Modernization unites distributed energy resources, energy storage, EV charging, advanced metering, and bidirectional power flows to upgrade transmission and distribution infrastructure for reliability, resilience, cybersecurity, and affordable, clean power.

 

Key Points

Upgrading grid hardware and software to integrate DERs, storage, and EVs for a reliable and affordable power system.

✅ Enables DER, storage, and EV integration with bidirectional flows

✅ Improves reliability, resilience, and grid cybersecurity

✅ Requires early investment in sensors, inverters, and analytics

 

Much has been written, predicted, and debated in recent years about the future of the electricity system. The discussion isn’t simply about fossil fuels versus renewables, as often dominates mainstream energy discourse. Rather, the discussion is focused on something much larger and more fundamental: the very design of how and where electricity should be generated, delivered, and consumed.

Central to this discussion are arguments in support of, or in opposition to, the traditional model versus the decentralized or "emerging" model. But this is a false choice. The only choice that needs making is how best to transition to a smarter grid, and to do so in a reliable and affordable manner. And the most effective and immediate means to accomplish that is to encourage and facilitate early investment in grid-related infrastructure and technology.

The traditional, or centralized, model has evolved since the days of Thomas Edison, but the basic structure is relatively unchanged: generate electrons at a central power plant, transmit them over a unidirectional system of high-voltage transmission lines, and deliver them to consumers through local distribution networks. The decentralized, or emerging, model envisions a system that moves away from the central power station as the primary provider of electricity to a system in which distributed energy resources, energy storage, electric vehicles, peer-to-peer transactions, connected appliances and devices, and sophisticated energy usage, pricing, and load management software play a more prominent role.

Whether it's a fully decentralized and distributed power system, or the more likely centralized-decentralized hybrid, it is apparent that the way in which electricity is produced, delivered, and consumed will differ from today's traditional model. And yet, in many ways, the fundamental design and engineering that makes up today's electric grid will serve as the foundation for achieving a more distributed future. Indeed, as the transition to a smarter grid ramps up, the grid's basic structure will remain the underlying commonality, allowing the grid to serve as a facilitator that integrates emerging technologies, including EV charging stations, rooftop solar, demand-side management software, and other distributed energy resources, while maximizing their potential benefits.

A loose analogy here is the internet. In its infancy, the internet was used primarily for sending and receiving email, doing homework, and looking up directions. At the time, it was never fully understood that the internet would create a range of services and products that would impact nearly every aspect of everyday life from online shopping, booking travel, and watching television to enabling the sharing economy and the emerging “Internet of Things.”

Uber, Netflix, Amazon, and Nest would not be possible without the internet. But the rapid evolution of the internet did not occur without significant investment in internet-related infrastructure. From dial-up to broadband to Wi-Fi, companies have invested billions of dollars to update and upgrade the system, allowing the internet to maximize its offerings and give way to technological breakthroughs, innovative businesses, and ways to share and communicate like never before.  

The electric grid is similar; it is both the backbone and the facilitator upon which the future of electricity can be built. If the vision for a smarter grid is to deploy advanced energy technologies, create new business models, and transform the way electricity is produced, distributed, and consumed, then updating and modernizing existing infrastructure and building out new intelligent infrastructure need to be top priorities. But this requires money. To be sure, increased investment in grid-related infrastructure is the key component to transitioning to a smarter grid; a grid capable of supporting and integrating advanced energy technologies within a more digital grid architecture that will result in a cleaner, more modern and efficient, and reliable and secure electricity system.

The inherent challenges of deploying new technologies and resources — reliability, bidirectional flow, intermittency, visibility, and communication, to name a few — are not insurmountable, and they demonstrate exactly why federal and state authorities and electricity sector stakeholders should be planning for and making appropriate investment decisions now. My organization, the Alliance for Innovation and Infrastructure, will release a report Wednesday addressing these challenges facing our infrastructure, and the opportunities a distributed smart grid would provide. From upgrading traditional wires and poles and integrating smart power inverters and real-time sensors to deploying advanced communications platforms and energy analytics software, there are numerous technologies currently available and capable of being deployed that warrant investment consideration.

Making these and similar investments will help to identify and resolve reliability issues earlier, which in turn will create a stronger, more flexible grid that can support additional emerging technologies, resulting in a system better able to address integration challenges. Doing so will ease the electricity evolution in the long term and best realize the full reliability, economic, and environmental benefits that a smarter grid can offer.

 

Related News


Carbon capture: How can we remove CO2 from the atmosphere?

CO2 Removal Technologies address climate change via negative emissions, including carbon capture, reforestation, soil carbon, biochar, BECCS, DAC, and mineralization, helping meet Paris Agreement targets while managing costs, land use, and infrastructure demands.

 

Key Points

Methods to extract or sequester atmospheric CO2, combining natural and engineered approaches to limit warming.

✅ Includes reforestation, soil carbon, biochar, BECCS, DAC, mineralization

✅ Balances climate goals with costs, land, energy, and infrastructure

✅ Key to Paris Agreement targets under 1.5-2.0 °C warming

 

The world is, on average, 1.1 degrees Celsius warmer today than it was in 1850. If this trend continues, our planet will be 2 – 3 degrees hotter by the end of this century, according to the Intergovernmental Panel on Climate Change (IPCC).

The main reason for this temperature rise is the higher level of atmospheric carbon dioxide, which causes the atmosphere to trap heat radiating from the Earth toward space. Since 1850, the proportion of CO2 in the air has increased from 0.029% to 0.041% (288 ppm to 414 ppm).
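As a quick sanity check on the figures above: a volume share in percent converts to parts per million by a factor of 10,000, so the article's percentages and ppm values should line up to within rounding. A minimal sketch (the helper function is ours, purely for illustration):

```python
# Percent-to-ppm conversion: 1% of air by volume = 10,000 parts per million.
def percent_to_ppm(pct):
    """Convert a volume share in percent to parts per million."""
    return pct * 10_000

print(percent_to_ppm(0.029))  # pre-industrial CO2 share, roughly 290 ppm
print(percent_to_ppm(0.041))  # recent CO2 share, roughly 410 ppm
```

The small gaps against the article's 288 and 414 ppm reflect rounding in the quoted percentages.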

This is directly related to the burning of coal, oil and gas, which were created from forests, plankton and plants over millions of years. Back then, they stored CO2 and kept it out of the atmosphere, but as fossil fuels are burned, that CO2 is released. Other contributing factors include industrialized agriculture and slash-and-burn land-clearing techniques.

Over the past 50 years, more than 1,200 billion tons of CO2 have been emitted into the planet's atmosphere — 36.6 billion tons in 2018 alone. As a result, the global average temperature has risen by 0.8 degrees in just half a century.


Atmospheric CO2 should remain at a minimum
In 2015, the world came together to sign the Paris Climate Agreement, which set the goal of limiting the global temperature rise to well below 2 degrees — to 1.5 degrees, if possible.

The agreement effectively limits the amount of CO2 that can still be released into the atmosphere. According to the IPCC, if a maximum of around 300 billion tons more were emitted, there would be a 50% chance of limiting the global temperature rise to 1.5 degrees. If CO2 emissions remain the same, however, the CO2 'budget' would be used up in just seven years.
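The budget arithmetic above can be checked on the back of an envelope, using the article's own figures (a remaining budget of roughly 300 billion tons and 2018 emissions of 36.6 billion tons per year). At a constant emissions rate this gives on the order of eight years, close to the article's "just seven years"; the exact figure depends on the assumed annual rate:

```python
# Rough depletion time for the remaining CO2 'budget' (figures from the article).
budget_gt = 300.0   # remaining CO2 budget, billion tons
annual_gt = 36.6    # annual global emissions, billion tons (2018)

years_left = budget_gt / annual_gt
print(round(years_left, 1))  # on the order of eight years at a constant rate
```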

According to the IPCC's report on the 1.5 degree target, negative emissions are also necessary to achieve the climate targets.


Using reforestation to remove CO2
One planned measure for removing CO2 from the atmosphere is reforestation. According to studies, 3.6 billion tons of CO2 — around 10% of current CO2 emissions — could be captured every year during the trees' growth phase. However, a study by researchers at the Swiss Federal Institute of Technology, ETH Zurich, stresses that achieving this would require land areas equivalent in size to the entire US.

Young trees at a reforestation project in Africa (picture-alliance/OKAPIA KG, Germany)
Reforestation has potential to tackle the climate crisis by capturing CO2. But it would require a large amount of space


More humus in the soil
Humus in the soil stores a lot of carbon. But this is being released through the industrialization of agriculture. The amount of humus in the soil can be increased by using catch crops and plants with deep roots as well as by working harvest remnants back into the ground and avoiding deep plowing. According to a study by the German Institute for International and Security Affairs (SWP) on using targeted CO2 extraction as a part of EU climate policy, between two and five billion tons of CO2 could be saved with a global build-up of humus reserves.


Biochar shows promise
Some scientists see biochar as a promising technology for keeping CO2 out of the atmosphere. Biochar is created when organic material is heated and pressurized in a zero or very low-oxygen environment. In powdered form, the biochar is then spread on arable land where it acts as a fertilizer. This also increases the amount of carbon content in the soil. According to the same study from the SWP, global application of this technology could save between 0.5 and two billion tons of CO2 every year.


Storing CO2 in the ground
Storing CO2 deep in the Earth is already well-known and practiced on Norway's oil fields, for example. However, the process is still controversial, as storing CO2 underground can lead to earthquakes and long-term leakage. A different method is currently being practiced in Iceland, in which CO2 is injected into porous basalt rock, where it mineralizes into stone. Both methods still require more research, however.

Capturing CO2 to be held underground is done using chemical processes that extract the gas from the ambient air. This method is known as direct air capture (DAC) and is already practiced in parts of Europe. As there is no limit to the amount of CO2 that can be captured, it is considered to have great potential. However, the main disadvantage is the cost — currently around €550 ($650) per ton. Some scientists believe that mass production of DAC systems could bring prices down to €50 per ton by 2050. It is already considered a key technology for future climate protection.

The inside of a carbon capture facility in the Netherlands (RWE AG)
Carbon capture facilities are still very expensive and take up a huge amount of space

Another way of extracting CO2 from the air is via biomass. Plants grow and are then burned in a power plant to produce electricity. CO2 is extracted from the power plant's exhaust gas and stored deep in the Earth.

The big problem with this technology, known as bio-energy carbon capture and storage (BECCS), is the huge amount of space required. According to Felix Creutzig from the Mercator Institute on Global Commons and Climate Change (MCC) in Berlin, it will therefore only play "a minor role" in CO2 removal technologies.


CO2 bound by rock minerals
In this process, carbonate and silicate rocks are mined, ground up and scattered on agricultural land or on the surface water of the ocean, where they absorb CO2 over a period of years. According to researchers, by the middle of this century it would be possible to capture two to four billion tons of CO2 every year using this technique. The main challenges are the quantities of stone required and building the necessary infrastructure; concrete deployment plans have not yet been developed.


Not an option: Fertilizing the sea with iron
The idea is to use iron to fertilize the ocean, thereby increasing its nutrient content, which would allow plankton to grow and capture more CO2. However, both the process and its possible side effects are very controversial. "This is rarely treated as a serious option in research," conclude SWP study authors Oliver Geden and Felix Schenuit.

 

Related News


California lawmakers plan to overturn income-based utility charges

California income-based utility charges face bipartisan pushback as the PUC weighs fixed fees for PG&E, SDG&E, and Southern California Edison, reshaping rate design, electricity affordability, energy equity, and privacy amid proposed per-kWh reductions.

 

Key Points

PUC-approved fixed fees tied to household income for PG&E, SDG&E, and SCE, offset by lower per-kWh rates.

✅ Proposed fixed fees: $51 SCE, $73.31 SDG&E, $50.92 PG&E

✅ Critics warn admin, privacy, legal risks and higher bills for savers

✅ Backers say lower-income pay less; kWh rates cut ~33% in PG&E area

 

Efforts are being made across California's political landscape to derail a legislative initiative that introduced income-based utility charges for customers of Southern California Edison and other major utilities.

Legislators from both the Democratic and Republican parties have proposed bills aimed at nullifying the 2022 legislation that established a sliding scale for utility charges based on customer income, a decision made in a late-hour session and subsequently endorsed by Governor Gavin Newsom.

The plan, pending final approval from the state Public Utilities Commission (PUC) — all of whose current members were appointed by Governor Newsom — would enable utilities like Southern California Edison, San Diego Gas & Electric, and PG&E to apply new income-based charges as early as this July.

Among the state legislators pushing back against the income-based charge scheme are Democrats Jacqui Irwin and Marc Berman, along with Republicans Janet Nguyen, Kelly Seyarto, Rosilicie Ochoa Bogh, Scott Wilk, Brian Dahle, Shannon Grove, and Roger Niello.

A cadre of specialists, including economist Ahmad Faruqui who has advised all three utilities implicated in the fee proposal, have outlined several concerns regarding the PUC's pending decision.

Faruqui and his colleagues argue that the proposed charges are excessively high in comparison with national norms, potentially leading to administrative challenges, legal disputes, and negative unintended outcomes, such as penalizing energy-conserving consumers.

Advocates for the income-based fee model, including The Utility Reform Network (TURN) and the Natural Resources Defense Council, argue it would result in higher charges for wealthier consumers and reduced fees for those with lower incomes. They also believe that the utilities plan to decrease per-kilowatt-hour rates to balance out the new fees.

However, even supporters like TURN and the Natural Resources Defense Council acknowledge that the income-based fee model is not a comprehensive solution to making soaring electricity bills more affordable.

If implemented, California would have the highest income-based utility fees in the country, with averages far surpassing the national average of $11.15, as reported by EQ Research:

  • Southern California Edison would charge $51.
  • San Diego Gas & Electric would levy $73.31.
  • PG&E would set fees at $50.92.

The proposal has raised concerns among state legislators about the additional financial burden on Californians already struggling with high electricity costs.

Critics highlight several practical challenges, including the PUC's task of assessing customers' income levels, a process fraught with privacy concerns, potential errors, and constitutional questions regarding access to tax information.

Economists have pointed out further complications, such as the difficulty in accurately assessing incomes for out-of-state property owners and the variability of customers' incomes over time.

The proposed income-based charges would differ by income bracket within the PG&E service area, for example, with lower-income households facing lower fixed charges and higher-income households facing higher charges, alongside a proposed 33% reduction in electricity rates to help mitigate the fixed charge impact.

Yet, the economists warn that most customers, particularly low-usage customers, could end up paying more, essentially rewarding higher consumption and penalizing efficiency.
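The critics' arithmetic can be sketched with the PG&E figures reported above (the $50.92 fixed fee and the roughly 33% rate reduction); the $0.40/kWh baseline rate below is an assumed value for illustration only, not a figure from the proposal:

```python
# Illustrative bill comparison under a fixed fee plus reduced volumetric rate.
FIXED_FEE = 50.92                   # proposed monthly fixed charge, USD (PG&E)
OLD_RATE = 0.40                     # assumed current per-kWh rate, USD (illustrative)
NEW_RATE = OLD_RATE * (1 - 0.33)    # after the proposed ~33% rate cut

def old_bill(kwh):
    """Monthly bill under the current volumetric-only structure."""
    return OLD_RATE * kwh

def new_bill(kwh):
    """Monthly bill under the proposed fixed fee plus reduced rate."""
    return FIXED_FEE + NEW_RATE * kwh

# A low-usage household pays more; a high-usage household pays less.
for kwh in (200, 1000):
    print(kwh, round(old_bill(kwh), 2), round(new_bill(kwh), 2))
```

Under these assumptions a 200 kWh/month household's bill rises while a 1,000 kWh/month household's bill falls, which is the efficiency-penalizing effect the economists describe.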

This legislative approach, they caution, could inadvertently increase costs for moderate users across all income brackets, undermining the very goals it aims to achieve by effectively rewarding energy inefficiency.

 

Related News


Toronto Prepares for a Surge in Electricity Demand as City Continues to Grow

Toronto Electricity Demand Growth underscores IESO projections of rising peak load by 2050, driven by population growth, electrification, new housing density, and tech economy, requiring grid modernization, transmission upgrades, demand response, and local renewable energy.

 

Key Points

It refers to the projected near-doubling of Toronto's peak load by 2050, driven by electrification and urban growth.

✅ IESO projects peak demand nearly doubling by 2050

✅ Drivers: population, densification, EVs, heat pumps

✅ Solutions: efficiency, transmission, storage, demand response

 

Toronto faces a significant challenge in meeting the growing electricity needs of its expanding population and ambitious development plans. According to a new report from Ontario's Independent Electricity System Operator (IESO), Toronto's peak electricity demand is expected to nearly double by 2050. This highlights the need for proactive steps to secure adequate electricity supply amidst the city's ongoing economic and population growth.
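The IESO's doubling projection implies a steady compound growth rate. A rough sketch, assuming a 26-year horizon (2024 to 2050 — our assumption; the report's base year is not stated here):

```python
# Average annual growth implied by peak demand roughly doubling by 2050.
years = 2050 - 2024                 # assumed planning horizon
annual_growth = 2 ** (1 / years) - 1
print(f"{annual_growth:.1%} per year")  # works out to about 2.7%
```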


Key Factors Driving Demand

Several factors are contributing to the projected increase in electricity demand:

Population Growth: Toronto is one of the fastest-growing cities in North America, and this trend is expected to continue. More residents mean more need for housing, businesses, and other electricity-consuming infrastructure.

  • New Homes and Density: The city's housing strategy calls for 285,000 new homes within the next decade, including significant densification in existing neighbourhoods. High-rise buildings in urban centers are generally more energy-intensive than low-rise residential developments.
  • Economic Development: Toronto's robust economy, a hub for tech and innovation, attracts new businesses, including energy-intensive AI data centers that fuel further demand for electricity.
  • Electrification: The push to reduce carbon emissions is driving the electrification of transportation and home heating, further increasing pressure on Toronto's electricity grid.


Planning for the Future

Ontario and the City of Toronto recognize the urgency of securing stable and reliable electricity supplies to support continued growth and prosperity without sacrificing affordability. Officials are collaborating to develop a long-term plan that focuses on:

  • Energy Efficiency: Efforts aim to reduce wasteful electricity usage through upgrades to existing buildings, promoting energy-efficient appliances, and implementing smart grid technologies. These will play a crucial role in curbing overall demand.
  • New Infrastructure: Significant investments in building new electricity generation, transmission lines, and substations will be necessary to meet the projected demands of Toronto's future.
  • Demand Management: Programs incentivizing energy conservation during peak hours will help to avoid strain on the grid and reduce the need to build expensive power plants only used at peak demand times.


Challenges Ahead

The path ahead isn't without its hurdles. Building new power infrastructure in a dense urban environment like Toronto can be time-consuming, expensive, and sometimes disruptive. Residents and businesses might worry about potential rate increases required to fund these necessary investments.


Opportunity for Innovation

The IESO and the city view the situation as an opportunity to embrace innovative solutions. Exploring renewable energy sources within and near the city, developing local energy storage systems, and promoting distributed energy generation such as rooftop solar, where power is created near the point of use, are all vital strategies for meeting needs in a sustainable way.

Toronto's electricity future depends heavily on proactive planning and investment in modernizing its power infrastructure. The decisions made now will determine whether the city can support economic growth, meet its climate goals, and ensure that the lights stay on for all Torontonians as the city continues to expand.
 

 

Related News


Electricity demand set to reduce if UK workforce self-isolates

UK Energy Networks Coronavirus Contingency outlines ESO's lockdown electricity demand forecast, reduced industrial and commercial load, rising domestic use, Ofgem guidance needs, grid resilience, control rooms, mutual aid, and backup centers.

 

Key Points

A coordinated plan with ESO forecasts, safeguards, and mutual aid to keep power and gas services during a lockdown.

✅ ESO forecasts lower industrial use, higher domestic demand

✅ Control rooms protected; backup sites and cross-trained staff

✅ Mutual aid and Ofgem coordination bolster grid resilience

 

National Grid ESO is predicting a reduction in electricity demand in the event that the coronavirus spread prompts a lockdown across the country.

Its analysis shows the reduction in commercial and industrial use would outweigh an upsurge in domestic demand as people stay home.

The prediction was included in an update from the Energy Networks Association (ENA), in which it sought to reassure the public that contingency plans are in place to ensure services are unaffected by the coronavirus spread.

The body, which represents the UK's electricity and gas network companies, said "robust measures" had been put in place to protect control rooms and contact centres. To provide additional resilience, engineers have been trained across multiple disciplines, and backup centres exist should operations need to be moved if, for example, deep cleaning is required, the ENA said.

Networks also have industry-wide mutual aid arrangements for the people and equipment needed to keep gas and electricity flowing.

ENA chief executive David Smith said: "The UK's electricity and gas network is one of the most reliable in the world, and network operators are working with the authorities to ensure that their contingency plans are reviewed and delivered in accordance with the latest expert advice. We are following this advice closely and reassuring customers that energy networks are continuing to operate as normal for the public."

Utility Week spoke to a senior figure at one of the networks, who reiterated the robust measures in place to keep the lights on. However, they pleaded for more clarity from Ofgem and the government on how their workers will be treated if the coronavirus spread becomes a pandemic in the UK.

 

Related News


Extensive Disaster Planning at Electric & Gas Utilities Means Lights Will Stay On

Utility Pandemic Preparedness strengthens grid resilience through continuity planning, critical infrastructure protection, DOE-DHS coordination, onsite sequestration, skeleton crews, and deferred maintenance to ensure reliable electric and gas service for commercial and industrial customers.

 

Key Points

Plans that sustain grid operations during outbreaks using staffing limits, access controls, and deferred maintenance.

✅ Deferred maintenance and restricted site access

✅ Onsite sequestering and skeleton crew operations

✅ DOE-DHS coordination and control center staffing

 

Commercial and industrial businesses can rest assured that the current pandemic poses no real threat to our utilities, as disaster planning has been central to electric and gas utilities in recent years, writes Forbes. Beginning a decade ago, the utility and energy industries developed detailed pandemic plans, which include putting off maintenance and routine activities until the worst of a pandemic has passed, restricting site access to essential personnel, and being able to run on a skeleton crew as more and more people become ill.

One possible outcome of the current situation is that the US electric industry may require essential staff to live onsite at power plants and control centers if the outbreak worsens; bedding, food and other supplies are being stockpiled, Reuters reported. The Great River Energy cooperative, for example, has had a plan to sequester essential staff in place since the H1N1 flu crisis in 2009. The cooperative, which runs 10 power plants in Minnesota, says its disaster planning ensures it has enough cots, blankets and other necessities on site to keep staff healthy.

Electricity providers are now taking part in twice-weekly phone calls with officials at the DOE, the Department of Homeland Security, and other agencies, according to the Los Angeles Times. By planning for a variety of worst-case scenarios, including weeks-long restorations after major storms, "I have confidence that the sector will be prepared to respond no matter how this evolves," says Scott Aaronson, VP of security and preparedness at the Edison Electric Institute.

 

Related News

