Wind turbines making us sick: protesters

By Toronto Star


The Liberal government has shot down an opposition motion to place an immediate moratorium on wind turbines until their health effects are further studied.

Nearly 250 people descended on Queen's Park to protest the presence of the turbines near residential areas. They claim the turbines cause low-frequency noise and have sickened 106 Ontario residents, causing a variety of health ailments ranging from hypertension to sleeplessness and nosebleeds in children.

People are suffering and their concerns are being dismissed, Dr. Robert McMurtry told the protesters.

“I see their lack of energy and things they are feeling,” the London-area surgeon said. “But the thing I can’t tolerate… is the steadfast denial of the complaints.”

The Progressive Conservatives brought forward a motion calling for an immediate moratorium on the turbines until health studies are completed. The motion was defeated by the majority Liberals.

A moratorium is unnecessary, Premier Dalton McGuinty said. “Wind turbines have been up and running for decades in dozens, if not hundreds, of jurisdictions,” he told reporters. “We are relatively late coming to electricity generation by means of wind power.”

Ontario has some of the most rigorous wind-power standards in North America, and the province is funding a university research chair to study the long-term effects of the turbines, he added.

The world needs to figure out new ways of generating clean energy, the premier said.

“There are no real easy decisions in all this. We’ve decided it would be a good thing to get rid of coal. It makes our kids sick and contributes to global warming,” he said.

During Question Period, PC MPP Joyce Savoline (Burlington) asked the premier why people are blocked from having a say in the placement of industrial wind farms but are allowed to voice opinions on where shopping malls are built.

“Why does the premier think Dalton knows best when it comes to putting large industrial wind turbines in place?” she asked.

Energy Minister Brad Duguid said there is “plenty of room for consultation” on the projects and that municipalities are involved in the decision making.

On the front steps of the Legislature, protesters accused the Liberals of fast-tracking renewable energy projects in the name of the Green Energy Act – legislation passed last May to bolster the green economy. They say the province has bypassed the planning powers of municipalities.

“There appears to be significant scientific uncertainty about how close you can have industrial turbines to where people live,” said Eric Gillespie, an environmental lawyer. He is representing Ian Hanna, a citizen whose application for judicial review of the Green Energy Act as it applies to wind turbines will be heard in September.

Currently, the turbines must be set back 550 metres from homes, but Gillespie said to be safe the distance should be nearly 2 kilometres. “If you are going to have industrial turbines, you need to site them in locations you can say, with some degree of certainty, are safe.”

Related News

Purdue: As Ransomware Attacks Increase, New Algorithm May Help Prevent Power Blackouts

Infrastructure Security Algorithm prioritizes cyber defense for power grids and critical infrastructure, mitigating ransomware, blackout risks, and cascading failures by guiding utilities, regulators, and cyber insurers on optimal security investment allocation.

 

Key Points

An algorithm that optimizes security spending to cut ransomware and blackout risks across critical infrastructure.

✅ Guides utilities on optimal security allocation

✅ Uses incentives to correct human risk biases

✅ Prioritizes assets to prevent cascading outages

 

Millions of people could suddenly lose electricity if a ransomware attack just slightly tweaked energy flow onto the U.S. power grid, as past U.S. utility intrusions have shown.

No single power utility company has enough resources to protect the entire grid, but maybe all 3,000 of the grid's utilities could fill in the most crucial security gaps if there were a map showing where to prioritize their security investments.

Purdue University researchers have developed an algorithm to create that map. Using this tool, regulatory authorities or cyber insurance companies could establish a framework for protecting the U.S. power grid that guides the security investments of power utility companies to parts of the grid at greatest risk of causing a blackout if hacked.

Power grids are a type of critical infrastructure, which is any network - whether physical like water systems or virtual like health care record keeping - considered essential to a country's function and safety. The biggest ransomware attacks in history have happened in the past year, affecting most sectors of critical infrastructure in the U.S. such as grain distribution systems in the food and agriculture sector and the Colonial Pipeline, which carries fuel throughout the East Coast, prompting increased military preparation for grid hacks in the U.S.

With this trend in mind, Purdue researchers evaluated the algorithm in the context of various types of critical infrastructure in addition to the power sector, including electricity-sector IoT devices that interface with grid operations. The goal is that the algorithm would help secure any large and complex infrastructure system against cyberattacks.

"Multiple companies own different parts of infrastructure. When ransomware hits, it affects lots of different pieces of technology owned by different providers, so that's what makes ransomware a problem at the state, national and even global level," said Saurabh Bagchi, a professor in the Elmore Family School of Electrical and Computer Engineering and Center for Education and Research in Information Assurance and Security at Purdue. "When you are investing security money on large-scale infrastructures, bad investment decisions can mean your power grid goes out, or your telecommunications network goes out for a few days."

Protecting infrastructure from hacks by improving security investment decisions

The researchers tested the algorithm in simulations of previously reported hacks to four infrastructure systems: a smart grid, industrial control system, e-commerce platform and web-based telecommunications network. They found that use of this algorithm results in the optimal allocation of security investments for reducing the impact of a cyberattack.

The team's findings appear in a paper presented at this year's IEEE Symposium on Security and Privacy, the premier conference in the area of computer security. The team comprises Purdue professors Shreyas Sundaram and Timothy Cason and former PhD students Mustafa Abdallah and Daniel Woods.

"No one has an infinite security budget. You must decide how much to invest in each of your assets so that you gain a bump in the security of the overall system," Bagchi said.

The power grid, for example, is so interconnected that the security decisions of one power utility company can greatly impact the operations of other electrical plants. If the computers controlling one area's generators don't have adequate security protection, as seen when Russian hackers accessed control rooms at U.S. utilities, then a hack to those computers would disrupt energy flow to another area's generators, forcing them to shut down.

Since not all of the grid's utilities have the same security budget, it can be hard to ensure that critical points of entry to the grid's controls get the most investment in security protection.

The algorithm that Purdue researchers developed would incentivize each security decision maker to allocate security investments in a way that limits the cumulative damage a ransomware attack could cause. An attack on a single generator, for instance, would have less impact than an attack on the controls for a network of generators, which sophisticated grid-disruption malware can target at scale.

Building an algorithm that considers the effects of human behavior

Bagchi's research shows how to increase cybersecurity in ways that address the interconnected nature of critical infrastructure but don't require an overhaul of the entire infrastructure system to be implemented.

As director of Purdue's Center for Resilient Infrastructures, Systems, and Processes, Bagchi has worked with the U.S. Department of Defense, Northrop Grumman Corp., Intel Corp., Adobe Inc., Google LLC and IBM Corp. on adopting solutions from his research. Bagchi's work has revealed the advantages of establishing an automatic response to attacks and has led to key innovations against ransomware threats, such as more effective ways to make decisions about backing up data; analyses like Symantec's Dragonfly report underscore the energy-sector risks that work addresses.

There's a compelling reason why incentivizing good security decisions would work, Bagchi said. He and his team designed the algorithm based on findings from the field of behavioral economics, which studies how people make decisions with money.

"Before our work, not much computer security research had been done on how behaviors and biases affect the best defense mechanisms in a system. That's partly because humans are terrible at evaluating risk and an algorithm doesn't have any human biases," Bagchi said. "But for any system of reasonable complexity, decisions about security investments are almost always made with humans in the loop. For our algorithm, we explicitly consider the fact that different participants in an infrastructure system have different biases."

To develop the algorithm, Bagchi's team started by playing a game. They ran a series of experiments analyzing how groups of students chose to protect fake assets with fake investments. As in past studies in behavioral economics, they found that most study participants guessed poorly which assets were the most valuable and should be protected from security attacks. Most study participants also tended to spread out their investments instead of allocating them to one asset, even when they were told which asset was the most vulnerable to an attack.

Using these findings, the researchers designed an algorithm that could work two ways: either security decision makers pay a tax or fine when they make decisions that are less than optimal for the overall security of the system, or they receive a payment for investing optimally.
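To make that two-sided mechanism concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption rather than the published Purdue algorithm: the asset names, the risk weights, the risk-proportional definition of an optimal split, and the deviation-based fine.

```python
# A minimal, hypothetical sketch of the tax-or-payment mechanism described
# above. Asset names, risk weights, and the deviation-based fine are
# illustrative assumptions, not the published Purdue algorithm.

def optimal_allocation(asset_risk, budget):
    """Split the budget in proportion to each asset's systemic risk."""
    total = sum(asset_risk.values())
    return {asset: budget * risk / total for asset, risk in asset_risk.items()}

def incentive(actual, optimal, rate=0.5):
    """Negative return value = a fine for deviating from the optimal split."""
    deviation = sum(abs(actual[a] - optimal[a]) for a in optimal)
    return -rate * deviation

# Made-up systemic risk scores: network-level controls put a whole region
# at risk of cascading outages, so they carry the highest weight.
risk = {"network_controls": 0.6, "billing_system": 0.3, "single_generator": 0.1}
opt = optimal_allocation(risk, budget=100.0)

# The behavioral finding from the study: people tend to spread investments
# evenly. Under this mechanism, that habit draws a fine.
even = {asset: 100.0 / 3 for asset in risk}
print(opt)                   # {'network_controls': 60.0, 'billing_system': 30.0, ...}
print(incentive(even, opt))  # about -26.67: the fine for the even split
```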

"Right now, fines are levied as a reactive measure if there is a security incident. Fines or taxes don't have any relationship to the security investments or data of the different operators in critical infrastructure," Bagchi said.

In the researchers' simulations of real-world infrastructure systems, the algorithm successfully minimized the likelihood of losing assets to an attack that would decrease the overall security of the infrastructure system.

Bagchi's research group is working to make the algorithm more scalable and able to adapt to an attacker who may make multiple attempts to hack into a system. The researchers' work on the algorithm is funded by the National Science Foundation, the Wabash Heartland Innovation Network and the Army Research Lab.

Cybersecurity is an area of focus through Purdue's Next Moves, a set of initiatives that works to address some of the greatest technology challenges facing the U.S. Purdue's cybersecurity experts offer insights and assistance to improve the protection of power plants, electrical grids and other critical infrastructure.

 

Related News


Is Ontario's Power Cost-Effective?

Ontario Nuclear Power Costs highlight LCOE, capex, refurbishment outlays, and waste management, compared with renewables, grid reliability, and emissions targets, informing Australia and Peter Dutton on feasibility, timelines, and electricity prices.

 

Key Points

Ontario's nuclear power costs include high capex and LCOE from refurbishments and waste, offset by reliable, low-emission baseload.

✅ Refurbishment and maintenance drive lifecycle and LCOE variability.

✅ High capex and long timelines affect consumer electricity prices.

✅ Low emissions, but waste and safety compliance add costs.

 

Australian opposition leader Peter Dutton recently lauded Canada’s use of nuclear power as a model for Australia’s energy future. His praise comes as part of a broader push to incorporate nuclear energy into Australia’s energy strategy, which he argues could help address the country's energy needs and climate goals. However, the question arises: Is Ontario’s experience with nuclear power as cost-effective as Dutton suggests?

Dutton’s endorsement of Canada’s nuclear power strategy highlights a belief that nuclear energy could provide a stable, low-emission alternative to fossil fuels. He has pointed to Ontario’s substantial reliance on nuclear power, and the province’s exploration of new large-scale nuclear projects, as an example of how such an energy mix might benefit Australia. The province’s energy grid, which integrates a significant amount of nuclear power, is often cited as evidence that nuclear energy can be a viable component of a diversified energy portfolio.

The appeal of nuclear power lies in its ability to generate large amounts of electricity with minimal greenhouse gas emissions. This characteristic aligns with Australia’s climate goals, which emphasize reducing carbon emissions to combat climate change. Dutton’s advocacy for nuclear energy is based on the premise that it can offer a reliable and low-emission option compared to the fluctuating availability of renewable sources like wind and solar.

However, while Dutton’s enthusiasm for the Canadian model reflects its perceived successes, including recent concerns about Ontario’s grid getting dirtier amid supply changes, a closer look at Ontario’s nuclear energy costs raises questions about the financial feasibility of adopting a similar strategy in Australia. Despite the benefits of low emissions, the economic aspects of nuclear power remain complex and multifaceted.

In Ontario, the cost of nuclear power has been a topic of considerable debate. While the province benefits from a stable supply of electricity due to its nuclear plants, studies warn of a growing electricity supply gap in coming years. Ontario’s experience reveals that nuclear power involves significant capital expenditures, including the costs of building reactors, maintaining infrastructure, and ensuring safety standards. These expenses can be substantial and often translate into higher electricity prices for consumers.

The cost of maintaining existing nuclear reactors in Ontario has been a particular concern. Many of these reactors are aging and require costly upgrades and maintenance to continue operating safely and efficiently. These expenses can add to the overall cost of nuclear power, impacting the affordability of electricity for consumers.

Moreover, the development of new nuclear projects, as seen with Bruce C project exploration in Ontario, involves lengthy and expensive construction processes. Building new reactors can take over a decade and requires significant investment. The high initial costs associated with these projects can be a barrier to their economic viability, especially when compared to the rapidly decreasing costs of renewable energy technologies.

In contrast, the cost of renewable energy has been falling steadily, even as debates over nuclear power’s trajectory in Europe continue, making it a more attractive option for many jurisdictions. Solar and wind power, while variable and dependent on weather conditions, have seen dramatic reductions in installation and operational costs. These lower costs can make renewables more competitive compared to nuclear energy, particularly when considering the long-term financial implications.

Dutton’s praise for Ontario’s nuclear power model also overlooks some of the environmental and logistical challenges associated with nuclear energy. While nuclear power generates low emissions during operation, it produces radioactive waste that requires long-term storage solutions. The management of nuclear waste poses significant environmental and safety concerns, as well as additional costs for safe storage and disposal.

Additionally, the potential risks associated with nuclear power, including the possibility of accidents, contribute to the complexity of its adoption. The safety and environmental regulations surrounding nuclear energy are stringent and require continuous oversight, adding to the overall cost of maintaining nuclear facilities.

As Australia contemplates integrating nuclear power into its energy mix, it is crucial to weigh these financial and environmental considerations. While the Canadian model provides valuable insights, the unique context of Australia’s energy landscape, including its existing infrastructure, energy needs, and the costs of scrapping coal-fired electricity in comparable jurisdictions, must be taken into account.

In summary, while Peter Dutton’s endorsement of Canada’s nuclear power model reflects a belief in its potential benefits for Australia’s energy strategy, the cost-effectiveness of Ontario’s nuclear power experience is more nuanced than it may appear. The high capital and maintenance costs associated with nuclear energy, combined with the challenges of managing radioactive waste and ensuring safety, present significant considerations. As Australia evaluates its energy future, a comprehensive analysis of both the benefits and drawbacks of nuclear power will be essential to making informed decisions about its role in the country’s energy strategy.

 

Related News


Competition in Electricity Has Been Good for Consumers and Good for the Environment

Electricity Market Competition drives lower wholesale prices, stable retail rates, better grid reliability, and faster emissions cuts as deregulation and renewables adoption pressure utilities, improve efficiency, and enhance consumer choice in power markets.

 

Key Points

Electricity market competition opens supply to rivals, lowering prices, improving reliability, and reducing emissions.

✅ Wholesale prices fell faster in competitive markets

✅ Retail rates rose less than in monopoly states

✅ Fewer outages, shorter durations, improved reliability

 

By Bernard L. Weinstein

Electricity used to be boring.  Public utilities that provided power to homes and businesses were regulated monopolies and, by law, guaranteed a fixed rate-of-return on their generation, transmission, and distribution assets. Prices per kilowatt-hour were set by utility commissions after lengthy testimony from power companies, wanting higher rates, and consumer groups, wanting lower rates.

About 25 years ago, the electricity landscape started to change as economists and others argued that competition could lead to lower prices and stronger grid reliability. Opponents of competition argued that consumers weren’t knowledgeable enough about power markets to make intelligent choices in a competitive pricing environment. Nonetheless, today 20 states have total or partial competition for electricity, allowing independent power generators to compete in wholesale markets and retail electric providers (REPs) to compete for end-use customers, a dynamic echoed by the Alberta electricity market elsewhere in North America. (Transmission, in all states, remains a regulated natural monopoly).

A recent study by the non-partisan Pacific Research Institute (PRI) provides compelling evidence that competition in power markets has been a boon for consumers. Using data from the U.S. Energy Information Administration (EIA), PRI’s researchers found that wholesale electricity prices in competitive markets have been generally declining or flat over the last five years, prompting discussions of free electricity business models. For example, compared to 2015, wholesale power prices in New England have dropped more than 44 percent, those in most Mid-Atlantic States have fallen nearly 42 percent, and in New York City they’ve declined by nearly 45 percent. Wholesale power costs have also declined in monopoly states, but at a considerably slower rate.

As for end-users, states that have competitive retail electricity markets, where consumers can shop for electricity as they do in Texas, have seen smaller price increases than monopoly states. Again, using EIA data, PRI found that in 14 competitive jurisdictions, retail prices essentially remained flat between 2008 and 2020. By contrast, retail prices jumped an average of 21 percent in monopoly states. The ten states with the largest retail price increases all operate under monopoly-based frameworks. A 2017 report from the Retail Energy Supply Association found customers in states that still have monopoly utilities saw their average energy prices increase nearly 19 percent from 2008 to 2017, while prices fell 7 percent in competitive markets over the same period.
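As a back-of-envelope illustration of those percentages, the sketch below applies them to a hypothetical $100 monthly bill; note the PRI figure covers 2008-2020 and the RESA figure 2008-2017, so the two endpoints are not strictly comparable.

```python
# Illustrative arithmetic only: the $100 baseline bill is hypothetical;
# the percentages are the PRI (2008-2020) and RESA (2008-2017) figures
# quoted above, which cover slightly different periods.

baseline = 100.0                    # hypothetical 2008 monthly bill, dollars

monopoly_bill = baseline * 1.21     # +21% average increase, monopoly states
competitive_bill = baseline * 0.93  # -7% decline, competitive markets

print(monopoly_bill)     # 121.0
print(competitive_bill)  # 93.0
```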

The PRI study also observed that competition has improved grid reliability, notwithstanding the recent power disruptions in California and Texas and disruptions in coal and nuclear sectors across the U.S. Looking at two common measures of grid resiliency, PRI’s analysis found that power interruptions were 10.4 percent lower in competitive states while the duration of outages was 6.5 percent lower.

Citing data from the EIA between 2008 and 2018, PRI reports that greenhouse gas emissions in competitive states declined on average 12.1 percent compared to 7.3 percent in monopoly states. This result is not surprising, and debates over whether Israeli power supply competition can bring cheaper electricity mirror these dynamics. In a competitive wholesale market, independent power producers have an incentive to seek out lower-cost options, including subsidized renewables like wind and solar. By contrast, generators in monopoly markets have no such incentive, as they can pass on higher costs to end-users. Perhaps the most telling case is in the monopoly state of Georgia, where the cost to build nuclear Plant Vogtle has doubled from its original estimate of $14 billion 12 years ago. Overruns are estimated to cost Georgia ratepayers an average of $854, and there is no definite date for this facility to come online. This type of mismanagement doesn’t occur in competitive markets.

Unfortunately, some critics are attempting to halt the momentum for electricity competition and have pointed to last winter’s “deep freeze” in Texas that left several million customers without power for up to a week. But this example is misplaced. Power outages in February were the result of unprecedented and severe weather conditions affecting electricity generation and fuel supply, and numerous proposals to improve Texas grid reliability have focused on weatherization and fuel resilience; the state simply did not have enough access to natural gas and wind generation to meet demand. Competitive power markets were not a factor.

The benefits of wholesale and retail competition in power markets are incontrovertible. Evidence shows that households and businesses in competitive states are paying less for electricity while grid reliability has improved. The facts also suggest that wholesale and retail competition can lead to faster reductions in greenhouse gas emissions. In short, competition in power markets is good for consumers and good for the environment.

Bernard L. Weinstein is emeritus professor of applied economics at the University of North Texas, former associate director of the Maguire Energy Institute at Southern Methodist University, and a fellow of Goodenough College, London. He wrote this for InsideSources.com.

 

Related News


Current Model For Storing Nuclear Waste Is Incomplete

Nuclear Waste Corrosion accelerates as stainless steel, glass, and ceramics interact in aqueous conditions, driving localized corrosion in repositories like Yucca Mountain, according to Nature Materials research on high-level radioactive waste storage.

 

Key Points

Degradation of waste forms and canisters from water-driven chemistry, causing accelerated, localized corrosion in storage.

✅ Stainless steel-glass contact triggers severe localized attack

✅ Ceramics and steel co-corrosion observed under aqueous conditions

✅ Yucca Mountain-like chemistry accelerates waste form degradation

 

The materials the United States and other countries plan to use to store high-level nuclear waste, even as utilities expand carbon-free electricity portfolios, will likely degrade faster than anyone previously knew because of the way those materials interact, new research shows.

The findings, published today in the journal Nature Materials (https://www.nature.com/articles/s41563-019-0579-x), show that corrosion of nuclear waste storage materials accelerates because of changes in the chemistry of the nuclear waste solution, and because of the way the materials interact with one another.

"This indicates that the current models may not be sufficient to keep this waste safely stored," said Xiaolei Guo, lead author of the study and deputy director of Ohio State's Center for Performance and Design of Nuclear Waste Forms and Containers, part of the university's College of Engineering. "And it shows that we need to develop a new model for storing nuclear waste."

Beyond waste storage, options like carbon capture technologies are being explored to reduce atmospheric CO2 alongside nuclear energy.

The team's research focused on storage materials for high-level nuclear waste -- primarily defense waste, the legacy of past nuclear arms production. The waste is highly radioactive. While some types of the waste have half-lives of about 30 years, others -- for example, plutonium -- have a half-life that can be tens of thousands of years. The half-life of a radioactive element is the time needed for half of the material to decay.
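The half-life definition above reduces to simple exponential arithmetic, sketched below. The 24,100-year half-life of plutonium-239 is a standard published value, used here only to illustrate the storage timescales involved.

```python
# Worked example of the half-life arithmetic defined above.

def fraction_remaining(years, half_life):
    """Fraction of a radioactive material left after a given number of years."""
    return 0.5 ** (years / half_life)

# Waste with a ~30-year half-life is down to about 0.1% after 300 years...
print(fraction_remaining(300, 30))      # ~0.00098

# ...but plutonium-239 (half-life ~24,100 years) has barely decayed at all.
print(fraction_remaining(300, 24_100))  # ~0.991
```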

The United States currently has no disposal site for that waste; according to the U.S. Government Accountability Office, it is typically stored near the nuclear power plants where it is produced. A permanent site has been proposed for Yucca Mountain in Nevada, though plans have stalled. Countries around the world have debated the best way to deal with nuclear waste; only one, Finland, has started construction on a long-term repository for high-level nuclear waste.

But the long-term plan for high-level defense waste disposal and storage around the globe is largely the same, even as the U.S. works to sustain nuclear power for decarbonization efforts. It involves mixing the nuclear waste with other materials to form glass or ceramics, and then encasing those pieces of glass or ceramics -- now radioactive -- inside metallic canisters. The canisters would then be buried deep underground in a repository to isolate the waste.

At the generation level, regulators are advancing EPA power plant rules on carbon capture to curb emissions while nuclear waste strategies evolve.

In this study, the researchers found that when exposed to an aqueous environment, glass and ceramics interact with stainless steel to accelerate corrosion, especially of the glass and ceramic materials holding nuclear waste.

In parallel, the electrical grid's reliance on SF6 insulating gas has raised warming concerns across Europe.

The study qualitatively measured the difference between accelerated corrosion and natural corrosion of the storage materials. Guo called it "severe."

"In the real-life scenario, the glass or ceramic waste forms would be in close contact with stainless steel canisters. Under specific conditions, the corrosion of stainless steel will go crazy," he said. "It creates a super-aggressive environment that can corrode surrounding materials."

To analyze corrosion, the research team pressed glass or ceramic "waste forms" -- the shapes into which nuclear waste is encapsulated -- against stainless steel and immersed them in solutions for up to 30 days, under conditions that simulate those under Yucca Mountain, the proposed nuclear waste repository.

Those experiments showed that when glass and stainless steel were pressed against one another, stainless steel corrosion was "severe" and "localized," according to the study. The researchers also noted cracks and enhanced corrosion on the parts of the glass that had been in contact with stainless steel.

Part of the problem lies in the Periodic Table. Stainless steel is made primarily of iron mixed with other elements, including nickel and chromium. Iron has a chemical affinity for silicon, which is a key element of glass.

The experiments also showed that when ceramics -- another potential holder for nuclear waste -- were pressed against stainless steel under conditions that mimicked those beneath Yucca Mountain, both the ceramics and stainless steel corroded in a "severe localized" way.

Other Ohio State researchers involved in this study include Gopal Viswanathan, Tianshu Li and Gerald Frankel.

This work was funded in part by the U.S. Department of Energy Office of Science.

Meanwhile, U.S. monitoring shows potent greenhouse gas declines confirming the impact of control efforts across the energy sector.

 

Related News


California faces huge power cuts as wildfires rage

California Wildfire Power Shut-Offs escalate as PG&E imposes blackouts amid high winds, Getty and Kincade fires, mass evacuations, Sonoma County threats, and a state of emergency, drawing regulatory scrutiny over grid safety and outage scope.

 

Key Points

Planned utility outages to curb wildfire risk during extreme winds, prompting evacuations and regulatory scrutiny.

✅ PG&E preemptive blackouts under regulator inquiry

✅ Getty and Kincade fires drive mass evacuations

✅ Sonoma County under threat amid high winds

 

Pacific Gas & Electric (PG&E) already faces an investigation by regulators after cutting supplies to 970,000 homes and businesses in California blackouts that raised widespread concern.

It announced that another 650,000 properties would face precautionary shut-offs.

Wildfires fanned by the strong winds are raging in two parts of the state.

Thousands of residents near the wealthy Brentwood neighbourhood of Los Angeles have been told to evacuate because of a wildfire that began early on Monday.

Further north in Sonoma County, a larger fire has forced 180,000 people from their homes.

California's governor has declared a state-wide emergency.

 

What about the power cuts?

On Monday regulators announced a formal inquiry into whether energy utilities broke rules by pre-emptively cutting power to an estimated 2.5 million people as wildfire risks soared, amid an intensifying policy debate over the blackouts.

They did not name any utilities but analysts said PG&E was responsible for the bulk of the "public safety power shut-offs", and later faced a Camp Fire guilty plea that underscored its liabilities.

The company filed for bankruptcy in January after facing hundreds of lawsuits from victims of wildfires in 2017 and 2018.

Of the 970,000 properties hit by the most recent cuts, under half had their services back by Monday, and some sought help through wildfire assistance programs, the Associated Press reported.

Despite criticism that the precautionary blackouts were too widespread and too disruptive, PG&E said more would come on Tuesday and Wednesday because further strong winds were expected.

The company said it had logged more than 20 preliminary reports of damage to its network from the most recent windstorm.

In a video posted to Twitter on Saturday, Governor Gavin Newsom said the power cuts were "infuriating everyone, and rightfully so".

 

Where are the fires now?

In Los Angeles, the Getty Fire has burned over 600 acres (242 ha) and about 10,000 buildings are in the mandatory evacuation zone.

At least eight homes have been destroyed and five others damaged.

"If you are in an evacuation zone, don't screw around," Mr Schwarzenegger tweeted. "Get out."

LA fire chief Ralph Terrazas said fire crews had been "overwhelmed" by the scale of the fires.

"They had to make some tough decisions on which houses they were able to protect," he said.

"Many times it depends on where the ember lands. I saw homes that were adjacent to homes that were totally destroyed, without any damage."

In northern California, schools remain closed in Sonoma County, where tens of thousands of homes and businesses are under threat.

Sonoma has been ravaged by the Kincade Fire, which started on Wednesday and has burned through 50,000 acres of land, fanned by the winds.

The Kincade Fire began seven minutes after a nearby power line was damaged, and reports indicate power lines may have started other fires, but PG&E has not yet confirmed whether the power glitch started the blaze.

About 180,000 people have been ordered to evacuate, with roads around Santa Rosa north of San Francisco packed with cars as people tried to flee.

There are fears the flames could cross the 101 highway and enter areas that have not seen wildfires since the 1940s.

 

Related News


Spent fuel removal at Fukushima nuclear plant delayed up to 5 years

Fukushima Daiichi decommissioning delay highlights TEPCO's revised timeline, spent fuel removal at Units 1 and 2, safety enclosures, decontamination, fuel debris extraction by robot arm, and contaminated water management under stricter radiation control.

 

Key Points

A government revised schedule pushing back spent fuel removal and decommissioning milestones at Fukushima Daiichi.

✅ TEPCO delays spent fuel removal at Units 1 and 2 for safety.

✅ Enclosures, decontamination, and robotics mitigate radioactive risk.

✅ Contaminated water cut target: 170 tons/day to 100 by 2025.

 

The Japanese government decided Friday to delay the removal of spent fuel from the Fukushima Daiichi nuclear power plant's Nos. 1 and 2 reactors by as much as five years, casting doubt on whether it can stick to its timeframe for dismantling the crippled complex.

The process of removing the spent fuel from the units' pools had previously been scheduled to begin in the year through March 2024.

In its latest decommissioning plan, the government said the plant's operator, Tokyo Electric Power Company Holdings Inc., will not begin the roughly two-year process (a timeline comparable to major reactor refurbishment programs seen worldwide) at the No. 1 unit at least until the year through March 2028 and may wait until the year through March 2029.

Work at the No. 2 unit is now slated to start between the year through March 2025 and the year through March 2027, it said.

The delay is necessary to take further safety precautions, such as the construction of an enclosure around the No. 1 unit to prevent the spread of radioactive dust and decontamination of the No. 2 unit, the government said, even as authorities have begun reopening previously off-limits towns nearby. It is the fourth time the government has revised its schedule for removing the spent fuel rods.

"It's a very difficult process and it's hard to know what to expect. The most important thing is the safety of the workers and the surrounding area," industry minister Hiroshi Kajiyama told a press conference.

The government set a new goal of finishing the removal of the 4,741 spent fuel rods across all six of the plant's reactors by the year through March 2032, amid ongoing debates about the consequences of early nuclear plant closures elsewhere.

Plant operator TEPCO has started the process at the No. 3 unit and already finished at the No. 4 unit, which was off-line for regular maintenance at the time of the disaster. A schedule has yet to be set for the Nos. 5 and 6 reactors.

While the government maintained its overarching timeframe of finishing the decommissioning of the plant 30 to 40 years from the 2011 crisis triggered by a magnitude 9.0 earthquake and tsunami, there may be further delays, even as milestones at other nuclear projects are being reached worldwide.

The government said it will begin removing fuel debris from the three reactors that experienced core meltdowns in the year through March 2022, starting with the No. 2 unit as part of broader reactor decommissioning efforts.

The process, considered the most difficult part of the decommissioning plan, will involve using a robot arm, reflecting progress in advanced reactor technologies, to initially remove small amounts of debris, moving up to larger amounts.

The government also said it will aim to reduce the pace at which contaminated water accumulates at the plant. Water used to cool the melted cores, mixed with groundwater, amounts to around 170 tons a day. That figure will be brought down to 100 tons a day by 2025, it said.

The water is being treated to remove the most radioactive materials and stored in tanks on the plant's grounds, but already more than 1 million tons has been collected and space is expected to run out by the summer of 2022.
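Those reported figures imply a simple accumulation picture, sketched below as back-of-envelope arithmetic only; it ignores treatment losses and tank-by-tank capacity details.

```python
# Back-of-envelope arithmetic from the figures reported above; illustrative
# only, ignoring treatment losses and tank-by-tank capacity.

current_rate = 170   # tons of contaminated water per day (reported)
target_rate = 100    # tons per day, the 2025 target (reported)
stored = 1_000_000   # tons already collected (reported lower bound)

print(current_rate * 365)  # 62,050 tons added per year at today's rate
print(target_rate * 365)   # 36,500 tons per year if the 2025 target is met

# Even at the reduced rate the inventory grows by roughly 3.7% a year,
# which is why storage space is projected to run out.
print(target_rate * 365 / stored)  # ~0.0365
```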

 
