Which is more sustainable, paper or digital?

By Printing News


Paper or digital? The simple answer is — neither.

The most common answer is that print kills trees and computers don't, so digital media must be greener. The typical indignant response is that print is greener because trees are a renewable resource and computers are toxic energy vampires that don't grow on trees. It's time to stop the bickering. Our future depends on getting this right.

The life cycles of both print and digital media have positive and negative triple bottom line impacts. Both need to become more sustainable, rather than fighting a zero-sum war of words. Humanity's prospects and our better nature will best be served if we strive for the sustainable evolution of both print and digital media, rather than allowing or cheering the demise of one or the other. If you are a printer or supplier of graphic arts you cannot afford to be indignant or complacent about making print significantly more sustainable than it is — and don't try to argue that you can't afford it.

The term sustainability was first used by Lester Brown in the 1981 book, "Building a Sustainable Society," and it is closely related to the widely used term "sustainable development," as defined by the 1987 Brundtland Commission report to the United Nations, Our Common Future. Sustainability is a cross-cutting concept meaning far more than the basic notion of "things persisting or enduring" or being "green."

While sustainability encompasses environmental stewardship, conservation and other "green" factors, it is a broad aspirational concept that seeks to integrate and balance the economic, environmental, and social outcomes of human activity through the use of qualitative action and principles such as The Precautionary Principle, The Natural Step and Appreciative Inquiry, as well as quantitative methods such as Lifecycle Analysis and System Dynamics. It seeks to meet the economic, environmental and social needs of present generations without crossing thresholds that prevent future generations from doing the same.

While environmental issues have typically taken a back seat to financial issues and investment during difficult economic times, this time it's different. Eighty percent of North American corporate sustainability executives recently surveyed by the research firm Panel Intelligence plan to maintain or increase levels of sustainability-related spending in 2009. More importantly, R&D and coordination of marketing initiatives for the greening of IT and digital media are growing and outstripping any comparable efforts for print.

Print service providers, technologists, marketers and their associations should take note of efforts such as the Climate Savers Computing Initiative, The Green Grid, and the Global e-Sustainability Initiative (GeSI). An initiative such as the Sustainable Green Printing Partnership is a good start, but more investment and better coordination of efforts with others are needed. Failure to materially address the greening of print supply chains may ultimately seal the fate of print, as well as the fate of the billions whose media-related needs will not be served by a digital monoculture. Addressing sustainability is an issue of growing importance that requires us to rethink our approach.

Have you ever considered what the carbon footprint of your print and digital document workflows is, or what the carbon footprint of a magazine or an iPhone is? (Before you rush to use one of the dozens of carbon calculators available, realize that the results can vary widely because the underlying assumptions differ, and standards for calculating carbon footprints are still under development.)
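To see why calculator results diverge, consider a minimal sketch of how two equally defensible assumption sets change the answer for the same print job. Every factor below is an illustrative placeholder, not a published value:

```python
# Minimal sketch: how assumption choices swing a paper carbon estimate.
# Emission factors are illustrative placeholders, not published values.

def paper_footprint_kg(sheets, kwh_per_sheet, kg_co2_per_kwh):
    """CO2 (kg) embodied in `sheets` pages of printed paper."""
    return sheets * kwh_per_sheet * kg_co2_per_kwh

# Same 10,000-sheet job under two plausible sets of assumptions:
low  = paper_footprint_kg(10_000, kwh_per_sheet=0.004, kg_co2_per_kwh=0.4)  # hydro-heavy grid
high = paper_footprint_kg(10_000, kwh_per_sheet=0.008, kg_co2_per_kwh=0.9)  # coal-heavy grid

print(f"low estimate:  {low:.0f} kg CO2")   # 16 kg
print(f"high estimate: {high:.0f} kg CO2")  # 72 kg, about 4.5x the low figure
```

Same job, same calculator structure, a 4.5x spread: that is why two calculators can disagree so sharply before standards settle.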

The amounts of energy, materials and waste associated with the lifecycles of print and digital media are all too often overlooked, misunderstood or underestimated. There are billions of kilowatt hours of electricity embodied in the paper, ink and digital technologies we use each day, and among our greatest challenges is the need to identify, measure and reduce the amount of energy, waste and greenhouse gas emissions associated with each page or megabyte of information we rely on.

Both print and digital media use prodigious amounts of electricity. According to the Department of Energy, the U.S. papermaking industry used more than 75 billion kilowatt hours of electricity in 2006. That's the fourth largest industrial use of electricity in the country. However, U.S. data centers and servers consumed over 60 billion kilowatt hours of electricity during the same year, and that does not include the energy consumed by client computers or networks. In fact, recent analysis by Gartner Research indicates that data center energy consumption is expected to double by 2010, and that its growth is unsustainable. This is one of the factors spurring investment in Green IT.

On average, each kilowatt hour of electricity represents the emission of approximately two pounds of CO2. To put that in perspective, consider the Empire State Building's 37 million cubic feet of space. The combined emissions from U.S. papermaking, data centers and client energy demand alone would fill over 100 Empire State Buildings each year with solidified CO2 (dry ice), which weighs approximately 100 pounds per cubic foot.
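A back-of-envelope script makes the arithmetic easy to check. Using only the article's round figures, papermaking plus data centers alone account for roughly 73 building-volumes; the 100-plus total follows once client computers and networks are added:

```python
# Back-of-envelope check of the dry-ice comparison, using the article's
# own round numbers (all figures approximate).

KWH_PAPER           = 75e9   # U.S. papermaking electricity, 2006 (kWh)
KWH_DATACENTERS     = 60e9   # U.S. data centers and servers, 2006 (kWh)
LBS_CO2_PER_KWH     = 2      # average emissions per kWh
LBS_PER_FT3_DRY_ICE = 100    # density of solid CO2
ESB_VOLUME_FT3      = 37e6   # Empire State Building interior volume

lbs_co2 = (KWH_PAPER + KWH_DATACENTERS) * LBS_CO2_PER_KWH
ft3_dry_ice = lbs_co2 / LBS_PER_FT3_DRY_ICE
print(ft3_dry_ice / ESB_VOLUME_FT3)  # ~73 buildings before client devices
```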

It is currently difficult to discover carbon footprints. However, according to information recently released by Apple, the lifecycle of an iPhone is responsible for the emission of 121 pounds of CO2-equivalent greenhouse gas over the course of a three-year expected lifetime of use, the same amount produced by a 100-watt light bulb glowing for 691 hours, or a car engine burning 6.03 gallons of gasoline. Though it is not a direct comparison, it is interesting to note that Discover magazine estimates the lifecycle carbon footprint of each copy of its publication is responsible for 2.1 pounds of carbon dioxide emissions, the same amount produced by twelve 100-watt light bulbs glowing for an hour, or a car engine burning 14 ounces of gasoline.
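Since the two figures cover different lifetimes, a rough amortization helps put them on the same per-year footing. The three-year phone life is Apple's stated assumption; treating the magazine as a monthly is a hypothetical for illustration:

```python
# Rough per-year comparison of the two figures quoted above (lbs CO2e).

IPHONE_LIFECYCLE_LBS = 121   # Apple figure, over a 3-year life
MAGAZINE_COPY_LBS    = 2.1   # Discover figure, per copy

iphone_per_year   = IPHONE_LIFECYCLE_LBS / 3   # ~40 lbs/yr
magazine_per_year = MAGAZINE_COPY_LBS * 12     # ~25 lbs/yr for a monthly

print(f"iPhone:   {iphone_per_year:.1f} lbs CO2e per year")
print(f"Magazine: {magazine_per_year:.1f} lbs CO2e per year (12 issues)")
```

On that crude basis the two media are the same order of magnitude, which is exactly the article's point: neither side can claim an easy win.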

Over the next few years, lifecycle data and the carbon labeling of products can be expected to move from the margins to the mainstream. In part this will be due to the high priority that the current administration in Washington has placed on carbon cap-and-trade legislation and the regulation of greenhouse gas emissions. In addition, there is already broad support for voluntary initiatives such as the Carbon Disclosure Project and the Carbon Trust labeling initiative.

Research has shown that, all things being equal, consumers prefer products with smaller carbon footprints. If that's true for other products, why not for print and digital media?

Business, government and day-to-day life depend on both print and digital media to a greater extent than is commonly realized, but neither is without its pluses and minuses. Members of the digital generation might not know enough to care if print goes the way of the slide rule, but they are unlikely to welcome arguments for reinventing print and keeping it in the mix if they are confronted by righteous indignation about the inherent greenness of print as it is today. Better to acknowledge the negative aspects of print, then ask critics to consider the metastasizing carbon footprint of digital media and encourage them to envision a near-term future in which both media supply chains collaborate to become more economically, environmentally and socially sustainable.

Paint a picture of sustainable data centers and green printing facilities. Discuss how both print and digital media could be powered by advanced paper mills called integrated biorefineries that turn agricultural waste, waste paper, dedicated energy crops, algae and sustainably harvested trees into energy, biofuels, biopolymer toner, renewable chemical feedstocks and paper. Not only would this increase U.S. energy security, it would create green collar jobs and address climate change at the same time.

We cannot achieve sustainability simply by asking consumers to change light bulbs, drive hybrids and recycle. And we cannot achieve it by shrinking the diversity of media options that serve as our collective memory.

Whether they choose print or digital, people living in the U.S. contribute more than twice as much greenhouse gas as the global average. Considering that we represent about 5 percent of the world's population, and that billions in the developing world emulate our lifestyle, it becomes even more important not to settle for simple answers or ignore inconvenient truths. We have an opportunity and an obligation to reinvent both print and digital media. We must also be realistic about the limited effects of small individual or voluntary actions.

It's time for consumers and producers of media to recognize that we share a common fate that can only be sustainable if we work together to make both print and digital sustainable. Toward that end, the Institute for Sustainable Communication welcomes your support, comments, questions and suggestions for positive steps we can take together.

Related News

LOC Renewables Delivers First MWS Services To China's Offshore Wind Market

Pinghai Bay Offshore Wind Farm MWS advances marine warranty survey best practices, risk management, and international standards in Fujian, with Haixia Goldenbridge Insurance and reinsurer-aligned audits supporting safer offshore wind construction and logistics.

 

Key Points

An MWS program ensuring Pinghai Bay Phase 2 meets standards via audits, risk controls, and vetted procedures.

✅ First MWS delivered in China's offshore wind market

✅ Audits, risk consultancy, and reinsurer-aligned standards

✅ Supports 250MW Phase 2 at Pinghai Bay, Fujian

 

LOC Renewables has announced it is to carry out marine warranty survey (MWS) services for the second phase of the Pinghai Bay Offshore Wind Farm near Putian, Fujian province, China, on behalf of Haixia Goldenbridge Insurance Co., Ltd. The agreement represents the first time MWS services have been delivered to the Chinese offshore wind market.

China’s installed offshore capacity jumped more than 60% in 2017, and its growing offshore market is aiming for a total grid-connected capacity of 5GW by 2020. Much of this future offshore development is slated to take place in Jiangsu, Zhejiang, Guangdong and Fujian provinces. As developers become increasingly aware of the need for stringent risk management and the value that internationally accepted standards can bring to projects, Pinghai Bay will be the first Chinese offshore wind farm to employ MWS to ensure it meets the highest technical standards and to minimise project risk. The agreement will see LOC Renewables carry out audit and risk consultancy services for the project from March until the end of 2018.


In recent years, as Chinese offshore wind projects have grown in scale and complexity, the need for international expertise in the market has increased. In response, domestic insurers are partnering with international reinsurers to manage and mitigate the associated larger risks. Applying the higher standards required by international reinsurers, LOC Renewables will draw on its extensive experience in European, US and Asian offshore wind markets to provide MWS services on the Pinghai project from its Tianjin office.

“As offshore wind technology continues to proliferate across Asia, driven by declining global costs, successful knowledge transfer based on best practices and lessons learned in the established offshore wind markets becomes ever more important,” said Ke Wan, Managing Director, LOC China.

“With a wealth of experience in Europe and the US, we’re increasingly working on projects across Asia, and are delighted to now be providing the first MWS services to China’s offshore wind market – services that bring real value in lower risk and will enable the project to achieve its full potential.”

“At 250MW, phase two of the Pinghai Bay Wind Farm represents a significant expansion on phase one, and we wanted to ensure that it met the highest technical and risk mitigation standards,” said Fan Ming, Business Director at Haixia Goldenbridge Insurance.

“In addition to their global experience, LOC Renewables’ familiarity with and presence in the local market was very important to us, and we’re looking forward to working closely with them to help bring this project to fruition and make a significant contribution to China’s expanding offshore wind market.”

 

Related News


Electricity demand set to reduce if UK workforce self-isolates

UK Energy Networks Coronavirus Contingency outlines ESO's lockdown electricity demand forecast, reduced industrial and commercial load, rising domestic use, Ofgem guidance needs, grid resilience, control rooms, mutual aid, and backup centers.

 

Key Points

A coordinated plan with ESO forecasts, safeguards, and mutual aid to keep power and gas services during a lockdown.

✅ ESO forecasts lower industrial use, higher domestic demand

✅ Control rooms protected; backup sites and cross-trained staff

✅ Mutual aid and Ofgem coordination bolster grid resilience

 

National Grid ESO is predicting a reduction in electricity demand should the spread of coronavirus prompt a lockdown across the country.

Its analysis shows the reduction in commercial and industrial use would outweigh an upsurge in domestic demand.
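The netting logic behind that prediction is simple to sketch. The baseline loads and percentage shifts below are made-up placeholders, not ESO figures:

```python
# Illustrative net-demand arithmetic: a drop in industrial/commercial load
# outweighing a rise in domestic load. Placeholder numbers, not ESO data.

baseline_gw = {"industrial_commercial": 20.0, "domestic": 12.0}
change_pct  = {"industrial_commercial": -0.15, "domestic": +0.10}

net_change_gw = sum(baseline_gw[k] * change_pct[k] for k in baseline_gw)
print(f"net demand change: {net_change_gw:+.1f} GW")  # -1.8 GW: overall reduction
```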

The prediction was included in an update from the Energy Networks Association (ENA), in which it sought to reassure the public that contingency plans are in place to ensure services are unaffected by the spread of coronavirus.

The body, which represents the UK's electricity and gas network companies, said "robust measures" had been put in place to protect control rooms and contact centres. To provide additional resilience, engineers have been trained across multiple disciplines, and backup centres exist should operations need to be moved if, for example, deep cleaning is required, the ENA said.

Networks also have industry-wide mutual aid arrangements for the people and equipment needed to keep gas and electricity flowing.

ENA chief executive David Smith said: "The UK's electricity and gas network is one of the most reliable in the world and network operators are working with the authorities to ensure that their contingency plans are reviewed and delivered in accordance with the latest expert advice. We are following this advice closely and reassuring customers that energy networks are continuing to operate as normal for the public."

Utility Week spoke to a senior figure at one of the networks, who reiterated the robust measures in place to keep the lights on. However, they pleaded for more clarity from Ofgem and government on how their workers will be treated if the coronavirus spread becomes a pandemic in the UK.

 

Related News


EPA: New pollution limits proposed for US coal, gas power plants reflect "urgency" of climate crisis

EPA Power Plant Emissions Rule proposes strict greenhouse gas limits for coal and gas units, leveraging carbon capture (CCS) under the Clean Air Act to cut CO2 and accelerate decarbonization of the U.S. grid.

 

Key Points

A proposed EPA rule setting CO2 limits for coal and gas plants, using CCS to cut power-sector greenhouse gases.

✅ Applies to existing and new coal and large gas units

✅ Targets near-zero CO2 by 2038 via CCS or retirement

✅ Cites grid, health, and climate benefits; faces legal challenges

 

The Biden administration has proposed new limits on greenhouse gas emissions from coal- and gas-fired power plants, its most ambitious effort yet to roll back planet-warming pollution from the nation’s second-largest contributor to climate change.

A rule announced by the Environmental Protection Agency could force power plants to capture smokestack emissions using a technology that has long been promised but is not widely used in the United States.

“This administration is committed to meeting the urgency of the climate crisis and taking the necessary actions required,″ said EPA Administrator Michael Regan.

The plan would not only “improve air quality nationwide, but it will bring substantial health benefits to communities all across the country, especially our front-line communities ... that have unjustly borne the burden of pollution for decades,” Regan said in a speech at the University of Maryland.

President Joe Biden called the plan “a major step forward in the climate crisis and protecting public health.”

If finalized, the proposed regulation would mark the first time the federal government has restricted carbon dioxide emissions from existing power plants, which generate about 25% of U.S. greenhouse gas pollution, second only to the transportation sector. The rule also would apply to future electric plants, and would avoid up to 617 million metric tons of carbon dioxide through 2042, equivalent to the annual emissions of 137 million passenger vehicles, the EPA said.
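The equivalence is easy to sanity-check: dividing the avoided tonnage by the vehicle count recovers roughly 4.5 metric tons per vehicle per year, in line with the figure EPA commonly uses for a typical passenger car:

```python
# Sanity check of the EPA equivalence quoted above.
avoided_tonnes = 617e6            # metric tons CO2 through 2042
vehicles       = 137e6            # passenger vehicles
print(avoided_tonnes / vehicles)  # ~4.5 t/vehicle, close to the ~4.6 t
                                  # EPA cites for a typical car's annual emissions
```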

Almost all coal plants — along with large, frequently used gas-fired plants — would have to cut or capture nearly all their carbon dioxide emissions by 2038, the EPA said. Plants that cannot meet the new standards would be forced to retire.

The plan is likely to be challenged by industry groups and Republican-leaning states, which have accused the Democratic administration of overreach on environmental regulations and warn of a pending reliability crisis for the electric grid. The power plant rule is one of at least a half-dozen EPA rules limiting power plant emissions and wastewater.

“It’s truly an onslaught” of government regulation “designed to shut down the coal fleet prematurely,″ said Rich Nolan, president and CEO of the National Mining Association.

Regan denied that the power plant rule was aimed at shutting down the coal sector, but acknowledged: “We will see some coal retirements.”

 

Related News


California proposes income-based fixed electricity charges

Income Graduated Fixed Charge aligns CPUC billing with utility fixed costs, lowers usage rates, supports electrification, and shifts California investor-owned utilities' electric bills by income, with CARE and Climate Credit offsets for low-income households.

 

Key Points

A CPUC proposal: an income-based monthly fixed fee with lower usage rates to align costs and aid low-income customers.

✅ Income-tiered fixed fees: $0-$42; CARE: $14-$22, by utility territory

✅ Usage rates drop 16%-22% to support electrification and cost-reflective billing

✅ Lowest-income save ~$10-$20; some higher earners pay ~$10+ more monthly

 

The Public Advocates Office (PAO) for the California Public Utilities Commission (CPUC) has proposed adding a monthly fixed charge to electric utility bills based on income level.

The rate change is designed to lower bills for the lowest-income residents while aligning billing more directly with utility costs. 

PAO’s recommendation for the Income Graduated Fixed Charge places fees between $22 and $42 per month in the three major investor-owned utilities’ territories for customers not enrolled in the California Alternative Rates for Energy (CARE) program. CARE customers would be charged between $14 and $22 a month, depending on income level and territory.

For households earning $50,000 or less per year, the fixed charge would be $0, but only if the California Climate Credit is applied to offset the fixed cost.

Meanwhile, usage-based electricity rates are lowered in the PAO proposal. Average rates would be reduced by 16% to 22% for the three major investor-owned utilities.

The lowest-income bracket of Californians is expected to save roughly $10 to $20 a month under the proposal, while middle-income customers may see costs rise by about $20 a month.

“We anticipate the vast majority of low-income customers ($50,000 or less per year) will have their monthly bills decrease by $10 or more, and a small proportion of the highest income earners ($100,000+ per year) will see their monthly bills rise by $10 or more,” said the PAO.
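The tiering logic reads roughly like the sketch below. The brackets and territory amounts are hypothetical placeholders inferred from the ranges quoted above, not the CPUC's actual tables:

```python
# Hypothetical sketch of the proposal's tiering logic. Brackets and
# territory-specific amounts are placeholders inferred from the quoted
# ranges ($0 / $14-$22 CARE / $22-$42 non-CARE), not CPUC tables.

def monthly_fixed_charge(income: float, on_care: bool,
                         territory_rate: float) -> float:
    """Return the monthly fixed charge in dollars.

    territory_rate: utility-specific charge within the quoted range.
    """
    if income <= 50_000:
        return 0.0  # offset by the California Climate Credit
    if on_care:
        return min(max(territory_rate, 14.0), 22.0)
    return min(max(territory_rate, 22.0), 42.0)

print(monthly_fixed_charge(45_000, on_care=False, territory_rate=30.0))   # 0.0
print(monthly_fixed_charge(80_000, on_care=True, territory_rate=18.0))    # 18.0
print(monthly_fixed_charge(120_000, on_care=False, territory_rate=38.0))  # 38.0
```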

The charges are an effort to help suppress ever-increasing electricity generation and transmission rates, which are among the highest in the country. Rates are expected to rise sharply as wildfire mitigation efforts are implemented by the utilities found at fault for the fires' origins.

“We are very concerned. However, we do not see the increases stopping at this point,” Linda Serizawa, deputy director for energy, PAO, told pv magazine. “We think the pace and scale of the [rate] increases is growing faster than we would have anticipated for several years now.”


The proposed changes are also meant to more directly couple billing with the fixed costs that utilities incur. For example, activities like power line maintenance, energy efficiency programs, and wildfire prevention do not vary with usage, so these activities would be funded through a fixed charge.

Michael Campbell of the PAO’s customer programs team, and leader of the proposed program, likened paying for grid enhancements and other social programs with utility rate increases to “paying for food stamps by taxing food.” Instead, a fixed charge would cover these costs.

PAO said the move to lower rates for usage should help encourage electrification as California moves to replace heating and cooling, appliances, and gas combustion cars with electrified counterparts. Lower rates also reduce the cost burden of running these devices.

 

Related News


New York State to investigate sites for offshore wind projects

NYSERDA Offshore Wind Data initiative funds geophysical and geotechnical surveys, seabed and soil studies on New York's shelf to accelerate siting, optimize foundation design, reduce costs, and advance clean energy deployment.

 

Key Points

State funding to support surveys and soil studies guiding offshore wind siting, design, and cost reduction.

✅ Up to $5.5M for geophysical and geotechnical data collection

✅ Focus on seabed soils, shelf geology, and foundation design inputs

✅ Accelerates siting, reduces risk, and lowers offshore wind costs

 

The New York State Energy Research and Development Authority (NYSERDA) is investing up to $5.5 million for the collection of geophysical and geotechnical data to determine future offshore wind development sites.

The funding will support studies of seabed soil and geological data to inform the preliminary design and installation requirements for future offshore wind projects. It's part of N.Y. Gov. Andrew Cuomo's plan to develop 9,000 megawatts of offshore wind energy by 2035.

"Today's announcement is another step in Governor Cuomo's steadfast march to achieving 9,000 megawatts of offshore wind by 2035, putting New York in a clear national leadership position when it comes to advancing this new industry. The surveys NYSERDA will be funding under this solicitation will expand the offshore wind industry's access to geophysical and geotechnical data that will provide the foundation for future offshore wind development in these areas, and accelerate project development while driving down costs," NYSERDA President and CEO Alicia Barton said.

NYSERDA will select one or more contractors to carry out the investigations and develop a model for describing geophysical and geotechnical conditions. NYSERDA will also select a contractor to support project management and host the data that is collected. The submission deadline is Jan. 21, 2020.

Today's announcement builds on the data collected in a Geotechnical and Geophysical Desktop Study, also released today, which includes information on the middle continental shelf off the shore of New York and New Jersey, creating a regional overview of the seafloor and sub-seafloor environment as it relates to offshore wind development.

Strong knowledge of environmental conditions, including seabed soil conditions, is essential for the installation of offshore projects, but only a limited amount of soil sampling and testing has been undertaken to date.

"The collection of geophysical and geotechnical data from areas off of New York's Atlantic coast is yet another demonstration of New York's leadership promoting the responsible development of offshore wind. The data generated by this initiative will ultimately lead to better projects, lower cost, and enhanced safety. New York is leading the way to a clean energy future, and relying on data collection and sound science to get us there," New York Offshore Wind Alliance Director Joe Martens said.

 

Related News


Jolting the brain's circuits with electricity is moving from radical to almost mainstream therapy

Brain Stimulation is transforming neuromodulation, from TMS and DBS to closed-loop devices, targeting neural circuits for addiction, depression, Parkinson's, epilepsy, and chronic pain, powered by advanced imaging, AI analytics, and the NIH BRAIN Initiative.

 

Key Points

Brain stimulation uses pulses to modulate neural circuits, easing symptoms in depression, Parkinson's, and epilepsy.

✅ Noninvasive TMS and invasive DBS modulate specific brain circuits

✅ Closed-loop systems adapt stimulation via real-time biomarker detection

✅ Emerging uses: addiction, depression, Parkinson's, epilepsy, chronic pain

 

In June 2015, biology professor Colleen Hanlon went to a conference on drug dependence. As she met other researchers and wandered around a glitzy Phoenix resort’s conference rooms to learn about the latest work on therapies for drug and alcohol use disorders, she realized that out of the 730 posters, there were only two on brain stimulation as a potential treatment for addiction — both from her own lab at Wake Forest School of Medicine.

Just four years later, she would lead 76 researchers on four continents in writing a consensus article about brain stimulation as an innovative tool for addiction. And in 2020, the Food and Drug Administration approved a transcranial magnetic stimulation device to help patients quit smoking, a milestone for substance use disorders.

Brain stimulation is booming. Hanlon can attend entire conferences devoted to the study of what electrical currents do to the intricate networks of highways and backroads that make up the brain’s circuitry. This expanding field of research is slowly revealing truths of the brain: how it works, how it malfunctions, and how electrical impulses, precisely targeted and controlled, might be used to treat psychiatric and neurological disorders.

In the last half-dozen years, researchers have launched investigations into how different forms of neuromodulation affect addiction, depression, loss-of-control eating, tremor, chronic pain, obsessive compulsive disorder, Parkinson’s disease, epilepsy, and more. Early studies have shown subtle electrical jolts to certain brain regions could disrupt circuit abnormalities — the miscommunications — that are thought to underlie many brain diseases, and help ease symptoms that persist despite conventional treatments.

The National Institutes of Health’s massive BRAIN Initiative put circuits front and center, distributing $2.4 billion to researchers since 2013 to devise and use new tools to observe interactions between brain cells and circuits. That, in turn, has kindled interest from the private sector. Among the advances that have enhanced our understanding of how distant parts of the brain talk with one another are new imaging technology and the use of machine learning to interpret complex brain signals and analyze what happens when circuits go haywire.

Still, the field is in its infancy, and even therapies that have been approved for use in patients with, for example, Parkinson’s disease or epilepsy help only a minority of patients. “If it was the Bible, it would be the first chapter of Genesis,” said Michael Okun, executive director of the Norman Fixel Institute for Neurological Diseases at University of Florida Health.

As brain stimulation evolves, researchers face daunting hurdles, and not just scientific ones. How will brain stimulation become accessible to all the patients who need it, given how expensive and invasive some treatments are? Proving to the FDA that brain stimulation works, and does so safely, is complicated and expensive. Even with a swell of scientific momentum and an influx of funding, the agency has so far cleared brain stimulation for only a handful of limited conditions. Persuading insurers to cover the treatments is another challenge altogether. And outside the lab, researchers are debating nascent issues, such as the ethics of mind control, the privacy of a person’s brain data, and how to best involve patients in the study of the human brain’s far-flung regions.

Neurologist Martha Morrell is optimistic about the future of brain stimulation. She remembers the shocked reactions of her colleagues in 2004 when she left full-time teaching at Stanford (she still has a faculty appointment as a clinical professor of neurology) to direct clinical trials at NeuroPace, then a young company making neurostimulator systems to potentially treat epilepsy patients.

“When I started working on this, everybody thought I was insane,” said Morrell. Nearly 20 years in, she sees a parallel between the story of jolting the brain’s circuitry and that of early implantable cardiac devices, such as pacemakers and defibrillators, which initially “were used as a last option, where all other medications have failed.” Now, “the field of cardiology is very comfortable incorporating electrical therapy, device therapy, into routine care. And I think that’s really where we’re going with neurology as well.”


Reaching a ‘slope of enlightenment’
Parkinson’s is, in some ways, an elder in the world of modern brain stimulation, and it shows the potential as well as the limitations of the technology. Surgeons have been implanting electrodes deep in the brains of Parkinson’s patients since the late 1990s, and in people with more advanced disease since the early 2000s.

In that time, it’s gone through the “hype cycle,” said Okun, the national medical adviser to the Parkinson’s Foundation since 2006. Feverish excitement and overinflated expectations have given way to reality, bringing scientists to a “slope of enlightenment,” he said. They have found deep brain stimulation to be very helpful for some patients with Parkinson’s, rendering them almost symptom-free by calming the shaking and tremors that medications couldn’t. But it doesn’t stop the progression of the disease, or resolve some of the problems patients with advanced Parkinson’s have walking, talking, and thinking.

In 2015, the same year Hanlon found only her lab’s research on brain stimulation at the addiction conference, Kevin O’Neill watched one finger on his left hand start doing something “funky.” One finger twitched, then two, then his left arm started tingling and a feeling appeared in his right leg, like it was about to shake but wouldn’t — a tremor.

“I was assuming it was anxiety,” O’Neill, 62, told STAT. He had struggled with anxiety before, and he had endured a stressful year: a separation, selling his home, starting a new job at a law firm in California’s Bay Area. But a year after his symptoms first began, O’Neill was diagnosed with Parkinson’s.


Doctors prescribed him pills that promote the release of dopamine, to offset the death of brain cells that produce this messenger molecule in circuits that control movement. But he took them infrequently because he worried about insomnia as a side effect. Walking became difficult — “I had to kind of think my left leg into moving” — and the labor lawyer found it hard to give presentations and travel to clients’ offices.

A former actor with an outgoing personality, he developed social anxiety and didn’t tell his bosses about his diagnosis for three years, and wouldn’t have, if not for two workdays in summer 2018 when his tremors were severe and obvious.

O’Neill’s tremors are all but gone since he began deep brain stimulation last May, though his left arm shakes when he feels tense.

It was during that period that he learned about deep brain stimulation, at a support group for Parkinson’s patients. “I thought, ‘I will never let anybody fuss with my brain. I’m not going to be a candidate for that,’” he recalled. “It felt like mad scientist science fiction. Like, are you kidding me?”

But over time, the idea became less radical, as O’Neill spoke to DBS patients and doctors and did his own research, and as his symptoms worsened. He decided to go for it. Last May, doctors at the University of California, San Francisco surgically placed three metal leads into his brain, connected by thin cords to two implants in his chest, just near the clavicles. A month later, he went into the lab and researchers turned the device on.

“That was a revelation that day,” he said. “You immediately — literally, immediately — feel the efficacy of these things. … You go from fully symptomatic to non-symptomatic in seconds.”

When his nephew pulled up to the curb to pick him up, O’Neill started dancing, and his nephew teared up. The following day, O’Neill couldn’t wait to get out of bed and go out, even if it was just to pick up his car from the repair shop.

In the year since, O’Neill’s walking has gone from “awkward and painful” to much improved, and his tremors are all but gone. When he is extra frazzled, like while renovating and moving into his new house overlooking the hills of Marin County, he feels tense and his left arm shakes and he worries the DBS is “failing,” but generally he returns to a comfortable, tremor-free baseline.

O’Neill worried about the effects of DBS wearing off but, for now, he can think “in terms of decades, instead of years or months,” he recalled his neurologist telling him. “The fact that I can put away that worry was the big thing.”

He’s just one patient, though. The brain has regions that are mostly uniform across all people. The functions of those regions also tend to be the same. But researchers suspect that how brain regions interact with one another — who mingles with whom, and what conversation they have — and how those mixes and matches cause complex diseases varies from person to person. So brain stimulation looks different for each patient.

Each case of Parkinson’s manifests slightly differently, a lesson that applies to many other diseases as well, said Okun, who organized the nine-year-old Deep Brain Stimulation Think Tank, where leading researchers convene, review papers, and publish reports on the field’s progress each year.

“I think we’re all collectively coming to the realization that these diseases are not one-size-fits-all,” he said. “We have to really begin to rethink the entire infrastructure, the schema, the framework we start with.”

Brain stimulation is also used frequently to treat people with common forms of epilepsy, and has reduced the number of seizures or improved other symptoms in many patients. Researchers have also been able to collect high-quality data about what happens in the brain during a seizure — including identifying differences between epilepsy types. Still, only about 15% of patients are symptom-free after treatment, according to Robert Gross, a neurosurgery professor at Emory University in Atlanta.

“And that’s a critical difference for people with epilepsy. Because people who are symptom-free can drive,” which means they can get to a job in a place like Georgia, where there is little public transit, he said. So taking neuromodulation “from good to great,” is imperative, Gross said.


Renaissance for an ancient idea
Recent advances are bringing about what Gross sees as “almost a renaissance period” for brain stimulation, though the ideas that undergird the technology are millennia old. Neuromodulation goes back to at least ancient Egypt and Greece, when electrical shocks from a ray, called the “torpedo fish,” were recommended as a treatment for headache and gout. Over centuries, the fish zaps led to doctors burning holes into the brains of patients. Those “lesions” worked, somehow, but nobody could explain why they alleviated some patients’ symptoms, Okun said.

Perhaps the clearest predecessor to today’s technology is electroconvulsive therapy (ECT), which in a rudimentary and dangerous way began being used on patients with depression roughly 100 years ago, said Nolan Williams, director of the Brain Stimulation Lab at Stanford University.

More modern forms of brain stimulation came about in the United States in the mid-20th century. A common, noninvasive approach is transcranial magnetic stimulation, which involves placing an electromagnetic coil on the scalp to transmit a current into the outermost layer of the brain. Vagus nerve stimulation (VNS), used to treat epilepsy, zaps a nerve that contributes to some seizures.

The most invasive option, deep brain stimulation, involves implanting in the skull a device attached to electrodes embedded in deep brain regions, such as the amygdala, that can’t be reached with other stimulation devices. In 1997, the FDA gave its first green light to deep brain stimulation as a treatment for tremor, and then for Parkinson’s in 2002 and the movement disorder dystonia in 2003.

Even as these treatments were cleared for patients, though, what was happening in the brain remained elusive. But advanced imaging tools now let researchers peer into the brain and map out networks — a recent breakthrough that researchers say has propelled the field of brain stimulation forward as much as increased funding has. Imaging of both human brains and animal models has helped researchers identify the neuroanatomy of diseases, target brain regions with more specificity, and watch what was happening after electrical stimulation.

Another key step has been the shift from open-loop stimulation — a constant stream of electricity — to closed-loop stimulation that delivers targeted, brief jolts in response to a symptom trigger. To make use of the futuristic technology, labs need people to develop artificial intelligence tools to interpret the large data sets a brain implant generates, and to tailor devices based on that information.
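The distinction is easy to see in miniature: an open-loop device fires on every cycle, while a closed-loop device fires only when a decoded signal crosses a threshold. The sketch below is purely illustrative; read_biomarker and the threshold stand in for whatever a real implant decodes, not any actual device's API:

```python
# Toy contrast between open-loop and closed-loop stimulation logic.
# read_biomarker() and THRESHOLD are stand-ins for whatever signal a real
# device decodes; nothing here reflects an actual implant's interface.

import random

def read_biomarker() -> float:
    return random.random()  # placeholder for a decoded neural signal

THRESHOLD = 0.8

def open_loop_step() -> bool:
    return True  # stimulate every cycle, regardless of brain state

def closed_loop_step() -> bool:
    return read_biomarker() > THRESHOLD  # stimulate only on a symptom trigger

pulses_open   = sum(open_loop_step() for _ in range(1000))
pulses_closed = sum(closed_loop_step() for _ in range(1000))
print(pulses_open, pulses_closed)  # e.g. 1000 vs ~200: far fewer, targeted jolts
```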

“We’ve needed to learn how to be data scientists,” Morrell said.

Affinity groups, like the NIH-funded Open Mind Consortium, have formed to fill that gap. Philip Starr, a neurosurgeon and developer of implantable brain devices at the University of California at San Francisco Health system, leads the effort to teach physicians how to program closed-loop devices, and works to create ethical standards for their use. “There’s been extraordinary innovation after 20 years of no innovation,” he said.

The BRAIN Initiative has been critical, several researchers told STAT. “It’s been a godsend to us,” Gross said. The NIH’s Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative was launched in 2013 during the Obama administration with a $50 million budget. BRAIN now spends over $500 million per year. Since its creation, BRAIN has given over 1,100 awards, according to NIH data. Part of the initiative’s purpose is to pair up researchers with medical technology companies that provide human-grade stimulation devices to the investigators. Nearly three dozen projects have been funded through the investigator-devicemaker partnership program and through one focused on new implantable devices for first-in-human use, according to Nick Langhals, who leads work on neurological disorders at the initiative.

The more BRAIN invests, the more research is spawned. “We learn more about what circuits are involved … which then feeds back into new and more innovative projects,” he said.

Many BRAIN projects are still in early stages, finishing enrollment or small feasibility studies, Langhals said. Over the next couple of years, scientists will begin to see some of the fruits of their labor, which could lead to larger clinical trials, or to companies developing more refined brain stimulation implants, Langhals said.

Money from the National Institute of Mental Health, as well as the NIH’s Helping to End Addiction Long-term (HEAL) initiative, has similarly sweetened the appeal of brain stimulation, both for researchers and industry. “A critical mass” of companies interested in neuromodulation technology has mushroomed where, for two decades, just a handful of companies stood, Starr said.

More and more, pharmaceutical and digital health companies are looking at brain stimulation devices “as possible products for their future,” said Linda Carpenter, director of the Butler Hospital TMS Clinic and Neuromodulation Research Facility.


‘Psychiatry 3.0’
The experience with using brain stimulation to stop tremors and seizures inspired psychiatrists to begin exploring its use as a potentially powerful therapy for healing, or even getting ahead of, mental illness.

In 2008, the FDA approved TMS for patients with major depression who had tried, and not gotten relief from, drug therapy. “That kind of opened the door for all of us,” said Hanlon, a professor and researcher at the Center for Research on Substance Use and Addiction at Wake Forest School of Medicine. The last decade saw a surge of research into how TMS could be used to reset malfunctioning brain circuits involved in anxiety, depression, obsessive-compulsive disorder, and other conditions.

“We’re certainly entering into what a lot of people are calling psychiatry 3.0,” Stanford’s Williams said. “Whereas the first iteration was Freud and all that business, the second one was the psychopharmacology boom, and this third one is this bit around circuits and stimulation.”

Drugs alleviate some patients’ symptoms while simultaneously failing to help many others, but psychopharmacology clearly showed “there’s definitely a biology to this problem,” Williams said — a biology that in some cases may be more amenable to a brain stimulation.

The exact mechanics of what happens between cells when brain circuits … well, short-circuit, is unclear. Researchers are getting closer to finding biomarkers that warn of an incoming depressive episode, or wave of anxiety, or loss of impulse control. Those brain signatures could be different for every patient. If researchers can find molecular biomarkers for psychiatric disorders — and find ways to preempt those symptoms by shocking particular brain regions — that would reshape the field, Williams said.

Not only would disease-specific markers help clinicians diagnose people, but they could help chip away at the stigma that paints mental illness as a personal or moral failing instead of a disease. That’s what happened for epilepsy in the 1960s, when scientific findings nudged the general public toward a deeper understanding of why seizures happen, and it’s “the same trajectory” Williams said he sees for depression.

His research at the Stanford lab also includes work on suicide, and obsessive-compulsive disorder, which the FDA said in 2018 could be treated using noninvasive TMS. Williams considers brain stimulation, with its instantaneity, to be a potential breakthrough for urgent psychiatric situations. Doctors know what to do when a patient is rushed into the emergency room with a heart attack or a stroke, but there is no immediate treatment for psychiatric emergencies, he said. Williams wonders: What if, in the future, a suicidal patient could receive TMS in the emergency room and be quickly pulled out of their depressive mental spiral?

Researchers are also actively investigating the brain biology of addiction. In August 2020, the FDA approved TMS for smoking cessation, the first such OK for a substance use disorder, which is “really exciting,” Hanlon said. Although there is some nuance when comparing substance use disorders, a primal mechanism generally defines addiction: the eternal competition between “top-down” executive control functions and “bottom-up” cravings. It’s the same process that is at work when one is deciding whether to eat another cookie or abstain — just exacerbated.

Hanlon is trying to figure out if the stop and go circuits are in the same place for all people, and whether neuromodulation should be used to strengthen top-down control or weaken bottom-up cravings. Just as brain stimulation can be used to disrupt cellular misfiring, it could also be a tool for reinforcing helpful brain functions, or for giving the addicted brain what it wants in order to curb substance use.

Evidence suggests many people with schizophrenia smoke cigarettes (a leading cause of early death for this population) because nicotine reduces the “hyperconnectivity” that characterizes the brains of people with the disease, said Heather Ward, a research fellow at Boston’s Beth Israel Deaconess Medical Center. She suspects TMS could mimic that effect, and therefore reduce cravings and some symptoms of the disease, and she hopes to prove that in a pilot study that is now enrolling patients.

If the scientific evidence proves out, clinicians say brain stimulation could be used alongside behavioral therapy and drug-based therapy to treat substance use disorders. “In the end, we’re going to need all three to help people stay sober,” Hanlon said. “We’re adding another tool to the physician’s toolbox.”

Decoding the mysteries of pain
A favorable outcome to the ongoing research, one that would fling the doors to brain stimulation wide open for patients with myriad disorders, is far from guaranteed. Chronic pain researchers know that firsthand.

Chronic pain, among the most mysterious and hard-to-study medical phenomena, was the first use for which the FDA approved deep brain stimulation, said Prasad Shirvalkar, an assistant professor of anesthesiology at UCSF. But when studies didn’t pan out after a year, the FDA retracted its approval.

Shirvalkar is working with Starr and neurosurgeon Edward Chang on a profoundly complex problem: “decoding pain in the brain states, which has never been done,” as Starr told STAT.

Part of the difficulty of studying pain is that there is no objective way to measure it. Much of what we know about pain is from rudimentary surveys that ask patients to rate how much they’re hurting, on a scale from zero to 10.

Using implantable brain stimulation devices, the researchers ask patients for a 0-to-10 rating of their pain while recording up-and-down cycles of activity in the brain. They then use machine learning to compare the two streams of information and see what brain activity correlates with a patient’s subjective pain experience. Implantable devices let researchers collect data over weeks and months, instead of basing findings on small snippets of information, allowing for a much richer analysis.
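In spirit, the first step of such an analysis looks like the toy sketch below, which correlates a synthetic activity trace with synthetic pain ratings. A real pipeline would use weeks of implant recordings and machine-learning models, not a single correlation over made-up data:

```python
# Minimal sketch of the analysis idea: correlate self-reported 0-10 pain
# ratings with a recorded activity trace. All data here is synthetic.

import numpy as np

rng = np.random.default_rng(0)
n = 500                                # number of paired samples
activity = rng.normal(size=n)          # stand-in neural feature
pain = np.clip(5 + 2 * activity + rng.normal(scale=1.5, size=n), 0, 10)

r = np.corrcoef(activity, pain)[0, 1]
print(f"activity-pain correlation: r = {r:.2f}")  # high when the feature tracks pain
```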

 

