What is an Electrical Fault?

By R.W. Hurst, Editor



An electrical fault occurs when a system or piece of equipment departs from its normal operating state, resulting in abnormal current flow. This can result in overheating, equipment damage, or safety risks. Protective devices isolate faults to preserve safety and reliability.

 

What is an Electrical Fault?

An electrical fault is an abnormal condition in a power system or equipment. It occurs when current departs from its intended path or magnitude: flowing where it should not, rising to excessive levels, or being partially or completely interrupted.

✅ Short circuits, ground faults, and overloads are common types

✅ Protective devices mitigate hazards and equipment damage

✅ Fault detection ensures system safety and reliability

 

Electrical faults can occur for various reasons, including equipment failure, environmental conditions, and human error. Some common causes of electrical faults include faulty wiring, damaged insulation, overloaded circuits, lightning strikes, power surges, and voltage fluctuations. 

  • Equipment issues: faulty wiring, broken insulation, overloaded circuits

  • Environmental conditions: moisture, lightning, dust, or tree contact

  • Human error: poor installation, neglect, or unsafe work practices

The most common fault categories include open-circuit faults, short-circuit faults, and ground faults. An open circuit fault occurs when a break in the circuit prevents current from flowing. A short circuit occurs when an unintended connection between two points allows an excessive amount of current to flow. A ground fault occurs when an unintended connection between the electrical circuit and the ground creates a shock hazard. Faults often relate to excessive current flow, which can be better understood through Ohm’s Law and its role in determining resistance, voltage, and current relationships.
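To make the idea of excessive current concrete, the short sketch below applies Ohm's Law to a hypothetical bolted short circuit; the source voltage and impedance figures are assumed values chosen purely for illustration.

```python
# Illustrative only: estimating current with Ohm's Law (I = V / R).
# The source voltage and impedances below are assumed example values.

supply_voltage = 240.0     # volts (assumed source voltage)
normal_load_ohms = 12.0    # ohms (assumed healthy load resistance)
fault_loop_ohms = 0.05     # ohms (assumed impedance of a bolted short circuit)

normal_current = supply_voltage / normal_load_ohms    # ~20 A under normal load
fault_current = supply_voltage / fault_loop_ohms      # ~4800 A during the fault

print(f"Normal load current:       {normal_current:.1f} A")
print(f"Prospective fault current: {fault_current:.0f} A")
```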

A balanced fault is a condition in which all three phases of a three-phase system are shorted to ground or to each other. In this type of fault, the system remains electrically balanced, but the resulting fault currents are typically among the most severe a network can experience. Understanding basic electricity is essential to grasp how faults disrupt the normal flow of current in a circuit.

 

Classifications of Electrical Faults

Electrical faults can be categorized into several groups to help engineers understand their causes and plan effective protective measures.

Transient vs. Permanent Faults: Transient faults, such as those caused by a lightning strike or temporary contact with a tree branch, clear on their own once the source is removed. Permanent faults, on the other hand, require repair before normal operation can resume, such as when insulation fails or a conductor breaks.

Symmetric vs. Asymmetric Faults: A symmetric fault affects all three phases of a system equally, and although rare, it can cause severe damage due to the high fault currents it generates. Asymmetric faults are far more common, involving one or two phases, and they create an unbalanced condition in the system.

Internal vs. External Faults: Internal faults occur within equipment, such as transformers, generators, or motors, often due to insulation breakdown or winding damage. External faults originate outside the equipment, caused by conditions such as storm damage, contact with foreign objects, or human error.

 

Types of Electrical Faults in Power Systems

A line-to-ground fault occurs when one of the conductors in a circuit comes in contact with the ground. This can happen due to faulty insulation, damaged equipment, or environmental conditions. A common example is a lightning strike creating a transient line-to-ground fault that trips breakers on a distribution system.

Other major types include:

  • Line-to-ground: conductor touches ground, causing shock risk

  • Open circuit: broken wires or components stop current flow

  • Phase fault: phases contact each other or ground

  • Short circuit: an unintended connection allows excessive current

  • Single-phase: limited to one phase, but still damaging

  • Arc fault: current jumps an air gap, creating sparks and fire risk

  • Balanced vs unbalanced: equal current in phases vs uneven distribution

Rodents chewing through insulation in attics or utility spaces often cause arc faults, showing how even small intrusions can lead to dangerous electrical events. When discussing ground faults and protective systems, it’s useful to revisit the conductor definition, since conductors are the pathways through which electrical energy travels and where faults typically occur.

 

Electrical Fault Protection Systems and Safety Devices

A circuit breaker is a device that automatically interrupts the flow of current in a circuit when it detects a fault. It is an essential safety device that helps prevent fires and other hazards.

When a circuit is interrupted, the flow of current in the circuit is stopped. This can happen for various reasons, including a circuit fault, a switch or breaker opening, or other similar issues.

In an electric power system, faults can cause significant damage to system equipment and result in power outages. Power system equipment includes transformers, generators, and other devices that are used to generate, transmit, and distribute power.

  • Circuit breakers: interrupt current when faults are detected

  • Relays: monitor and signal breakers to operate

  • Fuses: provide overcurrent protection in smaller systems

  • GFCIs: stop leakage current to ground instantly

  • AFCIs: detect arc faults to prevent electrical fires

Modern protective relay schemes, such as distance relays, differential relays, and overcurrent relays, provide precise and selective fault detection in high-voltage power systems. Engineers also use fault current analysis and time–current coordination studies to ensure that devices operate in the right order, isolating only the affected portion of the network.
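As a rough illustration of how time–current coordination works, the sketch below evaluates an IEC standard-inverse overcurrent characteristic for two relays in series; the pickup currents and time multipliers are assumed example settings, not recommendations.

```python
# Sketch of an inverse-time overcurrent characteristic (IEC "standard inverse"
# curve, t = TMS * 0.14 / ((I/Is)**0.02 - 1)). The pickup currents and time
# multipliers below are assumed example settings, not recommendations.

def trip_time_seconds(fault_current, pickup_current, time_multiplier):
    """Relay operating time for a given fault current, in seconds."""
    multiple = fault_current / pickup_current
    if multiple <= 1.0:
        return float("inf")   # below pickup: the relay does not operate
    return time_multiplier * 0.14 / (multiple ** 0.02 - 1.0)

# A downstream relay is set to operate faster than the upstream one, so only
# the faulted section of the network is isolated (time-current coordination).
fault_amps = 2000.0   # assumed fault level
print("Downstream relay:", round(trip_time_seconds(fault_amps, 400.0, 0.1), 2), "s")
print("Upstream relay:  ", round(trip_time_seconds(fault_amps, 600.0, 0.3), 2), "s")
```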

Voltage drop refers to the reduction in voltage that occurs when current flows through a circuit. Various factors, including the resistance of the circuit components and the distance between the power source and the load, can cause voltage drops. Many fault events lead to abnormal heating or circuit interruption, highlighting the importance of electrical resistance and how it affects system reliability.
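For readers who want to see the arithmetic, here is a minimal voltage-drop estimate for a two-wire copper run; the conductor size, run length, and load current are assumed values for illustration only.

```python
# Rough voltage-drop estimate for a two-wire copper circuit (illustrative only;
# the conductor size, run length, and load current are assumed example values).

RESISTIVITY_CU = 1.72e-8       # ohm-metres, approximate for copper at 20 °C

length_m = 30.0                # one-way run length (assumed)
area_mm2 = 2.5                 # conductor cross-section (assumed)
load_current_a = 16.0          # load current in amps (assumed)
supply_voltage = 230.0         # supply voltage in volts (assumed)

resistance_per_conductor = RESISTIVITY_CU * length_m / (area_mm2 * 1e-6)  # ohms
voltage_drop = 2 * load_current_a * resistance_per_conductor              # out-and-back path

print(f"Voltage drop: {voltage_drop:.1f} V "
      f"({100 * voltage_drop / supply_voltage:.1f}% of supply)")
```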

 

Signs, Hazards & Prevention

Electrical hazards are situations or conditions that pose a risk of injury or damage. Various factors, including faulty equipment, damaged insulation, or human error, can cause hazards. Faulty wiring, meaning any damaged, frayed, or deteriorated wiring, can cause faults and create safety hazards for people nearby.

The signs of a fault can vary depending on the type of fault and its location. However, some common signs include flickering lights, frequent circuit breaker trips, burning odours, and overheating equipment.

  • Warning signs: flickering lights, breaker trips, overheating, burning odours

  • Safety hazards: electric shock, fire, equipment damage

  • Prevention steps: inspections, correct equipment sizing, avoiding overloads, and code compliance

It is crucial to follow proper safety practices to prevent faults from occurring. This includes regular maintenance and inspection of equipment, using the correct type and size of electrical components, and avoiding overloading circuits. It is also essential to use circuit breakers, GFCIs, and other protective devices as required by code. For a broader perspective, exploring the dangers of electricity helps explain why protective devices and fault detection are so critical for both personal safety and equipment protection.

 

Frequently Asked Questions
 

How do faults occur?

Faults can occur for various reasons, including equipment failure, environmental conditions, and human error. Some common causes of faults include faulty wiring, damaged insulation, overloaded circuits, lightning strikes, power surges, and voltage fluctuations.


What are the most common types of faults?

The most common types of faults include open-circuit faults, short-circuit faults, and ground faults.


What are the signs of a fault?

The signs of a fault can vary depending on the type of fault and its location. However, some common signs of an electrical fault include flickering lights, circuit breakers tripping frequently, burning smells, and overheating equipment.


How can you prevent faults from occurring?

It is crucial to follow proper safety practices to prevent faults from occurring. This includes regular maintenance and inspection of equipment, using the correct type and size of electrical components, and avoiding overloading circuits. It is also essential to use circuit breakers and other protective devices.

 


How Is Electricity Generated?

Electricity is produced by converting various energy sources, such as fossil fuels, nuclear, solar, wind, or hydro, into electrical energy using turbines and generators. These systems harness mechanical or chemical energy and transform it into usable power.

 

How Is Electricity Generated?

✅ Converts energy sources like coal, gas, wind, or sunlight into power

✅ Uses generators driven by turbines to create electrical current

✅ Supports global power grids and industrial, commercial, and residential use

 

Understanding Electricity Generation

Electricity generation is the lifeblood of modern civilization, powering homes, industries, hospitals, transportation systems, and digital infrastructure. But behind the flip of a switch lies a vast and complex process that transforms raw energy into electrical power. At its core, electricity is generated by converting various forms of energy—mechanical, thermal, chemical, or radiant—into a flow of electric charge through systems engineered for efficiency and reliability.

Understanding the role of voltage is essential in this process, as it determines the electrical pressure that drives current through circuits.

According to the Energy Information Administration, the United States relies on a diverse mix of technologies to produce electric power, including fossil fuels, nuclear power, and renewables. In recent years, the rapid growth of solar photovoltaic systems and the widespread deployment of wind turbines have significantly increased the share of clean energy in the national grid. These renewable systems often use turbines to generate electricity by converting natural energy sources—sunlight and wind—into mechanical motion and ultimately electrical power. This transition reflects broader efforts to reduce emissions while meeting rising electric power demand.

 

How Power Generation Works

Most electricity around the world is produced using turbines and generators. These devices are typically housed in large-scale power plants. The process begins with an energy source—such as fossil fuels, nuclear reactions, or renewable inputs like water, wind, or sunlight—which is used to create movement. This movement, in turn, drives a turbine, which spins a shaft connected to a generator. Inside the generator, magnetic fields rotate around conductive coils, inducing a voltage and producing alternating current (AC) electricity. This method, known as electromagnetic induction, is the fundamental mechanism by which nearly all electric power is made.
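The sketch below is a simplified numerical illustration of electromagnetic induction in a rotating coil, using the textbook relationship e(t) = N·B·A·ω·sin(ωt); the turn count, flux density, and coil area are assumed round numbers, not data for any real generator.

```python
# Simplified numerical sketch of electromagnetic induction in a rotating coil:
# e(t) = N * B * A * omega * sin(omega * t). All quantities are assumed,
# round-number values, not data for a real machine.

import math

N = 100        # turns of wire (assumed)
B = 0.5        # magnetic flux density in tesla (assumed)
A = 0.02       # coil area in square metres (assumed)
frequency = 60.0
omega = 2 * math.pi * frequency

for t_ms in (0, 2, 4, 6, 8):               # a few instants within one half-cycle
    t = t_ms / 1000.0
    emf = N * B * A * omega * math.sin(omega * t)
    print(f"t = {t_ms} ms  ->  induced EMF = {emf:7.1f} V")
```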

In designing and maintaining electrical systems, engineers must also consider voltage drop, which can reduce efficiency and power quality. You can evaluate system losses using our interactive voltage drop calculator, and better understand the math behind it using the voltage drop formula.

 

Energy Sources Used in Power Production

Steam turbines remain the dominant technology in global energy production. These are especially common in plants that burn coal, natural gas, or biomass, or that rely on nuclear fission. In a typical thermal power plant, water is heated to create high-pressure steam, which spins the turbine blades. In nuclear facilities, this steam is generated by the immense heat released when uranium atoms are split. While highly efficient, these systems face environmental and safety concerns—greenhouse gas emissions from fossil fuels, radioactive waste and accident risk from nuclear power.

Power quality in these plants can be impacted by voltage sag, which occurs when systems experience a temporary drop in electrical pressure, often due to sudden large loads or faults. Managing such variations is crucial to stable output.

 

The Rise of Renewable Energy in Electricity Generation

Alongside these large-scale thermal technologies, renewable sources have grown significantly. Hydroelectric power harnesses the kinetic energy of falling or flowing water, typically from a dam, to spin turbines. Wind energy captures the movement of air through large blades connected to horizontal-axis turbines. Solar power generates electricity in two distinct ways: photovoltaic cells convert sunlight directly into electric power using semiconductors, while solar thermal plants concentrate sunlight to heat fluids and produce steam. Geothermal systems tap into the Earth’s internal heat to generate steam directly or via heat exchangers.

These renewable systems offer major advantages in terms of sustainability and environmental impact. They produce no direct emissions and rely on natural, often abundant energy flows. However, they also face limitations. Solar and wind power are intermittent, meaning their output fluctuates with weather and time of day. Hydropower and geothermal are geographically constrained, only viable in certain regions. Despite these challenges, renewables now account for a growing share of global electricity generation and play a central role in efforts to decarbonize the energy sector.

In areas where water and electricity coexist—such as hydroelectric plants—understanding the risks associated with water and electricity is critical to ensure operational safety and prevent electrocution hazards.

 

Generators and Turbines: The Heart of Electricity Generation

Generators themselves are marvels of electromechanical engineering. They convert rotational kinetic energy into electrical energy through a system of magnets and copper windings. Their efficiency, durability, and capacity to synchronize with the grid are critical to a stable electric power supply. In large plants, multiple generators operate in parallel, contributing to a vast, interconnected grid that balances supply and demand in real-time.

Turbines, powered by steam, water, gas, or wind, generate the rotational force needed to drive the generator. Their design and performance have a significant impact on the overall efficiency and output of the plant. Measuring output accurately requires devices like a watthour meter or wattmeters, which are standard tools in generation stations.

Technicians often use formulas such as Watt’s Law to determine power consumption and verify performance. Understanding what ammeters measure also plays a role in monitoring electrical current flowing through generator systems.
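As a simple illustration of Watt's Law (P = V × I), the sketch below converts a few assumed voltage and current readings into power, ignoring power factor for clarity.

```python
# Watt's Law (P = V * I) applied to a few assumed spot readings, ignoring
# power factor for simplicity (i.e., treating the loads as purely resistive).

readings = [
    ("Feeder A", 480.0, 52.0),   # (label, volts, amps) - assumed values
    ("Feeder B", 480.0, 38.5),
    ("Aux load", 120.0, 11.0),
]

for label, volts, amps in readings:
    power_kw = volts * amps / 1000.0
    print(f"{label}: {volts:.0f} V x {amps:.1f} A = {power_kw:.2f} kW")
```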


What is a Voltmeter?

What is a voltmeter? A voltmeter is an electrical measuring instrument used to determine voltage across circuit points. Common in electronics, engineering, and power systems, it ensures accuracy, safety, and efficiency when monitoring current and diagnosing electrical performance.

 

What is a Voltmeter?

A Voltmeter provides a method to accurately measure voltage, which is the difference in electric potential between two points in a circuit, without changing the voltage in that circuit. It is an instrument used for measuring voltage drop.

✅ Ensures accurate voltage measurement for safety and performance

✅ Used in electrical engineering, electronics, and power systems

✅ Helps diagnose faults and maintain efficient operation

Electrical current consists of a flow of charge carriers. Voltage, also known as electromotive force (EMF) or potential difference, manifests as "electrical pressure" that enables current to flow. Given an electric circuit under test with a constant resistance, the current through the circuit varies directly in proportion to the voltage across the circuit. A voltmeter measures potential difference, which directly relates to Ohm’s Law, the fundamental equation connecting voltage, current, and resistance in circuits.

A voltmeter can take many forms, from the classic analog voltmeter with a moving needle to modern instruments like the digital voltmeter (DVM) or the versatile digital multimeter. These tools are essential for measuring electrical values in electronic devices, enabling technicians to measure voltage, current, and resistance with precision and accuracy. While analog units provide quick visual feedback, digital versions deliver more precise measurements across wider voltage ranges, making them indispensable for troubleshooting and maintaining today’s complex electrical systems.

A voltmeter can be tailored to have various full-scale ranges by switching different values of resistance in series with the microammeter, as shown in Fig. 3-6. A voltmeter exhibits high internal resistance because the resistors have large ohmic values. The higher the voltage range, the larger the internal resistance of the voltmeter, because the necessary series resistance increases as the full-scale voltage increases. To understand how a voltmeter works, it helps to first review basic electricity, as voltage, current, and resistance form the foundation of all electrical measurements.
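The sketch below works through the series-resistor ("multiplier") arithmetic just described, R_series = V_full-scale / I_full-scale - R_meter, for an assumed 50 µA movement; the figures are illustrative only.

```python
# Sizing the series ("multiplier") resistor that turns a microammeter into a
# voltmeter: R_series = V_full_scale / I_full_scale - R_meter.
# The 50 µA movement and its 2 kΩ internal resistance are assumed values.

meter_full_scale_amps = 50e-6      # full-scale current of the movement (assumed)
meter_resistance_ohms = 2000.0     # internal resistance of the movement (assumed)

for full_scale_volts in (1, 10, 100):
    r_series = full_scale_volts / meter_full_scale_amps - meter_resistance_ohms
    input_resistance = r_series + meter_resistance_ohms
    print(f"{full_scale_volts:>3} V range: series resistor = {r_series / 1000:.0f} kΩ, "
          f"total input resistance = {input_resistance / 1e6:.2f} MΩ")
```

Note how the higher ranges end up with higher total input resistance, which is exactly the behaviour described in the paragraph above.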

 


 

Fig 3-6. A simple circuit using a microammeter (µA) to measure DC voltage.

 

A Voltmeter, whether digital or analog, should have high resistance, and the higher the better. You don't want the meter to draw a lot of current from the power source. (Ideally, it wouldn't draw any current at all.) The power-supply current should go, as much as possible, towards operating whatever circuit or system you want to use, not into getting a meter to tell you the voltage. A voltmeter is commonly used to measure voltage drop across conductors or devices, helping electricians ensure circuits operate efficiently and safely. For quick calculations, a voltage drop calculator provides accurate estimates of conductor losses based on length, size, and current. Understanding the voltage drop formula allows engineers and technicians to apply theoretical principles when designing or troubleshooting electrical systems.

Also, you might not want to keep the voltmeter constantly connected in parallel in the circuit. You may need the voltmeter for testing various circuits. You don't want the behavior of a circuit to be affected the moment you connect or disconnect the voltmeter. The less current a voltmeter draws, the less it affects the behavior of anything that operates from the power supply. Engineers rely on voltmeters in power system analysis, where accurate voltage readings are crucial for ensuring safety, reliability, and optimal performance.

Alternative types of voltmeters use electrostatic deflection, rather than electromagnetic deflection, to produce their readings. Remember that electric fields produce forces, just as magnetic fields do. Therefore, a pair of electrically charged plates attracts or repels each other. An electrostatic type utilizes the attractive force between two plates carrying opposite electric charges, that is, a large potential difference. Figure 3-7 portrays the functional mechanics of an electrostatic meter. It constitutes, in effect, a sensitive, calibrated electroscope. An electrostatic voltmeter draws essentially no current from the power supply. Nothing but air exists between the plates, and air constitutes a nearly perfect electrical insulator. A properly designed electrostatic meter can measure both AC voltage and DC voltage. However, the meter construction tends to be fragile, and mechanical vibration can influence the reading.

 

 

Fig 3-7. Functional drawing of an electrostatic voltmeter movement.

 

It's always good when a voltmeter has a high internal resistance. The reason for this is that you don't want the voltmeter to draw a significant amount of current from the power source. This current should go, as much as possible, towards working whatever circuit is hooked up to the supply, and not just into getting a reading of the voltage. Additionally, you may not want or need to have the voltmeter constantly connected in the circuit; instead, you might need it for testing various circuits. You don't want the behavior of the circuit to be affected the instant you connect the voltmeter to the supply. The less current a voltmeter draws, the less it will affect the behavior of anything that is working from the power supply.

If you connect an ammeter directly across a source of voltage, such as a battery, the meter needle will deflect. In fact, a milliammeter needle will probably be "pinned" if you do this with it, and a microammeter might well be wrecked by the force of the needle striking the pin at the top of the scale. For this reason, you should never connect milliammeters or microammeters directly across voltage sources. An ammeter, perhaps with a range of 0-10 A, may not deflect to full scale if it is placed across a battery; however, it's still a bad idea to do so, as it will rapidly drain the battery. Some batteries, such as automotive lead-acid cells, can explode under these conditions. This is because all ammeters have low internal resistance. They are designed that way deliberately. They are meant to be connected in series with other parts of a circuit, not right across the power supply. Because voltage is inseparable from current, learning what is current electricity provides deeper insight into why voltmeters are vital diagnostic tools.

But if you place a large resistor in series with an ammeter, and then connect the ammeter across a battery or other type of power supply, you no longer have a short circuit. The ammeter will give an indication that is directly proportional to the voltage of the supply. The smaller the full-scale reading of the ammeter, the larger the resistance needed to get a meaningful indication on the meter. Using a microammeter and a very large resistor in series, a voltmeter can be devised that draws only a small current from the source.

So, What is a Voltmeter? In summary, a voltmeter is a fundamental instrument for electrical work, allowing professionals and students to accurately measure voltage and understand circuit behaviour. Whether using an analog or digital design, voltmeters and multimeters provide precise insights that support safety, efficiency, and reliable performance in electrical systems.


What is a Watt-hour?

A watt-hour (Wh) is a unit of energy equal to using one watt of power for one hour. It measures how much electricity is consumed over time and is commonly used to track energy use on utility bills.

Understanding watt-hours is important because it links electrical power (watts) and time (hours) to show the total amount of energy used. To better understand the foundation of electrical energy, see our guide on What is Electricity?

 

Watt-Hour vs Watt: What's the Difference?

Although they sound similar, watts and watt-hours measure different concepts.

  • Watt (W) measures the rate of energy use — how fast energy is being consumed at a given moment.

  • Watt-hour (Wh) measures the amount of energy used over a period of time.

An easy way to understand this is by comparing it to driving a car:

  • Speed (miles per hour) shows how fast you are travelling.

  • Distance (miles) shows how far you have travelled in total.

Watt-hours represent the total energy consumption over a period, not just the instantaneous rate. You can also explore the relationship between electrical flow and circuits in What is an Electrical Circuit?

 

How Watt-Hours Are Calculated

Calculating watt-hours is straightforward. It involves multiplying the power rating of a device by the length of time it operates.
The basic formula is:

Energy (Wh) = Power (W) × Time (h)

This formula shows how steady power over time yields a predictable amount of energy consumed, measured in watt-hours. For a deeper look at electrical power itself, see What is a Watt? Electricity Explained
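A minimal sketch of the formula in code, using assumed example devices:

```python
# Minimal sketch of Energy (Wh) = Power (W) x Time (h), with assumed example devices.

def energy_wh(power_watts, time_hours):
    """Energy in watt-hours for a constant power draw."""
    return power_watts * time_hours

print(energy_wh(60, 1))           # 60 W bulb for 1 hour      -> 60 Wh
print(energy_wh(100, 36 / 3600))  # 100 W bulb for 36 seconds -> 1.0 Wh
print(energy_wh(1500, 0.5))       # 1500 W heater for 30 min  -> 750.0 Wh
```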

 

Real-World Examples of Watt-Hour Consumption

To better understand how watt-hours work, it is helpful to examine simple examples. Different devices consume varying amounts of energy based on their wattage and the duration of their operation. Even small variations in usage time or power level can significantly affect total energy consumption.

Here are a few everyday examples to illustrate how watt-hours accumulate:

  • A 60-watt lightbulb uses 60 watt-hours (Wh) when it runs for one hour.

  • A 100-watt bulb uses 1 Wh in about 36 seconds.

  • A 6-watt Christmas tree bulb would take 10 minutes to consume 1 Wh.

These examples demonstrate how devices with different power ratings achieve the same energy consumption when allowed to operate for sufficient periods. Measuring energy usage often involves calculating current and resistance, which you can learn more about in What is Electrical Resistance?

 

Understanding Energy Consumption Over Time

In many cases, devices don’t consume energy at a steady rate. Power use can change over time, rising and falling depending on the device’s function. Figure 2-6 provides two examples of devices that each consume exactly 1 watt-hour of energy but in different ways — one at a steady rate and one with variable consumption.

Here's how the two devices compare:

  • Device A draws a constant 60 watts and uses 1 Wh of energy in exactly 1 minute.

  • Device B starts at 0 watts and increases its power draw linearly up to 100 watts, still consuming exactly 1 Wh of energy in total.

For Device B, the energy consumed is determined by finding the area under the curve in the power vs time graph.
Since the shape is a triangle, the area is calculated as:

Area = ½ × base × height

In this case:

  • Base = 0.02 hours (72 seconds)

  • Height = 100 watts

  • Energy = ½ × 100 × 0.02 = 1 Wh

This highlights an important principle: even when a device's power draw varies, you can still calculate total energy usage accurately by analyzing the total area under its power curve.

It’s also critical to remember that for watt-hours, you must multiply watts by hours. Using minutes or seconds without converting will result in incorrect units.
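As a numerical check on the triangle calculation above, the sketch below integrates an assumed 0-to-100-watt ramp lasting 0.02 hours with the trapezoidal rule; the result should come out to about 1 Wh.

```python
# Numerical check of the "area under the power curve" idea: integrate an
# assumed 0-to-100 W linear ramp lasting 0.02 h (72 s) with the trapezoidal
# rule. The result should be about 1 Wh, matching the triangle calculation.

def energy_wh(times_h, powers_w):
    """Trapezoidal integration of power (W) over time (h), returning Wh."""
    total = 0.0
    for i in range(1, len(times_h)):
        dt = times_h[i] - times_h[i - 1]
        total += 0.5 * (powers_w[i] + powers_w[i - 1]) * dt
    return total

steps = 100
times = [0.02 * i / steps for i in range(steps + 1)]    # 0 to 0.02 hours
powers = [100.0 * t / 0.02 for t in times]              # linear ramp to 100 W

print(f"Energy under the ramp: {energy_wh(times, powers):.3f} Wh")   # ~1.000 Wh
```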

 



Fig. 2-6. Two hypothetical devices that consume 1 Wh of energy.

 

Measuring Household Energy Usage

While it’s easy to calculate energy consumption for a single device, it becomes more complex when considering an entire household's energy profile over a day.
Homes have highly variable power consumption patterns, influenced by activities like cooking, heating, and running appliances at different times.

Figure 2-7 shows an example of a typical home’s power usage throughout a 24-hour period. The curve rises and falls based on when devices are active, and the shape can be quite complex. Saving energy at home starts with understanding how devices consume power; see How to Save Electricity

Instead of manually calculating the area under such an irregular curve to find the total watt-hours used, electric utilities rely on electric meters. These devices continuously record cumulative energy consumption in kilowatt-hours (kWh).

Each month, the utility company reads the meter, subtracts the previous reading, and bills the customer for the total energy consumed.
This system enables accurate tracking of energy use without the need for complex mathematical calculations.

 



Fig. 2-7. Graph showing the amount of power consumed by a hypothetical household, as a function of the time of day.

 

Watt-Hours vs Kilowatt-Hours

Both watt-hours and kilowatt-hours measure the same thing — total energy used — but kilowatt-hours are simply a larger unit for convenience. In daily life, we usually deal with thousands of watt-hours, making kilowatt-hours more practical.

Here’s the relationship:

  • 1 kilowatt-hour (kWh) = 1,000 watt-hours (Wh)

To see how this applies, consider a common household appliance:

  • A refrigerator operating at 150 watts for 24 hours consumes:

    • 150 W × 24 h = 3,600 Wh = 3.6 kWh

Understanding the connection between watt-hours and kilowatt-hours is helpful when reviewing your utility bill or managing your overall energy usage.
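For readers who like to estimate their own usage, here is an illustrative sketch that totals monthly kilowatt-hours for a few assumed appliances at an assumed electricity rate; the figures are examples, not typical values for any particular home.

```python
# Illustrative monthly-energy estimate from nameplate wattage and daily run
# time. The appliance figures and the electricity rate are assumed examples.

appliances = [
    ("Refrigerator", 150, 24.0),   # (name, watts, hours per day) - assumed
    ("LED lighting",  60,  5.0),
    ("Television",   100,  4.0),
]
rate_per_kwh = 0.15   # assumed price per kWh

total_kwh = 0.0
for name, watts, hours_per_day in appliances:
    kwh_per_month = watts * hours_per_day * 30 / 1000.0
    total_kwh += kwh_per_month
    print(f"{name}: {kwh_per_month:.1f} kWh/month")

print(f"Total: {total_kwh:.1f} kWh/month, roughly ${total_kwh * rate_per_kwh:.2f}")
```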

Watt-hours are essential for understanding total energy consumption. Whether power usage is steady or variable, calculating watt-hours provides a consistent and accurate measure of energy used over time.
Real-world examples — from simple light bulbs to complex household systems — demonstrate that, regardless of the situation, watt-hours provide a clear way to track and manage electricity usage. 

By knowing how to measure and interpret watt-hours and kilowatt-hours, you can make more informed decisions about energy consumption, efficiency, and cost savings. For a broader understanding of how energy ties into everyday systems, visit What is Energy? Electricity Explained

 


Who Discovered Electricity

Who discovered electricity? Early pioneers including William Gilbert, Benjamin Franklin, Luigi Galvani, Alessandro Volta, and Michael Faraday advanced static electricity, circuits, and electromagnetism, laying the foundation for modern electrical science.

 

Who Discovered Electricity?

From the writings of Thales of Miletus, it appears that as long ago as 600 B.C. people knew that amber becomes charged by rubbing. But other than that, there was little real progress until the English scientist William Gilbert in 1600 described the electrification of many substances and coined the term "electricity" from the Greek word for amber. For a deeper look at how ideas about discovery versus invention evolved, see who invented electricity for historical perspective.

As a result, Gilbert is called the father of modern electrical science. In 1660, Otto von Guericke invented a crude machine for producing static electricity. It was a ball of sulfur, rotated by a crank with one hand and rubbed with the other. Successors, such as Francis Hauksbee, made improvements that provided experimenters with a ready source of static electricity. Today's highly developed descendant of these early machines is the Van de Graaff generator, which is sometimes used as a particle accelerator. Robert Boyle realized that attraction and repulsion were mutual and that electric force was transmitted through a vacuum. Stephen Gray distinguished between conductors and nonconductors. C. F. Du Fay recognized two kinds of electric charge, which Benjamin Franklin and Ebenezer Kinnersley of Philadelphia later named positive and negative.

For a quick chronological overview of these pioneering advances, consult this timeline of electricity to trace developments across centuries.

Progress quickened after the Leyden jar was invented in 1745 by Pieter van Musschenbroek. The Leyden jar stored static electricity, which could be discharged all at once. In 1747 William Watson discharged a Leyden jar through a circuit, and comprehension of the current and circuit started a new field of experimentation. Henry Cavendish, by measuring the conductivity of materials (he compared the simultaneous shocks he received by discharging Leyden jars through the materials), and Charles A. Coulomb, by expressing mathematically the attraction of electrified bodies, began the quantitative study of electric power. For additional background on early experiments and theory, explore the history of electricity for context and sources.

Despite what you may have learned, Benjamin Franklin did not "discover" electric power. In fact, electric power did not begin when Benjamin Franklin flew his kite during a thunderstorm, or when light bulbs were installed in houses all around the world. For details on why Franklin is often miscredited, read did Ben Franklin discover electricity for clarification.

The truth is that electric power has always been around because it naturally exists in the world. Lightning, for instance, is simply a flow of electrons between the ground and the clouds. When you touch something and get a shock, that is really static electricity moving toward you. If you are new to the core concepts, start with basic electricity to ground the fundamentals.

Power Personalities

 

Benjamin Franklin

Ben Franklin was an American writer, publisher, scientist and diplomat, who helped to draw up the famous Declaration of Independence and the US Constitution. In 1752 Franklin proved that lightning and the spark from amber were one and the same thing. The story of this famous milestone is a familiar one: Franklin fastened an iron spike to a silken kite, which he flew during a thunderstorm while holding the end of the kite string by an iron key. When lightning flashed, a tiny spark jumped from the key to his wrist. The experiment proved Franklin's theory. For more about Franklin's experiments, see Ben Franklin and electricity for experiment notes and legacy.

 

Galvani and Volta

In 1786, Luigi Galvani, an Italian professor of medicine, found that when the leg of a dead frog was touched by a metal knife, the leg twitched violently. Galvani thought that the muscles of the frog must contain electricity. By 1792 another Italian scientist, Alessandro Volta, disagreed: he realised that the main factors in Galvani's discovery were the two different metals - the steel knife and the tin plate - upon which the frog was lying. Volta showed that when moisture comes between two different metals, electric power is created. This led him to invent the first electric battery, the voltaic pile, which he made from thin sheets of copper and zinc separated by moist pasteboard.

In this way, a new kind of electric power was discovered, electric power that flowed steadily like a current of water instead of discharging itself in a single spark or shock. Volta showed that electric power could be made to travel from one place to another by wire, thereby making an important contribution to the science of electricity. The unit of electrical potential, the Volt, is named after Volta.

 

Michael Faraday

The credit for generating electric current on a practical scale goes to the famous English scientist, Michael Faraday. Faraday was greatly interested in the invention of the electromagnet, but his brilliant mind took earlier experiments still further. If electricity could produce magnetism, why couldn't magnetism produce electricity?

In 1831, Faraday found the solution. Electricity could be produced through magnetism by motion. He discovered that when a magnet was moved inside a coil of copper wire, a tiny electric current flowed through the wire. Of course, by today's standards, Faraday's electric dynamo or electric generator was crude, and provided only a small electric current, but he had discovered the first method of generating electric power by means of motion in a magnetic field.

 

Thomas Edison and Joseph Swan

Nearly 40 years went by before a really practical DC (Direct Current) generator was built by Thomas Edison in America. Edison's many inventions included the phonograph and an improved printing telegraph. In 1878 Joseph Swan, a British scientist, invented the incandescent filament lamp and within twelve months Edison made a similar discovery in America. For a broader view of his role in power systems, visit Thomas Edison and electricity for projects and impact.

Swan and Edison later set up a joint company to produce the first practical filament lamp. Prior to this, electric lighting had been provided by crude arc lamps.

Edison used his DC generator to provide electricity to light his laboratory and later to illuminate the first New York street to be lit by electric lamps, in September 1882. Edison's successes were not without controversy, however - although he was convinced of the merits of DC for generating electricity, other scientists in Europe and America recognised that DC brought major disadvantages.

 

George Westinghouse and Nikola Tesla

Westinghouse was a famous American inventor and industrialist who purchased and developed Nikola Tesla's patented motor for generating alternating current. The work of Westinghouse, Tesla and others gradually persuaded American society that the future lay with AC rather than DC (adoption of AC generation enabled the transmission of large blocks of electrical power, using higher voltages via transformers, which would have been impossible otherwise). Today the unit of measurement for magnetic fields commemorates Tesla's name.

 

James Watt

When Edison's generator was coupled with Watt's steam engine, large scale electricity generation became a practical proposition. James Watt, the Scottish inventor of the steam condensing engine, was born in 1736. His improvements to steam engines were patented over a period of 15 years, starting in 1769 and his name was given to the electric unit of power, the Watt.

Watt's engines used the reciprocating piston; however, today's thermal power stations use steam turbines, following the Rankine cycle, worked out by another famous Scottish engineer, William J. M. Rankine, in 1859.

 

Andre Ampere and Georg Ohm

Andre Marie Ampere, a French mathematician who devoted himself to the study of electricity and magnetism, was the first to explain the electro-dynamic theory. A permanent memorial to Ampere is the use of his name for the unit of electric current.

Georg Simon Ohm, a German mathematician and physicist, was a college teacher in Cologne when in 1827 he published "The Galvanic Circuit Investigated Mathematically". His theories were coldly received by German scientists, but his research was recognised in Britain and he was awarded the Copley Medal in 1841. His name has been given to the unit of electrical resistance.


 

 


Electricity How it Works

Electricity How It Works explains electron flow, voltage, current, resistance, and power in circuits, from generation to distribution, covering AC/DC systems, Ohm's law, conductors, semiconductors, transformers, and energy conversion efficiency and safety.

 

The Science Behind How Electricity Works

Electricity How It Works - This is a very common question, and it is best explained by starting in your own home. Household electrical service is generally described as single-phase, 120-volt AC service. If you use an oscilloscope to look at the power at a normal wall outlet, you will see a sine wave that oscillates between -170 volts and +170 volts (the peaks really are at 170 volts; it is the effective, or RMS, voltage that is 120 volts). The wave oscillates at 60 cycles per second. Power that oscillates like this is referred to as AC, or alternating current. The alternative is DC, or direct current: batteries produce DC, a steady stream of electrons flowing in one direction only, from the negative to the positive terminal of the battery.
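A small numerical sketch of the point above: a 120-volt RMS, 60 Hz sine wave actually swings to about ±170 volts at its peaks.

```python
# A 120 V RMS, 60 Hz sine wave swings to about ±170 V at its peaks.

import math

rms_voltage = 120.0
frequency_hz = 60.0
peak_voltage = rms_voltage * math.sqrt(2)     # ≈ 169.7 V

print(f"Peak voltage: ±{peak_voltage:.0f} V")
for t_ms in (0, 4.17, 8.33, 12.5):            # roughly quarter-cycle steps at 60 Hz
    v = peak_voltage * math.sin(2 * math.pi * frequency_hz * t_ms / 1000.0)
    print(f"t = {t_ms:>5} ms  ->  v = {v:7.1f} V")
```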

For a refresher on fundamentals, the overview at what is electricity explains charge, current, and voltage in practical terms.

AC has at least three advantages over DC in an electric power distribution grid:

1. Large electricity generators happen to generate AC naturally, so conversion to DC would involve an extra step.
2. Electrical Transformers must have alternating current to operate, and we will see that the power distribution grid depends on transformers. 
3. It is easy to convert AC to DC but expensive to convert DC to AC, so if you were going to pick one or the other AC would be the better choice.

To connect these advantages to real-world practice, the primer on basic electricity clarifies AC versus DC behavior, impedance, and safety basics.

The electricity generating plant, therefore, produces AC. For a deeper look at how rotating machines induce AC, see the overview of electricity generators and their role in utility-scale plants.

 

Electricity How it Works in The Power Plant: Three-phase Power

If you want a quick walkthrough from generation to loads, this guide on how electricity works ties the concepts together before we examine three-phase specifics.

The power plant produces three different phases of AC power simultaneously, and the three phases are offset 120 degrees from each other. There are four wires coming out of every power plant: the three phases plus a neutral or ground common to all three. If you were to look at the three phases on a graph, you would see three identical sine waves, each shifted one-third of a cycle from the next relative to ground.

A concise visual explainer on three-phase electricity shows how 120-degree phase offsets create balanced currents in feeders.

Electricity How It Works - There is nothing magical about three-phase power. It is simply three single phases synchronized and offset by 120 degrees. For wiring diagrams and common configurations, explore 3-phase power examples used across industrial facilities.

Why three phases? Why not one or two or four? In 1-phase and 2-phase electricity, there are 120 moments per second when a sine wave is crossing zero volts. In 3-phase power, at any given moment one of the three phases is nearing a peak. High-power 3-phase motors (used in industrial applications) and things like 3-phase welding equipment therefore have even power output. Four phases would not significantly improve things but would add a fourth wire, so 3-phase is the natural settling point.
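To illustrate this balance, the sketch below samples three sine waves offset by 120 degrees and shows that their instantaneous sum stays essentially at zero; the peak value is an assumed round number.

```python
# Three sine waves offset by 120 degrees: at every instant the balanced phase
# values sum to (essentially) zero, which is why the shared neutral of a
# balanced three-phase system carries almost no current. Peak value is assumed.

import math

peak = 170.0     # volts, roughly the peak of a 120 V RMS phase
freq = 60.0      # hertz

for t_ms in (0, 3, 6, 9, 12):
    t = t_ms / 1000.0
    phases = [peak * math.sin(2 * math.pi * freq * t - k * 2 * math.pi / 3)
              for k in range(3)]
    print(f"t = {t_ms:2d} ms  A/B/C = "
          + " ".join(f"{v:7.1f}" for v in phases)
          + f"   sum = {sum(phases):6.2f}")
```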

Practical comparisons of motor torque ripple and line loading in 3-phase electricity help illustrate why three conductors strike the best balance.

And what about this "ground," as mentioned above? The power company essentially uses the earth as one of the wires in the electricity system. The earth is a pretty good conductor and it is huge, so it makes a good return path for electrons. (Car manufacturers do something similar; they use the metal body of the car as one of the wires in the car's electrical system and attach the negative pole of the battery to the car's body.) "Ground" in the power distribution grid is literally "the ground" that's all around you when you are walking outside. It is the dirt, rocks, groundwater, etc., of the earth.

 


Unit of Capacitance Explained

The unit of capacitance is the farad (F), which measures the amount of electric charge a capacitor stores per volt. Typically expressed in microfarads, nanofarads, or picofarads, it is essential in electronics, circuit design, and energy storage systems.

 

What is a Unit of Capacitance?

The unit of capacitance, the farad (F), measures the amount of electric charge a capacitor can store per volt. It is crucial to understand the function of capacitors in electronics, circuits, and energy storage technologies.

✅ 1 farad equals 1 coulomb per volt

✅ Common values include microfarads, nanofarads, and picofarads

✅ Used in electronics, circuits, power systems, and capacitor design

 

It is determined by the electrical charge, which is symbolized by the letter Q, and is measured in units of coulombs. Discover how capacitance interacts with other electrical quantities and gain a deeper understanding of its role in circuit design and performance. The coulomb is given by the letter C, as with capacitance. Unfortunately, this can be confusing. One coulomb of charge is equivalent to approximately 6.24 × 10^18 electrons. The basic unit is the farad, denoted by the letter F. By definition, one farad is the amount of charge stored on a capacitor when one volt is applied across its plates. The general formula for capacitance in terms of charge and voltage is:

C = Q / V

where C is the capacitance in farads, Q is the charge in coulombs, and V is the applied voltage in volts.
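A minimal sketch of this relationship in code, rearranged as Q = C × V, with assumed component values:

```python
# Minimal sketch of C = Q / V rearranged as Q = C * V. Values are assumed.

capacitance_f = 100e-6      # a 100 µF capacitor (assumed)
applied_volts = 12.0        # assumed applied DC voltage

charge_coulombs = capacitance_f * applied_volts      # Q = C * V
electrons = charge_coulombs / 1.602e-19              # divide by the elementary charge

print(f"Stored charge: {charge_coulombs * 1000:.2f} mC "
      f"(about {electrons:.2e} electrons)")
```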


 

Understanding the Unit of Electric Capacitance

The unit of electric capacitance explains how a capacitor functions as a body to store an electrical charge. This is achieved through two conductive plates, which form the essential structure of a parallel plate capacitor. These plates are separated by an insulating material, known as the dielectric, which prevents direct current flow while allowing the device to store energy.

A capacitor is a widely used electronic component, and it belongs to the class of passive electronic components since it does not generate energy but only stores it temporarily. The concept of capacitance was first defined by the English physicist Michael Faraday, whose pioneering work in electromagnetism laid the foundation of electrical science. Historical records place Michael Faraday (1791–1867) as one of the most influential figures in this field.

In modern practice, capacitance is measured in the SI derived unit, the farad (F). Because a farad is large, smaller units such as the nanofarad (nF) are commonly used to describe practical capacitors found in circuits. Whether measured in farads, microfarads, or nanofarads, the unit of electric capacitance remains the standard way of expressing a capacitor’s ability to store charge for reliable operation in electronic systems.

 

Farad in Practical Use

In practical terms, one farad is a large amount of capacitance. Typically, in electronics, much smaller units are used. The two more common smaller units are the microfarad (μF), which is 10^-6 farad, and the picofarad (pF), which is 10^-12 farad. To better understand the core principles behind charge and voltage, see our overview on what is a capacitor, which explains how capacitance functions in practical circuits.

Voltage Rating of a Capacitor: Capacitors have limits on the voltage that can be applied across their plates. The technician must be aware of the voltage rating, which specifies the maximum DC voltage that can be applied without risking damage to the device. This voltage rating is typically referred to as the breakdown voltage, working voltage, or simply the voltage rating. If the voltage applied across the plates is too great, the dielectric will break down, and arcing will occur between the plates. The capacitor is then short-circuited, and the possible flow of direct current through it can cause damage to other parts of the equipment. For foundational knowledge that supports capacitance calculations, our what is voltage article defines the relationship between electric potential and stored charge.

A capacitor that can be safely charged to 500 volts DC cannot be safely subjected to AC or pulsating DC whose effective values are 500 volts. An alternating voltage of 500 volts (RMS) has a peak voltage of 707 volts, and a capacitor to which it is applied should have a working voltage of at least 750 volts. The capacitor should be selected so that its working voltage is at least 50 percent greater than the highest voltage to be applied. Learn about different types of components that influence total capacitance by reading our guide on types of capacitors, which compares materials, ratings, and applications.
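The sketch below reproduces the arithmetic in the paragraph above: peak voltage is RMS × √2, and the 50 percent rule of thumb then sets the suggested minimum working voltage.

```python
# Peak voltage is RMS x sqrt(2); the rule of thumb above then asks for a
# working voltage at least 50 percent greater than the highest applied voltage.

import math

rms_voltage = 500.0
peak_voltage = rms_voltage * math.sqrt(2)     # ≈ 707 V actually seen by the dielectric
suggested_rating = 1.5 * rms_voltage          # the 50 percent margin -> 750 V

print(f"Peak of a {rms_voltage:.0f} V RMS waveform: {peak_voltage:.0f} V")
print(f"Suggested minimum working voltage: {suggested_rating:.0f} V")
```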

 

Smaller Units of Capacitance

The voltage rating of the capacitor is a factor in determining the actual capacitance, as capacitance decreases with increasing dielectric thickness. A high-voltage capacitor with a thick dielectric must have a larger plate area to achieve the same capacitance as a similar low-voltage capacitor with a thin dielectric.

 

Table 1 – Dielectric Strength of Common Materials

| Dielectric Material | Approx. Dielectric Strength (kV/mm) | Relative Permittivity (εr) | Notes / Applications |
| --- | --- | --- | --- |
| Vacuum | 30 | 1.0 | Reference value, ideal insulator |
| Air | 3 | ~1.0 | Baseline, used as standard |
| Paper | 16 | 3–4 | Used in older capacitors |
| Glass | 9–14 | 4–10 | High stability, low loss |
| Mica | 100 | 5–7 | Precision capacitors, RF use |
| Ceramic | 10–40 | 6–12 (varies) | Common in small capacitors |
| Polystyrene | 20–30 | 2.5–2.7 | Low loss, stable |
| Polyethylene | 20–30 | 2.2 | High-voltage applications |
| Teflon (PTFE) | 60–170 | 2.1 | Excellent insulator, stable |
| Oil (transformer) | 10–15 | 2.2–2.3 | Used in HV capacitors and transformers |
| Quartz | 8–10 | ~3.8 | Stable, heat resistant |

 

Factors Affecting A Unit of Capacitance

  1. The capacitance of parallel plates is directly proportional to the area of the plates. A larger plate area produces a larger capacitance, and a smaller area produces less capacitance. If we double the area of the plates, there is room for twice as much charge. The charge that a capacitor can hold at a given potential difference is doubled, and since C = Q/E, the capacitance is doubled.

  2. The capacitance of parallel plates is inversely proportional to the spacing between them.

  3. The dielectric material affects the capacitance of parallel plates. The dielectric constant of a vacuum is defined as 1, and that of air is very close to 1. These values are used as a reference, and all other materials have values specified in relation to air (vacuum).

The strength of some commonly used dielectric materials is listed in Table 1. The voltage rating also depends on frequency, as the losses and resultant heating effect increase with higher frequencies. Discover how capacitance fits into the broader context of energy flow in circuits by visiting our what is electrical resistance page, offering insights on resistance and its effect on voltage and current.
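These three factors combine in the familiar parallel-plate relationship C = ε0 εr A / d. The sketch below evaluates it for assumed plate dimensions and a few dielectric constants taken from Table 1; the values are illustrative only.

```python
# Parallel-plate capacitance, C = (epsilon_0 * epsilon_r * A) / d, evaluated
# for assumed plate dimensions and a few dielectric constants from Table 1.

EPSILON_0 = 8.854e-12     # F/m, permittivity of free space

def parallel_plate_capacitance(area_m2, separation_m, relative_permittivity):
    return EPSILON_0 * relative_permittivity * area_m2 / separation_m

area = 0.01       # 10 cm x 10 cm plates (assumed)
gap = 0.1e-3      # 0.1 mm plate spacing (assumed)

for name, er in (("air", 1.0), ("paper", 3.5), ("mica", 6.0)):
    c = parallel_plate_capacitance(area, gap, er)
    print(f"{name:>5}: {c * 1e9:.2f} nF")
```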

 

