What is Electricity?



Electricity is the flow of electric charge, usually through a conductor like wire. It powers lights, appliances, and machines by converting energy into motion, heat, or light. Electricity can be generated from sources such as fossil fuels, wind, solar, or water.

 

What is electricity?

Electricity is a fundamental form of energy created by the movement of electrons.

✅ Powers homes, industries, and electronic devices

✅ Flows through circuits as an electric current

✅ Generated from renewable and non-renewable sources

The power we use is a secondary energy source because it is produced by converting primary energy sources such as coal, natural gas, nuclear, solar, and wind energy into electrical power. It is also referred to as an energy carrier, meaning it can be converted into other forms of energy, such as mechanical or thermal energy.

Primary energy sources are classified as renewable or nonrenewable, but electricity itself is neither: it is simply a carrier for the energy of whichever source produced it.

To understand why electrons move in the first place, start with voltage, the electrical “pressure” that pushes charge through every circuit.

 

Electricity Has Changed Everyday Life

Although most people rarely think about electricity, it has profoundly changed how we live. It is as essential as air or water, yet we tend to take it for granted—until it’s gone. Electricity powers heating and cooling systems, appliances, communications, entertainment, and modern conveniences that past generations never imagined.

Before widespread electrification began just over a century ago, homes were lit with candles or oil lamps, food was cooled with ice blocks, and heating was provided by wood- or coal-burning stoves.

The steady stream of electrons we use daily is explored in our primer on current electricity.

 

Discovering Electricity: From Curiosity to Power Grid

Scientists and inventors began unlocking the secrets of electricity as early as the 1600s. Over the next few centuries, their discoveries built the foundation for the electric age.

Benjamin Franklin demonstrated that lightning is a form of electricity.

Thomas Edison invented the first commercially viable incandescent light bulb.

Nikola Tesla pioneered the use of alternating current (AC), which enabled the efficient transmission of electricity over long distances. He also experimented with wireless electricity.

Curious why Tesla’s ideas won out over Edison’s? Our article on alternating current breaks down the advantages of AC over direct current (DC).

Before Tesla’s innovations, arc lighting used direct current (DC) but was limited to outdoor and short-range applications. His work made it possible for electricity to be transmitted to homes and factories, revolutionizing lighting and industry.

 

Understanding Electric Charge and Current

Electricity is the movement of electrically charged particles, typically electrons. These particles can move either statically, as in a buildup of charge, or dynamically, as in a flowing current.

All matter is made of atoms, and each atom consists of a nucleus with positively charged protons and neutral neutrons, surrounded by negatively charged electrons. Usually, the number of protons and electrons is balanced. But when that balance is disturbed and electrons are freed to move from atom to atom along a conductor, their flow forms an electric current.

For a step-by-step walkthrough of everything from circuits to safety, visit how electricity works.

 

Electricity as a Secondary Energy Source

Electricity doesn’t occur naturally in a usable form. It must be generated by converting other types of energy. In fact, electricity is a manufactured product. That’s why electricity is called a secondary energy source—it carries energy from its original form to where we need it.

We generate electricity by transforming mechanical energy—such as spinning a turbine—into electrical energy. This conversion happens at power plants that use a variety of fuels and methods:

  • Fossil fuels (coal, oil, natural gas)

  • Nuclear energy

  • Renewable sources like wind, solar, and hydroelectric

If turbines, magnets, and power plants intrigue you, see how electricity is generated for a deeper dive.

 

How Electricity Was Brought Into Homes

Before electricity generation began on a mass scale, cities often developed near waterfalls, where water wheels powered mills and machines. The leap from mechanical energy to electrical energy enabled power to travel not just across a town, but across entire countries.

Beginning with Franklin’s experiments and followed by Edison’s breakthrough with indoor electric light, the practical uses of electricity expanded rapidly. Tesla’s AC power system made widespread electric distribution feasible, bringing light, heat, and industry to homes and cities worldwide.

 

How Transformers Changed Everything

To transmit electricity efficiently over long distances, George Westinghouse and his engineers, most notably William Stanley, developed a practical transformer. This device adjusts the voltage of electrical power to match its purpose: high for long-range transmission, low for safe use in homes.

Transformers made it possible to supply electricity to homes and businesses far from power plants. The electric grid became a coordinated system of generation, transmission, distribution, and regulation.

Even today, most of us rarely consider the complexity behind our wall sockets. But behind every outlet lies a vast infrastructure keeping electricity flowing safely and reliably.

 

How Is Electricity Generated?

Electric generators convert mechanical energy into electricity using the principles of magnetism. When a conductor—such as a coil of wire—moves through a magnetic field, an electric current is induced.

In large power stations, turbines spin magnets inside massive generators. These turbines are driven by steam, water, or wind. The rotating magnet induces small currents in the coils of wire, which combine into a single continuous flow of electric power.
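The relationship this paragraph describes can be sketched numerically. Under Faraday's law, a coil of N turns with area A rotating at angular speed ω in a uniform field B produces a peak EMF of N·B·A·ω. The coil parameters below are made-up illustrative values, not figures from any real generator:

```python
import math

def peak_emf(turns: int, field_tesla: float, area_m2: float, rpm: float) -> float:
    """Peak EMF of a coil rotating in a uniform magnetic field: N * B * A * omega."""
    omega = rpm * 2 * math.pi / 60  # convert rotational speed to rad/s
    return turns * field_tesla * area_m2 * omega

# Illustrative numbers: 100 turns, 0.5 T field, 0.02 m^2 coil, 3600 rpm
print(round(peak_emf(100, 0.5, 0.02, 3600), 1))  # 377.0 V peak
```

Doubling the turns, the field strength, or the rotation speed each doubles the peak EMF, which is why generator design balances all three.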

Discover the principle that turns motion into power in electromagnetic induction, the heart of every modern generator.

 

Measuring Electricity

Electricity is measured in precise units. The amount of power being used or generated is expressed in watts (W), named after the Scottish engineer James Watt.

  • One watt is a small unit of power; 1,000 watts equal one kilowatt (kW).

  • Energy use over time is measured in kilowatt-hours (kWh).

  • A 100-watt bulb burning for 10 hours uses 1 kWh of electricity.

These units are what you see on your electric bill. They represent how much electricity you’ve consumed over time—and how much you’ll pay.
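The bulb arithmetic above is easy to script. In this minimal sketch, the $0.15/kWh rate is an assumed example value, not a quoted tariff:

```python
def energy_kwh(power_watts: float, hours: float) -> float:
    """Convert a device's power draw and run time into kilowatt-hours."""
    return power_watts * hours / 1000.0

def cost(kwh: float, rate_per_kwh: float) -> float:
    """Energy consumed times the utility's rate gives the charge."""
    return kwh * rate_per_kwh

bulb_kwh = energy_kwh(100, 10)   # the 100 W bulb from the example above
print(bulb_kwh)                  # 1.0 kWh
print(cost(bulb_kwh, 0.15))      # 0.15 -> 15 cents at an assumed $0.15/kWh
```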

When it’s time to decode your energy bill, the chart in electrical units makes watts, volts, and amps clear.

 


What is Voltage?

Voltage is the electrical potential difference between two points, providing the force that moves current through conductors. It expresses energy per charge, powering devices, controlling circuits, and ensuring efficient and safe operation of electrical and electronic systems.

 

What is Voltage?

Voltage is the electric potential difference: the work done per unit charge (joules per coulomb). It:

✅ Is the difference in electric potential energy between two points in a circuit.

✅ Represents the force that pushes electric current through conductors.

✅ Is measured in volts (V) and is essential for power distribution and electrical safety.

To understand voltage, it helps to start with its fundamental principles. Analogies make this invisible force easier to picture. One of the most common is the water pressure analogy: just as higher water pressure pushes water through pipes more forcefully, higher voltage pushes electric charges through a circuit. A strong grasp of voltage begins with electricity fundamentals, which explain how current, resistance, and power interact in circuits.

Another way to imagine voltage is as a hill of potential energy. A ball placed at the top of a hill naturally rolls downward under gravity. The steeper the hill, the more energy is available to move the ball. Likewise, a higher voltage means more energy is available per charge to move electrons in a circuit.

A third analogy is the pump in a water system. A pump creates pressure, forcing water to move through pipes. Similarly, a battery or generator functions as an electrical pump, supplying the energy that drives electrons through conductors. Without this push, charges would remain in place and no current would flow.

Together, these analogies (water pressure, potential energy hill, and pump) show how voltage acts as the essential driving force, the “electrical pressure” that enables circuits to function and devices to operate. Since voltage and current are inseparable, Ohm’s Law shows how resistance influences the flow of electricity in every system.

These analogies help us visualize voltage as pressure or stored energy, but in physics, voltage has a precise definition. It is the work done per unit charge to move an electric charge from one point to another. Mathematically, this is expressed as:

V = W / q

where V is voltage (in volts), W is the work or energy (in joules), and q is the charge (in coulombs). This equation shows that one volt equals one joule of energy per coulomb of charge.
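As a quick numeric check of V = W / q (the 12 J and 2 C figures are illustrative):

```python
def voltage(work_joules: float, charge_coulombs: float) -> float:
    """V = W / q: energy per unit charge, in volts."""
    return work_joules / charge_coulombs

# Moving 2 coulombs of charge using 12 joules of energy:
print(voltage(12, 2))  # 6.0 V
```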

In circuit analysis, voltage is also described through Ohm’s Law, which relates it to current and resistance:

V = I × R

where I is current (in amperes) and R is resistance (in ohms). This simple but powerful formula explains how voltage, current, and resistance interact in every electrical system.
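Ohm’s Law translates just as directly into code; the 2 A and 10 Ω values below are illustrative:

```python
def ohms_law_voltage(current_amperes: float, resistance_ohms: float) -> float:
    """V = I * R: the voltage across a resistance carrying a given current."""
    return current_amperes * resistance_ohms

# 2 A flowing through a 10-ohm resistor drops 20 V across it:
print(ohms_law_voltage(2, 10))  # 20
```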

Italian physicist Alessandro Volta played a crucial role in discovering and understanding voltage, and the unit, the volt (V), is named in his honor. Voltage is typically measured with a device called a voltmeter. In an electrical circuit, the voltage difference between two points determines the energy required to move a charge, specifically one coulomb of charge, between those points. The history of voltage is closely tied to the History of Electricity, where discoveries by pioneers like Volta and Franklin have shaped modern science.

An electric potential difference between two points produces an electric field, represented by electric lines of flux (Fig. 1). One pole is always relatively positive, with fewer electrons, and the other relatively negative, with more electrons. Neither pole need have an electron deficit or surplus compared with a neutral object; what matters is the difference in charge between them: the negative pole always has more electrons than the positive pole.

 


 

Fig 1. Electric lines of flux always exist near poles of electric charge.

 

The abbreviation for voltage measurement is V. Sometimes, smaller units are used. For example, the millivolt (mV) is equal to a thousandth (0.001) of a volt. The microvolt (µV) is equal to a millionth (0.000001) of a volt. It is sometimes necessary to use units much larger than one volt. For example, one kilovolt (kV) is equal to one thousand volts (1,000). One megavolt (MV) is equal to one million volts (1,000,000), or one thousand kilovolts. When comparing supply types, the distinction between Direct Current and AC vs DC shows why standardized voltage systems are essential worldwide.

Voltage is closely related to electromotive force (EMF), the energy source that drives electrons to flow through a circuit. A chemical battery is a common example of a voltage source that generates EMF. The negatively charged electrons in the battery are compelled to move toward the positive terminal, creating an electric current.

In power distribution, three-phase electricity and 3 Phase Power demonstrate how higher voltages improve efficiency and reliability.

Voltage is a fundamental concept in electrical and electronic systems, as it influences the behavior of circuits and devices. One of the most important relationships involving voltage is Ohm's Law, which describes the connection between voltage, current, and resistance in an electrical circuit. For example, Ohm's Law states that the voltage across a resistor is equal to the product of the current flowing through it and the resistance of the resistor.

The voltage dropped across components in a circuit is critical when designing or analyzing electrical systems. Voltage drop occurs when circuit components, such as resistors, capacitors, and inductors, consume part of the source's energy. This phenomenon is a crucial aspect of circuit analysis, as it helps determine a system's power distribution and efficiency. Potential energy, in this context, is the work required to move a unit of charge between two points in a DC circuit within a static electric field. Engineers often analyze Voltage Drop to evaluate circuit performance, alongside concepts like Electrical Resistance.

Voltage levels are standardized in both household and industrial applications to ensure the safe and efficient operation of electrical equipment. In residential settings, common voltage levels range from 110 to 240 volts, depending on the country. Industrial applications often utilize higher voltages, ranging from several kilovolts to tens of kilovolts, to transmit electrical energy over long distances with minimal losses.

Another important distinction in the realm of voltage is the difference between alternating current (AC) and direct current (DC). AC alternates periodically, whereas DC maintains a constant direction. AC is the standard for most household and industrial applications, as it can be easily transformed to different voltage levels and is more efficient for long-distance transmission. DC voltage, on the other hand, is often used in batteries and electronic devices.

Voltage is the driving force behind the flow of charge carriers in electrical circuits. It is essential for understanding the behavior of circuits and the relationship between voltage, current, and resistance, as described by Ohm's Law. The importance of voltage levels in household and industrial applications, as well as the significance of voltage drop in circuit analysis, cannot be overstated. Finally, the distinction between AC and DC voltage is critical for the safe and efficient operation of electrical systems in various contexts.

By incorporating these concepts into our understanding of voltage, we gain valuable insight into the world of electricity and electronics. From the pioneering work of Alessandro Volta to the modern applications of voltage in our daily lives, it is clear that voltage will continue to play a crucial role in the development and advancement of technology. Foundational principles such as Amperes Law and the Biot Savart Law complement voltage by describing how currents and magnetic fields interact.

 


What is a Capacitor?

A capacitor is an electrical component that stores and releases energy in a circuit. It consists of two conductive plates separated by an insulator and is commonly used for filtering, power conditioning, and energy storage in electronic and electrical systems.

 

What is a Capacitor?

A capacitor is a key component in electronics and power systems. It temporarily stores electrical energy and is widely used in both AC and DC circuits.

✅ Stores and discharges electrical energy efficiently

✅ Used in filtering, timing, and power factor correction

✅ Found in electronics, motors, and power supplies

It stores electric charge that can be released when needed. In this article, we will delve into the fundamentals of capacitors, including their functions, types, and applications. To better understand how capacitors support overall system performance, explore our Power Quality overview covering the fundamentals of voltage stability and energy flow.


A capacitor consists of two metallic plates separated by an insulating material known as the dielectric. The dielectric can be made from various materials, such as mica, paper, or ceramic. When voltage is applied across the plates, positive charges accumulate on one plate, while negative charges accumulate on the opposite plate. The amount of capacitor charge that can be stored depends on several factors, including plate area, plate separation, dielectric material, and voltage ratings. Capacitors are often used in capacitor banks to improve power factor and reduce energy losses in electrical systems.

How does a capacitor work? The primary function of a capacitor in an electronic circuit is to store electrical energy. Capacitors can be used for various purposes, such as filtering, timing, and coupling or decoupling signals. In addition, they play a crucial role in power supplies, ensuring that the output voltage remains stable even when there are fluctuations in the input voltage. Learn how capacitive loads influence circuit behavior and why they require precise capacitor selection for optimal performance.

A capacitor stores energy through the electrostatic field created between its plates. The stored energy can be calculated using the formula E = 0.5 * C * V^2, where E is the stored energy, C is the capacitance, and V is the voltage across the capacitor. Capacitance, measured in Farads, is a measure of a capacitor's ability to store charge. The capacitor voltage rating is crucial for ensuring safe operation and preventing dielectric breakdown during voltage spikes.
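The energy formula can be checked with a quick sketch; the 1000 µF / 12 V capacitor below is an illustrative example:

```python
def capacitor_energy(capacitance_farads: float, voltage_volts: float) -> float:
    """E = 0.5 * C * V^2: energy stored in a capacitor, in joules."""
    return 0.5 * capacitance_farads * voltage_volts ** 2

# A 1000 uF electrolytic capacitor charged to 12 V:
print(capacitor_energy(1000e-6, 12))  # ~0.072 J
```

Note the V² term: doubling the voltage quadruples the stored energy, which is why voltage rating matters as much as capacitance.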

There are several types of capacitors, each with unique applications. Common types include ceramic, electrolytic, film, and tantalum capacitors. Ceramic capacitors are widely used due to their low cost and small size. They are ideal for high-frequency applications and decoupling in power supply circuits. Electrolytic capacitors, on the other hand, are popular for their high capacitance values and are commonly used in filtering and energy storage applications. Capacitors play a crucial role in power factor correction, enabling industrial systems to reduce demand charges and enhance energy efficiency.

Dielectric materials used in capacitors can be organic (such as paper) or inorganic (such as ceramic). The choice of dielectric material depends on factors like the desired capacitance value, voltage rating, and operating temperature range. Additionally, different dielectric materials exhibit varying properties, making them suitable for specific applications. For a deeper understanding of energy relationships, see how apparent power differs from real and reactive power in systems using capacitors.

A capacitor can be classified as polarized or non-polarized based on the presence or absence of polarity. Polarized capacitors, like electrolytic capacitors, have a positive and a negative terminal and must be connected correctly in a circuit to function properly. Non-polarized capacitors, like ceramic capacitors, do not have a specific polarity and can be connected in any orientation.

A capacitor behaves differently in AC and DC circuits. In DC circuits, once a capacitor is charged, it blocks the flow of current, essentially acting as an open circuit. In AC circuits, however, capacitors pass alternating current through the continuous charging and discharging of their plates, a phenomenon known as displacement current.
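One standard way to quantify this frequency-dependent behavior is capacitive reactance, Xc = 1 / (2πfC): the opposition a capacitor presents to current, infinite at DC and falling as frequency rises. A minimal sketch with an illustrative 100 µF capacitor:

```python
import math

def capacitive_reactance(frequency_hz: float, capacitance_farads: float) -> float:
    """Xc = 1 / (2 * pi * f * C), in ohms; infinite at DC (f = 0)."""
    if frequency_hz == 0:
        return math.inf  # DC: a fully charged capacitor blocks current
    return 1.0 / (2 * math.pi * frequency_hz * capacitance_farads)

print(capacitive_reactance(0, 100e-6))    # inf -> open circuit at DC
print(capacitive_reactance(60, 100e-6))   # ~26.5 ohms at 60 Hz mains
```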

Understanding what a capacitor is and how it works is essential for anyone interested in electronics. Capacitors play a vital role in a wide range of applications, from energy storage and filtering to signal coupling and decoupling. Understanding the various types of capacitors and their specific applications enables you to make informed decisions when designing or troubleshooting electronic circuits. Explore how an automatic power factor controller dynamically adjusts capacitor usage to maintain an efficient power factor in real time.

 


What is Power Factor? Understanding Electrical Efficiency

Power factor is the ratio of real power to apparent power in an electrical system. It measures how efficiently electrical energy is converted into useful work. A high power factor means less energy loss and better system performance.

What is Power Factor?

It is defined as the ratio of real power (kW), which performs useful work, to apparent power (kVA), which is the total power supplied to the system.

✅ Indicates how efficiently electrical power is used

✅ Reduces energy losses and utility costs

✅ Improves system capacity and voltage regulation

A poor power factor means that some of the supplied power is wasted as reactive power — energy that circulates in the system but does not perform useful work.


Inductive loads, such as motors and variable speed drives, are a common cause of poor power factor. This inefficiency can lead to higher electric bills, particularly for industrial customers, because utilities often base demand charges on kVA rather than just on kW. To correct a poor power factor, capacitor banks are often installed to offset the inductive reactive power, reducing wasted energy and improving system efficiency.

Utilities must supply both the real and reactive components of power, which you can learn more about in our Apparent Power Formula: Definition, Calculation, and Examples guide.

 

Understanding Power Factor in Electrical Systems

Power factor (PF) is not just about efficiency — it also reflects the relationship between voltage and current in an electrical circuit. It measures how closely the voltage waveform and current waveform are aligned, or "in phase," with each other.

  • Leading Power Factor: Occurs when the current waveform leads the voltage waveform. Some lighting systems, like compact fluorescent lamps (CFLs), can produce a leading power factor.

  • Lagging Power Factor: Occurs when the current waveform lags behind the voltage waveform. This is typical in systems with motors and transformers. See our article on Lagging Power Factor and How to Correct It for a detailed discussion.

  • Non-Linear Loads: Loads that distort the current waveform from its original sine wave shape, often due to switching operations within devices. Examples include electric ballasts and switch-mode power supplies used in modern electronics. Their effect on system stability is discussed in our Power Quality and Harmonics Explained guide.

  • Mixed Loads: Most real-world systems have a mix of linear and non-linear loads, which can partially cancel out some harmonic distortions.

 

Real, Reactive, and Apparent Power

To fully understand power factor, it helps to grasp the three types of electrical power:

  • Real (or Active) Power: The power that performs actual work in the system, expressed in Watts (W).

  • Reactive (or Non-Active) Power: The power stored and released by the system’s inductive or capacitive elements, expressed in Volt-Amperes Reactive (VARs). Explore how it’s calculated in our article on Reactive Power Formula in AC Circuits.

  • Apparent Power: The combined effect of real and reactive power, expressed in Volt-Amperes (VA). Utilities must deliver apparent power to serve all the loads connected to their networks.

The relationship between these three can be visualized as a right triangle, with real power as the base, reactive power as the vertical side, and apparent power as the hypotenuse. If you want to calculate power factor quickly, check out our simple How to Calculate Power Factor guide.

 

A Simple Analogy: The Horse and the Railroad Car

Imagine a horse pulling a railroad car along uneven tracks. Because the tracks are not perfectly straight, the horse pulls at an angle. The real power is the effort that moves the car forward. The apparent power is the total effort the horse expends. The sideways pull of the horse — effort that does not move the car forward — represents the reactive power.

The angle of the horse’s pull is similar to the phase angle between current and voltage in an electrical system. When the horse pulls closer to straight ahead, less effort is wasted, and the real power approaches the apparent power. In electrical terms, this means the power factor approaches 1.0 — the ideal scenario where almost no energy is wasted. For more real-world examples, we provide further explanations in Power Factor Leading vs. Lagging

The formula for calculating power factor is:

PF = Real Power ÷ Apparent Power

If your facility has poor power factor, adding a Power Factor Correction Capacitor can make a significant difference.
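The power triangle and the PF ratio take only a few lines to compute. In this sketch, the 80 kW and 60 kVAR load figures are illustrative, not drawn from a real facility:

```python
import math

def apparent_power(real_kw: float, reactive_kvar: float) -> float:
    """Hypotenuse of the power triangle: S = sqrt(P^2 + Q^2), in kVA."""
    return math.hypot(real_kw, reactive_kvar)

def power_factor(real_kw: float, apparent_kva: float) -> float:
    """PF = real power / apparent power."""
    return real_kw / apparent_kva

s = apparent_power(80, 60)
print(s)                     # 100.0 kVA
print(power_factor(80, s))   # 0.8 -> a PF many utilities would surcharge
```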

 

Causes of Low Power Factor

Low PF is caused by inductive loads (such as transformers, electric motors, and high-intensity discharge lighting), which account for a major portion of the power consumed in industrial complexes. Unlike resistive loads that create heat by consuming kilowatts, inductive loads require current to create a magnetic field, and the magnetic field produces the desired work. The total or apparent power required by an inductive device is a composite of the following:

• Real power (measured in kilowatts, kW)

• Reactive power, the nonworking power caused by the magnetizing current required to operate the device (measured in kilovolt-amperes reactive, kVAR)

Reactive power required by inductive loads increases the amount of apparent power (measured in kilovolt-amperes, kVA) in your distribution system. The increase in reactive and apparent power causes the PF to decrease.

 

Simple How-to: Correcting Power Factor

Correcting a low power factor is typically straightforward and can bring significant benefits to a facility’s energy performance. Here are some common methods:

  • Install Capacitor Banks: Capacitors supply leading reactive power, which offsets the lagging reactive power caused by inductive loads such as motors.

  • Use Synchronous Condensers: These specialized rotating machines can dynamically correct power factor in larger industrial settings.

  • Upgrade Motor Systems: High-efficiency motors and variable frequency drives (VFDs) can reduce reactive power consumption.

  • Perform Regular System Audits: Periodic testing and monitoring can identify changes in power factor over time, allowing for proactive corrections.

Implementing power factor correction measures not only improves energy efficiency but also reduces system losses, stabilizes voltage levels, and extends the lifespan of electrical equipment.

 

Industries Where Power Factor Correction Matters

Industries that operate heavy machinery, large motors, or lighting banks often struggle with low PF. Facilities interested in monitoring their system health can benefit from tools like a Power Quality Analyzer Explained. Proper correction reduces wasted energy, prevents overheating, and extends the equipment's lifespan.

Power factor management is especially important for utilities and high-demand commercial sites, where poor PF can impact both Quality of Electricity and system reliability.

Some key sectors where maintaining a high power factor is vital include:

  • Manufacturing Plants: Motors, compressors, and welding equipment can cause significant reactive power demands.

  • Data Centers: The large number of servers and cooling systems contributes to power inefficiencies.

  • Hospitals: Medical imaging machines, HVAC systems, and other critical equipment generate substantial electrical loads.

  • Commercial Buildings: Lighting systems, elevators, and HVAC units can result in a low power factor without proper correction.

  • Water Treatment Facilities: Pumps and filtration systems involve extensive motor usage, requiring careful management of power quality.

Improving the power factor in these industries not only reduces utility penalties but also enhances the reliability of critical systems.
 

Frequently Asked Questions

What is a good power factor, and why does it matter?

A power factor (PF) of 1.0 (or 100%) is ideal, indicating that all the power supplied is effectively used for productive work. Utilities typically consider a PF above 0.9 (90%) as acceptable. Maintaining a high PF reduces energy losses, improves voltage stability, and can lower electricity costs by minimizing demand charges.

 

How does low power factor increase my electricity bill?

When your PF drops below a certain threshold (often 90%), utilities may impose surcharges to compensate for the inefficiencies introduced by reactive power. For instance, BC Hydro applies increasing penalties as PF decreases, with surcharges reaching up to 80% for PFs below 50%. Improving your PF can thus lead to significant cost savings.

 

What causes a low power factor in electrical systems?

Common causes include:

  • Inductive loads: Equipment like motors and transformers consume reactive power.

  • Underloaded motors: Operating motors below their rated capacity.

  • Non-linear loads: Devices like variable frequency drives and fluorescent lighting can distort current waveforms, leading to a lower PF.

 

How can I improve my facility's power factor?

Improvement strategies encompass:

  • Installing capacitor banks: These provide reactive power locally, reducing the burden on the supply.

  • Using synchronous condensers: Particularly in large industrial settings, they help adjust PF dynamically.

  • Upgrading equipment: Replacing outdated or inefficient machinery with energy-efficient models.

  • Regular maintenance: Ensuring equipment operates at optimal conditions to prevent PF degradation.

 

Does power factor correction benefit the environment?

Yes. Enhancing PF reduces the total current drawn from the grid, leading to:

  • Lower energy losses: Less heat generation in conductors.

  • Improved system capacity: Allowing more users to be served without infrastructure upgrades.

  • Reduced greenhouse gas emissions: As overall energy generation needs decrease.

 


Define Electromagnetism

Electromagnetism is the branch of physics that studies the interaction between electric currents and magnetic fields. It explains how electricity generates magnetism and powers devices such as motors, generators, and transformers in modern electrical systems.

 

How Should We Define Electromagnetism?

Here's a good way to define electromagnetism: Electromagnetism is a branch of physics involving the study of the electromagnetic force, a type of physical interaction that occurs between electrically charged particles.

✅ Explains the relationship between electricity and magnetism

✅ Governs the operation of motors, generators, and transformers

✅ Forms the basis for electromagnetic waves like light and radio

The electromagnetic force is carried by electromagnetic fields, which are composed of electric fields and magnetic fields, and it is responsible for electromagnetic radiation, such as light.

 

Who Discovered Electromagnetism?

In 1820, the Danish physicist Hans Christian Oersted discovered that the needle of a compass brought near a current-carrying conductor would be deflected. When the current flow stopped, the compass needle returned to its original position. This important discovery demonstrated a relationship between electricity and magnetism that led to the development of the electromagnet and to many of the inventions on which modern industry is based.

Oersted found that the magnetic field was not a property of the conductor material itself, because the conductor was made of nonmagnetic copper. The electrons moving through the wire created the magnetic field around the conductor. Since a magnetic field accompanies a charged particle, the greater the current flow, the greater the magnetic field. Figure 1 illustrates the magnetic field around a current-carrying wire. A series of concentric circles around the conductor represents the field, which, if all the lines were shown, would appear more as a continuous cylinder of such circles around the conductor.


Fig. 1 - Magnetic field formed around a conductor in which current is flowing.

 

As long as current flows in the conductor, the lines of force remain around it. [Figure 2] If a small current flows through the conductor, there will be a line of force extending out to circle A. If the current flow is increased, the line of force will increase in size to circle B, and a further increase in current will expand it to circle C. As the original line (circle) of force expands from circle A to B, a new line of force will appear at circle A. As the current flow increases, the number of circles of force increases, expanding the outer circles farther from the surface of the current-carrying conductor.


Fig. 2 - Expansion of magnetic field as current increases.

 

If the current flow is a steady, nonvarying direct current, the magnetic field remains stationary. When the current stops, the magnetic field collapses, and the magnetism around the conductor disappears.
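The statement that a greater current produces a greater field can be quantified for a long straight wire with B = μ₀I / (2πr). A short sketch; the current and distance values are illustrative assumptions:

```python
import math

# Field magnitude around a long straight wire: B = mu0 * I / (2 * pi * r).
MU0 = 4e-7 * math.pi  # permeability of free space, T*m/A

def field_at(current_a, radius_m):
    """Magnetic flux density (tesla) at distance r from a straight conductor."""
    return MU0 * current_a / (2.0 * math.pi * radius_m)

b_small = field_at(10.0, 0.01)   # 10 A, measured 1 cm from the wire
b_large = field_at(20.0, 0.01)   # doubling the current...

print(b_large / b_small)  # ...doubles the field: 2.0
```

This matches the expanding circles of Figure 2: more current means a stronger field at every radius, and the field falls off with distance from the conductor.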

A compass needle is used to demonstrate the direction of the magnetic field around a current-carrying conductor. Figure 3 View A shows a compass needle positioned at right angles to, and approximately one inch from, a current-carrying conductor. If no current were flowing, the north-seeking end of the compass needle would point toward the Earth’s magnetic pole. When current flows, the needle lines itself up at right angles to a radius drawn from the conductor. Since the compass needle is a small magnet, with lines of force extending from south to north inside the metal, it will turn until the direction of these lines agrees with the direction of the lines of force around the conductor. As the compass needle is moved around the conductor, it will maintain itself in a position at right angles to the conductor, indicating that the magnetic field around a current-carrying conductor is circular. As shown in View B of Figure 3, when the direction of current flow through the conductor is reversed, the compass needle points in the opposite direction, indicating that the magnetic field has reversed its direction.


Fig.3 - Magnetic field around a current-carrying conductor.

 

A method for determining the direction of the lines of force when the direction of current flow is known is illustrated in Figure 4. If the conductor is grasped in the left hand, with the thumb pointing in the direction of current flow, the fingers will be wrapped around the conductor in the same direction as the lines of the magnetic field. This is called the left-hand rule.


Fig.4 - Left-hand rule.
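The hand-rule geometry can be checked numerically: for conventional current, the field direction at a point is along the cross product of the current direction and the radial direction (right-hand rule); the article's left-hand rule applies to electron flow, which runs opposite to conventional current and yields the same physical field. A minimal sketch with assumed directions:

```python
# Direction (not magnitude) of the field circling a straight conductor.
# Conventional current: B direction ~ current_dir x radial_dir (right-hand rule).
# Electron flow runs the opposite way, which is why the article's rule uses
# the left hand -- the resulting physical field is identical.
def cross(a, b):
    """3-D vector cross product."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

current_dir = (0.0, 0.0, 1.0)  # conventional current along +z
radial_dir = (1.0, 0.0, 0.0)   # observation point on the +x axis

print(cross(current_dir, radial_dir))  # (0.0, 1.0, 0.0): field points along +y
```

Moving the observation point around the wire rotates the result with it, which is exactly the circular field the compass needle traces in Figure 3.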

 

Although it has been stated that the lines of force have direction, this should not be construed to mean that the lines have motion in a circular direction around the conductor. Although the lines of force tend to act in a clockwise or counterclockwise direction, they are not revolving around the conductor.

Since current flows from negative to positive, many illustrations indicate the direction of electron flow with a dot symbol on the end of the conductor when the electrons are flowing toward the observer and a plus sign when they are flowing away from the observer. [Figure 5]


Fig. 5 - Direction of current flow in a conductor.

 

When a wire is bent into a loop and an electric current flows through it, the left-hand rule remains valid. [Figure 6]


Fig. 6 - Magnetic field around a looped conductor.

 

If the wire is coiled into two loops, many of the lines of force become large enough to include both loops. Lines of force go through the loops in the same direction, circle around the outside of the two coils, and come in at the opposite end. [Figure 7]


Fig. 7 - Magnetic field around a conductor with two loops.

 

When a wire contains many such loops, it is referred to as a coil. The lines of force form a pattern through all the loops, causing a high concentration of flux lines through the center of the coil. [Figure 8]


Fig. 8 - Magnetic field of a coil.

 

In a coil made from loops of a conductor, many of the lines of force are dissipated between the loops of the coil. By placing a soft iron bar inside the coil, the lines of force will be concentrated in the center of the coil, since soft iron has a greater permeability than air. [Figure 9] This combination of an iron core in a coil of wire loops, or turns, is called an electromagnet, since the poles (ends) of the coil possess the characteristics of a bar magnet.


Fig. 9 - Electromagnet.

 

The addition of the soft iron core does two things for the current-carrying coil. First, the magnetic flux increases, and second, the flux lines become more concentrated.

When direct current flows through the coil, the core becomes magnetized with the same polarity (north and south poles) as the coil would have without the core. If the current is reversed, the polarity will also be reversed.

The polarity of the electromagnet is determined by the left-hand rule in the same manner as the polarity of the coil without the core was determined. If the coil is grasped in the left hand in such a manner that the fingers curve around the coil in the direction of electron flow (minus to plus), the thumb will point in the direction of the north pole. [Figure 10]


Fig. 10 - Left-hand rule applied to a coil.

The strength of the magnetic field of the electromagnet can be increased by either increasing the flow of current or the number of loops in the wire. Doubling the current flow approximately doubles the strength of the field, and similarly, doubling the number of loops approximately doubles the magnetic field strength. Finally, the type of metal in the core is a factor in the field strength of the electromagnet.
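These proportionalities can be sketched with the standard long-solenoid approximation B ≈ μᵣ · μ₀ · (N/L) · I. The turn count, current, and the core's relative permeability below are illustrative assumptions:

```python
import math

# Long-solenoid approximation: B ~ mu_r * mu0 * (N / L) * I.
# Demonstrates the article's claims: doubling the current doubles the field,
# and a soft-iron core (mu_r >> 1) greatly concentrates the flux.
MU0 = 4e-7 * math.pi  # permeability of free space, T*m/A

def solenoid_field(turns, length_m, current_a, mu_r=1.0):
    """Approximate flux density (tesla) inside a long solenoid."""
    return mu_r * MU0 * (turns / length_m) * current_a

air = solenoid_field(200, 0.10, 2.0)             # air-core coil
iron = solenoid_field(200, 0.10, 2.0, mu_r=200)  # soft-iron core (assumed mu_r)

print(f"air core: {air * 1000:.2f} mT, iron core: {iron:.2f} T")
print(solenoid_field(200, 0.10, 4.0) / air)  # doubling the current -> 2.0
```

The `mu_r=200` value is only a placeholder; real soft iron varies widely with grade and flux level, but any μᵣ well above 1 shows why the iron core matters.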

A soft iron bar is attracted to either pole of a permanent magnet and, likewise, is attracted by a current-carrying coil. The lines of force extend through the soft iron, magnetizing it by induction and pulling the iron bar toward the coil. If the bar is free to move, it will be drawn into the coil to a position near the center where the field is strongest. [Figure 11]


Fig. 11 - Solenoid with iron core.

 

Electromagnets are utilized in various electrical instruments, including motors, generators, relays, and other devices. Some electromagnetic devices operate on the principle that an iron core, held away from the center of a coil, will be rapidly pulled into its center position when the coil is energized. This principle is utilized in the solenoid, also known as a solenoid switch or relay, where the iron core is spring-loaded off-center and moves to complete a circuit when the coil is energized. 

 


Who Discovered Electricity

Who discovered electricity? Early pioneers including William Gilbert, Benjamin Franklin, Luigi Galvani, Alessandro Volta, and Michael Faraday advanced static electricity, circuits, and electromagnetism, laying the foundation for modern electrical science.

 

Who Discovered Electricity?

From the writings of Thales of Miletus, it appears that Westerners knew as long ago as 600 B.C. that amber becomes charged by rubbing. But other than that, there was little real progress until the English scientist William Gilbert, in 1600, described the electrification of many substances and coined the term "electricity" from the Greek word for amber. For a deeper look at how ideas about discovery versus invention evolved, see who invented electricity for historical perspective.

As a result, Gilbert is called the father of modern electricity. In 1660, Otto von Guericke invented a crude machine for producing static electricity: a ball of sulfur, rotated by a crank with one hand and rubbed with the other. Successors, such as Francis Hauksbee, made improvements that provided experimenters with a ready source of static electricity. Today's highly developed descendant of these early machines is the Van de Graaff generator, which is sometimes used as a particle accelerator. Robert Boyle realized that attraction and repulsion were mutual and that electric force was transmitted through a vacuum. Stephen Gray distinguished between conductors and nonconductors. C. F. Du Fay recognized two kinds of electric charge, which Benjamin Franklin and Ebenezer Kinnersley of Philadelphia later named positive and negative.

For a quick chronological overview of these pioneering advances, consult this timeline of electricity to trace developments across centuries.

Progress quickened after the Leyden jar was invented in 1745 by Pieter van Musschenbroek. The Leyden jar stored static electricity, which could be discharged all at once. In 1747 William Watson discharged a Leyden jar through a circuit, and the resulting understanding of current and circuits opened a new field of experimentation. Henry Cavendish, by measuring the conductivity of materials (he compared the simultaneous shocks he received by discharging Leyden jars through the materials), and Charles A. Coulomb, by expressing mathematically the attraction of electrified bodies, began the quantitative study of electric power. For additional background on early experiments and theory, explore the history of electricity for context and sources.

Despite what you may have learned, Benjamin Franklin did not "discover" electric power. In fact, electric power did not begin when Franklin flew his kite during a thunderstorm or when light bulbs were installed in houses all around the world. For details on why Franklin is often miscredited, read did Ben Franklin discover electricity for clarification.

The truth is that electric power has always been around because it naturally exists in the world. Lightning, for instance, is simply a flow of electrons between the ground and the clouds. When you touch something and get a shock, that is really static electricity moving toward you. If you are new to the core concepts, start with basic electricity to ground the fundamentals.

Power Personalities

 

Benjamin Franklin

Ben Franklin was an American writer, publisher, scientist and diplomat, who helped to draw up the famous Declaration of Independence and the US Constitution. In 1752 Franklin proved that lightning and the spark from amber were one and the same thing. The story of this famous milestone is a familiar one: Franklin fastened an iron spike to a silken kite, which he flew during a thunderstorm, with an iron key tied to the end of the kite string. When lightning flashed, a tiny spark jumped from the key to his wrist. The experiment proved Franklin's theory. For more about Franklin's experiments, see Ben Franklin and electricity for experiment notes and legacy.

 

Galvani and Volta

In 1786, Luigi Galvani, an Italian professor of medicine, found that when the leg of a dead frog was touched by a metal knife, the leg twitched violently. Galvani thought that the muscles of the frog must contain electricity. By 1792 another Italian scientist, Alessandro Volta, disagreed: he realised that the main factors in Galvani's discovery were the two different metals - the steel knife and the tin plate - upon which the frog was lying. Volta showed that when moisture comes between two different metals, electric power is created. This led him to invent the first electric battery, the voltaic pile, which he made from thin sheets of copper and zinc separated by moist pasteboard.

In this way, a new kind of electric power was discovered, electric power that flowed steadily like a current of water instead of discharging itself in a single spark or shock. Volta showed that electric power could be made to travel from one place to another by wire, thereby making an important contribution to the science of electricity. The unit of electrical potential, the Volt, is named after Volta.

 

Michael Faraday

The credit for generating electric current on a practical scale goes to the famous English scientist, Michael Faraday. Faraday was greatly interested in the invention of the electromagnet, but his brilliant mind took earlier experiments still further. If electricity could produce magnetism, why couldn't magnetism produce electric power?

In 1831, Faraday found the solution. Electricity could be produced through magnetism by motion. He discovered that when a magnet was moved inside a coil of copper wire, a tiny electric current flowed through the wire. Of course, by today's standards, Faraday's electric dynamo, or electric generator, was crude and provided only a small electric current, but he had discovered the first method of generating electric power by means of motion in a magnetic field.

 

Thomas Edison and Joseph Swan

Nearly 40 years went by before a really practical DC (Direct Current) generator was built by Thomas Edison in America. Edison's many inventions included the phonograph and an improved printing telegraph. In 1878 Joseph Swan, a British scientist, invented the incandescent filament lamp and within twelve months Edison made a similar discovery in America. For a broader view of his role in power systems, visit Thomas Edison and electricity for projects and impact.

Swan and Edison later set up a joint company to produce the first practical filament lamp. Prior to this, electric lighting had been provided by crude arc lamps.

Edison used his DC generator to provide electricity to light his laboratory and later to illuminate the first New York street to be lit by electric lamps, in September 1882. Edison's successes were not without controversy, however - although he was convinced of the merits of DC for generating electricity, other scientists in Europe and America recognised that DC brought major disadvantages.

 

George Westinghouse and Nikola Tesla

Westinghouse was a famous American inventor and industrialist who purchased and developed Nikola Tesla's patented motor for generating alternating current. The work of Westinghouse, Tesla and others gradually persuaded American society that the future lay with AC rather than DC (adoption of AC generation enabled the transmission of large blocks of electrical power at higher voltages via transformers, which would have been impossible otherwise). Today the unit of measurement for magnetic fields commemorates Tesla's name.

 

James Watt

When Edison's generator was coupled with Watt's steam engine, large-scale electricity generation became a practical proposition. James Watt, the Scottish inventor of the steam condensing engine, was born in 1736. His improvements to steam engines were patented over a period of 15 years, starting in 1769, and his name was given to the electric unit of power, the watt.

Watt's engines used the reciprocating piston; however, today's thermal power stations use steam turbines, following the Rankine cycle, worked out by another famous Scottish engineer, William J. M. Rankine, in 1859.

 

André Ampère and Georg Ohm

André-Marie Ampère, a French mathematician who devoted himself to the study of electricity and magnetism, was the first to explain the electrodynamic theory. A permanent memorial to Ampère is the use of his name for the unit of electric current.

Georg Simon Ohm, a German mathematician and physicist, was a college teacher in Cologne when, in 1827, he published "The Galvanic Circuit Investigated Mathematically". His theories were coldly received by German scientists, but his research was recognised in Britain, and he was awarded the Copley Medal in 1841. His name has been given to the unit of electrical resistance.


 

 


What is Energy?

Energy is the capacity to do work, powering motion, heat, and electricity. It exists in many forms—kinetic, potential, chemical, thermal, and renewable—transforming constantly to sustain life, industry, and the universe itself.

 

What is Energy?

Energy is a fundamental concept in physics that describes the capacity of a physical system to perform work. In a sense, energy is the ability to do work.

✅ Exists in forms like kinetic, potential, thermal, chemical, and electrical

✅ Transforms between forms but is conserved under physical laws

✅ Powers human activity, industry, and natural processes

 

To fully understand what energy is, it helps to start with Basic Electricity, which explains the foundation of how electrical systems operate in daily life.

It can be released or transferred through chemical reactions, nuclear reactions, and electromagnetic waves. Energy is classified into various types based on its origin, nature, and form, including mechanical, thermal, chemical, electrical, radiant, gravitational, nuclear, and sound. With the rise of technology and the global population, energy use has surged, intensifying the demand for alternative and renewable energy sources such as solar, wind, hydropower, and geothermal.

 

History and Conceptual Origins

The word "energy" comes from the Greek "energeia," meaning activity or operation. Ancient philosophers, such as Aristotle, used it to describe vitality and action. In the 17th to 19th centuries, scientists such as Newton, Joule, and Helmholtz formalized energy as a measurable quantity in mechanics and thermodynamics. By the 20th century, Einstein’s equation E = mc² had shown that mass itself is a form of energy, reshaping physics and cosmology.

 

The Law of Conservation of Energy

The law of conservation of energy states that the total amount of energy in a closed system remains constant. Energy cannot be created or destroyed; it can only change form. Whether in chemical reactions, mechanical systems, or nuclear processes, the initial and final total energy always balances.

Energy is typically measured in joules (J). One joule equals the work done when a force of one newton moves an object one meter. Larger quantities are measured in kilojoules (kJ) or kilowatt-hours (kWh), which are commonly used in electricity billing.

 

The Mathematics of Energy

Energy is quantified with precise formulas:

    • Kinetic energy: KE = ½ mv²

    • Potential energy: PE = mgh

    • Work: W = F × d

These equations demonstrate how motion, position, and force are translated into measurable energy. The joule is equivalent to newton × meter, tying energy directly to mechanics.
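As a quick check, the three formulas can be evaluated directly. The masses, speeds, and heights below are illustrative values, not data from the article:

```python
# The three formulas above, evaluated for small worked examples.
def kinetic_energy(m_kg, v_ms):
    """KE = 1/2 * m * v^2, in joules."""
    return 0.5 * m_kg * v_ms ** 2

def potential_energy(m_kg, h_m, g=9.81):
    """PE = m * g * h, in joules (g = 9.81 m/s^2 near Earth's surface)."""
    return m_kg * g * h_m

def work(force_n, distance_m):
    """W = F * d, in joules."""
    return force_n * distance_m

print(kinetic_energy(1000.0, 20.0))  # 1000 kg car at 20 m/s -> 200000.0 J
print(potential_energy(10.0, 5.0))   # 10 kg mass raised 5 m  -> 490.5 J
print(work(1.0, 1.0))                # 1 N over 1 m -> 1.0 J (one joule)
```

The last line restates the joule's definition: one newton acting over one metre does exactly one joule of work.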

 

What is Energy Transformation and Efficiency

Energy transformations follow the principles of thermodynamics, where no process is perfectly efficient. For example, in an engine, the conversion of chemical fuel into mechanical work produces useful power, but some energy is always lost as heat. These limitations underscore the importance of studying energy efficiency in both engineering and environmental science.

In real systems, energy constantly transforms:

  • Combustion in engines: chemical → thermal → mechanical → electrical

  • Solar panels: radiant → electrical

  • Hydropower: gravitational potential → kinetic → electrical

Yet no process is perfectly efficient. Friction, resistance, and heat losses dissipate useful energy, echoing the second law of thermodynamics and the concept of entropy. This inefficiency shapes the design of power plants, engines, and renewable systems. 
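Because the overall efficiency of a conversion chain is the product of its stage efficiencies, losses compound quickly. A short sketch with assumed stage values (not measured data):

```python
# Chained conversion efficiencies: overall efficiency is the product of
# the stages, so each imperfect step compounds the losses before it.
stages = {
    "chemical -> thermal (combustion)":      0.90,  # assumed
    "thermal -> mechanical (engine)":        0.40,  # assumed
    "mechanical -> electrical (generator)":  0.95,  # assumed
}

overall = 1.0
for name, efficiency in stages.items():
    overall *= efficiency
    print(f"{name}: {efficiency:.0%}")

print(f"overall: {overall:.1%} of the fuel's chemical energy becomes electricity")
```

With these placeholder figures, only about a third of the fuel's chemical energy survives the chain, which is why the thermal stage dominates real power-plant design.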


Different Types of Energy

Energy can be classified into various types based on origin, nature, and form. Each type has unique characteristics, examples, and applications in everyday life and industry.

Mechanical Energy

Mechanical energy is the energy of motion or position. It includes:

  • Potential energy – stored energy due to position or configuration (e.g., water behind a dam).

  • Kinetic energy – energy of motion (e.g., a moving car).

Mechanical energy is widely used in engines, turbines, and machines.

 

Thermal Energy

Thermal energy is related to the temperature of an object or system, arising from the kinetic motion of its atoms and molecules. It transfers between objects as heat. Everyday examples include boiling water, heating systems, and combustion engines.

 

Chemical Energy

Chemical energy is stored in the bonds of molecules and released during chemical reactions. Examples include gasoline fueling a car, food fueling our bodies, and batteries powering electronics. It underpins most biological and industrial processes.

 

Electrical Energy

Electrical energy results from the movement of electrons through a conductor. It powers lighting, electronics, appliances, and the global power grid. It is easily transported and converted into other forms of energy. Since energy drives current flow, learning about Electrical Energy and how it relates to Voltage and Current makes the concept more practical.

 

Radiant Energy

Radiant energy is carried by electromagnetic waves, including visible light, radio waves, and microwaves. It enables vision, communication systems, and solar power technology. Sunlight is the most significant source of radiant energy on Earth.

 

Gravitational Potential Energy

Gravitational energy is stored by objects in a gravitational field due to their height or mass. Lifting an object, climbing a hill, or operating a hydroelectric dam all rely on gravitational potential energy.

 

Nuclear Energy

Nuclear energy is released during atomic reactions, such as fission (splitting nuclei) or fusion (combining nuclei). It is harnessed in nuclear power plants to generate electricity and powers stars through fusion.

 

Sound Energy

Sound energy comes from the vibrations of particles in a medium such as air, water, or solids. It is essential in communication, music, sonar, and countless daily experiences.

 

Comparison Table of Energy Forms

| Form | Definition | Example | Common Use / Efficiency |
|---|---|---|---|
| Mechanical | Motion or position (kinetic + potential) | Car in motion, dam reservoir | Engines, machines, turbines |
| Thermal | Motion of atoms/molecules, heat transfer | Boiling water | Heating, engines |
| Chemical | Energy in molecular bonds | Gasoline, food, batteries | Fuels, metabolism, storage |
| Electrical | Electron flow through conductors | Light bulb, power lines | Appliances, power systems |
| Radiant | Electromagnetic waves | Sunlight, radio waves | Solar panels, communications |
| Gravitational | Position in a gravitational field | Falling rock, hydro dam | Hydropower, lifting systems |
| Nuclear | Atomic fission/fusion | Nuclear reactor, stars | Electricity generation |
| Sound | Vibrations in a medium | Music, sonar, speech | Communication, entertainment |


What is Energy in Everyday Life?

Energy is used in numerous everyday activities, including heating and cooling homes, cooking, transportation, communication, and entertainment. Energy use has increased dramatically with the growth of technology and the global population. However, the availability of energy sources is limited, and the demand is increasing. This has led to a search for alternative and renewable energy sources, such as solar, wind, hydropower, and geothermal energy. The physics of 3 phase electricity and 3 phase power demonstrates how energy is efficiently distributed through modern power grids.

Renewable energy sources, such as solar energy, are gaining popularity due to their clean, sustainable, and renewable nature. Solar energy is derived from the sun's radiation and can be converted into electricity through photovoltaic (PV) cells or concentrated solar power (CSP) systems. Solar energy is utilized for various purposes, including generating electricity, heating water, and drying crops. The relationship between energy, Active Power, and Reactive Power is key to understanding how electricity performs useful work.

 

What is Energy in Physics?

In physics, the concept of energy is closely tied to thermodynamics, which explains how heat and work are transferred within systems. The law of conservation of energy ensures that energy is never lost, only changed in form through conversion processes. Whether it is the power delivered by an engine, the work performed by a force, or the density of energy stored in fuels and batteries, different forms of energy shape how the physical world operates and how technology supports human progress.

  • Biology: Cells use chemical energy stored in ATP for growth and repair.

  • Physics: Einstein’s equation E = mc² links matter and energy, essential in cosmology and nuclear physics.

  • Engineering: Modern grids rely on energy storage (batteries, pumped hydro), demand response, and smart systems to balance supply and demand.

Energy principles are also explained through fundamental laws, such as Ohm’s Law and Ampere’s Law, which connect voltage, current, and resistance.

 

Future of Energy

As global demand increases, the future of energy will focus on improving storage systems and raising energy density in fuels and batteries. Advances in renewable systems must also balance the conservation of resources with reliable power delivery. New technologies are being developed to optimize energy conversion and minimize losses, ensuring sustainable solutions for future generations. The future hinges on decarbonization, the integration of renewable energy, and global policy shifts. Fossil fuel limitations and climate change demand innovation in:

  • Large-scale storage (lithium batteries, hydrogen fuel cells).

  • Grid modernization and smart energy management.

  • Sustainable policy frameworks balancing demand with environmental limits.

Energy is not only a scientific concept but also a central issue shaping economies, technology, and our planet’s survival.


How is energy measured and quantified?

Energy is typically measured in joules (J) or kilojoules (kJ). The joule is the unit of measurement for energy in the International System of Units (SI). For example, one joule is the amount of energy needed to move an object with a force of one newton (N) over a distance of one meter (m). Kilojoules (kJ) measure larger amounts of energy, such as the energy content of food or the energy output of power plants.

Energy measurements vary depending on the forms being studied. For instance, thermal systems adhere to the laws of thermodynamics, whereas electrical systems prioritize power output and efficiency. Units like joules, calories, and kilowatt-hours quantify the work done, while energy density helps compare fuels and storage methods in practical applications.

Beyond joules, energy is measured in:

  • Calories – food energy.

  • BTU (British Thermal Unit) – heating and fuel.

  • Kilowatt-hours – electricity billing.

Conversions between units help bridge the gap between physics, engineering, and daily life. For example, a 100-watt light bulb consumes 100 joules every second.
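These conversions are simple multiplications by standard factors (1 kWh = 3.6 MJ, 1 kcal = 4184 J, 1 BTU ≈ 1055 J). A minimal sketch using the 100-watt bulb from above:

```python
# Standard energy-unit conversion factors.
J_PER_KWH = 3.6e6     # 1 kilowatt-hour = 3.6 million joules
J_PER_KCAL = 4184.0   # 1 food Calorie (kcal) = 4184 joules
J_PER_BTU = 1055.06   # 1 British Thermal Unit ~ 1055 joules

def energy_joules(power_w, seconds):
    """Energy consumed: E = P * t."""
    return power_w * seconds

# A 100 W bulb running for one hour:
e_j = energy_joules(100.0, 3600.0)
print(e_j)                          # 360000.0 J
print(e_j / J_PER_KWH)              # 0.1 kWh (what the utility bills)
print(round(e_j / J_PER_KCAL, 1))   # about 86 kcal
```

The same 360 kJ looks tiny on an electricity bill (0.1 kWh) but substantial as food energy, which is why the right unit depends on the audience.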

 

Frequently Asked Questions

 

What is the difference between energy and power?

Energy is the capacity to do work; power is the rate of energy transfer, measured in watts (joules per second).

 

Can energy be created?

No. According to the law of conservation, energy cannot be created or destroyed, only transformed.

 

What is energy density?

Energy density refers to the amount of energy stored per unit mass or volume, which is particularly important in fuels and batteries.

 

How is energy related to thermodynamics?

The first law describes conservation; the second law explains inefficiencies and entropy.

 

