Windmills For Electricity Explained



Windmills for electricity use wind energy to generate clean, renewable power. These wind turbines convert kinetic energy into electrical energy, reducing carbon emissions and dependence on fossil fuels. 

 

What are Windmills for Electricity?

Windmills for electricity are modern devices that transform kinetic wind energy into electrical power.

✅ Harness renewable energy for clean power

✅ Reduce carbon footprint and dependence on fossil fuels

✅ Support sustainable power generation worldwide

Windmills for electricity are part of a broader shift toward renewable energy, providing clean alternatives to fossil fuels for homes, businesses, and utilities.

 

History of Windmills

Mankind has been harnessing the wind's energy for centuries. From Holland to traditional farms around the world, windmills were used to pump water through primitive irrigation systems or to grind grain. The wind turned large "sails" connected to a long vertical shaft, which drove a grinding wheel or drew water from a well. Today's turbines harness the energy of the wind to turn large rotor blades, which in turn spin a generator that produces electric power. Alongside wind, other renewable energy sources such as solar, biomass, and tidal energy are shaping a diversified and sustainable energy future.

From the mid-1970s to the mid-1980s, the United States government collaborated with industry to advance windmill technology for power generation and enable the development of large commercial wind turbines. NASA led this effort at the Lewis Research Center in Cleveland, Ohio, and it was an extraordinarily successful government research and development activity.

 

National Science Foundation

With funding from the National Science Foundation and later the Department of Energy (DOE), a total of 13 experimental wind turbines were put into operation, spanning four major wind turbine designs. This research and development program pioneered many of the multi-megawatt turbine technologies in use today, including steel tube towers, variable-speed generators, composite blade materials, partial-span pitch control, and aerodynamic, structural, and acoustic engineering design capabilities. The large wind turbines developed under this effort set several world records for diameter and power output; the Mod-2 wind turbine cluster produced a total of 7.5 megawatts of power in 1981. Government incentives, such as alternative energy tax credits, have played a major role in expanding wind power adoption across North America.

 

Wind Turbine Technology

In 1987, the Mod-5B was the largest single wind turbine operating in the world with a rotor diameter of nearly 100 meters and a rated power of 3.2 megawatts. It demonstrated an availability of 95 percent, an unparalleled level for a new first-unit wind turbine. The Mod-5B featured the first large-scale variable-speed drive train and a sectioned, two-blade rotor, which enabled easy transport of the blades.

Later in the 1980s, California offered tax rebates for wind turbines as an environmentally friendly power source. These rebates helped fund the first major deployment of wind power for the utility grid. The turbines, gathered in large wind parks such as Altamont Pass, would be considered small and uneconomical by modern wind power development standards.

In the 1990s, as aesthetics and durability became more important, turbines were placed atop steel or reinforced concrete towers. Smaller generators are attached to the tower at ground level, and the assembled tower is then raised into position. Larger generators are hoisted into position atop the tower; a ladder or staircase inside the tower allows technicians to reach and maintain the generator.

Originally, wind turbines were built right next to where their power was needed. With the availability of long-distance electric power transmission, wind generators are now often on wind farms in windy locations, and huge ones are being built offshore, sometimes transmitting power back to land using high-voltage submarine cable. Since wind turbines are a renewable means of generating power, they are being widely deployed, but their cost is often subsidized by taxpayers, either directly or through renewable energy credits. Much depends on the cost of alternative energy sources. The cost of wind generators per unit of power has been decreasing by about 4% per year.
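The compounding effect of that cost decline can be sketched with a quick calculation. The ~4% annual figure comes from the text above; the $1,000/kW starting cost is purely an illustrative assumption:

```python
# Project per-unit wind generation cost under the ~4%/year decline cited
# above. The $1,000/kW starting figure is hypothetical, for illustration only.
def projected_cost(initial_cost, annual_decline, years):
    """Cost after compounding an annual fractional decline."""
    return initial_cost * (1 - annual_decline) ** years

start = 1000.0  # $/kW, assumed starting point
for yr in (5, 10, 20):
    print(f"After {yr:2d} years: ${projected_cost(start, 0.04, yr):,.0f}/kW")
```

At a steady 4% decline, cost falls by roughly a third over ten years and more than half over twenty, which helps explain why subsidy dependence diminishes over time.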

 

Modern Wind Turbines

The most modern generations of windmills for electricity are more properly called wind turbines, or wind generators, and are primarily used to generate electric power. The largest wind turbines can generate up to 6 MW of power (for comparison, a modern fossil fuel power plant generates between 500 and 1,300 MW). Many large-scale renewable energy projects now combine wind farms with solar and storage systems, ensuring reliable, clean power for communities worldwide.

Small wind turbines can generate a few kilowatts, while larger models produce 100 kilowatts or more, depending on design and location. As a turbine's blades capture moving air, the electricity generated can be used directly or fed into the electrical grid. On a utility scale, wind farms combine many large turbines to deliver massive amounts of energy, powering thousands of homes and businesses. This range of applications, from residential to industrial, demonstrates the versatility of wind technology in meeting diverse energy needs.

Related News

Norton's Theorem

Norton’s Theorem simplifies electrical circuit analysis by reducing any complex linear network to an equivalent current source in parallel with a resistor, enabling easier calculation of load current, evaluation of equivalent resistance, and solution of practical circuit problems.

 

What is Norton’s Theorem?

Norton’s Theorem states that any linear electrical network with sources and resistances can be reduced to an equivalent current source in parallel with a single resistor.

✅ Represents complex circuits as a simple current source and resistor

✅ Simplifies load current and resistance calculations

✅ Enhances circuit analysis for power systems and electronics

 

Understanding Norton's Theorem

Norton's Theorem is a foundational principle in electrical engineering, used to simplify the analysis of linear electronic circuits. This theorem, often taught alongside Thevenin's Theorem, provides a practical method for reducing complex circuits into a manageable form. The main insight of Norton's Theorem is that any two-terminal linear circuit, regardless of its internal complexity, can be represented by an ideal current source in parallel with a single resistor. This transformation does not alter external circuit behavior, making calculations and predictions about circuit performance far more straightforward. To fully grasp circuit simplification methods like Norton’s Theorem, it helps to start with a foundation in basic electricity.

Norton’s Theorem states that any linear electrical network can be simplified into a Norton equivalent circuit, making analysis more manageable. This representation is similar to an equivalent circuit consisting of a single current source and parallel resistance, allowing engineers to determine load behavior with ease. By calculating the total resistance of the network and combining it with the Norton current, complex problems become straightforward, enabling accurate predictions of circuit performance in both educational and real-world applications.

 

How Norton's Theorem Works

To use Norton's Theorem, engineers follow a step-by-step process:

  1. Identify the portion of the circuit to simplify: Usually, this means the part of the circuit as seen from a pair of terminals (often where a load is connected).

  2. Find the Norton current (IN): This is the current that would flow through a short circuit placed across the two terminals. It's calculated by removing the load resistor and finding the resulting current between the open terminals.

  3. Calculate the Norton resistance (RN): All independent voltage and current sources are deactivated (voltage sources are shorted, current sources are open-circuited), and the resistance seen from the open terminals is measured.

  4. Draw the Norton equivalent: Place the calculated current source (IN) in parallel with the calculated resistor (RN) between the terminals in question.

  5. Reconnect the load resistor: The circuit is now simplified, and analysis (such as calculating load current or voltage) is far easier.

Calculating Norton resistance often relies on principles such as Ohm’s Law and electrical resistance.
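The five steps above can be sketched numerically for one simple network: an ideal source Vs with a series resistor R1, and R2 across the output terminals. The component values (12 V, 4 Ω, 12 Ω, 6 Ω load) are arbitrary illustrations, not from the text:

```python
# Norton-equivalent sketch for a simple network: source Vs with series R1,
# and R2 across the output terminals. All values are hypothetical examples.

def norton_equivalent(Vs, R1, R2):
    """Return (IN, RN) seen at the terminals across R2."""
    IN = Vs / R1                 # short-circuit current: a short bypasses R2
    RN = (R1 * R2) / (R1 + R2)   # sources deactivated: R1 in parallel with R2
    return IN, RN

def load_current(IN, RN, RL):
    """Current-divider rule: the share of IN that flows into the load RL."""
    return IN * RN / (RN + RL)

IN, RN = norton_equivalent(Vs=12.0, R1=4.0, R2=12.0)
print(f"IN = {IN} A, RN = {RN} ohms")
print(f"Current into a 6-ohm load: {load_current(IN, RN, 6.0):.2f} A")
```

Checking against the full circuit confirms the equivalent: with a 6 Ω load, the original network also delivers exactly 1 A, which is the point of the theorem.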

 

Why Use Norton's Theorem?

Complex electrical networks often contain multiple sources, resistors, and other components. Calculating the current or voltage across a particular element can be difficult without simplification. Norton's Theorem allows engineers to:

  • Save time: By reducing a circuit to source and resistance values, repeated calculations for different load conditions become much faster.

  • Enhance understanding: Seeing a circuit as a source and parallel resistor clarifies key behaviors, such as maximum power transfer.

  • Test different scenarios: Engineers can quickly swap different load values and immediately see the effect without having to recalculate the entire network each time.

Understanding how current behaves in different networks connects directly to the study of direct current and alternating current.

 

Comparison to Thevenin’s Theorem

Norton's Theorem is closely related to Thevenin's Theorem. Thevenin's approach uses a voltage source in series with a resistor, while Norton's uses a current source in parallel with a resistor. The two equivalents can be converted mathematically:

  • Thevenin equivalent resistance (RTH) = Norton equivalent resistance (RN)
  • Norton current (IN) = Thevenin voltage (VTH) divided by Thevenin resistance (RTH)
  • Thevenin voltage (VTH) = Norton current (IN) times resistance (RN)

Engineers applying Norton’s Theorem also draw on related concepts such as equivalent resistance and impedance to analyze circuits accurately.
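The three conversion relations listed above translate directly into a pair of helper functions. The 10 V / 5 Ω Thevenin values are arbitrary examples:

```python
# Source transformation between Thevenin and Norton forms, following the
# three relations listed above. Values are made-up examples.

def thevenin_to_norton(Vth, Rth):
    """IN = VTH / RTH; the resistance carries over unchanged."""
    return Vth / Rth, Rth        # (IN, RN)

def norton_to_thevenin(IN, RN):
    """VTH = IN * RN; the resistance carries over unchanged."""
    return IN * RN, RN           # (VTH, RTH)

IN, RN = thevenin_to_norton(Vth=10.0, Rth=5.0)
print(f"Norton: IN = {IN} A in parallel with RN = {RN} ohms")

Vth, Rth = norton_to_thevenin(IN, RN)
print(f"Back to Thevenin: VTH = {Vth} V in series with RTH = {Rth} ohms")
```

Round-tripping through both conversions returns the original values, reflecting that the two equivalents describe the same external behavior.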

 

Real-World Example

Suppose you need to know the current flowing through a sensor in a larger industrial power distribution board. The network supplying the sensor includes many resistors, switches, and sources. Applying Norton's Theorem, you can remove the sensor and find:

  1. The short-circuit current across its terminals (Norton current)
  2. The combined resistance left in the circuit (Norton resistance)

Once you reconnect the sensor and know its resistance, you can easily analyze how much current it will receive, or how it will affect circuit performance under different conditions.

For a deeper understanding, exploring electricity and magnetism reveals how fundamental laws, such as Faraday’s Law and Ampere’s Law, support the theory behind circuit transformations.

 

Applications of Norton's Theorem

  • Power system analysis: Used by utility engineers to study how changes in distribution, like maintenance or faults, impact circuit behavior.

  • Electronic device design: Common in transistors, op-amps, and other components to simplify input and output circuit analysis.

  • Fault diagnosis and protection: Helps quickly estimate fault currents for setting up protective devices in grids.

  • Education: Essential in electrical engineering curricula to develop problem-solving skills.

 

Limitations of Norton's Theorem

While powerful, Norton's Theorem is limited to linear circuits and cannot be directly applied to circuits with non-linear components (such as diodes or transistors in their non-linear regions). Additionally, it is only applicable between two terminals of a network; for systems with more terminals, additional techniques are required.

Norton's Theorem remains a valuable tool for engineers and students, offering clarity and efficiency in analyzing complex circuits. By transforming intricate arrangements into simple source-resistor pairs, it enables faster design iterations, troubleshooting, and optimized system performance. Whether you're analyzing a power distribution panel or designing integrated circuits, understanding and applying Norton's Theorem is an essential skill in the electrical field.

 

Related Articles

 

View more

Wattmeters – Power Measurement

Wattmeters measure electrical power in watts, monitoring energy use in industrial power systems. They provide accurate active power readings for efficiency and load management, utilizing voltage and current measurements to achieve precise results.

 

What are Wattmeters?

Wattmeters are instruments used to measure electrical power. They:

✅ Measure active electrical power in watts for various applications.

✅ Are used in industrial, commercial, and residential energy monitoring.

✅ Help optimize efficiency, manage loads, and ensure system safety.

A wattmeter measures instantaneous (or short-term) electrical power in watts, while a watthour meter accumulates that power over time and reports energy used (e.g. in kWh). Energy meters and smart meters extend this concept by recording consumption continuously for billing, load analysis, and energy audits.

 

Working Principle of Wattmeters

Electrical power is calculated using the formula:

P = E × I

Where:

  • P = Power in watts

  • E = Voltage in volts

  • I = Current in amperes

In DC circuits, power in watts is simply the product of volts and amperes. In AC circuits, wattmeters measure true (or active) power, taking into account the power factor to compensate for phase differences between voltage and current. Unlike reactive power (measured in var or kvar) or apparent power (measured in VA or kVA), active power is the usable portion that does real work. This relationship is often represented in the power triangle, where vector analysis explains how apparent, reactive, and active power interact.
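The power-triangle relationship can be worked through numerically. The 240 V, 10 A, 0.8 power-factor operating point below is a made-up example:

```python
import math

# Power triangle: apparent power (VA) splits into active power (W) and
# reactive power (var) via the power factor. Values are illustrative only.

def power_triangle(V, I, power_factor):
    S = V * I                                   # apparent power, VA
    P = S * power_factor                        # active (true) power, W
    Q = S * math.sin(math.acos(power_factor))   # reactive power, var
    return S, P, Q

S, P, Q = power_triangle(240.0, 10.0, 0.8)
print(f"Apparent: {S:.0f} VA, Active: {P:.0f} W, Reactive: {Q:.0f} var")
```

At a 0.8 power factor, only 1,920 W of the 2,400 VA drawn does real work; the remaining 1,440 var circulates as reactive power, which is exactly the distinction a wattmeter resolves.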

 

Construction and Internal Components

A typical wattmeter consists of two main coil assemblies:

  1. Current Coil (CC)

    • Heavy-gauge copper wire with low resistance.

    • Connected in series with the load to carry the circuit current.

  2. Voltage Coil (VC)

    • Fine-gauge wire with high resistance.

    • Connected in parallel with the load to measure voltage.

The electrodynamometer, commonly referred to as a dynamometer wattmeter, is a classic analog device that operates on the principle of a motor. The interaction between the magnetic fields of the current and voltage coils produces a torque proportional to the power, causing the pointer to move over a calibrated scale. Understanding wattmeter principles is a foundation of basic electricity training, helping learners connect theory to practical power measurement.

 


 

Figure 1 – Construction of a dynamometer wattmeter showing current and voltage coil arrangement.

 

Types of Wattmeters

  • Analog/Dynamometer – Durable, reliable, suited for laboratory and field measurements.

  • Digital – Higher accuracy, data logging, and integration with monitoring systems.

  • Clamp-on  – Measure power without breaking the circuit, ideal for quick diagnostics.

  • Specialized  – Designed for RF power, audio power, or other niche applications.

In three-phase systems, wattmeters are often applied in accordance with Blondel’s theorem, which specifies the number of measurement elements required in multi-phase circuits. They are frequently used in conjunction with 3 phase electricity concepts to ensure balanced load distribution and optimal system efficiency.


 

Fig. 2. Power can be measured with a voltmeter and an ammeter.

 

Measuring Power in DC and AC Circuits

In DC circuits, power measurement can be as simple as multiplying voltage and current readings from separate meters.

Example:

If a circuit operates at 117 V DC and draws 1 A, the power is:

P = 117 × 1 = 117 W

In AC systems, especially with reactive or distorted loads, a wattmeter is essential because voltage and current may not be in phase. The device automatically accounts for the phase angle, providing accurate true power readings. Advanced digital wattmeters also compensate for harmonic distortion and poor waveform quality, providing more reliable measurements than older analog designs.
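The difference between multiplying separate meter readings and using a wattmeter can be quantified. The 30° phase angle below is an assumed value for illustration; the 117 V / 1 A figures echo the DC example above:

```python
import math

# In AC, the product of separate voltmeter and ammeter readings gives
# apparent power; a wattmeter reports true power V*I*cos(phase).
# The 30-degree phase angle is a hypothetical illustration.

V, I = 117.0, 1.0
phase_deg = 30.0

apparent = V * I
true_power = V * I * math.cos(math.radians(phase_deg))

print(f"V x I product:   {apparent:.1f} VA")
print(f"Wattmeter reads: {true_power:.1f} W")
```

Here the naive product overstates delivered power by roughly 15%, which is why a wattmeter (or power factor correction of the reading) is essential for reactive loads.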

Wattmeters also relate to other measurement instruments such as ammeters, voltmeters, and multimeters, which capture the supporting parameters needed for complete electrical analysis. Accurate wattmeter readings are crucial for diagnosing performance issues in 3-phase power networks, where the relationships between voltage and current are critical. They also help illustrate fundamental laws of electromagnetism, such as Ampère’s Law, which underpins the interaction between current and magnetic fields.

 


 

Practical Examples and Load Considerations

A household iron may consume 1000 W, drawing 8.55 A at 117 V.

A large heater may draw 2000 W, or 17.1 A, potentially overloading a 15 A breaker.

In industrial settings, watt meters help prevent equipment overloading, reduce downtime, and improve energy efficiency.
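A quick breaker-loading check for the appliance figures above (1000 W iron, 2000 W heater, 117 V supply, 15 A breaker), assuming purely resistive loads:

```python
# Current draw and breaker check for the appliance examples above,
# assuming purely resistive loads (I = P / V).

def current_draw(power_w, voltage_v):
    return power_w / voltage_v

BREAKER_A = 15.0
for name, watts in [("iron", 1000.0), ("heater", 2000.0)]:
    amps = current_draw(watts, 117.0)
    status = "OK" if amps <= BREAKER_A else "OVERLOAD"
    print(f"{name}: {amps:.2f} A -> {status}")
```

The iron's 8.55 A fits comfortably on a 15 A circuit, while the heater's 17.1 A exceeds it, matching the overload warning in the text.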

 

Modern Wattmeter Applications

Today’s wattmeters are often part of smart energy monitoring systems that:

  • Track energy consumption over time.

  • Integrate with SCADA and IoT platforms.

  • Enable predictive maintenance through power trend analysis.

  • Support compliance with energy efficiency regulations.

 

Accuracy, Standards, and Advanced Considerations

Measurement accuracy is a crucial factor in determining wattmeter performance. Devices are often classified by a class of accuracy, with error limits defined by international standards such as IEC, ANSI, or IEEE. Regular calibration and testing procedures ensure watt meters continue to deliver reliable results in both laboratory and field conditions.

Modern digital watt meters feature true RMS measurement, which accurately captures distorted waveforms caused by nonlinear loads. This is especially important in power systems where harmonic distortion is present. In commercial and industrial environments, accurate wattmeter data support energy audits, load analysis, and regulatory compliance, making them indispensable tools for engineers and facility managers. Wattmeter usage is closely linked to the fundamentals of electrical energy, enabling precise monitoring for efficiency and cost control.

 

Key Advantages of Wattmeters

  • Accurate real-time power measurement.

  • Enhanced energy management and cost savings.

  • Improved system reliability through overload prevention.

  • Compatibility with both AC and DC systems.

Wattmeters remain a vital tool for measuring and managing electrical power. Whether in a simple residential circuit, a commercial energy audit, or a high-tech industrial monitoring system, they ensure that electrical systems run efficiently, safely, and cost-effectively. As technology advances, digital and networked wattmeters continue to expand their role, integrating into smart grids and energy-optimized infrastructures. 

 

Related Articles

 

View more

Electrical Resistance Explained

Electrical resistance is the opposition to the flow of electric current in a material. It is measured in ohms (Ω) and depends on the conductor’s length, thickness, material, and temperature.

 

What is Electrical Resistance?

Electrical resistance is a fundamental concept in engineering that defines how much a material opposes the flow of electric current. Measured in ohms (Ω), resistance plays a crucial role in circuit design, power distribution, and electronic applications.

✅ Measured in ohms (Ω) and calculated using Ohm’s Law

✅ Influenced by material, length, area, and temperature

✅ Key factor in circuit safety, design, and energy loss

 

Think of electricity moving like water through a pipe. If the pipe is narrow or obstructed, less water flows through it. Similarly, in a wire or conductor, certain materials make it harder for electrons to move freely. This obstruction results in energy loss, often seen as heat.

The ease or difficulty of electric charge movement depends on the conductivity of a material. Metals like copper allow current to flow easily, while rubber or glass inhibit it entirely. This behavior plays a key role in how systems are designed and protected. Discover how resistors are used in circuits to manage voltage and protect components by providing controlled resistance.

 

Electrical Resistance – Example Values by Material/Component

Material/Component            | Approx. Resistance          | Notes
Copper wire (1 m, 1 mm²)      | ~0.017 Ω                    | Very low resistance, ideal for conductors
Aluminum wire (1 m, 1 mm²)    | ~0.028 Ω                    | Higher resistance than copper
Iron wire (1 m, 1 mm²)        | ~0.10 Ω                     | Often used in heating elements
Nichrome wire (1 m, 1 mm²)    | ~1.10 Ω                     | High-resistance alloy used in toasters and heaters
Human body (dry skin)         | 1,000–100,000 Ω             | Varies greatly with moisture and contact
Incandescent light bulb       | ~240 Ω (cold)               | Resistance increases when hot
Resistor (carbon film)        | Fixed (e.g., 220 Ω)         | Used to control current in circuits
Air (dry)                     | ~1 trillion Ω               | Excellent natural insulator unless ionized
Superconductor                | 0 Ω                         | Only at extremely low temperatures (near absolute zero)

 

Electrical Resistance Definition

Several factors affect electrical resistance: the type of material, its temperature, and the dimensions of the conductor. When an electric charge moves through a material, its ease of flow depends on the material’s conductivity; a high-conductivity material allows charges to move more freely, resulting in lower resistance. Resistance increases with a conductor’s length and decreases with its cross-sectional area, so the resistance of a wire is directly related to both its physical dimensions and the material from which it is made, as outlined in our resistance formula breakdown.

This opposing property is quantified using Ohm’s Law:

R = V / I

Where:

  • R is the resistive value in ohms

  • V is voltage (volts)

  • I is current (amperes)

Another useful expression involves material properties:

R = ρ × (L / A)

Where:

  • ρ is resistivity (material-specific)

  • L is length

  • A is cross-sectional area

These formulas show that the longer or thinner the conductor, the harder it is for current to move through it.
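The second formula can be checked against the table's copper-wire row. The resistivity constant below is the widely published handbook value for copper (~1.68 × 10⁻⁸ Ω·m):

```python
# Resistance of a wire from R = rho * L / A, using the handbook resistivity
# of copper. Dimensions match the table's "1 m, 1 mm^2" copper row.

def wire_resistance(resistivity, length_m, area_mm2):
    area_m2 = area_mm2 * 1e-6        # convert mm^2 to m^2
    return resistivity * length_m / area_m2

RHO_COPPER = 1.68e-8  # ohm-metre, standard handbook value

R = wire_resistance(RHO_COPPER, length_m=1.0, area_mm2=1.0)
print(f"1 m of 1 mm^2 copper: {R * 1000:.1f} milliohms")
```

The result, about 0.017 Ω, agrees with the table above, and doubling the length or halving the area doubles the resistance, as the formula predicts.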

 

Unit of Electrical Resistance – The Ohm (Ω)

The ohm is the SI unit of resistance, named after German physicist Georg Ohm. One ohm is defined as the resistance between two points of a conductor when a potential difference of one volt causes a current of one ampere to flow.

Common multiples:

  • kΩ (kilo-ohm) = 1,000 ohms

  • MΩ (mega-ohm) = 1,000,000 ohms

Resistance can be measured using a multimeter, and is especially important in designing and troubleshooting power and electronic circuits. To understand how voltage and resistance interact in a circuit, see our guide on Ohm’s Law.

 

Ohm’s Law and Circuit Function

Ohm’s Law helps us understand how voltage, current, and resistance relate. For example:

  • Increase the resistive load, and current drops.

  • Increase voltage with fixed resistance, and current rises.

These principles help control energy flow, prevent overloads, and design efficient systems.
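The two scenarios above can be run through Ohm's law, I = V / R. The 12 V / 6 Ω baseline is an arbitrary example:

```python
# Ohm's law I = V / R applied to the two scenarios above.
# The 12 V / 6 ohm baseline values are made-up illustrations.

def current(V, R):
    return V / R

base = current(V=12.0, R=6.0)
print(f"Baseline: {base} A")
print(f"Double the resistance: {current(12.0, 12.0)} A  (current drops)")
print(f"Double the voltage:    {current(24.0, 6.0)} A  (current rises)")
```

Doubling resistance halves the current, and doubling voltage doubles it, the inverse and direct proportionality the bullets describe.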

 

Measuring and Expressing Opposition

The ohm (Ω) is the standard unit used to quantify this phenomenon. One ohm means that a current of one ampere flows when one volt is applied. Components with fixed values, like resistors, are labelled accordingly—e.g., 100 Ω, 1 kΩ, or 1 MΩ.

To measure the current-limiting capacity of a material, a digital multimeter is used. It applies a small voltage and calculates the resulting current flow to determine the opposition level. If you're working with different wire types, explore the unit of electrical resistance for conversion insights and resistance ranges.

 

Real-World Examples of Resistance

  • Heating Elements: Toasters, ovens, and electric heaters utilize high-resistance materials, such as nichrome wire.

  • Power Transmission: Long-distance wires are designed with low resistance to reduce energy loss as heat.

  • Electronic Components: Resistors regulate current in circuits, protecting components from overload.

For real-world scenarios involving current flow, our article on voltage drop explains how resistance affects electrical efficiency over distance.

 

Factors Affecting Electrical Resistance

The resistance of a conductor depends on:

  • Material – copper vs. aluminum vs. nichrome

  • Length – longer wires restrict current more

  • Thickness – wider wires allow easier flow

  • Temperature – many materials resist current more when heated

Thus, the resistance of a wire can vary dramatically depending on where and how it’s used. Materials with high conductivity (like silver or copper) allow electrons to move with minimal restriction, whereas poor conductors like rubber greatly hinder charge movement.

 

Superconductors – Zero Resistance?

In some materials, when cooled to extremely low temperatures, resistance drops to zero. These superconductors enable electricity to flow without energy loss, but their use is limited to specialized fields, such as MRI machines or experimental power lines, due to cost and cooling requirements.

 

Frequently Asked Questions

 

What causes electrical resistance?

It results from collisions between electrons and atoms in a conductor, which convert energy into heat.

 

What is the formula for calculating it?

 R = V/I or R = ρ × (L / A)

 

How is it measured?

With a multimeter in ohms (Ω), using a small test voltage and measuring current. Learn how instruments like a digital multimeter are used to measure opposition to current flow in electrical systems.

 

Why is this concept important?

It controls current flow, prevents damage, and enables functions like heating or dimming.

 

Can resistance ever be zero?

Yes—in superconductors under specific extreme conditions.

Electrical resistance is a foundational concept in understanding how electricity behaves in materials and systems. From household wiring to high-voltage power lines and sensitive electronics, it plays a crucial role in determining safety, efficiency, and performance. For a broader view on electric flow and material response, read about electrical conductivity and current electricity.

 

Related Articles

 

View more

Understanding Current

Current is the flow of electric charge in circuits, defined by amperage, driven by voltage, limited by resistance, described by Ohm’s law, and fundamental to AC/DC power systems, loads, conductors, and electronic components.

 

What Is Current?

Current is charge flow in a circuit, measured in amperes and governed by voltage and resistance.

✅ Measured in amperes; sensed with ammeters and shunts

✅ Defined by Ohm’s law: I = V/R in linear resistive circuits

✅ AC alternates; DC is steady; sets power transfer P = V*I

 

Current is best described as a flow of charge: charge in motion. Electrons in motion make up an electric current, usually referred to simply as “current” or “current flow,” no matter how many electrons are moving. Current is a measure of the rate at which charge flows through some region of space or a conductor. The moving charges are the free electrons found in conductors such as copper, silver, aluminum, and gold. The term “free electron” describes a condition in some atoms where the outer electrons are loosely bound to their parent atom; these loosely bound electrons can easily be set in motion in a given direction when an external source, such as a battery, is applied to the circuit. The electrons are attracted to the positive terminal of the battery, while the negative terminal is the source of the electrons. The greater the amount of charge moving through the conductor in a given amount of time, the greater the current. For a concise overview of how moving charges create practical circuits, see this guide to current electricity for additional context.


 

The Système International unit for current is the ampere (A), where

1 A = 1 C / 1 s

That is, 1 ampere (A) of current is equivalent to 1 coulomb (C) of charge passing through a conductor in 1 second (s). One coulomb of charge equals about 6.24 billion billion electrons (6.24 × 10¹⁸). The symbol used to indicate current in formulas or on schematics is the capital letter “I.” To explore the formal definition, standards, and measurement practices, consult this explanation of the ampere for deeper detail.

When current flows in one direction, it is called direct current (DC). Later in the text, we will discuss the form of current that periodically oscillates back and forth within the circuit; the present discussion is concerned only with direct current. If you are working with batteries or electronic devices, you will encounter direct current (DC) in most basic circuits.

The velocity of the charge is actually an average velocity called the drift velocity. To understand drift velocity, think of a conductor in which the charge carriers are free electrons. These electrons are always in a state of random motion, similar to that of gas molecules. When a voltage is applied across the conductor, an electromotive force creates an electric field within the conductor and a current is established. The electrons do not move in a straight line but undergo repeated collisions with nearby atoms. These collisions usually knock other free electrons from their atoms, and these electrons drift toward the positive end of the conductor with an average velocity, the drift velocity, which is relatively slow. To understand the nearly instantaneous effect of current, it is helpful to visualize a long tube filled with steel balls, as shown in Figure 10-37: a ball introduced at one end of the tube, which represents the conductor, immediately causes a ball to be emitted at the opposite end. Thus, electric current can be viewed as instantaneous, even though it is the result of a relatively slow drift of electrons. For foundational concepts that connect drift velocity with circuit behavior, review this basic electricity primer to reinforce the fundamentals.

Current is also a physical quantity that can be measured and expressed numerically in amperes. Electric current can be compared to the flow of water in a pipe: it is measured as the rate at which charge flows past a certain point in a circuit. Current in a circuit can be determined if the quantity of charge “Q” passing through a cross-section of a wire in a time “t” can be measured. The current is simply the ratio of the quantity of charge to time. Understanding current and charge flow also clarifies how circuits deliver electrical energy to perform useful work.
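That ratio of charge to time can be computed directly. The 3 C / 2 s figures below are arbitrary examples; the electrons-per-coulomb constant is the approximate value given earlier:

```python
# Current as charge per unit time, I = Q / t. One ampere is one coulomb
# per second. The 3 C / 2 s values are made-up examples.

ELECTRONS_PER_COULOMB = 6.24e18  # approximate

def current_amps(charge_c, time_s):
    return charge_c / time_s

I = current_amps(charge_c=3.0, time_s=2.0)
print(f"3 C in 2 s -> {I} A")
print(f"Electrons passing per second at that current: {I * ELECTRONS_PER_COULOMB:.2e}")
```

Even this modest 1.5 A current corresponds to roughly 10¹⁹ electrons drifting past the cross-section every second, which is why the slow drift velocity still moves enormous amounts of charge.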

 


 

Electrical current is essentially electric charge in motion. It can take the form of a sudden discharge of static electricity, such as a lightning bolt or a spark between your finger and a grounded light-switch plate. More commonly, though, when we speak of current, we mean the more controlled form of electricity from generators, batteries, solar cells, or fuel cells. A helpful overview of static, current, and related phenomena is available in this summary of electricity types for quick reference.

We can think of the flow of electrons in a wire as the flow of water in a pipe, except in this case, the pipe of water is always full. If the valve on the pipe is opened at one end to let water into the pipe, one doesn't have to wait for that water to make its way all the way to the other end of the pipe. We get water out the other end almost instantaneously because the incoming water pushes the water that's already in the pipe toward the end. This is what happens in the case of electrical current in a wire. The conduction electrons are already present in the wire; we just need to start pushing electrons in one end, and they start flowing at the other end instantly. In household power systems, that push on conduction electrons alternates in direction as alternating current (AC) drives the flow with a time-varying voltage.


Current Formula

Current is the rate of flow of charge, carried by negatively charged particles called electrons, through a given cross-sectional area in a conductor. The flow of electrons in an electric circuit is what establishes current:

I = q / t

q = electric charge (C)

t = time (s)

1 A = 1 C/s

Current is often measured in milliamps, mA (1 mA = 0.001 A).
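The ratio of charge to time can be computed directly. A minimal Python sketch, with illustrative charge and time values:

```python
# Current as the ratio of charge to elapsed time: I = q / t.
# The example values below are illustrative, not from the article.

def current_amps(charge_coulombs, time_seconds):
    """Current in amperes (1 A = 1 C/s)."""
    return charge_coulombs / time_seconds

# 0.12 C of charge passing a cross-section in 60 s
i = current_amps(0.12, 60)
print(f"{i * 1000:.1f} mA")  # prints "2.0 mA"
```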


Electricity Supply And Demand Balance

Electricity supply covers generation, transmission, distribution, grid infrastructure, voltage regulation, frequency control, power quality, protection, SCADA, and load management to ensure reliable energy delivery to industrial, commercial, and residential loads.

 

What Is Electricity Supply?

Electricity supply is generation, transmission and distribution of power with set voltage, frequency and reliability.

✅ Involves generation, HV transmission, MV/LV distribution networks

✅ Ensures voltage regulation, frequency control, and power quality

✅ Uses SCADA, protection relays, and load forecasting for reliability

 

Electricity supply is a complex, balanced system of electric power generation and real-time customer demand. Production (supply) and consumption (demand) dictate electricity pricing in the United States and Canada. For a regional view, the analysis at Electricity Demand in Canada highlights how seasonal peaks and resource availability shape prices.

Where does the term "electricity supply" originate? How does that supply move from one point to another? These are the most important questions to ask when you want to understand the electric power industry.

If you're new to the vocabulary, the concise glossary at Electricity Terms can clarify definitions used throughout the industry.

The first thing to know is that electric power is generated in the United States and Canada in power plants, which house electrical generators. Then, power is transported (transmission and distribution) through the power grid to the customer. This complex network of transmission lines delivers power to industrial, commercial, institutional, and residential customers. For a step-by-step overview of system operations, Electricity: How It Works explains generation, transmission, and distribution in practical detail.

In the electricity industry, transmission and distribution wires transport power to satisfy demand, including real-time peak demand; matching the two is the job of the electricity market. The natural gas and fossil fuel industries work in much the same way. Transmission lines run from generating stations to substations, sometimes over great distances, as in British Columbia and Manitoba, where generation is in the far north and consumption is in the south. At the substation, voltage is reduced for local consumption; substations are usually located close to where the electricity is consumed.

For background on core power concepts, the primer at Electricity Power connects voltage, current, and load to real-world grid behavior.

The price of electricity depends on the electricity supply mix and the energy efficiency of customers. Electricity energy supply is usually measured in terawatt-hours.

The system design is three-phase alternating current generation and distribution, pioneered by Nikola Tesla in the 19th century. He considered 60 Hz the best frequency for alternating current (AC) power generation, and he preferred 240 V, which was claimed to be better for long supply lines. Thomas Edison developed direct current (DC) systems at 110 V, which was claimed to be safer. For more information about the early battles between proponents of AC and DC supply systems, see War of Currents. For foundational fundamentals beyond this history, the overview at What Is Electricity clarifies the principles common to both AC and DC systems.

The German company AEG built the first European generating facility to run at 50 Hz, allegedly because the number 60 did not fit into the numerical unit sequence of 1, 2, 5…. At that time, AEG had a virtual monopoly, and its standard spread to the rest of the continent. In Britain, differing frequencies (including 25 Hz, 40 Hz, and DC) proliferated, and the 50 Hz standard was established only after World War II.

To see how frequency standards interact with generation and end-use performance, the explainer at How Electricity Works ties design choices to everyday operation.

Originally much of Europe was 110 V too, just like the Japanese and US systems today. Voltage was later increased to deliver more electrical power with reduced energy loss and voltage drop from the same copper wire diameter.

The choice of utilization voltage is governed more by tradition than by optimization of the distribution system. In theory, a 240 V distribution system uses less conductor material to deliver a given quantity of power. Incandescent lamps for 120 V systems are more efficient and rugged than 240 V lamps, while large heating appliances can use smaller conductors at 240 V for the same output rating. Practically speaking, few household appliances use anything like the full capacity of the outlet to which they are connected. Minimum wire sizes for hand-held or portable equipment are usually set by the mechanical strength of the conductors. Both 240 V countries and 120 V countries have extensive penetration of electrical appliances in homes. National electrical codes prescribe wiring methods intended to minimize the risk of electric shock or fire. For household applications, home electricity basics show how these voltage considerations affect outlets, circuits, and safety practices.
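The conductor-material argument follows from the resistive-loss relation P_loss = I²R: for the same delivered power, doubling the voltage halves the current and cuts the loss in the wire to a quarter. A short sketch, assuming an illustrative load and wire resistance (neither appears in the text):

```python
# Resistive line loss for the same delivered power at two supply voltages.
# The 1.5 kW load and 0.5 ohm round-trip wire resistance are illustrative
# assumptions for comparison only.

def line_loss(power_watts, voltage_volts, wire_resistance_ohms):
    """Power dissipated in the wiring: I^2 * R, with I = P / V."""
    current = power_watts / voltage_volts
    return current ** 2 * wire_resistance_ohms

P, R = 1500, 0.5
loss_120 = line_loss(P, 120, R)   # 12.5 A flowing
loss_240 = line_loss(P, 240, R)   # 6.25 A flowing
print(f"120 V: {loss_120:.1f} W lost, 240 V: {loss_240:.1f} W lost")
```

The 240 V case wastes one quarter of the power lost at 120 V in the same wire, which is why higher voltage permits thinner conductors for a given load.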

Areas using approximately 120 V allow different combinations of voltage, suitable for use by a variety of classes of electrical equipment.

 


Capacitance Explained

Capacitance: Understanding the Ability to Store Electricity

Capacitance is an essential concept in electrical circuits: it describes the ability of a capacitor to store electrical energy. Capacitors are electronic components used in many circuits to perform functions such as filtering, timing, and power conversion, and capacitance plays a crucial role in the design and operation of those circuits. This article provides an overview of capacitance, including its definition, its SI unit, and the difference between a capacitor and capacitance.

 

What is Capacitance?

Capacitance is the ability of a capacitor to store electrical charge. A capacitor consists of two conductive plates separated by a dielectric material. The conductive plates are connected to an electrical circuit, and the dielectric material is placed between them to prevent direct contact. When a voltage source is applied to the plates, electrical charge builds up on the surface of the plates. The amount of charge that a capacitor can store is determined by its capacitance, which depends on the size and distance between the plates, as well as the dielectric constant of the material.

The energy storing capability of a capacitor is based on its capacitance. This means that a capacitor with a higher capacitance can store more energy than a capacitor with a lower capacitance. The energy stored in a capacitor is given by the formula:

Energy (Joules) = 0.5 x Capacitance (Farads) x Voltage^2

The ability to store energy is essential for many applications, including filtering, timing, and power conversion. Capacitors are commonly used in DC circuits to smooth out voltage fluctuations and prevent noise. They are also used in AC circuits to filter out high-frequency signals.
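The stored-energy formula above can be checked with a short computation; the 470 µF capacitor and 12 V charging voltage below are illustrative values, not from the article:

```python
# Energy stored in a capacitor: E = 0.5 * C * V^2.
# Component values are illustrative assumptions.

def capacitor_energy(capacitance_farads, voltage_volts):
    """Stored energy in joules."""
    return 0.5 * capacitance_farads * voltage_volts ** 2

# A 470 uF electrolytic capacitor charged to 12 V
e = capacitor_energy(470e-6, 12)
print(f"{e * 1000:.2f} mJ")  # prints "33.84 mJ"
```

Note the square on voltage: doubling the charging voltage quadruples the stored energy, while doubling the capacitance only doubles it.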

 

What is Capacitance and the SI Unit of Capacitance?

Capacitance is defined as the ratio of the electrical charge stored on a capacitor to the voltage applied to it. The SI unit of capacitance is the Farad (F), which is defined as the amount of capacitance that stores one coulomb of electrical charge when a voltage of one volt is applied. One Farad is a relatively large unit of capacitance, and most capacitors have values that are much smaller. Therefore, capacitors are often measured in microfarads (µF) or picofarads (pF).

The capacitance of a capacitor depends on several factors, including the distance between the plates, the surface area of the plates, and the dielectric constant of the material between the plates. The dielectric constant is a measure of the ability of the material to store electrical energy, and it affects the capacitance of the capacitor. The higher the dielectric constant of the material, the higher the capacitance of the capacitor.
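The geometric factors listed above combine in the standard parallel-plate formula C = ε₀ εᵣ A / d. A minimal sketch, assuming illustrative plate dimensions and dielectric constant (none of these values appear in the article):

```python
# Parallel-plate capacitance: C = eps0 * eps_r * A / d, showing how plate
# area, separation, and dielectric constant set the capacitance.
# Plate dimensions and dielectric constant are illustrative assumptions.

EPS0 = 8.854e-12  # vacuum permittivity, farads per metre

def parallel_plate_capacitance(area_m2, separation_m, eps_r=1.0):
    """Capacitance in farads of an ideal parallel-plate capacitor."""
    return EPS0 * eps_r * area_m2 / separation_m

# 1 cm x 1 cm plates, 0.1 mm apart, with a ceramic dielectric (eps_r ~ 6)
c = parallel_plate_capacitance(1e-4, 1e-4, eps_r=6)
print(f"{c * 1e12:.1f} pF")
```

The result lands in the picofarad range, which is why practical capacitors reach microfarads only by using very thin dielectrics, large rolled-up plate areas, or high-permittivity materials.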

 

What is the Difference Between Capacitor and Capacitance?

Capacitor and capacitance are related concepts but are not the same thing. A capacitor is an electronic component: two conductive plates separated by a dielectric material, designed to store electrical charge. Capacitance is a property of that capacitor: it determines the amount of electrical charge the capacitor can store. Capacitance is measured in Farads, though practical capacitors are typically rated in smaller units such as microfarads (µF) or picofarads (pF).

 

What is an Example of Capacitance?

One example of capacitance is a common electronic component known as an electrolytic capacitor. These capacitors are used in a wide range of electronic circuits to store electrical energy, filter out noise, and regulate voltage. They consist of two conductive plates separated by a dielectric material, which is usually an electrolyte. The electrolyte allows for a high capacitance, which means that these capacitors can store a large amount of electrical energy.

Another example of capacitance is the human body. Although the capacitance of the human body is relatively small, it can still store a significant amount of electrical charge. This is why people can sometimes feel a shock when they touch a grounded object, such as a metal doorknob or a handrail. The capacitance of the human body is affected by several factors, including the size and shape of the body, as well as the material and proximity of the objects it comes into contact with.
