Electrical Resistance Explained


Electrical Resistance

Electrical resistance is the opposition to the flow of electric current in a material. It is measured in ohms (Ω) and depends on the conductor’s length, thickness, material, and temperature.

 

What is Electrical Resistance?

Electrical resistance is a fundamental concept in engineering that defines how much a material opposes the flow of electric current. Measured in ohms (Ω), resistance plays a crucial role in circuit design, power distribution, and electronic applications.

✅ Measured in ohms (Ω) and calculated using Ohm’s Law

✅ Influenced by material, length, area, and temperature

✅ Key factor in circuit safety, design, and energy loss

 

Think of electricity moving like water through a pipe. If the pipe is narrow or obstructed, less water flows through it. Similarly, in a wire or conductor, certain materials make it harder for electrons to move freely. This obstruction results in energy loss, often seen as heat.

The ease or difficulty of electric charge movement depends on the conductivity of a material. Metals like copper allow current to flow easily, while rubber or glass inhibit it entirely. This behavior plays a key role in how systems are designed and protected. Discover how resistors are used in circuits to manage voltage and protect components by providing controlled resistance.

 

Electrical Resistance – Example Values by Material/Component

Material/Component | Approx. Resistance | Notes
Copper wire (1 m, 1 mm²) | ~0.017 ohms | Very low resistance, ideal for conductors
Aluminum wire (1 m, 1 mm²) | ~0.028 ohms | Higher resistance than copper
Iron wire (1 m, 1 mm²) | ~0.10 ohms | Often used in heating elements
Nichrome wire (1 m, 1 mm²) | ~1.10 ohms | High-resistance alloy used in toasters and heaters
Human body (dry skin) | 1,000–100,000 ohms | Varies greatly with moisture and contact
Incandescent light bulb | ~240 ohms (hot filament) | Much lower when cold; resistance rises as the filament heats
Resistor (carbon film) | Fixed (e.g., 220 ohms) | Used to control current in circuits
Air (dry) | ~1 trillion ohms (insulator) | Excellent natural insulator unless ionized
Superconductor | 0 ohms | Only at extremely low temperatures (near absolute zero)

 

Electrical Resistance Definition

Several factors affect electrical resistance, including the type of material, temperature, and the dimensions of the conductor. When an electric charge moves through a material, its ease of flow depends on the material’s conductivity: a high-conductivity material allows charges to move more freely, resulting in lower resistance. The resistance of a conductor increases with its length and decreases with its cross-sectional area, so the resistance of a wire is determined by both its physical dimensions and the material from which it is made, as outlined in our resistance formula breakdown.

This opposing property is quantified using Ohm’s Law:

R = V / I

Where:

  • R is the resistive value in ohms

  • V is voltage (volts)

  • I is current (amperes)

Another useful expression involves material properties:

R = ρ × (L / A)

Where:

  • ρ is resistivity (material-specific)

  • L is length

  • A is cross-sectional area

These formulas show that the longer or thinner the conductor, the harder it is for current to move through it.
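For readers who prefer a concrete calculation, here is a minimal sketch in Python that applies both formulas; the copper resistivity used is a typical textbook value assumed for illustration, not a figure from this article.

```python
# Minimal sketch: resistance of a wire from material properties, then current
# from Ohm's law. The copper resistivity (~1.7e-8 ohm*m) is a typical textbook
# value, assumed here for illustration.

def wire_resistance(resistivity_ohm_m: float, length_m: float, area_m2: float) -> float:
    """R = rho * L / A"""
    return resistivity_ohm_m * length_m / area_m2

r_copper = wire_resistance(1.7e-8, length_m=1.0, area_m2=1e-6)  # 1 m of 1 mm^2 wire
print(f"Copper wire: {r_copper:.3f} ohms")  # ~0.017 ohms, matching the table above

current = 12.0 / 240.0                       # Ohm's law: I = V / R
print(f"Current through 240 ohms at 12 V: {current:.3f} A")  # 0.05 A
```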

 

Unit of Electrical Resistance – The Ohm (Ω)

The ohm is the SI unit of resistance, named after German physicist Georg Ohm. One ohm is defined as the resistance between two points of a conductor when a potential difference of one volt causes a current of one ampere to flow.

Common multiples:

  • kΩ (kilo-ohm) = 1,000 ohms

  • MΩ (mega-ohm) = 1,000,000 ohms

Resistance can be measured using a multimeter, and is especially important in designing and troubleshooting power and electronic circuits. To understand how voltage and resistance interact in a circuit, see our guide on Ohm’s Law.

 

Ohm’s Law and Circuit Function

Ohm’s Law helps us understand how voltage, current, and resistance relate. For example:

  • Increase the resistive load, and current drops.

  • Increase voltage with fixed resistance, and current rises.

These principles help control energy flow, prevent overloads, and design efficient systems.
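A quick numerical check of those two statements, using I = V / R with illustrative values:

```python
# Quick check of the two statements above, using I = V / R (illustrative values).
V, R = 12.0, 10.0
print(V / R)        # 1.2 A baseline
print(V / (2 * R))  # 0.6 A: doubling the resistive load halves the current
print((2 * V) / R)  # 2.4 A: doubling the voltage at fixed resistance doubles the current
```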

 

Measuring and Expressing Opposition

The ohm (Ω) is the standard unit used to quantify this phenomenon. One ohm means that a current of one ampere flows when one volt is applied. Components with fixed values, like resistors, are labelled accordingly—e.g., 100 Ω, 1 kΩ, or 1 MΩ.

To measure the current-limiting capacity of a material, a digital multimeter is used. It applies a small voltage and calculates the resulting current flow to determine the opposition level. If you're working with different wire types, explore the unit of electrical resistance for conversion insights and resistance ranges.

 

Real-World Examples of Resistance

  • Heating Elements: Toasters, ovens, and electric heaters utilize high-resistance materials, such as nichrome wire.

  • Power Transmission: Long-distance wires are designed with low resistance to reduce energy loss as heat.

  • Electronic Components: Resistors regulate current in circuits, protecting components from overload.

For real-world scenarios involving current flow, our article on voltage drop explains how resistance affects electrical efficiency over distance.

 

Factors Affecting Electrical Resistance

The resistance of a conductor depends on:

  • Material – copper vs. aluminum vs. nichrome

  • Length – longer wires restrict current more

  • Thickness – wider wires allow easier flow

  • Temperature – many materials resist current more when heated (a simple model is sketched below)

Thus, the resistance of a wire can vary dramatically depending on where and how it’s used. Materials with high conductivity (like silver or copper) allow electrons to move with minimal restriction, whereas poor conductors like rubber greatly hinder charge movement.
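The temperature effect noted above can be approximated with the common first-order model R(T) ≈ R₀(1 + α(T − T₀)); the sketch below uses a typical textbook coefficient for copper, which is an assumption rather than a figure from this article.

```python
# First-order temperature model for a metallic conductor:
# R(T) = R0 * (1 + alpha * (T - T0)).
# alpha ~ 0.0039 per degree C for copper is a typical textbook value (assumed).

def resistance_at_temp(r0_ohms: float, alpha_per_c: float, t_c: float, t0_c: float = 20.0) -> float:
    return r0_ohms * (1 + alpha_per_c * (t_c - t0_c))

r_20 = 0.017  # 1 m of 1 mm^2 copper wire at 20 degrees C, from the table above
print(resistance_at_temp(r_20, 0.0039, 75.0))  # resistance rises as the wire heats up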

 

Superconductors – Zero Resistance?

In some materials, when cooled to extremely low temperatures, resistance drops to zero. These superconductors enable electricity to flow without energy loss, but their use is limited to specialized fields, such as MRI machines or experimental power lines, due to cost and cooling requirements.

 

Frequently Asked Questions

 

What causes electrical resistance?

It results from collisions between electrons and atoms in a conductor, which convert energy into heat.

 

What is the formula for calculating it?

 R = V/I or R = ρ × (L / A)

 

How is it measured?

With a multimeter in ohms (Ω), using a small test voltage and measuring current. Learn how instruments like a digital multimeter are used to measure opposition to current flow in electrical systems.

 

Why is this concept important?

It controls current flow, prevents damage, and enables functions like heating or dimming.

 

Can resistance ever be zero?

Yes—in superconductors under specific extreme conditions.

Electrical resistance is a foundational concept in understanding how electricity behaves in materials and systems. From household wiring to high-voltage power lines and sensitive electronics, it plays a crucial role in determining safety, efficiency, and performance. For a broader view on electric flow and material response, read about electrical conductivity and current electricity.

 


Prospective Fault Current Meaning Explained

Prospective fault current (PFC) is the highest electric current that can flow in a system during a short circuit. It helps determine equipment ratings, breaker capacity, and safety measures in electrical installations to prevent overheating, fire, or component failure.

 

What is the Meaning of Prospective Fault Current?

Prospective fault current refers to the maximum current expected during a short circuit at any point in an electrical system.

✅ Helps size circuit breakers and fuses for safe disconnection

✅ Ensures compliance with installation and safety codes

✅ Prevents equipment damage from excessive short-circuit current

Prospective fault current (PFC) is a key factor in the safety and design of electrical systems. It represents the maximum current that could flow in the event of a fault, such as a short circuit. Understanding PFC is essential for selecting protective devices that can handle fault conditions safely. This article explores what PFC is, how it is measured, and its importance for electrical installations, while addressing key questions. Understanding electrical short circuits is key to calculating prospective fault current and ensuring system safety.

When measuring prospective short-circuit current in an electrical system, it is essential to perform tests between L1-N-CPC and L2-N-CPC to assess the fault current across the different phases and protective conductors. These measurements help identify the maximum prospective fault current present in the system, especially at points involving live conductors, whether testing a single-phase supply or between line conductors on a three-phase supply. Proper testing protocols must be followed: technicians should always use insulated test leads rated for the expected voltage and current levels, and should refer to the test meter manufacturer’s instructions for safe and accurate operation. Reliable results ensure that the protective devices can safely interrupt fault conditions, preventing system damage and ensuring compliance with fault current protection standards.

 

Frequently Asked Questions

Why is it Important?

Prospective fault current refers to the maximum current that could pass through a system during a fault. The PFC helps determine the breaking capacity of fuses and circuit breakers, ensuring these protective devices can handle high currents safely. This is vital for protecting the electrical installation and those working near it.

Understanding PFC is critical for ensuring increased safety for employees and third parties. Protective devices must be selected to handle PFC; otherwise, they may fail to operate correctly, leading to severe consequences, such as fires or injuries. To fully grasp how PFC affects energy flow, it’s useful to review the concept of electrical resistance in a circuit.

 

How is Prospective Fault Current Measured or Calculated?

PFC can be measured or calculated using tools such as a multifunction tester, often during fault current testing. The instrument measures the maximum potential current at various points in the installation, testing line to neutral on a single-phase supply or between line conductors on a three-phase supply. Testing often involves checking readings between L1-N-CPC, L2-N-CPC, and L3-N-CPC, which capture the fault paths from each line to neutral and to the circuit protective conductor in a three-phase system.

When performing these tests, technicians should follow regulation 612.11, which covers measurement on a single-phase supply or between line conductors on a three-phase supply, ensuring that line, neutral, and circuit protective conductors are all connected correctly. Accurate testing must also account for maximum current flow. Live testing requires extreme caution, and it is important to refer to the test meter manufacturer’s instructions to ensure proper usage and safety. In three-phase systems, 3-phase electricity significantly impacts how fault current behaves during a short circuit.
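As a rough illustration of the calculation side, a single-phase PFC estimate can be taken as the supply voltage divided by the measured loop impedance; the figures below are hypothetical and do not replace testing to the meter manufacturer’s instructions and the applicable regulations.

```python
# Hypothetical single-phase estimate: prospective fault current from the supply
# voltage and a measured fault-loop impedance. Real measurements must follow
# the test meter manufacturer's instructions and the applicable regulations.

def prospective_fault_current_a(u0_volts: float, loop_impedance_ohms: float) -> float:
    return u0_volts / loop_impedance_ohms

print(prospective_fault_current_a(230.0, 0.35))  # ~657 A for a 230 V supply, Z = 0.35 ohm
```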

 

What is the difference between PFC and Short-Circuit Current?

Though often confused, prospective fault current and short-circuit current are distinct. Prospective fault current is the theoretical maximum current that could flow in a fault, used to predict the worst-case scenario for selecting protective devices. Short-circuit current refers to the actual current that flows during a fault, which depends on real-time conditions such as circuit impedance. Prospective fault current is one of the many concepts that form the foundation of electricity fundamentals.

 

How Does Prospective Fault Current Impact the Selection of Protective Devices?

The calculation of PFC plays a critical role in selecting the correct protective devices. Circuit breakers and fuses must have a breaking capacity that matches or exceeds the prospective fault current in the system. If the PFC exceeds the breaking capacity, the protective device may fail, leading to dangerous electrical hazards.

For instance, fault current testing using a multifunction tester between phases and neutral (L1, L2, L3) ensures that protective devices are rated to handle the highest potential fault current in the system. Proper circuit protection ensures that the system can interrupt faults safely, minimizing the risks to workers and equipment.
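A minimal selection check, with hypothetical figures, showing the rule that breaking capacity must meet or exceed the measured PFC:

```python
# Selection rule of thumb from the paragraph above: breaking capacity >= PFC.
measured_pfc_amps = 6_570        # hypothetical worst-case reading near the origin
breaking_capacity_amps = 10_000  # hypothetical device rating (10 kA)
assert breaking_capacity_amps >= measured_pfc_amps, "Device under-rated for this PFC"
```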

 

What Standards and Regulations Govern Prospective Fault Current Calculations?

Various standards, such as IEC 60909, govern how PFC is calculated and how protective devices are selected. These regulations ensure that electrical systems are designed to handle maximum fault conditions safely. Regulation 612.11 further specifies how live testing should be conducted using proper equipment and safety protocols.

It is essential to test PFC at relevant points in the system and follow testing standards to ensure compliance and safety. Devices selected based on PFC calculations help ensure that electrical systems can withstand faults and maintain reliable operation.

Prospective fault current is a crucial element in the safety and reliability of electrical installations. By calculating PFC, engineers can select protective devices that ensure safe operation in the event of a fault. Testing for fault currents at different points in the system and adhering to regulations are essential steps in preventing hazardous conditions.

By choosing protective devices with the appropriate breaking capacity and following safe testing practices, electrical installations can handle fault conditions and protect both workers and equipment from harm. Selecting protective devices that match the PFC is essential for reliable electric power systems design.

 


Types of Resistors

Types of resistors are essential in electronics, as they control current, reduce voltage, and ensure circuit stability. From fixed and variable resistors to specialized forms like thermistors and photoresistors, each type plays a critical role in electrical engineering.

 

What are the Various Types of Resistors?

Types of resistors describe categories of electronic components designed to limit current and manage voltage in circuits.

✅ Fixed resistors ensure stable resistance values

✅ Variable resistors adjust resistance as needed

✅ Specialized resistors respond to heat or light

 

There are two main types of resistors: fixed and variable. Fixed resistors maintain a constant value, while variable resistors can be adjusted to suit changing requirements. Within these groups are many designs crafted from various materials, each selected for its precision, stability, and suitability in specific applications. When studying how different resistor types perform, it helps to review Ohm’s Law Formula, which explains the mathematical relationship between voltage, current, and resistance.

In addition to resistors, other passive components, such as capacitors, are equally important for controlling current flow and stabilizing electrical circuits.

 

Common Types of Resistors

Type | Key Feature | Typical Use
Carbon Film | Low cost, general use | Everyday electronics
Metal Film | High accuracy, low noise | Precision circuits
Metal Oxide | Surge resistant | Power supplies
Wire-Wound | High power handling | Industrial applications
Surface-Mount (SMD) | Compact, space-saving | Modern circuit boards
Potentiometer | Adjustable resistance | Volume/tuning controls
Thermistor | Temperature-sensitive | Sensors, protection devices

 

Fixed Types of Resistors

 

Carbon Film Resistors

Carbon film resistors are inexpensive, reliable, and widely used in general-purpose electronics. By applying a thin carbon layer to a ceramic base, they provide a stable resistance for everyday circuits. While not as precise as modern designs, they remain popular where high accuracy isn’t required.

 

Carbon Composition Resistors

Once the standard in consumer electronics, carbon composition resistors are now less common, but they still serve in circuits that need high pulse load capacity. They offer durability in the face of sudden surges, although their long-term stability and accuracy are lower than those of newer types.

 

Metal Film Resistors

Metal film resistors, made by depositing a nickel-chromium alloy onto a ceramic substrate, are prized for their precision. With excellent stability, low noise, and tight tolerances, they are widely chosen for high-frequency and sensitive applications. However, they cannot dissipate as much power as wire-wound resistors and are vulnerable to strong surges.

 

Metal Oxide Resistors

Using oxides like tin or ruthenium on ceramic, metal oxide resistors are known for their robustness. They combine good tolerance with the ability to withstand high voltages, making them ideal where surge protection is essential. While not as accurate as metal film resistors, they offer higher reliability in demanding conditions.

 

Wire-Wound Resistors

Built from coiled metal wire wrapped around a ceramic or fibreglass core, wire-wound resistors excel in high-power applications. Their ability to handle large currents and high temperatures makes them indispensable in heavy-duty circuits. Their main drawback is bulk, and at high frequencies, their inductance can interfere with performance.

 

Metal Strip (Foil) Resistors

Metal strip or foil resistors deliver the highest accuracy and stability, with tolerances as fine as 0.005%. Their precision makes them the component of choice in measurement instruments and high-end electronics. Their primary downsides are cost and limited power dissipation, which restrict their use in everyday applications.

 

Thick and Thin Film Resistors

Produced by different deposition techniques, thick and thin film resistors serve specialized roles. Thick film designs are durable and suited to power electronics and automotive systems, while thin film types offer high accuracy and stability in precision or high-frequency circuits.

 

Surface-Mount Resistors (SMDs)

Surface-mount resistors are compact components soldered directly onto printed circuit boards. They make modern electronics smaller and more efficient, and although tiny, they cover a wide range of resistance values to support everything from consumer devices to industrial controls.

 

Variable Types of Resistors

Engineers often use practical examples to compare the unit of electrical resistance with how resistors function in series and parallel arrangements.
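For instance, a short sketch of combining resistor values in series and parallel (illustrative values only):

```python
# Series and parallel combinations of resistor values, in ohms (illustrative).
def series(*r: float) -> float:
    return sum(r)

def parallel(*r: float) -> float:
    return 1 / sum(1 / x for x in r)

print(series(100, 220, 330))              # 650 ohms
print(parallel(100, 100))                 # 50 ohms
print(round(parallel(220, 330, 470), 1))  # ~103.1 ohms
```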

Potentiometers

Potentiometers are adjustable resistors that allow manual control over current or voltage. They are common in volume dials, tuning controls, and adjustable circuits where user input is required.

 

Light-Dependent Resistors (LDRs)

LDRs change resistance with varying light levels, making them useful in light sensors, alarms, and automatic lighting systems.

 

Thermistors

Thermistors alter resistance with temperature. Positive temperature coefficient (PTC) types increase resistance as they heat, protecting circuits from over-current, while negative temperature coefficient (NTC) types reduce resistance with rising temperature, making them useful for sensing and regulation.
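As an illustration of NTC behaviour, the widely used beta-parameter model can be sketched as follows; the 10 kΩ and 3950 K figures are typical datasheet values assumed here, not taken from this article.

```python
import math

# Beta-parameter model for an NTC thermistor: R(T) = R25 * exp(B * (1/T - 1/298.15)),
# with T in kelvin. R25 = 10 kOhm and B = 3950 K are typical datasheet values,
# assumed here for illustration.

def ntc_resistance_ohms(temp_c: float, r25_ohms: float = 10_000.0, beta_k: float = 3950.0) -> float:
    t_k = temp_c + 273.15
    return r25_ohms * math.exp(beta_k * (1 / t_k - 1 / 298.15))

print(ntc_resistance_ohms(25.0))  # ~10 kOhm at 25 degrees C
print(ntc_resistance_ohms(60.0))  # resistance falls as temperature rises (NTC behaviour)
```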

Resistors may be simple components, but their diversity makes them essential to every circuit. Whether precision, power handling, or responsiveness to environmental changes is needed, there is a resistor designed for the task. Selecting the right one ensures accuracy, stability, and safety in electronic design.

 


Watt’s Law - Power Triangle

Watt’s Law defines the relationship between power (watts), voltage (volts), and current (amps): Power = Voltage × Current. It’s used in electrical calculations to determine energy usage, system efficiency, and safe equipment ratings in both residential and industrial systems.

 

What is: Watt’s Law?

Watt’s Law is a fundamental principle in electrical engineering:

✅ Calculates electrical power as the product of voltage and current

✅ Helps design efficient and safe electrical systems

✅ Used in both residential and industrial applications

Watt’s Law is a fundamental principle in electrical engineering that defines the relationship between power, voltage, and current in an electrical circuit. The law is named after James Watt, after whom the unit of power is also named. It states that the power (measured in watts) of an electrical device is equal to the product of the voltage (measured in volts) and the current (measured in amperes) flowing through it. In other words, the Watt’s Law formula is expressed as: Power = Voltage × Current. This simple equation is essential for understanding how electrical components consume and distribute energy in a circuit.

For example, consider a light bulb connected to an electrical circuit. The electrical potential (voltage) pushes the electric charge through the filament of the bulb, creating a flow of electrons (current). As the electrons flow, they generate heat and light, representing the bulb’s power in a circuit. By knowing the voltage and current, you can easily calculate the power output of the bulb. The wattage of the bulb indicates the energy consumed per second.

Practical applications of this formula are vast. This equation is especially useful in designing safe and efficient electrical systems. For instance, designing the wiring for both small devices and large power systems requires a thorough understanding of the relationship between voltage, current, and power. The formula helps ensure that systems are capable of delivering the required energy without causing failures or inefficiencies.

Ohm’s Law and this principle are often used together in electrical engineering. While power focuses on the relationship between voltage and current, Ohm’s Law deals with the relationship between voltage, current, and resistance (measured in ohms). Ohm’s Law states that voltage equals current multiplied by resistance (Voltage = Current × Resistance). By combining Ohm’s Law and this power equation, you can analyze an electrical system more comprehensively. For example, if you know the voltage and resistance in a circuit, you can calculate the current and then determine the power in the circuit. To fully understand Watt's Law, it helps to explore how voltage and current electricity interact in a typical electrical circuit.
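A short worked example of combining the two laws, using the light-bulb figures above as illustrative values:

```python
# Combining the two laws with illustrative figures (a 120 V supply and the
# ~240-ohm hot resistance of an incandescent bulb filament):
voltage = 120.0     # volts
resistance = 240.0  # ohms

current = voltage / resistance  # Ohm's Law: I = V / R -> 0.5 A
power = voltage * current       # Watt's Law: P = V * I -> 60 W
print(current, power)
```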

 

Georg Simon Ohm – German physicist and mathematician (1787–1854), known for Ohm's Law, relating voltage, current, and resistance.

 

What is Watt's Law and how is it used in electrical circuits?

Watt’s Law is a fundamental principle in electrical engineering that defines the relationship between power, voltage, and current in an electrical circuit. The formula is expressed as:

Power (Watts) = Voltage (Volts) × Current (Amperes)

In simpler terms, Watt’s Law states that the electrical power consumed by a device (measured in watts) is the product of the electrical potential difference (voltage) and the current flowing through the circuit. Accurate calculations using Watt’s Law often require a voltage-drop calculator to account for line losses in long-distance wiring. Comparing voltage drop and voltage sag conditions illustrates how slight changes in voltage can have a substantial impact on power output.

 

James Watt – Scottish inventor and mechanical engineer (1736–1819), whose improvements to the steam engine led to the naming of the watt (unit of power).

 

How is it used? Watt’s Law is widely used to determine the amount of power an electrical device or system consumes. This is especially important for designing electrical circuits, optimizing power distribution, and ensuring the efficiency of devices. Here are a few examples of how it’s applied:

  • Electrical Circuit Design: Engineers use it to calculate the power consumption of devices and ensure that circuits can handle the expected electrical load. This helps prevent overloads and ensures that the wiring is safe.

  • Power Output Calculations: Using this formula, you can calculate the power output of a generator, appliance, or device, enabling you to match the right components to your system's requirements.

  • Energy Efficiency: Understanding power consumption in appliances and devices helps consumers make informed choices, such as selecting energy-efficient options. Devices like wattmeters and watthour meters measure power and energy usage based directly on the principles of Watt’s Law. For a deeper look at how devices like ammeters help measure current, see how their readings plug directly into Watt’s Law calculations.

 

How is Watt's Law different from Ohm's Law?

Watt’s Law and Ohm’s Law are both fundamental principles in electrical engineering, but they deal with different aspects of electrical systems:

  • Watt’s Law defines the relationship between power, voltage, and current. It focuses on the amount of energy used by a device in a given circuit. The formula is:

           Power = Voltage × Current

  • Ohm’s Law defines the relationship between voltage, current, and resistance in a circuit. Ohm’s Law explains how the current is affected by the voltage and the resistance present in the circuit. The formula for Ohm’s Law is:

            Voltage = Current × Resistance

 

Key Differences:

  • Focus: Watt’s Law focuses on power, while Ohm’s Law focuses on the flow of electricity in a circuit, particularly how resistance affects current.

  • Purpose: Watt’s Law is used to determine the amount of power a device is consuming. Ohm’s Law, on the other hand, is used to calculate current, voltage, or resistance in a circuit depending on the other known variables.

  • Applications: Watt’s Law is applied when designing systems that require power management, such as calculating the power output or efficiency of devices. Ohm’s Law is used more in analyzing how current behaves in a circuit when different resistive elements are present.

By combining both laws, electrical engineers can gain a comprehensive understanding of how electrical systems function, ensuring that devices operate efficiently and safely. When used with Ohm’s Law, Watt's Law enables engineers to analyze both energy consumption and electrical resistance.

One key area of application is in energy consumption. By understanding the voltage and current values for a specific device, engineers can monitor the amount of energy the device consumes. This is especially important for managing energy usage in homes, businesses, and power systems. By applying the formula, you can identify inefficient devices and make more informed decisions about energy efficiency.
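As a simple illustration of monitoring consumption, energy in kilowatt-hours is just power multiplied by time (hypothetical figures):

```python
# Energy use from power and time (hypothetical figures): a 60 W device running
# 5 hours a day for 30 days.
power_w = 60.0
hours = 5 * 30
energy_kwh = power_w * hours / 1000.0
print(f"{energy_kwh:.1f} kWh")  # 9.0 kWh
```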

In renewable energy systems, such as solar panels and wind turbines, this principle plays a critical role in optimizing energy output. Engineers use the formula to calculate how much electrical energy is being generated and distributed. This is crucial for ensuring that power systems operate efficiently and minimize excess energy loss.

Another practical application of this formula is in the automotive industry. It is used to design vehicle charging systems and battery technologies. For example, electric vehicle (EV) charging stations depend on understanding voltage, current, and power to ensure efficient charging times. Engineers use the equation to calculate the charging capacity required for EV batteries, helping to create optimal charging solutions.

In large facilities like data centers, the Watt’s Law formula is used to ensure power distribution is efficient. By applying the relationship between power, voltage, and current, engineers can effectively manage power systems, thereby reducing energy consumption and operational costs. Proper energy management in data centers is crucial, as high power usage can result in significant energy costs.

This power formula is indispensable for electrical engineers and technicians. The applications of Watt’s Law extend across various industries and are utilized in everything from designing power system wiring to developing renewable energy technologies. By combining Ohm’s Law and this principle, electrical engineers can optimize the performance of electrical components, ensuring energy efficiency and system reliability. Understanding the role of a resistor in a circuit can reveal how power is dissipated as heat, a key concept derived from Watt’s Law.

Finally, visual tools like the Watt's Law triangle are often used to simplify the application of this principle, helping both professionals and students understand how to apply the formula. As technology advances and energy demands grow, this formula remains a key element in electrical engineering, guiding the development of more efficient systems for the future.

 


Capacitance Explained

Capacitance: Understanding the Ability to Store Electricity

Capacitance is an essential concept in electrical circuits, and it describes the ability of a capacitor to store electrical energy. Capacitors are electronic components used in many circuits to perform various functions, such as filtering, timing, and power conversion. Capacitance is a measure of a capacitor's ability to store electrical energy, and it plays a crucial role in the design and operation of electrical circuits. This article provides an overview of capacitance, including its definition, SI unit, and the difference between capacitor and capacitance.

 

What is Capacitance?

Capacitance is the ability of a capacitor to store electrical charge. A capacitor consists of two conductive plates separated by a dielectric material. The conductive plates are connected to an electrical circuit, and the dielectric material is placed between them to prevent direct contact. When a voltage source is applied to the plates, electrical charge builds up on the surface of the plates. The amount of charge that a capacitor can store is determined by its capacitance, which depends on the size and distance between the plates, as well as the dielectric constant of the material.

The energy storing capability of a capacitor is based on its capacitance. This means that a capacitor with a higher capacitance can store more energy than a capacitor with a lower capacitance. The energy stored in a capacitor is given by the formula:

Energy (Joules) = 0.5 x Capacitance (Farads) x Voltage^2

The ability to store energy is essential for many applications, including filtering, timing, and power conversion. Capacitors are commonly used in DC circuits to smooth out voltage fluctuations and prevent noise. They are also used in AC circuits to filter out high-frequency signals.
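A one-function sketch of the energy formula above, with an illustrative capacitor value:

```python
# Energy stored in a capacitor, from the formula above: E = 0.5 * C * V^2.
def capacitor_energy_j(capacitance_f: float, voltage_v: float) -> float:
    return 0.5 * capacitance_f * voltage_v ** 2

print(capacitor_energy_j(470e-6, 12.0))  # a 470 uF capacitor at 12 V stores ~0.034 J
```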

 

What is Capacitance and the SI Unit of Capacitance?

Capacitance is defined as the ratio of the electrical charge stored on a capacitor to the voltage applied to it. The SI unit of capacitance is the Farad (F), which is defined as the amount of capacitance that stores one coulomb of electrical charge when a voltage of one volt is applied. One Farad is a relatively large unit of capacitance, and most capacitors have values that are much smaller. Therefore, capacitors are often measured in microfarads (µF) or picofarads (pF).

The capacitance of a capacitor depends on several factors, including the distance between the plates, the surface area of the plates, and the dielectric constant of the material between the plates. The dielectric constant is a measure of the ability of the material to store electrical energy, and it affects the capacitance of the capacitor. The higher the dielectric constant of the material, the higher the capacitance of the capacitor.
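Those factors combine in the standard parallel-plate formula C = ε₀εᵣA/d, which is not spelled out in this article but follows directly from them; here is a small sketch with illustrative dimensions.

```python
# Parallel-plate estimate tying together the factors above:
# C = epsilon_0 * epsilon_r * A / d (illustrative dimensions).
EPSILON_0 = 8.854e-12  # farads per metre

def parallel_plate_capacitance_f(rel_permittivity: float, area_m2: float, gap_m: float) -> float:
    return EPSILON_0 * rel_permittivity * area_m2 / gap_m

# 1 cm^2 plates, 0.1 mm apart, dielectric constant 4
print(parallel_plate_capacitance_f(4.0, 1e-4, 1e-4))  # ~3.5e-11 F, i.e. about 35 pF
```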

 

What is the Difference Between Capacitor and Capacitance?

Capacitor and capacitance are related concepts but are not the same thing. Capacitance is the ability of a capacitor to store electrical energy, while a capacitor is the electronic component that stores the charge. A capacitor consists of two conductive plates separated by a dielectric material, and capacitance is the property that determines how much electrical charge it can store. Capacitance is measured in farads (F), though practical capacitors are usually rated in much smaller units such as microfarads (µF) or picofarads (pF).

 

What is an Example of Capacitance?

One example of capacitance is a common electronic component known as an electrolytic capacitor. These capacitors are used in a wide range of electronic circuits to store electrical energy, filter out noise, and regulate voltage. They consist of two conductive plates separated by a dielectric material, which is usually an electrolyte. The electrolyte allows for a high capacitance, which means that these capacitors can store a large amount of electrical energy.

Another example of capacitance is the human body. Although the capacitance of the human body is relatively small, it can still store a significant amount of electrical charge. This is why people can sometimes feel a shock when they touch a grounded object, such as a metal doorknob or a handrail. The capacitance of the human body is affected by several factors, including the size and shape of the body, as well as the material and proximity of the objects it comes into contact with.


Electrical Units Explained

Electrical units measure various aspects of electricity, such as voltage (volts), current (amperes), resistance (ohms), and power (watts). These standard units are crucial in electrical engineering, circuit design, energy monitoring, and ensuring the safe operation of electrical systems.

 

What are Electrical Units?

Electrical units are standardized measures used to quantify electrical properties in circuits and systems.

✅ Measure voltage, current, resistance, power, and energy

✅ Used in electrical engineering, testing, and design

✅ Support safe and efficient electrical system operations

Electrical units are standardized measurements that describe various aspects of electricity, such as current, voltage, resistance, and power. These units, like amperes for current and volts for voltage, help quantify the behavior and interaction of systems. By understanding electrical units, professionals can assess performance, design circuits, and ensure safety across different applications. These electrical units play a crucial role in the functioning of everything from household appliances to industrial machinery, making them fundamental in engineering and everyday technology.

In common electricity systems, various electrical units of measure are used to describe how electricity flows in a circuit, from current and resistance to magnetic flux. For example, the unit of resistance is the ohm, while the unit of time is the second. These measurements, often based on SI units, also help define the phase angle, which describes the phase difference between current and voltage in AC circuits. Understanding these electrical units is critical for accurately analyzing performance in both residential and industrial applications, ensuring proper function and safety.

 

Ampere

The ampere is the unit of electric current in the SI, used by both scientists and technologists. From 1948 until the 2019 redefinition of the SI base units, the ampere was defined as the constant current that, if maintained in two straight, parallel conductors of infinite length and negligible circular cross-section, placed one metre apart in a vacuum, would produce between these conductors a force equal to 2 × 10^-7 newtons per metre of length; it is now defined by fixing the numerical value of the elementary charge. Named for the 19th-century French physicist André-Marie Ampère, it represents a flow of one coulomb of electricity per second. A flow of one ampere is produced in a resistance of one ohm by a potential difference of one volt. The ampere is the standard unit of electric current, playing a central role in the flow of electricity through electrical circuits.

 

Coulomb

The coulomb is the unit of electric charge in the metre-kilogram-second-ampere system, the basis of the SI system of physical electrical units. The coulomb is defined as the quantity of electricity transported in one second by a current of one ampere. It is named for the 18th–19th-century French physicist Charles-Augustin de Coulomb.

 

Electron Volt

A unit of energy commonly used in atomic and nuclear physics, the electron volt is equal to the energy gained by an electron (a particle carrying one unit of electronic charge) when the electrical potential at the electron increases by one volt. The electron volt equals 1.602 × 10^-19 joule, or 1.602 × 10^-12 erg. The abbreviation MeV indicates 10^6 (1,000,000) electron volts and GeV 10^9 (1,000,000,000). For those managing voltage drop in long circuits, we provide a helpful voltage drop calculator and related formulas to ensure system efficiency.

 

Faraday

The faraday (also known as the Faraday constant) is used in the study of electrochemical reactions and represents the amount of electric charge that liberates one gram equivalent of any ion from an electrolytic solution. It was named in honour of the 19th-century English scientist Michael Faraday and equals approximately 96,485 coulombs, the charge carried by one mole (about 6.022 × 10^23) of electrons, each carrying 1.602 × 10^-19 coulombs.

 

Henry

The henry is the unit of either self-inductance or mutual inductance, abbreviated H, and named for the American physicist Joseph Henry. One henry is the value of self-inductance in a closed circuit or coil in which one volt is produced by a variation of the inducing current of one ampere per second. One henry is also the value of the mutual inductance of two coils arranged such that an electromotive force of one volt is induced in one if the current in the other is changing at a rate of one ampere per second.

 

Ohm

The unit of resistance in the metre-kilogram-second system is the ohm, named in honour of the 19th-century German physicist Georg Simon Ohm. It is equal to the resistance of a circuit in which a potential difference of one volt produces a current of one ampere (1 ohm = 1 V/A); or, the resistance in which one watt of power is dissipated when one ampere flows through it. Ohm's law states that resistance equals the ratio of the potential difference to current, and the ohm, volt, and ampere are the respective fundamental electrical units used universally for expressing quantities. Impedance, the apparent resistance to an alternating current, and reactance, the part of impedance resulting from capacitance or inductance, are circuit characteristics that are measured in ohms. The acoustic ohm and the mechanical ohm are analogous units sometimes used in the study of acoustic and mechanical systems, respectively. Resistance, measured in ohms, determines how much a circuit resists current, as explained in our page on Ohm’s Law.

 

Siemens

The siemens (S) is the unit of conductance. In the case of direct current (DC), the conductance in siemens is the reciprocal of the resistance in ohms (S = amperes per volt); in the case of alternating current (AC), it is the reciprocal of the impedance in ohms. A former term for the reciprocal of the ohm is the mho (ohm spelled backward). It is disputed whether the unit was named after the German-born engineer-inventor Sir William Siemens (1823–83) or his brother, the engineer Werner von Siemens (1816–92).

 

Volt

The unit of electrical potential, potential difference, and electromotive force in the metre-kilogram-second system (SI) is the volt; it is equal to the difference in potential between two points in a conductor carrying one ampere of current when the power dissipated between the points is one watt. An equivalent is the potential difference across a resistance of one ohm when one ampere of current flows through it. The volt is named in honour of the 18th–19th-century Italian physicist Alessandro Volta. Ohm's law relates these electrical units, where resistance equals the ratio of potential difference to current, and the respective units of ohm, volt, and ampere are used universally for expressing electrical quantities. Energy consumption is measured in kWh, or kilowatt-hours. Explore how devices like ammeters and voltmeters are used to measure current and voltage across components. To better understand how voltage is measured and expressed in volts, see our guide on what is voltage.

 

Watt

The watt is the unit of power in the SI equal to one joule of work performed per second, or to 1/746 horsepower. An equivalent is the power dissipated in a conductor carrying one ampere of current between points at a one-volt potential difference. It is named in honour of James Watt, British engineer and inventor. One thousand watts equals one kilowatt. Most electrical devices are rated in watts. Learn how a watt defines power in electrical systems and its relationship to volts and amperes through Watts' Law.

 

Weber

The weber is the unit of magnetic flux in the SI, defined as the amount of flux that, linking a circuit of one turn (one loop of wire), produces in it an electromotive force of one volt as the flux is reduced to zero at a uniform rate in one second. It was named in honour of the 19th-century German physicist Wilhelm Eduard Weber and equals 10^8 maxwells, the unit used in the centimetre-gram-second system.


Windmills For Electricity Explained

Windmills for electricity use wind energy to generate clean, renewable power. These wind turbines convert kinetic energy into electrical energy, reducing carbon emissions and dependence on fossil fuels. 

 

What are Windmills for Electricity?

Windmills for electricity are modern devices that transform kinetic wind energy into electrical power.

✅ Harness renewable energy for clean power

✅ Reduce carbon footprint and dependence on fossil fuels

✅ Support sustainable power generation worldwide

Windmills for electricity are part of a broader shift toward renewable energy, providing clean alternatives to fossil fuels for homes, businesses, and utilities.

 

History of Windmills

Mankind has been harnessing the wind's energy for many years. From Holland to traditional farms around the world, windmills were used in the past to pump water through primitive irrigation systems or to grind grain. In those early mills, the wind turned large "sails" connected to a long vertical shaft, which was attached to a grinding machine or to a wheel that turned and drew water from a well. Today's turbines harness the energy of the wind to turn large metal blades, which in turn spin a generator that produces electric power. Alongside wind, other renewable energy sources like solar, biomass, and tidal energy are shaping a diversified and sustainable energy future.

From the mid-1970s to the mid-1980s, the United States government collaborated with industry to advance windmill technology for power generation and enable the development of large commercial wind turbines. NASA led this effort at the Lewis Research Center in Cleveland, Ohio, and it was an extraordinarily successful government research and development activity.

 

National Science Foundation

With funding from the National Science Foundation and later the Department of Energy (DOE), a total of 13 experimental wind turbines were put into operation, including four major wind turbine designs. This research and development program pioneered many of the multi-megawatt turbine technologies in use today, including steel tube towers, variable-speed generators, composite blade materials, partial-span pitch control, as well as aerodynamic, structural, and acoustic engineering design capabilities. The large wind turbines developed under this effort set several world records for diameter and power output. The Mod-2 wind turbine cluster produced a total of 7.5 megawatts of power in 1981. Government incentives, such as alternative energy tax credits, have played a major role in expanding wind power adoption across North America.

 

Wind Turbine Technology

In 1987, the Mod-5B was the largest single wind turbine operating in the world with a rotor diameter of nearly 100 meters and a rated power of 3.2 megawatts. It demonstrated an availability of 95 percent, an unparalleled level for a new first-unit wind turbine. The Mod-5B featured the first large-scale variable-speed drive train and a sectioned, two-blade rotor, which enabled easy transport of the blades.

During the 1980s, California provided tax rebates for ecologically harmless wind turbines. These rebates helped fund the first major deployment of wind power for the utility grid. The turbines gathered in large wind parks, such as at Altamont Pass, would be considered small and uneconomical by modern wind power development standards.

In the 1990s, as aesthetics and durability became more important, turbines were placed atop steel or reinforced concrete towers. Small generators are connected to the ground tower, and then the tower is raised into position. Larger generators are hoisted into position atop the tower, and a ladder or staircase is located inside the tower to allow technicians to reach and maintain the generator.

Originally, wind turbines were built right next to where their power was needed. With the availability of long-distance electric power transmission, wind generators are now often on wind farms in windy locations, and huge ones are being built offshore, sometimes transmitting power back to land using high-voltage submarine cable. Since wind turbines are a renewable means of generating power, they are being widely deployed, but their cost is often subsidized by taxpayers, either directly or through renewable energy credits. Much depends on the cost of alternative energy sources. The cost of wind generators per unit of power has been decreasing by about 4% per year.

 

Modern Wind Turbines

The most modern generations of windmills for electricity are more properly called wind turbines, or wind generators, and are primarily used to generate electric power. Modern windmills are designed to harness the energy of the wind and convert it into electric energy. The largest wind turbines can generate up to 6 MW of power (for comparison, a modern fossil fuel power plant generates between 500 and 1,300 MW). Many large-scale renewable energy projects now combine wind farms with solar and storage systems, ensuring reliable, clean power for communities worldwide.

Small wind turbines can generate as little as a few kilowatts, while larger models produce up to 100 kilowatts or more, depending on design and location. These devices capture moving air, and as wind turbines operate, the kinetic energy generated can be used directly or sent into the electrical grid. On a utility scale, wind farms combine many large turbines to deliver massive amounts of energy, powering thousands of homes and businesses. This range of applications, from residential to industrial, demonstrates the versatility of wind technology in meeting diverse energy needs.

 

