Achieve accurate measurement of large DC currents

Publisher: NanoScribe | Last updated: 2014-11-23 | Source: Internet

  While many instruments can accurately measure small DC currents (up to 3A), few can measure DC currents above 50A to better than 1% accuracy. This large-current range is typical of loads such as electric vehicles (EVs), grid energy storage, and photovoltaic (PV) renewable energy installations. In addition, these systems require accurate prediction of the state of charge (SOC) of the associated energy storage battery. SOC can be estimated from current and charge (coulomb counting) measurements, and accurate measurement data is a prerequisite for accurate SOC estimation.
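  As a simple illustration of coulomb counting for SOC estimation, the short Python sketch below subtracts accumulated charge from an assumed pack capacity; the capacity value and example figures are illustrative, not taken from any particular system.

```python
# Minimal coulomb-counting SOC sketch; BATTERY_CAPACITY_C and the example
# figures are assumed values for illustration only.
BATTERY_CAPACITY_C = 100 * 3600.0   # assumed 100 Ah pack, expressed in coulombs

def state_of_charge(initial_soc: float, net_discharged_c: float) -> float:
    """Return SOC (0..1) given the net charge removed from the pack."""
    soc = initial_soc - net_discharged_c / BATTERY_CAPACITY_C
    return max(0.0, min(1.0, soc))

# Starting full and removing 50 Ah (180,000 C) leaves about 50% SOC.
print(state_of_charge(1.0, 50 * 3600.0))
```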

  Generally speaking, any system used for current or charge measurement includes built-in data acquisition components such as appropriate amplifiers, filters, and analog-to-digital converters (ADCs). A current sensor detects the current, and its output is converted by a circuit into a usable form (i.e., a voltage). The signal is then filtered to reduce electromagnetic and radio-frequency interference, amplified, and digitized. Each current data sample is then multiplied by the appropriate time interval and accumulated (by digital calculation) to compute the charge value.
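  A minimal sketch of this per-sample processing might look like the Python fragment below, where each digitized sample is converted to a current, multiplied by its time interval, and accumulated into a charge total. The sensor gain and sampling interval are assumed placeholder values.

```python
# Per-sample coulomb counting: convert the ADC voltage to current, then
# accumulate current x time into charge. Constants are illustrative.
SENSOR_GAIN_V_PER_A = 0.004   # assumed sensor transfer function, volts per ampere
SAMPLE_INTERVAL_S = 0.001     # assumed time between samples

accumulated_charge_c = 0.0    # running charge total, in coulombs

def process_sample(adc_volts: float) -> float:
    """Convert one ADC reading to current and add its charge contribution."""
    global accumulated_charge_c
    current_a = adc_volts / SENSOR_GAIN_V_PER_A
    accumulated_charge_c += current_a * SAMPLE_INTERVAL_S
    return current_a
```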

  Alternatively, if digitizing is done at a constant frequency, the current samples can be accumulated first and multiplied by the appropriate time interval only when the accumulated charge value is read out or otherwise used. Consideration must also be given to choosing an appropriate minimum Nyquist sampling rate and placing a sufficiently narrow anti-aliasing filter before the analog-to-digital converter.
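  This alternative ordering can be sketched in the same way: with a constant sampling frequency the raw current samples are summed as they arrive, and the single multiplication by the (common) time interval is deferred until the charge value is read out. The class and names below are illustrative.

```python
# Accumulate-first, scale-at-readout charge counter (constant sample rate assumed).
class ChargeAccumulator:
    def __init__(self, sample_interval_s: float):
        self.sample_interval_s = sample_interval_s
        self.sample_sum_a = 0.0          # running sum of current samples, in amperes

    def add_sample(self, current_a: float) -> None:
        self.sample_sum_a += current_a   # no multiplication in the sample path

    def read_charge_c(self) -> float:
        # One multiplication at read-out instead of one per sample.
        return self.sample_sum_a * self.sample_interval_s
```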


  Figure 1: Signal chain in a typical modern current measurement system.

  Practical sensor technology for high current measurement

  Two sensor technologies are most common for measuring high currents. The first detects the magnetic field around a conductor carrying the current. The second measures the voltage drop across a resistor (often called a shunt) that carries the current (and charge) to be measured; this voltage drop follows Ohm's law (V = I × R).

  Devices used for high-current measurement are often called Hall effect current sensors. Such a sensor has a current-carrying element built into it. When a current and an external magnetic field are applied to the element, a potential difference (voltage) appears across the element, perpendicular both to the direction of the current and to the direction of the external magnetic field. The Hall voltage in ordinary metals is very small. It is worth noting that not all DC current sensors that measure the magnetic field around a current-carrying conductor are based on the Hall effect; the differences between them are briefly described below.

  High Current Hall Effect Sensors

  To make a current sensor with a Hall effect device, a magnetic core is needed to concentrate the magnetic field produced by the conductor current, and a gap is cut in the core to accommodate the actual Hall element. The relatively small gap (relative to the entire magnetic path length) produces a magnetic field that is nearly uniform and perpendicular to the plane of the Hall element. When the Hall element is driven by an excitation current, it generates a voltage proportional to both the excitation current and the magnetic field in the core. This Hall voltage is amplified and presented at the output of the current sensor.
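  The open-loop relationship described above can be summarized in a one-line sketch: the Hall output voltage scales with both the excitation current and the flux density in the core gap. The device constant used here is purely illustrative.

```python
# Illustrative open-loop Hall relationship: V_hall ~ k * I_excitation * B.
K_HALL = 0.1   # assumed device constant, volts per (ampere x tesla)

def hall_voltage_v(excitation_current_a: float, flux_density_t: float) -> float:
    return K_HALL * excitation_current_a * flux_density_t
```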


  Figure 2: Schematic diagram of the magnetic field around a conductor, a linear open-loop Hall effect sensor, and a closed-loop sensor.

  Since there is no electrical connection between the current-carrying conductor and the magnetic core (only the magnetic field is coupled), the sensor is effectively isolated from the circuit being measured. The current-carrying conductor may be at a high voltage, while the output of the Hall effect current sensor can safely be connected to a grounded circuit, or to a circuit at any potential relative to the current-carrying conductor. It is therefore relatively easy to provide clearance and creepage values that meet the most stringent safety standards.

  However, this linear sensor has some disadvantages. Perhaps the least important is that the Hall effect element requires a constant excitation current. In addition, the amplification and conditioning circuitry that processes the signal from the Hall effect element usually consumes significant power. Depending on the specific application, this power consumption may not matter; nevertheless, a Hall sensor that measures current continuously cannot be brought down to milliwatt-level power consumption.

  Hall effect sensors: large drift, narrow usable operating temperature range

  Because the output of a typical linear sensor is ratiometric (it depends not only on the strength of the magnetic field being measured, but also on the value of the excitation current), the stability of the excitation current strongly affects both the reported current amplitude and the zero offset when no current is flowing. Both of these in turn depend on the stability of the supply voltage and on temperature changes, because the resistance of the Hall element, which sets the excitation current and the Hall voltage itself, depends on the operating temperature.

  A sensor variant that measures the excitation current and accounts for it in the output is possible, but it requires precision external components and more elaborate processing circuitry. Moreover, the Hall voltage is a nonlinear function of the magnetic field being measured, which further increases the sensor's error.

  Because different errors occur under different conditions, most manufacturers of linear Hall effect devices (HEDs) break the total error down into many individual components, and it is sometimes difficult to calculate the total combined error.


  Closed loop current sensor

  To address the nonlinearity of the Hall sensor element, the industry has developed another technology that relies on detecting the presence or sign of the magnetic field in the sensing core rather than measuring the strength of the magnetic field. In addition, it can avoid measurement errors caused by unstable excitation current in the Hall element.

  This technique adds a winding to the core that produces a magnetic field equal in strength but opposite in sign to the magnetic field produced by the current being measured. The Hall element is now used only to detect the sign of the magnetic field, not its strength. The winding is driven by a circuit built around an op amp, which maintains a current in this compensation winding such that the net magnetic field sensed by the Hall element is zero. The current in the compensation winding is many times smaller than the current in the conductor being measured (often more than 1000 times); this ratio is set simply by the number of turns wound on the core, which can be controlled precisely.
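  A small sketch of how the closed-loop output is scaled back to the primary current: the compensation winding carries the primary current divided by the turns ratio, and that small current is read as a voltage across a sense resistor. The turns ratio and resistor value below are assumed examples.

```python
# Closed-loop sensor scaling sketch; TURNS_RATIO and R_SENSE_OHMS are assumed.
TURNS_RATIO = 2000       # e.g., 1 primary turn vs. 2000 compensation turns
R_SENSE_OHMS = 50.0      # resistor converting the compensation current to a voltage

def primary_current_a(sense_voltage_v: float) -> float:
    compensation_current_a = sense_voltage_v / R_SENSE_OHMS
    return compensation_current_a * TURNS_RATIO

# 1.25 V across the sense resistor -> 25 mA in the winding -> 50 A measured.
print(primary_current_a(1.25))
```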

  Due to the role of the compensation winding in the op amp feedback loop, this type of current sensor is often referred to as a "closed loop" sensor. In contrast, the simple linear Hall effect sensors described above are often considered "open loop" sensors to emphasize the absence of a feedback mechanism in their operation.

  In Hall-effect devices, the (offset) error in detecting zero magnetic field cannot be reduced to an arbitrarily small value due to various drifts, mostly due to temperature-dependent drifts. This is why some higher-performance current sensors use technologies that do not rely on the Hall effect. However, these sensors are generally still called Hall-effect sensors, simply because they are very similar to Hall-effect devices in appearance.

  Other magnetic field detectors

  Among non-Hall devices, sensors based on several physical phenomena can perform the function of a magnetic field detector. One technology is based on the magnetoresistance effect, in which the resistance of the sensing element changes when a magnetic field is applied to it.

  Another technology used in magnetic field detectors exploits the nonlinear relationship in ferrite materials between magnetic field strength (H) and magnetic flux density (B), in particular the phenomenon called saturation. As the H field increases, the magnetic flux density B eventually reaches a point beyond which it no longer increases significantly; this is the saturation point. Some specially formulated materials have very low saturation points and are widely used in devices called fluxgates.

  In effect, a fluxgate-based sensor converts a constant magnetic field into a "gated" or "chopped" magnetic field that alternates between full scale and almost zero. This varying magnetic field is easily picked up by a winding on the core and then amplified by an AC amplifier. Finally, a value proportional to the constant magnetic field being measured is recovered using so-called synchronous detection (possible because the circuit itself controls the chopping action).
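  A rough numerical sketch of synchronous detection is shown below: the chopped signal is multiplied by the chopping reference (here a ±1 square wave) and averaged, recovering the underlying DC value while interference at other frequencies averages out. The waveforms are synthetic and for illustration only.

```python
import math

def synchronous_detect(samples, reference):
    """Multiply the chopped signal by the +/-1 chopping reference and average."""
    return sum(s * r for s, r in zip(samples, reference)) / len(samples)

# Synthetic example: a DC level of 0.5 chopped by a square wave, plus interference.
n = 1000
reference = [1.0 if (i // 10) % 2 == 0 else -1.0 for i in range(n)]
chopped = [0.5 * r + 0.2 * math.sin(2 * math.pi * 0.013 * i) for i, r in enumerate(reference)]
print(synchronous_detect(chopped, reference))   # recovers roughly 0.5
```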

  It is worth noting that the mechanical structure and associated circuitry of such sensors are considerably more complex than those of closed-loop sensors. In addition, they are vulnerable to certain fault conditions: if current flows through the conductor while the sensor is unpowered, or while the compensation winding circuit is open (for example, because of a loose connection to the external sense resistor), the compensation winding cannot cancel the magnetic field from the measured current, the magnetic core can become permanently magnetized, and the offset and gain specifications are often irrecoverably degraded.

  Precision resistors are required

  The output signal of a closed-loop sensor is the current in the compensation winding, whose value is many times smaller than the current being measured. This current is usually converted into a voltage for further processing and digitization, which requires nothing more than a resistor.

  However, the accuracy and stability of this resistor directly affect the accuracy and stability of the closed-loop current sensor. A closed-loop sensor with a basic accuracy specified as 0.01% will quickly degrade to roughly 1% accuracy if a 1%-tolerance sense resistor is used.
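  The point can be illustrated with a simple worst-case error combination, in which the sensor's basic accuracy and the sense resistor's tolerance simply add; the figures are illustrative.

```python
# Worst-case (additive) combination of sensor accuracy and resistor tolerance.
def worst_case_error_pct(sensor_error_pct: float, resistor_tol_pct: float) -> float:
    return sensor_error_pct + resistor_tol_pct

print(worst_case_error_pct(0.01, 1.0))    # ~1.01%: a 1% resistor dominates the budget
print(worst_case_error_pct(0.01, 0.01))   # ~0.02%: a matching 0.01% resistor does not
```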

  But it is difficult to buy resistors with an accuracy better than 0.01% in commercial quantities, even if they only operate over a narrow temperature range.

  High current shunt

  As mentioned previously, the second current measurement technique uses the voltage drop across a resistor. When determining the current based on Ohm’s law, a unique set of factors need to be considered, depending on the current magnitude. For relatively small currents, the voltage drop across the shunt resistor can be made quite large to overcome any errors due to heat dissipation in the sense connections and shunt resistor or due to temperature differences in the operating environment. However, when currents exceed 50A, heat dissipation and thermoelectric errors are of primary importance. Also, since the shunt resistor is always heated by the current flowing through it and may be operating in an environment with unstable temperature, the stability of the shunt resistor value with respect to temperature is particularly important.

  Physical composition of a shunt

  At first glance, a shunt device is a simple resistor. Any conductive material with suitable volume resistivity, stability (over temperature and time), and a suitable mechanical form factor can be used as a shunt resistor. A low-precision shunt can simply be a length of wire, or a rectangular piece of a suitable alloy, soldered (or otherwise electrically connected) in series with the current-carrying conductor. However, it is almost impossible to insert such a shunt element into a measurement circuit without affecting its resistance value, because of variations in the amount of solder at the connection point or in the mechanical details of the connection.

  In addition, for stability reasons, it is very beneficial to arrange the shunt resistor so that the current density is largely uniform within any given cross-section. This prevents the formation of so-called hot spots, defined as areas within the shunt resistor that are hotter than the rest of the material. Beyond simple resistance changes, the elevated temperature at a hot spot may bring the resistive material to its annealing temperature, at which point the resistance value (established by careful control of the chemical composition and processing) may begin to change permanently.

  Even if the mere presence of hot spots did not affect accuracy, it would be impossible to ensure that they form in exactly the same places as when the shunt resistor was calibrated. Shunt resistor designs therefore include a means of distributing the current evenly across the cross-section of the resistive material, or among multiple parallel resistive sections and within each section.

  This is why most higher-precision shunt resistors are made of three distinct sections: two are the terminals that connect to the circuit (almost always made of thick, high-conductivity material such as copper), and the third, a single area or multiple parallel areas, makes up the bulk of the shunt resistor. The two terminal areas are joined to the resistive segment or segments by welding or other metallurgical processes that produce a very uniform seam.

  The resistive portion (also called the active portion) of a precision shunt must have a resistance with low temperature dependence. One of the most common alloys used for precision shunt resistors is manganin, developed in 1892 by Edward Weston (of electrochemical cell fame, the Weston cell), because of its suitable resistivity and low temperature coefficient of resistance (TCR).

  Heat dissipation in shunt resistors

  The amount of heat dissipated by a resistor is proportional to the square of the current and to the resistance (P = I² × R). For example, a 1mΩ shunt resistor with 50A flowing through it will dissipate 2.5W, a manageable value with a modest heat sink and still air. Conversely, the same shunt resistor carrying 1kA will dissipate 1kW of heat, which requires a physically large and possibly forced-air (or liquid) cooling arrangement.
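  The figures quoted above follow directly from the dissipation formula, as the short sketch below shows; heat grows with the square of the current for a fixed shunt resistance.

```python
# Shunt dissipation P = I^2 * R, reproducing the 50 A and 1 kA examples above.
def shunt_dissipation_w(current_a: float, resistance_ohms: float) -> float:
    return current_a ** 2 * resistance_ohms

print(shunt_dissipation_w(50, 0.001))     # 2.5 W at 50 A through 1 mOhm
print(shunt_dissipation_w(1000, 0.001))   # 1000 W (1 kW) at 1 kA through 1 mOhm
```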


  Figure 3: Relationship between heat dissipated in a shunt resistor and resistance and current.


  Figure 4: Heat dissipated in the shunt resistor versus full-scale output voltage and current.

  It should be clear from the graphs above that the only way to reduce the heat dissipated in a shunt resistor at a given current is to reduce its resistance. However, this also reduces the voltage measured across the shunt, making the signal more sensitive to errors induced in the shunt resistor and the sensing circuit and degrading accuracy at low currents.
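  The trade-off can be made concrete with a small sketch: for an assumed fixed front-end offset, a lower shunt resistance means a smaller signal voltage, so the same offset becomes a larger fraction of the reading. The 10µV offset used here is an assumption for illustration.

```python
# Fixed front-end offset expressed as a percentage of the shunt signal.
FRONT_END_OFFSET_V = 10e-6   # assumed combined amplifier/thermoelectric offset

def offset_error_pct(current_a: float, resistance_ohms: float) -> float:
    signal_v = current_a * resistance_ohms
    return 100.0 * FRONT_END_OFFSET_V / signal_v

print(offset_error_pct(5, 0.001))    # 0.2% of reading at 5 A with a 1 mOhm shunt
print(offset_error_pct(5, 0.0001))   # 2% of reading at 5 A with a 0.1 mOhm shunt
```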

  Error Sources in Shunt Measurement Methods

  High operating temperatures, and temperature differences across the shunt resistor, increase both gain and offset errors. In a shunt-based measurement system, not only the ambient temperature plays a role but also the measured current itself, since high currents heat the shunt resistor.

  Although the resistive (active) portion of the shunt element is made of a low-TCR material, a high operating temperature will inevitably cause the resistance to deviate from its calibrated value, however small the change. This produces a sensitivity (gain) error.

  Because different materials are used in the construction of a shunt resistor (the connecting terminals and sense leads are typically made of a different material than the resistive portion), so-called thermoelectric errors (such as the Seebeck effect) arise that contribute to offset error (reporting a current reading when the actual current is zero). Because the thermal behavior of a shunt resistor can be measured and characterized in a predictable manner, some shunt-based systems compensate for the thermal effects that cause offset and gain errors. In any case, when designing a shunt-based current measurement system such as that of Figure 1 (a typical signal chain for a modern current measurement system), the components must be carefully selected to minimize error and drift.
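  One possible form of such compensation is sketched below: if the shunt's gain (TCR) and thermoelectric offset drifts have been characterized against temperature, a measured shunt temperature can be used to correct each reading. All coefficients here are assumed, illustrative values.

```python
# Temperature compensation sketch for a shunt-based reading; coefficients are assumed.
TCR_PPM_PER_C = 20.0       # assumed resistance drift of the shunt alloy, ppm per degC
OFFSET_UV_PER_C = 0.5      # assumed thermoelectric offset drift, microvolts per degC
R_NOMINAL_OHMS = 0.001     # shunt value at the calibration temperature
T_CAL_C = 25.0             # calibration temperature

def compensated_current_a(shunt_voltage_v: float, shunt_temp_c: float) -> float:
    delta_t = shunt_temp_c - T_CAL_C
    r_actual = R_NOMINAL_OHMS * (1.0 + TCR_PPM_PER_C * 1e-6 * delta_t)
    v_corrected = shunt_voltage_v - OFFSET_UV_PER_C * 1e-6 * delta_t
    return v_corrected / r_actual
```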


  Choosing the right measurement method

  For measuring large DC currents, the most basic considerations are measurement accuracy and cost. Other important considerations include the operating environment (especially the temperature range), power consumption, size, and ruggedness (including possible overloads, transients, and unpowered operation). To judge the accuracy of any given method, all possible error sources must be considered under all relevant extreme operating conditions.

  Table 1: Comparison of current measurement methods.
