Fast Checks for Refrigerant Charge?
For A/C technicians, new options exist for testing refrigerant charge levels.
Air conditioning systems that aren’t correctly charged don’t work as efficiently or for as long as they otherwise would. Yet incorrectly charged A/C systems are more common than correctly charged ones. Tests of more than 4,000 residential cooling systems in California revealed that about 34% were undercharged, 28% were overcharged, and only 38% had the correct charge.
Why aren’t A/C technicians fixing this pervasive problem? Because, until recently, the tests that have been available to check charge levels have often been misunderstood or poorly applied. That could be changing.
Refrigerant undercharge and overcharge both reduce A/C performance. For example, laboratory testing of capillary tube-controlled equipment indicates that an undercharge of 15% reduces cooling equipment total capacity by 8%–22% and its energy efficiency ratio (EER) by 4%–6%. An overcharge of 10% reduces capacity by 1%–9% and EER by 4%–11%. Laboratory testing of orifice-controlled equipment reveals similar effects. An undercharge of 15% has been shown to reduce EER by 11%; a 15% overcharge reduces it by less than 2%. Thermostatic expansion valve (TXV)-controlled equipment is much less sensitive to deviations from the correct charge.
To check refrigerant charge in capillary-tube or orifice-controlled equipment, only the superheat test is well developed, practical, and reliable—when properly done (see “Demystifying Superheat,” HE Nov/Dec ’00, p. 8). Still, many service technicians won’t use this test, relying instead on rules of thumb. Common technician complaints about the superheat test include that it takes too long or that they don’t have the appropriate equipment or outdoor conditions to conduct it. (For TXV-controlled equipment, a subcooling test is the appropriate tool for checking charge.) Those complaints may now be as archaic as an open-hearth fireplace.
New Ways to Check
Three new methods that aim to simplify conducting superheat tests on residential cooling systems have recently come on the market. Each method involves different hardware, software, and measurements (see Table 1). The first method relies on pressure transducers, temperature sensors, and a personal digital assistant (PDA) for field diagnosis. The second method uses pressure transducers, temperature and humidity sensors, and a computer-based software package. The third method relies on refrigerant manifold gauges and temperature sensors, a computer-based expert system, and oversight from human experts.
To compare how well these methods work, Craig Wray, a Lawrence Berkeley National Laboratory (LBNL) staff scientist, and Jeffrey Siegel, an LBNL graduate student researcher, conducted field tests. Due to project constraints, they were able to test cooling equipment in only four houses—two new ones and two older ones—all with short-tube-orifice controls. Because of constraints imposed by the manufacturers, the researchers are unable to identify the methods by brand name, but their descriptions are fairly detailed.
Comparing the different methods was not straightforward, as each method uses different input data and different algorithms. For some of these methods, the algorithms either are proprietary or were not accessible to the researchers, so the best they could do for some of the quantities being compared was to state when they effectively differed. The algorithm for Method 3 is in the California Energy Commission’s Title 24 manual, as well as in the Carrier manual.
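Published target-superheat tables like those in the Title 24 and Carrier manuals are often approximated in the field by a simple linear formula in indoor wet bulb and outdoor dry bulb temperature. The sketch below uses that commonly cited approximation; it is illustrative only and is not the proprietary algorithm of any of the three methods tested:

```python
def target_superheat(indoor_wet_bulb_f, outdoor_dry_bulb_f):
    """Approximate target superheat (deg F) for fixed-orifice systems.

    Uses a commonly cited linear fit to published target-superheat
    tables (an assumption, not any vendor's exact algorithm):
        target = (3 * IWB - 80 - ODB) / 2
    Returns None when the result falls below about 5 deg F, where the
    tables generally indicate conditions are unsuitable for the test.
    """
    target = (3.0 * indoor_wet_bulb_f - 80.0 - outdoor_dry_bulb_f) / 2.0
    return target if target >= 5.0 else None
```

For example, at a 64°F indoor wet bulb and an 85°F outdoor dry bulb, the fit yields a target of 13.5°F; on a very hot, dry day the formula can go negative, which is the “can’t test under these conditions” case.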
While the superheat test is most major manufacturers’ de facto reference test, the LBNL researchers consider the gravimetric test to be the gold standard of refrigerant charge tests. This method involves removing all of the refrigerant in a system, drawing a vacuum and leak testing the system, and then adding the manufacturer’s recommended amount of refrigerant for the compressor, coils, and installed line set (refrigerant line length and diameter). However, the LBNL researchers did not have the time or resources to do gravimetric tests, and it is not possible to conduct this test in homes with mismatched coils or unknown refrigerant line set lengths. Instead, they used the superheat test as a reference standard, relying on output from the software of Method 3 to conduct the test. They used that method because it is more fully developed than the others, and it has been used to conduct many field tests.
The A/C systems that they tested were all split-system, R-22 central air conditioners equipped with short-tube-orifice metering devices (see Table 2). The systems had rated capacities of 3 to 4 tons. The researchers conducted superheat tests in the as-found condition and then repeated them each time they added or removed refrigerant charge. At each house, they operated the equipment for at least 15 minutes initially and after each charge change to allow system conditions to stabilize before conducting a superheat test.
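In a superheat test on a fixed-orifice R-22 system, the actual superheat is the measured suction-line temperature minus the saturation temperature corresponding to the measured suction pressure. The sketch below uses a few rounded, illustrative R-22 pressure-temperature points; a technician would read a full P-T chart rather than this abbreviated table:

```python
# Abbreviated R-22 saturation data (suction pressure in psig ->
# saturation temperature in deg F). Values are rounded and
# illustrative, not a substitute for a full P-T chart.
R22_PT = [(54.9, 30.0), (61.5, 35.0), (68.5, 40.0), (76.0, 45.0), (84.0, 50.0)]

def saturation_temp_f(psig):
    """Linearly interpolate saturation temperature from the table."""
    pts = sorted(R22_PT)
    if not (pts[0][0] <= psig <= pts[-1][0]):
        raise ValueError("pressure outside table range")
    for (p1, t1), (p2, t2) in zip(pts, pts[1:]):
        if p1 <= psig <= p2:
            return t1 + (t2 - t1) * (psig - p1) / (p2 - p1)

def actual_superheat(suction_line_temp_f, suction_psig):
    """Superheat = suction-line temperature minus saturation temperature."""
    return suction_line_temp_f - saturation_temp_f(suction_psig)
```

For instance, a 58°F suction-line temperature at 68.5 psig (about 40°F saturation) gives 18°F of actual superheat; the technician then compares that to the target superheat for the current conditions.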
The researchers did not find Method 1 useful for conducting superheat tests for several reasons. Instead of reporting a target superheat, Method 1 furnishes a qualitative rating: high, low, acceptable, or N/A. N/A means that the input data that Method 1 uses to calculate the target are outside the acceptable range. For three of the four as-found cases, N/A was the response Method 1 produced during the superheat test, even though the systems—including the two brand-new ones—were substantially undercharged. Method 1 also reported other problems as being more important than the undercharge.
Another problem with Method 1 is that it doesn’t measure the indoor wet bulb temperature, an important quantity for determining superheat. According to the manufacturer of Method 1, this method was not designed to be used on residential equipment, but rather on commercial rooftop equipment, which tends to be TXV-controlled. The manufacturer is currently developing a diagnostic system for residential equipment.
Other problems the researchers had with Method 1 were incorrect diagnoses and inconsistencies between the diagnoses that a technician could get from the hand-held PDA field analysis and the diagnoses provided by software found on the manufacturer’s Web site. For example, after the A/C systems at two of the sites were correctly charged, Method 1 indicated that the charge level was low, while the other methods showed that the systems were now charged correctly. And all the cases that the PDA listed as N/A in the field were later diagnosed as undercharged by the Web data analysis. No technician would want to repair equipment based on a diagnosis obtained in the field and then find out later that the repair wasn’t needed or shouldn’t have been done.
While the problem of incorrect charge level may be difficult to repair, the problem of differing diagnoses is probably easy to fix by better coordinating the diagnoses of the PDA software and the Web site. Indeed, the manufacturer agrees that differing diagnoses can occur if the PDA does not have the latest version of the diagnostic routine, which is what will always be in use on the Web site. According to the manufacturer, technicians who regularly synch their PDA data with the Web server will get their PDA software automatically upgraded with the newest routine.
Not So Uncertain
When the researchers used Methods 2 and 3, uncertainties in the measurements led to variations in target superheat, actual superheat, and superheat deviation. These variations were as much as 5°F for target superheat, 9°F for actual superheat, and 8°F for superheat deviation. Earlier laboratory test data for capillary-tube-controlled equipment have found that a 10°F error in superheat deviation can result in a charge assessment difference of 5%–9%, depending on outdoor temperature.
Even with these differences, Methods 2 and 3 resulted in similar diagnoses and suggested fixes. Since the air conditioners were undercharged by 15%–30% at all four sites, this agreement in diagnoses shouldn’t be surprising. The question then becomes, would these methods work as well on air conditioners that were better charged? Apparently they would, since Methods 2 and 3 yielded superheat deviations that never differed by more than 2°F in tests of the A/C systems after they were correctly charged.
To see how correct charging affected these A/C systems, the researchers measured or calculated the as-found and postcharging total cooling capacity, EER, and power consumption, as well as the fractional changes in these parameters caused by charging (see Table 3). A small portion of the changes in power draw, capacity, and EER can be attributed to slight shifts in ambient and outdoor temperatures between the as-found and postcharging conditions.
Not surprisingly, properly charging the cooling equipment significantly increased capacity and efficiency. After charging, total cooling capacity improved by 18%–38% and EER improved by 7%–20%. Power consumption increased after charging by 280 to 540 watts, or 7%–13%. Still, this additional power consumption delivers greater cooling to the customer, and the A/C ends up cycling on for fewer hours, both on- and off-peak.
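The interplay of capacity and power here follows directly from the definition of EER: Btu/h of cooling delivered per watt of electrical input. A quick check with hypothetical numbers chosen from the ranges reported above shows how a higher power draw can still mean an efficiency gain:

```python
def eer(capacity_btuh, power_watts):
    """Energy efficiency ratio: cooling output (Btu/h) per watt of input."""
    return capacity_btuh / power_watts

# Hypothetical before/after values, chosen to fall within the ranges
# reported in the article (not measured data from the study).
before = eer(28_000, 3_800)               # undercharged system
after = eer(28_000 * 1.30, 3_800 + 400)   # +30% capacity, +400 W power

improvement = (after - before) / before   # fractional EER gain
```

With these illustrative numbers, EER rises from about 7.4 to about 8.7, a gain of roughly 18%, even though the unit draws 400 W more while running.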
While the LBNL researchers aren’t saying that either Method 2 or Method 3 is perfect, these methods make checking charge easier to do. Since checking charge is a prerequisite to correctly charging an A/C system, any method that simplifies this process deserves careful consideration by all HVAC contractors interested in delivering high-quality services.
Mary James is the publisher of Home Energy.