This article was originally published in the January/February 1998 issue of Home Energy Magazine.





Home Energy Magazine Online January/February 1998



A Hubbub Over HERS

Home Energy Rating Systems: Actual Usage May Vary (Sept/Oct '97, p. 21) generated more comments than any other topic, both negative and positive. We don't have enough space to print all the letters, but the ones below address some of the primary concerns readers are discussing (and also the errors in the article).

HERS Are Worth the Wait

Your recent article Home Energy Rating Systems: Actual Usage May Vary made some relevant points and offered valid suggestions for the HERS industry. However, it also contained some misunderstandings and flat-out wrong numbers.

As you correctly discovered, rating scores do not correlate closely with energy use or cost, since the rating method generally used rates the house against itself, using the same fuel type, rather than against some average of all houses. Energy consumption and costs are displayed separately on ratings to give consumers this information.

Our biggest concern was the wrong numbers for Vermont on Table 1, page 26 (Comparison of HERS Costs to Rating Volume). I am not sure where the figures came from, but those for Vermont are off by a factor of more than two.

If you look at the National Renewable Energy Laboratory (NREL) HERS Case Study referenced throughout the article, you will see in Table 4-4 that for 1993 through 1995, Energy Rated Homes of Vermont's total funding averaged $218,716 per year (not $390,757). Figure 4-1 in the NREL study shows a total of 1,140 ratings for the same three-year period, or an average of 380 ratings per year. Doing the math, our cost comes out to $576 per rating, not the $1,252 you reported.
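The letter's arithmetic is easy to verify. A quick check using the figures it cites from the NREL case study:

```python
# Cost per rating for Energy Rated Homes of Vermont, 1993-1995,
# using the figures the letter cites from the NREL HERS Case Study.
total_funding_per_year = 218_716   # average annual funding, in dollars
total_ratings = 1_140              # ratings over the three-year period
ratings_per_year = total_ratings / 3

cost_per_rating = total_funding_per_year / ratings_per_year
print(round(cost_per_rating))      # 576
```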

HERS programs can be expensive to get up and running. However, the cost of transforming the housing and mortgage industries to recognize and value energy efficiency will be well worth the expense. The HERS industry is making some real and lasting inroads that will pay efficiency dividends for a long time to come.

Richard Faesy
Energy Rated Homes of Vermont
Burlington, VT

Mr. Faesy is correct about the numbers cited in the NREL report. We apologize for the error.

Good, Better, BESTEST

Stein's article referred several times to the BESTEST method for evaluating home rating software. I thought your readers would be interested to know that three BESTEST reports are available from the National Renewable Energy Laboratory (NREL).

  • International Energy Agency Building Energy Simulation Test (BESTEST) and Diagnostic Method. This is intended for testing and diagnosing problems in detailed building energy simulation software.
  • Home Energy Rating System Building Energy Simulation Test (HERS BESTEST). This is intended for testing home rating software and other simplified software.
  • Home Energy Rating System Building Energy Simulation Test for Florida (Florida HERS-BESTEST).
BESTEST is a method for checking the mathematical and algorithmic accuracy of building energy software. It does not check the accuracy of assumptions, such as occupant behavior. Ideally, validation of HERS software would involve both utility bill verification to investigate external error sources and BESTESTing to evaluate internal error sources. Mr. Stein's study highlights the need for these kinds of tests. The HERS National Guidelines, developed with the support of DOE and the HERS Council, call for both kinds of verification.

I would also like to draw your attention to an error in Table 1. It is not possible for the actual and predicted site energy use in Colorado to be 135,000 and 120,000 Btu respectively. That would amount to an annual average utility bill of about $1.50. The numbers appear to be off by about three orders of magnitude.

Ron Judkoff
National Renewable Energy Laboratory
Golden, CO

We mistakenly translated 135 MBtu as 135,000 Btu and 120 MBtu as 120,000 Btu. The correct conversions are 135 million Btu and 120 million Btu respectively.

HERS: Improvements Needed

The article questioning the accuracy of HERS rating systems may provide the stimulus needed for HERS to grow and prosper. As stated in the article, HERS providers need to give consumers more information about the accuracy and meaning of the ratings.

The meaning of the rating score is so obscure that even your excellent article found it ambiguous. If rating scores should not be used to compare houses, what then is their value? The ratings neither compare estimated energy costs, which are what consumers want to know, nor the amount of energy used. Instead they compare the sum of the energy loads adjusted for differences in the energy efficiency of the furnace, air conditioner, and water heater. Since there is not much of a market for that information, HERS providers prefer not to explain what the rating means.

The energy-efficient reference house is also difficult to explain. It is not just a house that meets a desired energy code or standard, tailored to the same dimensions as the rated house; it is also one that has been doctored in a number of ways. While it is based on the CABO/MEC 1993 building standards and the 1992 NAECA system efficiencies, the heat loss for the wall assembly is half that of the CABO/MEC standards, and the required efficiency for window air conditioners is 25% higher than NAECA's. (The reference house efficiency is based on a central rather than a wall air conditioner.)

For estimating solar gains and loads, the reference house is assumed to have windows equal to 18% of the floor area--50% more than the 12% average. The exaggerated windows in the reference house give houses in warm climates better ratings, and houses in cold climates worse ratings, than they would otherwise get. Also, since glazing is measured as a proportion of the floor area rather than the wall area, larger houses have more skewed results.

Concerning the accuracy of the cost estimates, there are assumptions built into the calculations that preclude the estimated energy costs from matching actual costs. The most prominent assumption concerns the appliances--which account for almost half of the residential energy dollar. Variations in energy use for appliances are primarily due to differences in the size of the house. Larger houses have more lights, larger refrigerators, more TVs, and more laundry than smaller houses. The data from the Residential Energy Consumption Survey, published by the Department of Energy in 1995, confirm this rather obvious relationship. Yet for appliances, the HERS calculations are based on the average use of the appliances found in the house. Thus, a one-room house is assumed to use as many lights and other appliances as a ten-room house. This flaw exaggerates the estimated energy expenditures of small residences while understating those of large ones by as much as 30%. If the HERS industry is to survive, it must outgrow its shaky technical foundation.

Doris Iklé
Conservation Management Corporation
Bethesda, MD

Rating Tool Accuracy

Stein's article served a useful purpose in initiating a national dialogue on the important subject of rating tool accuracy. HERS rating tools, like building energy analysis tools in general, are subject to a number of modeling issues:

Garbage In, Garbage Out--Regardless of whether the home is new or existing, the design and construction characteristics of the home must be accurately defined and entered into the software. Without an accurate building description, it is not possible to obtain accurate energy performance estimates. This requires knowledgeable, trained home energy raters applying a consistent, unambiguous audit process.

Rate the Home, Not the Occupants--HERS rating tools rely on typical occupancy assumptions for thermostat setpoints, internal heat gain, hot water usage, and other occupant-related energy use factors (e.g., lighting and appliance operating schedules and energy usage). The goal is to establish the energy efficiency of the home and its potential for cost-effective improvements, not to establish the energy-consuming behavior of the occupants. Thus, I think it inappropriate to base an energy rating on utility bills.

Rating Score vs. Energy Costs--So that the home's energy efficiency rating is stable over time, the energy score (rating) does not rely on energy costs. However, as stated in the DOE and HERS Council Guidelines, the annual energy cost for heating, cooling, DHW, and lights and appliances must be prominently displayed on the rating certificate. Everyone in the HERS industry recognizes that the rating score must also be seen in combination with energy costs to truly assess the energy performance of a home.

Rating Tool Energy Cost Prediction vs. Utility Bills--The goal of a HERS rating is not to match an existing home's utility bill. The home's occupants can influence energy costs by a factor of two. Rather, the rating tool attempts to predict likely energy use for the home under average or typical occupancy, just as the MPG test uses a standard driver protocol when establishing the MPG rating on a specific automobile.

Rating Tool Certification--The HERS BESTEST procedure was developed by NREL and the HERS industry to ensure that the physics and mathematics of rating tools are within acceptable bounds of accuracy, as established by three public-domain simulation tools--DOE-2, BLAST, and SERI/RES. Use of this procedure by all HERS providers will go a long way toward ensuring rating tool accuracy.

HERS has the potential to help create a level playing field for defining home energy efficiency, establishing potential energy costs, and identifying cost-effective improvements. However, to be meaningful, it must rest on consistent technical guidelines. These have been developed by DOE, the HERS Council, HERS providers, and state energy offices. The fact that DOE has not published these guidelines is not important. I would rather see the HERS industry, not DOE, be responsible for promulgating and maintaining them. Beyond creating market demand for home energy ratings through existing and new energy mortgage products, the most important issue facing the HERS industry is finding a business model that will enable home energy raters and HERS providers to be economically sustainable.

HERS represent an important opportunity to account for energy efficiency in all residential real estate transactions. If HERS can promote across-the-board energy efficiency improvements in new and existing homes, national interests, as well as consumers' interests, will be served. We should do all we can to improve the professionalism and technical accuracy of HERS.

I encourage Home Energy to actively support the creation of a viable, professional HERS industry. Let's work together constructively and positively toward this end.

Michael J. Holtz, A.I.A.
Architectural Energy Corporation
Boulder, CO

DESLog Developer Credit

In the trends article DESLog Delivers Timely Answers about Home Energy Use (Sept/Oct '97, p. 7), I failed to credit Lester Shen of Technical and Learning Services for developing the short-term monitoring method. Without Dr. Shen's initiative and involvement in the DESLog project, the DESLog software never would have been developed. As you can imagine, everyone involved in the project at the Center for Energy and Environment deeply regrets this omission.

Karen Linner
Center for Energy and Environment
Minneapolis, MN

Window Queries

Thank you for the article on rehab vs. replacement of old windows (Creating Windows of Energy-Saving Opportunity, Sept/Oct '97, p. 13). I read it with interest both theoretical and (given our lovely, peeling, lead-painted double sashes) practical. I was puzzled, however, by the assumption used to separate extraneous vs. sash leakage during pressure testing. The author states, ... the air drawn through the windows in the study was approximately 30% cooler than the indoor air. Based on this difference, we assumed that approximately 30% of the extraneous leakage was outdoor air coming through the rough opening.

If 30% cooler means that the temperature was 30% lower, then the leakage ratio depends on the temperature scale chosen. In a house on the Detroit, Michigan, side of the Ambassador Bridge with an indoor temperature of 70°F, the windows in question would leak air at 49°F. Move the same house to the other end of the bridge in Windsor, Ontario, where those same temperatures are measured as 21°C and 9°C, and the extraneous leakage would be 60% of the total. Measure it in kelvins--294 K indoors and 282 K for the leaking air--and the result is 4%. Temperature by itself doesn't capture the rate of heat transfer.
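The scale dependence the letter describes is easy to demonstrate. A quick sketch using the letter's rounded temperatures, with the same physical conditions expressed on three scales:

```python
def fraction_cooler(t_indoor, t_leak):
    """Fraction by which the leaking air's temperature is 'lower' than the
    indoor air's. Because each scale puts its zero in a different place,
    this quantity changes when the scale changes."""
    return (t_indoor - t_leak) / t_indoor

# The same physical temperatures on three scales (letter's rounded values).
print(round(fraction_cooler(70, 49), 2))    # Fahrenheit: 0.3
print(round(fraction_cooler(21, 9), 2))     # Celsius:    0.57 (roughly 60%)
print(round(fraction_cooler(294, 282), 2))  # Kelvin:     0.04
```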

Also, there is no provision for changes in outdoor temperature. A test done on a very cold day would give very different results from one done on a warm day.

I also don't understand the assumption that the measured temperature drop must be due to indoor air mixing with outdoor air coming through the rough opening. Any extraneous outdoor air is passing through the wall cavity, which is at some intermediate temperature between indoors and outdoors due to conduction through the wall, as well as to infiltration. Won't it be warmed by this as well as by mixing with indoor air?

Barbara Shohl Wagner
Rochester, MI

Andrew M. Shapiro responds:

We attempted to make an estimate of the fraction of extraneous leakage that comes from outdoors, and therefore contributes to heat losses during the winter. (Your wording seems to imply that all extraneous leakage is from outside--some comes from outside and some from inside, based on the way the test procedure is set up. Most uses of this test are concerned with sash leakage only, so the source of extraneous leakage is unimportant in those tests.) Yes, the wording 30% cooler is very ambiguous. The temperature of air drawn into the blower during the extraneous leakage test (two layers of poly in place) was about one-third of the way up from the outdoor temperature toward the indoor temperature. This testing was done during cold weather. Reporting a fraction of the temperature difference (rather than an actual temperature difference in degrees) normalizes for varying indoor and outdoor temperatures.
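Expressed as a fraction of the indoor-outdoor temperature difference, the estimate no longer depends on the scale. A minimal sketch of the two-stream mixing idea implied here, using hypothetical temperatures (not the study's actual readings):

```python
def outdoor_fraction(t_in, t_out, t_mix):
    """Estimated fraction of leakage air coming from outdoors, assuming the
    blower air is a simple two-stream mix:
        t_mix = x * t_out + (1 - x) * t_in
    Solving for x gives a ratio of temperature differences, which is the
    same on any linear temperature scale."""
    return (t_in - t_mix) / (t_in - t_out)

# Hypothetical cold-weather test: blower air one-third of the way from the
# indoor temperature down toward the outdoor temperature.
f_fahrenheit = outdoor_fraction(70.0, 10.0, 50.0)
f_celsius = outdoor_fraction(21.1, -12.2, 10.0)   # same temperatures in C

print(round(f_fahrenheit, 2))  # 0.33
print(round(f_celsius, 2))     # 0.33 -- same answer on either scale
```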

And, yes, we were also aware of the very good point you make: the temperatures of the materials across which the air is drawn, in the bowels of the wall and window weight cavities, influence the air temperature. The full report (much, much longer) has many caveats about these results. We consider these results a placeholder. That is, we have a reasonably high level of confidence that at least some of the extraneous leakage is from outside. We knew we did not have the resources to quantify the actual fraction more accurately, but we wanted a rough, order-of-magnitude estimate of the effect. The fraction estimated from this simple test also varied significantly from house to house, and even from window to window, in the limited number of tests we did. There is a lot we have yet to learn about the dynamics of air and heat flow through windows and the adjacent wall areas connected to them.




