Quote:
Originally Posted by trollbait
Notice that the test is a comparison against a "course monitoring tire". In an ideal situation, the course monitoring tire is the SRTT - Standard Reference Test Tire. However, because the test doesn't always allow the use of the SRTT, a tire whose value can be traced back to the SRTT is used instead. In principle, then, every test is a comparison against the SRTT, regardless of the manufacturer.
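To make that traceability idea concrete, here's a minimal sketch (my own illustration with made-up numbers, not the official UTQG procedure) of grading a candidate tire against the course monitoring tire and carrying a correction factor back to the SRTT:

```python
def treadwear_grade(tire_miles: float,
                    cmt_miles: float,
                    cmt_to_srtt_factor: float = 1.0) -> int:
    """Grade the candidate relative to the course monitoring tire (CMT),
    then scale by the CMT's assumed traceability factor back to the SRTT.
    A grade of 100 means the tire wore like the reference."""
    grade = (tire_miles / cmt_miles) * cmt_to_srtt_factor * 100
    return round(grade / 20) * 20  # treadwear grades are reported in 20-point steps

# Invented numbers: the candidate projects to 60k miles, the CMT to 15k,
# and the CMT itself is assumed to wear 5% faster than the SRTT.
print(treadwear_grade(60_000, 15_000, cmt_to_srtt_factor=0.95))  # 380
```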
There is some variability in the test. In other words, run the test repeatedly and you'll get slightly different values. The operative words here are "slightly different".
Plus, manufacturers differ in how they choose to rate their tires. Some, like Michelin, tend to be conservative in their ratings, while others (I can't cite a specific example) tend toward the other extreme.
So you CAN compare different brands, but you have to be aware that this value has some built-in variability and the ratings should be taken with a grain of salt. But clearly, a 260-point difference is significant and certainly indicative.
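As a toy illustration of the grain-of-salt point (the 40-point noise band below is my own assumption, not an official figure), you might only treat grade differences as meaningful when they exceed the test's variability:

```python
def meaningful_difference(grade_a: int, grade_b: int, tolerance: int = 40) -> bool:
    """Return True if two treadwear grades differ by more than the assumed
    test-to-test noise band (the default tolerance is invented for illustration)."""
    return abs(grade_a - grade_b) > tolerance

print(meaningful_difference(700, 440))  # True: a 260-point gap swamps any noise
print(meaningful_difference(520, 500))  # False: within the assumed noise band
```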
Some time back NHTSA (the National Highway Traffic Safety Administration) conducted rolling resistance (RR) tests on a number of tires to see if they could confirm what the tire manufacturers (and conventional wisdom) said about RR vs traction and treadwear. Because RR tests are fairly inexpensive to run - and since they needed the data anyway to try to write an RR regulation - they took those RR values and compared them to the UTQG traction and treadwear ratings. What they found was that tires with high traction ratings (or high treadwear ratings) never had low RR values - and tires with low RR values never had high traction or treadwear ratings - which is about as good a confirmation of the principle as one could get without actually conducting the traction and treadwear tests, which are pretty expensive to run.
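A toy version of that cross-check (the data and cutoffs below are entirely invented; this is not NHTSA's dataset) would scan for the combination the study never found - low RR together with high traction or treadwear grades:

```python
# Each entry: (model, rolling-resistance coefficient, traction grade, treadwear grade).
# All values are made up for illustration.
tires = [
    ("A", 0.0070, "A", 400),   # low RR, modest grades
    ("B", 0.0095, "AA", 700),  # high grades, but high RR
    ("C", 0.0105, "A", 620),
]

LOW_RR = 0.0080        # assumed cutoff for "low rolling resistance"
HIGH_TREADWEAR = 600   # assumed cutoff for "high treadwear"

# Restating the finding: this list should come back empty, because low RR
# never coincided with a high traction or treadwear grade.
violations = [m for m, rr, traction, tw in tires
              if rr < LOW_RR and (traction == "AA" or tw >= HIGH_TREADWEAR)]
print(violations)  # [] - consistent with the pattern NHTSA reported
```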
I think you can rely on the UTQG treadwear ratings, but you shouldn't make too much of small differences.