Who Tosses Around the 90 Point Ratings More Casually: Wine Spectator or Wine Advocate?

Wednesday, September 1, 2010

I remember the first time I went wine tasting in Napa and noticed a banner hanging in the tasting room mentioning a 90 point rating.  My first impression was: wow, some independent publication thinks pretty highly of their wines.  Cool.

But then I went to another tasting room and they had more of these banners.  And the next tasting room had even more.  I started paying attention to which publications were behind each of the 90 point ratings and noticed that some wineries leveraged publications I’d never heard of whereas others stuck to familiar names like Wine Spectator and Wine Advocate.  After paying attention to wine ratings for about 15 years now, I feel like I’ve got a handle on the relative prestige of each of the big publications that rate wines, and I think I can safely say that for most wine enthusiasts who pay attention to ratings, the two most influential publications are Wine Spectator and Wine Advocate.  There are other publications, including some interesting, highly specialized ones.  But for the most part these are the two that people in the US pay the most attention to.

Over time, I started to notice that Robert Parker’s Wine Advocate seemed to toss around the 90 point ratings a little more casually than Wine Spectator- especially for Napa Cabernet.  When I recently had a look at the early ratings emerging for the highly regarded 2007 Napa vintage, I noticed some big differences in this area.  Specifically: Wine Advocate seems to rate wines higher in general than Wine Spectator.  Consider that Wine Spectator’s James Laube has never rated a wine 100 points as part of Wine Spectator’s blind tasting program.  For the 2007 Napa Cab vintage alone, Robert Parker has already rated five wines 100 points.  And he’s not even done rating wines from the vintage.

I wanted to gain a little more clarity on the situation so I looked more closely at Wine Spectator and Wine Advocate ratings for 2007 Napa Cabernet Sauvignons.  Based on past vintages, I’d estimate Spectator is about halfway through their tastings of this vintage, but some telling trends are already emerging.  In the histogram below I've plotted the number of wines receiving each rating for both Wine Spectator and Wine Advocate.  Wine Spectator ratings run from 79-97 points, and Wine Advocate ratings run from 85-100 points.
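
If you want to crunch numbers like this yourself, here’s a minimal sketch in Python of how the tallies could be plotted.  The rating lists are placeholders standing in for the actual published scores, which you’d have to collect from each publication yourself.

    import matplotlib.pyplot as plt

    # Placeholder ratings -- substitute the actual published scores.
    ws_ratings = [79, 85, 88, 90, 91, 92, 92, 93, 95, 97]   # Wine Spectator
    wa_ratings = [85, 90, 91, 93, 94, 94, 95, 96, 98, 100]  # Wine Advocate

    # One bin per point across the usable portion of the 100 point scale.
    bins = range(79, 102)
    plt.hist([ws_ratings, wa_ratings], bins=bins,
             color=["blue", "red"], label=["Wine Spectator", "Wine Advocate"])
    plt.xlabel("Rating (100 point scale)")
    plt.ylabel("Number of wines")
    plt.legend()
    plt.show()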

Notice how there seem to be more high ratings from Wine Advocate (in red) than from Wine Spectator (in blue), especially at 93 points and above:

Note: Wine Advocate chooses to publish ratings only if they’re higher than 85 points.  I presume this is because they don’t want to fry wineries who sent them wine as samples, but I think we can see from this histogram that hardly any wines are rated even close to 85 points by Wine Advocate.  Almost all of the wines are rated 90 points or higher!  In fact, Wine Advocate rated 90.2% of the 2007 Napa Cabs they reviewed 90 points or better.  My point: If you buy a 2007 Napa Cab rated "only" 90 points by Wine Advocate, you’re actually buying a wine that’s significantly below average compared to the other wines they've rated.

Wine Spectator rated only 54% of the wines they reviewed 90 points or better, so if you’re looking at a Wine Spectator rating of 90 points or better you’re at least selecting a wine that’s above average compared to the other wines they rated.
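
The "percent rated 90 points or better" figures are simple arithmetic.  Here’s a sketch of the calculation, again with placeholder lists standing in for the real ratings:

    # Placeholder ratings -- substitute the real published scores.
    ws_ratings = [88, 89, 90, 91, 92, 96]  # Wine Spectator
    wa_ratings = [86, 90, 92, 93, 94, 95]  # Wine Advocate

    def share_90_plus(ratings):
        """Percent of reviewed wines rated 90 points or better."""
        return 100.0 * sum(1 for r in ratings if r >= 90) / len(ratings)

    # On the actual 2007 Napa Cab data these work out to roughly
    # 90.2% for Wine Advocate and 54% for Wine Spectator.
    print(f"Spectator: {share_90_plus(ws_ratings):.1f}%")
    print(f"Advocate:  {share_90_plus(wa_ratings):.1f}%")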

Fans of Robert Parker might suggest that he rates wines higher because he tastes better wines.  To get a true comparison of how casually each publication tosses around the 90 pointers, we’d need to look at what each publication thought of the wines they both published ratings for.  Okay, that’s a bit tedious, but because I’m dedicated to ridiculous number crunching as it relates to wine appreciation, I’ve done the work.

Below is a chart showing the histogram for the 63 wines both publications released ratings for.  See how Wine Advocate is still top-heavy, especially at 98 points and above.  For this common set of wines, Wine Advocate is 1.57 points higher on average.  That might not seem like a lot, but considering how compressed the actively used portion of the 100 point scale is, it’s a pretty big difference.  For example, see how Spectator peaks around 92-93 whereas Advocate peaks around 93-95:

Discussion

One might argue that if Spectator had the same policy of truncating ratings below 85 points, the 82 point rating in the set of common wines wouldn’t have been included as a comparison point.  Fair enough- Advocate would still be 1.46 points higher on average with this data point excluded.
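
For the curious, the average-gap calculation is simple enough to sketch.  The score pairs below are invented for illustration- the real comparison uses the 63 wines rated by both publications:

    # Each pair is (Wine Advocate score, Wine Spectator score) for the
    # same wine.  Placeholder pairs, not the actual 63 common wines.
    common = [(95, 93), (94, 92), (98, 95), (91, 82)]

    def mean_gap(pairs):
        """Average of (Advocate - Spectator) across the common wines."""
        return sum(wa - ws for wa, ws in pairs) / len(pairs)

    print(mean_gap(common))  # 1.57 on the real data

    # Mirror Advocate's policy of publishing only ratings above 85 points
    # by dropping pairs where Spectator's score is 85 or below.
    trimmed = [(wa, ws) for wa, ws in common if ws > 85]
    print(mean_gap(trimmed))  # 1.46 on the real data, sans the 82 pointer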

One might argue further that Parker simply thought more highly of the vintage in general than Laube did.  I don’t think that’s true.  If you look at the overall vintage ratings for 2007, they’re fairly well aligned.  For 2007 specifically, Parker rated the vintage as a whole 96 whereas Spectator has given it a provisional 94-97.

Conclusions

This is enough data for me to affirm what I suspected: Robert Parker tosses around the big scores more casually than James Laube.  Would the same be true for Bordeaux, where until recently James Suckling was responsible for the numbers?  I haven’t run the numbers yet, but I don’t think so.  I think Parker and Suckling distribute numbers on a more similar curve, but that’s just my gut instinct.

My point in doing this analysis and sharing it for discussion is that numerical ratings from Wine Spectator and Wine Advocate aren’t distributed equally, so they shouldn't be treated equally.  Knock a couple of points off the next Wine Advocate rating you see, and I think you’ll have something more in line with what Wine Spectator would rate a Napa Cab.

1.57 points might not seem like a lot, but recall from the wwpQPR Calculator that I personally consider a 3 point increase north of 90 to be a doubling in quality.  By that measure, 1.57 points is roughly “half again as good”.  In other words, I react quite differently to a 94 point rating from Wine Spectator than to a 92 point rating.  And that’s not because I believe the numbers are that precise- it’s because of the point in the bell curve each number represents.  A 94+ point 2007 Napa Cab from Spectator is pretty special whereas a 92 pointer is fairly common and relatively easy to come by.
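
To spell out that arithmetic: under the wwpQPR assumption that each 3 points north of 90 doubles quality, the implied multiplier for a 1.57 point gap works out like this:

    # Under the wwpQPR assumption that +3 points north of 90 doubles
    # quality, perceived quality scales as 2 ** (point_gap / 3).
    gap = 1.57
    multiplier = 2 ** (gap / 3)
    print(round(multiplier, 2))  # ~1.44, i.e. roughly "half again as good"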

A logical reaction to this, I think, is to seek out wines that are highly rated by both publications.  It would also be sensible to find the most affordable wines that attained a rating above a certain threshold, and then further narrow the list to wines with high availability.  If this sort of buying strategy sounds sensible to you, consider The Wine Blue Book.  On a monthly basis they publish average ratings from the top publications, sorted by score, category, and value relative to their peer group, along with a realistic current cost and availability.
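
That buying filter boils down to a few lines of code.  The wines below are invented for illustration, and this is just a sketch of the strategy- not how The Wine Blue Book actually computes its lists:

    # Keep wines at or above a rating floor, then rank the keepers by
    # price, cheapest first.  Entries are (name, avg rating, price USD).
    wines = [
        ("Wine A", 94, 125.00),
        ("Wine B", 92, 40.00),
        ("Wine C", 95, 85.00),
    ]

    floor = 93
    picks = sorted((w for w in wines if w[1] >= floor), key=lambda w: w[2])
    for name, rating, price in picks:
        print(f"{name}: {rating} pts at ${price:.2f}")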

What You Should Do Next

Subscribe to this site.  Why?  Because crawling through these numbers gave me a chance to look closely at 2007 Napa Cabs, and there are some interesting stories that fall out of these ratings.  I’ll be writing about solid value plays at a variety of price points, triangulated with my experience tasting some of these wines.  I’d also like to take a closer look at the wines the publications disagreed most heartily on- as usual there are some doozies and it’ll be interesting to see who we think is right.  I look forward to continuing the conversation.

What do you think of this comparison? When you see a rating from Wine Advocate do you react to it differently than when you see a rating from Wine Spectator?
