The question: Why do critics use a 100-point scoring system if most scores fall in the 80- to 100-point range? Shouldn't we see a lot of 75s, even a few 40s and 50s?
The answer: Fair logic. I suppose that, by academic standards, wine critics sound like pushovers.
Robert Parker, the American who popularized the 100-point system for wine, has argued that it is based on university scoring and that people can easily make sense of it for that reason. Teachers, though, don't get to choose their students, much as they might wish to. They've got to mark everybody who comes to class – the brainiacs as well as the academic failures who squeaked into Michigan State solely by the grace of a basketball scholarship.
Wine critics may taste thousands of wines a year, but there's limited space in newspapers and electronic newsletters, so they tend to focus on wines that meet a certain basic quality level (that 80-plus score). Otherwise they'd be filling pages with dreck, and readers have little patience for learning about wines they shouldn't buy.
In my case, I tend not to waste much ink skewering an $8 dud that may be available in only three stores in Nunavut. That said, I do occasionally write negatively about certain wines, but generally only those whose overhyped reputations I feel deserve deflating. And I do criticize winemaking practices, such as the excessive reliance on heavy oak in far too many California chardonnays, citing examples to illustrate the point.
I should add that wine scores are – contrary to Mr. Parker's academic analogy – not to be compared with university scores. A 75 may be a good grade at UBC and Dalhousie, but it denotes a ho-hum bottle. I didn't invent the 100-point system, but if I were to use a different metric and start giving scores of 55 or 60 for mediocre wines and 75s for pretty good ones, few people would find them useful because they wouldn't be consistent with the accepted standard.
To be more specific, my 55 (a lousy grade in university) might be equivalent to a (lousy) 75 in Mr. Parker's newsletter or Wine Spectator magazine, but people who read those publications would assume that I found the wine much worse than they did, when in fact that was not the case.
Readers don't just read one publication. Many follow a variety of critics and find utility in comparing reviews to see if there's consensus, which generally is more reliable than a single opinion. In short, readers assume we're all roughly using the same metric even if specific scores differ based on individual judgment. Otherwise, it would be sort of like a malfunctioning car speedometer, where my 55 kilometres an hour is someone else's 75. Not a sensible way to share the road.
Have a wine question?