You are viewing this article in the AnnArbor.com archives. For the latest breaking news and updates in Ann Arbor and the surrounding area, see MLive.com/ann-arbor
Posted on Tue, Jul 28, 2009 : 6:03 p.m.

Wine competitions - Are they a hoax?

By Ron Sober

[Photo: Ron Sober at the California State Fair Wine Competition]
I have judged a lot of wine competitions in my life. Some competitions require judges to pass tasting examinations; some do not. Some are strictly regulated, and others are more free-form. So how do you, as a consumer, actually trust the results of competitions? Is there some sort of master list of competitions that rates the judges? Are there certain competitions you need to pay attention to? The quick answer is no, there isn't, and I am not sure it matters.

I am pretty confident in my judging capabilities. I would venture to guess that every wine judge you spoke to would say the same thing: they are confident in their judging capabilities, too. The thing is, how accurate are we as judges? The picture in this post is the first flight of wines at this year's California State Fair Wine Competition. At this competition, we routinely get flights of 30 or more wines. We need to work our way through each wine and decide whether it gets a medal or not. That is a tall order for a single person. Are we infallible as judges? Absolutely not. That is why wine competitions employ panels of judges instead of allowing one person to divvy out medals.

The validity of wine competitions has come under some scrutiny recently. Professor Robert Hodgson wrote a paper titled "An Examination of Judge Reliability at a Major U.S. Wine Competition." In it, he found that only 10 percent of wine judges were able to give the same wine a similar score while judging the California State Fair competition. Now, I am one of those judges. I would like to think that I am in that 10 percent, but the truth is that I probably fall into the 90 percent. Am I concerned about this? Maybe … just a bit.

So, if judges are this inconsistent, does it make the results of a wine competition bogus? I think that while there may be some cause for concern, other factors outweigh single-judge reliability. The biggest is that wine competitions are judged in panels. The same study indicated that even though individual judges lack some reliability, the overall panel does not. The checks and balances are in place to combat single-judge unreliability. Even though I may give a wine two different scores, the balance of the panel's tasting all but assures that the wine gets a fair shake and a much more consistent result. This is definitely a "wisdom of the crowds" situation.
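The "wisdom of the crowds" effect here is just the statistics of averaging: if each judge's score is the wine's true quality plus independent palate noise, the errors partly cancel when you average a panel, so a 4-judge panel's score swings far less than any single judge's. Here is a minimal simulation of that idea (all numbers are hypothetical illustrations, not figures from the Hodgson study):

```python
import random

random.seed(1)

# Hypothetical model: a wine has a "true" quality, and each judge's score
# is that quality plus independent noise representing palate variability.
TRUE_QUALITY = 88.0   # assumed true score of the wine
JUDGE_NOISE = 4.0     # assumed standard deviation of one judge's score
PANEL_SIZE = 4
TRIALS = 10_000

def judge_score():
    """One judge's score: true quality plus random palate noise."""
    return random.gauss(TRUE_QUALITY, JUDGE_NOISE)

def panel_score():
    """A panel's score: the average of PANEL_SIZE independent judges."""
    return sum(judge_score() for _ in range(PANEL_SIZE)) / PANEL_SIZE

def spread(scores):
    """Standard deviation of a list of scores."""
    mean = sum(scores) / len(scores)
    return (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5

singles = [judge_score() for _ in range(TRIALS)]
panels = [panel_score() for _ in range(TRIALS)]

print(f"single-judge spread: {spread(singles):.2f}")
print(f"4-judge panel spread: {spread(panels):.2f}")
```

With independent noise, the panel's spread shrinks by roughly the square root of the panel size, so a 4-judge panel is about twice as consistent as a lone judge under these assumptions.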

Here is a great video of G.M. “Pooch” Pucilowski talking about the study and what it means to the California State Fair competition. Pooch has been testing us judges for the past five years or so, to determine just how reliable we are. I think it is a fair assessment of the true situation.

This isn't the first time I have been tested as a judge. Back when I judged at InterVin, we were constantly tested for reliability. If we gave the same wine two scores that differed by more than 5 points, we were asked to stop tasting and rest for an hour. If this happened three times over the course of the competition, you were removed as an InterVin judge and had to re-take the judges' tasting evaluation.

I actually welcome this type of testing and encourage other competitions to employ it. The California State Fair made some significant changes in this year's competition to help with the reliability issue: each judge received smaller flights of wines, and we also judged fewer wines each day. I think it made for a better competition.

If you are interested in the results of the California State Fair Wine Competition, here is the place to go.

Next week, I judge the Michigan Wine and Spirits Competition. More on that after I judge.

Comments

Ron Sober

Fri, Jul 31, 2009 : 10:56 a.m.

Putnam...I agree 100%. I would be far likelier to trust the judgment of a 4-person panel in a wine competition than that of a single wine critic. Single palates are very subjective, but a 4-person panel is quite a bit more objective.

Ron Sober

Fri, Jul 31, 2009 : 10:52 a.m.

Joel...absolutely, I agree with you. That is why I indicated, above, that the study did say that while individual judgments may not be as consistent, panels of 4 (sometimes even 5) judges are very consistent. I would also like to say that the Michigan competition goes one step further. In instances where the panel's opinions diverge widely (e.g., one person gives a strong gold medal and the other judges give a significantly lower award), that wine can be sent to another panel of 4 judges for their opinion. I think this allows for an even more accurate outcome. Hats off to the Michigan competition for this.

putnam

Thu, Jul 30, 2009 : 1:24 p.m.

I suspect taste-based judgments of wine are 90 percent useful to 90 percent of consumers 90 percent of the time, which is something.

Joel Goldberg

Thu, Jul 30, 2009 : 12:50 p.m.

Ron, it's worth a mention that the Hodgson study is based on how well individual judges can replicate scores. At the Michigan Competition, each wine's result represents a consensus of the four judges at the table. In theory, multiple palates reaching agreement should yield better outcomes.

Jennifer Shikes Haines

Wed, Jul 29, 2009 : 5:02 a.m.

This was fascinating - especially the results of the study on individual judges versus panel scores, and how InterVin handles its judges. I wonder if the palate issue is slightly different in food competitions - would anyone who's judged food contests like to chime in on this?