Numbers Game: Road Race Rankings and Their Implications

Do Rankings of Races Matter?

Listings of Top 10 or Top 20 somethings can be fun to debate over a few beers: ten best quarterbacks of all time, ten best vacation spots, ten greatest rock songs, ten greatest movies, and so on. That includes footraces—road, trail, track, and now, mud. Often these ratings are done by panels of “experts” (journalists, trainers, coaches, and the like) whose qualifications consist mainly of a lot of experience in the field. Experience counts, but the subjectivity involved means that most of us are inclined to take these rankings with a grain or two of salt.

But does it really matter whether your race makes the cut to be considered?  Or, if you make the cut, where it ranks?

It does matter—if not to you personally, then to runners, the media, expo vendors, the semi-clueless nonrunning public, and, perhaps most importantly, the sponsors.  If you can put the “Top Twenty Best Marathons” feather in your cap, you might be able to loosen the wallets of those for whom prestige and name recognition are the deciding factors in choosing what event to support.

So, when a service comes along claiming to have compiled the definitive list of the “Best Races in America,” and says it built its ratings from actual participant votes, what do you make of it? Shrug it off as typical hype? What if their claim made the pages of Running USA’s online newsletter? Would you take a closer look?

Thinking it’s worth a closer look, I’ve toiled through a lot of detail below concerning the service claiming to have constructed the “definitive list.” Consider it an exercise in what we used to jocularly call LSD—Long Slow Distance (in running, a training fad that had its day before we discovered that mostly slow in training meant mostly slow in racing).

Does “definitive” mean definitive, or something a little more open-ended?

I’m Old School with the English language, which starts with pinning down what words mean as precisely as possible. (I mean in nonfiction, since we rightly give great flexibility to fiction writers.) Here’s the first definition of “definitive” from the Shorter Oxford English Dictionary: “Having the function or character of finality; decisive, conclusive, final; definite, fixed, finally settled, unconditional.”

Wow! Final! Unconditional! So, when someone asserts they have created a “definitive” list for public consumption, I consider that an extraordinary claim, and, as Carl Sagan used to say, extraordinary claims require extraordinary evidence.

If you think I’m too much into semantic quibbling, stay with me for a few more paragraphs. I think you’ll see that even when we soften the connotation of “definitive” to something more like “highly representative,” there are problems with the list we’re talking about.

To name names: BibRave, and the “definitive list,” the BibRave 100

A writeup in Endurance Sportswire announcing the awards ceremony for the BibRave 100 list (the ceremony was December 1st) calls BibRave “the new marketing solution for races and endurance brands.”

I’m confused. The outfit that is compiling the list is also marketing races? And to even be considered for the ranking, a race has to gather a large number of nominations. Unless I’m mistaken, there’s a conflict of interest here. For example, what better way to market a race than to encourage it to collect a boatload of nominations for the BibRave 100?

Let’s climb the ladder a race has to go up just to make the list and get ranked.

Getting to the Bottom Rung: Nominations. Scads of them.

For starters, you have to get heaps of nominations even to get considered. How many? You know there’s a cut, numbers-wise, because the FAQ says: “If your race isn’t on the list, it’s because not enough people nominated it for consideration.”

What counts as “enough” can’t be known until the nominations period closes.

What “people” are making the nominations? Answer, also on the FAQ: “Anyone! [exclamation point theirs] can nominate a race” by completing a form on the website for as many races as they want, although not any one race more than once.

Are they kidding? If “anyone!” means anyone, then a monk on a mountainside in the Himalayas could make a nomination, assuming he had access to the internet. So could a Russian oligarch, a deep-sea fisherman, or a bank robber (who probably runs). So might your family and friends, and their friends—you can see where this is going. You don’t even have to create a chain email; you can find someone to post on a social media platform about how deserving your race is of nominations. There’s no ceiling on the number of nominations a race with good marketing savvy could rack up.

You might get thwarted in places by some wet blankets reluctant to game the system, but there’s nothing in the rules to stop it.

Sheer numbers in the nominations process can work for you—or not, if you believe you should police yourself. You might abide by an implicit honor system, but who else will?

I’d bet that few readers of this blog would resort to pumping up their numbers by recruiting just “anyone” for a nomination.  But there’s no rule that disallows it. The point is not that a lot of races would try to jack up their nomination numbers to ludicrous proportions, but that there’s no way to qualify nominators, and there’s no way to know in advance how many nominations will make the cut.  It’s a public relations free-for-all.

Second Rung: a “Running Industry survey score.”

Suppose you meet the magic quota for number of nominations. What’s next? Per the FAQ: “The most nominated races will then be rated based on a Running Industry survey score, which will be combined with a general runner vote.”

What’s the “Running Industry survey score?” It’s a little vague, but there’s a clue in the description of “The Running Industry Collective.” That is: “Folks from across the running industry – running store retailers, race directors (BibRave clients and non-clients alike), race service providers (bib makers, chip timers, medal companies, t-shirt companies, registration companies, etc.), running brands, and more.”

What’s that? “BibRave clients and non-clients alike?” Ping! That’s the sound of touching the second tripwire for conflict of interest. The service (BibRave) that is compiling the list has clients and prospective clients in the bunch that is evaluating the candidate races.

Third Rung: “a general runner vote.”

It’s not quite clear if it is a poll of “general runners,” or a “general” poll of ordinary runners. It’s so vague it doesn’t make much difference, so we’ll settle on “general runner.” If you know what a “general runner” is, you are, if not omniscient, a far more acute observer of the road racing scene than I am. My guess: the “general runners” are from the BibRave community, a network of runners who exchange information on races. If so, it is a self-selected sample, and any statistician will tell you that a self-selected sample is probably not representative of a population, in this case the population of participants in all road races in America. That’s even if the sample consists of those with more than one race under their belt who talk to each other online, as you’d expect of the BibRave community. But until we have specifics as to what a “general runner” is, we really can’t know.
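
To make the self-selection point concrete, here is a minimal sketch in Python using entirely made-up numbers (nothing below comes from BibRave): if the runners most likely to vote are also the most enthusiastic, the sample’s average opinion drifts well above that of the field as a whole.

```python
# Hypothetical illustration of self-selection bias. All numbers are invented.
import random

random.seed(1)

# Imaginary population: 10,000 finishers rate a race from 1 to 5,
# averaging about 3.5.
population = [random.choice([2, 3, 3, 4, 4, 5]) for _ in range(10_000)]

# Self-selected voters: assume a runner's chance of bothering to vote
# rises with how much they liked the race (purely illustrative).
voters = [rating for rating in population if random.random() < 0.05 * rating]

print(f"Population mean rating:    {sum(population) / len(population):.2f}")
print(f"Self-selected mean rating: {sum(voters) / len(voters):.2f} (n={len(voters)})")
```

The voters’ average comes out noticeably higher than the field’s, which is all a self-selected sample needs to do to stop being representative.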

At this point, we seem to have moved away from the concept of “definitive” to—what? How about “highly representative of runner opinion?”

Are the Top Picks Representative? Let’s Take a Look

BibRave divided the rankings into four categories by distance: Marathon, Half Marathon, 10K, and 5K, and apologized for omitting other distances. There are so many races! Indeed.

In limiting the rankings to those four distances, they’ve left out some of the most popular and prestigious races in America, all of which sell out and/or have lotteries. Here are some that immediately sprang to mind, listed in alphabetical order: Alaska Airlines Bay to Breakers 12K; Blue Cross Broad Street Run 10 Mile; Credit Union Cherry Blossom 10 Mile; Gate River Run 15K; Lilac Bloomsday 12K; Manchester Road Race 4.78 Mile; New Balance Falmouth Road Race 7 Mile; Utica Boilermaker 15K; Wharf to Wharf 6 Mile.

I’d think a “definitive” list of all races in the USA would have a few of those. They could have a special category: “other distances from 4.78 miles to 15K.” Would that be so hard?

But let’s give BibRave a shot with the four distances they have chosen.  I looked most closely at the marathon list (top 20, with the first 5 ranked and the rest in alphabetical order—good call there!). There are twenty marathons listed, which include those you’d expect to rise to the top: Bank of America Chicago, Boston, California International, Medtronic Twin Cities, TCS New York City.

AND THE WINNER IS . . .

The Missoula Marathon, in Missoula, MT, tops the list. It usually has a field of about 850 runners. This seems counterintuitive, since most of the other races on the list are in or near big cities and have large turnouts. Say that 3% of the runners in Boston would vote for it as best marathon: that comes to about 900 votes. For Missoula to match that, more than 100% of its field would have to vote for it. A similar ratio (3% versus 100%) would roughly hold even if you counted all the runners who have participated in either event over, say, ten years. It sounds as if “general runner” might include folks who have never run in these races—the BibRave community, having heard of the race second-hand, could account for a lot of those.

Maybe the poll is comparing votes to the size of the race. That sounds fair. If Missoula got 425 votes out of a field of 850, that’s 50%, and you wouldn’t expect Boston to come anywhere near that. (The same goes for the expanded pool—i.e. all runners in both events in the last ten years.)
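
For readers who like the arithmetic spelled out, here is a quick back-of-the-envelope sketch in Python. The Boston field size of roughly 30,000 is my assumption (it is the figure that makes 3% come out to about 900); the 850, 425, and 50% numbers are the ones used above.

```python
# Back-of-the-envelope vote arithmetic. Field sizes are rough assumptions.
boston_field = 30_000   # assumed, not from BibRave
missoula_field = 850    # approximate field size cited above

# Raw-vote comparison: 3% of Boston already exceeds Missoula's entire field.
boston_votes = 0.03 * boston_field                     # about 900
missoula_share_needed = boston_votes / missoula_field  # over 100% of its field
print(f"3% of Boston is about {boston_votes:.0f} votes; "
      f"Missoula would need {missoula_share_needed:.0%} of its field to match.")

# Proportional comparison: votes as a share of each race's own field.
missoula_votes = 425   # hypothetical, as in the text
print(f"Missoula: {missoula_votes / missoula_field:.0%} of its field; "
      f"Boston would need {0.5 * boston_field:,.0f} votes to reach the same 50%.")
```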

There’s no doubt that Missoula is an excellent race.  It was once named as the top marathon in the USA by Runner’s World. If you go online to marathonguide.com, you’ll see it gets five stars overall, and heaps of glowing runner reviews. If a third party with no stake in the results were brought in to rank the races (not gonna happen), Missoula might very well earn top spot.

As it stands, we’d like to know more about the methodology, the absolute numbers, the percentages, and the sample (i.e. “general runners”) to give credence to this survey. I went down the list and found a few outliers similar to Missoula, but I don’t want to get that deep into the weeds here.

Nominations in the “THOUSANDS.”  From where?

You might wonder where all the nominations for Missoula came from. The BibRave website says Missoula got “THOUSANDS” of nominations—how many, it doesn’t say, but you can interpret upper-case THOUSANDS as meaning: lots. If you got half the runners who have run Missoula within the last five years, you’d get 0.5 x 850 x 5 = 2,125. That assumes repeaters don’t exist—that is, a fresh field every year. With the repeaters you’d expect from such a fine race, that number might get cut in half. Even the no-repeater figure (the larger one) gets you to “THOUSANDS” only if you call barely 2,000 THOUSANDS.
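
Here is that ceiling worked out as a quick sketch, using the same rough assumptions as above (a field of about 850, a five-year window, and half of past finishers sending in a nomination).

```python
# Rough ceiling on nominations from past Missoula finishers. All inputs are
# the approximate figures discussed above, not BibRave data.
field_size = 850        # typical field
years = 5               # look-back window
nomination_rate = 0.5   # assume half of past finishers nominate

# Upper bound: a completely fresh field every year, no repeat runners.
no_repeat_ceiling = nomination_rate * field_size * years   # 2,125

# With heavy repeat participation, perhaps half of that pool is unique.
with_repeats_estimate = no_repeat_ceiling / 2              # about 1,063

print(f"Ceiling with no repeaters:     {no_repeat_ceiling:,.0f} nominations")
print(f"Rough estimate with repeaters: {with_repeats_estimate:,.0f} nominations")
```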

So perhaps the nominations came from an expanded pool: everyone who has ever heard of the Missoula Marathon.

But there’s a simpler explanation. There are two big clues as to how Missoula made the nominations cut—whatever it was.

Clue Number One: As we’ve seen, “Anyone!” can send in a nomination.

Clue Number Two is BibRave’s description of the winner: it stresses community, and how local runners wrote in about “how this event has changed their community, inspiring thousands to become active.”

Missoula has a population of 72,000.  Put together Clue One and Clue Two and you don’t have to be Hercule Poirot to pretty confidently infer where THOUSANDS of nominations came from.

Also, as to the voting on candidate races, we are not given just how many “general runners” voted for what. Why is BibRave so silent on the actual numbers?

Is it really as bad as I’m making it out to be? Maybe not, but it’s not transparent, and not fair

My assessment of the BibRave lists is pretty harsh. Maybe too much so. I’ve grown cantankerous at age 71, and inclined to scoff at overblown claims where marketing generates a lot of hoopla without necessarily a lot of substance. Having taken a statistics course and learned additional stats tricks on my own, I know the first questions to ask about a claim involving statistics: what is the methodology, and what is the sample? If you don’t even know the raw numbers, much less the methodology and the sample, you don’t know much.

I do concede that BibRave is an interesting and valuable contribution to the running scene. I give them credit for trying to build a tool that can help runners discuss and choose their races worldwide and instantaneously, or even just argue about them for fun. But we can say their list is not a “definitive” picture of the best road races in the USA.  It excludes a number of wildly popular races because they’re the wrong distance. Not to say that they are acting in bad faith, but there are too many uncertainties in the method and the sample to give the list solid credence.

Or even much credence. I happen to know that one of the most inarguably successful half marathons did not even make the top 20 half marathon list. (I admit to bias, but not to being wrong.)

BTW, if you make the list you get an emblem to put on your website, sort of like an EnergyStar label. The difference is that EnergyStar uses actual, lab-tested numbers.

Once more, the nominations and their implications for a race’s image

First, there are no qualifications asked of nominators—so long as “anyone” means anyone.

Next, lack of transparency: if your race misses out on consideration because your nominations fall below an unknown threshold, just too bad. You could have the best race in the world, but if you don’t find a way to generate big nomination numbers, you don’t even get to the first rung of the ladder.

What do you tell prospective sponsors who may also be considering a race across town that takes place the preceding weekend, when the other race makes the cut, and you don’t? All because you didn’t play the nominations game cleverly enough–a game that may have little to do with the quality of your event. A listing can make a splash with local media (one of the best races in America!), raising the profile of the event and pleasing sponsors even more.

This is mainly a problem for small races, where a sponsor can make the difference between continuing the event at all, or not. My opinion is that the BibRave rating system closes off options for many of these races.

I don’t have a last word here.  Anyway, you may have already had your fill.

Thanks for completing this marathon post.
