Some Thoughts on Difficulty Ratings

The Ruler Climbing Guidebook

The ideas below appear in the Introduction chapter of Rock Climbing: Minnesota and Wisconsin; they are presented here in more detail.

Difficulty ratings should be:

  • an estimate of the difficulty of the climb for an ‘average person’ on an ‘average’ day
  • consistent within a particular climbing area, and
  • consistent among different areas.

There are many reasons why these goals are difficult to achieve.

Historical inertia. Many of these routes were originally rated when the Yosemite Decimal System (YDS) was capped at 5.9. This doesn’t mean the hardest climbs of that era would be rated 5.9 today; it means that the hardest climbs (which we might rate 5.11 or higher today) were rated 5.9 at the time. As a result, easier climbs were also rated lower than they would be today. Overcoming this historical inertia is difficult.

Geographic differences in standards. Historically, some areas applied ratings differently: a 5.8 in one area might be a 5.7 in another.

Lead vs. toprope. Except for sport areas such as Red Wing and Willow River, almost all of the routes in this guide are generally toproped. This strongly affects the ratings (even setting aside protection issues).

Type of rock. Anorthosite, rhyolite, basalt, dolomite, and quartzite all climb differently and all are covered in this guide. Visiting an unfamiliar area can make the ratings seem too low.

On-sight vs. rehearsed. Most of these crags are short, and climbers tend to repeat the same routes over and over. The local ratings tend to reflect this familiarity, and many climbs have been rated well below their on-sight difficulty.

Indoor climbing. Indoor ratings do not relate to outdoor ratings in any meaningful way, other than that lower-rated climbs should be easier than higher-rated ones.

Style and ethics. There are all sorts of ways to lessen the difficulty of a climb through your behavior. Rehearsal, hanging, tick marks, and so on may lead you to rate a climb easier than would a climber making a no-falls, on-sight ascent.

Morphometrics. Your height, wingspan, finger length, hand size, and so on differ from other climbers’. A 5.8 hand crack for you might be a 5.10 offwidth or a 5.9 crappy finger stack for someone else.

If you’ve read this far, you can see that applying a rating to 1000 routes on ten different crags is no simple task. In the Second Edition, the Area Introductions include a statement on the assumptions made when assigning ratings. I’m guessing that many climbers do not read these sections.

Safety: The Central Issue

A primary use of difficulty ratings is to keep climbers safe. The most conservative rating will be accurate for the riskiest (roped) climbing style: an on-sight, unrehearsed lead, assuming good protection. Rating a climb 5.8 when it is really 5.10a has no real consequences for the toprope climber, but it could be catastrophic for a new 5.8 leader making an on-sight attempt.

Another safety issue is protection. I will not provide protection ratings, as most of these climbs can be inspected from the base or on rappel (e.g., Palisade Head). The ability to place protection is an acquired skill, and just because one climber can protect a route safely does not mean that you can.

Summary

With these concepts in mind, ratings should

  • be an estimate of the difficulty of the climb for an ‘average person’ on an ‘average’ day
  • be consistent within a particular climbing area
  • be consistent among different areas, and
  • assume an on-sight, unrehearsed lead (with good protection).

It is difficult, in fact impossible, to accomplish all of these goals. We can approach these ideals by gathering as many opinions as possible to average out differences due to style, experience, and the other factors listed above. And since a single broken hold can change the rating of a climb, the final responsibility will always lie with the climber on the route.