Division I Rutter Rankings for 2013-2014

LakersFan

Here are the first Rutter Rankings of the year, for games played through October 6, 2013.

Code:
Rank  Team            Rating
1     Minnesota       3.42
2     Wisconsin       1.66
3     Clarkson        1.58
4     North Dakota    1.48
5     Boston College  1.31
6     Cornell         1.30
7     Boston Univ.    1.29
8     Ohio State      1.15
9     Harvard         1.04
10    Minn. Duluth    0.90

Complete rankings can be found here: http://math.bd.psu.edu/faculty/rutter/WomensRankings.html

FAQ

Q. [Insert team here, like Cornell] hasn't played a game yet. How can they be ranked?
A. The power of Bayesian statistics. Each team has a prior rating based on last year, so early season rankings are possible. As more games are played, the effect of the prior is reduced.
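
If it helps to see the idea in miniature, here is a toy sketch of prior shrinkage. This is not my actual model; it only illustrates how a prior rating carries a team until results arrive, and the weights and numbers below are made up.

Code:
# Toy illustration of prior shrinkage (NOT the actual Rutter model):
# a team's estimate is a weighted blend of last season's rating and the
# average rating implied by this season's games.

def shrunken_rating(prior_rating, prior_weight, game_results):
    """prior_rating -- rating carried over from last season
    prior_weight -- how many games' worth of evidence the prior counts as
    game_results -- per-game rating values implied by this season's games"""
    n = len(game_results)
    if n == 0:
        return prior_rating  # no games yet: the ranking comes entirely from the prior
    season_avg = sum(game_results) / n
    return (prior_weight * prior_rating + n * season_avg) / (prior_weight + n)

print(shrunken_rating(1.30, prior_weight=4, game_results=[]))           # 1.30: prior only
print(shrunken_rating(1.30, prior_weight=4, game_results=[2.0, 1.8]))   # pulled toward this season
print(shrunken_rating(1.30, prior_weight=4, game_results=[2.0] * 20))   # mostly this season's results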

Q. Is home ice advantage included?
A. No. Although I have estimates of home-ice advantage both for the entire division and for each team, they are not included in these rankings; in this respect, the system emulates the NCAA criteria.

Q. Is margin of victory (MOV) included?
A. No, just wins, losses, and ties. Again, the NCAA doesn't use margin of victory, and since the games are low-scoring, I haven't found a satisfactory way to include MOV.

Q. How do you include ties?
A. If you look at the web page, you will see how ties are included. In short, I estimate a tie region, so the probability of a tie is larger when the teams are closely rated. No "a tie is half a win, half a loss" simplification here.
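
For intuition only, here is one simple way to get a tie probability that grows as two ratings get closer: put a "tie region" around an even game on the rating-difference scale. My actual formulation is the one described on the web page and may differ; the tie_width value below is made up, and the ratings are just taken from the table above.

Code:
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def outcome_probabilities(rating_a, rating_b, tie_width=0.5):
    """Win/tie/loss probabilities for A vs B under a simple 'tie region'
    model: the game is a tie when the latent outcome (rating difference
    plus logistic noise) lands within tie_width of zero."""
    diff = rating_a - rating_b
    p_a_wins = 1.0 - logistic(tie_width - diff)
    p_b_wins = logistic(-tie_width - diff)
    p_tie = 1.0 - p_a_wins - p_b_wins
    return p_a_wins, p_tie, p_b_wins

# Closely rated teams get a sizeable tie probability...
print(outcome_probabilities(1.31, 1.30))
# ...while a big mismatch leaves very little room for a tie.
print(outcome_probabilities(3.42, 0.90))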

Q. How does your method compare to RPI?
A. The two are very different. I think my system (and KRACH) is a much better reflection of the quality of teams, since statistical models are used as opposed to arbitrary algebra. But I am biased. Someday, I will do a really complete comparison of the methods.
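
To give a concrete sense of what I mean by a statistical model rather than arbitrary algebra: KRACH is a Bradley-Terry model, and Bradley-Terry ratings can be computed with a short fixed-point iteration like the sketch below. Note that this sketch counts a tie as half a win (the usual KRACH convention, and exactly the simplification I avoid), and the schedule and results in it are invented.

Code:
# Minimal KRACH-style (Bradley-Terry) rating sketch with an invented schedule.
# wins[t]  -- win credit for team t (a tie counts as half a win here)
# games[t] -- how many times t played each opponent

def krach(wins, games, iterations=1000):
    ratings = {team: 1.0 for team in wins}
    for _ in range(iterations):
        new = {}
        for team in ratings:
            # Standard Bradley-Terry update: wins divided by the sum of
            # games / (own rating + opponent rating) over all opponents.
            denom = sum(n / (ratings[team] + ratings[opp])
                        for opp, n in games[team].items())
            new[team] = wins[team] / denom
        # Ratings are only determined up to a common factor, so rescale
        # each pass (average rating = 1) to keep the numbers stable.
        scale = len(new) / sum(new.values())
        ratings = {team: r * scale for team, r in new.items()}
    return ratings

wins = {"Minnesota": 3.5, "Wisconsin": 2.0, "Clarkson": 0.5}
games = {
    "Minnesota": {"Wisconsin": 2, "Clarkson": 2},
    "Wisconsin": {"Minnesota": 2, "Clarkson": 2},
    "Clarkson": {"Minnesota": 2, "Wisconsin": 2},
}
print(krach(wins, games))  # Minnesota comes out well ahead, Clarkson well behind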
 
Re: Division I Rutter Rankings for 2013-2014

For games played through October 13, 2013

Code:
Rank  Team            Rating  Last Week
1     Minnesota       3.6747  1
2     Clarkson        1.7047  2
3     Boston College  1.6400  5
4     Wisconsin       1.4748  3
5     North Dakota    1.4511  4
6     Cornell         1.3042  6
7     Harvard         1.0388  9
8     Ohio State      0.8789  8
9     UMD             0.8755  10
10    Boston Univ.    0.7337  7
 
Re: Division I Rutter Rankings for 2013-2014

so what ultimately is the point of your ratings?
to seed teams in the NCAA tourney?
predict outcomes of games?
conference winners?
????

"that's why we play the games"
 
Re: Division I Rutter Rankings for 2013-2014

so what ultimately is the point of your ratings?

I'll answer the troll. Lakersfan has been posting the Rutter rankings for years. He/She is a mathematician by trade. The Rutter rankings are an unbiased mathematical view of the strength of each team. There are several other such mathematical models, like the SLU one, for example, or the RPI.

Not sure when the Rutter rankings first appeared on this board, but I first noticed them back in the day when MC was one of the powerhouse teams often seeded in the top four. Maybe that was the motive for starting them back then.

At any rate, many on the board, yours truly included, really appreciate the effort by Lakersfan to post the weekly Rutter rankings and the associated tools in the links provided.
 
Re: Division I Rutter Rankings for 2013-2014

so what ultimately is the point of your ratings?

You didn't visit my website, did you? I've been doing this a long time.

1. As a statistician (professor and practicing), it gives me a chance to apply my day job to something I enjoy.
2. Given that very few games are on TV, it provides a way to rank teams using a fixed set of criteria and game results.
3. RPI stinks, and I think my way is better.
4. I also think my way is better than KRACH because of the way I deal with ties, but I would be happy to discuss that.
5. Estimates of uncertainty in a ranking system are important, and I provide those on my website.
6. It gives people who enjoy women's college hockey something to talk about.
7. It helped me get tenure :)
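
On point 5, here is a generic picture of what an uncertainty estimate conveys. This is not my actual calculation; it just bootstraps a plain points-per-game rating, with made-up results, to show how wide the interval is after only a few games.

Code:
import random

def bootstrap_interval(results, n_boot=10000, seed=0):
    """results: points per game (1 = win, 0.5 = tie, 0 = loss).
    Returns an approximate 95% interval for the points-per-game rating
    by resampling the games with replacement."""
    rng = random.Random(seed)
    n = len(results)
    estimates = []
    for _ in range(n_boot):
        sample = [rng.choice(results) for _ in range(n)]
        estimates.append(sum(sample) / n)
    estimates.sort()
    return estimates[int(0.025 * n_boot)], estimates[int(0.975 * n_boot)]

early = [1, 1, 0.5, 0]      # four games of results: a very wide interval
print(bootstrap_interval(early))
full_season = early * 8     # 32 games with the same mix: noticeably narrower
print(bootstrap_interval(full_season))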
 
Re: Division I Rutter Rankings for 2013-2014

Wow, you are really quite the little troll, aren't you?
Worried that you may lose the title?
You didn't visit my website, did you? I've been doing this a long time.

1. As a statistician (professor and practicing), it gives me a chance to apply my day job to something I enjoy.
I assumed that before looking at your site.
2. Given that very few games are on TV, it provides a way to rank teams using a fixed set of criteria and game results.
3. RPI stinks, and I think my way is better.
I agree with the former, which is my incentive for even participating in the forum this year.
4. I also think my way is better than KRACH because of the way I deal with ties, but I would be happy to discuss that.
Again, I assumed as much; nobody is geeky enough to do this just for the sake of doing it.
5. Estimates of uncertainty in a ranking system are important, and I provide those on my website.
6. It gives people who enjoy women's college hockey something to talk about.
as if there weren't enough already :)
7. It helped me get tenure :)
Congrats on that!

Although I assumed as much, I don't like to assume anything.
So do you update your statistical model to account for deviations between its predictions and actual game results?
Just wondering how serious you are about this, given that you have already received a reward: tenure.
 
Re: Division I Rutter Rankings for 2013-2014

6. It gives people who enjoy women's college hockey something to talk about.
I truly appreciate all the hours and dedication that you have put into this project over the years. :cool:
 
Re: Division I Rutter Rankings for 2013-2014

Although I assumed as much, I don't like to assume anything.
So do you update your statistical model to account for deviations between its predictions and actual game results?
Just wondering how serious you are about this, given that you have already received a reward: tenure.

If you are interested in learning about Rutter, rather than just harassing people, you could go to his website and find some answers there.
 
Re: Division I Rutter Rankings for 2013-2014

I just looked at the website for the first time. Very impressive, Laker Fan!!!! Clearly much, much better than RPI, which, if you look at the rankings today, is a complete joke. Of course the rankings will change each week with more statistical inputs, but as of right now, I'd say that your rankings look VERY good. My guess is that once the Ivies start playing, Harvard will jump up a bit, Dartmouth down... but that's just me. I'll be checking your site every week!
 
Re: Division I Rutter Rankings for 2013-2014

It should be obvious that any ranking system that attaches equal value to victories, whether they occur at the beginning of the season or the end, is inferior to one that puts more importance on victories toward the end of the season. Any fan knows it isn't how you play at the beginning; it's how you play at the end. The coaches' poll takes this into account. But their poll is just a seat-of-the-pants poll, so take a look at the following data:
Year   WCHA Season   WCHA Tournament   NCAA Tournament
2001   Minnesota     UMD               UMD
2002   Minnesota     Minnesota         UMD
2003   UMD           UMD               UMD
2004   Minnesota     Minnesota         Minnesota
2005   Minnesota     Minnesota         Minnesota
2006   Wisconsin     Wisconsin         Wisconsin
2007   Wisconsin     Wisconsin         Wisconsin
2008   UMD           UMD               UMD
2009   Minnesota     Wisconsin         Wisconsin
2010   UMD           UMD               UMD
2011   Wisconsin     Wisconsin         Wisconsin
2012   Wisconsin     Minnesota         Minnesota
2013   Minnesota     Minnesota         Minnesota

12 out of 13 times, the winner of the WCHA tournament predicts the eventual NCAA tournament winner. Only 9 out of 13 times does the WCHA regular-season champ predict the NCAA champ, and it adds nothing to the ability to predict the eventual winner over what the WCHA tournament champion does. The WCHA tournament is a measure of the teams at the end of the season. The regular-season championship is a measure of the entire season. Clearly the one that reflects the end of the season is more accurate.
What ranking method has better accuracy than this? Better, which ranking system accurately predicted that UMD, not Minnesota, would win the 2002 NCAA tourney while still accurately predicting every other year's winner?
The rankings may be fun, they may provide something to talk about, but in the end, are they accurate? Only six times have the rankings accurately predicted the eventual winner (less than by flipping a coin or by chance). Only once have the rankings predicted the order of finish for the top four teams (2005).
 
Re: Division I Rutter Rankings for 2013-2014

It should be obvious that any ranking system that attaches equal value to victories, whether they occur at the beginning of the season or the end, is inferior to one that puts more importance on victories toward the end of the season. ... Only once have the rankings predicted the order of finish for the top four teams (2005).

www.gifsforum.com/images/gif/did not read/grand/didnt_read_gif_kgk9.gif
 
Re: Division I Rutter Rankings for 2013-2014

Only six times have the rankings accurately predicted the eventual winner.
That is probably about the same number of times we had a repeat champ. Typically the previous year's champion is the poll leader at the start of the season.

BTW... that list is boring to look at. We need some new blood in that list. Maybe this will be the year. Go Eagles, Go Knights. :D
 
Re: Division I Rutter Rankings for 2013-2014

It should be obvious that any ranking system that attaches equal value to victories, whether they occur at the beginning of the season or the end, is inferior to one that puts more importance on victories toward the end of the season. ... Only once have the rankings predicted the order of finish for the top four teams (2005).

I'll bite.

1) Where are you finding the archived Rutter rankings? I don't see them on the website.
2) When you say the Rutter rankings didn't predict the national champion, which Rutter rankings are you referring to? One at the beginning of the season? One just before the Frozen Four? One updated after the championship game?
3) How is a ranking system supposed to give more weight to games that happen at the end of the season when we are in, like, week 3? The point of the RPI, as well as the Rutter Rankings, is to rank the teams *as they are performing now*.

An objective ranking system can give us insight into which teams might be poised to improve their record down the stretch; i.e., a team with a worse record that has faced a tougher schedule, or that has a high goal or shot differential, could reasonably be expected to do better as the season moves on. But it's not a crystal ball; it's more of a snapshot.
4) Some of the other points you made make me wonder if you took a high school math class. Do you understand how poor the odds are of accurately picking the finishing order of the top 4 in anything with more than, like, 6 contestants? Have you ever seen the odds for picking a trifecta or a superfecta at a horse race?

Also, the tournament thing. Ultimately, a single-elimination tournament is going to produce some sporadic and unpredictable results. If BC had gotten a lucky goal in overtime last year against Minnesota, they might have won the national championship, but no objective ranking system could say they were the best team as of November, December, January, February, etc.

Again, I don't know where you are finding the archived Rutter rankings, but to say "6 out of 13 have won the title, that's worse than flipping a coin" is either really misinformed or really disingenuous, because "win a national championship" and "not win a national championship" are not the only two results a team can have in a season.
 
Re: Division I Rutter Rankings for 2013-2014

First of all, not even trying to be a jerk, but I have not the slightest clue what point you're trying to get across here.

Second of all:
It should be obvious that...
The mark of someone who is not serious and is just trying to get someone to take the bait.

Third of all:
The rankings may be fun, they may provide something to talk about, but in the end, are they accurate?
If we had a method that predicted the outcome 100% of the time, we would not need to play the games, would we? The whole reason sports are interesting is that however true it may be that Team A is better than Team B, there is always a measure of uncertainty.

God, why am I doing this to myself? I know for a fact you aren't serious...

Fourthly,
Only six times have the rankings accurately predicted the eventual winner (less than by flipping a coin or by chance).
You realize the mathematical flaw you made here, right? You do know that if a ranking correctly picked the national champion out of a field of, what, 52?, almost half the time, that's pretty dod gamb good?

Finally,
Only once have the rankings predicted the order of finish for the top four teams (2005).
Because do you realize how hard it is to predict the finishing order of 4 roughly equal items in a set?

Even if:

1) You had JUST 4 teams playing women's hockey,
2) With each team twice as good as the next team (Team A = .53, Team 2 = .27, Team Gamma = .13, Team IV = .07 -- which is not even in the same STRATOSPHERE as reality)
3) Completely ignoring the fact that there are dozens of other teams in reality as opposed to this example!!


Even with ALL THAT, the chances of picking the finishing order of all four correctly by choosing best to worst are just one in five. One in five! With just a four-team league and each team being twice as good as the next best team!

And if you assume four roughly equal teams, that drops to one in TWENTY FIVE. Again, ignoring the fact that there are DOZENS of other teams not even being considered for this 'top 4' argument.
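
If you want to check those figures yourself, here is a quick sketch. I'm assuming the finishing order is drawn one place at a time in proportion to the strengths above (a Plackett-Luce-style calculation); under that assumption the lopsided four-team league comes out to roughly one in five, and four equal teams come out to exactly 1 in 24, which is about the one in twenty-five above.

Code:
def order_probability(strengths, order):
    """Probability that teams finish exactly in `order` when the winner is
    drawn in proportion to strength, then the next finisher from the
    remaining teams, and so on (a Plackett-Luce-style model)."""
    remaining = dict(strengths)
    prob = 1.0
    for team in order:
        prob *= remaining[team] / sum(remaining.values())
        del remaining[team]
    return prob

# Each team roughly twice as good as the next (the example above).
uneven = {"A": 0.53, "B": 0.27, "C": 0.13, "D": 0.07}
print(order_probability(uneven, ["A", "B", "C", "D"]))  # ~0.20, about 1 in 5

# Four roughly equal teams: all 4! = 24 orders are equally likely.
even = {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25}
print(order_probability(even, ["A", "B", "C", "D"]))    # ~0.042, i.e. 1 in 24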

Are you kidding me with this mathematical nonsense?

Yes, you must be, so I don't know why I'm doing this, I really don't, but it's an impressive bit of troll bait I'm running with here.
 
Re: Division I Rutter Rankings for 2013-2014

You realize the mathematical flaw you made here, right? ... Even if ...
1) You had JUST 4 teams playing women's hockey
2) With each team twice as good as the next team (Team A = .53, Team B = .27, Team C = .13, Team 4 = .07)

The chances of picking the finishing order of all four correctly by choosing best to worst are just one in five. One in five!

And if you assume four roughly equal teams, that drops to one in TWENTY FIVE.

Check your rep.
 
Re: Division I Rutter Rankings for 2013-2014

It should be obvious that any ranking system that attaches equal value to victories, whether they occur at the beginning of the season or the end, is inferior to one that puts more importance on victories toward the end of the season. Any fan knows it isn't how you play at the beginning; it's how you play at the end. The coaches' poll takes this into account.

Actually, this isn't obvious, and not all fans know this. You are conflating your personal opinion on what should be more important with objective fact. Believe it or not, they aren't the same. You assume that the team that wins the national championship is by definition the best, and then you let this assumption do all of your heavy lifting, putting a superficial structure of logic on top of it.

Personally, I attend games in the early season and think that any system that argues that they aren't just as important is obviously flawed. But I recognize that that's a value statement on my part.
 