
Division I Rutter Rankings for 2013-2014

Re: Division I Rutter Rankings for 2013-2014

saying Buffalo has a 54% chance to win IS making a prediction

Sort of, but not in the sense that Lakers Fan means. A prediction generally means a point estimate. In the above case, the prediction is that Buffalo will win, as that is the most likely outcome. The 54% describes the distribution of possible outcomes, not the prediction itself. So when he draws the distinction between saying that Buffalo will win and giving the distribution, he's actually emphasizing the element that is not a prediction.
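
To make the distinction concrete, here's a toy sketch in Python (the 54% is the figure quoted above; everything else is just illustration):

Code:
# The probability assessment is the whole distribution over outcomes;
# the "prediction" is just the single most likely outcome drawn from it.
p_buffalo = 0.54
distribution = {"Buffalo wins": p_buffalo, "Buffalo loses": 1 - p_buffalo}

prediction = max(distribution, key=distribution.get)
print(prediction)    # Buffalo wins  <- the point estimate
print(distribution)  # the distribution itself carries the uncertainty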
 
Re: Division I Rutter Rankings for 2013-2014

When I hear the weatherman say there's a 70% chance of rain, is that a prediction? If not, they are never wrong!

(but maybe that's part of the science)

Lakersfan or Grant can probably answer this question for you better than I can given their math backgrounds. But for dummies like myself there's a really fantastic chapter about weather forecasting in Nate Silver's The Signal and the Noise.
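
One thing I did take away from that chapter, if I remember it right: probabilistic forecasts do get judged right or wrong in aggregate, via calibration. Over many "70%" days, it should rain on about 70% of them. A rough sketch (the outcomes below are invented):

Code:
# Calibration check: of all days forecast at 70%, how often did it rain?
forecasts = [0.7] * 10
outcomes = [1, 1, 0, 1, 1, 1, 0, 1, 0, 1]  # 1 = it rained (made-up data)
print(sum(outcomes) / len(outcomes))       # 0.7 -> well calibrated

# The Brier score penalizes badly calibrated probabilities (lower is better).
brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)
print(round(brier, 3))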
 
Re: Division I Rutter Rankings for 2013-2014

saying Buffalo has a 54% chance to win IS making a prediction

No, that is a probability assessment. A prediction is to declare who in your opinion will win the game, no ifs, ands, or buts. If there is a line involved, then the prediction becomes whether a team will beat the line or not, again no ifs, ands, or buts; either you are in or you are out.
 
Re: Division I Rutter Rankings for 2013-2014

maybe it's a Canada - USA thing?
you know ....
flavour -> flavor

in any event, what does the statistical model say of last week's UND-OSU game
in the coaches poll they may not be all that impressed by OSU's defeat of UND
realizing it may be due to a 5-minute major (who can predict ... I mean determine the probability of that?)
but if my understanding of the Rutter is correct, a win is a win
 
Re: Division I Rutter Rankings for 2013-2014

For games played through October 20, 2013

Code:
     Team            Rating  Last Week
 1   Minnesota        3.81       1
 2   Cornell          1.83       6
 3   Wisconsin        1.80       4
 4   North Dakota     1.26       5
 5   Ohio State       1.07       8
 6   Harvard          1.04       7
 7   Boston College   0.92       3
 8   Clarkson         0.91       2
 9   Quinnipiac       0.86      NR
10   UMD              0.75       9
 
Re: Division I Rutter Rankings for 2013-2014

OK, a couple of points have been brought up that I am happy to address.

- The Bayesian Prior

There is nothing in my model that turns off the Bayesian prior after X number of games. That was a decision on my part based on "Bayesian Philosophy," if you will.

On the surface, this may seem troubling. But I think it also exposes a flaw in all ranking systems. We are trying to estimate a team's true ability (what I call rating) based on a limited number of observations in which there is "observation error" (hockey games are close, and the team with the better ability does not always win) and "process error" (a team's ability is not constant, be it due to injuries, things going on at school, etc.). So trying to estimate a team's true ability based on thirty or so binary responses in the presence of all this uncertainty is a difficult task.
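
To illustrate the kind of updating I mean, here is a toy grid version (this is NOT my actual model; the logistic win probability, the prior, and every number below are made up for illustration):

Code:
import numpy as np

# Estimate a team's latent ability on a grid. The prior is centered on last
# season's rating; each win/loss result nudges the posterior.
abilities = np.linspace(-3.0, 3.0, 601)
prior = np.exp(-0.5 * (abilities - 0.5) ** 2)  # prior centered at 0.5
prior /= prior.sum()

def update(posterior, opp_ability, won):
    """One Bayes update from a single binary game result."""
    p_win = 1.0 / (1.0 + np.exp(-(abilities - opp_ability)))
    posterior = posterior * (p_win if won else 1.0 - p_win)
    return posterior / posterior.sum()

post = prior
for opp, won in [(0.0, True), (1.2, False), (-0.5, True)]:  # invented games
    post = update(post, opp, won)

print("posterior mean ability:", (abilities * post).sum())

With thirty-odd games the data dominate and the prior's pull becomes small, which is exactly why estimating true ability through all that observation and process error is a difficult task.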

Using 2012-2013 rankings as a tie-breaker even at the end of the 2013-2014 season is pretty strange.

Would your rankings differ much from KRACH if you ditched the last season stuff?
 
Re: Division I Rutter Rankings for 2013-2014

Using 2012-2013 rankings as a tie-breaker even at the end of the 2013-2014 season is pretty strange.
:confused: I'm not sure what you read that you condensed down to that sentence, but that isn't how I understand Rutter Rankings at all.
 
Re: Division I Rutter Rankings for 2013-2014

Using 2012-2013 rankings as a tie-breaker even at the end of the 2013-2014 season is pretty strange.

Would your rankings differ much from KRACH if you ditched the last season stuff?

Understand that the two teams would have to have performances from the current season that are identical down to several decimal places by the end of the year before the previous season's ratings would matter enough to influence the ranking one way or the other. Since in a system like Rutter or KRACH the numerical values of the ratings are more important (a lot more important) than the order in which the teams are spit out, one shouldn't really interpret this as the tie having been broken at all. Unfortunately, most text editors don't allow you to write two team names on top of each other, so one of them has to be ranked higher. But if the ratings are that close, you should think of them as equal.

Therefore, while KRACH and Rutter will differ from time to time, this feature of the latter has nothing to do with it.
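
If it helps, here's the shape of the arithmetic (purely hypothetical numbers and weighting, not the actual Rutter formula): if the prior counts for roughly one game's worth of evidence against thirty real ones, it can only separate teams whose current-season evidence is essentially identical.

Code:
# Prior acting like ~1 game of evidence against a 30-game season
# (a common shrinkage pattern; all values here are invented).
w = 1.0 / (1.0 + 30.0)
team_a = w * 1.9 + (1 - w) * 1.200  # last year 1.9, this year 1.200
team_b = w * 1.8 + (1 - w) * 1.200  # last year 1.8, this year 1.200
print(team_a - team_b)              # ~0.003 -- an ordering, not a real gap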
 
Re: Division I Rutter Rankings for 2013-2014

For what it's worth, a Bayesian approach to rankings more closely approximates human reasoning. Everyone has some kind of prior for how teams are ranked entering the season, and then we update those rankings appropriately based on observed results. The ranking system just makes that updating process mathematically consistent.

Also, note that when forecasting the NCAA tournament, you'd surely do much better to take into account information from past tournaments rather than focusing exclusively on current season results. I don't think this point should require much further explanation. The 2008 Frozen Four with two lower-seeded WCHA teams comes to mind.

The NCAA would, of course, hypocritically be against any rating system for selections that used past season information -- I gather they think it's unfair for current teams to be punished for past teams' shortcomings (just like how NCAA sanctions don't ever punish current players for past sins; they only punish the millionaire coach who jumps ship... or not :p ). Of course, humans on the selection committees for basketball and pollsters in football certainly are influenced by information from past seasons, far more than the influence of the prior in the Rutter rankings. If the teams who are #1 and #2 in preseason football go undefeated, pretty much no one else has a shot at the championship in the current system.
 
Re: Division I Rutter Rankings for 2013-2014

.... they only punish the millionaire coach who jumps ship... or not :p )

NCAA hockey seems to apply a different standard to women's hockey, where they appear to punish the coach at the other end of the pay scale. :p
 
Re: Division I Rutter Rankings for 2013-2014

The NCAA tournament is really a conclusion to the conference tournaments, which seed three of the teams in the NCAA tournament. The odd thing, though, is that the NCAA tournament essentially gives some teams a second chance, specifically the ones that did well during the regular season but bombed in the season-end conference tournament. Why should those teams get a second chance?

Note that many times these highly seeded second-chance teams lose right away in the quarterfinals, but the WCHA tournament champ has, with only one exception, gone on to win the NCAA championship. In that year (2002), an Olympic year, UMD lost 5 players to the Olympic team, but they returned in time for UMD to win the NCAA championship.
 
Re: Division I Rutter Rankings for 2013-2014

For what it's worth, a Bayesian approach to rankings more closely approximates human reasoning. Everyone has some kind of prior for how teams are ranked entering the season, and then we update those rankings appropriately based on observed results. The ranking system just makes that updating process mathematically consistent.

IOW, the coaches poll, minus the math

Note that, with the MN sweep of Wisconsin for example, both coaches realize it is simply a measure at this point in time; both teams will continue to improve/learn (or not) relative to the other, and what is important is how they play in Madison. And then in the WCHA tournament. The idea is to be playing better than the other at the end of the season, not at the beginning or even the middle.

The Rutter ranking does not account for that, as near as I can tell. Anyone?
 
Re: Division I Rutter Rankings for 2013-2014

Oh my God. I don't normally just come out and say something like this, but you are a clinically certified idiot.
 
Re: Division I Rutter Rankings for 2013-2014

Note that many times these highly seeded second-chance teams lose right away in the quarterfinals, but the WCHA tournament champ has, with only one exception, gone on to win the NCAA championship. In that year (2002), an Olympic year, UMD lost 5 players to the Olympic team, but they returned in time for UMD to win the NCAA championship.
Yes, the important thing is to be playing the best at the end of the season. But how exactly does any ranking system determine that when there haven't been any non-conference games for weeks? There used to be a "Last 16" component in the PairWise Rankings, but that typically rewarded a strong team in a weak conference. For the 2002 season that you mention, UMD had its Olympic players back for the WCHA tournament; it simply didn't play very well even with them.

I'm not sure if you have any point other than to disparage Rutter's ranking, and if that is the case, you don't have to visit this thread if you don't like the ranking system.
 
Re: Division I Rutter Rankings for 2013-2014

For the 2002 season that you mention, UMD had its Olympic players back for the WCHA tournament, it simply didn't play very well even with them.

I'm not sure if you have any point other than to disparage Rutter's ranking, and if that is the case, you don't have to visit this thread if you don't like the ranking system.

apparently you are missing the entire point

athletes learn; simply having the bodies back doesn't make them a better team, there no doubt has to be a transition period where the team relearns to play together again

and don't tell me to stay away simply because you can't figure out why I am posting here; maybe it is you who needs to take a break, watch, listen, and learn?
 
Re: Division I Rutter Rankings for 2013-2014

The point I'm trying to get from you is what does this have to do with the Rutter rankings, and how is he supposed to change his rankings to make them better, in your opinion? In October, there aren't a lot of conference tournament results to draw upon.
 
Re: Division I Rutter Rankings for 2013-2014

It should be noted that in other sports, including the NHL, the correlation between playing well at the end of the season and doing well in the playoffs is weaker than the correlation between the record over the entire season and doing well in the playoffs. This becomes especially true if you control for late changes to teams' rosters, such as pickups at the trade deadline and those who return from or are lost to injury. For sports that have some sort of projection system based upon roster (such as PECOTA or ZIPS for major league baseball; VUKOTA for the NHL; and so on), the best predictor is using the entire season's worth of data to make a projection based on the current roster. Being hot at the end of the season is not very useful as a metric going forward.

This is really just a subset of the fact that streaks, of almost any variety, make lousy projections. Winning streaks, or even hot periods, are obvious in retrospect, but they do little to tell you who is going to win the next game. Whether it's a team's record over its last ten games or a hitter's performance in his last 30 plate appearances, the predictive value is close to nil. The problems of small sample size overwhelm any real element of momentum that might be present.
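
A quick simulation makes the small-sample point (all parameters invented: one truly stronger team, records against a generic schedule, ties in record count as a miss):

Code:
import random

random.seed(1)

def win_pct(p_win, n_games):
    """Winning percentage over n_games against generic opponents."""
    return sum(random.random() < p_win for _ in range(n_games)) / n_games

trials = 10_000
last10_right = season_right = 0
for _ in range(trials):
    # Team A is truly stronger (60% vs 50% true win probability).
    if win_pct(0.60, 10) > win_pct(0.50, 10):
        last10_right += 1
    if win_pct(0.60, 40) > win_pct(0.50, 40):
        season_right += 1

print(f"last 10 games pick the stronger team: {last10_right / trials:.0%}")
print(f"full season picks the stronger team:  {season_right / trials:.0%}")

The 40-game record identifies the truly stronger team noticeably more often than the 10-game window does, and even then it is far from certain.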

As a general rule, we should be wary of looking at things in one sport and assuming that it's true in a different sport that's never really been studied. However, the above is true in every sport I am aware of where the question has been asked, so my default assumption is that it's probably true in NCAA Division I women's ice hockey as well. I'm not going to overturn that default assumption based upon the observation that on the three occasions that the WCHA regular season and playoff champions were different teams, it was the playoff champion that won.

This becomes even more true when you actually look at the three cases. In 2000-01, the regular season champion (Minnesota) did not get an invitation to the NCAA tournament, so the hypothesis that winning the conference tournament is a more valuable predictor wasn't even tested. So we really only have a sample size of two, not three.

In 2008-09, Minnesota finished one point ahead of Wisconsin in the regular season standings. What that means is that once the WCHA tournament had been played, Wisconsin actually had a better record in conference games than Minnesota did. Since the claim isn't that regular season conference games make a better predictor than playoff results (I suspect that they do, but since the ratings include all games, that really has nothing to do with the claim being made), but rather that using all games prior to the NCAA tournament makes a better predictor than the much smaller set of tournament games, there actually isn't a conflict here. The team with the better total season won the championship.

2011-12 is slightly different, but not by much. Wisconsin won the regular season title by two games, 23-3-2 to 21-5-2. After the conference tournament, the records were: Wisconsin 25-4-2; Minnesota 25-5-2. So Wisconsin still had a better record against WCHA opponents overall but the difference was only one game. At that point I would bet that the ratings systems would have suggested that a championship game between the two teams (which is what happened) was pretty much a tossup.

So when the examples are analyzed, pokechecker's hypothesis hangs on just one instance in which a game between two almost identical opponents was won by the team that had lost one more game than the other. That's . . . not exactly convincing.
 
Re: Division I Rutter Rankings for 2013-2014

I don't think saying "you don't have to visit this thread if you don't like the ranking system" is outright telling you to go away. I think it's telling you in a nice way that you are free to make a choice about visiting this thread. Choose wisely.
 
Re: Division I Rutter Rankings for 2013-2014

It should be noted that in other sports, including the NHL, the correlation between playing well at the end of the season and doing well in the playoffs is weaker than the correlation between the record over the entire season and doing well in the playoffs. ...

As a general rule, we should be wary of looking at things in one sport and assuming that it's true in a different sport that's never really been studied. ...

So when the examples are analyzed, pokechecker's hypothesis hangs on just one instance in which a game between two almost identical opponents was won by the team that had lost one more game than the other. That's . . . not exactly convincing.

as a general rule, you should also be wary of the validity of generalizing from bacteria to hockey players

and sorry, my hypothesis does not hang on one instance, your response is what hangs on one instance
you are trying to disprove by an exception to the rule
... except that your exception isn't even an exception
FAIL!

the data is clear, the winner of the WCHA tournament goes on to win the NCAA tournament
and more often than not, the WCHA season champ turns out to be the WCHA tournament champ
 