
2011-12 DI Rutter Computer Rankings

Re: 2011-12 DI Rutter Computer Rankings

PWR, RPI, Rutter and KRACH do not factor in when games are played. So at the end of the season, the order in which games are played is not an issue. PWR used to have a "last 16 games" component, but that was eliminated three or four years ago.
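
For the curious, here's a toy sketch of why the order of play can't matter once the season is over: an RPI-style number is built entirely from aggregate counts (your winning percentage, your opponents', and their opponents'), so any reshuffling of the schedule produces the same inputs and the same output. The 25/50/25 weights and the little A-D "season" below are purely illustrative, not the actual NCAA formula.

Code:
import random

def rpi(games):
    """Illustrative RPI-like rating from a list of (winner, loser) results.
    Weights here are 25% own winning pct, 50% opponents', 25% opponents'
    opponents'; the real NCAA weights and adjustments differ, but the point
    is the same: only aggregate records enter, never the order of the games."""
    teams = {t for g in games for t in g}
    wins = {t: 0 for t in teams}
    opponents = {t: [] for t in teams}
    for w, l in games:
        wins[w] += 1
        opponents[w].append(l)
        opponents[l].append(w)
    wp = {t: wins[t] / len(opponents[t]) for t in teams}
    owp = {t: sum(wp[o] for o in opponents[t]) / len(opponents[t]) for t in teams}
    oowp = {t: sum(owp[o] for o in opponents[t]) / len(opponents[t]) for t in teams}
    return {t: 0.25 * wp[t] + 0.50 * owp[t] + 0.25 * oowp[t] for t in teams}

# Hypothetical results, listed as (winner, loser)
season = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "B"),
          ("A", "D"), ("D", "B"), ("C", "D"), ("A", "B")]
shuffled = season[:]
random.shuffle(shuffled)

r1, r2 = rpi(season), rpi(shuffled)
assert all(abs(r1[t] - r2[t]) < 1e-12 for t in r1)   # same ratings, any order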

As for Mercyhurst, I think they have a little wiggle room. If they win out, they are in. Every tie/loss is going to hurt their RPI, although the teams behind them in the PWR have lower RPIs, so they have some room for error (as long as a run by another team doesn't increase their RPI). Now, losing 1 out of 4 to a Robert Morris team that wins all their other games might actually help Mercyhurst. If RMU can become a team under consideration by winning a bunch of games, then Hurst going 3-1 against them will help them in the PWR. I don't think RMU making the top 12 is possible, so I think Mercyhurst could absorb one loss and still make it.

Well, they took their one loss tonight, 3-2 to RMU. If you look at the PWR Rankings, which I think are updated as of tonight, there are 5 teams from the East in the top-8 and 6 in the top-9. Will def make things interesting if BC doesn't win HE and Cornell doesn't win ECAC.
 
Re: 2011-12 DI Rutter Computer Rankings

Your point about the final season results appears valid, but part of my question was whether the bunching of games against the same opponents makes week-by-week computerized results more volatile, or whether the computer algorithms iron out any such volatility.

Yes, it can. If two teams play games against each other early, we know the relative strength of those two teams against each other. As they play more games against other opponents, their positions will be highly correlated in the short run, as the model attempts to rank the pair against the rest. But this only lasts for a few games, and early-season results always cause large swings, since one upset can cause big changes when the number of observations is small. As the number of games played increases, the rankings stabilize regardless of the early schedule. See this plot:

http://math.bd.psu.edu/faculty/rutter/D1_history/season_history.html
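
If you want to see the same stabilization numerically rather than from the plot, here's a rough, self-contained simulation (my own toy setup, not the Rutter model): a dozen made-up teams with hidden strengths play a random schedule, a KRACH/Bradley-Terry-style rating is refit after each week, and the total week-to-week movement in the rankings is printed. In a run like this, the movement is typically largest in the first few weeks and shrinks as games accumulate.

Code:
import random

def simple_ratings(record, teams, iters=100):
    """Compact Bradley-Terry / KRACH-style fixed point from an aggregate record
    {(winner, loser): count}. A stand-in for the Rutter model; the stabilization
    behaviour being illustrated is generic to this family of ratings."""
    r = {t: 1.0 for t in teams}
    for _ in range(iters):
        new = {}
        for i in teams:
            # half a "virtual win" against an average team keeps ratings positive
            # early in the season, when some teams are winless or unplayed
            wins = 0.5 + sum(record.get((i, j), 0) for j in teams)
            denom = 1.0 / (r[i] + 1.0) + sum(
                (record.get((i, j), 0) + record.get((j, i), 0)) / (r[i] + r[j])
                for j in teams if j != i)
            new[i] = wins / denom
        mean = sum(new.values()) / len(new)
        r = {t: v / mean for t, v in new.items()}
    return r

random.seed(1)
teams = list(range(12))
strength = {t: random.lognormvariate(0, 1) for t in teams}   # hidden "true" strengths

record = {}
prev_order = None
for week in range(1, 21):                      # 20 simulated weeks
    for _ in range(12):                        # ~2 games per team per week
        i, j = random.sample(teams, 2)
        p = strength[i] / (strength[i] + strength[j])
        w, l = (i, j) if random.random() < p else (j, i)
        record[(w, l)] = record.get((w, l), 0) + 1
    ratings = simple_ratings(record, teams)
    order = sorted(teams, key=lambda t: -ratings[t])
    if prev_order is not None:
        moved = sum(abs(order.index(t) - prev_order.index(t)) for t in teams)
        print(f"week {week:2d}: total ranking movement vs. last week = {moved}")
    prev_order = order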
 
Re: 2011-12 DI Rutter Computer Rankings

For games played through January 22, 2012

Code:
     Team             Rating
 1   Wisconsin        2.2116
 2   Minnesota        1.8291
 3   Cornell          1.6603
 4   North Dakota     1.1291
 5   UMD              1.0204
 6   Boston College   0.9222
 7   Mercyhurst       0.7987
 8   Harvard          0.7676
 9   Bemidji State    0.7103
10   Northeastern     0.6925
 
Re: 2011-12 DI Rutter Computer Rankings


Thanks, LakersFan
 
Re: 2011-12 DI Rutter Computer Rankings

For games played through January 29, 2012

Code:
     Team             Rating
 1   Wisconsin        2.2616
 2   Minnesota        1.8531
 3   Cornell          1.4586
 4   North Dakota     1.1020
 5   UMD              1.0411
 6   Boston College   0.8394
 7   Harvard          0.7826
 8   Mercyhurst       0.7664
 9   Bemidji State    0.7014
10   Dartmouth        0.6476
 
Re: 2011-12 DI Rutter Computer Rankings

For games played through February 19, 2012

Code:
     Team             Rating
 1   Wisconsin        2.0895
 2   Minnesota        1.7944
 3   Cornell          1.5729
 4   North Dakota     1.1516
 5   UMD              1.0067
 6   Boston College   0.8318
 7   Harvard          0.7389
 8   Northeastern     0.7342
 9   Bemidji State    0.7083
10   Mercyhurst       0.7076
 
Re: 2011-12 DI Rutter Computer Rankings

Final 2011-2012 Rankings

Teams that made the NCAA tournament are marked with a *.

Code:
     Team             Rating
 1   Wisconsin        2.0150 * (at-large)
 2   Minnesota        1.8757 *
 3   Cornell          1.4491 * (at-large)
 4   North Dakota     1.2101 * (at-large)
 5   UMD              1.1424
 6   Boston College   0.7880 * (at-large)
 7   Harvard          0.7311
 8   Boston Univ      0.6986 *
 9   Bemidji State    0.6839
10   Northeastern     0.6645
11   Mercyhurst       0.6620 * (at-large)
12   St. Lawrence     0.6358 *
13   Ohio State       0.6025

Never, to my knowledge, has a team ranked as low as Mercyhurst (in terms of Rutter and KRACH) been selected as an at-large team. This just demonstrates how the NCAA selection criteria (the PWR) and Rutter/KRACH differ: Mercyhurst's losses to teams outside the top 12 (Robert Morris, for example) had a small impact on the PWR, while Rutter and KRACH penalize those losses more heavily.

I went 13 teams deep to show the following:

Code:
     Team             25th Perc.   75th Perc.   Percent Top 8
 1   Wisconsin             1            2           100.0
 2   Minnesota             1            2           100.0
 3   Cornell               3            4            99.2
 4   North Dakota          4            5            96.1
 5   UMD                   4            6            93.1
 6   Boston College        6           10            55.2
 7   Harvard               7           12            45.3
 8   Bemidji State         7           12            38.8
 9   Boston Univ.          7           12            38.7
10   Mercyhurst            8           12            34.2
11   Northeastern          8           12            33.1
12   St. Lawrence          8           13            28.3
13   Ohio State            8           13            27.0

Since the Rutter rankings are based on a statistical model, I have the ability to report the uncertainty about the ratings and rankings. Based on simulations, I found the 1st quartile and 3rd quartile of each team's ranking. For example, Mercyhurst was ranked 8th or higher at least 25% of the time and 12th or lower at least 25% of the time. The last column is the percentage of simulations in which the team was ranked in the top 8.

I think this just shows how evenly matched the teams in positions 7-13 are, as they were all ranked in the top 8 at least 25% of the time. I think it also shows that UMD would be justified in saying the PWR did not do a good job of representing their performance this season.
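
For anyone curious how numbers like these get pulled out of a pile of simulations, here's a rough sketch of the bookkeeping (not the actual Rutter code; the fake_draw function below just manufactures noisy rankings, so its output will not match the table above): given one simulated ranking per draw, it reports each team's 25th and 75th percentile rank and the share of draws in which the team lands in the top 8.

Code:
import random

TEAMS = ["Wisconsin", "Minnesota", "Cornell", "North Dakota", "UMD",
         "Boston College", "Harvard", "Bemidji State", "Boston Univ.",
         "Mercyhurst", "Northeastern", "St. Lawrence", "Ohio State"]

random.seed(0)

def fake_draw():
    """Stand-in for one draw from the fitted rating model: jitter the listed
    order with noise and return a dict of team -> simulated rank (1 = best)."""
    noisy = sorted(TEAMS, key=lambda t: TEAMS.index(t) + random.gauss(0, 2))
    return {team: pos + 1 for pos, team in enumerate(noisy)}

draws = [fake_draw() for _ in range(10000)]

def percentile(sorted_vals, q):
    # nearest-rank percentile; plenty accurate for integer ranks
    idx = min(len(sorted_vals) - 1, int(q * len(sorted_vals)))
    return sorted_vals[idx]

print(f"{'Team':16s} 25th  75th  Pct Top 8")
for team in TEAMS:
    ranks = sorted(d[team] for d in draws)
    p25, p75 = percentile(ranks, 0.25), percentile(ranks, 0.75)
    top8 = sum(r <= 8 for r in ranks) / len(ranks)
    print(f"{team:16s} {p25:4d}  {p75:4d}  {100 * top8:8.1f}")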
 