Come Selection Sunday, CBS will pan to nervous coaches and teams huddled in their team-issued jumpsuits. Next, the screen will flash resume comparisons measuring key metrics, primarily NET ratings and Quad 1 records, as we await the bracket reveal that will determine the fates of the bubble teams.
Bubble teams are an obsession of bracketologists and fans alike in the days leading up to Selection Sunday. But what if all the suspense and intrigue could be settled as early as February 1st? Or even January 1st?
Considering the NET’s prevalence in resume discussions, how responsive is the NET to new information? Can a team’s NET be drastically improved, or will it stubbornly stay put despite teams accruing more wins?
To answer those questions, I analyzed each team’s NET rating from three dates for the past four seasons1:
January 1st
February 1st
Selection Sunday
Does the NET move?
While there is a cardinal component to the NET (the raw efficiency rating2), the most commonly cited and used NET metric is the ordinal ranking. As such, there will always be a team ranked first with spots all the way to the team ranked last. Any movement is zero sum — one team’s rise means another team’s fall.
To measure change in NET rankings, we look at the absolute value of any changes between the dates. If a team starts at 50, a jump to 45 and a fall to 55 would both measure as a change of 5.
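As a quick sketch of that bookkeeping (the column names and values here are hypothetical, not the actual dataset), the change metric is just the absolute difference between two ranking snapshots:

```python
import pandas as pd

# Hypothetical snapshots: each team's NET ranking on the two dates being compared
net = pd.DataFrame({
    "team": ["Team A", "Team B", "Team C"],
    "net_jan1": [50, 120, 8],
    "net_selection_sunday": [45, 155, 12],
})

# Absolute change treats a 5-spot rise and a 5-spot fall identically
net["abs_change"] = (net["net_selection_sunday"] - net["net_jan1"]).abs()

print(net["abs_change"].mean(), net["abs_change"].median())
print((net["abs_change"] < 10).mean())  # share of teams moving fewer than 10 spots
```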
Between January 1 and Selection Sunday, NETs can change appreciably.
On average across all D1 teams, NETs change by 29, with a median change of 21. However, much of the movement is at the tails, as ~28% of teams have a change of less than 10, and ~49% of teams have a change of less than 20.
This is an ACC blog, and some of the NET movement could be from the many, many mid-major teams. Narrowing the purview down to only Power 6 conferences3 shows that the NET moves even less for the major conferences, with an average change of 21 and a median of 14.
Between February 1 and Selection Sunday, NET rankings are more stable, with an average change of 16 and a median change of 11.
More than 44% of teams have a change of less than 10 in their NET ranking in the final 6 weeks of the season. And, once again, P6 teams have less movement, with an average change of 12 and a median of 8.
A change of +/- 10 in a team’s NET could be enough to move them off the bubble and into the tournament, or enough to change their seed line by a seed or two. The changes also mean the Quad 1 through Quad 4 mapping of wins is a moving target, too.
But the general stability means that we already know how teams are likely to stack up even before all the games are decided.
Projecting a team’s NET ranking
If we know a team’s NET as of February 1 and their win-loss record to close out the regular season, how well can we project the team’s NET as of Selection Sunday? Given the results outlined above, even just copying over their NET as of February 1 would be roughly accurate.
Of course, we can get more sophisticated than that, so I built a projection model4 to predict a team’s NET based on the following inputs (a rough sketch of the approach follows the list):
A team’s NET as of Feb. 1
A team’s number of wins and losses between Feb. 1 and Selection Sunday
The average NET rating of the team’s conference as of Feb. 1
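Here is a minimal sketch of that setup, using hypothetical toy data and column names; per the footnote, the idea is to regress the Selection Sunday ranking on the Feb. 1 inputs and then re-rank the raw estimates to enforce a 1-through-n order:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

features = ["net_feb1", "wins_after_feb1", "losses_after_feb1", "conf_avg_net_feb1"]

# Hypothetical historical data: prior seasons with the inputs and the
# Selection Sunday ranking we want to learn to project
train = pd.DataFrame({
    "net_feb1":             [12, 45, 80, 150, 210],
    "wins_after_feb1":      [9, 7, 6, 5, 3],
    "losses_after_feb1":    [2, 4, 5, 6, 8],
    "conf_avg_net_feb1":    [40, 60, 75, 110, 160],
    "net_selection_sunday": [10, 42, 85, 160, 225],
})

# Regress the Selection Sunday ranking on the Feb. 1 inputs
model = LinearRegression().fit(train[features], train["net_selection_sunday"])

# Project the current season, then re-rank the raw estimates into a 1-through-n stack
current = train[features].copy()  # stand-in for the current season's Feb. 1 snapshot
current["raw_estimate"] = model.predict(current[features])
current["projected_net"] = current["raw_estimate"].rank(method="first").astype(int)
```

The final rank() step matters because the NET is ordinal and zero-sum: raw regression estimates can bunch up or drift outside the 1-to-n range, so forcing them back into a stack rank keeps the projection on the same scale as the actual rankings.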
The projections were quite accurate, with an average error5 of 10, more than a third of predictions within +/- 5 of the true NET rankings, and an R-squared value of 0.98. The accurate results are not some sort of modeling breakthrough, however.
Mechanically, the NET takes in new information but weighs each game the same. In early January, when teams have played ~10 games, each incremental game accounts for ~10% of the NET’s information. Later in the year, each new game carries less weight and works against a larger body of existing evidence, so an outcome would have to be particularly incongruous with prior results to have an outsized impact on the season-long ratings.
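A toy illustration of that dilution, treating the rating as a simple running average of per-game results (a big simplification of the actual NET math):

```python
# Toy model: a rating that is just the average of per-game results.
# The same surprising result moves the average far less late in the season.
def updated_rating(current_rating: float, games_played: int, new_result: float) -> float:
    return (current_rating * games_played + new_result) / (games_played + 1)

print(updated_rating(10.0, 10, 0.0))  # ~9.1: one game is ~9% of the evidence in January
print(updated_rating(10.0, 30, 0.0))  # ~9.7: one game is ~3% of the evidence in March
```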
The results also aren’t surprising because the later NET ratings include all the same information as the earlier ratings. We’re adding some new information via the win-loss components, but it matters which games a team wins and loses (wins against better-rated teams will move the needle), and that level of depth is missing from the model. However, each team’s earlier results are already baked in via the original Feb. 1 NET ranking, so the model already has many of the answers to the test when projecting the final NET rating.
While the results aren’t particularly surprising, the implications could be material.
Implications of NET stability
There are two possible explanations and one major implication of inert NET rankings:
Explanation 1: The NET might actually be good, able to properly sort teams without a full-season’s worth of data
Explanation 2: The NET is flawed, as it’s not responsive enough to new results
Implication: If coaches believe the NET is inert, they might try to “game” the system
First, the NET’s purpose is to provide a simple, single way to compare across the wide range of conferences and competition in D1 basketball. How does a 20-win team in the ACC compare to a 20-win team in the SEC and a 20-win team in the Missouri Valley? The NET helps normalize to make those comparisons easier.
The stability between the date milestones could suggest that the NET is succeeding at its purpose, even without a full slate of data. Obviously, injuries, experience and the inherent variability of relying on 20-year-olds will result in plenty of random outcomes and within-season changes. Hot starts can fade into mediocre finishes just as much as slow starts can pick up steam throughout the season. Those contingencies aside, even before conference play, the NET has largely made up its mind on the ordinal rankings.
Of course, the counterargument is that the NET could be biased, so the lack of movement points to a self-reinforcing calculation: the NET thinks a team is good now because it thought it was good earlier. Validating the NET’s actual “accuracy” would require some sort of “true” measure of each team’s strength, which clearly doesn’t exist since that’s the purpose for which the NET was created.
Regardless of whether this analysis leads you to believe the NET is accurate or flawed, what coaches believe about the NET is most important.
In-season, these findings don’t offer much insight for coaches. Telling a coach of a bubble team to “win more games, especially the Quad 1 games” is not particularly illuminating advice.
However, if coaches believe they can “game” the NET — scheduling easy non-conference games to tally impressive efficiency marks — their actions could have ripple effects for scheduling and competition.
This isn’t blogger paranoia, either. Last season, Clemson coach Brad Brownell explicitly called out the Big 12 conference for “manipulating” the NET by creaming weak (NET 300+) non-conference opponents.
While margin of victory is not included in the NET’s inputs, blowing out bad teams will help efficiency ratings, as you’ll score a lot on your possessions and allow fewer points on defensive possessions. The NET does attempt to account for the quality of your opponent, however.
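For reference, efficiency here just means points per possession, usually scaled to 100 possessions. This simplified sketch ignores the opponent and venue adjustments the NET applies, but it shows why a blowout pads the raw numbers:

```python
def efficiency(points: float, possessions: float) -> float:
    # Points per 100 possessions, the standard efficiency scale
    return 100 * points / possessions

# A lopsided win over an overmatched opponent on 70 possessions
off_eff = efficiency(95, 70)   # ~135.7 offensive efficiency
def_eff = efficiency(55, 70)   # ~78.6 defensive efficiency
print(off_eff - def_eff)       # ~57.1 raw net efficiency for that game
```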
If you believe teams could manipulate the NET with scheduling, the results shown above lay out step 2 of the plot:
Step 1: Secure good NET ratings in non-conference play
Step 2: In conference play, NET ratings will largely stay the same, so your end-of-season NET will position you well
For conferences with sterling NET rankings, every conference game appears (in the NET’s eyes) as a game between good teams, and the stability of the ranking means the end-of-season ranking will likely be just as sterling.
Take this year’s distribution of NET rankings in the ACC. Compared to the other Power conferences, ACC teams have fewer Quad 1 opportunities. Winning against Miami and BC — with NETs in the 200s — won’t look as impressive as the wins against teams from other conferences. So the ACC’s rankings stay low because the ACC’s rankings were already low, revealing the potential circular flaw in the NET.
As Archie Miller so helpfully (and hilariously) reminded us, every conference will, in aggregate, go .500 in league play, so the going-in NET rankings can matter as much as the outcomes.
If you’re sympathetic to the NET-is-good interpretation, then the NET likely can’t be gamed: it will see through any efforts to tip the scales and adjust accordingly for opponent quality. However, if coaches believe the NET is self-reinforcing, they could try to put their thumb on the scales by scheduling and running up gaudy results against bad teams.
After the Big East received only three NCAA tournament bids last season, the conference hinted (threatened?) at “strategizing” to earn more bids, saying “We will be working closely with our schools in the coming months to best position the Big East next year and to ensure that we continue to be represented in March Madness.”
I’m bearish on teams overcorrecting their schedule to manipulate their NET. Old school coaching would implore teams to schedule tough opponents to prepare for the rigors of the postseason. Fans and TV execs also appreciate the high-profile, early-season tilts like the Maui Invite. And winning games is almost always a solution to avoid bubble angst.
So maybe this is all for naught, and bubble teams will continue to scapegoat the nerds for keeping them out of the Big Dance. Or maybe NET manipulation will get added to the list of NIL, transfer portal and conference realignment in the ever-changing and hyper-controlled world of college sports.
A final note on efficiency
Efficiency is a handy evaluation metric, and it’s proven to be predictive of which team will win a game. However, I don’t love watching college basketball because it’s efficient.
The fun of the game is in the inefficient, the unexpected. While I love all basketball, I take far less interest in the NBA, in part because it feels like basketball optimized. It’s not just that the players are so talented, seemingly never missing an open shot. The players and coaches have solved the game in terms of strategy and effort. Threes are worth more than twos, so we’ll jack a bunch of them. We’ll take a foul here rather than give up a layup. End-of-quarter two-for-ones are a given. There’s no question that teams will play nearly perfectly, so the intrigue comes down to execution.
The NBA is so mechanical and clinical that the incredible feats of athleticism and strategy hardly feel exciting anymore. Just another day at the office. To personify the difference, the NBA is Tim Duncan while college basketball is Caleb Love.
In college basketball, zany stuff happens all the time. Bone-headed plays, the historically bad shooter getting hot, the historically great shooter getting cold (RJ Davis?), 16 seeds beating 1 seeds. College basketball is so much more volatile in a way that efficiency metrics obscure. Efficiency implies some sort of grand plan and process, a train chugging along. In reality, what draws me to college basketball is the train wreck, so it feels odd to rely so heavily on a metric at odds with the joy of the game.

Read more about how the NET is calculated here. This is already a long post, but I’ve added some commentary on “efficiency” metrics at the end, as well.
The Pac-12 still existed for much of this analysis, RIP
The modeling mechanics are slightly more nuanced. I use a regression to estimate the raw ranking based on the inputs, then rank the raw estimates to enforce a 1-through-n stack rank
“Error” here meaning the mean absolute error of the projected ranking vs the teams’ actual Selection Sunday NET ranking