Notre Dame Deserved to Be In
The process debate matters. But don't let it obscure how good this team actually was.
"Overwhelming shock and sadness. Like a collective feeling that we were all just punched in the stomach."
That was Notre Dame athletic director Pete Bevacqua, speaking to Yahoo Sports an hour after the CFP committee left his team out of the bracket. In the days that followed, Bevacqua called the weekly ranking shows a "farce," said the playoff was "stolen" from his players, and declared that the ACC had done "permanent damage" to its relationship with Notre Dame.
The frustration was real. After the committee ranked Notre Dame within the playoff field for four straight weeks—at one point eight spots ahead of Miami—it flipped them at the finish line, citing Miami's head-to-head victory from Labor Day weekend. The Irish, who'd won 10 straight to close the season, were suddenly on the outside looking in. They declined a bowl invitation rather than play what felt like a consolation game.
It was the first time in CFP history that a team was ranked inside the field in every weekly ranking, won every remaining game, and still didn't make the playoff.
Much of the conversation since has focused on process. Was the weekly ranking show misleading? Should head-to-head matter this much? Did the ACC's lobbying campaign cross a line? These are important questions, and we'll get to them.
But somewhere in the noise about politics and process, there's a simpler question that deserves an answer: Was Notre Dame actually one of the ten best teams in the country?
The data says yes. And it's not close.
Four Ways to Rank a Team
There's a moment every college football fan knows. You're at a bar, three beers in, and someone says "there's no way Team X should be ranked ahead of Team Y." What follows is an hour-long debate involving transitive wins, eye tests, schedule strength, who beat who by more, what some biased expert with an agenda said, and inevitably someone pulling up a score from six weeks ago on their phone.
Here's the thing: every computer ranking system is just doing exactly what you're doing at that bar—except it's doing it across 134 teams and 800+ games simultaneously, with a memory that doesn't fade after your fourth IPA, without the emotional attachment to your alma mater, and with a framework that doesn't change based on which teams are involved.
When you strip away the math, every computer ranking embodies one of three philosophies that you've probably argued yourself at some point. We'll use one representative system from each camp—but understand that dozens of other systems share each philosophy and reach similar conclusions. And then there are the human polls, which mostly tell you which teams the experts feel good about this week; you just have to hope they've actually watched some of the games.
Philosophy #1: "A Win is a Win" — The Colley Matrix
This is the purest form of the argument that only results matter. The Colley Matrix looks at wins and losses—period. No margin of victory. No style points. Beat a team, you get credit; lose, you don't.
What makes Colley interesting is how it handles strength of schedule. If you beat a team that beat a team that beat Ohio State, that connection exists in the math. It's essentially solving a massive system of equations where every team's rating depends on who they beat and who beat them, which depends on who those teams beat, and so on, until you get a self-consistent set of ratings across all 134 FBS teams.
Think of it as what you'd conclude after three hours of bar debate if you had perfect recall of every game result and infinite patience to trace every transitive connection, but you refused to look at any box scores.
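To make that concrete, here's a minimal sketch of the Colley calculation on a handful of invented games. The team letters and results below are made up purely for illustration; the real system runs the same arithmetic over all 134 FBS teams and every game played.

```python
import numpy as np

# Invented results: (winner, loser). Margins are ignored on purpose --
# Colley only cares that a game happened and who won it.
games = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("B", "D")]

teams = sorted({t for g in games for t in g})
idx = {t: i for i, t in enumerate(teams)}
n = len(teams)

# Colley's linear system  C r = b:
#   C[i][i] = 2 + games played by i
#   C[i][j] = -(games between i and j)
#   b[i]    = 1 + (wins_i - losses_i) / 2
C = 2 * np.eye(n)
b = np.ones(n)
for winner, loser in games:
    w, l = idx[winner], idx[loser]
    C[w, w] += 1
    C[l, l] += 1
    C[w, l] -= 1
    C[l, w] -= 1
    b[w] += 0.5
    b[l] -= 0.5

ratings = np.linalg.solve(C, b)
for team in sorted(teams, key=lambda t: -ratings[idx[t]]):
    print(f"{team}: {ratings[idx[team]]:.3f}")
```

The off-diagonal terms are where the transitive credit lives: change one result anywhere in the schedule and every team's rating shifts a little.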
Philosophy #2: "How You Win Matters" — The Massey Ratings
Kenneth Massey's system has been around since 1995, and it builds on the wins-only approach by adding one more input: the final score. Margin of victory matters, but with diminishing returns—Massey caps the impact at around 20-25 points, so a 30-point win isn't treated as meaningfully different from a 25-point win.
The math finds a set of team ratings that best explains all the score differentials across all games. When the ratings don't perfectly predict a result (which happens constantly, because football is chaotic), the system treats that as noise.
This is the "a win is a win, but the scoreboard tells you something" argument, systematized. It rewards teams that consistently handle their business, while recognizing that style points beyond a certain threshold are just running up the score.
Philosophy #3: "Who Would Win Tomorrow?" — ESPN's FPI
The Football Power Index takes yet another approach: it tries to predict who would win on a neutral field tomorrow, not who "deserves" it based on what happened in September. Think of it as the eye test, quantified.
FPI breaks down each play into its component parts—separating offensive performance from defensive performance, adjusting for opponent strength, filtering out noise like fluky turnovers and garbage-time touchdowns. It's trying to measure sustainable, repeatable performance rather than outcomes that might not happen again.
This philosophy drives some people crazy. "You can't just ignore results!" But predictive models aren't ignoring results—they're weighting them by how much information each result actually contains. A 3-point win where you outgained the opponent by 200 yards tells you something different than a 3-point win where you got bailed out by three turnovers.
This is the "I know they lost, but watch the tape" argument—except the tape is every play from every game, adjusted for context.
Philosophy #4: "Trust the Experts" — AP Poll & Coaches Poll
Of course, not everyone trusts computers. The AP Poll aggregates the opinions of 62 sportswriters and broadcasters who watch games for a living. The Coaches Poll surveys a panel of FBS head coaches—people with staffs dedicated to analyzing opponents, who see things on film that no algorithm can capture.
Human polls have their flaws. Voters have biases, limited time, and a tendency toward groupthink. Coaches famously delegate their ballots to sports information directors who may or may not have watched a single snap. But hey—sometimes the vibes are right. And when the humans and the computers agree, that's worth noting.
Four different philosophies. Three algorithmic, one human. And all four said the same thing: Notre Dame was a top-10 team.
The Number That Matters
So here's what we did: we took Notre Dame's ranking in each of those four systems—Colley, Massey, FPI, and the average of the two human polls—and averaged them together.
Colley: #10 · Massey: #6 · FPI: #3 · Humans: #9 · Average: 7.00
That 7.00 isn't a cherry-picked stat. It's what you get when you synthesize four entirely different philosophies—and all four point in the same direction. When the "wins only" people, the "margin of victory" people, the "predictive efficiency" people, and the human experts all agree that a team is really good, that's not an accident.
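For the literal-minded, the composite is nothing fancier than an unweighted mean, with the two human polls collapsed into one number first (as noted in the data-sources line at the end):

```python
# Notre Dame's 2025 final ranks in each system, from the tables below.
colley, massey, fpi, ap, coaches = 10, 6, 3, 9, 9
human = (ap + coaches) / 2                       # 9.0
composite = (colley + massey + fpi + human) / 4  # 7.0
print(composite)
```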
And when you zoom out to the historical context, the four philosophies have never been so sure—and so in sync—about a team left outside the top 10 as they were about Notre Dame in 2025. Here are the ten best four-system averages in CFP history for teams that finished outside the top 10:
| Year | Team | CFP | Col | Mas | FPI | Human | Avg |
|---|---|---|---|---|---|---|---|
| 2025 | Notre Dame | #11 | 10 | 6 | 3 | 9 | 7.00 |
| 2022 | Penn State | #11 | 10 | 6 | 8 | 8 | 8.00 |
| 2024 | Alabama | #11 | 11 | 6 | 4 | 11 | 8.00 |
| 2021 | Oklahoma | #16 | 11 | 7 | 6 | 13.5 | 9.38 |
| 2019 | Alabama | #13 | 22 | 4 | 4 | 9 | 9.75 |
| 2023 | Oklahoma | #12 | 9 | 10 | 8 | 12 | 9.75 |
| 2016 | Florida State | #11 | 10 | 12 | 9 | 10 | 10.25 |
| 2019 | Auburn | #12 | 18 | 6 | 6 | 11 | 10.25 |
| 2018 | LSU | #11 | 9 | 6 | 16 | 11 | 10.50 |
| 2017 | Notre Dame | #14 | 9 | 9 | 10 | 14.5 | 10.62 |
The gap between Notre Dame and the next-best snubs (2022 Penn State and 2024 Alabama, both at 8.00) is a full ranking spot. No team with a profile this strong has ever been left outside the top 10.
Until now.
The Bubble Picture
Notre Dame wasn't competing against the entire top 25. They were competing against roughly eight teams fighting for the final three at-large spots—the bubble. And when you look at how those bubble teams stacked up across all four philosophical approaches, something becomes obvious.
Oklahoma made the field at #8. Their four-system average? 10.25. Alabama squeaked in at #9 with an average of 10.00. Miami grabbed the last spot at #10 with an average of 11.00.
Notre Dame, left out at #11, averaged 7.00.
Read that again. The team that didn't make the playoff had a better composite ranking than every team that did make it from the bubble. It's not that Notre Dame was a borderline case that got squeezed out in a coin flip. By this measure, they were the best team in the at-large conversation—and it wasn't particularly close.
| Team | CFP | Col | Mas | FPI | AP | Coach | Avg | Result |
|---|---|---|---|---|---|---|---|---|
| Oklahoma | #8 | 9 | 9 | 15 | 8 | 8 | 10.25 | IN |
| Alabama | #9 | 11 | 10 | 8 | 11 | 11 | 10.00 | IN |
| Miami | #10 | 13 | 14 | 7 | 10 | 10 | 11.00 | IN |
| Notre Dame | #11 | 10 | 6 | 3 | 9 | 9 | 7.00 | OUT |
| BYU | #12 | 6 | 13 | 16 | 12 | 13 | 11.88 | OUT |
| Texas | #13 | 20 | 11 | 13 | 14 | 14 | 14.50 | OUT |
| Vanderbilt | #14 | 15 | 17 | 14 | 13 | 12 | 14.62 | OUT |
| Utah | #15 | 17 | 12 | 9 | 15 | 15 | 13.25 | OUT |
Notre Dame's 7.00 isn't just the best on the bubble. The gap between Notre Dame (7.00) and the best composite among the teams that made it, Alabama (10.00), is three full spots. Notre Dame wasn't in the same tier as the other bubble teams; they were in a tier by themselves.
The Math
OK, great. But maybe you're saying to yourself: "I don't think the models should be weighted equally." Or: "I don't trust predictive models—just tell me who won." Or maybe: "Computers miss things—I want to weight the humans more heavily." That's fair. Pick your philosophy.
We tested every combination of weights across all four systems, at 5% increments. That's 194,480 different scenarios—every possible way you could blend "wins only," "margin matters," "who wins tomorrow," and "trust the experts" into a single ranking philosophy.
| Notre Dame's Bubble Rank | Combinations | Percentage |
|---|---|---|
| #1 | 188,320 | 96.8% |
| #2 | 4,691 | 2.4% |
| #3 | 1,469 | 0.8% |
| #4 or worse | 0 | 0.0% |
It's not that Notre Dame usually comes out ahead. It's that in nearly 97% of scenarios, they're the #1 team on the bubble—and in the remaining 3%, they're still #2 or #3. There is no combination of philosophical weights that drops them to #4. Zero out of 194,480.
Go all-in on wins only? Notre Dame is top 3. Weight margin of victory at 100%? Notre Dame is #1. Pure predictive efficiency? Notre Dame is #1 by a mile. Trust only the human polls? Notre Dame is #2 on the bubble, behind only Oklahoma and still ahead of Alabama and Miami, two of the three teams that got in.
The math doesn't allow for the outcome the committee produced.
Try it yourself:
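Here's a minimal sketch of the sweep, under one assumption about the methodology: each system's weight runs from 0% to 100% in 5% steps, the all-zero combination is dropped, and weights are normalized before averaging. That reading is ours, but it's the one that yields exactly 21^4 - 1 = 194,480 scenarios. The ranks come straight from the bubble table above; exact counts could shift slightly depending on how ties are broken.

```python
from itertools import product

# Bubble teams' ranks in each system (from the table above).
# Human = (AP + Coaches) / 2.
bubble = {
    "Oklahoma":   {"colley": 9,  "massey": 9,  "fpi": 15, "human": 8.0},
    "Alabama":    {"colley": 11, "massey": 10, "fpi": 8,  "human": 11.0},
    "Miami":      {"colley": 13, "massey": 14, "fpi": 7,  "human": 10.0},
    "Notre Dame": {"colley": 10, "massey": 6,  "fpi": 3,  "human": 9.0},
    "BYU":        {"colley": 6,  "massey": 13, "fpi": 16, "human": 12.5},
    "Texas":      {"colley": 20, "massey": 11, "fpi": 13, "human": 14.0},
    "Vanderbilt": {"colley": 15, "massey": 17, "fpi": 14, "human": 12.5},
    "Utah":       {"colley": 17, "massey": 12, "fpi": 9,  "human": 15.0},
}
systems = ["colley", "massey", "fpi", "human"]

steps = range(0, 101, 5)            # 0%, 5%, ..., 100% for each system
rank_counts = {}
for weights in product(steps, repeat=len(systems)):
    total = sum(weights)
    if total == 0:                  # skip the meaningless all-zero blend
        continue
    score = {team: sum(w * ranks[s] for w, s in zip(weights, systems)) / total
             for team, ranks in bubble.items()}
    ordered = sorted(bubble, key=lambda t: score[t])  # ties fall to table order
    nd_rank = ordered.index("Notre Dame") + 1
    rank_counts[nd_rank] = rank_counts.get(nd_rank, 0) + 1

total_scenarios = sum(rank_counts.values())           # 194,480
for rank in sorted(rank_counts):
    share = rank_counts[rank] / total_scenarios
    print(f"#{rank}: {rank_counts[rank]:,} scenarios ({share:.1%})")
```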
No matter how you adjust the weights—whether you care only about wins, only about margin, only about predictive power, or only about what the human experts think—Notre Dame stays in the top 3. The data is that clear.
What This Actually Means
In the weeks since selection Sunday, the conversation has been dominated by one talking point: Miami beat Notre Dame head-to-head. The ACC made sure everyone knew it. ESPN's studio shows hammered it. Committee chair Hunter Yurachek said he asked members to go back and re-watch the Labor Day game. Head-to-head. Head-to-head. Head-to-head.
It was a masterful piece of narrative management—a smokescreen, if you will. Because while everyone was debating whether a Week 1 result should matter more than a 10-game winning streak, they stopped asking the more fundamental question: Was Notre Dame even a bubble team?
They weren't. Notre Dame ranked in the top 3 of the bubble in Colley. In Massey. In FPI. In the AP Poll. In the Coaches Poll. Five different ways of evaluating teams—three algorithmic, two human—and all five said Notre Dame wasn't fighting for the last spot. They were clearly one of the best teams in the at-large conversation, full stop.
The head-to-head debate only makes sense if you accept the premise that Miami and Notre Dame were "otherwise comparable." But they weren't comparable. By every measure we have, Notre Dame wasn't on Miami's level—they were a tier above. The committee didn't choose between two similar teams. They passed over a team that every ranking system—human and computer alike—said was clearly better.
Pete Bevacqua was right to be upset. But in the weeks of arguments about politics and process, about ACC social media campaigns and the sanctity of head-to-head results, something got lost: how good this Notre Dame team actually was.
This was a team that started 0-2 with losses by a combined four points, then won 10 straight to close the season. A team that, by FPI, was the third-best in America—ahead of everyone in the bracket except Indiana and Ohio State. A team that no ranking system, human or computer, could justify leaving outside the top 3 of the bubble.
Bevacqua called the outcome "mystifying." He said there was "no explanation" that could justify it. The data says he was right.
The process questions matter and need to be addressed. But don't let them obscure what happened here: the best team ever left outside the top 10 in CFP history didn't make the playoff.
Data sources: CFP Final Rankings (Dec 7, 2025), Colley Matrix, Massey Ratings, ESPN FPI, AP Poll, USA Today Coaches Poll. Historical analysis covers 2014–2025 CFP era. Human poll average = (AP + Coaches) / 2.
Want more? Subscribe to Bar Graph on Substack for future analysis.