Victor Oladipo and Domantas Sabonis


When the Pacers traded Paul George for Victor Oladipo and Domantas Sabonis, I became interested in the NBA again. In perhaps a strange way, I associate Oladipo with family. To me, Oladipo is living in the mountains out West and going over to my brother’s for breakfast and to watch IU.

During the NBA preseason, I felt disappointed and annoyed — in a kind of personal way — that The Ringer, FiveThirtyEight, and any other NBA media analysis I happened to see seemed to conclude the Pacers had made a terrible move, that they had gone from being one of the better teams in the Eastern Conference to one of the worst. When I read through FiveThirtyEight’s 2017-2018 NBA Predictions and saw that they assigned a 28% chance of the Pacers making the playoffs, I consoled myself that FiveThirtyEight had assigned a similar probability to Donald John Trump becoming president of the United States.1 2

I remember doubting Oladipo. Watching him as a freshman at Indiana University, I was skeptical he’d be able to develop the skills and control to match his athleticism. Two years later, I thought of him as the best player in college basketball.

I bought tickets for my wife and me to see the Pacers play the Rockets in November, knowing that the Rockets were generally accepted as one of three teams that could be a finals contender. I was hopeful that we’d witness an early shock for everyone writing the Pacers off.

The Rockets beat the Pacers 118-95 that night. I don’t remember it ever being close. I do remember saying to my wife’s friend’s husband late in the fourth quarter, with genuine surprise, “I don’t think there’s enough time for them to come back for this one.”

But for the 2017-2018 NBA season, the Pacers finished with a record of 48-34, qualifying them for the 5th seed in the Eastern Conference playoffs. Their series against the Cavs became the highlight of my week. I’d sit in my office and drink beer and listen to the TV commentators talk about how the Pacers weren’t supposed to be here. On days between games, I’d download The Ringer NBA Show and fast-forward until I heard them mention the Cavs or LeBron, knowing that they’d eventually say something about the Pacers, too. I’d play the Locked On Pacers podcast as soon as episodes downloaded. I’d take my lunch at 1:00 to listen to Dan Dakich live.

I watched the games by myself, physically, but I texted my brother throughout them. He had also become a Pacers fan again.

The Pacers lost the series to the Cavs in 7 games. However, every media analysis I listened to and pick-up basketball player I talked to agreed the Pacers’ season was a success based on the expectations that were set for them during the preseason. Claiming that the Pacers had massively overachieved became so common that I got curious as to what predictions I could still find, and how badly the Pacers really beat them.


The range of wins in the predictions I found was 30-32.5. The average was 31.67.

If we round the average to 32, the Pacers beat the average forecast by 16 wins, or 50%. Put another way, the Pacers’ 48 actual wins were 150% of the rounded average of the preseason forecasts I found.
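For anyone who wants to check the arithmetic, here are the six forecasts discussed later in this post, worked out in a few lines of Python:

```python
# Preseason win forecasts collected in this post: FiveThirtyEight, ESPN,
# OddsShark (via Bleacher Report), the SuperBook line, The Ringer, USA Today.
forecasts = [32, 32, 30, 31.5, 32.5, 32]
actual_wins = 48

average = sum(forecasts) / len(forecasts)
print(round(average, 2))             # 31.67
print(actual_wins - round(average))  # 16 wins over the rounded average
print(actual_wins / round(average))  # 1.5, i.e. 150%
```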


It Seems Strange that FiveThirtyEight and ESPN Arrived at The Same Predicted Record

This may be naive, but I’m as surprised by how similar the predictions were to each other as I am by the margin by which the Pacers beat them. In particular, I’m surprised that ESPN and FiveThirtyEight arrived at exactly the same predicted record given the differences in their methodologies (each described in the sections below).

On some level, I would expect each source to follow a pattern of: 1) assess how good each player is 2) sum these player ratings to create a team rating and 3) compare these ratings to other teams. This outline seems to map directly onto ESPN’s process. They used Real Plus-Minus (RPM) to assess how good each player is, and arrived at a team rating based on 1) the RPM for each player and 2) the expected playing time of each player.

FiveThirtyEight incorporates this process into their predictions, but relative to ESPN, they: 1) use a different player rating system and 2) include other, team-centric factors in their model. FiveThirtyEight’s player ratings use a weighted blend of RPM and Box Plus/Minus (BPM), which I would guess puts them in a close range to ESPN’s. However, they also incorporate historical performances of teams against each other, and adjustments for things like recent team travel and the altitude of the city the game is being played in. And yet, both systems spit out the same number of wins: 32.

It Doesn’t Seem Strange that SuperBook, OddsShark, USA Today, and The Ringer Arrived at Pretty Much the Same Predicted Record

I guess I would expect anyone who does qualitative analysis (The Ringer, USA Today) to just start with numbers provided by Vegas (SuperBook, OddsShark) and adjust from there.

Just Saying the Pacers Beat A Lot of Forecasts By a Wide Margin Doesn’t Feel Like Enough in Any Way

I have more questions now than when I started this post. I’d like to continue exploring how accurate some of these quantitative forecast methodologies have been, and to try to gauge how rare it was that the Pacers beat them by such a margin. I’d also like to explore how Oladipo, as well as other Pacers players, performed versus player prediction systems, and how much accurately assessing Oladipo’s performance would have improved the ESPN and/or FiveThirtyEight team record forecasts.

Mostly, though, I want to continue watching the Pacers figuratively punch people in the face. I really enjoyed watching this team, not just Oladipo as an individual player. Mark Titus has already said on a recent Bill Simmons podcast that the Pacers will never win an NBA championship. I’m not sure I really care whether they do, as long as they keep playing like they’ll never quit. But I do want to watch the Pacers throw a chair at Titus.3



Before the season started, FiveThirtyEight projected the Pacers to finish with a record of 32-50, putting them in 11th place in the Eastern Conference, and gave them a 28% chance of making the playoffs.4

FiveThirtyEight refers to their prediction system as CARM-Elo. It’s a blend of their player projections, named CARMELO, and a system created by Arpad Elo to rate chess players, called Elo.

The Elo ratings, as applied by FiveThirtyEight to NBA predictions, depend only on: 1) the final score of each historical game and 2) where it was played. Teams gain or lose Elo points based on winning or losing games, and gain more points for upsets and for winning by wider margins.5
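A minimal sketch of that update rule looks something like the following. The K-factor, home-court bonus, and log-damped margin multiplier here are illustrative stand-ins, not FiveThirtyEight’s published constants:

```python
import math

def expected_score(r_a, r_b):
    """Standard Elo win expectancy for the team rated r_a."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update_elo(r_a, r_b, a_won, margin, k=20.0, home_court=100.0):
    """One game's Elo update for team A, playing at home.

    A home-court bonus is added to A's rating before computing the
    expectancy; bigger margins of victory move ratings more.
    """
    exp_a = expected_score(r_a + home_court, r_b)
    actual = 1.0 if a_won else 0.0
    mov_mult = math.log(abs(margin) + 1)  # damped so blowouts don't dominate
    return r_a + k * mov_mult * (actual - exp_a)
```

Team B’s new rating would come from the mirrored update. Upsets move ratings more because the (actual − expected) term is larger when the underdog wins.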

Elo ratings carry over from season-to-season, but don’t account for personnel changes, so FiveThirtyEight also uses their CARMELO system in their CARM-Elo predictions.

CARMELO is a system developed by FiveThirtyEight to forecast the future performance of every current NBA player by comparing them to similar players throughout history.

CARMELO uses a blend of RPM and BPM to rate players. A two-thirds weight is given to RPM, and one-third to BPM. However, the system, in its entirety, is much more complicated than that. I would recommend checking out the interactive application for the latest season,6 reading their original write-up on how CARMELO was created,7 and reading their changes for the 2017-2018 season.8
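The stated blend itself is simple enough to write down (the ratings in the example are made-up numbers, not anyone’s actual RPM or BPM):

```python
def blended_rating(rpm, bpm):
    """CARMELO-style plus-minus: two-thirds RPM, one-third BPM."""
    return (2.0 * rpm + bpm) / 3.0

# A hypothetical player with an RPM of +3.0 and a BPM of +1.5:
blended_rating(3.0, 1.5)  # (2 * 3.0 + 1.5) / 3 = 2.5
```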

In addition to an Elo/CARMELO blend, FiveThirtyEight adjusts win probabilities for each game based on factors for expected fatigue, travel, and altitude.

FiveThirtyEight runs 50,000 simulations to determine their final projected records.
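A bare-bones version of that simulation step, assuming you already have a per-game win probability for every game on the schedule (which, in FiveThirtyEight’s case, would come from the Elo/CARMELO blend plus the fatigue, travel, and altitude adjustments described above):

```python
import random

def simulate_season(win_probs, n_sims=50_000, seed=0):
    """Average win total across Monte Carlo runs of a season.

    win_probs holds one win probability per game; each simulation
    flips a weighted coin for every game and counts the wins.
    """
    rng = random.Random(seed)
    total_wins = 0
    for _ in range(n_sims):
        total_wins += sum(rng.random() < p for p in win_probs)
    return total_wins / n_sims
```

As a sanity check, `simulate_season([0.5] * 82)` lands near 41 wins, and a schedule of games won at a 0.39 clip lands near the 32 wins FiveThirtyEight projected.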


Kevin Pelton of ESPN projected a record of 32-50 and an 11th place finish for the Pacers.9

He used RPM as the core metric of his forecast. He didn’t explicitly state his calculation, but he projected the playing time for each player and used it to weight each player’s RPM in some kind of team rating. I imagine it’s very similar to FiveThirtyEight’s spreadsheet.10
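Since Pelton didn’t publish the formula, here’s one plausible reading of it: weight each player’s RPM by his share of the team’s minutes, then scale by five, since five players share the court and RPM is expressed per 100 possessions. This is my guess at the shape of the calculation, not ESPN’s actual method:

```python
def team_rating(players):
    """Minutes-weighted team net rating from per-player RPM.

    players: list of (rpm, projected_minutes) pairs. Returns an
    estimate of the team's net points per 100 possessions.
    """
    total_minutes = sum(minutes for _, minutes in players)
    weighted_avg = sum(rpm * minutes for rpm, minutes in players) / total_minutes
    return 5 * weighted_avg
```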

The production of the RPM metric is financed by ESPN. RPM is inspired by BPM, but attempts to control for the performance of other players on the court — a criticism of BPM is that a very average player could have a high rating just by being on the court with a great player and staying out of the way.

From ESPN:11

Drawing on advanced statistical modeling techniques (and the analytical wizardry of RPM developer Jeremias Engelmann, formerly of the Phoenix Suns), the metric isolates the unique plus-minus impact of each NBA player by adjusting for the effects of each teammate and opposing player.

The RPM model sifts through more than 230,000 possessions each NBA season to tease apart the “real” plus-minus effects attributable to each player, employing techniques similar to those used by scientific researchers when they need to model the effects of numerous variables at the same time.

RPM estimates how many points each player adds or subtracts, on average, to his team’s net scoring margin for each 100 possessions played. The RPM model also yields separate ratings for the player’s impact on both ends of the court: offensive RPM (ORPM) and defensive RPM (DRPM).


Bleacher Report wrote that OddsShark projected a record of 30-52, putting the Pacers in a tie for 11th place, and assigned a 5% probability of making the playoffs.12

Bleacher Report provided one link to OddsShark for all of their information, and the page they linked is updated on a regular basis, with no way to view previous iterations. Therefore, I’m relying entirely on Bleacher Report for what OddsShark predicted. I believe that OddsShark generally just summarizes the odds and lines provided by sports books, rather than producing predictions of their own.


Sports Illustrated reported that the Las Vegas SuperBook put the over/under on wins at 31.5.13

Sports books generally don’t detail their methodology. So, I’m going to summarize what I’ve read and been told about how sports books operate, with no links to sources. My understanding is that one or two different companies come out with lines that most Vegas sports books copy, sometimes with a slight adjustment of their own. The ultimate goal of a sports book isn’t to accurately predict a given event: they’re trying to arrive at a line where they bring in an equal amount of money on both sides of a bet. In this sense, they’re kind of like a futures exchange. If they can get their exposure to both outcomes to net equally against each other, they’re guaranteed to walk away with a profit, because they take a commission from each bet while the cash flows from the outcomes cancel out.
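That balanced-book guarantee is easy to see with numbers. Assuming standard -110 pricing on an over/under (bettors risk $110 to win $100), the book’s profit under each outcome works out as:

```python
def book_profit(over_risked, under_risked):
    """Book's profit on a two-way market at standard -110 pricing.

    The winning side is paid 100/110 per dollar risked, funded by
    the losing side's stakes.
    Returns (profit_if_over_hits, profit_if_under_hits).
    """
    win_per_dollar = 100 / 110
    profit_if_over = under_risked - over_risked * win_per_dollar
    profit_if_under = over_risked - under_risked * win_per_dollar
    return profit_if_over, profit_if_under
```

With $110,000 risked on each side, the book keeps about $10,000 no matter which side hits; with a lopsided handle like `book_profit(200_000, 50_000)`, the book loses heavily if the over hits, which is why the line gets moved until the money balances.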

The Ringer

The Ringer NBA Show had an over/under podcast for the Eastern Conference, reporting that the line was 31.5. The panel agreed on 32 or 33 wins, which I’m calling 32.5.14

After giving their general opinions on the Pacers’ roster and management, they agreed to take the over. They seemed to agree that Myles Turner was the best player on the team, and that his performance would determine the team’s success.

Bill Simmons stated that “they are not a tanking team” and that “they’ve always had a weird sense of pride about competing.” He said that the roster was “not that bad.” On Oladipo, he said, “I don’t think he’s an all-star, but I don’t think he’s a bad player, either.” Simmons thought that in Oklahoma City, next to Russell Westbrook, Oladipo had been put in a role where no player could reasonably be expected to succeed.


USA Today predicted a record of 32-50 and a 10th place finish.15

They referred to their predictions as a “semi-scientific guess.” I interpret this to mean that a few of their sports writers bought a case of beer, stayed late one night at the office, and voted on the over/under for each team.


  1. I did not and do not support Donald Trump for President of the United States. This statement was supposed to be “haha” funny.
  2. Who will win the presidency? FiveThirtyEight. November 8, 2016.
  3. Only if it occurred in a YouTube video or something so no one from the Pacers gets suspended. Titus could wear a Purdue jersey or something and someone from the Pacers could be dressed like Bobby Knight.
  4. 2017-18 NBA Predictions. FiveThirtyEight. Change the dropdown menu to “Oct. 17 (preseason - final)”.
  5. How We Calculate NBA Elo Ratings. FiveThirtyEight. May 21, 2015.
  6. CARMELO NBA Player Projections. FiveThirtyEight. October 17, 2017.
  7. How Our 2015-16 NBA Predictions Work. FiveThirtyEight. December 7, 2015.
  8. What’s New In Our NBA Player Projections For 2017-18. FiveThirtyEight. June 29, 2017.
  9. Projected 2017-18 records and standings for every NBA team. ESPN. August 3, 2017.
  10. NBA Depth Charts w/CARM-ELO Projections, 2017-18. FiveThirtyEight.
  11. The next big thing: real plus-minus. ESPN. April 7, 2014.
  12. NBA Schedule 2017-18: Team-by-Team Record Predictions and Playoff Odds. Bleacher Report. August 14, 2017.
  13. All NBA team over/under win totals set for 2017-18 season. Sports Illustrated. August 30, 2017.
  14. NBA Over/Under With Bill Simmons—Eastern Conference. The Ringer. October 4, 2017.
  15. Record projections, award predictions for 2017-18 NBA season. USA Today. October 17, 2017.