When the Pacers traded Paul George for Victor Oladipo and Domantas Sabonis, I became interested in the NBA again. In perhaps a strange way, I associate Oladipo with family. To me, Oladipo is living in the mountains out West and going over to my brother’s for breakfast and to watch IU.
I remember doubting Oladipo. Watching him as a freshman at Indiana University, I was skeptical he’d be able to develop the skills and control to match his athleticism. Two years later, I thought of him as the best player in college basketball.
I bought tickets for my wife and me to see the Pacers play the Rockets in November, knowing that the Rockets were generally accepted as one of three teams that could contend for the Finals. I was hopeful that we’d witness an early shock for everyone writing the Pacers off.
The Rockets beat the Pacers 118-95 that night. I don’t remember it ever being close. I do remember saying to my wife’s friend’s husband late in the fourth quarter, with genuine surprise, “I don’t think there’s enough time for them to come back for this one.”
I watched the games by myself, physically, but I texted my brother throughout them. He had also become a Pacers fan again.
The Pacers lost the series to the Cavs in 7 games. However, every media analysis I listened to and pick-up basketball player I talked to agreed the Pacers’ season was a success based on the expectations that were set for them during the preseason. Claiming that the Pacers had massively overachieved became so common that I got curious as to what predictions I could still find, and how badly the Pacers really beat them.
The range of wins in the predictions I found was 30-32.5. The average was 31.67.
If we round the average to 32, the Pacers beat the average forecast by 16 wins (50%): their 48 actual wins were 150% of the average of the preseason forecasts I found.
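As a sanity check on that arithmetic (the individual forecast values below are illustrative, chosen only to match the stated range and average, not taken from the actual sources):

```python
# Illustrative forecast values: chosen to fall in the stated 30-32.5
# range and average to 31.67; not the actual per-source numbers.
forecasts = [30.0, 32.5, 32.5]

avg = sum(forecasts) / len(forecasts)  # ~31.67
rounded = round(avg)                   # 32
actual_wins = 48                       # the Pacers' 2017-18 record: 48-34
margin = actual_wins - rounded         # 16 wins
pct = actual_wins / rounded            # 1.5, i.e. 150% of the forecast
```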
This may be naive, but I’m as surprised by how similar the predictions were to each other as I am by the margin by which the Pacers beat them. In particular, I’m surprised that ESPN and FiveThirtyEight arrived at exactly the same predicted record given the differences in their methodologies (discussed in the “Methodologies” section below).
On some level, I would expect each source to follow a pattern of: 1) assess how good each player is 2) sum these player ratings to create a team rating and 3) compare these ratings to other teams. This outline seems to map directly onto ESPN’s process. They used Real Plus-Minus (RPM) to assess how good each player is, and arrived at a team rating based on 1) the RPM for each player and 2) the expected playing time of each player.
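The outline above can be sketched as a minutes-weighted sum of player ratings. The roster values below are hypothetical, and this is my guess at the shape of the calculation, not ESPN’s actual formula:

```python
def team_rating(roster):
    """Minutes-weighted sum of player ratings.

    Each player's RPM counts in proportion to his share of a 48-minute
    game; since five players are on the floor, minutes sum to 240.
    """
    return sum(rpm * minutes / 48.0 for rpm, minutes in roster)

# Hypothetical eight-man rotation: (RPM, expected minutes per game).
roster = [(3.5, 36), (2.0, 34), (1.0, 32), (0.0, 30),
          (-0.5, 28), (-1.0, 30), (-1.5, 26), (-2.0, 24)]

rating = team_rating(roster)  # roughly +2.0 points per 100 possessions
```

A team rating like this could then be compared across the league to translate net scoring margin into expected wins.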
FiveThirtyEight incorporates this process into their predictions, but relative to ESPN, they 1) use a different player rating system and 2) include other, team-centric factors in their model. FiveThirtyEight’s player ratings use a weighted blend of RPM and Box Plus/Minus (BPM), which I would guess would put them in a close range to ESPN. However, they also incorporate historical performances of teams against each other, and adjustments for things like recent team travel and the altitude of the city the game is played in. And yet, both systems spit out the same number of wins: 32.
I would guess that anyone who does qualitative analysis (The Ringer, USA Today) just starts with numbers provided by Vegas (SuperBook, OddsShark) and adjusts from there.
I have more questions now than when I started this post. I’d like to continue exploring how accurate some of these quantitative forecast methodologies have been, and to try to gauge how rare it was that the Pacers beat them by such a margin. I’d also like to explore how Oladipo, as well as other Pacers players, performed versus player prediction systems, and how a more accurate assessment of Oladipo’s performance would have changed the ESPN and/or FiveThirtyEight team record forecasts.
FiveThirtyEight refers to their prediction system as CARM-Elo. It’s a blend of their player projections, named CARMELO, and Elo, a system created by Arpad Elo to rate chess players.
Elo ratings carry over from season-to-season, but don’t account for personnel changes, so FiveThirtyEight also uses their CARMELO system in their CARM-Elo predictions.
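The core Elo machinery is simple. Here’s a minimal sketch with generic constants (the K-factor and starting ratings are illustrative, not FiveThirtyEight’s published NBA parameters):

```python
def elo_expected(rating_a, rating_b):
    """Probability that team A beats team B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def elo_update(rating_a, rating_b, a_won, k=20):
    """Return both teams' updated ratings after one game.

    The winner gains exactly what the loser gives up, scaled by how
    surprising the result was; K controls how fast ratings move.
    """
    expected_a = elo_expected(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta
```

Between two evenly rated 1500 teams, the winner takes K/2 = 10 points from the loser. Because the update is zero-sum and only reacts to game results, a trade or signing never moves a rating, which is exactly the gap CARMELO is meant to fill.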
CARMELO is a system developed by FiveThirtyEight to forecast the future performance of every current NBA player by comparing them to similar players throughout history.
In addition to an Elo/CARMELO blend, FiveThirtyEight adjusts win probabilities for each game based on factors for expected fatigue, travel, and altitude.
FiveThirtyEight runs 50,000 simulations to determine their final projected records.
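A season simulation of this kind can be sketched in a few lines. The flat 39% win probability below is an illustrative stand-in for the per-game probabilities a real model would produce from its ratings and adjustments:

```python
import random

def simulate_season(win_probs, n_sims=50_000, seed=0):
    """Monte Carlo estimate of a team's average wins per season.

    win_probs: one win probability per game (a real model would derive
    these from Elo/CARMELO plus fatigue, travel, and altitude factors).
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(n_sims):
        # Flip one biased coin per game and count the wins.
        total += sum(rng.random() < p for p in win_probs)
    return total / n_sims

# A flat 39% win probability over 82 games lands near a 32-win forecast.
avg_wins = simulate_season([0.39] * 82)
```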
The production of the RPM metric is financed by ESPN. RPM is inspired by BPM, but attempts to control for the performance of other players on the court — a criticism of BPM is that a very average player could have a high rating just by being on the court with a great player and staying out of the way.
Drawing on advanced statistical modeling techniques (and the analytical wizardry of RPM developer Jeremias Engelmann, formerly of the Phoenix Suns), the metric isolates the unique plus-minus impact of each NBA player by adjusting for the effects of each teammate and opposing player.
The RPM model sifts through more than 230,000 possessions each NBA season to tease apart the “real” plus-minus effects attributable to each player, employing techniques similar to those used by scientific researchers when they need to model the effects of numerous variables at the same time.
RPM estimates how many points each player adds or subtracts, on average, to his team’s net scoring margin for each 100 possessions played. The RPM model also yields separate ratings for the player’s impact on both ends of the court: offensive RPM (ORPM) and defensive RPM (DRPM).
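The “adjusting for teammates” idea is essentially a regularized regression of stint scoring margins on who was on the floor. RPM’s actual model is proprietary and far more sophisticated, but a toy version of the underlying adjusted plus-minus idea looks like this:

```python
def fit_adjusted_plus_minus(stints, n_players, lam=0.1, lr=0.01, steps=5000):
    """Toy ridge regression fit by gradient descent.

    stints: list of (on_court, margin) pairs, where on_court maps a
    player index to +1 or -1 depending on which lineup he was in, and
    margin is the scoring margin per 100 possessions for that stint.
    Returns one rating per player.
    """
    beta = [0.0] * n_players
    for _ in range(steps):
        grad = [lam * b for b in beta]  # ridge penalty keeps ratings small
        for on_court, margin in stints:
            err = sum(beta[i] * s for i, s in on_court.items()) - margin
            for i, s in on_court.items():
                grad[i] += err * s
        beta = [b - lr * g for b, g in zip(beta, grad)]
    return beta

# Player 0 always plays alongside star player 1; raw plus-minus would
# credit both equally, but the regression attributes the margin to 1.
stints = [
    ({0: 1, 1: 1, 2: -1}, 10.0),  # players 0 and 1 vs 2: +10 per 100
    ({1: 1, 2: -1}, 10.0),        # player 1 alone vs 2: still +10
    ({0: 1, 2: -1}, 0.0),         # player 0 alone vs 2: dead even
]
ratings = fit_adjusted_plus_minus(stints, n_players=3)
```

In this toy data, player 0 only looks good when sharing the floor with player 1, so the fit assigns nearly all of the margin to player 1: exactly the correction over raw plus-minus that RPM aims to make.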
Bleacher Report provided one link to OddsShark for all of their information, and the page that they linked is updated on a regular basis, with no time-traveling mechanism to see previous iterations. Therefore, I’m relying entirely on Bleacher Report as to what OddsShark predicted. I believe that OddsShark generally just summarizes the odds and lines provided by sports books, rather than producing their own predictions.
Sports books generally don’t detail their methodology, so I’m going to summarize what I’ve read and been told about how they operate, with no links to sources. My understanding is that one or two companies come out with lines that most Vegas sports books copy, sometimes with a slight adjustment of their own. The ultimate goal of a sports book isn’t to accurately predict a given event; they’re trying to arrive at a line where they bring in an equal amount of money on both sides of a bet. In this sense, they’re kind of like a futures exchange. If they can get their exposure to the two outcomes to net against each other, they’re guaranteed to walk away with a profit, because they take a commission from each bet while the cash flows from the outcomes cancel out.
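As a toy illustration of that balanced-book math, assuming standard -110 pricing (bet $110 to win $100) on both sides of a line:

```python
stake_over = 110.0   # total wagered on the over
stake_under = 110.0  # total wagered on the under (perfectly balanced)

handle = stake_over + stake_under  # $220 collected up front

# Whichever side hits, the book pays back the winning stake plus $100.
profit_if_over_hits = handle - (stake_over + 100.0)    # $10
profit_if_under_hits = handle - (stake_under + 100.0)  # $10

vig = profit_if_over_hits / handle  # ~4.5% of the handle, risk-free
```

The book keeps the same $10 no matter which side wins, which is why moving the line to balance the action matters more to them than predicting the outcome.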
After giving their general opinions on the Pacers’ roster and management, they agreed to take the over. They seemed to agree that Myles Turner was the best player on the team, and his performance would determine the team’s success.
Bill Simmons stated that “they are not a tanking team” and that “they’ve always had a weird sense of pride about competing.” He said that the roster was “not that bad.” On Oladipo, he said, “I don’t think he’s an all-star, but I don’t think he’s a bad player, either.” Simmons thought that in Oklahoma City, next to Russell Westbrook, Oladipo had been put in a role where no player could reasonably be expected to succeed.
They referred to their predictions as a “semi-scientific guess.” I interpret this to mean that a few of their sports writers bought a case of beer and stayed late one night at the office and voted on the over/under for each team.