This article takes an in-depth look at a statistic called Expected Points per Shot (XPPS), which was used as a critique of the Milwaukee Bucks’ defense throughout the 2016-2017 season. I have made a spreadsheet (https://docs.google.com/spreadsheets/d/1H5dqCdlUhCkptEciJmIelw-NKQB_TAEm_qDyNl7-LmE/pubhtml) that breaks down XPPS by location while also displaying its deficiencies. This spreadsheet will be referenced throughout the article. (All stats are from stats.nba.com.)


On November 3rd, 2016, the Chicago Cubs won their first World Series in 108 years, and Theo Epstein effectively put the final dagger in the war against analytics in Major League Baseball. Fourteen years earlier Epstein had been appointed General Manager of the Boston Red Sox and tasked with breaking the mighty “Curse of the Bambino.” In 2004 Epstein ended the Red Sox’s 86-year championship drought using a controversial data-driven approach. More than a decade later, every team in Major League Baseball has bought into analytics in some capacity to help build and manage its roster. The NBA is now approaching an analytics wave akin to the MLB’s.

 

The Analytics Wave in the NBA

The NBA’s analytics revolution started nearly a decade later, highlighted by the league-wide implementation of SportVU in 2013. SportVU is a six-camera system that tracks the real-time positions of the players and the ball 25 times per second. With this development came a multitude of new ways to analyze a basketball game, many of which became readily available to the casual fan. Statistics such as the distance of the closest defender on an opponent’s shot, or how many miles a given player travels per game, have dramatically changed the way analytics are used in the NBA. Along with the integration of analytics came fierce opposition arguing that basketball cannot be translated into numbers the way baseball has been. Basketball relies on team chemistry and on an individual player’s ability to use his skills to complement his teammates’ strengths and weaknesses. There are countless variables at play, and fully quantifying a player’s impact with numbers is nearly impossible. Still, as time wears on it seems inevitable that analytics will play a bigger role in NBA front offices’ decision-making, as well as in how fans view and understand the game.

An Analysis of the Milwaukee Bucks’ Defense

The Milwaukee Bucks finished the regular season with the 11th-worst defensive rating while playing one of the most (if not the most) aggressive defensive schemes in the NBA. The Bucks’ scheme was designed to force turnovers by flooding the strong side of the court and relying on weak-side defenders to recover from their aggressive help positions. Such an aggressive style leaves little margin for error, and when a mistake was made it often led to an open three or a layup. As the season went on and teams became more familiar with the scheme, they were able to exploit the Bucks’ aggressive rotations, which in turn led to wide-open three-pointers, especially in the corner.

Using XPPS as a Critique of Defense

As the analytics wave hit the NBA, attempts were made to quantify the game in hopes of finding new statistics that would paint a clearer picture of what the naked eye could not see. In 2012 Ian Levy created a metric called Expected Points per Shot (XPPS), an attempt to evaluate a team’s shot selection. To explain what XPPS is, here is a quote from Levy’s introduction to the statistic:

“The foundational piece of Expected Points per Shot is the understanding that not all shots are created equal. A layup is much more likely to go in than a long jump shot. A three-pointer is also less likely to go in than a layup, but if it does go in, it earns an extra point. All of these trade-offs can be measured numerically. The NBA groups shots into five locations — restricted area, the paint (non-restricted area), mid-range, corner three, above-the-break three. By calculating the total number of points scored on shots from each location and dividing it by the number of attempts we arrive at an expected value for shots from each location. — With those expected values we can calculate a player or team’s expected points per shot.”
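As a concrete illustration, here is a minimal Python sketch of the calculation Levy describes. The league-average values below are hypothetical placeholders (only the restricted-area figure, 1.222, is cited later in this article); real numbers would be pulled from stats.nba.com.

```python
# Hypothetical league-average points per shot for the five zones the NBA
# tracks. Only the restricted-area value (1.222) is cited in this article;
# the rest are illustrative placeholders.
LEAGUE_PPS = {
    "restricted_area": 1.222,
    "paint_non_ra": 0.80,
    "mid_range": 0.79,
    "corner_three": 1.16,
    "above_break_three": 1.05,
}

def expected_pps(attempts_by_zone):
    """XPPS: total expected points divided by total attempts, where each
    attempt is worth the league-average PPS for its zone."""
    total_attempts = sum(attempts_by_zone.values())
    expected_points = sum(
        n * LEAGUE_PPS[zone] for zone, n in attempts_by_zone.items()
    )
    return expected_points / total_attempts

# A hypothetical shot profile (attempts allowed from each zone).
opponent_attempts = {
    "restricted_area": 2500,
    "paint_non_ra": 1200,
    "mid_range": 1400,
    "corner_three": 600,
    "above_break_three": 1500,
}
print(f"XPPS allowed: {expected_pps(opponent_attempts):.3f}")  # -> 1.027
```

Note that the function never looks at how well those shots were actually defended, only at where they were taken; that observation becomes important later in this article.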

When using XPPS as a tool to analyze defense, teams that give up a high number of high-value shots (corner threes, restricted-area shots, above-the-break threes) have a higher (worse) expected points per shot than teams that give up a higher number of low-value shots (non-restricted-area paint shots, mid-range shots). If you are an avid follower of Bucks Twitter, you have more than likely seen tweets like these used as a critique of the Bucks’ defense:

[Embedded tweets criticizing the Bucks’ XPPS]

As the Bucks’ defense began to falter toward the end of December, their dismal XPPS was used to illuminate the generous shot selection the Bucks were giving opponents. During the month of January the Bucks gave up 12.1 three-pointers per game, more than any other team in the league over that span. The assumption was that as a team’s expected points per shot got worse, its defense would falter as well. Fast-forward to the end of the regular season, and the Bucks still sported one of the worst XPPS marks in the NBA while producing only minor improvements on the defensive end. But that all changed in Milwaukee’s first-round playoff series against the Raptors.

The Bucks held Toronto to 83, 106, 77, and 87 points in the first four games against a team that had the 6th-best offensive rating in the NBA during the regular season. Then came a miserable Game 5 performance, in which Toronto comfortably picked apart the aggressive Bucks defense while finding an abundance of open corner three-point attempts. Right on cue, the analytics side of Bucks Twitter published spreadsheets showing Toronto’s ability to find high-value shots against the Bucks throughout the series. And while Milwaukee still maintained the 2nd-best defensive rating in the playoffs to that point, it also allowed the worst change in XPPS of any playoff defense. This compelled me to dig deeper into the numbers for an explanation of the disconnect between the Bucks’ poor XPPS and their strong defensive rating in the playoffs. I calculated every team’s expected points per shot and compared it side by side with its actual points per shot. Below are the results I found.

The Faults of Expected Points per Shot

https://docs.google.com/spreadsheets/d/1H5dqCdlUhCkptEciJmIelw-NKQB_TAEm_qDyNl7-LmE/pubhtml

This spreadsheet compares each team’s expected points per shot and actual points per shot side by side. DRTG stands for Defensive Rating, a holistic measure of how a team’s overall defense is performing.

When using XPPS, or any metric that calculates an expected value, there must be a correlation between the expected value and the actual value one is trying to predict. While comparing expected points per shot with actual points per shot, I found that there seemed to be little to no correlation between the numbers. To test this, I ran a regression analysis to find the R-squared value. R-squared is a statistical measure of how close the data are to the fitted regression line: a value of 0 means the independent variable explains none of the variation in the dependent variable, while a value of 1 means it explains all of it. To find the R-squared value I plotted each team’s expected points per shot against its actual points per shot, giving a graph that looked like this:

This graph shows each team’s expected points per shot plotted against its actual points per shot. For example, the dot furthest toward the bottom represents Golden State (1.031 XPPS, 0.974 actual PPS).

As shown above, the R-squared value was 0.07089, meaning there was almost no correlation whatsoever between expected points per shot and actual points per shot. Additionally, I found no difference between the league-average XPPS (1.029) and the average XPPS of the top 10 teams by defensive rating (1.029); in other words, the top 10 defenses in the NBA had the same expected points per shot as the bottom 20. I was shocked to find that the chief statistic used as a critique of Milwaukee’s defense was so deeply flawed, with almost no predictive value for identifying a good defense.
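For anyone who wants to reproduce this, here is a sketch of the R-squared computation in Python using scipy. The first pair is Golden State’s real values from the chart above; the remaining pairs are placeholders standing in for the other 29 teams, whose values live in the linked spreadsheet.

```python
from scipy import stats

def r_squared(expected, actual):
    """Fit actual PPS on expected PPS by least squares; return R-squared."""
    fit = stats.linregress(expected, actual)
    return fit.rvalue ** 2

# (XPPS, actual PPS) per team. Golden State's pair (1.031, 0.974) is real;
# the rest are placeholders -- the full 30-team data is in the spreadsheet.
xpps       = [1.031, 1.033, 1.025, 1.028, 1.030]
actual_pps = [0.974, 1.021, 1.008, 0.995, 1.012]

print(f"R-squared: {r_squared(xpps, actual_pps):.5f}")
```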

To dig deeper into the reasons behind the disparity between teams’ expected points per shot and their actual points per shot, I looked at two teams that finished in the top 10 in both defensive rating and actual points per shot but in the bottom third in expected points per shot. The goal of this exercise is to show how teams that gave up high-value shots were still able to field successful defenses.

The following tables give a side-by-side breakdown of the differences between the expected points per shot and the actual points per shot at each location. It is important to remember that expected points per shot is built on the league-average field goal percentage (FG%) at each location. As a result, XPPS does not factor in the opponent’s actual field goal percentage; it assumes every team gives up a league-average FG%. So, if a team wants to lower its expected points per shot, it must force opponents to take a higher share (frequency) of their shots from low-value locations (the non-restricted-area paint and mid-range) while limiting shots from high-value locations (the restricted area, corner threes, and above-the-break threes).
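To make that limitation concrete, here is a tiny hypothetical example: two defenses allowing an identical share of shots at the rim receive identical XPPS contributions from that zone, no matter how differently their opponents actually shoot. The FG% figures are invented for illustration; only the 1.222 league-average value comes from this article.

```python
LEAGUE_RIM_PPS = 1.222   # league-average PPS in the restricted area
rim_share = 0.35         # both hypothetical teams allow 35% of shots at the rim

# XPPS sees only shot frequency, so both teams get the same contribution:
xpps_rim_a = rim_share * LEAGUE_RIM_PPS
xpps_rim_b = rim_share * LEAGUE_RIM_PPS
assert xpps_rim_a == xpps_rim_b

# Actual PPS uses each team's real opponent FG% at the rim (hypothetical
# 55% vs. 65%), and the two defenses now look very different:
actual_rim_a = rim_share * (0.55 * 2)   # 0.385 points per shot allowed
actual_rim_b = rim_share * (0.65 * 2)   # 0.455 points per shot allowed
```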

To best understand these tables one must understand both ∆ in PPS relative to the league-average PPS and ∆ in PPS relative to the team’s actual PPS. The numbers given in the ∆ in PPS columns are the change from the league-average PPS at each location. For example, if Golden State has a ∆ (change) of 0.0102 in its restricted-area PPS, it is allowing 0.0102 points per shot more than the league average (1.222) from the restricted area. This breaks down which locations are subtracting from (helping) or adding to (hurting) a team’s overall XPPS number.
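A short sketch of that per-location breakdown. The league-average restricted-area value (1.222) and the +0.0102 Golden State delta come from this article; every other number is an illustrative placeholder.

```python
# League-average PPS by zone (restricted-area value from the article; the
# others are illustrative placeholders).
LEAGUE_PPS = {"restricted_area": 1.222, "paint_non_ra": 0.80,
              "mid_range": 0.79, "corner_three": 1.16,
              "above_break_three": 1.05}

# A team's actual PPS allowed by zone (hypothetical, except that the
# restricted-area number reproduces Golden State's +0.0102 delta).
TEAM_PPS = {"restricted_area": 1.2322, "paint_non_ra": 0.78,
            "mid_range": 0.75, "corner_three": 1.19,
            "above_break_three": 1.02}

# Positive delta = allowing more than league average from that zone
# (hurting the overall number); negative = helping.
for zone, league in LEAGUE_PPS.items():
    delta = TEAM_PPS[zone] - league
    print(f"{zone:>18}: {delta:+.4f}")
```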

Golden State Warriors — 18th in Expected Points per Shot — 1st in Actual Points per Shot

The Golden State Warriors, who finished first in actual points per shot and second in defensive rating in the regular season, ranked just 18th in expected points per shot. Golden State allowed a higher-than-league-average share of shots in the restricted area and at the above-the-break three while also preventing opponents from shooting from the low-value mid-range. As a result, the Warriors finished slightly worse than the league average in expected points per shot. So why, if the Warriors did a poor job of forcing opponents into bad shots, did they still finish with the best points per shot allowed in the NBA? The answer is quite simple: Golden State was elite at forcing misses at every location on the floor. Opponents shot the worst percentage in the league on both above-the-break threes and in the non-restricted-area paint. Additionally, opponents shot 2nd worst from the corner, 5th worst from the restricted area, and 7th worst from mid-range. It doesn’t matter where opponents shoot against Golden State; it’s likely going to be a miss.

Memphis Grizzlies — 21st in Expected Points per Shot — 6th in Actual Points per Shot

The Memphis Grizzlies have a particularly interesting shot profile, as it is probably the closest to the Milwaukee Bucks’ while also yielding positive results. The Grizzlies allowed 8.9% of opponents’ shots to come from the corner three, the 4th-highest frequency in the league behind only Atlanta, Indiana, and Milwaukee. Additionally, Memphis did not do much to stop opponents from shooting well from the corner, allowing 38.6% from that location (the Bucks allowed 38.8%). Opponents also took 26.4% of their shots (3rd most in the NBA) from the above-the-break three. By allowing these shots, Memphis finished with the 9th-worst expected points per shot. But giving opponents a high volume of three-pointers was by design: the Grizzlies packed the paint and held opponents to taking only 30.2% (6th lowest) of their shots in the restricted area, while also holding opponents to a measly 37.8% shooting from mid-range and 40.8% from the paint. This was more than enough to make up for the abundance of three-pointers, as Memphis finished 6th in actual points per shot.

Milwaukee — 29th in Expected Points per Shot — 18th in Actual Points per Shot

As mentioned above, Milwaukee finished dead last in the frequency of corner threes allowed while letting opponents shoot 38.8% (17th in the NBA) from there. Moreover, Milwaukee allowed 35.1% of opponents’ shots to come at the rim, 3rd worst in the league. Permitting the most and 3rd-most shots from the two most valuable locations on the floor is, no matter your opinion on analytics or XPPS, a recipe for failure unless you are creating an obscene number of turnovers or protecting the rim at an elite level. Milwaukee’s ability to force a low percentage from mid-range and a low three-point percentage from the above-the-break three helped it finish 18th in actual points per shot.

Given all this information, it is fair to conclude that one must tread very carefully when using XPPS as a whole to analyze a team’s defensive aptitude. If a team has a high expected points per shot, it is not fair to suggest its defense is performing poorly based only on that number. There is plenty of evidence that a team with a poor XPPS can not only survive but thrive given the right circumstances. That being said, a team struggling on the defensive end may benefit from using XPPS to see whether it is giving up too many high-value shots and what adjustments could be made based on those numbers. Also, using expected points per shot as a whole, without seeing the breakdown by location, is not useful, as Memphis, Golden State, and other top-performing defenses prove a team can succeed in spite of a poor XPPS.

The debate over analytics in the NBA is far from over, and the full inclusion of advanced statistics within the league is far from a foregone conclusion. Analytics may paint one part of the picture, but old methodologies such as the eye test are not dead and gone. The new era of statistical analysis demands thorough critique of new statistics before conclusions are drawn from them. Until there is ample evidence backing up advanced statistics such as expected points per shot, the NBA community must proceed with caution when using them to draw conclusions.

 
