If You Thought Playing NBA Defense Was Hard, Try Quantifying It
Nikola Jokic may be the MVP, but is he a good defender? The truth is, the numbers don’t really know. The analytics revolution has changed basketball, but everyone—armchair statheads, team quants, agents, general managers—still can’t quite figure out how to properly measure the defensive side of the ball.

Nikola Jokic has been a terrible defender this season. The Nuggets center might rank among the NBA’s most magical offensive creators, but according to NBA Shot Charts’ regularized adjusted plus-minus (RAPM), an advanced metric that aims to encapsulate a player’s holistic impact, Jokic is 1.2 points per 100 possessions below average on defense. That ranks him in just the 8th percentile among players with at least 1,000 minutes.
Or, actually, Jokic has been a roughly average defender this season. According to FiveThirtyEight’s RAPTOR, another advanced metric that aims to encapsulate a player’s holistic impact, Jokic is 0.8 points above average on defense, ranking in the 65th percentile.
Sorry, scratch that once again: Jokic has been an excellent defender this season. According to ESPN’s real plus-minus, yet another advanced metric that aims to encapsulate a player’s holistic impact, Jokic is 1.7 points above average on defense, ranking in the 80th percentile—ahead of All-Defensive Team stalwarts like Giannis Antetokounmpo, Jimmy Butler, and Draymond Green.
Jokic, for his part, believes he’s somewhere in between. “I think I’m in the middle,” he says. “I think I’m not a great defender. I’m not the worst defender.”
If that all sounds confusing, welcome to the world of advanced defensive stats—the toughest nut for the analytics movement to crack. The challenge is not just a matter of evaluating Jokic’s MVP case this season, but of judging the defensive bona fides of every player in the league.
Armchair analysts have spent years home-brewing their own defensive metrics in search of a better way to evaluate and compare players. Front offices have crunched the numbers in the hopes of finding an edge in personnel decisions. Players themselves could benefit, as more accurate numbers could lead to better compensation for top defenders. There is every incentive for advanced defensive stats to develop—but at least for the moment, these metrics, both public and private, are not up to the task.
“No one has mastered how to analyze defense,” says Daryl Morey, the 76ers’ president of basketball operations. “It’s very complicated.”
To get a general sense of the state of these stats, we surveyed more than a dozen NBA analysts, comprising both team employees and public writers, most of whom used to work for teams. On a scale from 1 to 10, they rated the quality of public defensive stats as just a 3.6, on average. Their perception of teams’ defensive metrics behind the scenes wasn’t much better.
[Chart: Analysts Answer: How Good Are NBA Stats?]
“From an analytical standpoint, you’re kind of on a wing and a prayer,” says a personnel executive for one NBA team, who was among the most pessimistic of the whole group, rating public metrics as a 1.5 out of 10. “Our defensive models throughout the league are probably the part we’re weakest in. Nobody really has good defensive models.”
Baseball has used wins above replacement models with confidence for years. But basketball’s single-number metrics have much wider error bars, especially on defense.
“It would be great if we could just be like, ‘Hey, wins above replacement, plug that in,’” the personnel executive says. “But that’s a little bit fantasy-basketball-ish.”
In the pace-and-space era, as NBA offenses set new efficiency records every season, playing defense is incredibly hard. But measuring defense might be even harder.
The lack of defensive data dates back to the earliest days of basketball box scores. In the 1940s and 1950s, the only recorded statistics were points, made shots, and free throws—nothing about defense at all. Even as other statistics such as assists and missed shots entered the daily newspaper agate, defense lagged behind: The NBA didn’t start recording blocks and steals until 1973-74, meaning that information is missing for early legends like Bill Russell and Wilt Chamberlain.
Early statistical records focused on whatever was simplest to count. All defensive concepts—most of which are nebulous and abstract, like proper rotations or effective recoveries—were essentially ignored. “There aren’t a lot of things that are easily measurable,” says Daniel Myers, the creator of the advanced box plus-minus stat (BPM), which uses box score statistics to generate player ratings. “That’s why we’re still having these discussions 70 years after they started playing.”
Only four defensive stats eventually found their way into a traditional box score: rebounds, fouls, blocks, and steals. And they each have issues.
Defensive rebounds are “noisy to the point of being very nearly useless,” Myers says. That’s because they’re largely contingent on a player’s role rather than skill; only 24 percent of defensive rebounds leaguewide are contested, per tracking data, and there isn’t a great relationship between a player’s individual rebounding stats and his effect on his team’s rebounding stats. Andre Drummond, for instance, is the best defensive rebounder in NBA history, by percentage. But Drummond’s teams have often allowed fewer offensive rebounds, and fewer points, with him on the bench.
Fouls are also tricky—“borderline useless,” Myers says, because they could merely be a signal of greater defensive activity. “So that leaves you with steals and blocks. And those are both very rare statistics.”
This season, a block or steal occurs about once every eight possessions leaguewide; spread across the five defenders on the floor, that works out to just one block or steal per 40 possessions for each player. That leaves 39 out of every 40 possessions in which the defender in question doesn’t record either statistic, but he’s still contributing, or not contributing, somehow. This imbalance also hampers other box-score-based metrics like PER, whose creator, John Hollinger, has noted that it underrates “defensive specialists … who don’t get many blocks or steals.”
All of these overlapping problems mean that judging a player’s defense by his box score statistics is, in the words of Ben Falk, a former executive for the Trail Blazers and 76ers, “like squinting at a blurry picture and trying to find something out of it.”
Albert Einstein never tried to contest Wilt at the rim, but the old chestnut attributed to him describes the situation perfectly: “Not everything that can be counted counts, and not everything that counts can be counted.”
Robert Covington has never earned a first-place vote for Defensive Player of the Year, but his teams have always played better defense with him on the floor, from Philadelphia and Minnesota to Houston and now Portland. The combo forward collects a healthy share of steals and blocks, but his broader defensive impact goes unnoticed by the box score.
Consider this play from Game 7 of last postseason’s first-round series between the Rockets and Thunder: Covington navigates a screen, helps on Chris Paul to prevent the future Hall of Fame point guard’s drive, and pokes the ball away mid-dribble—all the dirty work on a stop that’s recorded as a P.J. Tucker steal.
Covington has ranked first or second in the league in deflections per game in four of the past five seasons. But in a traditional box score, he doesn’t receive a single stat from this play. That’s where the advanced metrics under the “adjusted plus-minus” umbrella come in.
The play-by-play data necessary to generate on/off numbers became available in the 1996-97 season. This information has since become one of the two main sources for adjusted plus-minus, as it answers the basic question: Did this player’s team allow more or fewer points with him on the court? Over a large enough sample, this form of analysis—which typically uses sophisticated statistical techniques to control for the quality of a player’s teammates and opponents—can illustrate a single player’s impact on his opponents’ point totals. According to the RAPM figures at NBA Shot Charts, for instance, the NBA’s most effective defenders over the past five seasons include a couple of Defensive Player of the Year winners and Covington; the least effective are noted sieves on the perimeter.
[Chart: Best and Worst Defenders by RAPM, 2016-21]
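For readers curious about the machinery, here is a minimal sketch of how a metric in the RAPM family is typically fit: a ridge regression on stint-level scoring, with one offensive and one defensive indicator column per player. The data below is synthetic and the penalty strength is arbitrary; this illustrates the general technique, not NBA Shot Charts’ actual pipeline.

```python
# Minimal RAPM-style sketch: ridge regression on stint-level scoring,
# with one offense column and one defense column per player.
# Synthetic data throughout; not any site's actual pipeline.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_players, n_stints = 50, 20_000

# Each stint: five offensive player ids, five defensive player ids, and
# the offense's points per 100 possessions during that stint. (Duplicate
# ids within a random lineup are possible here; fine for a sketch.)
off_ids = rng.choice(n_players, size=(n_stints, 5))
def_ids = rng.choice(n_players, size=(n_stints, 5))
pts_per_100 = rng.normal(112, 12, size=n_stints)  # stand-in for real outcomes

# Design matrix: the first n_players columns flag who is on offense,
# the next n_players columns flag who is on defense.
X = np.zeros((n_stints, 2 * n_players))
for i in range(n_stints):
    X[i, off_ids[i]] = 1.0
    X[i, n_players + def_ids[i]] = 1.0

# The ridge penalty shrinks low-minute players toward average -- the
# "regularized" in RAPM. The penalty's strength is a tuning choice.
model = Ridge(alpha=3000.0)
model.fit(X, pts_per_100)

# Defensive rating: a negative coefficient means the offense scores less
# with this defender on the floor, so flip the sign (positive = good).
d_rapm = -model.coef_[n_players:]
print("Best synthetic defender:", int(d_rapm.argmax()), round(float(d_rapm.max()), 2))
```

In real applications, stints are weighted by possessions played and pooled across seasons; with a single season of data, the regression simply has too little to work with, a weakness that comes up below.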
But for all its theoretical promise, this kind of statistic is not perfect in practice. First, because it focuses solely on the outcome of a possession rather than the process, it can operate as something of a black box. “It answers the what, like, ‘What is a player’s impact?’” says Luke Bornn, who worked as the Kings’ vice president of strategy and analytics from 2017 to 2020. But “it doesn’t help at all for why.”
Take Covington, for instance. He rates highly by on/off data, but as last season’s playoffs demonstrated, he’s better suited as a savvy helper than a lockdown one-on-one defender. Opposing stars like Paul and LeBron James hunted him on switches and bulldozed him on drives to the rim. The results-based data by itself won’t offer that nuance.
The other problem is that on/off data needs very large samples to weed out confounding variables. “There are definitely areas where these stats can be tricked,” says Seth Partnow, the Bucks’ director of basketball research from 2016 to 2019. Some teammates might spend so much of their time on the court together that their individual contributions can’t be separated. Bad backups could make the starters look artificially better, or vice versa. And the stat is subject to shooting flukes, like in 2016-17, when the Spurs allowed more points with MVP candidate Kawhi Leonard on the floor because opponents shot a whopping 7.4 percentage points better on 3-pointers, according to Cleaning the Glass. One reason on/off metrics don’t love Jokic this season is that Denver’s opponents are shooting 5.9 percentage points better on 3s with him on the floor—the largest differential for any rotation player.
“RAPM at a single-season level is basically useless, I’m sorry to say,” says Myers, the BPM creator.
Then came the cameras, which provide the second source of data for adjusted plus-minus stats. Ahead of the 2009-10 season, SportVU, a company created by Israeli scientists with backgrounds in missile tracking, installed cameras in the nosebleeds of four NBA arenas: Dallas, Houston, Oklahoma City, and San Antonio. The system records the location of every player and the ball 25 times per second of gameplay, and by 2013-14 it had made its way into every NBA arena. This advanced tracking information—which is now supplied by Second Spectrum—provides some undeniable advances in measuring defense, such as quantifying rim protection for big men.
The problem is there’s too much data. Over the course of the usual 82-game regular season, raw tracking provides teams with about a billion discrete locations, via about 2 billion individual data points (x- and y-coordinates for players; x-, y-, and z-coordinates for the ball). Tracking, says Shane Battier, the Heat’s vice president of basketball development and analytics, “can lead you down more rabbit holes.”
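The scale of those figures is easy to verify with back-of-the-envelope arithmetic, assuming nominal 48-minute games and ignoring overtime:

```python
# Back-of-the-envelope check on the tracking-data volume quoted above:
# 25 frames per second across a full 1,230-game regular season.
games, minutes, fps = 1230, 48, 25
frames = games * minutes * 60 * fps   # camera snapshots per season
locations = frames * 11               # 10 players + 1 ball per frame
coords = frames * (10 * 2 + 3)        # x,y per player; x,y,z for the ball

print(f"{frames:,} frames")           # 88,560,000
print(f"{locations:,} locations")     # 974,160,000  (~1 billion)
print(f"{coords:,} coordinates")      # 2,036,880,000 (~2 billion)
```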
Working with all of this data to produce meaningful metrics isn’t easy. Before going team-side, Bornn spent time at Harvard, where his research group focused on sports statistics. The group “had some of the brightest PhD students in computer science and statistics around, and for us, it was still a significant struggle,” he says, “so I don’t think it’s something that’s really easy to tackle.”
Ultimately, the same issue that limits traditional box scores and on/off data limits the missile-tracking cameras: They haven’t figured out how to measure crucial defensive nuances. Analytics can capture what happens when a shot goes up, but more often falter when considering every other part of a possession. “It’s all the times,” the NBA personnel executive says, “where, did my center show effectively and then recover, and that bought us an extra half-second for the weakside corner guy to get back out to the shooter, which then led to a good defensive possession?”
Intangibles like on-court communication, a key to any stout defense, also aren’t considered. Multiple sources wished for audio tracking for big men in particular, to discern how they contribute by calling out strategies and plays. Even in baseball, which can measure almost anything, defensive metrics for the catcher position don’t encapsulate intangibles like calling pitches and syncing with the pitcher on the mound. Marc Gasol—named by multiple sources as the first player they’d want to track with audio—is the NBA’s Yadier Molina in this analogy.
And most of all, even the most advanced tracking systems can’t capture a player’s fit with his coaches, teammates, and defensive system. “Even if we had all the data we wanted,” says ESPN’s Kevin Pelton, “I don’t know if we’d ever be able to isolate an individual’s impact as easily on defense as on offense, because so much of it is scheme-dependent.”
Until Second Spectrum’s tech can figure out a way to measure intent—as in, what was the defender supposed to do?—this massive facet of defense will remain missing from all the numbers.
For instance, Battier was a stud defender in the early statistical revolution, and he recalls one game in which he executed his game plan against Carmelo Anthony, but the star scorer still went off. “He scored 50 points on my head, was absolutely torching me, and he had zero paint points,” Battier says. His defensive process “was 100 percent right,” he says. “The outcome, which you can’t control, was an unbelievable performance.”
In baseball, a sport with much less situational dependency, a poor center fielder in Dodger Stadium will also be a poor center fielder in Petco Park. Yet in the NBA, a big man might struggle in an aggressive blitzing scheme but post stellar numbers if he plays drop coverage. In 2017-18, ESPN’s real plus-minus ranked Brook Lopez, then with the Lakers, as the worst defensive center in the league. Two seasons later, playing for a new team with a new coach in a new style, he ranked third, was named to an All-Defensive Team, and helped the Bucks post one of the stingiest defensive marks in league history.
Lopez represents an extreme example, but across the NBA player pool, offensive RPM has a much stronger correlation than defensive RPM from one season to the next. In other words, a good offensive player tends to stay good, and a bad one bad, but defensive ratings are more randomly scattered over time—especially for players who change teams.
(To read this chart, know that correlation is measured on a scale in which 0 means no relationship and 1 means a perfect relationship, so higher numbers are better.)
[Chart: Year-to-Year Correlation in Real Plus-Minus]
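Replicating that comparison is straightforward once player seasons are paired up. A minimal sketch follows; the file and column names are hypothetical placeholders, not a real published data set.

```python
# Year-to-year correlation of real plus-minus, offense vs. defense.
# The CSV and its column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("rpm_player_seasons.csv")  # one row per player, paired seasons

# Pearson r: 0 = no relationship, 1 = a perfect relationship.
r_off = df["orpm_2019"].corr(df["orpm_2020"])
r_def = df["drpm_2019"].corr(df["drpm_2020"])

print(f"Offensive RPM year-to-year r: {r_off:.2f}")
print(f"Defensive RPM year-to-year r: {r_def:.2f}")
# Per the chart above, r_off lands well above r_def:
# offense is far more stable from one season to the next.
```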
“Trying to do an all-in-one defensive metric,” says one lead analyst for a team, “is … pretty wonky and not that worthwhile, even on the team side with better data.” Of course, that isn’t stopping everyone from trying.
The search for better defensive numbers has a kind of Big Short dynamic, with all manner of analysts working independently toward a similar goal. Some of those researchers work for teams; others used to work for teams but now conduct statistical experiments in the public sphere; and others are like Myers, the BPM creator, who not only doesn’t work for a team but isn’t even a trained statistician. “I never took a statistics class in my life,” he says.
Yet for more than a decade, Myers has made a hobby out of tinkering with adjusted plus-minus stats because, he says, “I think it’s fun.”
Myers acknowledges that this sort of nerdy entertainment befits a man who works as a senior bridge engineer, but it also satisfies an impulse for many sports fans to compare players. “I like to look at career arcs and see how players progress,” he says. That’s one of the main reasons BPM includes only box-score statistics—so it can apply throughout NBA history equally as well as in the present, even without fancy tracking figures.
Teams, however, have the edge in pursuing better metrics for two reasons. The first involves the scheme-fit-intent boondoggle: Even if sussing out the defensive goal on every play is “impossible,” according to one team’s lead analyst, knowing the intended scheme at least allows a team to judge its own players more accurately.
In the public, says Partnow, the former Bucks executive, who now writes for The Athletic, “we can sort of surmise scheme from very detailed study—but it’s a whole lot easier if you can just go downstairs and ask a guy in the film room.”
The second reason is advanced tracking, as teams receive the entire output of Second Spectrum’s cameras, while public analysts can examine only certain portions of summary data published on NBA.com. “I would just love to get my hands on the raw x/y data,” says Taylor Snarr, the creator of estimated plus-minus and a former analyst for the Jazz, with a laugh. “The player tracking unlocks a lot of possibilities.”
Yet for now, that’s all they are: possibilities.
Tracking provides gobs of data, but different analysts have different opinions on what information matters, and what is random noise. That means there’s no one right answer when choosing how to build a comprehensive defensive metric. “There’s a lot of art in addition to the science of figuring out what goes in,” says FiveThirtyEight’s Neil Paine, who helped develop that site’s RAPTOR statistic.
For instance, one metric might factor in players’ shooting percentage allowed when they’re the closest defender on jumpers. A second metric might leave out that data, concluding that it’s terribly tricky to show that teams can influence opponent 3-point percentages, let alone individual players. (An analysis by Krishna Narsu in 2016 found that for individuals, “outside of six feet, everything appears to be mostly random.”) The two systems will then arrive at divergent opinions about players’ overall defense based on what information they choose to include.
Looking at five different advanced stats under the broader adjusted plus-minus umbrella, we find many wildly different assessments of players’ defensive performances. Jokic isn’t alone this season—or even the most extreme case. This chart shows the players with the widest variation across advanced stats this season (minimum 1,000 minutes). The stats seem to be struggling in particular with distributing credit for the Knicks’ and Lakers’ stout defenses, with six of the top 10 coming from those two teams:
[Chart: Players With Largest Variation in Advanced Defensive Stats, 2020-21]
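There is no single official way to measure that disagreement. One reasonable approach is to put every metric on a common percentile scale and then take each player’s spread across the systems. A sketch, with hypothetical file and column names standing in for the five metrics:

```python
# Quantifying disagreement across defensive metrics via percentile spread.
# The CSV and the five metric column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("defense_metrics_2021.csv", index_col="player")
metrics = ["rapm_d", "raptor_d", "rpm_d", "bpm_d", "epm_d"]

pct = df[metrics].rank(pct=True)  # percentile rank within each metric
spread = pct.std(axis=1)          # per-player disagreement across systems

print("Most disputed defenders:")
print(spread.sort_values(ascending=False).head(10))
```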
Determining how to make sense of these disparate numbers is an issue for All-NBA voters or analysts predicting next season’s title race—but the problem is most acute for players and front offices whose futures depend on the numbers.
With a few notable exceptions—Rudy Gobert signed a $205 million extension before this season—the league’s top defenders are generally less likely to garner big contracts. “A substantial part of that is just because we don’t know how to evaluate defense,” the personnel executive says.
One player agent agrees with this sentiment, recalling contract negotiations when he’s tried to advocate for his clients’ defensive ability. “I try and quantify some of it—but some of it I can’t,” he says, because that information isn’t available or widely accepted. “Good defenders are inherently undervalued.”
And for the front offices making the decisions on how much to pay players, the lack of accurate and rapid defensive assessments is just as much of an issue. Players aren’t static, but the slow stabilization of adjusted plus-minus stats means it’s difficult to determine whether changes in results are due to real development or aging, or mere reversion to the mean.
The challenge is most visible with young players adjusting to the NBA. A senior front office member for one team asks, “If a guy’s adjusted plus-minus is better in Year 2, two months into the season, is that because he’s gotten better, or is that just noise? It’s really hard.” While acknowledging that adjusted plus-minus figures over a span of many seasons are better in theory, the executive notes they’re not as useful in practice for a front office that needs to make personnel decisions.
“I don’t always have five years to wait to make an evaluation on a guy. I have to sign him to a rookie extension after Year 3,” the executive says. “I mean, it’s great for a science experiment or a big research paper on what’s important for defense, but in real life, we don’t have that luxury. And frankly, over five years, we may not still have our jobs.”
So what should analysts, both public and private, do when attempting to measure NBA defense? Advanced metrics still offer some value, sources say, because scouting and film study are just as imprecise. Some players are overrated by the eye test because they look like they’re working hard, but they might have to work harder because they’re out of position. Other players might contribute in small, subtle ways that add together but are noticeable only in large data sets. So the analysts who spoke for this piece offer a few ways forward for better interpreting individual player defense.
First, use a “wisdom of the crowds” approach, blending different metrics to find a consensus average. “Every metric has players that are overrated. Every metric has players that are underrated,” Myers says. “If you look at all of them together, hopefully the blind spots offset each other.” If one metric thinks Jokic is terrible, one thinks he’s average, and one thinks he’s excellent, he’s probably somewhere in the middle, closer to average. (Jokic himself agrees!)
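A crude version of that blend is easy to sketch: standardize each metric so that no single system’s scale dominates, then average the results. This reuses the same hypothetical file and column names as the disagreement sketch above.

```python
# "Wisdom of the crowds" blend: z-score each metric, then average.
# Same hypothetical CSV and column names as the earlier sketch.
import pandas as pd

df = pd.read_csv("defense_metrics_2021.csv", index_col="player")
metrics = ["rapm_d", "raptor_d", "rpm_d", "bpm_d", "epm_d"]

z = (df[metrics] - df[metrics].mean()) / df[metrics].std()
blend = z.mean(axis=1)  # one "terrible," one "average," and one "excellent"
                        # rating land near the middle, as with Jokic

print(blend.sort_values(ascending=False).head(10))
```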
And if all the metrics are in general agreement, such as with Gobert, who’s at the top of all this season’s defensive leaderboards, that unanimity should inspire more confidence in the conclusion. This chart shows the players with the most consistent defensive ratings this season—the inverse of the above chart showing the widest variations.
[Chart: Players With Smallest Variation in Advanced Defensive Stats, 2020-21]
Second, focus as much on localized data and discrete skills as possible. Knowing whether Jokic is a terrible or average or excellent defender overall matters for MVP voters. But for teams, catch-all, single-number judgments are generally less useful because they’re trying to determine whether a given player will fit in a specific role. For example, teams looking at a big man may zoom in on his performance at the rim or against the pick-and-roll.
And third, be grateful for any advances in NBA analytics compared to the college game. “Those things we have on college players can be even less than what the public has on NBA players,” the senior front office member says.
After a moment of thought, this executive rates the quality of college defensive stats as a 1 or 2 on the 1-10 scale. That makes the blurry image of NBA metrics look like the Mona Lisa by comparison.
The best advice overall might be to remain humble when assessing player defense, with numbers, film study, or ideally a combination of the two. Analysts have to approach the project as “knowing that you’re not really going to get it perfectly right,” says FiveThirtyEight’s Paine. “You’re just trying to get close.”
“Close” is the best we can do, for now—and maybe forever. Some analysts note that advanced metrics have made enough strides in the past decade that they don’t want to preemptively limit what new advances might arrive in the years to come. But many more sources, when asked about the 1-10 scale, say that they don’t think a “10” is ever possible on defense. “There’s probably some kind of asymptote,” says Falk, who now runs the Cleaning the Glass website. “We’re not going to get better at it once we approach a certain limit.”
We’re not at that limit yet, but there’s no escaping the fact that core aspects of defense are practically immeasurable. When it comes to defense, Myers says, “Truth is not a single number.”