When the ball is on its way to rookie WR George Pickens, the result is a passer rating of 102.2. Essentially, Kenny Pickett (passer rating 71.8) becomes Joe Burrow (102.8) when he throws to George Pickens. This post is about that gap…the delta between a team’s overall passer rating and its passer rating when throwing to a particular player.

Through Pro Football Reference’s Stathead query tool you can get the passer rating when a player is targeted. This is interesting and valuable information, but my one small problem with it is that a player’s stats are naturally inflated if he plays with a good quarterback and deflated if he plays with a bad one (really no different than most other receiving stats, right?). To adjust for this, I normalized each player’s targeted passer rating against his team’s passer rating with simple math: I took the difference between the targeted rating and the team rating, then divided by the team rating, to show the percentage above or below the team.
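The normalization is one line of arithmetic. Here it is as a tiny Python sketch; the function name `tpri` anticipates the name introduced below, and using Pickett’s individual 71.8 as the team baseline is my simplification for illustration (the actual denominator is the team’s overall rating):

```python
def tpri(targeted_rating, team_rating):
    """Targeted Passer Rating Index: percent above or below the team's rating."""
    return (targeted_rating - team_rating) / team_rating * 100

# Illustration with the Pickens/Pickett numbers from the post
# (using Pickett's 71.8 as a stand-in for the team rating).
print(round(tpri(102.2, 71.8), 1))  # 42.3
```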
I’ll share an example of how this can flip the narrative when comparing two players. I already used Pickens, who by the way is the number-one-ranked receiver using this metric, so I’ll use a different example.
Chiefs WR JuJu Smith-Schuster has a targeted passer rating of 99.4. A solid rating…a QB with a 99.4 would rank 8th in the NFL.
Rookie Jets WR Garrett Wilson comes in lower but still posts a decent 94.4, which would rank as the 11th-best QB rating.
Smith-Schuster > G.Wilson
These stats in a vacuum would show Smith-Schuster to be more successful when targeted. But when we add the second component, the overall team passer rating (which in many cases is just the one starting QB), things start to look different.
The Chiefs have a 99.4 rating when targeting Smith-Schuster, but their overall rating for the year is 106.6 (Mahomes 107.3). Smith-Schuster, with a solid 99.4, is actually putting up below-average numbers based on the QB throwing to him. So Smith-Schuster’s TPRI, or Targeted Passer Rating Index (for lack of a better term), becomes -6.8%.
G. Wilson’s 94.4, although lower than Smith-Schuster’s 99.4, looks really good when you factor in that most of his targets had Zach Wilson on the other end of the throw. His TPRI (Targeted Passer Rating Index) is 10th best in the league at +23.7% (94.4 compared with the team rating of 76.3).
Smith-Schuster > G.Wilson becomes G.Wilson > Smith-Schuster
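A quick arithmetic check of the flip, using the numbers above:

```python
# Percent above/below team rating: (targeted - team) / team * 100
smith_schuster = (99.4 - 106.6) / 106.6 * 100  # Chiefs team rating 106.6
g_wilson = (94.4 - 76.3) / 76.3 * 100          # Jets team rating 76.3

print(round(smith_schuster, 1))  # -6.8
print(round(g_wilson, 1))        # 23.7
```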
With the math out of the way, here are the top 10 pass-catchers (min. 40 targets) by TPRI (Targeted Passer Rating Index):
Some notable players that I was surprised to see with a negative TPRI: CeeDee Lamb (-0.8%), D.K. Metcalf (-8.5%), Gabriel Davis (-8.5%), Chris Olave (-9.2%), Deebo Samuel (-27.4%).
Keep in mind, this is NOT a rating of who the best WR is. It’s one stat…another data point to add to the mix. Like almost every metric, it’s flawed. Perhaps it penalizes players too much for having a great QB or for being one of many above-average pass-catchers on the team. Or maybe it over-inflates an okay pass-catcher who is surrounded by trash pass-catchers on his team who bring down the quarterback’s passer rating. But I think it’s a more valuable metric than simply looking at passer rating when targeted without the additional context that I tried to incorporate.
Below are more tables showing every player with 40+ targets through 11/26/22, ranked from best to worst. I’m also showing tables for TE, RB, WR, rookies, and all combined.
If you like charts over tables, here is a data visualization showing each player’s targeted passer rating vs. their team’s passer rating, ranked by TPRI.
- I’m using a minimum of 40 targets, which introduces selection bias: the better players get more targets, while the worse players fall out of the sample because they aren’t targeted enough. This is why we see more players above 0.0% than below 0.0%.
- Three players on the list changed teams in mid-season (T.J. Hockenson, Chase Claypool, Christian McCaffrey). For these players I used a weighted team passer rating based on the number of games each player played for their two teams.
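For those three mid-season movers, the weighted denominator can be sketched like this (the example numbers are hypothetical, not the actual teams’ ratings):

```python
def weighted_team_rating(stints):
    """Games-weighted average team passer rating for a player who changed teams.

    stints: list of (games_played, team_passer_rating) pairs, one per team.
    """
    total_games = sum(games for games, _ in stints)
    return sum(games * rating for games, rating in stints) / total_games

# Hypothetical: 7 games with a 92.0-rated offense, then 4 games with an 85.0
print(round(weighted_team_rating([(7, 92.0), (4, 85.0)]), 1))  # 89.5
```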
Considerations for improvement:
- In the denominator I’m using the team’s passer rating, but I wonder if it should be the team’s passer rating excluding throws to the targeted player who is being measured. This is a ton more work and I don’t think much would change directionally, but something to think about.
- Thoughts about the metric? Would the index be better shown as a number instead of percentage (1.24 instead of +24%)?
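If anyone wants to try the exclude-the-player denominator, the standard NFL passer rating formula makes it mechanical once you have the team’s and the player’s target splits for attempts, completions, yards, TDs, and INTs. A sketch (the dict layout is just my assumption about how you’d store the splits):

```python
def passer_rating(att, comp, yards, td, interceptions):
    """Standard NFL passer rating (each component clamped to 0..2.375)."""
    clamp = lambda x: max(0.0, min(2.375, x))
    a = clamp((comp / att - 0.3) * 5)
    b = clamp((yards / att - 3) * 0.25)
    c = clamp(td / att * 20)
    d = clamp(2.375 - interceptions / att * 25)
    return (a + b + c + d) / 6 * 100

def team_rating_excluding(team, player):
    """Team passer rating with the measured player's targets subtracted out."""
    rest = {k: team[k] - player[k] for k in team}
    return passer_rating(**rest)

# Hypothetical season splits, just to show the mechanics
team = dict(att=400, comp=260, yards=3000, td=20, interceptions=10)
player = dict(att=80, comp=55, yards=700, td=6, interceptions=1)
print(round(team_rating_excluding(team, player), 1))
```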
Does this data confirm any gut feelings you’ve had about players you watch on a regular basis, particularly where the data goes against a player’s box score stats or general narrative?
I hope you find this data useful or interesting. I’ll add this to my project list to do an end-of-year version of this once the season is over. Thanks for reading.