# Number of interior offensive linemen (IOL) with a 75+ pass-blocking grade
2018: 41
2019: 28
2020: 16
2021: 16
2022: 19
2023: 8
# Number of tackles with a 75+ pass-blocking grade
2018: 37
2019: 30
2020: 35
2021: 30
2022: 33
2023: 22
# Number of edge rushers/interior defenders (Edge/DI) with a 75+ pass-rushing grade
2018: 33
2019: 42
2020: 46
2021: 42
2022: 43
2023: 44
This is the first year there are more 75+ pass rushers than 75+ pass blockers.
After looking into the rule changes before the 2019 season, I'm convinced the elimination of blindside blocks within the pocket is what spurred the large drop from 2018 to 2019. I think IOL still probably got away with it a bit in 2019, but by 2020 no one was doing it anymore, and IOL pass-blocking grades dropped as a result. This also explains why pass blocking by tackles was stable and consistent from 2018 to 2019: tackles rarely had the opportunity or need to execute blindside blocks.

I'm not sure what caused the further deterioration of interior line pass blocking from 2022 to this season, but by percentage it is the biggest single-year drop in the span I looked at. I'm inclined to believe the drop-off in tackle pass blocking this year is due to the severe drop-off in IOL pass blocking.
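For anyone who wants to check the claims against the counts listed above, here's a quick sketch that recomputes the year-over-year percentage changes and the rushers-vs-blockers comparison. All numbers are copied straight from the lists in the post; nothing here is new data.

```python
# Counts of 75+ graded players, copied from the lists above
iol     = {2018: 41, 2019: 28, 2020: 16, 2021: 16, 2022: 19, 2023: 8}
tackles = {2018: 37, 2019: 30, 2020: 35, 2021: 30, 2022: 33, 2023: 22}
rushers = {2018: 33, 2019: 42, 2020: 46, 2021: 42, 2022: 43, 2023: 44}

years = sorted(iol)

# Year-over-year percentage change in 75+ IOL pass blockers
for prev, cur in zip(years[:-1], years[1:]):
    change = (iol[cur] - iol[prev]) / iol[prev] * 100
    print(f"{prev}->{cur}: {change:+.0f}%")
# 2022->2023 comes out to -58%, the biggest single-year drop in the span

# First year with more 75+ pass rushers than 75+ pass blockers (IOL + tackles)
first = next(y for y in years if rushers[y] > iol[y] + tackles[y])
print(first)  # 2023: 44 rushers vs 30 blockers
```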
Does anyone have any theories on what is behind the drop in IOL pass blocking this year? I don't think there were any significant rule changes this offseason that would impact pass blocking, but maybe I missed something.
I included the number of pass rushers with a 75+ pass-rushing grade to show that there was a jump in 2019, likely due to the blindside block rule, but besides that it has remained stable and is not up this year despite offensive lines being rated lower.
TL;DR: trying to figure out why offensive lines are so shitty nowadays, I dug through PFF data and found that it's specifically IOL pass blocking that's in the gutter, and that the decline started in 2019 with the elimination of blindside blocks.
That is laughable. Based on your exact description of their methodology, there is only one reviewer, and if they disagree it's more of an arbitration between the original two scores than a review.
Peer-reviewed journals publish many pages' worth of data supporting their findings, which is then reviewed and critiqued by peers, often requiring additional experiments. Results are more often than not quantitative, and if you approach a journal with only qualitative data, it had better be rock solid or the reviewers will have a field day.
PFF is purely qualitative data that is pseudo-quantified. Reporting their methods doesn't make their methods good (this is true in peer-reviewed journals as well).
Any time you take qualitative source data and attempt to quantify it, the data should be questioned. Pseudo-quantification is fine as a supporting measurement, but if it's the primary measurement behind your conclusion, then you're in trouble. Just own it as qualitative and you'll get more respect.
How much cross-referencing does PFF do to make sure scores are consistent? I.e., do the same three people score all the OL in every game, or is one person assigned to each team, grading that team and the opposing team each week? Do they publish results of bias studies on their graders? Are some graders tougher on some teams than others? Or on some players than others (e.g., by position, star status, race/ethnicity, etc.)?
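The consistency question above has a standard statistical answer: inter-rater agreement measures like Cohen's kappa, which score how often two graders agree beyond what chance would produce. A minimal sketch, with the play grades entirely made up for illustration (PFF's actual grading scale and process are not public in this detail):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length lists of categorical grades."""
    assert len(a) == len(b)
    n = len(a)
    # Observed agreement: fraction of items graded identically
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement by chance, from each grader's marginal frequencies
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb.get(k, 0) for k in ca) / n**2
    return (observed - expected) / (1 - expected)

# Two hypothetical graders binning the same 10 pass-blocking reps (invented data)
g1 = ["win", "win", "loss", "neutral", "win", "loss", "neutral", "win", "loss", "win"]
g2 = ["win", "neutral", "loss", "neutral", "win", "loss", "win", "win", "loss", "win"]
print(round(cohens_kappa(g1, g2), 2))  # 0.68 — "substantial" but far from perfect agreement
```

If PFF published numbers like this per grader pair (and per team, position, etc.), the bias questions above would be answerable.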
The data is fine for fans who have no idea how else to judge a player within a given season, but it is severely lacking in many respects.