Front Office Football Central


QuikSand 04-11-2019 04:14 PM

Fun workplace math coincidence
 
At work today, a colleague delivered a document that included a summary of results compiled by several people. The bottom line is a "winning percentage," an understandably clumsy way to measure our organization's effectiveness.

Each of the contributors had a certain number of bills to track (a different number for each person) and recorded their results in a grid of four rows and three columns. For example, my grid looked like this, without the explanatory headings:

Code:

3  0  5
0  1  3
0  0  -
1  0  0


Now, without belaboring this too much, some of these cells represent good outcomes (wins) for my organization, and some represent bad outcomes (losses). In Excel format, the bad outcomes would be in cells A3, A4, B4, and C4.

In my small sample of 13 bills, I ended up with 12 wins... for a winning % of 12/13 or around 92%.
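
If you want to check that arithmetic outside of Excel, here is a quick Python sketch of the tally. The grid and the loss cells are exactly as above; the script itself is just a reconstruction for illustration, not the actual sheet.

Code:

# 4-row x 3-column grid from above; None marks the blank cell.
grid = [
    [3, 0, 5],
    [0, 1, 3],
    [0, 0, None],
    [1, 0, 0],
]

# Loss cells A3, A4, B4, C4 as zero-indexed (row, column) pairs.
loss_cells = {(2, 0), (3, 0), (3, 1), (3, 2)}

total = sum(v for row in grid for v in row if v is not None)
losses = sum(grid[r][c] for r, c in loss_cells)
wins = total - losses
print(f"{wins}/{total} = {wins / total:.1%}")  # prints 12/13 = 92.3%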

Each of five people developed such a grid, and submitted it to the person compiling the final report.

- - -

As part of my edit, I initially asked whether the final number was calculated, or just carried over from the previous year.

She's not a math person... so she showed me how she calculated the final number.

Quote:

I took the average of the 5 policy staff success rates (shown on their individual sheets).

LK – 79%
MJS – 92%
NM – 70%
RE – 70%
KK – 87%
Avg = 79.6%


If you're a math person... you likely sniff out that this is not a valid way to reach the right answer. If each person had exactly the same number of cases, it would work, but my n=13 would be messed up by averaging it evenly against another n=24, and so forth. Equally weighting the five is incorrect.
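
To make the weighting point concrete, here's a quick Python sketch. Only the 12/13 comes from the grid above; the other case counts, including that n=24, are invented purely for illustration. The pooled rate is each person's rate weighted by their share of the total caseload, which is not the same as a flat one-fifth each:

Code:

# Hypothetical (wins, total) pairs -- only MJS's 12/13 is from above.
staff = {
    "LK":  (19, 24),  # ~79%
    "MJS": (12, 13),  # ~92%
    "NM":  (14, 20),  # 70%
    "RE":  (21, 30),  # 70%
    "KK":  (20, 23),  # ~87%
}

# Flat average: every person counts the same, regardless of caseload.
simple_mean = sum(w / n for w, n in staff.values()) / len(staff)

# Pooled rate: total wins over total cases, i.e. each person's rate
# weighted by n/N, their share of the overall caseload.
pooled = sum(w for w, n in staff.values()) / sum(n for w, n in staff.values())

print(f"flat mean of rates: {simple_mean:.1%}")  # ~79.7% with these made-up n's
print(f"pooled rate:        {pooled:.1%}")       # ~78.2% -- not the same number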

But... she sensed this might not be right, so she backed it up. One way we present the aggregated data is by each of the categories represented in the three columns of the grid above. So, the aggregate data looks like this:

Code:

22  06  44
02  08  07
00  01  --
18  02  03


So... part of our presentation is to show the "winning percentage" for each column, and those work out to (rounded off):

59% 88% 94%

Given those (which auto-calculate on the Excel sheet containing them), she told me:

Quote:

I input the numbers needed there, averaged the 3 success rates, and got the same answer: 79.6%.

If you're a math person... and if you've made it this far, you pretty much have to be... you likely sniff out, once again, that this is an invalid way to calculate the actual aggregate winning percentage. Again, if the three columns each contained the same number of bills, it would work fine, but since they do not, a simple average of the three columnar averages will distort the aggregate number.


Bottom line, though... she tried two different, but both incorrect, ways to calculate the overall percentage, and got basically the same number from each. One turns up as 79.600%, the other as 79.667%.
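
For the truly curious, here is the whole thing in Python, computed from the aggregate grid as transcribed above. The correct pooled rate, which neither of her methods actually produces, works out to 90/113 and lands right between the two wrong answers. (The column-average method, run on the grid as transcribed, comes out near 79.9% rather than the sheet's 79.667%, presumably because the live Excel file carried unrounded values.)

Code:

# Aggregate grid from above; None marks the structurally blank cell.
agg = [
    [22, 6, 44],
    [2, 8, 7],
    [0, 1, None],
    [18, 2, 3],
]
loss_cells = {(2, 0), (3, 0), (3, 1), (3, 2)}  # A3, A4, B4, C4

total = sum(v for row in agg for v in row if v is not None)    # 113 bills
losses = sum(agg[r][c] for r, c in loss_cells)                 # 23 losses
print(f"correct pooled rate: {(total - losses) / total:.3%}")  # 79.646%

# Method 1: flat average of the five reported staff rates.
rates = [0.79, 0.92, 0.70, 0.70, 0.87]
print(f"mean of staff rates: {sum(rates) / len(rates):.3%}")   # 79.600%

# Method 2: flat average of the three column rates. As transcribed this
# gives ~79.9%, not the sheet's 79.667% -- presumably the live file's
# unrounded cells differed slightly from the figures quoted above.
col_rates = []
for c in range(3):
    col_total = sum(row[c] for row in agg if row[c] is not None)
    col_losses = sum(agg[r][cc] for r, cc in loss_cells if cc == c)
    col_rates.append((col_total - col_losses) / col_total)
print(f"mean of column rates: {sum(col_rates) / len(col_rates):.3%}")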

Crazy coincidence, right?

QuikSand 04-11-2019 04:22 PM

Addendum

Spoiler

Izulde 04-11-2019 04:32 PM

Just curious, what's the baseline desired rate for this KPI? I'm wondering how this fits into the 20/60/20 curve of stars/average/underperforming that usually gets thrown around.

QuikSand 04-11-2019 05:22 PM

To be candid, I think this statistic is basically nonsense, and I don't use it for the purpose of performance evaluation at all. Instead, I know that it conveys something superficially impressive about the effectiveness of our organization, and I use it that way. Each year, when we run the numbers, we can say "bills we support have a better chance of passing than your average bill, and bills we oppose have a lesser chance of passing." That is invariably true on the surface, but it obscures other contributing factors.

Anyway, for a given year, our success rate can vary from 70-90% calculated this way. This year's ~80 is obviously in that range, but doesn't really illustrate our effectiveness in any way that I think is useful.

When I evaluate effectiveness, I rely less on the stats and more on my own awareness of the issues where our efforts did, or could have, made a real difference, and I weight those cases far more heavily than the cases where the outcome was largely outside our control. In other words, my lobbyist gets more "points" from me for killing a bad bill that was popular and had every right to pass than for killing another bad bill that was so obviously poorly conceived that it was going to die of its own weight anyhow (and both of those cases happen routinely).

nilodor 04-12-2019 08:25 AM

Quote:

Originally Posted by QuikSand (Post 3235633)
Addendum

Spoiler


Love it!

JonInMiddleGA 04-12-2019 11:20 AM

Quote:

Originally Posted by QuikSand (Post 3235633)
Addendum


This is actually where I expected this to end up in the original post.

albionmoonlight 04-26-2019 07:20 AM

Quote:

Originally Posted by QuikSand (Post 3235639)
To be candid, I think this statistic is basically nonsense, and I don't use it for the purpose of performance evaluation at all. Instead, I know that it conveys something superficially impressive about the effectiveness of our organization, and I use it that way. Each year, when we run the numbers, we can say "bills we support have a better chance of passing than your average bill, and bills we oppose have a lesser chance of passing." That is invariably true on the surface, but it obscures other contributing factors.

Anyway, for a given year, our success rate can vary from 70-90% calculated this way. This year's ~80 is obviously in that range, but doesn't really illustrate our effectiveness in any way that I think is useful.

When I evaluate effectiveness, I rely less on the stats and more on my own awareness of the issues where our efforts did, or could have, made a real difference, and I weight those cases far more heavily than the cases where the outcome was largely outside our control. In other words, my lobbyist gets more "points" from me for killing a bad bill that was popular and had every right to pass than for killing another bad bill that was so obviously poorly conceived that it was going to die of its own weight anyhow (and both of those cases happen routinely).


One of the long-standing issues in public defense work is how to evaluate the effectiveness of an office and the lawyers within it. One would, of course, love to have some small number of quantifiable metrics to do it with. But it just does not work that way. Guilty pleas are a useless metric because (at least federally) the government tends to charge only when it has a very strong case.

Sentences received are a very tempting metric because the number is always right there at the end of the case, like a grade. But sentences are so dependent on the severity of the crime, the judge, etc. that a lot of the outcome falls outside of the attorney's control.

Plus, a good defense lawyer often does her work on the front end of a case. She can convince the prosecutor not to add a certain enhancing charge to the indictment, and that may take a ton of really good work to marshal her evidence and arguments and make the case to the prosecutor. But at the end, it simply looks like her client got charged with Crime X and got a fairly high sentence for it; it never shows up that the prosecutor went into the case expecting to charge Crime X and Crime Y and Crime Z and send the client away for much longer.

And sometimes a bad lawyer never even realizes how bad he was. He might have a client with some really good mitigating facts to use at sentencing (past trauma, etc.) that he never puts in the time to discover. Or there might be inconsistencies in the police reports that he never reads carefully enough to put together and notice. So, at the end, it just looks like a typical case where the client got a typical result, when a good lawyer's value would have been to show that it was not a typical case at all.

The only real way I have discovered to evaluate this kind of work (other than obvious problems like missing deadlines, etc.) is for management to put in a ton of effort really understanding each case, kind of like Q noted above. That, of course, is incredibly time intensive, so we continue to search for that holy grail of some tangible number that lets us reduce this art into a science.

(I know that this really wasn't the point of your math puzzle, but it has been on my mind, so I thought that I'd share).

QuikSand 04-27-2019 10:47 AM

Well, it's not like this was a popular and raging thread when you stepped in to sidetrack it.

Alf 04-30-2019 02:16 AM

Quote:

Originally Posted by JonInMiddleGA (Post 3235696)
This is actually where I expected this to end up in the original post.

ditto

