04-26-2019, 07:20 AM   #7
albionmoonlight
Head Coach
 
Join Date: Oct 2000
Location: North Carolina
Quote:
Originally Posted by QuikSand
To be candid, I think this statistic is basically nonsense, and I don't use it for the purpose of performance evaluation at all. I instead know that it conveys something superficially impressive about the effectiveness of our organization, and use it that way. Each year, when we run the numbers, we can say "bills we support have a better chance of passing than your average bill, and bills we oppose have a lesser chance of passing." That is invariably true on the surface, but it belies other contributing factors.

Anyway, for a given year, our success rate can vary from 70-90% calculated this way. This year's ~80% is obviously in that range, but doesn't really illustrate our effectiveness in any way that I think is useful.

When I evaluate effectiveness, I look past the stats to the issues where our efforts did, or could have, made a real difference, and I weight those cases far more heavily than the cases where the outcome was largely outside our control. In other words, my lobbyist gets more "points" from me for killing a bad bill that was popular and had every right to pass than for killing another bad bill that was so poorly conceived it was going to die of its own weight anyhow (and both of those cases happen routinely).
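Q's 70-90% number is actually a neat illustration of how far case selection alone can carry a stat like that. Here's a toy simulation (all numbers made up, no claim about how his shop actually picks its fights): give the lobbyist zero influence, but have the org support bills that were already likely to pass and oppose bills that were already likely to die.

Code:
import random

random.seed(1)
N = 500  # bills in a session (invented)
wins = supported = support_passed = opposed = opposed_died = 0

for _ in range(N):
    p = random.random()           # bill's baseline chance of passing
    passes = random.random() < p  # outcome; the lobbyist changes nothing
    if p > 0.5:                   # back the likely winners...
        supported += 1
        support_passed += passes
        wins += passes
    else:                         # ...oppose the likely losers
        opposed += 1
        opposed_died += not passes
        wins += not passes

print(f"overall 'success rate': {wins / N:.0%}")  # comes out near 75%
print(f"supported bills passed: {support_passed / supported:.0%}")
print(f"opposed bills died:     {opposed_died / opposed:.0%}")

That prints a "success rate" of roughly 75% even though, by construction, the lobbying did nothing, and "bills we support pass more often than your average bill" comes out true automatically. Pick fights a little more selectively and you can land anywhere in Q's 70-90% band.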

One of the long-standing issues in public defense work is how to evaluate the effectiveness of an office and the lawyers within it. One would, of course, love to have some small number of quantifiable metrics to do it with. But it just does not work that way. Guilty pleas are a useless metric because (at least federally) the government tends to charge only when it has a very strong case.
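To put toy numbers on that (again, all invented): if the government only charges when it is, say, 92-99% sure of a conviction, even a lawyer who genuinely shaves a few points off the government's odds in every case ends up with nearly the same conviction rate as everyone else.

Code:
import random

random.seed(2)
CASES = 60  # rough annual caseload (invented)

def conviction_rate(lawyer_edge):
    """One year of cases, all screened by the government before charging."""
    convictions = 0
    for _ in range(CASES):
        p = random.uniform(0.92, 0.99)  # gov't charges only near-sure cases
        p -= lawyer_edge                # assumed per-case effect of a better lawyer
        convictions += random.random() < p
    return convictions / CASES

print(f"mediocre lawyer: {conviction_rate(0.00):.0%}")
print(f"strong lawyer:   {conviction_rate(0.04):.0%}")

With a caseload that size, ordinary year-to-year noise is a few percentage points all by itself, so a real skill gap of the same size just disappears into it. And a conviction-rate number can't see any of the front-end work or missed opportunities described below.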

Sentence received is a very tempting number because it is always right there at the end of the case, like a grade or something. But sentences are so dependent on the severity of the crime, the judge, etc., that a lot of the outcome falls outside the control of the attorney.

Plus, a good defense lawyer often does her work on the front end of a case. She can convince the prosecutor not to add a certain enhancing charge to the indictment. And that may take a ton of really good work to marshal her evidence and arguments and make the case to the prosecutor. But, at the end, it simply looks like her client got charged with Crime X and got kind of a high sentence for it. It never shows up that the prosecutor went into the case expecting to charge Crime X and Crime Y and Crime Z and send the client away for much longer.

And sometimes a bad lawyer never even realizes how bad he was. He might have a client with some really good mitigating facts to use at sentencing (past trauma, etc.) that he never puts in the time to discover. Or there might be inconsistencies in the police reports that he never reads carefully enough to put together and notice. So, at the end, it just looks like a typical case where the client got a typical result, when a good lawyer's value would have been to show that it was not a typical case at all.

The only real way I have discovered to evaluate this kind of work (other than obvious problems like missing deadlines) is for management to put in a ton of effort really understanding each case, kind of like Q noted above. That, of course, is incredibly time intensive, so we continue to search for that holy grail of some tangible number that lets us reduce this art to a science.

(I know that this really wasn't the point of your math puzzle, but it has been on my mind, so I thought that I'd share).