Real-Time Crime Centers: Is it Even Possible to Measure Your Effectiveness?
Ty Webb, Caddyshack (1980)
There's a memorable and hysterical scene in the movie Caddyshack where two golfers are talking about how they played earlier that day:
- Judge Smails: Ty, what did you shoot today?
- Ty Webb: Oh, Judge, I don't keep score.
- Judge Smails: Then how do you measure yourself with other golfers?
- Ty Webb: By height.
It brings to mind all the metrics that can be used to compare things -- those that are effective, appropriate, worthless, comical, interesting, operational, hidden, insignificant, obvious...
***
In US policing, Real-Time Crime Centers (RTCCs) are all the rage. They go by many names - real-time intel units, watch desks, operations centers, situation rooms, and so on. But for some reason, the RTCC label is sticking...
And because they come with relatively high price tags (in terms of staff salaries & fancy technology), police agency executives rightly want to know how effective these hi-tech units are.
But how do you measure the effectiveness of such a program?
Public safety runs in an environment that can be rightly labeled a "complex adaptive system." That means there is interplay and inter-dependency between actors, wicked unintended consequences, a general aura of unpredictability, and the potential for huge outcomes from small interventions.
These are the same traits and behaviors that make crime stats so complex. We can't figure out whether rises, drops, shifts, or movements in crime are attributable to:
- new police strategies or postures,
- the economy,
- a global pandemic,
- weather,
- a kid in Ferguson with his "hands up,"
- school quality,
- police department staffing,
- crime prevention messaging (ex: The 9pm Routine),
- the closing of subsidized housing projects,
- employment opportunities,
- an influential criminal being paroled,
- the local prosecutor philosophy,
- a car manufacturer's ignition vulnerability, or
- a copper in Minnesota with his knee on a man's neck.
I'll take a rather firm stance: Nobody really knows why crime goes up or down. There are too many variables.
Take a Vegas casino. The odds for rolling dice at the craps table have remained consistent since the first pair was rolled hundreds of years ago. Yet the stakes on any particular roll, the downstream impacts, and the energy around the table at any given point are, if not immeasurable, at least unrepeatable.
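As an aside, the "fixed odds" half of that picture is the only part we can actually compute. A quick, illustrative sketch (plain Python, assuming nothing beyond two fair dice) tallies those unchanging odds:

```python
from collections import Counter
from itertools import product

# Enumerate all 36 equally likely outcomes of rolling two fair dice.
totals = Counter(a + b for a, b in product(range(1, 7), repeat=2))

# The odds of each total never change -- they're baked into the dice.
for total, ways in sorted(totals.items()):
    print(f"Total {total:2d}: {ways}/36 = {ways / 36:.1%}")

# A 7 comes up 6/36 of the time (~16.7%) -- same today as centuries ago.
```

The math of the dice is knowable and repeatable. Everything else around the table is not.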
And it's that last point -- the unrepeatability -- that stands out to me.
Proof, evidence, and causality all require repeatability to stand up to the rigors of science. And the only way we can repeat things is to ensure that we have control of the variables that impact it. And if not control, at least acknowledge, observe, account for, and measure their impact.
In complex adaptive systems, that's just not possible. At all. There are no do-overs.
We are left with estimates, hunches, beliefs, guesses, theories, indications, and conflicts. And agendas. This is just the nature of the real world. We must contend with not only unknown, but unknowable facets of life.
So back to Real-Time Crime Centers...
How do we know they actually have a positive impact on crime?
We don't know that. We might strongly believe it based on anecdotal cases or experience. But we really don't know for sure that they have any significant long-lasting impact on crime.
Why?
Because the criminals have a say in the matter. They get to adapt their methods.
The prosecutors have a say. As do the judges. As do dozens of stakeholders in this complex adaptive system.
And the variables I blabbed about above also impact crime, in some way. I think.
So when communities or police executives ask: "Why is the police department recovering more stolen cars, when compared to [some other operational period]?" ... we might strongly believe it's because of the deployment of new cameras or technology. But has there also been an increase in cars being stolen, therefore increasing the inventory of outstanding stolen cars in the environment? Has some street gang moved members into your area, causing stolen cars to flow into your area with more intensity? Have your officers become more skilled in spotting stolen cars by eye?
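To make that concrete, here's a toy sketch with entirely made-up numbers (not real data from any agency) showing how the raw recovery count can climb even when the recovery rate -- arguably the thing new cameras would actually move -- stays flat:

```python
# Hypothetical, illustrative figures only -- not real data from any department.
periods = {
    "before cameras": {"cars_stolen": 400, "cars_recovered": 120},
    "after cameras":  {"cars_stolen": 600, "cars_recovered": 180},
}

for label, p in periods.items():
    rate = p["cars_recovered"] / p["cars_stolen"]
    print(f"{label}: {p['cars_recovered']} recovered "
          f"({rate:.0%} of {p['cars_stolen']} stolen)")

# Recoveries jump from 120 to 180 -- a 50% increase in the raw count --
# yet the recovery rate is 30% in both periods. The bigger number may say
# more about how many cars were stolen than about the new technology.
```

And even a flat or rising rate doesn't settle the question, because all the other variables above are moving at the same time.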
The same logic can be applied to questions about solved crimes & criminal arrest stats, specifically those arrests that aren't attached to reported crimes (such as illegal gun possession charges, as compared to a reported crime like residential burglary).
When I'm sitting at a desk operating a "real-time crime center," I oftentimes feel like I'm helping the coppers on the street by feeding them information as quickly as possible. When they make the arrest, I feel like I was a part of it. But was I?
- How much did I contribute to that arrest? (I won't be mentioned in the arrest report.)
- Can we measure or estimate how much impact the RTCC had on the outcome or the speed of it?
- Would those cops have made the arrest without me or the technology that we used?
We will NEVER know. Why? Because we can't repeat the situation. We can't roll the dice again. There are no do-overs.
The best that we have is experience of how similar situations have unfolded in the past, and the ability to compare those situations to today's landscape.
Do I believe that RTCCs are positive additions in fighting crime? Absolutely!!!! 100%!!!
But it's really just a belief. There is no proof, evidence, or causality that can be reached. Because there are simply too many uncontrollable input variables.
This is why I vehemently oppose the collection of RTCC "stats" -- like arrests assisted, LPR hits broadcast, stolen cars recovered, criminal cases helped, or whatever nonsensical tallies are kept to justify these centers, programs, technologies, staffing, or funding. We will never know what it would have looked like without them...
And if you're measuring your RTCC by the number of tech assets deployed in the field, the man-hours staffed, or the software packages used -- that's just a special kind of ignorance. That's like judging a golfer by the brand or number of clubs in his bag. I oppose all tallies of these things.
The people in charge of determining whether an RTCC should be opened, built, staffed, funded, or supported must understand how complex adaptive systems work. We cannot measure our effectiveness. It's about whether the program puts us on the right trajectory, towards the right goals or intentions.
I do keep a dossier of RTCC successes -- a collection of report numbers and incidents where I believe some chunk of the success is owned by the RTCC. I ask the question: "Do you think this would have happened without [insert tech, RTCC role, person here]?" The answer remains subjective for most incidents. Maybe I'm biased into thinking the RTCC played a starring role in these incidents. Though for some of the successes, I'll dig in my heels and argue that the arrests would never have been made if not for the injection of RTCC resources and tech.
The people in charge must see value. How do they see value? They have to be sold. Through stories. And then they have to make up their own mind as to how much the RTCC factored into the success.
Stats are not helping tell those stories. The measurements mean nothing.
***
Lou Hayes, Jr. is a detective supervisor in a suburban Chicago police department. He's focused on multi-jurisdictional crime patterns & intelligence, through organic working groups comprised of investigators & analysts from a variety of agencies. With a passion for training, he studies human performance, decision-making, creativity, emotional intelligence, & adaptability. In 2021, he went back to college (remotely!), in hopes of finally finishing his undergrad degree from the University of Illinois - Gies College of Business. Follow Lou on LinkedIn, & also the LinkedIn page for The Illinois Model. ***