An algorithm is a procedure or formula for solving problems, often based on a set of rules or a sequence of specified actions. A computer program can be viewed as an algorithm. Algorithms underlie many decisions today, from mundane Google searches to more critical terrorism threat assessments. Almost every sector of our society relies on algorithms, and policing is no exception.
To understand the potential of using algorithms for policing, imagine you are a District Commander. Every day you are tasked with the safety of your officers, and with ensuring the safety and security of the public at large. You have to decide on strategies for crime prevention and how to deploy resources accordingly. You regularly need to make profound decisions.
The problem is that, while you make these decisions thoughtfully, you don’t write down each decision you make, or track whether it proved correct. You may expect certain factors to predict crime, but you don’t know precisely whether your expectations are accurate most of the time. Meanwhile, your new observations on the job inform subsequent decisions. Perceptions evolve over time as crime patterns change, such as in response to successful policing strategies, and now you are faced with new information, crime displacement, and new dilemmas. Datasets keep getting bigger, and they may come from multiple sources. The situational contexts keep changing.
The 10,000 officers of the LAPD once used saturation patrols in response to crime hot spots; however, this strategy had limited effect over time. In response, the department developed a “laser-like” strategy that engages multiple analysts, officers, and commanders in data analysis, resource deployments, and checks for success. In the hot spot areas where resources are deployed, officers are told to “use their training and experience” to figure out what’s causing crime problems to persist.
Empowering officers to investigate and suggest “why” spots are attractive for criminal behavior is important and necessary for crime prevention and risk governance. Is it schools, bus stops, or liquor stores? Is the risk narrative connected to interactions of people at places in close proximity to major medical facilities, a military base, retail stores, or social services located in town? How, where and when is all of this connected?
There’s no need to guess. Algorithms can be a mechanism for standardizing department-wide initiatives, for objectively analyzing crime patterns, and for testing individual officers’ hypotheses about environmental attractors of criminal behavior. Algorithms such as Risk Terrain Modeling (RTM) help police form crime risk narratives that aid in patrol deployments and place-based interventions. RTMDx software makes the RTM algorithm easy to use and actionable. In fact, it was specifically developed to address these issues that are routinely faced by public safety professionals, and to help officers meet the demands of 21st century policing.
Algorithms can add confidence and consistency to information products that aid decision-making. Algorithms, and the software solutions that make them user-friendly and accessible, can be used to prove officers’ expert opinions right, to prevent distractions from inconclusive sentiments, and to get people to act on key insights at the best times and places to improve efficiencies.
It may seem strange to rely on an algorithm to support command-level decision-making of this nature, but the gravity of the outcomes (in cost, crime, and officer safety) is exactly why you should consider one. Studies suggest that well-designed algorithms may be far more accurate than human judgment alone. For example, which line is longer in the figure below? Presented with two lines of equal length, the eye is tricked into seeing one as longer than the other. Even after a ruler proves that the lines are identical in length, the illusion persists. If perception can overwhelm reality in such a simple case, how susceptible to failure might the unchecked judgments of even the smartest and most experienced of us be? When data is ‘big’, how capable are we at truly distilling it, making connections, or taking full advantage of its potential?
Which line is longer?
Seeking to avoid egregious errors in the interpretation of data is a key part of decision-making. Yet uncertainties always exist. The job of decision-makers, then, is not to always be right, but to figure out the odds in any decision they have to make and to play those odds well. Ultimately, policing depends on how well commanders and patrol officers assess the odds, or the risks, of crime threats before making decisions about prevention and response. The odds of achieving repeatedly accurate crime predictions matter greatly for target area selections and resource allocations, and they can be improved through algorithms. An algorithm need not be a replacement for your expertise, but rather a service in support of it.
A spatial risk assessment algorithm such as RTM is designed to aid thoughtful and experienced decision-makers. It brings multiple sources of data together by connecting them to the environments in which people live, work, and behave. It offers insights about places and events in order to add context to data. RTMDx software uses RTM as its ‘engine’ to support policing strategies and to maximize the role of professional judgment in solving crime problems. Ultimately, the most important part of the process is police commanders acting on the recommendations of RTMDx reports. Even the most data-driven, artificially intelligent software must embrace the human element. It should never propose to replace it.
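At its core, a spatial risk assessment of this kind overlays the spatial influence of each significant environmental risk factor on a grid of the study area and combines their weights into a composite risk surface, so that cells where risky features converge score highest. The sketch below is a deliberately simplified toy, not the RTMDx implementation: the factor locations, influence radii, and relative risk weights are all assumed values for illustration; in practice, weights would come from a statistical model fit to local data.

```python
import numpy as np

def influence_layer(points, radius, size=10):
    """Binary layer: 1 for grid cells within `radius` of any factor location."""
    yy, xx = np.mgrid[0:size, 0:size]
    layer = np.zeros((size, size), dtype=int)
    for py, px in points:
        layer |= ((yy - py) ** 2 + (xx - px) ** 2 <= radius ** 2).astype(int)
    return layer

# Hypothetical environmental risk factors (locations are invented)
bars      = influence_layer([(2, 3), (7, 7)], radius=2)
bus_stops = influence_layer([(5, 1), (6, 8)], radius=1)

# Relative risk weights -- assumed here for the sketch; a real analysis
# would estimate these from crime and environmental data
weights = {"bars": 2.1, "bus_stops": 1.4}

# Composite risk terrain: weighted sum of the factor layers
risk_terrain = weights["bars"] * bars + weights["bus_stops"] * bus_stops

# Cells where multiple risky features converge score highest
top_cells = np.argwhere(risk_terrain == risk_terrain.max())
print("Highest-risk cells (row, col):", top_cells.tolist())
```

The output is a ranked map of places, not people: the highest-scoring cells are where the spatial influences of multiple factors overlap, which is what makes the results articulable to officers tasked with acting on them.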
Risk assessments for crime, especially, demand that outputs from decision support software be considered within the context of various other pieces of information so that judgments can be made about managing crime risks with the greatest odds of sustained success. Many examples exist of commanders imparting insights from RTM into policing actions focused at high-risk places throughout their jurisdictions. These jurisdictions have seen many benefits, including lower crime rates and improved community relations, compared to the status quo (e.g. see www.rtmworks.com).
Before using decision support software in your jurisdiction, consider what your goals are in adopting it. Certain goals will be consistent across jurisdictions, such as reducing crime or efficiently deploying resources. But other goals will be specific to a jurisdiction; they should be set by stakeholders within your jurisdiction, not delegated to a software company’s sales representatives. Goals may include aspects of evidence-based practice, transparency, actionable data, collaborative problem solving, coordination among agencies, community engagement, community relations, performance measures, or saving money. These goals should be set up-front.
Adopt or procure decision support software that is well grounded in research evidence, customer satisfaction, and meaningful outcomes. Software currently marketed under the category of ‘predictive policing’ varies significantly in quality and value, and in the reliability and accuracy of the algorithms used. Scrutinize sensibly, particularly software programs that have been shown to reinforce bias (e.g. see mic.com article). Some algorithms merely serve as cover for more aggressive and less constructive policing practices. Decision support software worth its salt should inform actions that achieve your up-front goals, and should change minds and actions as the contextual dynamics of crime change (see also Tips For Crime Forecasting blog post).
Judge your options for decision support software against your jurisdiction’s goals, and with regard to the value-added to decision-making and operational practices. Algorithms and related software programs present many opportunities for policing. It’s reasonable for police agencies to embrace them to support command-level decision-making, but not with eyes wide shut.
*This essay was heavily inspired by the commentary of Adam Neufeld, “In Defense of Risk Assessment Tools”
Here is a basic framework for arming police leaders with the tools and resources they’ll need in the 21st century to prevent crime and achieve justice on multiple fronts:
The 21st century demands a change in the culture and mindset of policing, as much as, if not more than, technological upgrades. Rear Admiral Grace Hopper suggested that perhaps the most dangerous phrase in the American language is “we’ve always done it this way”. This cannot be the way of policing.
Policy-makers, please take note: Teaching police recruits the basic value of data and the operational utility of crime analysis will prove to be public money spent for the public good.
Billions of public dollars are spent on real estate, buildings, and technologies to collect, manage, analyze, and communicate the many petabytes of data that police agencies generate. Fusion centers, real-time crime centers, CCTV and surveillance centers, mobile data terminals, and automatic vehicle location systems are just a few of the capital assets. Each of these costs millions of dollars to build or set up, plus more to maintain and staff. Added to these appropriations are the costs of computer-aided dispatch, records management systems, and predictive policing software, to name a few of the digital resources, which comprise a multi-billion dollar industry in the United States alone.
Elected officials clearly value data because they invest heavily in producing and preserving its related infrastructure. But there is an obvious void: investments in the human elements that make data actionable.
Far removed from the new buildings with walls of integrated flat screen TVs, in roll-call rooms and on police patrol routes throughout America, exists evidence that data analysis is undervalued by line-level police officers, and even some commanders. Or, maybe the value of data is just overlooked and, therefore, underappreciated. Police of all ranks have a symbiotic relationship with data and analytical products. Every day data informs strategies, tactics and resource deployments. It aids criminal investigations, and is discoverable in courts of law. Data analysis informs command decisions and patrol activities that can directly affect officer safety, public safety, and police-community relations. Skilled analysts in police departments throughout the country turn ‘big data’ into ‘smart data’ and, when used wisely, these products offer insights to prevent crime and reduce risks. Many stakeholders use police administrative data to measure various aspects of success or failure.
Police officers are both the generators of original data and the end users of crime analyses. Yet they are rarely, if ever, formally trained to preserve the integrity of data measures, to see value in datasets, or to fully harness the utility of analytical products. Largely missing from public spending are deliberate investments to teach police recruits the basic value of data and the operational utility of crime analysis for their personal and departmental interests.
Basic law enforcement training programs in the United States last an average of 840 hours, or 21 weeks, according to the Bureau of Justice Statistics’ (BJS) survey of state and local academies. Major training areas include operations (an average of 213 hours per recruit); firearms, self-defense, and use of force (168 hours); self-improvement (89 hours); and legal education (86 hours). “Data utility” is not mentioned. Adding an hour-long module to basic training would account for roughly one-tenth of 1% of training time (1 of 840 hours), but could yield a huge return on investment.
Police are the front-line brokers of crime analysis results to operational practice. Yet their brokering skills and training are often unnurtured and ad hoc. Recently, 79% of survey respondents agreed that a basic introduction to data and crime analysis should be a required part of police academy training. Expert academy instructors can come from a variety of places, including from within a local police department’s crime analysis unit, as the City of Chesapeake has already institutionalized.
Policy-makers responsible for police academy curriculums should add learning objectives to teach recruits why data is important, how it relates to their job, how it can be reliably collected, how it should inform their decision-making, how it can be used to develop crime and risk reduction strategies, and how it can justly identify places for resource deployments. Recruits should graduate with clear expectations of how they’ll produce and use data on the job. Police officers deserve to understand why commanders might have told them to do what they’re doing, and where to do it. They deserve the transparency of knowing that data probably played a role in the orders they were given; that the reliability and validity of data analyses can be affected by the discretionary decisions they make and the actions they take every shift; and that directly or indirectly, this feedback loop affects their future work duties and related liabilities.
The petabytes of data available to police only become actionable when people interpret them in meaningful ways. This takes training and practice, but it starts with an honest introduction. It requires the same kind of dedicated training already given to shooting accurately, driving emergency vehicles safely, or handcuffing quickly.
Elected officials and many other stakeholders will realize huge long-term benefits when police learn the value of data and how to harness it. They will witness a more effective, responsive, and transparent police department when officers are trained to balance empirical evidence with professional experience. Valuing data should be taught early to new recruits and not forgotten or dismissed during field training. The mindset and operational practice of evidence-based policing requires that new generations of recruits learn to weigh empirical evidence alongside a healthy balance of critical professional insight and intuition. Police academies are the places to start nurturing this trend.
Discussing "active shooter" or "active killer" situations is difficult. But, I'm sure it has at least crossed your mind as you prepare to teach or attend classes on campus. In 2015 I completed Advanced ALICE Training and certification. ALICE is a set of proactive strategies that increase chances of survival during an active shooter/killer event. It stands for Alert, Lockdown, Inform, Counter, Evacuate.
I do not know what course of action your college/university/workplace officially endorses (or advises against) in these life-threatening events. Consider, though: most active killer events last 8 minutes, on average. It takes an average of 2-3 minutes before 9-1-1 is called, and even the most efficient police departments have response times averaging several minutes. This means that police often arrive on scene toward the end of the event. So knowing what to do immediately to protect yourself and others is vitally important.
Some universities are ALICE affiliates (and train all incoming students). Auburn University produced this 8 minute video, which provides a good overview of what you can do to stay safe.
If you'd like more info or have questions, please ask and review the recommended guidelines of your local public safety officials. I'm also happy to share what I learned with you. Being mentally prepared, re: information and mindset, is just as important for survival as being physically able to react and respond.
June Jordan was a poet, activist, teacher and essayist, born in Harlem in 1936. She wrote:
Our earth is round, and, among other things, that means that you and I can hold completely different points of view and both be right. The difference of our positions will show stars in your window I cannot even imagine.
Very spatial and philosophical. I like it!
A lot of recent commentary states the need for micro-level studies and efforts to understand the underlying contexts and attractors of crime at place. The articles listed below deal with different aspects of this approach and might be of interest to you in your work. PDFs are all available online for easy reference.
Please feel free to send me your work as well. I am eager to read cutting-edge stuff and make reference to it where credit is due!
Caplan, J. M., Kennedy, L. W., Barnum, J. D., & Piza, E. L. (2015). Risk Terrain Modeling for Spatial Risk Assessment. Cityscape, 17(1), 11-20.
Caplan, J. M., Marotta, P., Piza, E. L., & Kennedy, L. W. (2014). Spatial Risk Factors of Felonious Battery to Police Officers. Policing: An International Journal of Police Strategies & Management, 37(4), 823-838.
Piza, E., Caplan, J. M., & Kennedy, L. W. (2014). Analyzing the Influence of Micro-Level Factors on CCTV Camera Effect. Journal of Quantitative Criminology, 30(2), 237-264.
Moreto, W. D., Piza, E., & Caplan, J. M. (2014). ‘A plague on both your houses?’: Risks, repeats and reconsiderations of urban residential burglary. Justice Quarterly, 31(6), 1102-1126.
Caplan, J. M., Kennedy, L. W., & Piza, E. L. (2013). Joint utility of event-dependent and environmental crime analysis techniques for violent crime forecasting. Crime and Delinquency, 59(2), 243-270.
Caplan, J. M., Kennedy, L. W., & Baughman, J. (2012). Kansas City’s Violent Crime Initiative: A Place-Based Evaluation of Location-Specific Intervention Activities during a Fixed Time Period. Crime Mapping, 4(2), 9-37.
Caplan, J. M. (2011). Mapping the spatial influence of crime correlates: A comparison of operationalization schemes and implications for crime analysis and criminal justice practice. Cityscape, 13(3), 57-83.
Kennedy, L. W., Caplan, J. M., Piza, E. (2011). Risk clusters, hotspots, and spatial intelligence: Risk Terrain Modeling as an Algorithm for Police Resource Allocation Strategies. Journal of Quantitative Criminology, 27(3), 339-362. [2010, online first]
Caplan, J. M., Kennedy, L. W., & Miller, J. (2011). Risk terrain modeling: Brokering criminological theory and GIS methods for crime forecasting. Justice Quarterly, 28(2), 360-381.
See also other Recommended Readings on the subject by independent authors in peer-reviewed journals, law reviews, and other reputable sources: http://rutgerscps.weebly.com/publications.html (towards the bottom). All great stuff!
Commercial and non-commercial "predictive" software products come (and go). But software does not define the evolution of "predictive policing". Innovation in this arena is (or will be) how police use information from analytic outputs to inform decisions and take thoughtful action. Transparent and actionable information should be the commodity of predictive analytics, not the software itself. Key to the "new age" of policing (and an agency's desire to be "predictive") is the willingness of police officers at all levels to ask new questions, collect new data, and find value in the results. Outputs should inform decisions about where to police, but also about what to do when resources get there, and why. Understanding why is important for acting with intent. Predictive policing requires a "culture change" as much as, if not more than, a technological change. To my knowledge, Risk Terrain Modeling is an evidence-based diagnostic/forecasting method developed with this in mind. E.g., see ACTION: http://rtmtraining.weebly.com/uploads/2/6/2/0/26205659/actionplan.pdf
I came across La Vigne's essay recently in the Urban Institute publication: http://www.urban.org/urban-wire/rethinking-americas-wash-rinse-repeat-approach-policing. (Good stuff!)
A reporter recently asked me (I'm paraphrasing) why it mattered if risk terrain modeling (RTM) identified factors that some police already knew or suspected to be risky. I suppose the same can be said of hot spot maps (i.e., they routinely point to the same areas that police already know are hot). She then asked what police do with RTM results. In my mind, that was the insightful question, and it should be the theme of any news story on crime prediction techniques. We should all be careful not to let current "traditions" or analysis methods be the default benchmark for evaluating answers to this reporter's important question (here are some reasons why: http://www.jcaplan.com/forecasttips.html).
RTM fits into existing paradigms of policing. But the concept of "risk" should really be the catalyst for changing the way police do policing. "Risk", and RTM as a tool for assessing spatial risks, helps police solve problems in ways that can be articulated and whose results can be measured. RTM shifts the focus from "crime" or hot-spots policing to risk-based policing. It steers police activities toward the underlying attractors of illegal behavior so police can do something about the physical places, not solely about the people located at them. RTM helps to diagnose crime problems and break the "wash, rinse, repeat" approach that La Vigne explains is so common in American policing.
If you are considering acquiring “predictive analytics” software for crime forecasting, then based on my experience, below are some things you might want to ask as you evaluate what’s available. Recently, the efficacy of using “non-crime-related data” for crime forecasting has been questioned. So, consider these 6 tips for evaluating a successful forecast, within the context of all options:
1. Data used in the analysis should be reliable and valid (i.e., content and construct validity). The data sets and their sources should allow for replication and continued forecasting. This includes the requirement that the forecast technique does not rely heavily on the outcome of interest (i.e., the dependent variable) to be the predictor (i.e., independent variable). Such a forecasting technique would not be sustainable if it were both actionable and successful since outcomes would ultimately be prevented.
2. The outputs of the forecast should be operational. It should be reasonably clear what to do with the information to respond to the forecasted effects. Knowing where to go is a start. But forecast outputs should also inform decisions about what to do when you get there. Telling officers to “use your knowledge, skills, experience and training in the most appropriate way to stop crime” is vague and places a heavy burden on individual officers in a way that is nearly impossible to rigorously evaluate.
3. The method of the forecast should be operationalized consistently from one instance to the next. Data sets and sources may change, as will analysts, etc., but the reliability of the forecasting method must withstand multiple iterations, in different settings, by different people, and for different types of outcome events.
4. The elements of the forecast should be articulable, with the importance of each factor relative to one another directly measured. The direct impact of key factors on outcomes should be demonstrable (i.e., internal validity).
5. The output results should be within a range of reasonable expectations (i.e., face validity). The forecasting process should be able to justify why a result was produced, or else it should be revisable in a non-arbitrary way.
6. The method of the forecast should be able to tolerate the products of successful interventions, especially those based on intel from prior forecast iterations. Precision and accuracy are important considerations for defining a successful forecast. However, a "sales pitch" focused only on the precision or accuracy of forecasts should be suspect. For example, if crime never moves, then past crime incidents will always be 100% accurate and precise predictors of new incident locations. Alternatively, if crime always moves, then new crimes will never occur where past crimes did, making an event-dependent forecasting technique zero percent accurate and precise. The reality of crime patterns is likely somewhere between these extremes. If police intervene successfully in response to a forecast, and their actions change the spatial dynamics of crime, then the forecast could be very accurate and precise before the intervention and not at all afterward. Any successful forecast should yield valuable information that can be operationalized for preventive action; but if that action succeeds in reducing crime counts or changing their spatial patterns, it will directly affect the precision and accuracy of future forecast iterations. This is why a forecasting technique must be able to tolerate the products of its own success. A solely event-dependent technique does not fit the bill.
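The pitfall in tip 6 can be made concrete with a toy example. In the sketch below (the incident cells are hypothetical, not real data), a purely event-dependent forecast simply flags last period's crime cells as next period's hot spots: it scores perfectly when nothing changes, and fails completely when a successful intervention displaces crime to adjacent cells.

```python
# Toy illustration: an event-dependent forecast that flags last
# period's crime cells (grid coordinates) as next period's hot spots.
past_crimes = {(2, 3), (2, 4), (7, 7)}   # last period's incident cells
forecast = past_crimes                    # "crime will recur here"

def hit_rate(forecast, actual):
    """Share of actual incidents that fall inside forecasted cells."""
    return len(forecast & actual) / len(actual)

# Scenario A: nothing changes -- the forecast looks perfect.
actual_static = {(2, 3), (2, 4), (7, 7)}
print(hit_rate(forecast, actual_static))      # 1.0

# Scenario B: a successful intervention displaces crime to nearby cells,
# and the same forecast misses every new incident.
actual_displaced = {(3, 3), (2, 5), (8, 7)}
print(hit_rate(forecast, actual_displaced))   # 0.0
```

The numbers are contrived, but the logic is the point: a technique whose accuracy depends on crime staying put is penalized by its own success, whereas a method built on the environmental attractors of crime can remain informative after an effective intervention.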
A “prediction” is deterministic in that a crime event is assumed to happen unless proper actions are taken. So, ultimately, any occurrence of the predicted crime connotes a failure of the police who were tasked with prevention. Unfortunately, the only true measure of success of an event-dependent predictive model is for the crime event to occur, which is generally not in the public’s or practitioners’ best interest. Activities performed in response to predictions always carry the burden of proving that those activities directly resulted in the non-event, while assuming that the event would absolutely have occurred otherwise. Why should a software product be applauded when crimes happen where expected, rather than critiqued for its lack of meaningful outputs to inform effective preventive action?
I have long advocated for evidence-based methods and complementary approaches to crime analysis. So, in a world where predictive analytics software can be “sexy but not quite proven”, I hope you will consider the full scope of options to support intelligence-led policing. Keep in mind that places like the Center for Evidence-Based Crime Policy at George Mason University or the Center on Public Security at Rutgers University, among other entities, broker research to all public safety sectors, provide accessible resources, and share best practices that crime analysts can replicate for free.
Crime analysis is not a point-and-click occupation to be replaced by expensive software or hardware upgrades. Crime analysis requires thoughtful questions, theoretical grounding, and meaningful interpretations and communication of results by experts with technical know-how and insights from the field. Most predictive analytical software applications rely on your local administrative data. Your data is the real commodity. With basic skillsets in GIS and statistics (along with free and open-sourced software), crime analysts can do exactly the same thing as the most profitable “predictive analytics” companies. So, before any decisions are made about which company to contract with, do a thoughtful cost-benefit analysis.
Do not divest from human capital. The most effective adoptions of technology in policing have been achieved when the technology complements the human element, not replaces it. People drive police cars; people operate two-way radios; people monitor CCTV cameras; people verify acoustic gunshot detection systems; and, most recently, people wear body cameras. These are only a few examples of once-new technologies that proved effective only in concert with human guidance, not in its absence.
Set your own expectations for meaningful forecasts and intelligence products, and demand that the software meet or exceed your expectations. Don’t let the “tale” of the software product “wag the police department.” Consider all options for enhancing your department’s analytical capacity and “predictive” capabilities, including investing in crime analysts (i.e., human capital) to truly study and solve crime problems.