Commercial and non-commercial "predictive" software products come (and go). But software does not define the evolution of "predictive policing." The real innovation in this arena is (or will be) how police use information from analytic outputs to inform decisions and take thoughtful action. Transparent and actionable information should be the commodity of predictive analytics - not the software itself. Key to the "new age" of policing (and an agency's desire to be "predictive") is the willingness of police officers at all levels to ask new questions, collect new data, and find value in the results. Outputs should inform decisions about where to police, but also about what to do when resources get there, and why. Understanding why is important for acting with intent. Predictive policing requires a "culture change" as much as, if not more than, a technological change. To my knowledge, Risk Terrain Modeling is the only evidence-based diagnostic/forecasting method developed with this in mind. E.g., see ACTION: http://rtmtraining.weebly.com/uploads/2/6/2/0/26205659/actionplan.pdf
I came across La Vigne's essay recently in the Urban Institute publication: http://www.urban.org/urban-wire/rethinking-americas-wash-rinse-repeat-approach-policing. (Good stuff!)
A reporter recently asked me (I'm paraphrasing) why it mattered if risk terrain modeling (RTM) identified factors that some police already knew or suspected were risky. I suppose the same can be said of hotspot maps (i.e., they routinely point to the same areas that police already know are hot). She then asked what police do with RTM results. In my mind, that was the insightful question, and it should be the theme of any news story on crime prediction techniques. We should all be careful not to let current "traditions" or analysis methods be the default benchmark for evaluating answers to this reporter's important question (here are some reasons why: http://www.jcaplan.com/forecasttips.html).
RTM fits into existing paradigms of policing. But the concept of "risk" should really be the catalyst for changing the way police do policing. "Risk," and RTM as a tool for assessing spatial risks, helps police solve problems in ways that can be articulated and whose results can be measured. RTM helps shift the focus from "crime" or hotspots-policing to risk-based policing. It steers police activities toward the underlying attractors of illegal behavior so police can do something about the physical places, not focus solely on the people located at them. RTM helps to diagnose crime problems and break the "wash, rinse, repeat" approach that La Vigne explains is so common in American policing.
If you are considering acquiring “predictive analytics” software for crime forecasting, then based on my experience, below are some things you might want to ask as you evaluate what’s available. Recently, the efficacy of using “non-crime-related data” for crime forecasting has been questioned. So, consider these 6 tips for evaluating a successful forecast, within the context of all options:
1. Data used in the analysis should be reliable and valid (i.e., content and construct validity). The data sets and their sources should allow for replication and continued forecasting. This includes the requirement that the forecast technique does not rely heavily on the outcome of interest (i.e., the dependent variable) to also serve as the predictor (i.e., independent variable). Such a forecasting technique would not be sustainable if it were both actionable and successful, since successful prevention would ultimately eliminate the very outcome data on which the next forecast depends.
2. The outputs of the forecast should be operational. It should be reasonably clear what to do with the information to respond to the forecasted effects. Knowing where to go is a start. But forecast outputs should also inform decisions about what to do when you get there. Telling officers to “use your knowledge, skills, experience and training in the most appropriate way to stop crime” is vague and places a heavy burden on individual officers in a way that is nearly impossible to rigorously evaluate.
3. The method of the forecast should be operationalized consistently from one instance to the next. Data sets, sources, and analysts may change, but the reliability of the forecasting method must withstand multiple iterations, in different settings, by different people, and for different types of outcome events.
4. The elements of the forecast should be articulable, with the importance of each factor relative to one another directly measured. The direct impact of key factors on outcomes should be demonstrable (i.e., internal validity).
5. The output results should be within a range of reasonable expectations (i.e., face validity). The forecasting process should be able to justify why a result was produced, or else it should be revisable in a non-arbitrary way.
6. The method of the forecast should be able to tolerate the products of successful interventions, especially those based on the intelligence from prior forecast iterations. Precision and accuracy of a forecast are important considerations for defining success. However, a “sales pitch” focused only on the precision or accuracy of forecasts should be suspect. For example, if crime never moves, then past crime incidents will always be 100% accurate and precise predictors of new incident locations. Alternatively, if crime always moves, then new crimes will never occur where past crimes did, making an event-dependent forecasting technique zero percent accurate and precise. The reality of crime patterns is likely somewhere in between these extremes. If police intervene successfully in response to a forecast and their actions change the spatial dynamics of crime, then the forecast could be very accurate/precise prior to the intervention and not at all accurate/precise after it. Basically, any successful forecast should yield valuable information that can be operationalized for preventive action. However, if the preventive action succeeds at reducing crime incident counts and/or changing their spatial patterns, then it will directly affect the precision/accuracy of future forecast iterations. This is why a successful forecasting technique should be able to tolerate the products of successful interventions. A solely event-dependent technique does not fit the bill.
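The accuracy extremes described in tip 6 can be illustrated with a toy simulation (a hypothetical sketch, not part of any RTM or vendor method): a naive event-dependent forecast that always predicts the last observed crime location, evaluated under different rates of crime displacement. The function name, grid size, and displacement probability are all made-up parameters for illustration.

```python
import random

def naive_forecast_hit_rate(displacement_prob, periods=1000, cells=10, seed=42):
    """Hit rate of a naive event-dependent forecast that predicts next
    period's crime will occur in the same cell as this period's crime.
    With probability `displacement_prob`, the crime moves to a different
    random cell instead (a stand-in for successful intervention or
    natural displacement). All names and parameters are illustrative."""
    rng = random.Random(seed)
    current = rng.randrange(cells)
    hits = 0
    for _ in range(periods):
        predicted = current  # forecast = last observed location
        if rng.random() < displacement_prob:
            # Crime displaces to some other cell.
            current = rng.choice([c for c in range(cells) if c != current])
        hits += (predicted == current)
    return hits / periods

# If crime never moves, the naive forecast is perfectly accurate;
# if it always moves, the forecast is never right.
print(naive_forecast_hit_rate(0.0))  # 1.0
print(naive_forecast_hit_rate(1.0))  # 0.0
print(naive_forecast_hit_rate(0.5))  # roughly 0.5
```

The toy model makes the tension concrete: the more successful (or displacement-inducing) the intervention, the worse a purely event-dependent technique scores on its own accuracy metric.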
A “prediction” is deterministic in that a crime event is assumed to happen unless proper actions are taken. So, ultimately, any occurrence of the predicted crime connotes a failure of the police who were tasked with prevention. Unfortunately, the only true measure of success of an event-dependent predictive model is for the crime event to occur, which is generally not in the public’s or practitioners’ best interest. Activities performed in response to predictions always carry the burden of proving that those activities directly resulted in the non-event – while assuming that the event would absolutely have occurred otherwise. Why should a software product be applauded when crimes happen where expected, rather than critiqued for its lack of meaningful outputs to inform effective preventive action?
I have long advocated for evidence-based methods and complementary approaches to crime analysis. So, in a world where predictive analytics software can be “sexy but not quite proven”, I hope you will consider the full scope of options to support intelligence-led policing. Keep in mind that places like the Center for Evidence-Based Crime Policy at George Mason University or the Center on Public Security at Rutgers University, among other entities, broker research to all public safety sectors, provide accessible resources, and share best practices that crime analysts can replicate for free.
Crime analysis is not a point-and-click occupation to be replaced by expensive software or hardware upgrades. It requires thoughtful questions, theoretical grounding, and meaningful interpretation and communication of results by experts with technical know-how and insights from the field. Most predictive analytical software applications rely on your local administrative data. Your data is the real commodity. With basic skillsets in GIS and statistics (along with free and open-source software), crime analysts can do exactly the same things as the most profitable “predictive analytics” companies. So, before any decisions are made about which company to contract with, do a thoughtful cost-benefit analysis.
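To make the point about free tools concrete, here is a minimal sketch using only the Python standard library (the coordinates, cell size, and function name are made up for illustration): a basic hotspot count that bins incident locations into grid cells and ranks the busiest cells, which is the core of a simple hotspot map an analyst can produce without any commercial product.

```python
from collections import Counter

def hotspot_cells(incidents, cell_size=0.01, top_n=3):
    """Aggregate incident coordinates (x, y) into grid cells and rank
    the busiest cells -- the essence of a basic hotspot map. The data
    and cell size here are illustrative placeholders; a real analysis
    would use an agency's incident data in a projected coordinate system."""
    counts = Counter(
        (round(x // cell_size), round(y // cell_size)) for x, y in incidents
    )
    return counts.most_common(top_n)

# Toy data: a small cluster near (0.005, 0.005) plus scattered points.
incidents = [(0.004, 0.006), (0.005, 0.005), (0.006, 0.004),
             (0.031, 0.052), (0.077, 0.013)]
print(hotspot_cells(incidents))  # top cell is (0, 0) with 3 incidents
```

A fuller open-source workflow (e.g., kernel density surfaces in QGIS, regression diagnostics in R or Python) builds on exactly this kind of aggregation, which is why the analyst's skills and the agency's data, not the software license, are the scarce resources.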
Do not disinvest in human capital. The most effective adoptions of technology in policing have been achieved when the technology complements the human element, not replaces it. People drive police cars; people operate two-way radios; people monitor CCTV cameras; people verify acoustic gunshot detection systems; and, most recently, people wear body cameras. These are only a few examples of how once-new technologies proved effective only in concert with human guidance, not in its absence.
Set your own expectations for meaningful forecasts and intelligence products, and demand that the software meet or exceed your expectations. Don’t let the “tale” of the software product “wag the police department.” Consider all options for enhancing your department’s analytical capacity and “predictive” capabilities, including investing in crime analysts (i.e., human capital) to truly study and solve crime problems.