Posted: January 7th, 2015 Author: Dan Lewis
There is a relationship, Taylor’s Law, that has the potential to reveal crime data manipulation.
This is useful considering that Tom Winsor of HMIC told the Home Affairs Select Committee last year that the outstanding questions about officers fiddling data are “where, how much (and) how severe”.
Before exploring these issues, I should explain that my interest in this project started with a request for a department seminar. My name is Quentin Hanley, I work in a combined Chemistry and Forensics Department at Nottingham Trent University and I was looking for ways to make statistical concepts more relevant to Forensic Science students who might attend my seminar. This culminated in a paper I co-authored – Fluctuation Scaling, Taylor’s Law and Crime which is free to view online.
But first, a few explanations are needed to help you understand how something called Taylor’s Law can reveal characteristics of the recording process behind crime report statistics.
What is Taylor’s law?
Taylor’s power law is named after the academic ecologist Roy (L. R.) Taylor, who in 1961 observed a simple mathematical relationship between the mean density of a species population in a given area and its variance: the variance grows as a power of the mean (variance = a × mean^b). Taylor saw the same relationship in 24 different species ranging from beetles to fish. His simple method has made it possible to estimate much more precisely what a given future population size could look like and how it might fluctuate. Taylor’s law has been applied far and wide within ecology to determine the potential for species extinction or the transmission of infectious diseases. By way of example, this paper shows how very small and isolated reefs had much higher than expected temporal variance in fish abundance. From its beginnings in ecology, Taylor’s law has since been applied to currency trading, urban traffic, and human disease. Applying Taylor’s Law begins with a mean-variance plot.
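As a rough illustration of what fitting Taylor’s law involves (synthetic counts of my own, not ecological or crime data), the exponent b in variance = a × mean^b can be recovered by fitting a straight line to log(variance) against log(mean):

```python
import math
import random
import statistics

random.seed(42)

def poisson(lam):
    """Knuth's Poisson sampler; adequate for the modest rates used here."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def taylor_exponent(means, variances):
    """Least-squares slope of log(variance) against log(mean)."""
    xs = [math.log(m) for m in means]
    ys = [math.log(v) for v in variances]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic clustered counts: the underlying rate jitters between 0.5*lam
# and 1.5*lam, so the variance grows faster than the mean (exponent > 1).
means, variances = [], []
for lam in [2, 5, 10, 20, 50]:
    counts = [poisson(lam * random.choice([0.5, 1.5])) for _ in range(2000)]
    means.append(statistics.mean(counts))
    variances.append(statistics.variance(counts))

b = taylor_exponent(means, variances)  # well above the Poisson value of 1
```

With purely Poisson counts the fitted slope would sit near 1; the jittering rate in this sketch pushes it up towards 2, which is the kind of departure a mean-variance plot makes visible.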
What is mean variance?
Mean variance is a mathematical way of estimating how much variation you can expect around an average. One version is often used in finance, where the variance represents the volatility in the prices of given assets and the mean is the average running through that range, which determines the expected financial return. The mean-variance approach can also be used to measure the gain of an amplifier.
What is gain?
Gain is a measure of how two (or more) systems respond to the same input. For example, recordings of noise from airplanes, traffic, or a party can be loud or quiet depending on the gain setting of the volume control on the amplifier playing them back to us. However, the volume does not influence our ability to identify what we are hearing. Traffic does not sound like a party and our ears can discern the unique signatures. With the assistance of Taylor’s law we can begin to discern the unique signatures of different types of crime AND measure the relative gain of their recording.
How does gain apply to crime?
In the case of crime, the noise is not from an airplane or party, it is the fluctuations in crime over time. These fluctuations are recorded by a police force and played back to us in the form of police crime report statistics. If we know something about the statistical distribution describing a type of crime, we can measure the relative gain of a particular police force and ask if some Constabularies are “loud” and others “quiet.”
What is a statistical distribution?
Most people are aware of the bell-shaped curve of the “normal” distribution first proposed by Gauss in 1809. The normal distribution, unfortunately, does not apply to countable things or events like crime reports. For that we need something like the Poisson distribution, named after Siméon Denis Poisson, who first described it in 1837.
What is special about the Poisson Distribution?
The Poisson distribution applies to many countable things such as photons of light, accidents, and, in the absence of “clumping,” yeast cells in beer. It also gives a reference point in a Taylor’s Law analysis corresponding to things that are not clustered together and appear random. Against this reference point we can observe that some events are nearly random, such as reports of violence in Nottinghamshire and Derbyshire, while other events show greater clustering, like burglary in those same regions. Beginning with the Poisson distribution as a reference point, we can start to evaluate manipulation.
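The reference point is easy to see in a toy simulation (synthetic counts, not the Nottinghamshire or Derbyshire data): purely random counts have a variance-to-mean ratio near 1, while counts that arrive in clumps sit well above it.

```python
import math
import random
import statistics

random.seed(1)

def poisson(lam):
    """Knuth's Poisson sampler; adequate for small rates."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def dispersion(counts):
    """Variance-to-mean ratio: ~1 for Poisson counts, >1 for clustered ones."""
    return statistics.variance(counts) / statistics.mean(counts)

# "Random" events, in the spirit of near-Poisson violence reports.
random_counts = [poisson(10) for _ in range(3000)]

# Clustered events: incidents arrive in groups of five, which inflates the
# variance while keeping the same mean of 10 per period.
clustered_counts = [5 * poisson(2) for _ in range(3000)]

d_random = dispersion(random_counts)        # close to 1
d_clustered = dispersion(clustered_counts)  # close to 5
```

On a log-log mean-variance plot, the first series lands on the Poisson reference line and the second sits clearly above it.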
What is manipulation?
Police officers have been the focus of testimony before parliament related to manipulation of statistics. However, in this context I take manipulation to be anything that alters the process of recording crime statistics resulting in an imperfect representation of the original events. Manipulation can be many things and the conscious behaviour of individual officers is only one.
How might manipulation work?
Consider a Constable confronted with a group of five youths playing in a municipal fountain shouting insults and splashing water on passers-by. Many outcomes are possible. After an intervention, it is possible that no further action is taken. Alternatively, the Constable might decide this was anti-social behaviour and file a report. It might also be possible the Officer is called to a more serious problem leaving this incident with no action taken at all.
The eventual outcome could be influenced by staffing in the constabulary. If staffing is low the probability of moving on to more serious matters will increase.
The outcome could also be influenced by policing targets or quotas. For example, if Constables are subject to an anti-social behaviour reporting target, this incident might represent an opportunity to reach that target, increasing the incentive to file a report. If the target for a particular reporting period has been met, there might be less incentive to file a report. Depending on training, policies, and incentives, this could represent as many as five reports (one for each youth), resulting in a “loud” response, or only one, giving a “quieter” response.
The outcome might be influenced by a threshold effect with the first few incidents drawing no police attention, but after more occur this might change.
When looked at this way, it is less obvious how this should be recorded, particularly in a Force with limited resources. This scenario may be simplistic but this Wall Street Journal article on possible stop-and-frisk and arrest quotas in the New York Police Department may provide perspective. The important point is that Taylor’s law plots can help find and assess some of these manipulative influences.
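A toy model of my own (not the model used in the paper) shows why such influences leave fingerprints in a mean-variance plot. Suppose a hypothetical reporting rule stops recording once a monthly target has been reached: capping crushes the variance far more than the mean, dragging the point below the Poisson reference line.

```python
import math
import random
import statistics

random.seed(7)

def poisson(lam):
    """Knuth's Poisson sampler; adequate for the modest rates used here."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# True monthly incidents are Poisson(12); the hypothetical rule stops
# recording once 10 reports have been filed in a month.
true_counts = [poisson(12) for _ in range(2000)]
capped_counts = [min(c, 10) for c in true_counts]

# For Poisson data variance equals mean; capping leaves the mean only
# slightly lower but collapses the variance.
mean_drop = statistics.mean(capped_counts) / statistics.mean(true_counts)
var_drop = statistics.variance(capped_counts) / statistics.variance(true_counts)
```

The mean falls by a fifth or so while the variance falls by most of an order of magnitude, which is exactly the kind of asymmetry a mean-variance plot is built to expose.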
How does this all fit together?
In approaching this research, the idea was to show an example of the method of mean-variance I had used many years ago to characterise the gain of digital camera sensors similar to those used in phones. The method uses the statistical properties of light (Poisson distribution) to measure the gain of a detector.
Since crime reports, like light photons, are countable, I wondered if they would behave like photons and show similarly predictable mean-variance behaviour.
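The camera-sensor method is easy to sketch. Under Poisson-distributed events, a detector that multiplies each count by a gain g produces readings whose variance-to-mean ratio is g itself, so two detectors can be compared directly (hypothetical numbers, not the sensor data from that earlier work):

```python
import math
import random
import statistics

random.seed(3)

def poisson(lam):
    """Knuth's Poisson sampler; adequate for the modest rates used here."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def gain(readings):
    """For g * Poisson(lam): variance = g**2 * lam and mean = g * lam,
    so variance / mean recovers g."""
    return statistics.variance(readings) / statistics.mean(readings)

# Two hypothetical detectors watching Poisson(20) events: detector A
# amplifies each count by 2 ("loud"), detector B by 1 (the reference).
detector_a = [2 * poisson(20) for _ in range(3000)]
detector_b = [1 * poisson(20) for _ in range(3000)]

relative_gain = gain(detector_a) / gain(detector_b)  # close to 2
```

Swapping detectors for Constabularies and photons for incidents is the leap the rest of this post describes.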
I started looking at crime statistics from a few neighbourhoods. An individual neighbourhood might look like the figure below which shows how crime fluctuates (whatever the short term trends might be).
Realising the growing scope of this work after looking at only a few neighbourhoods, I knew I needed assistance. Fortunately, last year, three exceptional students (my co-authors Amal, Rachel, and Suniya) volunteered to work for me. They did some chemistry during their time, but were also willing to help with this study. We divided up the work and started assembling a data set.
You will now understand that crime reports are signals produced by a detection system (the police) reporting on crimes committed. If everything turned out as I expected, a plot with the average number of crime reports per month on the x-axis and variance on the y-axis would have a slope of one (linear scaling law). If all Constabularies have the same amplification, the gain would also be the same for all of them.
The data did not show the anticipated results (do they ever?). Crime reports from policing neighbourhoods followed the more complex scaling relationship represented by Taylor’s Law. It certainly did not look like a Poisson distribution.
By the time of my departmental seminar (December 2013) the results looked like this:
Where we expected a straight line with a slope of one, we observed a power law with an exponent of 1.313 and a prefactor of 0.5598 (not 1!). These data were a mix of violence (lower values) and total crime (larger values), as we did not know at the time that mixing crime types was not good practice.
Convinced that a relationship could be discovered, we systematically looked at all 151 policing neighbourhoods in Nottinghamshire and Derbyshire. We looked at different types of crime as categorised in the Police statistics. We also looked at larger scale data sets from UKCrimeStats and at less controversial statistics (mortality) for comparison. The results have now been published and there are three particularly useful conclusions:
- The relative gain of a Constabulary can be measured by using data from their policing neighbourhoods and comparing it to another Constabulary. Using Derbyshire as the reference, Nottinghamshire showed a “louder” gain (1.36) for anti-social behaviour, and a “quieter” response (0.73) for burglary. In these two regions, violence and total crime had identical gain within statistical limits. These measurements provide a more rigorous and defensible way to compare Forces than unadjusted crime reports.
- Some types of crime show greater clustering. Violence in policing neighbourhoods appears randomly distributed while “other crime”, which includes many crimes of deceit, is more highly clustered.
- Police statistics have been criticised due to widely reported manipulation. Using simple models, we found that some types of manipulation are obvious in a mean-variance plot.
Crime reports from neighbourhoods follow a law, Taylor’s Law. Using this as a starting point, we can extract useful information, such as relative gain, from imperfect crime statistics provided by the Police. Overall, total crime at the local scale clusters similarly to measles transmission in a population in which 80–90% of people are vaccinated.
I have savoured researching this paper, and I’m of the view that there is still much more that can be done with sophisticated statistical analysis of available crime data. I now look forward to taking this work further, across all UK Police Forces, and attempting to look at other countries. I’d be happy to answer any queries and you can contact me at Quentin.hanley[AT]ntu.ac.uk.
Posted: July 9th, 2014 Author: Dan Lewis
Today we are delighted to be publishing the first in a series of posts from Nick Ross, probably best known as the long-time presenter of Crimewatch and author of a recent book, Crime: How to solve it – and why so much of what we’re told is wrong – which if you’ll forgive the plug, is one of the most interesting, well-written and thought-provoking books I’ve read for a long time. So you should read it – and no, before you ask, I’m not on commission!
Today then, we have a treat for you – the publisher has agreed to let Nick reproduce on www.ukcrimestats.com the entire Chapter 9 on Crime Statistics.
Over to Nick.
Chapter 9: Statistics
False facts are highly injurious to science because they often endure so long. – Charles Darwin
If crime is a normal part of the human repertoire, why is the crime rate so low? The question sounds perverse given that crime statistics have caused public consternation and political paranoia. But the answer is instructive. All crime statistics vastly underrate actual victimisation. And among the flakiest of all are the figures most people think are most reliable: those that come from the police.
It is essential to grasp how untrustworthy their records are – and how misleading they can be – to understand why the police get so distracted and why the courts are so feeble at controlling crime. But we also need to find a better metric, because if we can’t measure crime properly we can’t tell if it’s going up or down, we can’t calibrate our responses, and we can’t know if our solutions are making things better or worse.
It all once seemed so simple: if you want to know the crime, ask a policeman. The police are the experts and there is something comfortably definite about police statistics, not least that they can be traced back to actual victims. When the figures are published as tables or graphs they seem so tangible they must be real. Despite long-standing criticisms most policy-makers and commentators still take them at face value. The government even insisted that police statistics should be plotted on street maps and posted online so that, in theory, citizens can judge how safe they are. (I was privileged to be in at the pilot stage of one of these, in the English West Midlands, which showed a huge and obviously dangerous hotspot. It turned out to be the police station where suspects were searched for drugs or stolen property.)
There are three glaring problems in trusting police experience of how big or bad things are, and they all go back to a fundamental problem: crime, by definition, is illicit. As a general rule, people who break the law try not to draw attention to themselves. Sometimes their misdeeds are conspicuous, like a smash-and-grab raid in the high street, but mostly crime is surreptitious, intimate or even virtual. Every now and then someone will confess to a whole string of offences that were unknown to the police, but as a general rule, bullies, fraudsters, drink drivers, drug dealers, music pirates and rapists try to keep their crimes a dirty secret.
Accordingly, we expect the police to go and find crime for themselves. But officers rarely come across a crime in progress and, oddly, when they are proactive they actually distort the picture. A swoop on clubs will expose drug problems; a search for knives will uncover weapons. One area may have had a blitz on burglary, another on domestic violence or uninsured drivers. The arrival of a new chief constable or borough commander can have a huge impact on how the police operate, whom they target and what they prosecute. Some chiefs will turn a blind eye to street prostitution, others will clamp down on it. Often this gives rise to the perverse effect that better policing is rewarded with higher crime rates: if the police persuade more victims of rape to come forward, their tally of sexual offences will surge. Curiously, we can also get ‘more crime’ if those in government demand it.
Officers have often been given targets, such as two arrests per month, and charges are inflated (from, say, drunkenness to harassment – which counts as violence) to meet the quota. The Police Federation, which represents the rank and file in Britain, has justifiably called it ‘ludicrous’.
Similarly disturbing crime waves happen when charities or statutory agencies launch an initiative, or when the media mount a big investigation. Who knew child sex abuse was common until ChildLine came along?
But we the public are by far the biggest source of police intelligence. In other words, police crime figures are largely what we as witnesses, victims and occasional informants choose to tell them. Which is surprisingly little. Even if we see a crime actually taking place, according to a poll for the Audit Commission, almost two-thirds of us would walk on by. We can’t be bothered, don’t want to get involved or don’t think the police would do anything anyway. The reality may be worse than that survey suggests. Avon and Somerset Police set up a small experiment in which a plainclothes officer blatantly brandished bolt cutters to steal bikes, and though at least fifty people clearly saw what he was doing, not one person intervened or rang 999. The video went online and proved extremely popular.
That leaves the great bulk of recorded crime figures in the hands of victims. And, again, a big majority of us have reasons to keep quiet. When people are asked about crime they’ve suffered and whether or not they asked for help, it turns out that only 44 per cent of personal crimes are reported to the police. Even that reporting rate is a big improvement, caused partly by the spread of mobile phones. And it doesn’t count many of at least 9 million business crimes a year, most of which we only hear about through surveys, or commercial frauds which companies and managers would rather not make public.
Why do we suffer so in silence? The answer is fear, guilt and cynicism. In many ordinary crimes, and some extraordinary ones too, private citizens want to stay clear of the authorities. This is often the case in pub brawls, fights at parties, clashes in the street, domestic violence and a lot of sexual assaults which are too embarrassing to talk about. I saw this for myself when auditing crime in Oxford over two weeks for the BBC. On a typical Friday night at the John Radcliffe Hospital we filmed twelve people wounded badly enough to come to A&E, all male, all the result of alcohol, one with a bottle wound just beneath the eye, one with a double-fractured jaw, and one in a coma. But the police recorded only seven violent crimes that night, including some not hurt badly enough to have needed medical attention. Even more surprising, there was little correlation between the severity of the injury and the likelihood of telling the police.
A pioneering emergency surgeon – we shall meet him later – has systematically checked hospital records over many years and is blunt: ‘Police figures are almost hopeless when it comes to measuring violent crime.’
Then there are crimes people tend not to make a formal fuss about. Sometimes the victims perceive what is technically a crime to be normal, as with childhood bullying and theft among school kids. This is even true with full-blown rape, which you might think needs few definitions, but, as we shall see later, it is not just perpetrators who deny it happened; half of all women who have been attacked in a manner that fulfils the legal description do not consider themselves to have been raped. Many victims blame themselves and some are very vulnerable. One of the worst aspects of concealed crime is often dismissed as antisocial behaviour and is targeted at people with disabilities, causing huge distress and sometimes serious harm.
More often it’s simply not worth the effort of telling the police, as when an uninsured bicycle is stolen. In fact, some official theft rates do more to measure changes in insurance penetration than trends in stealing. One of the reasons that street crime appeared to rise steeply in the late 1990s was that mobile phone companies were promoting handset insurance. On the other hand, people are cautious if they are insured and don’t want to jeopardise their no-claims bonus, as where a car is vandalised or broken into.
Apologists for the official figures sometimes demur from such pettifogging and claim that at least the more serious crimes will be recorded. Not so: under-reporting is rife in stabbings or even shootings, so much so that British police chiefs want the medical profession to break patient confidentiality and report patients treated for knife or gunshot wounds.
Even murder is surprisingly hard to count. First it has to be discovered. Britain’s biggest peacetime killer, the family physician Harold Shipman, probably killed 218 patients over a span of thirty years, but none was regarded as homicide until shortly before his arrest in 1998. There are thousands of missing persons and no one knows if they are dead or alive unless a body turns up. Even with a corpse, pathologists and coroners may disagree on whether death was natural, accidental, suicide or at the hands of others; and scientific advances can suggest different causes from one year to the next. The statistical effects of all this are not trivial. Prosecutors can have a big effect too. Most years in England and Wales about 100 cases that are initially recorded as homicide become ‘no longer recorded’ as homicide because of reclassification. On the other hand, other defendants have the book thrown at them, as when reckless misadventure was reclassified as homicide after fifty-eight Chinese nationals suffocated while being smuggled into Britain in 2000, or when twenty-one cockle-pickers drowned in Morecambe Bay four years later.
Since in Britain murder is relatively rare, multiple deaths like these, or the fifty-two killed in the 7/7 bomb attacks, can massively distort the figures, warping short-term trends. Long-term trends are even more difficult because of gaps in the records, especially from the age before computers, when information was kept locally on cards or paper.
Which opens another can of worms.
A third of all crime reported to the police is not recorded as a crime.
A great deal depends on whether an officer considers that an offence has taken place and, if so, whether it gets put down in the logs, when it is recorded and how it is categorised. Traditionally the police have a great deal of discretion. Retired officers will sometimes readily concede that, in years gone past, many quite unpleasant crimes were not taken very seriously: people who were racially abused, young men ‘having a scrap’, and even serious bodily harm if inflicted by a husband on his wife. Apart from anything else, turning a blind eye could save a lot of work.
There will always be a lot of wriggle room. When is a young man with a screwdriver equipped for burglary; when is a small amount of drugs not worth bothering about; when is a discarded handbag indicative of a mugging; when is it best to turn a blind eye in the hope of gaining some intelligence; when is a drunken brawl best dealt with by calming people down; when if someone reports a disturbance should one finish one’s paperwork or rush round and intervene? Not infrequently these ambiguities are manipulated cynically, with offences being shuffled from one category to another to reflect better on police performance. As one officer famously put it, the books are frequently ‘cooked in ways that would make Gordon Ramsay proud’.
In recent years Home Office counting rules have greatly improved consistency. Even so, in 2000 the Police Inspectorate found error rates ranging from 15 to 65 per cent and in 2013 the Office for National Statistics was still sufficiently concerned about big discrepancies that it warned police may be tinkering with figures to try to fulfil targets.
Moving the goalposts
Even if all crime were reported and consistently recorded, police statistics can be terribly misleading. Lawyers, legislators and officials keep changing the rules. Karl Marx came across the problem somewhat before I did, correctly noting in 1859 that an apparently huge decrease in London crime could ‘be exclusively attributed to some technical changes in British jurisdiction’.
The most blatant example of moving the goalposts was between 1931 and 1932 when indictable offences in London more than doubled because of a decision to re-categorise ‘suspected stolen’ items as ‘thefts known to the police’. More recently, changes in counting rules led to an apparent and terrifying surge in violent crime in 1998 and then again in 2002. It started as a noble idea to get more uniformity and be more victim-focused but resulted in completely redefining violent crime. From that point on, half of all police-recorded violence against the person involved no injury.
In 2008 violence was reclassified again and this time many less serious offences were bumped up to big ones. For example, grievous bodily harm now included cases where no one was badly hurt. Inevitably the Daily Mail reported ‘violent crime up 22 per cent’.
It is not just journalists who get confused. Many political advisers and university researchers are also taken in, which can lead to silly ideas and unproductive policy. People often get irate at those who refuse to take police statistics at face value. ‘We all know what they mean,’ they say. It is as though challenging the figures is somehow to be soft on crime. But we don’t know what they mean, and nor do the police.
International comparisons of police statistics are even more unreliable. Different countries have different laws, different customs and very different reporting rates. On the face of it, Australia has seventeen kidnaps per 100,000 people while Colombia has only 0.6. Swedes suffer sixty-three sex crimes per 100,000 against only two in India.
Some people actually believe this stuff.
Evidently they don’t read the warning on the crime statistics tin. The Home Office has long warned that ‘police-recorded crime figures do not provide the most accurate measure of crime’, and for years the FBI was so cautious it sounded almost tongue-in-cheek: police data ‘may throw some light on problems of crime’. Yet however shallow, however defective, however inconsistent the figures, they have almost always been treated as far more meaningful than they are. Police responses, policy-makers’ strategies and public opinion navigated according to a tally which sometimes reflects the real world and sometimes doesn’t.
It is not as though we didn’t have a better mousetrap. Back in 1973 when crime was racing up the political agenda, the US Census Bureau started asking people for their actual experience of crime. For the first time they could get consistent data from year to year and from state to state. It was explosive stuff and immediately confirmed how incomplete police statistics were. The UK was already beginning to track crime as part of a General Household Survey, but from 1982 it followed the US lead with dedicated victimisation polls called the British Crime Survey or BCS. Other countries soon followed suit and over eighty now use a common methodology. That means we can now compare crime across borders as well as time.
The big picture
There is a lot wrong with the British Crime Survey. For a start, its name. The BCS only audits England and Wales – Scotland started its own SCS – and by the time they finally rebadged it (as the Crime Survey for England and Wales) the term BCS had become ingrained. So, confusingly, historical reports have to be called BCS and new ones, CSEW. If Wales goes its own way it may have to be rebranded yet again. It is also expensive. Since barely a quarter of the population suffers any sort of crime in any year you have to talk to a lot of citizens before you come up with a representative sample of, say, attempted burglary victims, let alone people who have suffered major trauma. That requires almost 50,000 face-to-face questionnaires, and not everyone will give up forty-five minutes for intrusive questions. It means researchers must doggedly go back to find the hard-to-get-at people, especially where victimisation is at its worst, and get them to trust in the anonymity of the process. It’s not like an opinion poll; it’s a mini-census that costs £100 per interview.
Even so it leaves a lot of gaps. Most obviously, it leaves out business crime, which has had to have a separate survey of its own. It is also hopelessly unreliable on rare crimes – one would have to interview almost a million people to get representative data on homicide. For a long time it missed out on under-sixteens too, fearing parents might object, but that has now been sorted. Past surveys also neglected homeless people and those in communal dwellings like student halls of residence, old people’s homes or hostels. An increasingly significant problem is that it largely ignores internet crime, but then so does almost everyone. And it almost certainly undercounts the most vulnerable in society who are victimised repeatedly and whose complaints are arbitrarily capped at five. Finally, being national, it has limited value in describing local crime.
Yet for all that, it has a huge advantage. Respondents may misremember or lie, but there is no reason to assume that memories or candour will change much from one year to the next. In other words, these big victimisation surveys have a power to describe trends.
So why did surveys like the BCS/CSEW take so long to catch on with the politicians, press and public?
The answer is, they didn’t come up with the right answers. Governments wanted to look competent, but since victim surveys uncovered far more crime than was realised hitherto they made the problem look even worse: the BCS revealed 8 million crimes a year compared to 3 million recorded by the police. Perhaps unsurprisingly, the early reports were met with a ‘conspiracy of silence’. One of the pioneers, Jan van Dijk, describes how his home country, the Netherlands, reacted with dismay in 1989 when the first international survey put it top of the league table for several property crimes, including burglary. The findings were lambasted for weeks by Dutch politicians, the media and criminologists.
On the other hand, crime surveys came to be disparaged by curmudgeons, including most journalists, because from 1995 they started to show crime was coming down. In fact in ten years, BCS crime fell 44 per cent, representing 8.5 million fewer crimes each year. Critics believed that this was just not credible and preferred police statistics which were far less encouraging and sometimes – on vandalism for example – continued in the opposite direction.
Thus it was that the British media continued to report that crime was climbing long after it had peaked and, incredibly, they went on with their rising crime agenda throughout a decade and a half of steep decline.
That is a story in itself.
© Nick Ross 2014
Crime: how to solve it and why so much of what we’re told is wrong. Biteback, £17.99
For background, detailed references and more see www.thecrimebook.com