Too positive about negatives

Probably the largest problem with technology is not in the technology itself, but in people’s failure to understand it. Nowhere is this more obvious than in the world of politics, where decisions are made on the hoof, often in response to emotive issues, and rational policymaking has to follow slowly behind like a poor cousin.

One particular piece of misunderstanding keeps cropping up, and I suspect it’s going to become more and more harmful as technology improves. This concerns the statistics of miscreant detection. I first came across it with the (alleged) deployment of facial recognition systems in airports to spot suspected terrorists, which were all the rage a few years back. I came across it again recently, with a proposal from the UK government to imprison anyone who seems to have a dangerous personality disorder, whether or not they have committed a crime.

On the face of it, these sorts of proposals seem very tempting — the world is brimming with bad guys intent on murder and mayhem, so let’s spot them early (preferably with cunning machines) and then bang them up. It certainly plays well to the gallery too.

Politicians are concerned only with the *false negatives* — people who fail to be detected in time, and go on to commit dreadful acts. They are right to be concerned about these people, of course. I do not disagree with the idea of stopping people from committing murder; my concern is a pragmatic one.

The problem is one of statistics, and concerns *false positives*. A false positive is someone you detect as a terrorist, madman or whatever when in fact they are not. Whenever you apply a test you are going to get some false negatives and some false positives. The number of them you get depends on the accuracy of the test, the overall population size and the number of real miscreants in the population. As you will see, it is no accident that our criminal justice system relies on a level of proof ‘beyond reasonable doubt’.

In all of these cases, the same criterion applies — there is a very large population to test, with a small population of miscreants. For example, in one year ten million people may go through an airport but only 5 terrorists. Psychiatrists may examine 100,000 people with personality disorders in a year, but only one of them will go on to murder a passer-by with a samurai sword.

So, let’s apply some very simple maths. Assume you have a test that is 90% effective — which is far better than any of the tests we have available for any of these cases. This means it is wrong 10% of the time, in either direction.

For the airport facial recognition system you have 10,000,000 people to test, and 5 terrorists to catch. So, 90% looks pretty good on the catching front — 5 terrorists, and you miss 10% of them, or half a terrorist. In a good year you catch the lot. Marvellous. Let’s try it the other way around though. Of the 9,999,995 innocents you wrongly identify 999,999 as terrorists. This means in total you have 1,000,004 suspects to do further checks on every year. That’s around 2,740 per day to check. From my experience of operations rooms, an alarm that goes off 2,740 times a day, and is almost always wrong, will be ignored. Even if it isn’t, that’s thousands of innocents held up for further checks, and hundreds of staff who could better be employed in other ways of making planes safer.
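The airport arithmetic above can be sketched in a few lines of Python. The figures are the ones assumed in this post (ten million passengers, 5 terrorists, a 90%-accurate test); the variable names are my own:

```python
# Hypothetical figures from the worked example above.
population = 10_000_000
terrorists = 5
accuracy = 0.9  # assumed the same for detecting terrorists and clearing innocents

innocents = population - terrorists
true_positives = terrorists * accuracy            # terrorists correctly flagged
false_positives = innocents * (1 - accuracy)      # innocents wrongly flagged
total_flagged = true_positives + false_positives

print(f"{false_positives:,.0f} innocents flagged per year")
print(f"{total_flagged:,.0f} suspects in total")     # about 1,000,004
print(f"{total_flagged / 365:,.0f} alarms per day")  # about 2,740
```

Swap in your own population and accuracy figures and the shape of the result barely changes: the flagged pool is dominated by false positives whenever the miscreants are rare.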

So, let’s try catching some madmen. Cunning psychiatrists examine 100,000 people to find the one bad apple. Their test also is 90% effective. So, huzzah, we catch our dangerous nutcase. A secure hospital awaits. However, of the 99,999 crazy-but-harmless remainder we also think roughly 10,000 are potentially dangerous, so we’d better lock them up forever too. That’s a lot of secure hospitals. Perhaps a small island would be more suitable.
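Another way to put the same point is as a Bayes’ theorem calculation: given that the test flags someone, what is the chance they really are dangerous? This is a sketch using the figures above (100,000 examined, 1 genuinely dangerous, a 90% test); the variable names are mine:

```python
# Hypothetical figures from the screening example above.
examined = 100_000
dangerous = 1
accuracy = 0.9

p_dangerous = dangerous / examined        # prior: 1 in 100,000
p_flag_if_dangerous = accuracy            # true positive rate
p_flag_if_harmless = 1 - accuracy         # false positive rate

# P(flagged) over the whole population, then Bayes' theorem.
p_flag = (p_flag_if_dangerous * p_dangerous
          + p_flag_if_harmless * (1 - p_dangerous))
posterior = p_flag_if_dangerous * p_dangerous / p_flag

print(f"P(dangerous | flagged) = {posterior:.5f}")  # about 0.00009, roughly 1 in 11,000
```

In other words, even after a positive result from the 90% test, the odds are overwhelmingly that the flagged person is harmless.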

The results are not particularly affected by the accuracy of the tests. If you make your test 99% effective, something which is frankly inconceivable, you are still locking up roughly 1,000 innocents for every correctly identified potential murderer. However, the accuracy of the tests is what gains political focus — the criticism of psychiatrists was that because they hang around with the mad all day, perhaps they get used to them and fail to spot the potential murderers.
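To see how little accuracy helps, here is a short sketch of the ratio of false positives to true positives in the 100,000-person screening, for a few (hypothetical) accuracy levels:

```python
# Hypothetical figures from the screening example above.
examined = 100_000
dangerous = 1

for accuracy in (0.90, 0.99, 0.999):
    tp = dangerous * accuracy                      # murderers correctly flagged
    fp = (examined - dangerous) * (1 - accuracy)   # harmless people wrongly flagged
    # Roughly 11,111 / 1,010 / 100 innocents per murderer caught.
    print(f"{accuracy:.1%} accurate: {fp / tp:,.0f} innocents per murderer caught")
```

Even a 99.9% test, far beyond anything plausible, still flags around a hundred innocents for each real murderer. The base rate, not the test, dominates the outcome.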

An insulting claim such as that, engendered merely by poor mathematics, is bad — much worse would be the implementation of such policies to solve the problem.

3 Responses to “Too positive about negatives”

  1. greatbiglizard

    you are obviously some kind of dangerous communist. don’t you know that locking up innocent people is good for the economy?

  2. Julian

    It’s called Signal Detection Theory.

  3. Ian

    But Doug, you miss the point. With the terrorist situation, you are right: it quickly becomes apparent that not all those people are terrorists, so the authorities get into trouble for all the internal body searches etc etc. But with the nutters, well, everyone who the machine says is a potential killer *is* a potential killer, and with them all locked up and sedated, how can this be proved wrong? It only becomes a false positive if you let them go and they spend the rest of their lives not killing people. It’s a Schroedinger variant. Anyway, it’s not like we’re short of people (especially Americans).

Comments are currently closed.