CSI: current research into the impact of bias on crime scene forensics is limited – but psychologists can help

At HERC we publish blog articles covering a wide range of issues that broadly relate to harm, evidence, crime and justice. In keeping with the critical position of HERC, our aim is to highlight all sides of the debate and to facilitate a discussion so that all voices are heard on the issue.

In this article, Lee John Curley and James Munro discuss the role of bias in crime scene forensics. Lee John Curley is a lecturer in Psychology at The Open University and James Munro is a Psychology Researcher at Edinburgh Napier University.

When a jury decides the fate of a person, they do so based on the evidence presented to them in the courtroom. Evidence obtained from forensic analysis, such as DNA analysis, is often interpreted as strong evidence by jurors.

This perception of forensic evidence is enhanced by popular TV shows like CSI: Crime Scene Investigation, where physical evidence is used to solve murders in a “whodunit” showdown between deductive cops and crafty criminals covering their tracks. All it takes is the right evidence to piece the story together.

But recent research suggests that the reality of forensic analysis is that it can be subjective and fallible. For instance, forensic evidence can sometimes be ambiguous because of factors such as the presence of DNA on samples that originates from more than one person.

When forensic evidence is ambiguous, contextual information (such as knowledge of a confession) may influence how forensic examiners evaluate the evidence. This distortion in their evaluation is called contextual bias and has been stated to be a reason why miscarriages of justice occur.

Our research agrees with this recent research that contextual information may influence the decisions of forensic examiners. But this may not necessarily be a bad thing. We believe it is premature to remove context from forensic analysis. Contextual bias on the part of a forensic examiner does not necessarily mean that errors will be made.

It is difficult for psychologists in the UK to make recommendations about the effects of context on forensic examiners because the research to date has been fairly limited, particularly in the way it has been conducted.

For example, some studies had a very small sample size. Some lacked a control group. In others, accuracy was not measured. This means that the researchers could not know for certain if participants would have performed differently if no contextual information had been available to them. So it has been difficult to generalise about the effects of contextual bias on forensic examiners’ decisions.

Bias does not equal error

But our study presents the idea that contextual information does not necessarily always lead to inaccurate decision making.

First, forensic evidence will be generated from both the crime scene and the suspect, meaning that the fingerprints left at a crime scene are more likely than not to match the fingerprints of the suspect. For this reason, contextual information (such as knowledge of a confession) that biases forensic examiners towards finding a match may lead to more accurate decisions being made.

Contextual information may also inform the examiner which tests to conduct. If the examiner knows which questions they must answer, then they may avoid worthless tests. But this also means they may overlook something. For example, one piece of research cited a rape-homicide case. In this case, a forensic laboratory was told by detectives to only analyse the evidence for semen samples. This meant that the forensic examiners missed blood samples that turned out to be integral to the case.

Based on this example, researchers stated that contextual ignorance may have more of a negative effect on forensic decisions than contextual bias. This view is supported by psychological studies which have shown that biased decision processes can lead to accurate decision outcomes.

Impact on jury decisions

Despite the potential positive effects, it may remain ethically and legally inappropriate for forensic examiners to use contextual information. For instance, jurors may interpret the different types of evidence, such as a confession and forensic evidence, as being independent of one another.

But if contextual information such as a confession aids the interpretation of forensic evidence, jurors may incorrectly think that each piece of evidence independently supports the other when this is not actually the case. This means that jurors could be overestimating the chances of a defendant being guilty.
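This double-counting problem can be illustrated with a toy Bayesian sketch. All of the numbers below are hypothetical, chosen purely to show the mechanism: if knowledge of a confession has already shaped the forensic result, a juror who treats the two items as independent effectively applies the forensic evidence's full weight twice.

```python
# Illustrative only: every probability and likelihood ratio here is
# hypothetical, chosen to show the effect of wrongly treating
# dependent pieces of evidence as independent.

def posterior(prior, likelihood_ratio):
    """Update the probability of guilt with one piece of evidence,
    using the odds form of Bayes' theorem."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 0.5  # juror's initial belief in guilt

# Suppose a confession alone makes guilt 4x more likely.
p_after_confession = posterior(prior, 4)

# If the examiner already knew about the confession, the forensic "match"
# carries less independent weight -- say a likelihood ratio of 3, not 10.
correct = posterior(p_after_confession, 3)

# A juror treating the two as fully independent applies the full ratio of 10.
naive = posterior(p_after_confession, 10)

print(f"correct (dependent evidence): {correct:.2f}")
print(f"naive (assumed independent):  {naive:.2f}")
```

Under these made-up numbers the naive calculation puts guilt at roughly 0.98 against a correct figure of about 0.92: the juror overestimates guilt precisely because the two items are not independent.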

Our review suggests that concerns relating to the study of contextual bias in forensic examiners – small sample sizes, no accuracy measure and failure to use a control group – make it difficult for implications and recommendations to be drawn.

We suggest that future research employs the skills of both forensic examiners and cognitive psychologists, so that both skill sets can be used to create realistic experiments. Examiners have the necessary knowledge of both lab environments and forensic evidence, and we believe that access to this knowledge will help psychologists design more rigorous experiments targeted towards the study of contextual bias in forensic examiners. Only then can proper conclusions be drawn about whether contextual bias is a help or a hindrance.

This article was originally published by The Conversation on the 29th of October 2019, at: http://theconversation.com/csi-current-research-into-the-impact-of-bias-on-crime-scene-forensics-is-limited-but-psychologists-can-help-125467

Artificial Intelligence and rationality as psychological issues


In this blog post, Dr Lee John Curley discusses the widespread fears of AI as involving the loss of ‘our special human capacity of rationality’. Lee John Curley is a lecturer in Psychology at The Open University.  

Our fear of Artificial Intelligence once related to the Terminator and Skynet, but in a time of economic uncertainty and mistrust over how artificial intelligence uses human data on the internet, new fears are more related to employment and human rights. Some people see the development of AI as a process in which we recklessly hand over our special human capacity of rationality to machines, condemning ourselves to low-paid jobs, or even unemployment. In this week’s blog, I explain why psychologists are interested in rationality. I present the fable of Prometheus, the great titan who was punished for passing on his godly skill of rational thought. I highlight the lessons that can be learnt from this story when considering the potential implications of artificial intelligence.

Rationality, or the ability to integrate information to choose the option with the most utility, is a cognitive ability that may be at the heart of what makes us human: the very term Homo sapiens means “wise man”. Rationality has become such a constant in human behaviour that the pillars of society (law, economics and medicine) all assume that decision makers employ rational processes when faced with a choice. This blog will delve into how the ancients viewed rationality, how modern cognitive psychologists view the term and how rationality will shape the future.

However, rationality has been studied by more than just cognitive psychologists. Mathematicians, philosophers, social psychologists and psychoanalysts have all studied it, each with a different viewpoint on the concept and on the extent to which humans actually behave rationally.

In Ancient Greece, the world was explained in terms of symbolic entities (gods, deities and titans) that represented observable phenomena. For instance, Gaia represented the earth, Poseidon the seas, and the almighty Zeus was symbolic of the heavens above. Some of these powerful beings, however, represented very human traits. Prometheus (meaning forethought) and Epimetheus (meaning afterthought) represented the rational and non-rational (or intuitive) parts of the human mind, respectively. Once these titans fell out of favour with the Olympians, however, their roles of rationality and intuition fell to the gods Apollo and Dionysus. Prometheus was the champion of thinking ahead and choosing the right long-term path, despite the negative short-term effects for himself. This is evidenced in the story of Prometheus where he steals fire for the ancient humans, against Zeus’s instructions, and is punished until he is freed by Herakles. Despite the negative ramifications for himself, he metaphorically, and literally, ignites rationality, abstract thought and logic in the minds of Homo sapiens; thus simultaneously making humans more like the deities they worshipped, and the gods less special. The creation of the Prometheus myth shows that rationality is a key aspect of humanity, and that the ancient Greeks were aware of its power.

During the Renaissance, there was a reawakening of rationality, with the invention of mathematical (or normative) concepts, such as probability, that are essential to modern mathematical and psychological theories of rationality. With rationality and probability becoming interlinked, humans were viewed as “Laplacean demons”. In other words, ‘we’ were seen as rational beings with unlimited cognitive capacity, uninfluenced by the limitations of the mind. In association with this development in rationality and mathematics, institutions such as law, medicine and economics were all developing fields and were influenced by the perspective of the time (i.e., to be human was to be rational).

This was the main viewpoint until the cognitive revolution in psychology and the seminal work of Tversky and Kahneman. They conducted a number of experiments in the 1970s and 1980s (work for which Kahneman later won a Nobel Prize) highlighting that although rationality should govern our minds when making decisions, individuals sometimes deviate from rational principles and make decisions based on intuitive cognitive short-cuts called heuristics (from the Greek for “find” or “discover”). Their research showed that humans are flawed and that we can make biased decisions.

This perspective has dominated the last 50 years of work in the field of decision science. Contemporary decision scientists, however, see intuitive thought and rationality as brothers (similar to the Greek myths surrounding Prometheus and Epimetheus). The dual process model of decision making suggests that two different modes of cognition (System 1 and System 2) govern our decision making. System 1 is an intuitive mode of cognition, with a plethora of heuristics making up its components. System 2, on the other hand, is the rational part of the mind, which may be unique to humans. It is believed to be more effortful and conscious than the primitive System 1 mode of cognition. The modern mind-set is that it is possible to make rational decisions, but that doing so is difficult and effortful; researchers therefore believe that humans much prefer to default to System 1.

This flawed perspective of human rationality has led to rationality, the very essence of humanity, becoming synonymous with artificial intelligence and robotics. Normative (mathematical) models of rationality have been shown not to reflect the entirety of human behaviour, whereas artificial intelligence (AI) may be a new frontier to apply these classical models of decision making. Unlike human beings, artificial intelligence can be programmed to accord with rational principles and statistics. Therefore, what classically was seen as something unique to humans, the thing that made ‘us’ special, may in the future become a robotic trait. This mirrors Prometheus’s gift to ancient humans, which led to deities becoming less godlike, and humans becoming more like their creators.

Now computers are powerful enough to win against a human at chess, and it is estimated by researchers that AI will exceed human ability in a number of tasks (e.g., language translation) in the next 10 years. It is even believed that by 2053 AI could replicate the abilities of a surgeon. This speculation suggests that the expansion of artificial intelligence into the realms of rationality may cause humans to become obsolete, with more rational, consistent, and efficient computers replacing biased and flawed humans. This could cause a number of occupations traditionally held by humans to be performed by complex AI.

Others, such as Peter Fleming, instead argue that AI will cause an increase in poorly paid jobs: an important factor in whether AI is utilised in a profession, he argues, is whether doing so would be economically viable. Therefore, Fleming suggests that low-skilled and low-paid jobs will not be replaced. He expands on this point by suggesting that AI that partially automates a job through an app will also reduce the skill required of the employee, thus decreasing the pay required for the service (e.g., an Uber driver with an app vs. a traditional taxi driver who receives training). Furthermore, contrary to contemporary belief, the age of AI may have a negative effect on human standards of living. Humans, like Prometheus, may suffer the negative consequences of passing on the sacred flames of rationality to an intelligence that ‘we’ created.

In summary, rationality has always been viewed by humans as a god-like ability. The story of rationality is the story of humanity: the way we view rationality changes how we view ourselves, and ‘we’ are coming ever closer to mirroring the story of Prometheus by igniting the flame of rationality in non-organic decision makers, thus decreasing the specialness of humanity. By giving this special ability to AI, we may be condemning ourselves to low-paid jobs, or even unemployment, further bringing to life the story of Prometheus, the great titan who was punished for passing on to humans his godly skill of rational thought.

Do new laws on phone use whilst driving fully reflect scientific knowledge?

Gemma Briggs, The Open University

 

On 1st March, tougher penalties for hand-held phone use whilst driving came into force. Those who are caught now face a £200 fine and six points on their licence. On announcing the change in legislation, Transport Minister Chris Grayling claimed that drivers must “take responsibility” for their actions, making phone use behind the wheel as socially taboo as drink and drug driving. This is a message few would disagree with, yet the legislation it relates to misses one crucial point: hands-free phone use is just as distracting as hand-held conversations.

The difficulty with this issue is that people are often unwilling to believe that hands-free phone use is any different from talking to a passenger. I’m often asked if this research means drivers must travel in total silence to avoid distraction, or whether I’ve ever tried to drive with screaming kids in the back of the car – surely that’s more distracting than a simple phone call? Of course, any type of secondary task whilst driving can be distracting, but phone use seems to be qualitatively different due to where both conversation partners are: someone on the phone cannot see what the driver can, and therefore consistently demands their attention. A passenger, on the other hand, can see when the driver is facing a challenging situation and can stop talking, thus reducing the amount of information the driver needs to process.

Our research takes this a step further by investigating which aspects of a phone conversation affect driving. As we all have a limited amount of mental resources available to us when completing any task, speaking on the phone introduces competition between the two tasks for these resources: the cognitive resources needed for driving may also be needed for a phone conversation. When talking on the phone, drivers may create mental images of what the other person is saying, where they are and what they’re doing. If this is the case, the conversation could have a ‘visual’ element to it, meaning some of the resources needed for accurate visual attention whilst driving may already be in use for a phone conversation.

Using a hazard detection test, we measured drivers’ reaction times to hazards and their eye movements. Some drivers were distracted by a phone task which sparked their visual imagination, and others completed a phone task which did not require imagery. A final group of drivers completed the task without any distraction. Unsurprisingly, we found that dual tasking drivers reacted to fewer hazards, and took longer to react to those hazards they did notice, than undistracted drivers. But those who were distracted by a conversation sparking mental imagery were the most distracted. Of more interest to us was the finding that those distracted by imagery took longer than undistracted drivers to react to hazards that occurred right in front of them, in the centre of the driving scene, yet did not take longer to react to hazards in the periphery – to the sides – of the scene. This seemed odd, until we established a worrying trend: very few dual tasking drivers reacted to the peripheral hazards at all, suggesting they hadn’t seen them.

Eye-tracking data revealed that dual tasking drivers looked at an area of the driving scene around four times smaller than undistracted drivers – in fact, they tended to focus on a small area at the centre of the scene, largely ignoring what was happening at either side. But, even though they were looking directly ahead, dual taskers took longer to react to hazards presented at that point, and on occasion still missed them altogether!

Taking hazard detection and eye-tracking data together we were able to identify that dual tasking drivers can look at a hazard yet fail to see it, due to a lack of available cognitive resources.

So, having two hands on the wheel and two eyes on the road isn’t enough if a driver is distracted by a phone call. Essentially, distracted drivers can be ‘cognitively blind’ to important aspects of the driving scene, making them more likely to be involved in accidents which could affect both their own and others’ safety. Phone use behind the wheel should definitely be as socially unacceptable as drink driving, but legislation needs to recognise and acknowledge decades of scientific research which emphatically demonstrates that hands-free phone conversations pose a significant danger.

 

This blog post originally appeared on the Open University Centre for Policing and Learning blog, at: http://centre-for-policing.open.ac.uk/taxonomy/term/265/blog-do-new-laws-phone-use-whilst-driving-fully-reflect-scientific-knowledge

Supporting young offenders in the courtroom

Nicola Brace

The Open University

For a long time the NSPCC has highlighted the need for a justice system that is fit for children and last year, three months after they launched their Order in Court campaign, the Government announced a series of changes. One of these changes included the promise to double the number of registered intermediaries to help certain individuals testify in court. Their role is to assist victims and witnesses who are children or who have mental health issues, physical disabilities or learning difficulties. Registered intermediaries are professional communications specialists, accredited by the Ministry of Justice, who help those individuals understand what is being asked of them and communicate their replies.

This is an image from the NSPCC campaign highlighting the lack of registered intermediaries for children who give evidence in court.

The court intermediary service was introduced under the Youth Justice and Criminal Evidence Act 1999, and it was aimed at assisting vulnerable witnesses for the prosecution and for the defence, but not the accused. Although the Coroners and Justice Act 2009 permits examination of a vulnerable defendant through an intermediary in England and Wales, according to the website ‘The Advocate’s Gateway’ this provision is not yet in force. Despite this, the website reports an increase in the number of trials where judges allow an intermediary to assist defendants. A recent example is the murder trial involving two teenage girls reported in the news in early April this year, with one article stating that an intermediary was present throughout the trial to help them understand what was being said in the courtroom.

The Advocate’s Gateway is hosted by the Advocacy Training Council and provides guidance based on evidence that relates to vulnerable witnesses and defendants.

 Contribution from Speech and Language Therapists

The absence of a statutory provision regarding intermediaries for vulnerable defendants is worrying given the growing evidence of speech and language communication difficulties among children who offend. For example, in 2007 Karen Bryan and colleagues randomly selected 58 young offenders aged 15-18 years, and found that 66-90% had below average language skills and 62% failed to achieve the level of literacy normally expected by 11 years of age. In a subsequent study they reported that 65% of 72 young offenders had language difficulties, with 20% assessed as ‘severely delayed’. These difficulties cover all aspects of communication including expressing themselves through speaking and writing, understanding the spoken and written word, as well as using and understanding non-verbal communication. Another study showed that many young offenders could not define or describe the type of words they will hear in the courtroom, such as ‘penalty’, ‘verify’ and ‘caution’.

At present we do not have evidence that explores the relationship between offending and communication difficulties over a period of time, but it is likely that both language difficulties and youth offending are the outcomes of a set of environmental and biological adversities. There is a high prevalence of other difficulties, including learning disabilities and histories of maltreatment among this group. Furthermore, young offenders coming from low (poor) socio-economic backgrounds are overrepresented, and there is evidence suggesting a link between poor communication skills and social disadvantage, with between 40% and 56% of children from disadvantaged backgrounds showing language delay when starting school.

Another important aspect to consider is that young offenders with speech, language and communication difficulties will have struggled throughout their school years, and their inability to respond to verbal demands in the classroom may have been misinterpreted by teachers as rudeness or disinterest, which in turn may have led to lowered expectations and an underestimation of their academic potential. When interviewed, young offenders have reported feeling frustrated at school when they did not understand some of the words used by teachers or textbooks. It is not surprising, therefore, that Karen Bryan and colleagues found a large majority had stopped attending school before reaching 16 years of age. The Royal College of Speech and Language Therapists has sought to raise awareness of the important role speech and language therapists can play, both in identifying the precise needs of young offenders and offering therapy interventions. As these interventions improve the verbal communication skills of the young offenders, they are then able to access the rehabilitation and treatment programmes that help prevent and reduce re-offending.

Training opportunities

Legislation is not yet in force to ensure that vulnerable defendants have the same access to court intermediaries as vulnerable victims and witnesses. Therefore, given the language that is routinely used in the courtroom, some young defendants may fail to understand or follow the court proceedings. Recently, training opportunities have been developed for anyone communicating with young offenders. The Royal College of Speech and Language Therapists has developed a training programme called The Box, which is designed to help those working within the criminal justice system, including court staff and Crown Prosecution Service staff. Not only will users learn how to spot people with communication needs, they will also find out how to work more effectively with them. The programme includes an e-learning module, a two-day course and a screening tool. The Advocate’s Gateway has recently released new toolkits that focus on the difficulties that can arise when questioning a vulnerable witness or defendant, including how to question someone with a ‘hidden’ disability such as language impairment. This comprises advice on using concrete words, about keeping questions simple in structure and the problems of using negative and passive language in questions. Hopefully, those working within the criminal justice system will avail themselves of these training opportunities.

 

How to improve identification evidence: Practitioner hits and academic false alarms.

Graham Pike and Virginia Harrison

The Open University

Previous articles written for this site have highlighted the dangers and limitations of eyewitness evidence (see Briggs and Westmarland, 2015), and described how such evidence can all too easily lead to a miscarriage of justice (see Kaye, Drake and Pike, 2014). For example, we noted the work of The Innocence Project, a largely US-based national litigation and public policy organisation, that has to date used DNA analysis to secure 330 exonerations of existing convictions. The average sentence served by the innocent people wrongfully convicted is 14 years, and 18 of the exonerations involved death sentences. Those stats might be frightening in themselves, but they are very likely the tip of a very large iceberg.

A great deal of academic research has been conducted on how and why eyewitness evidence can be so inaccurate, and academics and organisations like The Innocence Project have done their best to shout the results from the rooftops, yet very little change is evident in the gathering and use of such evidence within the criminal justice system. For example, other than a few localised pilot studies, recommendations coming from organisations such as The British Psychological Society and The American Psychological Association have been largely ignored.

In a recent piece of research conducted through the OU Policing Research Consortium and funded by the College of Policing, we explored why evidence from research concerned with witness identification is not being translated into practice. As a lot of research has already been conducted on the barriers to translation within organisations, including those of the criminal justice system, we instead focused on the research itself, looking at four specific areas:

  • The dissemination of research to criminal justice practitioners
  • The way the research was conducted
  • The need for research and change to practice
  • The aims of the researchers and practitioners

Our research used an online survey which was completed by 153 UK policing personnel, including 32 who work in ID suites (and whose work concentrates on obtaining eyewitness identification evidence). The respondents were generally experienced, with fewer than 5% having less than five years’ experience and the majority more than 16 years.

In terms of dissemination, we asked questions regarding practitioners’ knowledge of research and the recommendations that have followed from this. Overall, knowledge was generally poor, with the modal (most common) response being “I don’t know anything [about research]”. Even amongst staff working in eyewitness identification (ID) suites, only 11% indicated they tried to keep up to date with identification research, which was less than the 15% who indicated they knew nothing about this research. A similar pattern was apparent when respondents were asked about the recommendations that have been made from research, with over 50% indicating that they didn’t know such recommendations even existed.

A number of questions were asked concerning research methodology and analyses used in the witness identification literature. A general picture emerged suggesting that one of the major barriers to putting research findings into practice may be due to the complexities of the research process itself.  For example, the responses collected seemed to indicate that participants saw research in this area as overly complex (in terms of both the methods and analyses employed), which in turn led to conclusions that were also too complex and did not speak to operational needs in a way that was implementable.

When asked about their thoughts on whether current ID procedures were effective or not, the modal response was “They generally work well and don’t need much improvement”, with over 85% of ID personnel selecting either that option or “They work very well”. Although the consensus amongst academics is that significant change is required, this is an opinion that does not seem to be shared by practitioners, and could help explain why there is little engagement with research evidence. After all, if you don’t think the system is ‘broke’, why would you look for research on how to ‘fix it’?

Perhaps the most illuminating question concerned the types of changes that practitioners thought should be researched and implemented. The data summarised in Figure 1 show that the practitioners surveyed overwhelmingly thought that any potential changes should be aimed at increasing positive identifications – that is, a ‘hit’ outcome where the witness selects the suspect from an ID procedure. Opinion differed as to whether this should come at the cost of also increasing misidentifications (a ‘false alarm’ outcome where the witness selects the suspect from the ID procedure, but the suspect is actually innocent of the crime being investigated), with ID practitioners favouring increasing positive IDs even at that cost.

Figure 1: What should changes to ID procedures aim to achieve?

What makes this an interesting result is that it represents a fundamental clash with the aim of most research in the area, which is to reduce misidentifications, even if this was at the expense of a reduction in positive identifications – this is represented by the green bar in Figure 1. In other words, researchers have generally been concerned with finding ways of preventing miscarriages of justice, whilst ID practitioners are concerned with increasing conviction rates. To make matters worse, we know that any attempt to increase positive IDs will also result in an increase of misidentifications and likewise any attempt to reduce misidentifications will result in fewer positive IDs. Thus, the aims of researchers and practitioners are fundamentally at odds.
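The coupling between hits and false alarms can be sketched with a minimal signal-detection model. All of the distributions and criterion values below are hypothetical; the point is only that both rates depend on the same decision criterion, so lowering the bar to gain hits necessarily gains false alarms too.

```python
# A minimal signal-detection sketch (all numbers hypothetical) of why
# increasing positive IDs ("hits") also increases misidentifications
# ("false alarms"): both rates move with the same decision criterion.
from statistics import NormalDist

guilty = NormalDist(mu=1.5, sigma=1.0)    # familiarity when the suspect is the culprit
innocent = NormalDist(mu=0.0, sigma=1.0)  # familiarity when the suspect is innocent

def rates(criterion):
    """Hit and false-alarm rates for a given identification threshold."""
    hit = 1 - guilty.cdf(criterion)
    false_alarm = 1 - innocent.cdf(criterion)
    return hit, false_alarm

strict_hit, strict_fa = rates(1.0)    # cautious criterion
lenient_hit, lenient_fa = rates(0.5)  # criterion lowered to gain more hits

print(f"strict:  hits={strict_hit:.2f}, false alarms={strict_fa:.2f}")
print(f"lenient: hits={lenient_hit:.2f}, false alarms={lenient_fa:.2f}")
```

With these made-up parameters, relaxing the criterion raises hits (roughly 0.69 to 0.84) but also roughly doubles false alarms: there is no criterion shift that improves one rate without worsening the other, which is exactly the clash between practitioner and researcher aims described above.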

To conclude, this research shows that in addition to any organisational and cultural barriers to translating evidence into practice, in the critical case of eyewitness identification practitioners know very little about research findings and believe research methods, analyses and conclusions to be too complex. Perhaps more importantly, there also seem to be fundamental differences between researchers and practitioners in terms of opinions as to the current state of practice and to what the goals of research in this domain should be.

This article summarises preliminary findings that were presented as:

Harrison, V. and Pike, G. (2015).  Police perceptions of eyewitness evidence and research. Paper presented at the European Association of Psychology and Law, Nuremberg, 2015.

Is there something in the (cabin) air?

Gini Harrison

The Open University

In early 2013, the death of a BA Pilot (Richard Westgate) hit the headlines following claims that he may have been suffering from symptoms related to occupational exposure to neurotoxic chemicals. Specifically, there was speculation about whether his death may have been the result of repeated exposure to contaminated cabin air. Due to the obvious impact that toxic cabin air might have in terms of public safety, the Senior Coroner for the County of Dorset (Sheriff Stanhope Payne) was charged with investigating these claims, and at the time of writing this blog in June 2015, this inquest is still ongoing.


In February this year (2015), Richard Westgate’s death hit the headlines again when Sheriff Payne was compelled to issue a Regulation 28 report out of concern for public health. These reports are only written when a coroner determines a possible cause of death that could have a wider impact on others – when it is felt further deaths could result from the same cause and that preventative action could and should be taken. In this case, Sheriff Payne cited organophosphate (OP) exposure as a probable cause of death and he issued the report to both British Airways (BA) and the Civil Aviation Authority (CAA), calling for urgent action to be taken to minimise any future risk of harm. Two months later, both of these organisations issued their formal responses to this report stating that the evidence on which the Sheriff based his conclusions was ‘selective’ and that no link between exposure to contaminated cabin air and long-term health effects has been established – although they acknowledge that such a link cannot be excluded.

Despite these claims, in June 2015 the BBC reported that at least 17 individuals are now seeking legal action against British airlines citing exposure to toxic cabin air as the cause of their ill health. Whether or not they will be successful in their claims remains to be seen, but at least one precedent has been set: in 2010 a former flight attendant was awarded compensation for respiratory damage sustained as a result of exposure to on-board neurotoxic chemicals (including OPs).

What are organophosphates and why are they harmful?

OPs are a class of neurotoxic compounds with a number of different properties and uses. ‘Neurotoxic’ essentially means that they can interfere with nervous system functions, resulting in cognitive, emotional and/or behavioural problems. This interference can take place via a number of different mechanisms, but the main biological actions seem to be (1) interrupting the normal process of neurotransmission (particularly with regard to the neurotransmitter acetylcholine) and (2) damaging parts of the neurons themselves, specifically axons and myelin (Abou-Donia, 2010).

The fact that OPs can directly interfere with the nervous system is not a new discovery. In fact, some OPs have been used as nerve gas agents since World War II – both Sarin and Soman are types of OPs. However, their use is not confined to areas that are quite so controversial or necessarily harmful. Since the 1950s OPs have been engineered to have lower toxicity to mammals, and some have selective toxicity to specific organisms, such as insects. In this form, OPs make up the vast majority of the world’s pesticides/insecticides and are extensively used to control insects in both domestic and public contexts, agriculture, horticulture and veterinary medicine. However, while these versions of the compounds have been modified to make them less harmful to humans, this does not necessarily mean that they are harmless. Indeed the immediate effects of high level exposure to OPs have been well documented and a plethora of evidence exists to support the view that acute OP poisoning can cause ill health and neuropsychiatric symptoms. However, considerable controversy still surrounds the issue of whether exposure to lower levels of OPs is harmful (see Alavanja et al (2004)  and Mackenzie-Ross et al (2013) for reviews).

But why are these compounds on a plane? The reason is that, in addition to acting as pesticides, OP compounds have a number of chemical properties that make them useful in a variety of contexts. For example, the OP tricresyl phosphate (TCP) has anti-wear and flame-retardant properties which allow it to work as a lubricant under extreme pressure. As such, TCP is a very useful additive in jet engine lubricants and hydraulic fluids. In fact, it is exposure to TCP that a number of aircrew are particularly concerned may be the cause of their ill health.

What are the reported effects of OP exposure?

Airline crew and passengers have been reporting ill-health following exposure to contaminated air for many years (see Mackenzie-Ross et al (2012) for more information). The immediate effects of exposure include eye and skin irritation, breathing difficulties, headaches, nausea, dizziness, fatigue and cognitive impairment (e.g. disorientation, confusion and memory problems). These effects tend to occur soon after exposure and resolve when the person moves away from the source of OPs. However some exposed individuals report more persistent symptoms lasting months or even years after their last exposure to OPs. In these more chronic cases, a wide array of physical and psychological symptoms may be experienced, including cognitive impairment (memory, word finding, multitasking difficulties), lack of coordination, nausea/vomiting, diarrhoea, respiratory problems, chest pains, severe headaches, light-headedness, dizziness, weakness and fatigue, paraesthesias (pins and needles or tingling sensations), tremors, increased heart rate, palpitations (pounding heart), irritation of ear, nose and throat, muscle weakness/pain, joint pain, skin itching, rashes, blisters, hair loss, signs of immunosuppression (being more prone to illness) and chemical sensitivity. Although a debate is ongoing in the UK about the causation and diagnosis of these long-term effects, OP exposure remains a possibility.

The uncertainty about a causal link between OP exposure and these chronic symptoms lies in the difficulty of establishing an accurate estimate of exposure. For example, given the absence of routine on-board air quality monitoring on commercial aircraft, it is impossible to determine what chemicals enter the cabin, or in what quantities. In addition, self-report measures of exposure are notoriously problematic, as they rely on people’s memory of events and their capacity to detect noxious substances (both of which vary enormously in reliability). As such, before a causal relationship can be established, the mechanisms through which exposure might occur, and the levels of exposure involved, need to be better understood. So first, let’s consider how someone on a plane might become exposed to OPs.

How does exposure occur?  

The issue here comes from how air is supplied throughout the aircraft to allow crew and passengers to breathe. On the ground, we are used to breathing air at an average of 15°C and a pressure of 14.7 psi (in the UK at sea level). However, at an altitude of 35,000 feet the air pressure is only 3.46 psi, with temperatures lower than -50°C. Fresh air is pumped into the plane from outside the aircraft, but it first needs to be warmed and pressurised to a safely breathable level.


As part of the propulsion process, airplane engines heat and compress air before fuel is added and combusted. On some aircraft this air is then ‘bled off’ and pumped into the cabin, unfiltered. Ordinarily this process is safe; occasionally, however, faulty seals can allow engine oil fumes to escape into the airflow and contaminate it.

The incidence of these ‘fume events’ is difficult to quantify, as commercial aircraft are generally not fitted with equipment for monitoring on-board air quality. The Committee on Toxicity of Chemicals in Food, Consumer Products and the Environment (CoT) has estimated that fume events occur on approximately 0.05% of flights, but there is also significant under-reporting of exposure: a survey by the British Airline Pilots Association (BALPA) found that up to 96% of the contaminated air events that pilots experience may go unreported to the CAA (Michaelis, 2003), possibly due to lack of awareness, commercial pressure and the perception that exposure to such contaminants is normal and part of their everyday job.
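Combining the two figures quoted above gives a rough back-of-envelope sense of the reporting gap. A minimal sketch (the total number of flights is a made-up illustrative figure, not a real statistic):

```python
incidence = 0.0005   # CoT estimate: fume events on ~0.05% of flights
unreported = 0.96    # BALPA survey: up to 96% of events go unreported

flights = 1_000_000  # hypothetical number of flights, for illustration only

true_events = flights * incidence
reported_events = true_events * (1 - unreported)

print(f"expected fume events: {true_events:.0f}")
print(f"of which reported:    {reported_events:.0f}")
```

On these figures, official reports would capture only around 1 in every 25 events, so the reported rate would understate the true rate by a factor of roughly 25.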

In 2000, to better establish the incidence and outcome of fume events, CoT recommended that the UK Department for Transport commission (1) an air monitoring study of affected aircraft types and (2) a cross-sectional epidemiological study examining neurological outcomes in pilots. In 2008 Cranfield University were chosen to undertake the first of these studies. They monitored a total of 100 flights, measuring the levels of several chemical compounds present in the cabin during various stages of flight. A number of chemicals were detected over the course of this study, including the organophosphate TCP, carbon monoxide and toluene; all levels were reported to be within safe limits – although it is worth noting that no aircraft safety standards actually exist with regard to TCP. While this may be reassuring for routine flights, no fume events were observed on any of the flights that were monitored (not overly surprising, given the sample size and the relative rarity of cabin air contamination). As such, the possible exposure that passengers and aircrew may face during a fume event remains unclear.

So what evidence is there?

Without accurate measures of exposure, it’s very difficult (if not impossible) to reliably establish whether or not there is a relationship between ill-health and exposure to fume events. However, there is some supporting evidence. For example, symptom surveys (e.g. Michaelis, 2003) and case studies of passengers and crew who have been involved in known contamination events (as confirmed by mechanical report) have demonstrated deficits consistent with OP exposure (e.g. Murawski, 2011). Indeed, my own work (in collaboration with colleagues at UCL) has found a cognitive deficit profile in airline pilots similar to that seen in OP exposed farmers (Mackenzie-Ross et al, 2012). In addition, biological markers of possible neurotoxicity have been found in some individuals with these symptom profiles (e.g. Abou-Donia et al, 2013) and at post-mortem (Abou-donia, 2014). However, while these studies may provide evidence consistent with OP exposure, it is still very difficult to claim causation; mainly due to the small sample sizes and correlational findings.

So it seems there is still a large amount of scientific uncertainty regarding the long-term effects of inhaling pyrolysed engine oil on human health. However, with growing pressure from lobbyists (like the Aerotoxic Association), potential lawsuits and inquests, it seems we may soon have a more definitive answer to this question. At the end of 2014, the European Aviation Safety Agency launched a Preliminary Cabin Air Quality Measurement Campaign, which should allow for the funding and development of instruments that may be able to monitor air quality in real time. In addition, biomarkers for exposure to TCP have recently been developed by researchers from the Universities of Washington and Nebraska. Given these advances, we may not be far from establishing a valid and reliable measurement of exposure. Once that has been achieved, establishing whether a link between ill health and exposure truly exists should be the logical next step.

What can visual attention research tell us about the reliability of eyewitness evidence?

Gemma Briggs and Louise Westmarland

International Centre for Comparative Criminological Research

The Open University

Most people probably think that they would be able to provide a good account of what happened if they witnessed a serious crime. Being able to identify the person who attacked them, for example, from a line up, might seem to be a fairly straightforward matter. However, a recent report from a committee tasked with assessing the factors affecting eyewitness testimony recommended that all law enforcement agencies should ‘…provide their [personnel] with training on vision and memory and the variables that affect them’. Whilst it may appear to be common sense that research-based policing practice should be the norm, it is evident that this is not always the case. Why, then, has this recommendation been made, and how could such training help to reduce erroneous convictions?


Fredrik Fasting Torgersen in the centre of a police lineup

Initiatives such as the Innocence Project highlight an alarmingly high number of cases where individuals have been convicted, on the basis of eyewitness identification, of crimes they did not commit. Whilst advances in forensic techniques, such as DNA testing, can help overturn some such miscarriages of justice, where forensic evidence isn’t available there remains a tendency for jurors to assume that a confident eyewitness provides reliable evidence (Smith et al, 1989). So how should eyewitness evidence be regarded by jurors? Many researchers would argue that it should be regarded with caution. This is because although police interviewing strategies, used in the UK at least, can help to focus a witness’s account of an event before their evidence is presented in court, the reliance on the witness’s memory remains. An eyewitness to a crime may feel very confident that their recollection of the event represents clearly what happened, regardless of the speed or traumatic nature of the event. They may also feel highly motivated to help the police secure a prosecution.

The desire of a witness to help the police and the confidence with which they present their account of events represent different problems for assessing eyewitness evidence. The police can control so-called system variables – the methods and procedures employed to gather information from a witness, such as the cognitive interview and identification line up procedures – yet they cannot control factors such as the impact of stress, anxiety, distance from the perpetrator or the visual conditions at the time of witnessing the crime – the estimator variables (Wells, 1978). This type of ‘noise’ can affect how eyewitnesses process the crime as well as how they later recall the events. For example, an individual who witnesses a street robbery in a dark alley as they walk home from work may experience anxiety at being physically close to the attacker and fear for their own safety when the assailant sees them phoning the police. In these conditions, they may struggle to provide a clear description of the attacker or the sequence of events. Whilst poor lighting could explain the inability to describe the perpetrator’s face, the effect of fear, anxiety and distraction may be less obvious. Afterwards, the witness may feel able to provide an accurate description of the sequence of events and the physique of the attacker, which could potentially help police with their investigation, yet the emotion and distraction they experienced may well have altered their perception of the scene (Easterbrook, 1959). All of these factors, which are outside of the police’s control, are estimator variables.


A great deal of research has been conducted on memory for, and recall of, witnessed crimes, demonstrating the constructive nature of memory (Garry et al., 1996), the effect of leading questions (Kassin and Gudjonsson, 2004) and co-witness effects on accounts (Gabbert et al., 2003), as well as the suggestibility of vulnerable witnesses (Henry and Gudjonsson, 2007). This type of research has proven beneficial in controlling some of the system variables which could affect the evidence provided by eyewitnesses. However, estimator variables are harder to pin down. Research on visual attention helps to highlight how what we perceive may differ from what is actually presented to us, and that simply looking at a scene doesn’t necessarily equate to seeing all aspects of that scene (Mack and Rock, 1997). The phenomena of change blindness (the failure to detect changes in a scene, even when attention is paid to it) and inattentional blindness (the failure to notice something in a scene because attention is allocated elsewhere, or is overloaded) both demonstrate how a witness’s perception of an event may be affected, potentially leading to incorrect identifications.

Classic change blindness investigations (e.g. Simons and Levin, 1998) have shown that people can fail to notice a change in the identity of a person they are talking to if their contact with that person is briefly interrupted (e.g. a person ducks behind a counter and a different person then stands back up). If people fail to notice a change in the identity of someone they are talking to, how good are they at noticing changes from the perspective of an onlooker? Fitzgerald et al. (2014) showed participants a film of an innocent man entering a building. A theft then occurred, and a different (guilty) man left the building. 64% of participants who viewed the film failed to detect the change in identity. They were then shown a six-person line up which was either target absent (containing the innocent man and five fillers) or target present (containing the guilty man and five fillers), and asked to make an identification and rate their confidence in it. The 36% of participants who detected the change reported higher confidence in their identification accuracy than those who demonstrated change blindness, despite their accuracy not being significantly greater: those who demonstrated change blindness were more likely to misidentify an innocent filler in the line up (44%) than those who had detected the change (21%), but both groups showed similar rates of correct identifications in the target present line up.

In a similar study, Smart et al. (2014) questioned whether expertise could affect change blindness susceptibility and identification accuracy. They showed groups of students and law enforcement officers a video of a staged traffic event, involving a speeding car being stopped by a police officer, during which the identity and clothing of the driver changed. Participants were asked whether they had detected a change in identity, a change in clothes, or both, before viewing a line up containing six people. There were four different versions of the line up: two where the target was absent (i.e. containing only innocent fillers) and two where he was present (one containing driver 1, the other driver 2). Results revealed that students and law enforcement officers showed similar levels of change blindness (37.5% and 50.8%, respectively), but that students were significantly more likely than police officers to detect both the change in identity and the change in clothing (22.5% and 1.6%, respectively). Indeed, only 3.3% of police officers detected the change in clothes, compared to 40% of students. Moreover, whilst those who detected the change in identity made more correct identifications than those who did not, police officers were less accurate on the identification task than students and were more likely to select an innocent person from a target absent line up. Smart et al. claim that their findings demonstrate that everyone is prone to change blindness, and that the ‘attentional set’ an individual applies to a scene may affect which changes to that scene they detect (e.g. officers were attending more to the driver’s behaviour than to his clothing).

If the attentional set, goals and level of arousal of a viewer can affect what they see when witnessing a crime, it is easy to understand how two witnesses can provide different accounts of the same event. When witnessing a crime, an eyewitness isn’t just a passive surveyor of the events – they bring with them their own goals, past experiences and current concerns – all of which can affect what they look at, process, perceive and remember of the event. Given these factors, along with research into memory and identifications, it is crucially important that eyewitness accounts are treated with caution both by the police and jurors at trial. Educating and informing these groups of the potential pitfalls of eyewitness testimony at both the time of witnessing a crime and at police interview can only be a positive thing.

Fixed Odds Betting Terminals: the psychology of state-corporate harm maximisation

Steve Tombs and Jim Turner

International Centre for Comparative Criminological Research

The Open University

High street slot machine gambling – especially the mushrooming of Fixed Odds Betting Terminals (FOBTs) – has entered the political debate in the UK. FOBTs are a type of gambling machine with a set (‘fixed’) average level of payout (‘odds’). For example, a FOBT with fixed odds of 70% means that someone who puts in £10 to play would generally get £7 back, although this is an average: in practice, some players will get back more than this, others less. In recent years FOBTs have become common on British high streets, allowing very large amounts of money to be gambled in a short amount of time.
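The ‘fixed odds’ arithmetic can be made concrete with a short simulation: a machine with a 70% average payout returns roughly £7 for every £10 staked over the long run, however lucky or unlucky individual sessions are. A minimal sketch (the pay table here is invented for illustration; real machines use more elaborate ones):

```python
import random

def play_session(stake_per_spin, spins, rng):
    """Simulate spins on a hypothetical machine with a 70% average payout:
    each spin wins 3.5x the stake with probability 0.2, else nothing.
    Expected return per spin = 0.2 * 3.5 = 0.7 (i.e. 70%)."""
    returned = 0.0
    for _ in range(spins):
        if rng.random() < 0.2:
            returned += stake_per_spin * 3.5
    return returned

rng = random.Random(42)  # fixed seed so the run is reproducible
staked = 1.0 * 100_000
returned = play_session(1.0, 100_000, rng)
print(f"staked £{staked:.0f}, returned £{returned:.0f} "
      f"({returned / staked:.1%} payout)")
```

Over many spins the ratio converges on 70%: whatever the short-run streaks look like, the player loses about £3 of every £10 staked.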

In this context, it is worth taking a salutary glance to the other side of the world – to Australia, sometimes referred to as the ‘gambling capital of the world’. Central to this perhaps unwanted status is the phenomenon of ‘pokies’ – “a high-stakes, high-intensity” gambling machine which “has become ubiquitous in pubs and community clubs”. According to one recent analysis, “Australians lose more money gambling per person than any other nation. In 2011-12 this amounted to the equivalent of more than £650 per adult”.

Such per capita calculations obscure the real cost of class-targeted forms of gambling within that global figure. Frankston – in fact the second largest city in the state of Victoria, but in effect a suburb of Melbourne – is desperately poor, beset by a range of economic and social problems. It is characterised by lower levels of income, a lower rate of education across all age ranges, higher levels of unemployment and youth disengagement, and poorer averages on every indicator of ‘health’ and ‘personal safety’ when compared to the Melbourne metropolitan or state averages. It also has a higher rate of per capita gambling losses than the Victorian average. At the top of the walkway from the platforms of Frankston train station is a rather stunning visual: a more or less constantly displayed poster warning, in stark white lettering on a black background, that POKER MACHINES HARM FRANKSTON. $62,225,277 LOST LAST YEAR ALONE.


How, then, does the state seek to mitigate the harms caused by such machines to already disadvantaged communities? In Victoria, the Victorian Commission for Gambling and Liquor Regulation organises its regulatory approach around three commitments:

  • achieving high levels of voluntary compliance with gambling laws by setting clear expectations, encouraging the right behaviour and taking strong enforcement action where required
  • constraining the regulatory costs and restrictions imposed on the gambling industries to what is necessary to achieve regulatory objectives
  • upholding a culture of integrity and harm minimisation in the gambling industries.

This illustrates a preference by the state for self-regulation: regulatory costs for, and burdens upon, industry are to be minimised, and the object is to maintain safer cultures. A walk around many Australian bars, replete with gambling machines, reveals the nature of such self-regulatory efforts. “Take Control Of Your Gambling”, “Don’t Chase Your Losses: walk away”, “Stay in Control”, “Set Yourself a Limit and Do Not exceed it” and “In the End the Machines will Win”, reads the signage aimed at the hapless punter. Such exhortations are thoroughly undermined by the psychology that is wired into the very design of these machines. This makes the regulatory commitment to uphold “a culture of integrity and harm minimisation in the gambling industries” somewhat disingenuous.

Though it might not be obvious from the common media focus, only around 1% of people meet the diagnostic criteria for gambling addiction. We cannot, then, explain the level of gambling in Frankston and, increasingly, in some of the poorest boroughs, towns and cities across the UK, as a pathological type of behaviour exhibited by a small percentage of the population. We might more usefully learn some lessons from the psychology of behaviourism, which explores how people (and other animal species) respond to, and learn through, rewards and punishments. Rewards are specifically relevant to gambling machines as their design often draws directly on behaviourist psychology. Whilst they apply to all learning species, including humans, many behaviourist principles were first discovered in experiments on non-human animals. One classic animal study illustrates the problem with gambling machines from a behaviourist point of view.

In the most basic design of experiment, the animal is placed in an enclosure that has within it a button and a food dispenser. Whilst exploring, at some point the animal will make contact with the button and a food pellet will be released from the dispenser. The animal finds the food pellet rewarding and quickly learns to associate pressing the button with receiving a reward. Because it is usually not fed before the experiment starts, the animal will spend a lot of time pressing the button until it is no longer hungry. Transferring this lesson to FOBTs, playing is the equivalent of the animal pressing the button and getting a win is the equivalent of the food pellet reward.

Now, what happens when pressing the button doesn’t always make the food dispenser give out a food pellet? Say, for example, the food dispenser only gives out a pellet for every third press of the button: does the animal give up pressing the button, because it’s usually not rewarding? No, in fact the animal keeps on tapping that button: because it needs to press it three times as often to get the same reward, that’s what it does. You’ve probably already worked out the link to FOBTs: they don’t give out a reward every time (if they did they would just be change machines), but the fact that they don’t is one of the things that keeps people playing.

Back to our animal experiment. What if, instead of giving out a food pellet every three presses, the food dispenser is set up to give out food pellets randomly? Does the animal, not ‘knowing’ whether or not pressing the button will get it a reward, give up now? Again, no. This schedule actually produces the highest rate of button pressing of all: because it cannot predict which presses will or will not be rewarded, the animal will press the button over and over and over again, periodically getting a food pellet reward which keeps it going. You can probably also see how this relates to FOBTs: the randomness of which plays get rewarded and which don’t is one of the factors that keeps the player going – the next play might be the one that ‘wins’. Psychologists have known about these principles since the classic work of B.F. Skinner with pigeons, which identified the role of random rewards in explaining how gambling ‘works’.

There’s one more twist. With the animal in our experiment, there are two things that will stop it pressing the button: (1) if there is a long enough sequence with no reward then, yes, eventually the animal will give up; (2) if the animal gets rewarded too often then it won’t be hungry anymore, so will no longer be motivated to seek the food reward. In a truly random set-up either could happen (although they would be fairly unlikely to happen very often). If you wanted to keep the animal pressing the button as much as possible you would put some limitations on the randomness, so that it never went too long without a reward but also never got so much of a reward that it lost the motivation to continue. The designers of FOBTs know this, so the machines are set up not to go too long without giving a ‘winning’ play. They are also, obviously, set up not to pay out more than they take in (FOBTs exist to make a profit, after all), so players will rarely reach a point where they are no longer ‘hungry’ for a ‘win’.
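The design logic just described – wins that feel random, but never too long a drought – can be sketched as a constrained random schedule. The parameters below are invented for illustration; real machine algorithms are proprietary:

```python
import random

def constrained_schedule(plays, win_prob=0.25, max_drought=8, rng=None):
    """Generate win/lose outcomes that look random to the player but obey
    one design constraint: never more than `max_drought` consecutive
    losses, so the player is never starved of reward for long."""
    rng = rng or random.Random()
    outcomes = []
    losses_in_a_row = 0
    for _ in range(plays):
        if losses_in_a_row >= max_drought:
            win = True                      # forced win: end the drought
        else:
            win = rng.random() < win_prob   # otherwise, genuinely random
        outcomes.append(win)
        losses_in_a_row = 0 if win else losses_in_a_row + 1
    return outcomes

rng = random.Random(7)
outcomes = constrained_schedule(10_000, rng=rng)
# Longest run of consecutive losses across the whole session.
longest_drought = max(
    len(run) for run in "".join("W" if w else "L" for w in outcomes).split("W")
)
print(f"win rate: {sum(outcomes) / len(outcomes):.1%}, "
      f"longest losing streak: {longest_drought}")
```

From the player's side the outcomes are indistinguishable from a purely random schedule, yet the machine guarantees that motivation is never allowed to lapse – the payout side (never returning more than is taken in) is enforced by the pay table in the same way.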

Overall, then, behaviourist psychology demonstrates how FOBTs are designed to maximise the amount that people play. If gambling is ‘harm’, then FOBTs are technologies of harm maximisation. This hardly squares with the regulatory gloss about a “culture of integrity and harm minimisation in the gambling industries”.

FOBTs – the so-called ‘crack cocaine’ of high street gambling – have recently become a matter of formal political debate in the UK. Out of this debate came the Gambling Protection and Controls measures of April 2014, which most notably required anyone wanting to bet more than £50 in cash at a time on such machines to inform shop staff – rather than placing any maximum limit on spending. More recently, some UK councils have proposed a maximum individual stake for these machines. The Association of British Bookmakers inevitably claimed that the law would “restrict growth for the sector and mean hundreds of shops and thousands of jobs are now at risk”, even as others argued that ‘regulation’ was best left to the markets. An example of industry self-regulation in the UK can be seen in the work of the Senet Group, an ‘independent’ body with the ostensible aim of ‘promoting responsible gambling standards’, which was set up by the bookmakers William Hill, Paddy Power, Ladbrokes and Coral. In early 2015 the Senet Group, along with Gambleaware, launched a campaign with the strapline “When the fun stops, stop”. An example image from the campaign is shown below: notice how the word “fun” is presented much larger, and in a more eye-catching design, than the word “stop”. What message is this advert really sending about gambling?

Gambleaware “When the fun stops, stop” campaign advert

Again, there is much to learn via lessons from Australia. As anti-gambling campaigner Paul Bendat says, there the industry and its political allies have consistently used a series of discursive techniques to pre-empt effective regulation, so that the “harm to the disadvantaged” can proceed and accelerate. This strategy, resonant of those deployed by, for example, the tobacco and alcohol industries, denies that FOBTs are responsible for harm and deflects attention from the machine to the individual, claiming to defend individual freedoms and calling for voluntary, ‘responsible’ codes while citing potential employment losses as a risk of tighter regulation.

Viewed in the light of the psychology of FOBTs, the dangers of such claims, and of their logic of self-regulation and appeals to cultures of harm minimisation, are clear. Following the development in the 1990s, by US criminologists, of the term state-corporate crime, we might think of the failure to regulate FOBTs effectively as ‘state-corporate harm’ – harm generated by private companies and facilitated by states. The dominant preference for self-regulation is probably best explained by the convergence of corporate and governmental interests that benefit from it: an enormously profitable industry that at the same time generates considerable tax revenues for government. Meanwhile, state and capital benefit by extracting revenue from populations who are already economically, socially and politically marginalised.

Why do people confess to crimes they haven’t committed?

Catriona Havard¹ and Kim Drake²

International Centre for Comparative Criminological Research

The Open University

January 2015

¹Lecturer in Psychology, The Open University; ²Chartered Psychologist, University of West London

When it comes to being accused of something we haven’t done, most people may assume that they would readily say “it wasn’t me”. On that basis, it seems logical that if a person does confess to a crime, this confession should be taken at face value; why would someone say something that wasn’t in his or her best interests?

Reasons for falsely confessing can include: wanting to escape custody; protecting someone else, such as a peer, a friend or family member; and/or some perceived positive instrumental gain, such as a lower sentence (Gudjonsson, Sigurdsson & Sigfusdottir, 2009). This may be especially the case in the US, where the plea-bargaining procedure is more common and encourages defendants to plead guilty in exchange for a lesser sentence. There is some evidence that US defendants may plead guilty to very serious charges, such as murder, even though they may be innocent, to avoid harsh sentences such as the death penalty (Leo & Ofshe, 2001).

The police interview itself can also be a risk factor: during the 1980s and 90s, in the UK, several appeal cases saw convictions overturned (a total of 27 murder cases, four terrorist cases – including the Birmingham Six and Guildford Four – one attempted murder, one conspiracy to rob and a sexual assault case). In all of these cases, police misconduct was found to be present, such as the use of undue pressure, fabricating or suppressing evidence, and failure to provide an appropriate adult and/or comply with legal rights (see Gudjonsson, 2010).

In the summer of 2014, Chris Meissner and colleagues published a systematic review of all the available research comparing accusatorial/coercive and information-gathering methods in terms of their likelihood of producing false confessions. Their findings showed that coercive (accusatorial) methods, such as the Reid Technique, are more likely to elicit false confessions, whilst the information-gathering PEACE model of investigative interviewing is more likely to uncover reliable information and confessions. The primary aim of accusatorial interrogation methods is to achieve a confession from the suspect through the use of coercive tactics, such as (to name a couple) disclosing misleading evidence to the suspect during the interview, or the use of minimisation, where the interviewer shows understanding and plays down the offence, suggesting reasons why the suspect had no choice but to do what they did. Slowly, over time, the suspect begins to internalise the interviewer’s suggestions and doubt their own memory, coming to think “I guess I must have done it”, before going on to sign a full written confession. Interrogation methods like this were more common in the UK in the 1970s and 80s, and are still used primarily in the United States, Canada and many Asian countries.

Fortunately, within England and Wales, research conducted in the late 90s showed that, although police would still resort to intimidation and manipulative tactics to overcome resistance in suspects during interview, the courts were starting to rule evidence from such interviews inadmissible (Pearse & Gudjonsson, 1999) – so progress was beginning to be made by the end of the 20th century. Since that time, the PEACE model of investigative interviewing has become standard practice within the UK and seems to lead to fuller, more accurate accounts.

A possible limitation of the Meissner et al. (2014) study, however, is that it doesn’t acknowledge that some people may be more psychologically vulnerable (have a tendency towards being highly suggestible or compliant) and more at risk of giving a false confession during a police interview (see Drake, Gudjonsson, Sigfusdottir & Sigurdsson, 2014). In the cases of wrongful conviction that were overturned in the late 1980s and 90s, not only was police misconduct found to be present, but suspect psychological vulnerability in the form of heightened suggestibility and compliance was also identified. Both interrogative suggestibility (the tendency to accept misleading information and change answers in response to interviewer pressure) and compliance (simply going along with the interviewer for some perceived gain, i.e. suspects may believe they will get to go home early if they confess, or that their innocence will come out sooner or later) are serious psychological vulnerabilities – and it was the combination of the suspects being highly suggestible and compliant, as well as police misconduct, that resulted in the false confessions (Gudjonsson, 2010).

The Innocence Project was set up in the US to take on cases of wrongful conviction, and tries to exonerate those held in custody by using DNA evidence. According to the Innocence Project, the psychological state of the suspect is a risk factor when it comes to false confessions: a proportion of detainees/suspects have been found to have been diagnosed with learning disabilities, personality or mood disorders, such as borderline personality disorder or attention deficit hyperactivity disorder (ADHD), which can increase the suspect’s likelihood of falsely confessing as a result of heightened anxiety levels and/or attention-deficit issues and memory problems.

There is also evidence showing that suspects who have not been diagnosed with any personality or mood disorder or learning disability can also be vulnerable during police interview. These general population vulnerable suspects are at greatest risk of remaining unidentified, and thus unprotected, because they display no overt signs of being psychologically vulnerable and have not received any form of clinical diagnosis that might flag up to police that the suspect is vulnerable and should therefore be given appropriate adult support (see Young et al., 2013). Recent research has found an association between the reporting of negative life events and both interrogative suggestibility and reported false confessions (see Drake, 2011 for a review). In these studies, the negative life events consisted of whether or not a person had been a victim of bullying, witnessed family conflict, experienced physical abuse or parental divorce, and/or suffered a serious illness themselves or within their family. Higher scores on the negative life event scale seemed to make general population individuals more sensitive to pressure during police questioning, and thus less able to cope with the interview, which may lead to an increased risk of the suspect accepting misleading information. Gudjonsson et al. (2009) also reported that, out of their sample of 11,388 further education students in Iceland, 2,726 individuals had been questioned by police; of these, 375 stated they had falsely confessed to police, and being a victim of bullying, having committed a burglary and a history of substance abuse significantly predicted their false confessions.

However, not all individuals who report having experienced adverse life events such as those detailed above are at an increased risk of suggestibility or compliance (and therefore false confessions). The suspect’s level of trait stress-sensitivity, which can manifest as anxiety, fearfulness and nervousness, may also be important. Drake et al. (2014) investigated the role of stress-sensitivity, as reported by levels of nervousness, fear and tension over the past 30 days, in reported false confessions. Out of 2,104 further education students in Iceland who reported having been interrogated by police, 320 reported making false confessions. Those scoring high in stress-sensitivity were found to be at a greater risk of false confession, as they are more susceptible to environmental influences. These individuals thrive the most under positive, supportive influences; however, they also suffer the most under adversity, with false confessions being a direct consequence. Stress-sensitive suspects are simply more susceptible to their environment, and more negatively affected by adversity, because they experience greater levels of physiological arousal in the face of situations perceived as adverse, including the police interview. The police interview is a novel situation, involving an interviewer asking the suspect questions, and the suspect can experience isolation (see Gudjonsson et al., 2014). A suspect who is less able to cope with being questioned may end up being more likely to falsely confess. In light of this evidence, it is possible that the experience of negative life events may only be relevant if the suspect being interviewed is predisposed to be sensitive to those negative influences. It may therefore be a person’s sensitivity to stress, rather than negative life events or interview pressure, that is the greater indicator of the likelihood of false confession.

In the past, once a false confession had been given, it was very hard to subsequently convince the judiciary of a suspect’s innocence, especially before the advent, in the late 1980s, of DNA evidence. This was particularly the case if it appeared the suspect had ‘specialist’ knowledge of the event in question, even though this specialist knowledge often came from leading questions within the police interview(s). A lot of progress has been made since the 80s and 90s, within the UK and also in the US, in terms of our understanding of what causes false confessions and how they might be minimised (the difference being that, in the US, the research findings are still not always taken on board by police, and so police interviewing there has not made as much progress as it has here). This is not to say that false confessions no longer occur within the UK; indeed, Pearse and Gudjonsson (2011) noted that the PEACE model of suspect interviewing does produce confessions, so future research still needs to ascertain whether the PEACE model actually delivers on its aim – to achieve best evidence.

Decriminalising drug use: when will the government acknowledge the harm our current laws cause?

Abigail Rowe

International Centre for Comparative Criminological Research

The Open University

December 2014

At the end of October, amid a flurry of controversy, the Home Office published the findings of an international comparison of the policies adopted by thirteen countries to tackle drug misuse and dependency. The study, which has been widely hailed by commentators as demonstrating that a ‘criminal justice approach’ to drugs is ineffective, was a concession to the Liberal Democrats, who had promised a liberalisation of drugs legislation in their 2010 manifesto and have since been arguing for a Royal Commission on drugs. Several days later, however, Liberal Democrat Minister Norman Baker resigned from his post in the Home Office, alleging that Home Secretary Theresa May had blocked publication of the report for several months, and describing working with her as ‘a constant battle’. The controversy within the Home Office over the report has brought the contrasting attitudes of Liberal Democrats and Conservatives around drugs policy into wider public consciousness, but has also highlighted the uneasy relationship between politics and criminal justice policy.

Although the published study contains no overarching conclusions, the evidence it presents demonstrates that there is no consistent relationship between the severity of a country’s drugs laws and the prevalence of drug use, addiction and associated harms among its population. A comparison between the approaches of Portugal and the Czech Republic exemplifies this. While both have effectively decriminalised the possession of a small amount of any drug for personal use, Portugal has seen improved health outcomes and falls in levels of drug use and drug-related deaths, while in the Czech Republic following decriminalisation, rates of marijuana use remain among the highest in Europe, health outcomes have worsened and drug-related deaths have increased. Sweden, on the other hand, which takes a stringent criminal justice approach to psychoactive substances, has relatively low levels of drug use, although not significantly lower than in some other countries taking different approaches. The Home Office study, then, quite clearly demonstrates that the severity of the sanctions in place for drug possession and use doesn’t determine whether or not a country will have a drug problem. That is, whatever the strengths or weaknesses a country’s drug strategy may have in managing the risks and potential harms of drug use, whether or not the trade in and/or use of psychoactive substances are criminalised is clearly not key to managing the problem.

Despite this, the Foreword to the report – authored, as is usual in government documents, by the politicians who commissioned it rather than researchers who conducted the study – introduces the findings with the assertion that what they primarily show is that different policies work in different contexts and wholesale policy transfer is clearly impossible. More than this, it claims that, read in the context of long-term declining drug use in the UK, the study demonstrates that the Government’s ‘balanced and evidence-based drugs policy’ is sound. This indicates a clear resistance to the opening of a debate around reform of the drugs laws.

[Figure: Drug use trends – Home Office 2014]

Source: Home Office (2014). Despite the government’s claim that a long-term decline in drug use indicates that government policy is working, most of the decline is accounted for by a fall in cannabis use, while the use of Class A drugs has been stable for two decades.

Theresa May has offered little comment on either the report or Baker’s noisy resignation. As the story began to gain momentum in the press, however, the Prime Minister intervened with a clear dismissal of the possibility of any relaxation in the drugs laws. He offered little engagement with the evidence presented in the report, but fell back on clichés of ‘common sense’ and the moral claims of parenthood: the criminalisation of drug use would remain in place because ‘as a father of three children’ he did not want to ‘send a message that somehow taking these drugs is okay and safe’, and because decriminalisation would ‘add to the danger’ posed by psychoactive substances – this last point despite the clear evidence of the report that the ‘message’ sent by the law is not a significant factor in determining the prevalence of drug use. Furthermore, not only would the policy of criminalisation not be relaxed, it would be extended to cover substances currently known as ‘legal highs’.

Cameron’s argument rests on the value of the drugs laws as symbolic, and – supposedly – deterrent. This focus on the immediate harms and risks associated with the consumption of drugs neglects the myriad other, state-sponsored, harms generated by criminalisation, which range from the violence and instability caused by the illegal multi-billion dollar international drugs trade, to the harms to individuals and communities that come with the imposition of criminal justice sanctions on users. This narrow conception of drug-related harm, and the resistance to an evidence-led approach to drugs policy by UK politicians, has a history spanning successive governments of different political stripes. In 2000, for example, the Blair government greeted the recommendations of the Runciman Commission – to downgrade the classification of ecstasy and cannabis, and to treat possession of the latter as a minor civil offence – with panic, conceding only the downgrading of cannabis from Class B to Class C when it became evident that The Daily Mail had received the report with approval rather than the expected outrage. Eight years later, however, and against scientific advice, Gordon Brown’s administration reversed that decision and sacked senior drugs advisor David Nutt for criticising the move as being without foundation in evidence. For Guardian commentator Simon Jenkins, who was a member of the Runciman Commission, government resistance to an evidence-led approach to drug use represents a failure of drugs politics rather than drugs policy.

A continuation of current policy means a reaffirmation of the government’s commitment to criminal justice sanctions for those convicted of dealing in, or possessing, illegal drugs. This group accounts for a substantial proportion of the prison population: 2013 Ministry of Justice figures show that 14% of men and 15% of women in prison were serving sentences for drug offences. This, however, masks the much larger number of convicted prisoners whose convictions were indirectly drugs-related (i.e. not for possession or supply of drugs themselves): in the same year, 66% of female and 38% of male prisoners reported that the offence for which they had been sentenced was committed to fund their own or someone else’s drug use. Not only does our current criminal justice paradigm draw very large numbers into our prisons, it also serves those with problems of substance abuse, and their communities, very poorly. Illegal drugs are disproportionately used by minority, marginal and disadvantaged groups – young people, those from ethnically mixed backgrounds, gay and bisexual men and women, and people living in areas of relative deprivation. Meanwhile, although members of Black and Minority Ethnic groups use drugs at a lower rate than the population as a whole, they are more often stopped and searched under drugs laws, and receive more severe sanctions for drugs possession offences when convicted.

Not only is the criminal justice approach to the use of psychoactive substances fundamentally problematic, the government’s reaffirmation of its commitment to maintain the status quo in drugs policy comes at a time of mounting strain in the criminal justice system. At close to 86,000, the prison population in England and Wales is not far from its historic high, and in a context of budget cuts, reduced staffing levels and prison closures, the consequence of this is overcrowding and reduced levels of purposeful activity for prisoners. The prison population is being held in facilities not designed to accommodate their numbers, and concentrated into larger and larger prisons, which are known to be less safe. These conditions cannot ensure basic safety, much less deliver the government’s promised ‘rehabilitation revolution’. The number of assaults recorded against both staff and prisoners has increased and, most worrying of all, suicides among prisoners have risen sharply over the last year: clear indicators of a prison system in crisis. The Chief Inspector of Prisons, Nick Hardwick, has issued a clear warning: either the prison population must fall or prisons must be better funded. The Home Office report demonstrates clearly that prohibition is not a meaningful deterrent to drug use. Neither is prison an effective place to deal with problems of addiction: drug use (including first use of heroin) among prisoners is high while the current trend towards larger institutions exacerbates these risks, as prisoners in large establishments are more likely to know how to get hold of drugs, but less likely to know where to get help with problems of addiction.

Over the last month, the Coalition row over the government response to the Home Office report, and the crisis of rising suicides, overcrowding and violence in our prison system, have sat cheek-by-jowl in the headlines. We have a bloated prison system that – while public and welfare services are being cut elsewhere – we can ill afford, and into which we channel thousands of men and women each year, disproportionately from marginal and disadvantaged sections of society, where their social disadvantage is compounded and their physical and psychological well-being profoundly threatened. While there is no denying the harms generated by substance addiction, all a criminal justice approach can offer us is what Willem de Haan has described as ‘spiralling cycles of harm’. We know better than this, and it is in all our interests to start doing better.