MRI
Neuroscience is making advances in mapping the brain. In doing so, it calls into question our fundamental beliefs about our autonomy within the law. To date, however, it has not succeeded in undermining the principles of justice that have underpinned Western legal systems and international treaties for centuries. This essay examines what difference, if any, neuroscience makes to the law.

Neuroscience is a recent advance in the cognitive sciences, which until a few decades ago relied mostly upon behavioural studies. Corporations involved in neuro-technologies now claim to be able to detect deception accurately and to assess whether people have tendencies towards criminality. To examine whether neuroscience in this capacity will make a difference to the law, this essay considers the history of the cognitive sciences in the law, the argument of Greene and Cohen (2004) that neuroscience makes no difference to the law, the philosophical and ethical issues that are fundamental to society and the law, and the claims and criticisms made about neuro-technologies such as functional magnetic resonance imaging (fMRI). The analysis concludes that, for the foreseeable future, neuroscience will make little difference to the law.

To assess whether neuroscience will make a difference to the law, it is necessary to look at the contributions the cognitive sciences have already made and whether they have helped to ascertain people's responsibility under the law. The justice system relies upon defining the intentions of the defendant in order to judge whether they are guilty of breaching the law. To achieve this aim, the cognitive sciences, such as psychology and psychiatry, are often called upon for assistance. Eigen (2003, p.x) suggests that in the mid-nineteenth century it was not psychiatrists or legal professionals who identified the difference between insanity and the anomalous behaviour of unconsciousness, such as acts done whilst sleepwalking or from some other automatic reflex; it was the jury. It then became necessary to have scientists ascertain a person's culpability where there was a question of mental disability, rather than leave this assessment to a non-professional jury.

However, Eigen (2003, p.5) contends that during the nineteenth century, as more novel diagnoses appeared before the courts, expert medical witnesses were at risk of twisting courtroom evidence and framing it within their own contexts. This threatened to displace the function of the jury and marked a critical point between the law and the emerging specialties and technologies of cognitive science. According to Eigen (2003, p.6), there was growing judicial anxiety about insanity acquittals because of the increasing number of diagnoses of different derangements brought before the courts to explain a person's lack of accountability and moral agency. This is much like the contemporary dilemma posed by the new neuro-technologies and techniques that confront the law today in making assessments about people's responsibility. The question that arises is how much neuroscience should be included among the tools of the law if the aims of justice are to be achieved.

Greene and Cohen (2004, p.1775) argue that neuroscience's transformative effect on the law will come about by changing people's understanding of the notion of 'free will'. Free will is a problem because of our modern concept of the physical universe. They quote (p.1777) Peter van Inwagen: "Determinism is true if the world is such that its current state is completely determined by i) the laws of physics, and ii) past states of the world." If everything is thus predetermined by physics, then the idea of free will is an illusion. However, although most philosophers and legal theorists accept determinism, many also find it compatible with free will. According to Greene and Cohen (p.1777), compatibilists claim that free will is a persistent notion that is undeniable and that it is up to science to establish how it works. Greene and Cohen (p.1778) state that the standard legal account of punishment is compatibilist in order to allow for retribution. For Greene and Cohen (p.1779), neuroscience will not change the law because the law assumes a level of minimal rationality in people's behaviour, rather than notions of free will. They go on to state that if neuroscience supports minimal rationality then there is no reason to think that it poses a threat to the determination of criminal responsibility (p.1779). Although new syndromes are announced as excuses for criminal behaviour, they will only have validity if they undermine one's rationality in a significant way. Greene and Cohen (p.1780) argue that neuroscience can be helpful here by correlating behaviour with rationality and by helping people understand the mechanical nature of human action. Neuroscience promises to show the 'when', 'where' and 'how' of the mechanical process, so as to assess whether someone truly deserves to be punished or is just a victim of their neuronal circumstances (p.1780). However, this type of technology may have profound impacts upon the ethical concepts that humans have formed over time in their societies, especially those that pertain to the autonomy of the individual.

Philosophical and ethical thinking can help to align the law with the sciences by providing the tools with which to develop theories of responsibility and to assess the ethics of new technologies (Tovino, 2007). Through such studies, legislative, regulatory and judicial bodies can correlate legal processes with technological processes in an ethical manner, in particular when the study of functional magnetic resonance imaging (fMRI) is combined with philosophical ethics (2007, p.44). This technology localizes changes in blood oxygenation in the brain and is used in neuroscience to map sensory, motor and cognitive function, as well as physical and mental health conditions, behaviours and characteristics (p.44). The legal issues raised by fMRI extend beyond the patient-physician relationship to confidentiality, privacy and research ethics (p.44).

Some have described fMRI as too reliant upon interpretation to be reliable as evidence (Bizzi et al., 2009). As Tovino (2007, p.47) notes, 'Sometimes the difference between seeing higher activity in the parietal lobe compared to the occipital lobe is akin to deciding whether Van Gogh or Matisse is the more colourful artist'. Tovino (p.47) also quotes Donaldson: 'What constitutes a "significantly greater" activation is in a way in the eye of the beholder'. With commercial fMRI companies claiming accuracy of up to 99% and marketing uses that include risk reduction in dating, insurance verification and employee screening, privacy and confidentiality also become issues, especially if these claims are misleading (p.47). Tovino (p.48) quotes Greely and the U.S. Committee on Science and Law in stating that advances in fMRI threaten 'to invade the last inviolate area of "self"'; such concerns have been termed 'neuroprivacy' issues. The questions that Tovino therefore poses are: Is it deceptive to say that an fMRI test is objective, fully automated and infallible (p.47)? And will future fMRI tests require heightened confidentiality and privacy protections (p.48)?

These are important questions because of the rights to freedom expressed in Western legal systems and international treaties. In Stanley v Georgia, the seminal 'privacy of thought' case, the U.S. Supreme Court stated that 'also fundamental is the right to be free, except in very limited circumstances, from unwanted governmental intrusions into one's privacy' (Glenn, 2005, p.61). That Court also stated in Lawrence v Texas: 'Liberty presumes an autonomy of self that includes freedom of thought, expression and certain intimate conduct' (2005, p.61). A fundamental principle of democracy is the accusatory system of criminal justice, which demands that the government, in seeking to establish the guilt of an individual, produce evidence against that person by its own independent labours rather than by compelling it from the accused's own mouth (Miranda v Arizona, 1966 at 460) (Tovino, 2007, p.50).

However, one objection to excluding fMRI on these self-incrimination grounds is that DNA, blood tests, mental examinations, urinalysis and fingerprints are all forms of evidence admissible in courts today, so why not fMRI (Tovino, 2007, p.51)? Counter-questions could include: Does this address the implications of seizing an individual's 'privacy of thought'? Is fMRI reliable and accurate in identifying or diagnosing physical and mental conditions, behaviours or characteristics? Are such tests as effective as DNA or blood alcohol tests, or are there more effective methods of identifying a target condition? Also, who would be the authority permitted to gather such data from a brain scan, and what precautions and protocols should be followed (p.51)? Although neuro-imaging has been effective in showing courts the diminished responsibility of adolescents on death row (p.52), and in discovering brain tumours that may affect responsibility (Burns, 2003, p.48), many lawyers still argue that data gathered from fMRI should not be admissible as legal evidence (Tovino, 2007, p.53).

For some philosophers, the citation of neuro-technologies such as fMRI as evidence in law is problematic. Fine (2010, p.281) states that the problem with advances in the neurosciences is that 'we still have minimal understanding of how neural structures contribute to complex psychological phenomena'. The complexity of brain structure makes it difficult to attribute behavioural conditions or characteristics to it. Statistics and data gathered from procedures involving neuro-technologies may be inadequate or inappropriate (p.281), especially as a basis for convicting someone. Too many assumptions must be made about a structure that is extremely complex and massively interconnected before inferring a psychological construct that could lead to an individual's imprisonment.

Fine (2010, p.281) contends that inferring a mental process from significant oxygenation of a particular area of the brain is a reverse inference, fraught with too many difficulties to allow specific functions to be attributed to particular brain regions. For Fine (p.281), the entire brain may not be involved in a particular function and 'there is no one-to-one mapping between brain regions and psychological processes'. Cognition arises through complex interaction of brain areas, with any single region being involved in a number of processes (p.281). This makes it unclear how much psychological meaning can be derived from the level of activity in particular regions of the brain. In addition, data acquisition from fMRI is slow, which limits the psychological interpretations that can be inferred from brain events (p.282). Weisberg et al. (2008, p.20) state that neuroscience has an appeal that relies upon assumptions of infallibility, which allows people to accept circular explanations of psychological phenomena derived from information about brain responses. This is problematic in a courtroom, where a judge or jury might accept such scientific evidence without further validation.
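To see why such reverse inferences are weak, consider a simple application of Bayes' theorem; the figures used here are hypothetical assumptions chosen purely for illustration, not values from Fine or from any study cited in this essay. Suppose a brain region shows significant activation in 90% of scans in which the subject is deceiving, but also in 30% of scans involving other cognitive tasks, and that deception is present in only 10% of the scans under consideration. Then:

\[
P(\text{deception}\mid\text{activation})
  = \frac{P(\text{activation}\mid\text{deception})\,P(\text{deception})}{P(\text{activation})}
  = \frac{0.9 \times 0.1}{(0.9 \times 0.1) + (0.3 \times 0.9)}
  = 0.25
\]

Even with apparently strong 'forward' evidence, the activation on its own would warrant only a 25% belief that deception had occurred, which is why attributing a specific mental process to activity in a given region is so hazardous.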

Deception detection is one of the areas hailed by those who use fMRI commercially as able to revolutionize testimony in court. However, there is some doubt as to the veracity of such claims. Kanwisher (2009, p.11) points to three exceptionally successful individual-subject studies that have been conducted. These three studies analysed two sets of fMRI data that were used to distinguish lies from truth (p.11). However, according to Kanwisher (p.11), in two of the studies it was not lies that were examined but target deception events. From the successful outcomes of these studies, with correct response rates of 90%, 76% and 89% respectively, it appears that classification and imaging methods are rapidly improving (p.11). However, Kanwisher (p.12) points out that this success rate may not be reproducible in the real world, and argues that lie detection within a laboratory environment is completely different from lie detection in the real world. Firstly, the subjects are making an instructed false response, not a lie. Secondly, real-life situations differ in that the stakes are much higher for the subjects, which could cause anxiety whether a subject was guilty or not (p.12). Also, a subject could be uncooperative, and fMRI is useless if the subject moves at all (p.12). It would be impossible for such studies to even remotely mimic real-life situations, as they would need a subject population of defendants suspected of serious crimes, and the experimenter would need to know whether the subject was lying in order to verify the test (p.12). Therefore, for ethical and practical reasons, it seems impossible to conduct studies that mimic real-life situations.

As neuro-technologies become more advanced they could indeed show us, as Greene and Cohen assert, that our actions are predetermined. However, for an ordered society the law requires us to be responsible for our actions, and for this it requires minimal rationality. Behavioural psychologists and psychiatrists are already able to assess people's minimal, rational psychological states. Neuro-technologies such as fMRI can also show physical disabilities within the brain. However, for the foreseeable future, to use fMRI for purposes such as deception detection or assessing whether people have a tendency towards criminal behaviour is spurious. Therefore, neuroscience makes little difference to the law.

REFERENCES

  1. Eigen, J.P. (2003), Unconscious Crime: Mental Absence and Criminal Responsibility in Victorian London, Johns Hopkins University Press, Baltimore, Maryland.
  2. Greene, J. and Cohen, J. (2004), "For the law, neuroscience changes nothing and everything", Philosophical Transactions of the Royal Society of London B, 359, pp.1775-1785.
  3. Tovino, S.A. (2007), "Functional neuroimaging and the law: Trends and directions for future scholarship", American Journal of Bioethics, 7(9), pp.44-56.
  4. Glenn, L.M. (2005), "Keeping an open mind: What legal safeguards are needed?", American Journal of Bioethics, 5(2), pp.60-61.
  5. Burns, J.M. and Swerdlow, R.H. (2003), "Right orbitofrontal tumor with pedophilia symptom and constructional apraxia sign", Archives of Neurology, 60(3), pp.437-440.
  6. Fine, C. (2010), "From scanner to sound bite: Issues in interpreting and reporting sex differences in the brain", Current Directions in Psychological Science, 19(5), pp.280-283.
  7. Weisberg, D.S., Keil, F.C., Goodstein, J., Rawson, E. and Gray, J.R. (2008), "The seductive allure of neuroscience explanations", Journal of Cognitive Neuroscience, 20, pp.470-477.
  8. Kanwisher, N. (2009), "The use of fMRI in lie detection: What has been shown and what has not", in Bizzi, E. et al., Using Imaging to Detect Deceit: Scientific and Ethical Questions, American Academy of Arts and Sciences, Cambridge, MA.