Archives for category: mental states
Neuroscience is making advances in mapping our brain. In doing so, it questions our fundamental beliefs about our autonomy within the law. However, to date, it has not succeeded in undermining the principles of justice that have underpinned Western legal systems and international treaties for centuries. This essay examines what difference, if any, neuroscience makes to the law.

Neuroscience is an advance within the cognitive sciences, which until a few decades ago relied mostly upon behavioural studies. Corporations involved in neuro-technologies now claim to be able to detect deception accurately, and also to assess whether people have tendencies towards criminality. To examine whether neuroscience in this capacity will make a difference to the law, this essay will study the history of the cognitive sciences in the law, the claim of Greene and Cohen (2004) that neuroscience makes no difference to the law, the philosophical and ethical issues that are fundamental to society and the law, and the claims and criticisms made about neuro-technologies such as functional magnetic resonance imaging (fMRI). The analysis concludes that, for the foreseeable future, neuroscience will make little difference to the law.

To assess whether neuroscience will make a difference to the law, it is necessary to look at the contributions the cognitive sciences have made and whether they have helped to ascertain people's responsibility under the law. The justice system relies upon defining the intentions of the defendant to judge whether they are guilty of breaching the law. To achieve this aim, the cognitive sciences, such as psychology and psychiatry, are often called upon for assistance. Eigen (2003, p.x) suggests that in the mid-nineteenth century it was not psychiatrists or legal professionals who identified the difference between insanity and the anomalous behaviour of unconsciousness, such as acts done whilst sleepwalking or from some other automatic reflex; it was the jury. It then became necessary to have scientists ascertain a person's culpability where there was a question of mental disability, rather than leave this assessment to a non-professional jury.

However, Eigen (2003, p.5) contends that during the nineteenth century, as more novel diagnoses appeared before the courts, expert medical witnesses were at risk of twisting courtroom evidence and framing it within their own contexts. This threatened to displace the function of the jury and marked a critical point between the law and the emerging specialties and technologies of cognitive science. According to Eigen (2003, p.6), judicial anxiety about insanity acquittals grew as diagnoses of different derangements came before the courts to explain a person's lack of accountability and moral agency. This is much like the dilemma posed by the new neuro-technologies and techniques that confront the law today in making assessments about people's responsibility. The question that arises is how much neuroscience should be included among the tools of the law if the aims of justice are to be achieved.

Greene and Cohen (2004, p.1775) argue that neuroscience's transformative effect on the law will come about by changing people's understanding of the notion of 'free will'. Free will is a problem because of our modern concept of the physical universe. They quote (p.1777) Peter van Inwagen: 'Determinism is true if the world is such that its current state is completely determined by i) the laws of physics, and ii) past states of the world.' Therefore, if all is predetermined by physics, then the idea of free will is an illusion. However, although most philosophers and legal theorists accept determinism, many also find it compatible with free will. According to Greene and Cohen (p.1777), compatibilists claim that free will is a persistent notion that is undeniable, and that it is up to science to establish how it works. Greene and Cohen (p.1778) state that the standard legal account of punishment is compatibilist in order to allow for retribution. For Greene and Cohen (p.1779), neuroscience will not change the law because the law assumes a level of minimal rationality in people's behaviour, rather than notions of free will. They go on to state that if neuroscience supports minimal rationality then there is no reason to think it poses a threat to the determination of criminal responsibility (p.1779). Although new syndromes are announced as excuses for criminal behaviour, they will only have validity if they undermine one's rationality in a significant way. Greene and Cohen (p.1780) argue that neuroscience can be helpful here by correlating behaviour with rationality and by helping people understand the mechanical nature of human action. Neuroscience promises to show the 'when', 'where' and 'how' of the mechanical process, so as to assess whether someone truly deserves to be punished or is just a victim of their neuronal circumstances (p.1780). However, this type of technology may have profound impacts upon the ethical concepts that humans have formed over time in their societies, especially those that pertain to the autonomy of the individual.

Philosophical and ethical thinking can help to align the law with the sciences by providing the tools with which to develop theories of responsibility and to assess the ethics of new technologies (Tovino, 2007). Through such studies, legislative, regulatory and judicial bodies can correlate legal processes with technological processes in an ethical manner, in particular when functional magnetic resonance imaging (fMRI) is combined with philosophical ethics (2007, p.44). This technology localizes changes in blood oxygenation in the brain and is used in neuroscience to map sensory, motor and cognitive function, as well as physical and mental health conditions, behaviours and characteristics (p.44). The legal issues of fMRI extend beyond patient-physician relationships to confidentiality, privacy and research ethics (p.44).

Some have referred to fMRI as being too reliant upon interpretation to be reliable as evidence (Bizzi et al., 2009). As Tovino (2007, p.47) notes, 'Sometimes the difference between seeing higher activity in the parietal lobe compared to the occipital lobe is akin to deciding whether Van Gogh or Matisse is the more colourful artist'. Tovino (p.47) also quotes Donaldson: 'What constitutes a "significantly greater" activation is in a way in the eye of the beholder'. With commercial fMRI companies claiming accuracy as high as 99%, and marketing uses that include risk reduction in dating, insurance verification and employee screening, privacy and confidentiality also become issues, especially if these claims are misleading (p.47). Tovino (p.48) quotes Greely and the U.S. Committee on Science and Law in stating that advances in fMRI threaten 'to invade the last inviolate area of "self"', a concern that has been termed 'neuroprivacy'. Therefore, the questions Tovino poses are: Is it deceptive to say that an fMRI test is objective, fully automated and infallible (p.47)? And will future fMRI tests require heightened confidentiality and privacy protections (p.48)?
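Tovino's 'eye of the beholder' point can be made concrete. The sketch below is a toy simulation in Python; the map and all numbers are invented for illustration only. It shows how the count of 'significantly active' voxels in one and the same statistical map changes with the threshold the analyst happens to choose.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy statistical map: 10,000 "voxels" of pure noise (z-scores under the
# null), plus a small hypothetical region carrying a weak true effect.
z_map = rng.standard_normal(10_000)
z_map[:200] += 1.5  # invented region with a modest signal

# The same data yield very different "activations" at conventional thresholds.
for z_threshold in (1.96, 2.58, 3.29):  # roughly p < .05, .01, .001
    n_active = int((z_map > z_threshold).sum())
    print(f"z > {z_threshold}: {n_active} voxels counted as 'active'")
```

At the loosest threshold hundreds of pure-noise voxels pass; at the strictest, much of the weak true signal vanishes. Which map is reported as 'the' activation is an analytic choice, which is precisely why interpretation matters for evidentiary reliability.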

These are important questions because of the rights and freedoms enshrined in international treaties. In Stanley v Georgia, the seminal 'privacy of thought' case, the U.S. Supreme Court stated that 'also fundamental is the right to be free, except in very limited circumstances, from unwanted governmental intrusions into one's privacy' (Glenn, 2005, p.61). The Court also stated in Lawrence v Texas: 'Liberty presumes an autonomy of self that includes freedom of thought, expression and certain intimate conduct' (2005, p.61). A fundamental principle of democracy is the accusatory system of criminal justice, which demands that the government, in seeking to establish the guilt of an individual, produce evidence against him or her by its own independent labours, rather than by compelling it from his or her own mouth (Miranda v Arizona, 1966 at 460) (Tovino, 2007, p.50).

However, one objection to excluding fMRI on these self-incrimination grounds is that DNA, blood tests, mental examinations, urinalysis and fingerprints are all admissible forms of evidence in courts today, so why not fMRI (Tovino, 2007, p.51)? Some counter-questions could be: Does this address the implications of seizing an individual's 'privacy of thought'? Is fMRI reliable and accurate in identifying or diagnosing physical and mental conditions, behaviours or characteristics? Are such tests as effective as DNA or blood-alcohol tests, or are there more effective methods of identifying a target condition? Also, who would be the authority that could gather such data from a brain scan, and what precautions and protocols should be followed (p.51)? Although neuro-imaging has been effective in showing courts the diminished responsibility of adolescents on death row (p.52), and in discovering brain tumours that may affect responsibility (Burns, 2003, p.48), many lawyers still argue that data gathered from fMRI should not be legally admissible evidence (Tovino, 2007, p.53).

For some philosophers, citing neuro-technologies such as fMRI as evidence in law is problematic. Fine (2010, p.281) states that the problem with advances in the neurosciences is that 'we still have minimal understanding of how neural structures contribute to complex psychological phenomena'. The complexity of brain structure makes it difficult to attribute behavioural conditions or characteristics to it. Statistics and data gathered from procedures that involve neuro-technologies may be inadequate or inappropriate (p.281), especially as grounds for convicting someone. The brain is too complex and massively interconnected a structure from which to infer a psychological construct that leads to an individual's imprisonment.

Fine (2010, p.281) contends that inferring a mental process from significant oxygenation of a particular area of the brain is a reverse inference, fraught with too many difficulties to allow specific functions to be attributed to particular brain regions. For Fine (p.281), the entire brain may not be involved in a particular function and 'there is no one-to-one mapping between brain regions and psychological processes'. Cognition arises through complex interaction of brain areas, with any single region being involved in a number of processes (p.281). This makes it unclear how much psychological meaning can be derived from the level of activity in particular regions of the brain. Also, data acquisition from fMRI is slow, which limits the psychological interpretations that can be inferred from brain events (p.282). Weisberg et al. (2008, p.20) state that neuroscience has an appeal that relies upon assumptions of infallibility, leading people to accept circular explanations of psychological phenomena derived from information about brain responses. This is problematic in a courtroom, where a judge or jury might accept such scientific evidence without further validation.
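Fine's worry about reverse inference can be stated precisely with Bayes' theorem. The sketch below (Python, with probabilities invented purely for this example) shows that even when a psychological process reliably activates a region, observing activation in that region licenses only a weak inference back to the process if the region also participates in many other processes.

```python
# Toy Bayes' rule illustration of why reverse inference is weak.
# All numbers are invented for the example.

p_process = 0.10                 # prior that the process of interest is engaged
p_activation_if_process = 0.90   # the process reliably activates the region
p_activation_if_not = 0.40       # but many other processes activate it too

# P(activation), by the law of total probability
p_activation = (p_activation_if_process * p_process
                + p_activation_if_not * (1 - p_process))

# P(process | activation), by Bayes' theorem -- the "reverse inference"
p_process_given_activation = p_activation_if_process * p_process / p_activation
print(f"P(process | activation) = {p_process_given_activation:.2f}")  # 0.20
```

On these assumed numbers, a region that lights up 90% of the time the process runs still implies only a 20% chance the process is running when we see it light up: this is the gap between forward and reverse inference that Fine describes.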

Deception detection is one of the areas hailed by those who use fMRI commercially as able to revolutionize testimony in court. However, there is some doubt as to the veracity of such claims. Kanwisher (2009, p.11) points to three exceptionally successful individual-subject studies, which analysed two sets of fMRI data used to distinguish lies from truth (p.11). However, according to Kanwisher (p.11), two of the studies examined not lies but target deception events. From the successful outcomes of these studies, with correct response rates of 90%, 76% and 89% respectively, it appears that classification and imaging methods are rapidly improving (p.11). However, Kanwisher (p.12) points out that this success rate may not carry over to the real world, and argues that lie detection within a laboratory environment is completely different from lie detection in the real world. Firstly, the subjects are making an instructed false response, not a lie. Secondly, real-life situations differ in that the stakes are much higher for the subjects; this could cause anxiety whether a subject was guilty or not (p.12). Also, a subject could be uncooperative, and fMRI is useless if the subject moves at all (p.12). It would be impossible for such studies to even remotely mimic real-life situations, as they would need a subject population of defendants suspected of serious crimes, and the experimenter would need to know whether the subject was lying in order to verify the test (p.12). Therefore, for ethical and practical reasons, it seems impossible to conduct studies that mimic real-life situations.
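Kanwisher's laboratory/real-world gap can also be illustrated with simple base-rate arithmetic. In the sketch below (Python), the sensitivity and specificity echo the roughly 90% laboratory figures above, while the base rates of actual lying are assumptions of mine; it shows how the same test degrades as deception becomes rarer in the population being screened.

```python
# How a "90% accurate" lie detector fares at different base rates of lying.
# Sensitivity/specificity mirror the lab figures; the base rates are assumed.

def positive_predictive_value(sensitivity: float, specificity: float,
                              base_rate: float) -> float:
    """P(actually lying | test flags lying), via Bayes' theorem."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

for base_rate in (0.5, 0.1, 0.01):
    ppv = positive_predictive_value(0.90, 0.90, base_rate)
    print(f"base rate of lying {base_rate:.0%}: P(lying | flagged) = {ppv:.0%}")
```

In a laboratory design where half the trials are instructed 'lies', the test looks impressive (90%); in a screening context where only 1% of subjects are deceptive, roughly nine out of ten people it flags would in fact be telling the truth.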

As neuro-technologies become more advanced they could indeed show us, as Greene and Cohen assert, that our actions are predetermined. However, for an ordered society the law requires us to be responsible for our actions, and for this it requires minimal rationality. Behavioural psychologists and psychiatrists are already able to assess people's minimal, rational psychological states. Neuro-technologies such as fMRI can also show physical disabilities within the brain. However, for the foreseeable future, using fMRI for purposes such as deception detection, or to assess whether people have a tendency towards criminal behaviour, is spurious. Therefore, neuroscience makes little difference to the law.

REFERENCES

  1. Eigen, J.P. (2003), Unconscious Crime: Mental Absence and Criminal Responsibility in Victorian London, Johns Hopkins University Press, Baltimore
  2. Greene, J. and Cohen, J. (2004), 'For the law, neuroscience changes nothing and everything', Philosophical Transactions of the Royal Society of London B, 359, pp.1775-1785
  3. Tovino, S.A. (2007), 'Functional neuroimaging and the law: Trends and directions for future scholarship', American Journal of Bioethics, 7:9, pp.44-56
  4. Glenn, L.M. (2005), 'Keeping an open mind: What legal safeguards are needed?', American Journal of Bioethics, 5:2, pp.60-61
  5. Burns, J.M. and Swerdlow, R.H. (2003), 'Right orbitofrontal tumor with pedophilia symptom and constructional apraxia sign', Archives of Neurology, 60:3, pp.437-440
  6. Fine, C. (2010), 'From scanner to sound bite: Issues in interpreting and reporting sex differences in the brain', Current Directions in Psychological Science, 19:5, pp.280-283
  7. Weisberg, D.S., Keil, F.C., Goodstein, J., Rawson, E. and Gray, J.R. (2008), 'The seductive allure of neuroscience explanations', Journal of Cognitive Neuroscience, 20, pp.470-477
  8. Kanwisher, N. (2009), 'The use of fMRI in lie detection: What has been shown and what has not', in Bizzi, E. et al., Using Imaging to Detect Deceit: Scientific and Ethical Questions, American Academy of Arts and Sciences, Cambridge, MA


J.E.Thomas. “Lament”, (2006), 160cm x 120cm, oil on canvas


There are a variety of notions as to what consciousness is. Some people denote consciousness simply as the difference between being awake/aware and asleep/unaware. Neuroscience posits consciousness as consisting in various neural oscillations (Block 2002), but is still unclear as to how meaning is generated in the brain (Crick and Koch 1998). One of the most important features of consciousness, its subjectivity, is held by Searle to be a neurobiological process (Searle 1980), and is captured in Thomas Nagel's notion of 'what it is like to be' (Nagel 2002). According to Ned Block (2002), these various notions of consciousness cause confusion, and his paper 'Concepts of Consciousness' seeks to clarify and define consciousness by separating it into two distinct categories: phenomenal consciousness and access consciousness. This essay will argue that Block fails to establish such a separation, and that this failure undermines his aim of clarification.

Block (2002, p.206) describes the concept of consciousness as a 'mongrel concept': a single term used to identify a variety of different phenomena. Block (2002) disputes treating these different phenomena as a single concept, and wishes to divide consciousness into recognizable states in order to provide clarity and certainty for people when they discuss consciousness. By categorising consciousness into two main types, phenomenal consciousness (P-consciousness) and access consciousness (A-consciousness), Block (2002) contends that one type of consciousness is based upon non-physical phenomena and the other upon the physical functioning of the brain.

Block (2002) theorises that P-consciousness is based upon perceptual experience, not simply the state of awareness one is in when awake. P-conscious properties can be referred to as 'what it is like' to have states such as pain, sight, hearing, smell and taste, together with the experiential properties of sensations such as thoughts, desires and emotions (2002, p.206). Block (2002) also contends that such conscious states can make an intentional difference and can be representational. However, Block holds that P-conscious states can be kept distinct from any cognitive, intentional or functional property, namely A-consciousness. A-consciousness is non-phenomenal consciousness and is defined by its functionality. Block (2002) maintains that it is used for reasoning, reporting, and the direct control of rational action. One of the relationships between P-consciousness and A-consciousness is that A-consciousness reports on the information gathered by P-consciousness.

Another relationship between Block's two concepts of consciousness is that, although each type is distinct, each also interacts with the other (2002, p.210). For example, when perceptual information is accessed it can change the intentional direction of thought, or as Block puts it, it 'can change figure to ground and conversely, and a figure-ground switch can effect one's phenomenal state' (2002, p.209). In Block's (2002) view an experience's content can be in both conscious states at once, because of the phenomenal properties of one and the representational properties of the other. However, there are three main differences between these two types of consciousness. Firstly, P-consciousness is phenomenal while A-consciousness is representational. Block (2002) remarks that the content of P-consciousness is the 'what it is like' component, and this allows the content of an experience to be both P-conscious and A-conscious. Secondly, A-consciousness is functional, or as Block (2002, p.209) declares: '…what makes a state A-conscious is what a representation of its content does in a system'. Thirdly, P-consciousness can be a 'kind of' state. For example, if pain is a P-conscious type then every pain must have that feel, whereas A-consciousness could sometimes fail to be accessible. Block sums up these differences by maintaining that P-conscious states are sensations, whereas A-conscious states involve 'propositional attitudes' such as thoughts, beliefs and desires: representational states expressed by 'that' clauses (2002, p.209).

As Block's intention is to define these states of consciousness so that they can be properly identified and not confused, he needs to show that P-consciousness and A-consciousness can be separated. To do this Block (2002) gives particular examples of A-consciousness without P-consciousness, such as a computational robot that is identical to a person but does not experience phenomenal or perceptual states. To act, the robot needs to receive information. Even the simplest computer needs input, and it does not seem plausible that the robot could do any computing at all if no data were entered into it; it would appear to be an inanimate object. Therefore, with data or information merely taking the place of perceptual states, this example of A-consciousness without P-consciousness does not seem credible.

Another example that Block (2002) gives of A-conscious states without P-consciousness is the blindsight patient, who can guess that there is an 'X' rather than an 'O' in his blind field. For someone to have knowledge of this 'X' such that they could guess it was there, they must have some previously gathered experience or knowledge of that 'X'. An analogy would be my guess, while driving, that there is a motorcyclist in my car's blind spot, based on my earlier perception in the rear-view mirror of her travelling in the same direction as me but in a different lane. I would only think about the motorcyclist, or the 'X' in the case of the blindsight patient, if I had previous knowledge or experience of it. Unless we are talking about assumed innate thoughts, I cannot have thoughts about something of which I have no previous knowledge or experience. Therefore, Block's example seems not to succeed on this account.

Block (2002) continues his blindsight analogy with a person who has superblindsight. He states that this superblindsighted person can guess that there is a horizontal line in his blind field purely through introspection, in the way that Block (2002, p.211) says we can solve problems simply through thoughts popping into our minds, or the way that one might just innately know the time, or which way north is, without experiencing it. This superblindsight example is contentious because resolutions to problems need to be based upon some experience or knowledge. Even our knowledge of north is debatable without some perceptual experience, such as its being pointed out; without that, the concept of north would not have any meaning. It seems that the only way A-consciousness could exist without P-consciousness would be to conclude, as Descartes did, that 'even bodies are not strictly perceived by the senses or the faculty of imagination but by the intellect alone, and that this perception derives not from being touched or seen but from their being understood' (Descartes, 2002, p.13). Such an assumption of innate knowledge makes this analogy an appeal to belief, rather than a proof of truth.

Block (2002) also claims that P-consciousness without A-consciousness is possible. To be P-conscious without being A-conscious, one would be perceptually aware without being able to transmit that information into useful data. An objection to this claim states that we would never be in a position to know whether P-consciousness without A-consciousness is possible. Block (2002) responds to this objection by arguing that introspection would allow us to be aware of our P-consciousness and to see it as distinct from A-consciousness. This is a contradictory response: to be introspectively aware would be to put A-consciousness to use, whereby one would be A-conscious as well as P-conscious. The claim of P-conscious states without A-conscious states thus also appears unconvincing.

Block's intention to differentiate various concepts of consciousness in order to counteract confusion seems to end up confused itself. Intuitively, there does not seem to be any problem in thinking about consciousness as being perceptual on the one hand and functional on the other; these two types seem to work together to underpin a functioning mind. However, there is confusion between the two types, with A-consciousness being found by Block (2002) to be indeterminate, and P-consciousness sometimes straying into the realm of A-consciousness through having properties such as thoughts, wants and emotions (2002, pp.207-08). Although we can assume Block does not see states such as thoughts and desires as functional, these could be categorised as functional activities of the brain.

Computational approaches to the mind see access consciousness as identical to phenomenal consciousness because of its function of information gathering and processing (2002, p.208). So is the categorical statement that Block puts forward true: if P = A, then the computational model of the mind is correct? Phenomenality and accessibility are considered features of consciousness, as are intentionality, subjectivity, qualia, self-consciousness, unity and dynamic flow (Van Gulick 2004). However, neither being identical features of a single concept nor being two among many features of consciousness entails that the computational model is correct. There were other models of the mind that were not necessarily computational before Block made his claims, and computational models of the mind may be incorrect for other reasons, such as the binding problem (Crick and Koch 1998). Given the number of features of consciousness, it appears to have a multidimensional rather than a singular or dichotomous quality.

Block argues that his claim requires his two types of consciousness to be conceptually separable. In my view, he fails to establish this. Without empirical or conceptual evidence, it is like stating that a thing, or several things, necessarily form two separate categories simply because they have been put into two separate 'files'. Therefore, I do not think that Block's model of consciousness is plausible as a single theoretical perspective.

References:

Block, N 2002, 'Concepts of Consciousness', in D Chalmers (ed.), Philosophy of Mind: Classical and Contemporary Readings, Oxford University Press, New York

Crick, F & Koch, C 1998, 'Consciousness and Neuroscience', Cerebral Cortex, vol. 8, pp. 97-107, viewed 3 May 2012, http://www.klab.caltech.edu/~koch/crick-koch-cc-97.html

Descartes, R 2002, 'Meditations on First Philosophy', in D Chalmers (ed.), Philosophy of Mind: Classical and Contemporary Readings, Oxford University Press, New York

Nagel, T 2002, 'What is it like to be a bat?', in D Chalmers (ed.), Philosophy of Mind: Classical and Contemporary Readings, Oxford University Press, New York

Searle, JR 1980, 'Minds, Brains, and Programs', Behavioral and Brain Sciences, vol. 3, pp. 417-457, viewed 3 May 2012, http://users.ecs.soton.ac.uk/harnad/Papers/Py104/searle.prob.html

Van Gulick, R 2004, 'Consciousness', in EN Zalta (ed.), The Stanford Encyclopedia of Philosophy (Summer 2011 Edition), viewed 5 May 2012, http://plato.stanford.edu/archives/sum2011/entries/consciousness/