Archives for the month of: January, 2013


The concept of modularity within the mind has been expounded by its adherents since 1983, when Jerry Fodor wrote “The Modularity of Mind”. While Fodor claimed that modularity was confined to input systems such as perception and language (Robbins, 2010), others, such as Carruthers, have since put forward theories of massive modularity which include modules for high-level cognition such as belief, problem-solving and planning (Robbins, 2010). Carruthers’ theory of massive modularity rests on the criterion that mental processes must be encapsulated systems, though for Carruthers modules are only weakly encapsulated, while for Fodor they have limited accessibility but complete encapsulation (Robbins, 2010).

Prinz (2006, p.22) uses Fodor’s (1983) criteria for modularity to contend that neither input systems nor central systems are modular. Prinz’s argument against modularity within the mind is based upon his view that the mind instead contains systems distinguished by their function. He states that Fodor’s criteria fail to be satisfied not just by the mind’s subsystems working together but even by individual systems: at best, only a few dispersed subsystems approach the modular type, and even these do not fully qualify as effective constructions of the mind (Prinz, 2006, p.22).

Fodor’s claim of modularity arises from the fact that brain lesions cause particular mental deficits, and from neuroimaging studies that profess to show brain areas that are active when healthy people perform specific mental tasks (Prinz, 2006, p.23). However, Prinz contends that on closer analysis there are considerable inconsistencies in the neuroimaging findings. Brain lesions pose similar diagnostic problems, as the same lesion can produce different effects in different individuals (Prinz, 2006, p.23). While there are areas of the brain that localize specific functions, for Prinz this still does not support modularity (Prinz, 2006, p.24). While defenders of modularity cite the domain specificity of brain function, this is an assumption, as mental functions could turn out to comprise large overlapping networks (Prinz, 2006, p.24). This contention is supported by similar brain areas being active while a person performs multiple tasks, and by specific brain lesions causing multiple deficits (Prinz, 2006, p.24). Therefore, the brain relies upon functional systems that in turn rely upon networks of subsystems.

The main criteria of Fodor’s and Carruthers’ concepts of modularity are that modules are inaccessible and encapsulated (Prinz, 2006, p.29). Prinz states that this seems reasonable, as we have no introspective access to how our sensory or language systems work (Prinz, 2006, p.30). This is evidenced through a study by Nisbett and Wilson (1977) showing that judgement is often determined at a subconscious level (Prinz, 2006, p.30). However, Prinz argues that this does not show that modules are inaccessible, only that we lack the ability to consciously access them (Prinz, 2006, p.30). Encapsulation, which Fodor puts forward as the most important criterion, is also argued by Prinz to be false. If a system were truly encapsulated, it would have to be insulated from all external systems; however, there is evidence that input systems are interconnected, with three studies finding direct and content-specific cross-referencing of information between sensory inputs (Prinz, 2006, p.31).

While the empirical evidence against encapsulation appears to negate the theory, one could also surmise that such sensory experiences draw not on the input systems themselves but on memory. However, this would still mean that other modules are accessed to provide information to the subject module, so that encapsulation and inaccessibility cannot be sustained as the main criteria for modularity. Therefore, Prinz’s argument that the mind is not modular appears to hold.

References:

  • Prinz, J. J. (2006). Is the mind really modular? In R. Stainton (Ed.), Contemporary Debates in Cognitive Science (pp. 22-36). Oxford: Blackwell.
  • Robbins, P. (2010). Modularity of mind. In E. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. Retrieved 3 January 2013 from http://plato.stanford.edu/entries/modularity-mind/

 


The capacity to ascertain and attribute mental states to others, and also to ourselves, is called mindreading or theory of mind (ToM). It is an ability essential for proper social interaction and moral competence. Children have usually developed the basics of mindreading by the age of five: they understand that others have desires, emotions and beliefs, and that these may differ from their own. We also tend to understand another’s behaviour through understanding our own, or through introspective reasoning, for example: I am happy, therefore I smile; they smile, therefore they are happy. These are attributions of mental states and are evident in children even before they can reason articulately.

An alternative to the introspective view of ToM is Gopnik and Wellman’s (1992) argument that the concepts of mindreading and mental states are the premises of a theory. Knowledge of our own and other minds is reconciled by this type of understanding, which is known as Theory Theory (TT). TT states that children learning about other minds are engaged in the construction of this theory with its attendant set of laws, and argues that children apply a theoretical understanding of the world in an attempt to make meaning of other minds (Gopnik & Wellman, 1992, p.148). Just as theories can be revised or falsified as new evidence comes in, so too can a child change its reasoning as it develops a theory of mind about other people. This leads to children having distinctive interpretations of evidence and therefore differing views of the world.

Gopnik and Wellman (1992, p.149) contend that in this way a child’s theory of mind develops and transitions from one view of the mind to another, so that by five years the child has a representational rather than a mentalistic view. At first, two-year-olds think about the world in terms of desires and perceptions. By three, children begin to attribute representational mental states, using terms such as think, know, remember, believe, pretend and dream. By five, children develop a more thorough representational view in which the representations become propositional. Gopnik and Wellman (1992, p.153) posit that at this stage a child’s view of the world is fully intentional. They therefore argue that the transitional mental states of children aged between two and five show all the indicators of a theory change, since a theory shifts as new evidence emerges and new predictions become possible (1992, p.158).

The alternative view to TT is simulation theory (ST), which holds that we mentally simulate the mental processes of others in order to generate similar processes within ourselves. We practise this when we empathise with another by placing ourselves in their position, and we use this information to assess the state of another mind. Gopnik and Wellman (1992, p.160) disagree with this view on the grounds that it relies upon a person not misinterpreting their own mental state, whereas TT allows for such false interpretations and the consequent corrections that may be made. They think that a young child’s errors of interpretation are incorrectly termed ‘egocentric’, and that ST disregards the fact that very young children are quite able to attribute to others mental states that differ from their own (1992, p.164).

However, an objection to both theories could be that ST may be achieved automatically through neural mechanisms that associate similar behaviours, while TT may be considered only a tacit theory if ToM indeed turns out to be false. It could be that beliefs and desires are simply automated workings of the mind. If this is the case, then only neuroscience has the ability to explain the workings of the mind.

References:

  1. Gopnik, A., & Wellman, H. M. (1992). Why the child’s theory of mind really is a theory. Mind and Language, 7(1-2), 145-171.


Bracelet, c.1400 BCE (New Kingdom), Egypt, findspot unknown; gold, lapis lazuli, cornelian and glazed composition; 20.0 cm length. British Museum, London.

Scarab beetles were associated with the gods Atum/Re and Khepri in ancient Egypt[1]. According to one conception of the universe, the scarab beetle was the sun travelling across the sky[2], and its protective imagery was used on stone seals placed over the mummified remains of the heart, as its hieroglyph meant ‘to come into being’ or ‘to exist’[3]. A scarab was also known at times to actually replace the heart within the mummy[4]. In particular, it was the movement of the dung beetle, Scarabaeus sacer, rolling a ball of dung across the ground that was interpreted as the sun’s movement across the sky, with the scarab god Khepri being responsible for the sun’s transit[5]. The analogy of the self-creating Khepri, known as ‘he who is coming into being’, was reinforced by scarabs being seen to emerge from these balls, which was the result of the balls containing the beetle’s eggs[6].

This New Kingdom bracelet from Egypt is dated c.1400 BCE and is made of gold, lapis lazuli, cornelian and glazed composition. The lapis lazuli scarab beetle plays a central role in its design, with the main features of the beetle outlined in gold filigree emphasising the head, thorax and wings. The six legs of the insect are designed to provide linkages to the rest of the bracelet, with the strong front and back legs holding the links and the smaller middle legs maintaining the balance of the design. The filigree outlines the wing casings and the thorax of the beetle, and the lapis lazuli is carved to give tiny detailed eyes at the front of the head. Overall, the scarab maintains a strong ovoid form that is also displayed in many other depictions of the scarab beetle in Egyptian art and design.

The actual scarab beetle can be monotone black, brown, patterned or iridescent. It is large and ovoid in shape, has six rather sturdy legs and three distinct parts: the small hemispherical head, which extends to the thorax, and the large wings joined at their centre by a small reversed semicircle. Attached to the head are two short, thick antennae. The proportions of the beetle have been replicated exactly in the design of the bracelet, with the details of the two forward-looking eyes and the lines of the body also accurate. The wings are delineated accurately, though without the joining semicircle, and the eyes and mouth have been depicted without the antennae. The muscular front legs are also accurately crafted, but the middle and back legs are more simplified. Given the emphasis on the beetle in the design, it could be suggested that the bracelet had a protective as well as an aesthetic role, perhaps serving as a cultic charm associated with rebirth.

BIBLIOGRAPHY:

  1. Teeter, E. (2002). Animals in Egyptian religion. In B. J. Collins (Ed.), A History of the Animal World in the Ancient Near East (pp. 335-360).
  2. Kritsky, G., & Cherry, R. (2000). Insects in Egyptian mythology. In Insect Mythology (pp. 49-63).
  3. Potts, T. (1990). Egyptian jewellery. In Civilization: Ancient Treasures from the British Museum (pp. 76-79). Australian National Gallery.


[1] Teeter (2002), p. 337.

[2] Ibid., p. 343.

[3] Ibid., p. 346.

[4] Kritsky & Cherry (2000), p. 52.

[5] Ibid., p. 49.

[6] Potts (1990), p. 76.


 

In his 1950 article for Mind, Alan Turing (p.433) explores whether machines can think, using an imitation game as the frame for his inquiry. The imitation game is played by three people: a man, a woman, and an interrogator of either sex. The interrogator must establish, by asking a series of questions, which is the man and which is the woman. The interrogator cannot see the respondents or hear their replies, as the questions and answers are in written form. In Turing’s version of the game the man or the woman is replaced by a computer. Turing (p.434) then asks whether the interrogator would be fooled as often in the game if one of the respondents were a computer, and suggests that the computer could respond as succinctly and comprehensively as any human.

 

The game Turing envisages will use only digital computers (p.436), as the machine must follow a set of rules just as a human computer does. Like Turing’s understanding of human computers, the digital computer has a store, which is its memory; an executive unit, which carries out the individual operations; and a control, which is its ‘book of rules’ (p.437). The digital computer will also be what Turing terms a ‘discrete state machine’, which has only a finite number of possible states and moves between them by definite jumps, so that uncertainty about its behaviour cannot arise (p.440). This allows all future states of the machine to be predicted from its initial state and its input signals. Turing states that such a machine, with an adequate and suitably increased storage capacity and appropriately constructed programming, should be able to convince the interrogator that it is human (p.442).
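Turing’s own illustration of a discrete state machine in the paper is a wheel that clicks round through three positions once per step unless held still by a lever, with a lamp that lights in one position. A minimal sketch of such a machine in Python (the state names and which state lights the lamp are illustrative assumptions, not taken from the paper’s notation):

```python
# Sketch of a discrete state machine in the spirit of Turing's
# wheel-and-lamp example: three states, one binary input (the lever),
# a lamp lit in exactly one state.

ADVANCE = {"q1": "q2", "q2": "q3", "q3": "q1"}  # wheel clicks round
LAMP_ON_STATE = "q3"  # assumed: the lamp lights in this position


def step(state, lever_held):
    """Return the next state: hold still if the lever is on, else click round."""
    return state if lever_held else ADVANCE[state]


def run(initial_state, lever_inputs):
    """Trace the machine. Because the state set is finite and transitions
    are deterministic, every future state follows from the initial state
    and the input signals alone, as Turing observes."""
    state = initial_state
    trace = []
    for lever in lever_inputs:
        state = step(state, lever)
        trace.append((state, state == LAMP_ON_STATE))  # (state, lamp lit?)
    return trace
```

For example, started in `q1` with the lever released, the machine visits `q2`, `q3` (lamp lit), then returns to `q1`; holding the lever freezes it in place. The finite transition table is what makes complete prediction possible, which is exactly the property the mathematical objection below targets.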

 

One of the objections to Turing’s assertion that computers can think is the mathematical objection (p.444). Gödel’s theorem implies that in any sufficiently powerful consistent logical system there are statements that can neither be proved nor disproved within the system, so a finite-capacity system such as the discrete state machine will have certain things it cannot accomplish. In the imitation game there may therefore be questions that the finite machine cannot answer even given adequate time. Turing answers this objection by stating that it has not been proven that humans are free of such limitations either, and that, just as there could be humans cleverer than any given machine, there might also be machines cleverer than any given human (p.445). A further response to this reply could be that even infant humans have a capacity for such things as humour, which might be considered an open-ended possibility of the human intellect (Hurley et al., 2011, p.5) that Turing’s discrete state machine could not replicate. If such a contention were found to hold, then Turing’s assertion that machines can think may turn out to be false, in that an interrogator would be able to identify the machine by its lacking the capacity to answer questions laced with implied humour.

 

  1. Hurley, M., Dennett, D. C., & Adams, R. B. (2011). Inside Jokes: Using Humor to Reverse-Engineer the Mind. MIT Press.
  2. Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433-460.