Dear Students!
Next week we will have the last speaker
in our seminar series - unfortunately Michael Ferkin has cancelled for
the 21st of November. Stephen Blatti will explore issues of cognitive
skills like self awareness, consciousness, theory of mind, and evidence
for these skills in the animal kingdom. The reading materials will show
you once again how much opinions on this topic differ - what do YOU
believe? "Believe" is the right word here, because at this point "evidence" can be interpreted in different ways!
Our presenters for this week will be:
Stefano: DeGrazia - Self-Awareness in Animals
Abby: Penn et al Part 1: the first 31 pages
Kenneth: Penn et al Part 2: the rest
Have fun!
cheers
Uli
I have been thinking about the question that one of our classmates asked before: do vervet monkeys “want” to produce different alarm calls in the presence of different predators? If they really want to produce these fixed signals, can we say that human infants under 6 months of age “want” to cry because they feel pain or hunger? A project done in our lab indicated that human infants at 6 months of age know how to get their mothers’ attention by crying, but infants younger than 6 months do not, so we can infer that (most) infants have learned to cry consciously by 6 months. As for the vervet monkeys described in Dr. Owren’s article, infant vervet monkeys can produce these calls without hearing them from others, but they do not know how to react when they first hear them. Therefore, I think both human infants and infant vervet monkeys need some time to learn to produce these fixed signals consciously.
The second question concerns whether animals can anticipate their own future, from page 207 of DeGrazia’s paper. I do not see the difference between the claim that animals have anticipation and the argument from a skeptic: “A skeptic might reply, however, that what is adaptive is the capacity to encode information gained from experience and use that information in modifying future behavior.” Isn’t experience part of memory?
As for Penn et al.’s paper, I wonder whether the relational reinterpretation (RR) hypothesis is a good explanation for the gap between human and nonhuman cognition, or whether such a gap really exists. The paper says “humans alone are able to reinterpret the world in terms of unobservable causal forces and mental states.” That makes sense, because people prepare for a rainy day. But can nonhuman animals really not do this? Sometimes I feel that an individual who worries too much becomes annoying and has trouble getting things done… Some of the commentaries raised questions similar to mine, such as Emery and Clayton’s (p. 134) or Siegal and Varley’s (p. 146).
In reviewing Penn et al.’s peer commentary, I found Burghardt’s question about what “kind” and “degree” mean very interesting. Ontogenetically, it is also the question of “discontinuity” versus “continuity” debated among developmentalists in the early childhood literature. While many of the papers we have read so far in this seminar suggested or emphasized possible continuity across species, this particular paper argues the opposite: a qualitative discontinuity between us and other, simpler species.
The author of the commentary cited Adler’s way of classifying differences: 1) the difference is in degree: we have more flexibility in operating X; 2) the difference is measurable: it is obvious that we weigh more than birds; 3) superficial difference in kind: ice and water differ in kind only superficially because both are H2O; and 4) real difference in kind: rarely do things differ in an absolute kind. Even the smallest chemical unit of matter, the atom, does not necessarily differentiate things in kind without quantitative information about protons, neutrons, and electrons. Here, Burghardt gave the example of “living” vs. “dead,” which I think also does not necessarily mark a radical difference. A human’s dead body is still human; you would never confuse a human’s dead body with an animal’s. Some cultures consider a human’s dead body a vehicle for an afterlife, so they preserve the body as well as they can and treat it with respect.
The problem now is that we don’t know what the smallest unit of language, cognition, or emotion is (unless you tear the different constructs apart and later recombine them, but then how do you combine them?), and we don’t know whether it is possible to measure such a global construct without other variables intervening (e.g., you almost always need some linguistic or prelinguistic skills to communicate with another species).
I have several comments on David DeGrazia’s “Self-awareness in animals” chapter. I am by no means trying to refute its claims; rather, I am pointing out areas where the examples provided do not convince me of the author’s claims.
My first question/comment concerns the claim made in section 2 (page 203), where the author states, “one must be capable of conscious states, and in particular pleasant and unpleasant feelings, in order to have desires; unconscious desires are possible, but only in beings capable of having conscious desires.” If an animal is unaware of its unconscious desires, why does it have to be aware of its conscious desires? The spider example seems to support this: spiders unconsciously want to survive, so they unconsciously desire to build webs so that they can eat.
My second question is about the same page and section, where the author gives the example of hens preferring wood shavings to wire floors. I wonder whether this preference is conscious, and how one could prove that it is. To me it could be something hardwired (like a complicated reflex), similar to the spider’s “desire” to build a web. The author later calls such gray areas of “conscious desires” proto-desires. On that point, what is the difference between content-laden “proto-desires” and “proto-beliefs” and a memory of stimulus-response interactions with the environment?
In the section about meta-cognition in animals (section 12, page 216), the author presents several studies supporting the idea that certain animals are capable of meta-cognition. The author states that some of the best evidence comes from David Smith’s joystick experiments with monkeys. To me, a much more convincing case for meta-cognition in primates appears earlier in the paper: the example in section 9, page 212, where fighting male chimps will hide signs of fear, “which might embolden the rival - by suppressing instinctive facial expressions and vocalizations or manually covering his mouth.” This is a clear example of meta-emotion, awareness of one’s own emotional state, as well as a good example of meta-cognition, since the chimp knows that the expression of fear will influence the rival chimp’s “confidence.”
The interesting aspect of both of these articles is that, despite their extensive footnotes citing various studies, the underlying premise is an old one. Much of what is covered is essentially an empirical version of David Hume’s concepts of impressions, ideas, causal relations, and identity. Part of this connection is obviously due to my having recently read some of Hume in another course; still, the underlying questions are the same: how do we know who we are, and how much do animals share this sense of awareness? I do have some questions or concerns about the premises presented, particularly in the DeGrazia article. For example, when discussing memory in animals, he states that because they may have a sort of episodic memory, this could be evidence of self-awareness. Further, this rests on the idea that self-awareness may be based on the ability to have a proto-language in which one can express propositional attitudes about things one desires. I wonder whether all of these things could just be reduced to conditioning. While I may have certain desires about what I want for lunch, there are many things that I “do,” or that could be construed as “desires,” that involve no propositional attitude; I simply do them. For example, every morning I may eat Cheerios, not because I desire Cheerios but because I have always had Cheerios for breakfast. There is no desire or contemplation; it is simply conditioned, a habit. The day I choose yogurt and fruit instead, because I woke up aware of a new desire, departed from convention, and actively sought out the new thing, that indicates desire. I am not sure my dogs have that sense of self-awareness and the ability to contemplate. For example, while my dogs have times when they want to go outside, a lot of this is conditioned.
They know that in the morning they will go outside; before I go to school, they go for a walk; when I get home, they will go outside; at 6 they will eat, then go for a walk; then at 10 they will go outside again. If I miss any of these steps, they will let me know that it is time to go outside. Is this a desire to be outside? Is this self-awareness that they would like to go outside? I am not sure; it just seems to come out of conditioning.
In another example, DeGrazia discusses imitation as a sort of self-awareness. If there are mirror neurons, or at least some system that allows mirroring, I am not sure this shows any sort of self-awareness rather than mimicry. Just as I don’t think my iPad has any self-awareness when I switch it to mirror mode and hook it up to a digital adaptor, I am not sure that parrots, greys excluded, are illustrating any sort of cognition… possibly just mimicry, possibly just an encoded ability.
The Penn article is interesting in that it delves more into degrees of awareness, our ability to create abstract ideas, and our ability to use them creatively. Under the Language-Only Hypothesis, would someone unable to speak or read then have reduced cognitive ability and ToM? While the authors discuss this on page 121 and refute the idea because humans have the faculty of language, they also state that learning a language “re-wires” the brain. If so, how? And when we teach animals to sign, does this cause them to “evolve” in a sense?
Penn et al. seem to take a more traditional theoretical approach to describing what essentially makes human cognition different from nonhuman cognition. Traditional theories in psychology typically emphasize the idea that the brain is solely responsible for generating behavior. For instance, perception can be thought of as the input to a computational, representational system that mentally transforms the input into motor commands. Many researchers apply this computational approach to language as well. For example, Hauser et al. (2002a) suggest that the only component of human language that is, in fact, uniquely human is the computational mechanism of recursion.
Nonetheless, many researchers treat embodied cognition as the idea that the contents of these mental states/representations can be influenced by the states of our bodies. One of the commentaries quickly pointed out that Penn et al. “continue to rely heavily on a computational model of cognition that places all the interesting work to be done solely inside the organism’s head.” The suggestion is that they should take a more embedded approach, on which humans’ cognitive advantage over other species may be a consequence of how we exploit the elaborate structures we construct in the world, rather than of more elaborate structures inside our heads. I agree that Penn et al. should take a more embodied approach to describing the discontinuity between human and nonhuman minds. However, I would not go to the extreme of positing that all cognition is purely embodied, as the commentator suggests. I think embodiment does work to solve problems that were typically assumed to be done entirely in the head; still, some aspects of behavior are better explained by a more computational approach.
Penn et al's discussion of the discontinuity between human and nonhuman minds and the self-awareness article both examined evidence that relied heavily on nonhuman animal behavior to draw conclusions about nonhuman animal cognition. I'm not necessarily criticizing this, but it makes me wonder how much research has been done, or could be done, using fMRI or PET scans on animals to help determine some of these things. The problem with something like this is that in order to study "behavior," subjects need to be active in their environment, not strapped down to some table and pushed into a giant scanner. But it's difficult to really get inside the "head" of any animal in order to figure out how "self-aware" it is or how far its concepts of "sameness" and "difference" go. Without a more direct way to examine nonhuman minds, it seems most of the conclusions drawn by these researchers must be based solely on behavioral evidence, and so much of the evidence is shaped more by their preconceived notions and dedication to certain theories about cognition than by the evidence itself. This is just something I thought about mainly while reading through the Penn et al paper. It was almost tiresome watching them explain one hypothesis after another only to tear it apart at the end and proclaim it insupportable or without evidence.
Based on some of the things Penn et al said in their paper, I'm not sure to what extent they would agree with DeGrazia, either. It seemed to me that some of the examples of self-awareness he gave were what I've thought of as simply an "instinct." So where do you draw the line, and how do you tell the difference between a nonhuman's self-awareness and mere survival instinct?
In this post, I would like to focus on two points of the debate between Penn et al. and Irene Pepperberg as it appears in the reading for Wednesday.
First, Pepperberg’s finding that Alex can associate Arabic numerals with sets of objects is astonishing. Alex can do so because he has learned to associate both the numerals and the sets of objects with certain vocalizations. However, it is not clear to me why this should count as a symbolic capacity, since the behavior seems to be grounded only in associations.
Second, to a certain extent, Pepperberg’s first critique is right. Penn et al.’s claim that animals cannot distinguish sameness and difference beyond the context in which they are trained is undermined by the fact that Alex can answer questions about new situations using the same criteria he has learned in other situations. To be precise, I’m not sure that the example Pepperberg gives is really relevant. The question “what color bigger?” is not the same as “what’s different?”; it seems plausible that when Alex answers “none” to the “what color bigger?” question, he does so because he has learned that when it is not possible to give a positive answer (by mentioning the label of a color), the appropriate response is “none.” There is no need for Alex to entertain the thought that the two colors have the same size; it is sufficient to notice that neither of the two colors is larger. In any case, it seems possible to me that we could train parrots in a way that enables them to extend the criteria of sameness and difference beyond the contexts of training. For example, if we had taught Alex that the term “property” refers to who possesses an object, Alex could have answered “property” if, for the first time in his life, the question “what’s different?” were asked with reference to two objects that have shape, color, and matter in common but differ with regard to the person who possesses them.
Yet I think it is correct to say that when humans think about sameness and difference, they are doing something qualitatively different. In my view, the problem is that Penn et al. do not describe what it means for humans to make “categorical, logical distinctions” between sameness and difference. To understand what categories are for humans, it would be useful to go back to the first essay written on this topic, Aristotle’s Categories. Aristotle defines the categories as the kinds of predicates we can attribute to a subject; he has in mind the typical human activity of talking about something. Thus, extending Aristotle’s definition to capture the meaning the term has today, we can say that categories are the predicates that we attribute to subjects when we talk about them (it is not essential here whether this talking is vocally expressed or internalized). Predicates refer to universal features. The function of predication is to let something be known for what it is, to get to know what universal feature characterizes a subject. This is what Alex does not do: when he answers the “what color” or “what number” questions, he is not describing a reality in terms of universal characteristics; he is giving the learned, convenient response to a specific command in a given situation. Alex associates a vocalization with the color in front of him because he wants to get a reward or because he wants to play. He is not interested in describing realities for what they are, as we humans are. It seems to me that in humans we find basic desires or interests that we do not find in animals; the desire to become aware of reality for what it is seems to be one of them. Sorry, this is a very sketchy thought, but I think these are the kinds of things one should take into account when pursuing these questions. My investigation of these questions is still at a very primitive level.
[One problem would be to show that the human attitude toward knowledge is neither a transitory curiosity nor a masked desire for material well-being.]