Dear students!
Our next speaker for the 26th of September will be Anne Warlaumont, and she will tell us about modeling in communication and language evolution, which is a really interesting topic! You will find the assigned paper (just one, but rather long) on the UMdrive, as usual.
Jeremy will be our presenter this week!
Have fun!
cheers
Uli
I have a few questions for clarity:
1) In the top paragraph of page 40 the authors state "These simulations typically involve agents who are evolving or learning to communicate (rarely both)". It is not clear to me whether this refers to the simulations that involved female preferences or visual discrimination, or to the 24 studies that involved encoder/decoder games. Is the fact that these agents rarely both evolve and learn to communicate a limitation of the simulations, or is it an observed result of the agent interactions?
2) On page 41 the authors state, "The interactions were similar to Levin's above, except that these agents learned with backpropagation instead of evolving". What is the difference between 'backpropagation' and 'evolving' in this computational context? Does 'evolving' here refer to machine learning, or is that what backpropagation refers to? Is a major difference between backpropagation and evolution the presence of a "mutation" variable in evolution? So, with backpropagation agents are static in the sense that the only thing changing about them is their knowledge of how to communicate with each other, whereas with evolution agents change slowly over time due to mate selection and random mutation? Is this correct?
3) On page 43 the authors mention how the initial population variable impacts the rate at which agent populations achieve a communication consensus. I have a question about this initial population variable and how it fits into the paleontology and evolutionary biology literature. How does the initial population of a species vary? It seems that these computational models arbitrarily assign when populations begin to communicate, when realistically these species have been communicating with each other for a significant time already. So, when a large initial population size is selected, does the model take into account the amount of time it took for the population to reach that size? Is this merely a result of the nonsituated feature of the agents?
Also, I may have missed this, but in general do these computational models have any temporal qualities? What natural constraints are being placed on these models, if any? For example, how many years does one iteration of these neural networks or backpropagation updates represent?
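To make question 2 above concrete for myself, here is a minimal sketch of the two adaptation mechanisms as I currently understand them (my own toy construction, not any model from the paper): backpropagation-style learning changes an agent *within* its lifetime by following an error gradient, while evolution leaves each agent fixed for life and only changes the *population* across generations via selection and mutation.

```python
import random

# Toy "agent": a single weight w that should come to map a signal to
# 2 * signal. All numbers here are made up for illustration.
def error(w):
    return (w * 1.0 - 2.0) ** 2        # squared error on signal=1, target=2

# Learning (backpropagation-style): the agent adjusts its own weight by
# gradient descent during its "lifetime"; nothing heritable changes.
w_learned = 0.0
for _ in range(100):
    grad = 2.0 * (w_learned - 2.0)     # derivative of the error w.r.t. w
    w_learned -= 0.1 * grad            # small step downhill

# Evolution (genetic-algorithm-style): agents never change during life;
# better weights simply leave more (slightly mutated) offspring.
random.seed(0)
pop = [random.uniform(-1.0, 1.0) for _ in range(20)]
for _ in range(100):
    pop.sort(key=error)                            # selection: rank by fitness
    pop = [p + random.gauss(0.0, 0.1)              # offspring = parent + mutation
           for p in pop[:10] for _ in range(2)]    # better half reproduces twice
w_evolved = min(pop, key=error)
```

Both routes end up with a weight near 2.0, but by completely different mechanisms, which is how I read the paper's distinction.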
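On question 3 and the temporal question above: as far as I can tell, time in these models is usually just an abstract interaction count, not calendar years. A minimal naming-game-style sketch (again my own toy, not one of the reviewed models) shows the kind of effect I think the authors mean by initial population size changing the rate of consensus: every agent starts with its own "word", hearers adopt speakers' words, and larger populations simply take more interactions to converge.

```python
import random

def rounds_to_consensus(n, seed=0):
    """Count pairwise interactions until all n agents share one word."""
    rng = random.Random(seed)
    words = list(range(n))               # every agent starts with a distinct word
    steps = 0
    while len(set(words)) > 1:
        speaker, hearer = rng.sample(range(n), 2)
        words[hearer] = words[speaker]   # hearer adopts the speaker's word
        steps += 1
    return steps

small = rounds_to_consensus(10)
large = rounds_to_consensus(100)
```

Here one "step" is just one interaction; mapping steps onto real evolutionary time would need assumptions the sketch (and, I suspect, many of the reviewed models) does not make.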
Instead of inferring the process of language evolution by observing animals’ behavior, simulations show how a language may have evolved by controlling variables and making plausible assumptions (e.g., testing Chomsky’s hypothesis). However, sometimes a seemingly impossible assumption has to be made in order to make a simulation work. For example, the authors mention (p. 50) that the absence of cheating between agents has to be established first so that alarm calls can evolve. Although many gaps remained to be addressed at the time the article was published, the simulations have shown the possibility of further improvement.
The computational models show how languages may have evolved or been learned. Contrary to “the tower of Babel”, it is spatial constraints that lead to different dialects and global variation. Besides, simulations indicated that grammatical and phonological classes are useful for agent communication. How speakers of different languages sort all the speech sounds into different categories is very interesting. It would be tiresome if English speakers had to differentiate the [p] in “speech” and the [pʰ] in “peach”, because they belong to the same phoneme /p/. In other words, there are accent and sound differences because non-native speakers of English have different ways of categorizing these “p” sounds. I would like to see how tone and non-tone languages have been evolving, because they are so different!
After reading this article, I feel that computational modeling is like watching The Truman Show. Everything and every person you (i.e., the agent) see is designed, and big people are watching you from the top. Is it possible that someday agents, like Truman, will want to walk out of the exit to see the real world?
Because language leaves no fossil record for us to study, we are almost clueless when we ask how language emerged and diverged, and how it developed and evolved. Despite rapid advances in many areas of science, we still know relatively little about this particularly human trait (Christiansen & Kirby, 2003). There are so many issues to be explored and touched on in Wagner et al.’s paper (e.g., the big question of nature and nurture, self-regulated learning and adaptation, continuity and discontinuity, etc.). These simulation studies can indeed serve as a guide to where the origins of language lie. Because of the nature of simulation work, it is easier than in the real world to manipulate and control any given variable in the process of deriving language. There are still many intricate complexities and nuances unresolved in this field, but this article has given me a pretty good overview of many topics.
What does this shared communication system consist of? The roles of a sender and a receiver seem unarguably necessary, but what about the environment? What if language emerged not because of communication but because of conceptualization (Newmeyer, a linguist)? The question is also prompted by Deacon’s argument that there are no universal substructures of the kind most people assume, since semiotic (symbol) systems can emerge within their own system, and thus each language’s symbol system is shaped by its semiotic constraints. We see that both situated agents and nonsituated agents can develop some sort of primitive communication system, but the former used adaptation and the latter used learning. However, this result seems somewhat opposite to what would happen in reality, doesn’t it?
The Wagner et al. (2003) paper was actually quite an interesting read, especially because I am unfamiliar with some of the work regarding agents. One particular thing that stood out to me was the authors' explanation of the mechanisms underlying the evolution of dialects. In non-situated, unstructured communication, agent simulations have shown that spatial constraints can lead to local dialects and global variation. Consequently, spatial constraints have important implications for communication variation. When agents learn from each other, spatial constraints can lead to consensus, but local dialects will develop and there will be substantial global variation (Livingstone & Fyfe, 1999a, b). Moreover, spatial constraints prevented agents from communicating with others too far away, so local areas developed one dialect while areas farther away could retain a different dialect, both equally efficient. These findings primarily attribute the evolution of dialects to spatial constraints, and it seems plausible that spatial constraints make the greatest impact on the emergence of dialects. However, what other factors may play a role? It seems to me that there are tremendous social and political implications that can be attributed to the emergence of dialects as well.
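The spatial-constraint mechanism can be sketched in a few lines (my own toy construction, not Livingstone & Fyfe's actual simulation): agents sit on a line and copy a variant only from an immediate neighbour, so nearby agents converge on one variant while distant regions can retain a different, equally usable one.

```python
import random

def run_dialects(n=40, steps=500, seed=1):
    """Agents on a line copy variants only from adjacent neighbours."""
    rng = random.Random(seed)
    variant = [rng.choice("AB") for _ in range(n)]   # two arbitrary variants
    for _ in range(steps):
        i = rng.randrange(n)
        j = min(n - 1, max(0, i + rng.choice([-1, 1])))  # a direct neighbour
        variant[i] = variant[j]          # learn only from nearby agents
    return "".join(variant)

dialect_map = run_dialects()
```

With local copying the string tends toward long same-letter runs (local "dialects") rather than jumping straight to global consensus, which is the qualitative effect the comment describes.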
While I'm somewhat familiar with the term "computational modeling," how exactly it is being applied in the studies in this paper is somewhat unclear to me. I know a lot of it has to do with my unfamiliarity with the field and all of the terminology that goes along with it.
As mentioned in this paper (and some of our other previous readings), I understand that since the evolution of communication has no step-by-step "fossil record," the only way for us to simulate the possible ways in which language emerged over time is to enlist computational models. The idea is fascinating, and it is amazing to me that we're even capable of embarking on studies of this nature. However, I'm a little skeptical as to how objectively these analyses are conducted. On page 61, Wagner et al. say that most studies of this sort have shown that the agents involved always end up developing a working communication system. They say that because of this there is a need to simulate situations in which working communication systems do not develop, perhaps in order to ensure that some bias isn't creeping into the modeling.
Another observation Wagner et al. make on page 61, and discuss further on page 62, is that these studies have so far provided little information concerning the origins and evolution of syntax.
So, it seems like this is definitely fertile ground for investigation, but a lot more research still needs to be done to answer questions concerning syntax. Wagner et al. also suggest that future research would be better served if it utilized situated agents rather than non-situated ones. If I'm right, most research has been done using non-situated agents (maybe to make the likelihood of developing a working communication system even more unlikely or difficult?), and so using agents that operate with systems that relate to the world around them might shed more light on the questions at hand.
In this post, I would like to recapitulate an experiment reviewed by Wagner et al. in order to see whether I have understood it or not. I’m not sure I’ve understood the basic principles according to which these experiments work. It seems to me that the review article presupposes a reader who is already familiar with the kind of investigations it discusses; since I lack this familiarity – and this is very unfortunate – I have some problems understanding a few points.
This experiment is from the most complex category, i.e. “situated, structured communication”: Cangelosi (1999), page 53. In this experiment, simulated agents are elements (something like cursors) in a computer program and are programmed to search for other elements (mushrooms). An initial simulation would consist of something like this: 1. agents and mushrooms are randomly positioned in a virtual space, 2. agents search for mushrooms and eat them, 3. at the end we can see which agents have eaten more food. Agents had different “genetic algorithms”, i.e. different ways of emitting outputs on the basis of inputs (for example, to approach detected mushrooms in the appropriate ways). Agents were trained to use their linguistic outputs to name mushroom types. The training technique is backpropagation, which I assume is a way to adjust the agents’ performance toward desired outputs. From the review, I can’t tell whether agents were able to transmit the learned behavior to the next generations, to their offspring. Agents were able to react to the signals emitted by others. Over many generations and simulations, a system of emitting specific signals for specific objects and specific actions evolved. This means that emitting such specific signals was a benefit, i.e. it allowed agents to approach mushrooms more effectively (perhaps because it allowed them to have more mates around who could indicate food). If I am reading the experiment correctly, a question arises for me: what does the actual running of the experiment teach us? It is logical that agents that are programmed to give better outputs will be selected over others. In other words, if we know beforehand that certain signals will carry benefits for the signaler, it’s obvious to expect that those signals will become predominant in later generations.
It seems that we can predict this kind of result simply by knowing the initial conditions, but perhaps simulations allow us to discover factors that are not easy to point out before the simulations are actually run. Perhaps simulations can be useful for obtaining statistically precise expectations; is this correct?
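One way the outcome can be less obvious than the setup suggests: if signalling carries a cost and only pays off when enough partners also signal, the result is frequency-dependent and not simply readable off the parameters. Here is a toy illustration of that point (my own construction, not Cangelosi's model; all parameter values are made up): agents are selected in proportion to fitness, with a small mutation rate, and the same benefit value leads to opposite outcomes depending on the cost.

```python
import random

def evolve(benefit, cost, gens=200, n=100, seed=0):
    """Return the final frequency of signallers in a selection-mutation model."""
    rng = random.Random(seed)
    pop = [rng.random() < 0.5 for _ in range(n)]       # True = signaller
    for _ in range(gens):
        p = sum(pop) / n                               # current signaller frequency
        def fitness(sig):
            # A signaller pays `cost` and gains `benefit` scaled by how
            # many potential partners also signal; others get baseline 1.
            return 1.0 + (benefit * p - cost if sig else 0.0)
        weights = [fitness(s) for s in pop]
        pop = rng.choices(pop, weights=weights, k=n)   # fitness-proportional selection
        pop = [s if rng.random() > 0.01 else not s     # 1% mutation per agent
               for s in pop]
    return sum(pop) / n

high = evolve(benefit=1.0, cost=0.1)   # cheap signalling: spreads to near fixation
low = evolve(benefit=1.0, cost=0.8)    # costly signalling: collapses from 50%
```

So the run does teach something beyond the initial conditions: where the tipping point lies, and how the population behaves near it, which I suspect is part of the answer to my own question above.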
While this week's article was very interesting, my main question concerns how these AI studies were actually done. Specifically, how do you program the computer to simulate language? I think of the Turing test, in which the question is: is the computer actually “thinking” or “learning,” or is all of this a step removed? Specifically, is the machine really evolving or learning, or is it simply executing a program? How do we know that the results show evolution or learning, rather than the program simply being designed to give somewhat predictable results? Is the variation simply built into the programming? Is the evolution programmed? If so, how much of this actually shows what is happening? This is not to imply that these programs do not show learning and evolution; more likely, it shows my lack of understanding of the simulations themselves. I think a lot of this in some ways comes back to a somewhat metaphysical question: do the machines, on some basic level, really think? Are they simply executing a variety of preprogrammed, variable yet predictable, tasks? Also, at what point can we say that these simulations reflect a sort of consciousness? Further, it brings up the question of at what point animals or machines attain a sense of consciousness, as limited as it may be. With that in mind, I loved this week's reading; however, in many ways it produced as many questions as it did answers.