Willem Levelt was born in May 1938 in Amsterdam. Levelt is a psycholinguist whose research has contributed significant findings to the understanding of language acquisition and speech production. His emphasis on the mental lexicon, a mental dictionary containing information about word meaning, pronunciation, and syntactic characteristics, helped him develop a comprehensive theory in the field of cognitive processing. Levelt’s work in psycholinguistics centered on how speech is produced and on the cognitive processes involved in producing fluent, audible messages. His early work, begun in a “pre-Chomsky” era, remained largely free from external theoretical influences.
Levelt describes word production as a staged process, outlined below (a schematic sketch of the stages follows the outline):
Theory in Outline

1. Conceptual Preparation:
- The process leading up to the lexical concept
- Lexical activation captures the speaker's communicative intention
- There is no simple one-to-one mapping of a notion to be expressed onto a lexical concept
- Phonological encoding can feed back as internal speech and activate the corresponding lexical concepts

2. Lexical Selection:
- Retrieving a word (lemma) from the mental word bank, given a lexical concept to be expressed
- The lexical concept spreads activation to its lemma node, and the most highly activated lemma is selected

3. Morphophonological Encoding and Syllabification:
- The crossover from the conceptual/syntactic domain into the phonological/articulatory domain
- The word's phonological shape is retrieved from the mental lexicon

4. Phonetic Encoding:
- A specification of the articulatory task that will produce the word
- Different tiers allow different articulatory gestures to be performed
- The articulatory gestures are summed into a final gestural score, which is sent on to articulatory processing to be realized as a spoken word

5. Articulation:
- The phonological word's gestural score is executed by the articulatory system
- This involves the muscular machinery that controls the lungs, larynx, and vocal tract
- A computational neural system controls the execution of abstract gestural scores through the motor system

6. Self-Monitoring:
- Once speech has been produced, we monitor our overt speech output
- This allows us to discover errors in our output speech
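As an illustration only, the following minimal Python sketch walks a single word through the six stages as a pipeline in which each stage hands one intermediate representation to the next. The tiny lexicon, the concept label, and the self-monitoring check are invented for the example; they are not drawn from Levelt's published model (e.g. WEAVER++), which is far richer.

```python
# Illustrative sketch of Levelt's staged view of word production.
# The toy lexicon and forms below are invented placeholders.

TOY_LEXICON = {"ESCORT(child, school)": "escort"}   # lexical concept -> lemma
TOY_FORMS = {"escort": ["es", "kort"]}              # lemma -> syllabified phonological form

def conceptual_preparation(message: str) -> str:
    """Map a communicative intention onto a lexical concept (many-to-many in reality)."""
    return message

def lexical_selection(concept: str) -> str:
    """Retrieve the most highly activated lemma for the concept."""
    return TOY_LEXICON[concept]

def morphophonological_encoding(lemma: str) -> list[str]:
    """Retrieve the word's phonological shape from the lexicon and syllabify it."""
    return TOY_FORMS[lemma]

def phonetic_encoding(syllables: list[str]) -> list[str]:
    """Specify an abstract articulatory gestural score for each syllable."""
    return [f"gesture:{syllable}" for syllable in syllables]

def articulation(gestural_score: list[str]) -> str:
    """Execute the gestural score; here it is simply joined into 'overt speech'."""
    return " ".join(gestural_score)

def self_monitoring(overt_speech: str, intended: list[str]) -> bool:
    """Compare the overt output with the intended form; a mismatch would trigger repair."""
    return overt_speech == " ".join(f"gesture:{syllable}" for syllable in intended)

if __name__ == "__main__":
    concept = conceptual_preparation("ESCORT(child, school)")
    lemma = lexical_selection(concept)
    syllables = morphophonological_encoding(lemma)
    score = phonetic_encoding(syllables)
    speech = articulation(score)
    print(speech, self_monitoring(speech, syllables))
```

The point of the sketch is only the ordering: each stage consumes the previous stage's output, which is the staging that the modular account discussed below relies on.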
History
Early Theory
The debate over speech production centered on two key positions: the spreading-activation theory and the modular two-step theory (O'Seaghdha & Levelt, 1991). The theories differ with respect to the linguistic information they process (a toy sketch contrasting the two accounts appears after the list below). The modular two-step theory is constrained to individual modules/components that each deal with only one form of information; the spreading-activation theory involves direct processing of multiple sources of information at once. Levelt argued that there are three major modules involved in producing speech (O'Seaghdha & Levelt, 1991):
1) Conceptualizer - where the idea to be expressed is generated and passed on to be put into grammatical form.
2) Formulator - where the processed idea passes through a grammatical encoder, which creates the sentence pattern, and a phonological encoder, which plans the phonetic arrangement of the individual grammatical components. The formulator has access to the lexicon (a mental dictionary containing all known words and their meanings). Levelt characterized two discrete stages of lexical access in speech production. In the first stage, known as lemma access, the concept is lexicalized: an abstract symbol is used to represent the word as a semantic entity. The information is then passed to the second stage, known as phonological access, where the lemma (the lexicalized word) is translated into a phonological form. Because the stages are modular, each able to process only one type of information, lemma access is restricted to semantic information and phonological access is limited to phonological information (O'Seaghdha & Levelt, 1991).
3) Articulator - the final stage, in which the initial idea, having gone through all prior processing, is transformed into audible (overt) speech.
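To make the contrast concrete, here is a minimal, purely illustrative Python sketch of the two architectures: a modular two-step lookup in which each stage consults only one kind of information, and a toy spreading-activation network in which activation flows from a concept node through lemma nodes to phoneme nodes in parallel. All node names, links, and weights are hypothetical and are not taken from the cited papers.

```python
# Toy contrast between modular two-step lexical access and spreading activation.
# Every word, link, and weight below is a hypothetical illustration.

# --- Modular two-step access: each stage sees only one kind of information ---
SEMANTIC_TO_LEMMA = {"domestic feline": "cat"}   # stage 1: lemma access (semantic info only)
LEMMA_TO_PHONOLOGY = {"cat": ["k", "ae", "t"]}   # stage 2: phonological access (form info only)

def two_step_access(concept: str) -> list[str]:
    lemma = SEMANTIC_TO_LEMMA[concept]           # only semantic information is consulted here
    return LEMMA_TO_PHONOLOGY[lemma]             # only phonological information is consulted here

# --- Spreading activation: one network, activation flows across all links at once ---
LINKS = {
    "domestic feline": {"cat": 1.0, "dog": 0.4},  # concept -> lemmas (semantic links)
    "cat": {"k": 0.8, "ae": 0.8, "t": 0.8},       # lemma -> phonemes
    "dog": {"d": 0.8, "o": 0.8, "g": 0.8},
}

def spread(source: str, steps: int = 2) -> dict[str, float]:
    """Propagate activation outward from `source` for a fixed number of steps."""
    activation = {source: 1.0}
    for _ in range(steps):
        updated = dict(activation)
        for node, act in activation.items():
            for target, weight in LINKS.get(node, {}).items():
                updated[target] = updated.get(target, 0.0) + act * weight
        activation = updated
    return activation

if __name__ == "__main__":
    print(two_step_access("domestic feline"))   # ['k', 'ae', 't']
    print(spread("domestic feline"))            # 'dog' and its phonemes also receive some activation
```

In the two-step lookup, nothing semantic can influence the phonological stage once the lemma has been selected; in the spreading-activation network, semantically related competitors and their sounds are partially active at the same time, which is the difference the priming experiments described below were designed to detect.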
Current Research
Current research has had a large impact on the understanding of speech production and acquisition. Recent findings have focused on the neural processes underlying this cognitive processing: how different brain regions and patterns of neural activity shape the path from thinking of what to say to articulating speech. A study by Guenther and Vladusich (2012) analyzed different brain regions and the impact they have on speech. Subjects underwent fMRI to measure activity in particular brain regions during specific cognitive tasks. The findings contributed to the development of the DIVA model (Directions Into Velocities of Articulators), a computational model that provides a quantitative framework for understanding the roles of the various brain regions involved in speech. Additional focus was directed at the speech sound map, which links the motor program for articulating a sound to the sensory representation of speech (Guenther & Vladusich, 2012). Once the speech sound map has been activated, neurons fire in the primary motor cortex, which is involved in motor control. With help from the feed-forward control subsystem, information is sent from the speech sound map to articulatory control units within the cerebellum (Friedenberg & Silverman, 2012). The result is speech produced as sound waves from the vocal cords. The DIVA model has played a key role in generating accurate predictions about speech-related brain activation observed during functional imaging.
Additional research has raised questions about the neural mechanisms involved and their potential localization within the human brain. Work by Pulvermuller (1999) addressed where in the brain the computations underlying this cognitive processing might be located. Within each act of word production there is a “core” process and a “lead-in” process. A core process is a sequential subset of the stages of the target theory, running from conceptual preparation to articulation. A lead-in process is a task-specific initiation of the core processes. Picture naming provides an example: naming a picture involves visual object recognition as a lead-in process, followed by the core process of producing the words that describe the image (Pulvermuller, 1999). In a response paper, Levelt expressed agreement with the search for the localization of lemma-related operations and with the evidence concerning speech articulation. However, Levelt (2000) states that this research overstates the role of the lemma, as lemmas do not have a direct role in binding the word's articulation pattern to its sound image. It cannot be assumed that lemmas provide the link between production and perception of speech. A link can be drawn to one or more morphemes (the smallest meaningful units of a word) through production, not articulation. Levelt believes that articulation is a product of phonological encoding and articulatory motor action.
Recently, attention has turned to the influence of speech planning on word recognition. In one study, Levelt and Schriefers (1991) had participants name pictured objects and then make a lexical decision, by pushing a button, about an auditory probe presented shortly after the onset of the picture. This forced the speaker to pay attention to the lexical status of the spoken words while also preparing to name the object. Probes that were semantically related, phonologically related, or identical to the picture name caused slower responses. Levelt used these findings to argue that speech planning influences both speech output and word recognition.
Additional Findings
Picture Priming

Formal testing was conducted in which subjects named pictures of concrete objects (Levelt, Meyer, & Roelofs, 2004). A secondary lexical decision task was present on fewer than a third of the trials, occurring at a short (73 ms), medium (373 ms), or long (673 ms) delay after exposure to the picture. These word probes either shared relevance with the picture or were completely unrelated. The shared relevance could be semantic (similar meaning), phonological (similar sounding), or mediated (phonologically related to a word whose meaning is similar to the picture name). Assessing priming at the different stimulus onset asynchronies (SOAs) for the related and unrelated conditions allowed Levelt to track the time course of semantic and phonological activation relative to exposure of the picture (Levelt & Schriefers, 1991). Levelt found evidence favoring the modular two-step theory over spreading activation, based on (a) the absence of priming in the mediated condition, and (b) the absence of semantic priming after the long SOA delay. The lack of priming in the mediated condition was a crucial finding, as such priming is predicted by the spreading-activation theory (Levelt & Schriefers, 1991). Levelt argued that if the activation parameters had been adjusted to minimize mediated priming, there would not be enough phonological activation for the phonological access stage to become activated, thus restricting speech production (Levelt & Schriefers, 1991).
Oppositional Views

Oppositional views have countered these findings, suggesting that the spreading-activation model can produce the relevant priming effects, including the lack of mediated priming, without compromising its account of the speech errors that originally motivated it. In addition, the oppositional view holds that spreading activation explains how semantic and phonological activation conspire to produce mixed errors. Related challenges to the two-step theory have come from work on the “tip-of-the-tongue” phenomenon, such as Brown (2004). This condition involves a speaker knowing a particular word and placing it in the appropriate syntactic context, but, upon attempting to articulate it, being unable to produce the word in its phonological form (Jones & Langford, 1987). Spreading-activation theory would not allow this to happen: semantic and phonological processing are simultaneously active, so there should be no blockage while retrieving phonological information. Without the restriction of a modular, sequential process, the information could have been articulated. Jones and Langford (1987) further claimed that the phonological blockage could be induced or aggravated by presenting a word phonologically similar to the target. This would stall processing in the two-step model and lead to a longer time course in articulating the word.
Limitations
Two speech-error phenomena must be considered in addition to the findings above:
Malapropisms are considered “cognitive slips” in which a word is replaced by a phonologically related word. Their occurrence can be seen as a product of activation spreading, through which the related word may be mistakenly selected. Another error to take into account is the ''mixed-error effect'', in which the substituted word is related to the target both semantically and phonologically. In contrast to the modular two-step theory, in which only one form of information can be processed at a time, this effect has been taken to support an interactive position, in which both forms of information are handled together (Levelt, Roelofs, & Meyer, 1999).
Another limitation in the field of psycholinguistic experimentation is known as the ''Lexical Bias Effect''. This issue arose from testing done by Humphreys, Riddoch, and Quinlan (1988). Their cascade model assumed that the spreading of activation was a forward-only process, i.e. a ''feed-forward network'' (Friedenberg & Silverman, 2012). Dell (1988) challenged this proposal, arguing that activation can spread both forward and backward, as quantified in his model of speech production. Certain blockages, such as the tip-of-the-tongue phenomenon, can put a pause on the flow of information (Schwartz & Metcalfe, 2011). The process must then revert and complete its original phonological stage before it can flow forward to articulation. Dell also raised this issue because phonological speech errors produce real words more often than would be expected by chance. Experimentally, a sound exchange that yields a pair of real words (e.g. “darn bore” slipping to “barn door”) is about three times more likely than one that yields a pair of non-words (e.g. “deal back” slipping to “beal dack”).
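The role of feedback can be illustrated with a small, hypothetical Python sketch (it is not Dell's actual model, and the lexicon, weights, and single feedback pass are invented). With forward-only spreading, a real-word slip outcome and a non-word slip outcome receive the same support; adding feedback from segments back to word nodes boosts only outcomes that correspond to real words, which is the lexical bias.

```python
# Hypothetical illustration of the lexical bias effect under forward-only versus
# interactive (feedback) spreading. Not Dell's actual model; all values are invented.

LEXICON = {
    "barn": ["b", "ar", "n"],
    "darn": ["d", "ar", "n"],
    "door": ["d", "oo", "r"],
    "bore": ["b", "oo", "r"],
    "deal": ["d", "ea", "l"],
    "back": ["b", "a", "k"],
}

def slip_support(candidate_segments: list[str], feedback: bool) -> float:
    """Activation supporting a candidate (onset-exchanged) slip outcome.

    Forward spreading gives every candidate the same baseline. With feedback,
    active segments re-activate a matching word node, which then supports those
    segments again; a non-word has no word node, so it receives no boost.
    """
    support = 1.0  # baseline forward activation of the candidate's segments
    if feedback and any(segments == candidate_segments for segments in LEXICON.values()):
        support += 0.5  # extra support from the word node's feedback loop
    return support

if __name__ == "__main__":
    barn = ["b", "ar", "n"]   # first word of the real-word slip outcome "barn door"
    beal = ["b", "ea", "l"]   # first word of the non-word slip outcome "beal dack"
    for fb in (False, True):
        print(f"feedback={fb}: barn={slip_support(barn, fb)}, beal={slip_support(beal, fb)}")
```

With feedback switched on, the real-word candidate ends up better supported, so slips that happen to form real words become more likely, in line with the roughly three-to-one ratio described above.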
Future Research
Eye Movements
Psycholinguists are investigating many different aspects of speech acquisition and articulation. One is the tracking of eye movements during the recognition of objects or words: researchers have been studying how eye movements during the production of nouns and pronouns relate to speech planning and production. In a study by van der Meulen, Meyer, and Levelt (2001), participants produced pronouns and nouns to refer to new objects and to objects already known. Participants looked less frequently, and for a shorter duration, at objects to be named when they had recently seen or heard of these objects than at objects never seen before. Levelt and his colleagues reasoned that if a relationship exists between eye gaze and visual attention, two conclusions follow. First, participants pay less visual attention to given objects than to new ones. Second, participants attend visually less often, and for a shorter duration, to objects referred to by pronouns than to objects named with a full noun phrase. In conclusion, the experiments suggest that cognitive processing benefits from the allocation of visual attention to the recognized object (van der Meulen, Meyer, & Levelt, 2001).
Word Boundaries
Another point of interest for future linguistic research is the study of word boundaries in the production and perception of casual speech. Acoustic cues can be unclear, causing increased reliance on contextual information to resolve the ambiguity. In other words, when a person hears a sound to which he or she cannot attach a meaning, the surrounding environment and the context of the sound must be used to assign one. In a study by Kim, Stephens, and Pitt (2012), participants produced an ambiguous sequence in two types of sentence frame, one carrying a neutral context and the other a biasing context. Contextual bias was analyzed across two cue versions. If speakers are aware of how clearly they are speaking, they should produce stronger cues to the intended segmentation within a neutral context (Kim, Stephens, & Pitt, 2012).
References
Brown, J. C. 2004. Eliminating the segmental tier: Evidence from speech errors. Journal of Psycholinguistic Research 33, (2): 97-101, https://www.lib.uwo.ca/cgi-bin/ezpauthn.cgi/docview/231976698?accountid=15115 (accessed March 21, 2013).
van der Meulen, Femke F., Antje S. Meyer, and Willem J. M. Levelt. 2001. Eye movements during the production of nouns and pronouns. Memory & Cognition (pre-2011) 29, (3): 512-21, https://www.lib.uwo.ca/cgi-bin/ezpauthn.cgi/docview/217451630?accountid=15115 (accessed March 21, 2013).
Friedenberg, J., & Silverman, G. (2012). Cognitive science: An introduction to the study of the mind (2nd ed., Vol. 1, p. 198). California: SAGE Publications.
Schwartz, Bennett L., and Janet Metcalfe. 2011. Tip-of-the-tongue (TOT) states: Retrieval, behavior, and experience. Memory & Cognition 39, (5): 737-749, https://www.lib.uwo.ca/cgi-bin/ezpauthn.cgi/docview/920257093?accountid=15115 (accessed March 19, 2013).
Guenther, Frank H., and Tony Vladusich. 2012. A neural theory of speech acquisition and production. Journal of Neurolinguistics 25, (5): 408-422, https://www.lib.uwo.ca/cgi-bin/ezpauthn.cgi/docview/1020269092?accountid=15115 (accessed March 20, 2013).
Humphreys, G. W., Riddoch, M. J., & Quinlan, P. T. (1988). Cognitive Neuropsychology.
O'Seaghdha, Padraig G., Willem J. M. Levelt, et al. 1991. Mediated and convergent lexical priming in language production: Comment/Reply. Psychological Review 98, (4): 604-604, https://www.lib.uwo.ca/cgi-bin/ezpauthn.cgi/docview/214221129?accountid=15115 (accessed March 19, 2013).
Jones, H. G. V., & Langford, S. (1987). Phonological blocking in the tip of the tongue state. Cognition, 26, 115-122.
Levelt, Willem J. M., Antje S. Meyer, and Ardi Roelofs. 2004. Relations of lexical access to neural implementation and syntactic encoding. Behavioral and Brain Sciences 27, (2): 299-301, https://www.lib.uwo.ca/cgi-bin/ezpauthn.cgi/docview/212205295?accountid=15115 (accessed March 19, 2013).
Levelt, Willem J. M., Ardi Roelofs, and Antje S. Meyer. 1999. A theory of lexical access in speech production. Behavioral and Brain Sciences 22, (1): 1-38; discussion 38-75, https://www.lib.uwo.ca/cgi-bin/ezpauthn.cgi/docview/212227145?accountid=15115 (accessed March 19, 2013).
Levelt, Willem J. M., Herbert Schriefers, et al. 1991. The time course of lexical access in speech production: A study of picture naming. Psychological Review 98, (1): 122-122, https://www.lib.uwo.ca/cgi-bin/ezpauthn.cgi/docview/214223563?accountid=15115 (accessed March 19, 2013).
Roelofs, Ardi, Rebecca Özdemir, and Willem J. M. Levelt. 2007. Influences of spoken word planning on speech recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition 33, (5): 900-913, https://www.lib.uwo.ca/cgi-bin/ezpauthn.cgi/docview/614464909?accountid=15115 (accessed March 21, 2013).
Images (Wikimedia Commons):
http://upload.wikimedia.org/wikipedia/commons/7/78/Spreading_Activation_Model_Mental_Lexicon.png
http://commons.wikimedia.org/wiki/File:Brain_regions_of_maps_of_the_ACT_model.jpg
http://commons.wikimedia.org/wiki/File:DivaBlock2.jpg
http://commons.wikimedia.org/wiki/File:Speech1.jpg