How the Human Brain Allows Us to Use Language: The Cognitive Structure and Mechanism to Understand Language

Keywords: neurolinguistics, cognitive science, psychology, cognition, cognitive neuroscience, meaning, linguistics, psycholinguistics, computer science, neuroscience, psychotherapist, philosophy
Author: Mingkuo Shao
Source: American Journal of Psychology (University of Illinois Press), 136(2): 209–214
DOI: 10.5406/19398298.136.2.09

The book Language and the Brain: A Slim Guide to Neurolinguistics focuses on what it is about the human brain that makes it possible to use language. The pursuit of this question spans several disciplines, including language science and neuroscience. Language scientists are interested in uncovering the mental computations and representations that make language possible, while neuroscientists focus on how brains are wired to learn and use information. But scientists do not yet have a comprehensive answer to how this all works. The book discusses how the brain transforms waves of sound pressure into meaningful words, how meaning is represented by networks of neurons, and how brain regions work together to process new words and sentences.

The author of this book, Jonathan R. Brennan, is an associate professor of linguistics and psychology at the University of Michigan. He is the director of the Computational Neurolinguistics Laboratory, which uses theories and models from formal linguistics, cognitive neuroscience, and computational linguistics to study the mental structures and computations used to understand words and sentences. He received the 2019 Early Career Award from the Society for the Neurobiology of Language.

Chapter 1, "Introduction," lays the foundation for subsequent chapters. It introduces the brain system at three different levels and the linking hypotheses between those levels, and it briefly reviews the history of studying "language in the brain." The author distinguishes three levels: the computational goals of a system, the algorithmic steps needed to meet those goals, and the implementation in a physical system that carries out those steps. David Marr (1982, p. 28) put it this way: "Finding algorithms by which [a computational theory] may be implemented is a completely different endeavor from formulating the theory itself. In our terms, it is a study at a different level, and both tasks have to be done." The linking hypotheses capture how possible answers at each of these levels connect. Efforts to specify these links date back about 150 years, starting with research on aphasia, or language disorders caused by brain damage. Patients with nonfluent aphasia have difficulty producing fluent speech; this disorder is associated with damage to the brain's left frontal lobe. Patients with fluent aphasia usually have difficulty understanding and producing sensible language, which is associated with damage to the left temporal lobe. Early aphasia research led to the classical model of language in the left frontal and temporal lobes, which still influences modern theories.

Chapter 2, "Tool Box," introduces the structure of the brain and the tools used to investigate its function. Before introducing the tools, the author leads us through the anatomical geography of the brain, including the central nervous system, the major lobes and cortex, neuron-related structures, and the organization of the cortex. The techniques form the core of the chapter, which surveys the prevailing methods: functional magnetic resonance imaging (fMRI), functional near-infrared spectroscopy (fNIRS), positron emission tomography (PET), electroencephalography (EEG), magnetoencephalography (MEG), electrocorticography (ECoG), transcranial magnetic stimulation (TMS), and direct cortical stimulation. Each is briefly introduced in terms of its operating principle and its spatial and temporal resolution, and their applicability and feasibility are compared in terms of cost and ecological validity. Each technique has both pros and cons for making sense of brain structure and function. For example, fNIRS is quieter and more comfortable than fMRI, making it appropriate for studies of children (Rossi, Telkemeyer, Wartenburger, & Obrig, 2012). Being familiar with these trade-offs and choosing suitable techniques is the most important lesson from this chapter: it is these features that connect particular methods to specific research questions. But to have a good command of the techniques, we need to put them into practice.
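To make these trade-offs concrete, the comparison might be organized roughly as follows. The sketch below is illustrative only: the resolution and cost characterizations are approximate, textbook-level values supplied here, not figures taken from the book.

from dataclasses import dataclass

@dataclass
class Technique:
    name: str
    spatial: str     # approximate spatial resolution
    temporal: str    # approximate temporal resolution
    notes: str

# Rough, illustrative characterizations (not values from the book)
TOOLBOX = [
    Technique("fMRI",  "millimeters", "seconds",      "high cost; loud scanner"),
    Technique("fNIRS", "centimeters", "seconds",      "quiet and comfortable; suits studies of children"),
    Technique("PET",   "millimeters", "minutes",      "requires a radioactive tracer"),
    Technique("EEG",   "centimeters", "milliseconds", "inexpensive; recorded at the scalp"),
    Technique("MEG",   "centimeters", "milliseconds", "costly; needs a magnetically shielded room"),
    Technique("ECoG",  "millimeters", "milliseconds", "invasive; limited to clinical patients"),
    Technique("TMS",   "centimeters", "milliseconds", "stimulation rather than recording; supports causal inference"),
]

def with_temporal_resolution(level: str) -> list[str]:
    """Pick techniques whose temporal resolution matches a research question."""
    return [t.name for t in TOOLBOX if t.temporal == level]

# Tracking word-by-word processing requires millisecond timing:
print(with_temporal_resolution("milliseconds"))  # ['EEG', 'MEG', 'ECoG', 'TMS']

Organizing the methods this way also makes the chapter's lesson explicit: the research question, not the technology, should drive the choice of method.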
Chapters 3 and 4 explore how sound travels through the air and becomes the brain activity associated with decoding words.

Chapter 3, "Sounds in the Brain," describes this process as three separate transformations, each taking one step from sound to meaning. The journey starts with sound waves moving hair cells in the cochlea and continues all the way to phonological analysis. Acoustic information is represented with a spatial code both in the cochlea and in the primary auditory cortex. Tonotopy is the term for this spatial code for frequency information: Neurons that are adjacent to each other respond to sounds with similar frequencies. The brain may also represent temporal acoustic information in a similar way, called periodotopy. In other words, different neurons encode different acoustic features of speech, whether spectral (tonotopy) or temporal (periodotopy). To create a phonological sketch, the brain maps from these continuous neural representations of sound, or neurograms, to categorical linguistic units such as phonemes within about 100–150 ms, in the superior temporal gyrus surrounding the auditory cortex. To investigate whether the neural code for phonemes, like the neurogram itself, is built on a spatial code, Scharinger, Idsardi, and Poe (2011) tested this proposition with MEG and found that the source locations appear to fall along systematic axes: Front vowels fall along an inferior-to-superior plane, and back vowels fall along an anterior-to-posterior plane. The mapping from neurograms to phonemes, that is, from continuous neural representations of sound to categorical linguistic units, is implemented by integrating acoustic information over two temporal windows: a shorter window for fine-grained spectral detail and a longer window that captures changes across time, such as the speech envelope. The details of how these windows are integrated are described in Chapter 4. It is worth noting that the auditory cortex shows an asymmetrical response in this process: Some studies find that the left hemisphere responds more strongly to the shorter "phonemic-feature-sized" oscillations, whereas the right hemisphere responds more strongly to the longer "syllable-sized" oscillations. The phonological sketch is finally refined by a series of feedback loops between acoustic input and linguistic knowledge, a process called analysis by synthesis. The representation produced by these interlocking processes is suitable for recognizing words. Chapter 3 thus completes the conversion from acoustic input to a brain representation of sound and then to phonemes; the neural representation of phonemes is described next.

Chapter 4, "A Neural Code for Speech," focuses on the neural representation of phonemes. This chapter revisits the three transformations and describes some key facts in detail (Table 1). The first conversion, from acoustic information to a neural representation of sound (the neurogram), is carried out within about 100 ms, and it occurs in the primary auditory cortex. Different neurons respond separately to spectral or temporal information, which is represented with a spatial code.
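As a purely illustrative analogy for such a spatial code, the toy sketch below (with invented numbers, not a model from the book) arranges simulated "neurons" along an axis by preferred frequency and shows that a pure tone most strongly activates the neurons whose preferences are nearest to it.

import numpy as np

n_neurons = 50
# Preferred frequencies spaced logarithmically from 100 Hz to 8 kHz,
# roughly like the low-to-high frequency gradient along the cochlea and A1.
preferred_hz = np.logspace(np.log10(100), np.log10(8000), n_neurons)

def population_response(tone_hz: float, tuning_octaves: float = 0.5) -> np.ndarray:
    """Gaussian tuning on a log-frequency axis: each neuron's response to a tone."""
    distance = np.log2(preferred_hz / tone_hz)          # distance in octaves
    return np.exp(-(distance ** 2) / (2 * tuning_octaves ** 2))

for tone in (200.0, 1000.0, 4000.0):
    response = population_response(tone)
    peak = int(np.argmax(response))
    print(f"{tone:6.0f} Hz tone -> peak at neuron {peak} "
          f"(preferred {preferred_hz[peak]:.0f} Hz)")

Nearby tones peak at nearby positions along the axis, which is the essence of a tonotopic map.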
The second conversion is from the neurogram to the phonological sketch. The neural representations encode both the fine spectral structure that distinguishes different phonemes and the temporal envelope information that encodes syllabic structure, with different neurons responding to distinct spectral and temporal information. Neurograms are used to activate neural representations of phonological properties: Populations of neurons adjoining the auditory cortex respond within 100–200 ms after speech is heard, and those responses show categorical effects for different features. At this stage, the mapping appears to occur over at least two separate temporal windows: a shorter window, about 25 ms, tuned to phonemic features, and a longer window, about 200 ms, tuned to syllabic features. Finally, this mapping also involves a feedback loop: Linguistic knowledge helps us synthesize, or predict, upcoming speech input, which guides and refines the analysis of that input. Although this process is assisted by specialized linguistic knowledge, it seems to rely on the same basic neural machinery used for other kinds of auditory input. The neural representation of speech provides clues to the mental representation of phonological information in the mental lexicon. The neural evidence points, preliminarily, toward a priority for acoustic features, and some data indicate that speech perception is built on general auditory processing systems that are highly attuned to the specific properties of speech.

Chapter 5, "Activating Words," deals with the neural foundation of recognizing words; this chapter turns to the process of recognizing morphemes. Words, or lexical items, are composed of phonological, semantic, and structural features. Word recognition requires mapping a phonological representation onto a mental representation of meaning. This mapping occurs rapidly, as phonological representations in the left superior temporal gyrus activate lexical items in the left posterior middle temporal gyrus. MEG evidence sketches a highly rapid sequence of processing stages (Table 2), from acoustic analysis (50–100 ms), to phonological processing in the superior temporal gyrus (100–150 ms), to lexical access in the posterior middle temporal gyrus (after 250 ms, or about a quarter of a second). Current work is trying to identify the lexical units that form the basis for recognition. Two theories bear on word recognition: the full decomposition theory and the partial decomposition theory. The full decomposition theory asserts that lexical items are recognized only after first being broken down into morphemes; that is, input words must be decomposed into minimal morphemes before they can be accessed. Thus, morphological structure should affect early stages of processing, within the first 250 ms, before lexical access begins. In contrast, the partial decomposition theory holds that words can be accessed as whole units, so morphological structure should not affect the early stages of word recognition. Results from different methods conflict, and several strategies are available for reconciling these findings.
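To see what is at stake between these two theories, the toy sketch below contrasts a decomposition route with whole-word lookup over a tiny invented lexicon; the word lists and the greedy segmentation rule are hypothetical illustrations, not analyses from the book.

from typing import Optional

# Hypothetical mini-lexicons, for illustration only
WHOLE_WORDS = {"unhappiness", "cats", "walked"}   # stored full forms
PREFIXES = {"un"}
SUFFIXES = {"ness", "ed", "s"}
STEMS = {"happy", "happi", "cat", "walk"}         # "happi" = stem allomorph before -ness

def decompose(word: str) -> Optional[list]:
    """Full-decomposition route: greedily strip a prefix and a suffix, then look up the stem."""
    morphemes = []
    for prefix in PREFIXES:
        if word.startswith(prefix):
            morphemes.append(prefix)
            word = word[len(prefix):]
            break
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(suffix) and word[:-len(suffix)] in STEMS:
            return morphemes + [word[:-len(suffix)], suffix]
    return morphemes + [word] if word in STEMS else None

def whole_word_access(word: str) -> bool:
    """Whole-unit route: simply check whether the full form is stored."""
    return word in WHOLE_WORDS

for w in ("unhappiness", "cats"):
    print(w, "->", decompose(w), "| stored as a whole word:", whole_word_access(w))

On the full decomposition view, something like the segmentation step must finish before lexical access begins, which is why morphological structure is predicted to influence brain responses within the first 250 ms.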
Chapter 6, "Representing Meaning," focuses on how the brain represents word meaning. When the brain carries out semantic processing, a distributed network including the temporal, frontal, and parietal lobes is activated, and scientists use the most advanced methods to map this network of semantic information. There are two main arguments in this chapter: The first concerns the existence of a semantic hub, and the second concerns embodiment. It appears that there is a semantic hub in the brain, located in the anterior temporal lobes (Patterson, Nestor, & Rogers, 2007). Two theories address the architecture of the semantic processing network. One is the distributed-only theory, which holds that concepts emerge from the distributed network alone. The other is the distributed-plus-hub theory, which hypothesizes that the distributed aspects of conceptual meaning must be bound together in some way to form a concept. Semantic dementia offers key evidence for the existence of the hub. Semantic dementia is a disorder that affects conceptual memory with increasing severity, and it affects conceptual knowledge in a general way: Patients may gradually lose the ability to recognize the features that distinguish concepts from one another, and many different conceptual categories are affected. However, the damage does not lie in the distributed network; rather, it affects neurons in a focused location, the anterior temporal lobes of both hemispheres. The second argument concerns embodiment. Even if we know that semantic representations are distributed across the brain, it is still unknown whether these representations are embodied in sensory and action systems, as the embodied concepts hypothesis claims. A TMS study indicated a possible causal connection between motor cortex and action semantics (Pulvermüller, Hauk, Nikulin, & Ilmoniemi, 2005). But it also seems that there is no necessary relationship between the ability to perform actions and the capacity to understand action-related semantics, and it is this dissociation that supports the grounded symbolic concepts hypothesis.

Chapter 7, "Structure and Prediction," describes how the brain makes sense of sentences, focusing mainly on syntactic processing and especially on prediction. To understand a sentence, we must identify the structural relationships and the dependencies between words. Meaning is a compositional function of words, and the brain needs to identify how they are structurally put together. The brain is an efficient organ, and it constantly makes and checks predictions about what comes next in a sentence. Two violation responses are commonly observed in event-related potential (ERP) experiments: the N400 (semantic mismatch) and the P600 (syntactic mismatch). When a listener encounters an unexpected word, the N400 ERP component is observed over central posterior areas; it is a negative voltage deflection occurring about 300–500 ms after the expectation is violated and is related to additional lexical activation. When confronted with unexpected syntactic structure, the P600 ERP component is observed over posterior areas of the scalp at about 500–800 ms and is associated with syntactic reanalysis. Linguistic prediction is an important part of syntactic processing, and it draws on many kinds of cues: Both information in the discourse and the broader social context help to shape predictions at multiple levels of linguistic representation, including phonemes, words, and sentences. Thus, when we carry out research on sentence understanding, we usually observe word processing in some context. In addition to the predictions that have been made, we need to consider how the brain deals with new input.
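The logic behind measuring components such as the N400 and P600 can be illustrated with a small simulation. The sketch below generates invented, noisy single-trial "EEG" epochs, averages them, and compares the mean voltage in the 300–500 ms window across two conditions; the data and effect sizes are fabricated purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
fs = 250                                     # samples per second
times = np.arange(0, 1.0, 1 / fs)            # 0-1000 ms after word onset

def simulate_epochs(n_trials: int, effect_ms, amplitude_uv: float) -> np.ndarray:
    """Noise epochs plus a Gaussian bump centered in the given window (toy data)."""
    center = np.mean(effect_ms) / 1000.0
    bump = amplitude_uv * np.exp(-((times - center) ** 2) / (2 * 0.05 ** 2))
    return rng.normal(0.0, 5.0, size=(n_trials, times.size)) + bump

def mean_amplitude(epochs: np.ndarray, window_ms) -> float:
    """Average over trials, then over the samples inside the window."""
    lo, hi = (w / 1000.0 for w in window_ms)
    mask = (times >= lo) & (times <= hi)
    return float(epochs.mean(axis=0)[mask].mean())

expected   = simulate_epochs(100, (300, 500), 0.0)    # no N400-like bump
unexpected = simulate_epochs(100, (300, 500), -4.0)   # negative-going N400-like bump

n400_effect = mean_amplitude(unexpected, (300, 500)) - mean_amplitude(expected, (300, 500))
print(f"N400 effect (unexpected - expected): {n400_effect:.2f} microvolts")

The same averaging-and-window logic, applied to the 500–800 ms window, would quantify a P600-like effect.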
Chapter 8, "Composing Sentences," unveils the network of sentence processing. More specifically, the chapter shows how the brain identifies the constituents of a sentence during comprehension, that is, how it builds sentence structure. The left hemisphere is regarded as the main locus of language processing, and sentence processing appears to activate the temporal and frontal lobes of the left hemisphere. Several different approaches show that sentence processing involves a network spanning the left anterior and posterior temporal lobes (LATL, LPTL) and the left inferior frontal gyrus (LIFG). MEG studies reveal increased activation in the LATL within just 200–300 ms after a word is encountered, even for simple phrases; the LATL seems to be sensitive to constituency and to the conceptual specificity of a phrase. The LPTL, in turn, is engaged in processing constituency and argument structure; although there is debate about its specific function, one theory connects part of this region with building syntactic structures predictively. But a question remains: What role does the LIFG of the frontal lobe play in sentence understanding? That question has vexed neurolinguistics for a long time.

Chapter 9, "Building Dependencies," focuses on how the brain decodes the dependencies between words, together with the constituents of sentence structure. To understand what a sentence means, its "hidden" structure must be uncovered. Evidence from aphasia indicates that the LIFG is engaged in sentence processing, especially for complex sentences, including those that involve long-distance dependencies. There are many theories about the function of the LIFG in sentence processing: Some claim that it is important for domain-specific linguistic representations, whereas others claim that it plays a domain-general role in maintaining items in working memory. No consensus has been reached, but evidence from fMRI and from primary progressive aphasia appears to support the latter hypothesis: At least some parts of the LIFG play a more domain-general role. This remains a very active research area. The key issue is whether different subparts of the LIFG perform different functions. To address the question, subareas such as the pars triangularis and pars opercularis must be teased apart. It is not yet clear whether even smaller parts must be identified, or whether these subparts are at the right "level" of brain organization to be associated with specific language functions. Indeed, careful fMRI research using the "simple composition" protocol has isolated just one part of the pars opercularis (Zaccarella & Friederici, 2015). If, as the previous chapter found, the LIFG is engaged in domain-general working memory, could some other parts be engaged in domain-specific phrase structure representations? Another active area of research concerns how the LIFG interacts with other parts of the network to implement domain-specific processing. One enthralling hypothesis, developed by Friederici, Chomsky, Berwick, Moro, and Bolhuis (2017), suggests that the unique compositionality of language arises from increased structural connectivity between posterior temporal regions and these inferior frontal regions.
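The working-memory account of long-distance dependencies can be illustrated with a toy sketch: a displaced phrase (a "filler") must be held in memory across intervening words until its gap is reached. The example sentence, the tags, and the stack-like bookkeeping below are invented for illustration and are not a model proposed in the book.

def track_filler_gap(tagged_words):
    """tagged_words: list of (word, tag) pairs; tag is 'filler', 'gap', or ''."""
    memory = []                              # items held across intervening words
    load_profile = []                        # memory load at each word
    for word, tag in tagged_words:
        if tag == "filler":
            memory.append(word)              # store the displaced phrase
        elif tag == "gap" and memory:
            memory.pop()                     # retrieve it at the gap site
        load_profile.append((word, len(memory)))
    return load_profile

# "Which book did the author write __?" -- the wh-phrase is held until the gap.
sentence = [("which", "filler"), ("book", ""), ("did", ""), ("the", ""),
            ("author", ""), ("write", ""), ("__", "gap")]
for word, load in track_filler_gap(sentence):
    print(f"{word:>6}  memory load = {load}")

On the domain-general view described above, the resource that holds the filler across the intervening words would be the same kind of working memory the LIFG contributes to non-linguistic tasks.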
Chapter 10, "Wrapping Up," reviews how well the book achieves the goals set out at the beginning, places current neurolinguistic research in a broader academic perspective, and finally discusses where the field may be heading next. The book introduces the tools of neurolinguistics and then describes the linking hypotheses that connect brain signals to linguistic representations. It also reviews state-of-the-art results that have emerged from this research and provides a foundation for reading the literature. Last but not least, the book points readers toward resources they can use to engage in research themselves. Overall, the book largely achieves the author's goals: It presents the procedures of language processing step by step, mainly from the standpoint of temporal processes and neural representations, thereby offering a slim guide to neurolinguistics.

This book has several strengths. First, its arrangement is reasonable and systematic. The geography of the brain and the toolbox presented at the beginning serve as foundations for the experiments discussed later, and the content follows the sequence of processing as it unfolds across multiple levels, taking readers from the shallower to the deeper. Another merit of the book is its clear structure, reflected in the title and summary of each chapter. Each chapter title hits the key points directly, and the chapters are closely related: There are always questions leading into the next chapter and a summary at the end of each chapter, and each summary lists the main points clearly. Readers can therefore easily connect the information and form their own knowledge networks. Finally, the book is suitable for beginners. The content is presented in clear, simple language, and technical terms and principles are explained at a basic level (although some jargon cannot be avoided). The examples offered are classic, and there is not much discussion of competing viewpoints; the author presents his conclusions directly. The figures are accompanied by clear descriptions of their parts and how they work, offering beginners a simple and clear visual aid.

Despite these strengths, some limitations should also be considered. First, the book is notable for its simplicity, but some content could be fleshed out, such as procedures for conducting empirical neurolinguistic research or guidelines for using the techniques, which would give beginners direction for conducting neurolinguistic studies of their own. Second, although the chapter titles are precise and the content unfolds in layers, it would be better if subheadings were added to make it easier for readers to find the content they want. Third, the book focuses mainly on how the brain processes language, paying particular attention to perception; however, production is also a significant part of language processing and should be considered in the next edition.

All in all, this book provides a preliminary understanding of neurolinguistics, but there is still much room to develop the topic. The latest research findings and intriguing detailed work could also be covered in future editions.
