
Multimodal Discourse Analysis—A Corpus-based Study


Journal of Literature and Art Studies, 2021, No. 10

GUO Yi-fan

According to multimodal discourse analysis (MDA), different kinds of semiotic modes work together to construct meaning. Previous studies on MDA mostly involved individual case analyses. Nowadays, scholars tend to establish multimodal corpora in order to provide a fuller and more comprehensive study of multimodal discourses. In this paper, the author reviews major studies and theses on multimodal corpora and summarizes the development of multimodal corpora.

Keywords: multimodal discourse analysis, multimodal corpora

Introduction

1.1 Multimodal Discourse Analysis

With the rapid development of society and technology, the means of communication have changed dramatically. Apart from language, music, images, photographs and videos have entered people's daily life and play an important role in conveying messages and information. Traditional discourse analysis, which takes language as its sole research object, gradually shows its limitations in explaining the meaning of a whole discourse. Under these circumstances, multimodal discourse analysis came into being and became a hot topic in the field of linguistics.

Many scholars have given their own definitions of multimodal discourse analysis. Jewitt, Kress and van Leeuwen were among the first to propose the term "multimodality". Kress (2010) states that "using three modes in our sign: writing, image and colour as well, has real benefits. Each mode does a special thing: image shows what takes too long to read, and writing names what would be difficult to show, colour is used to highlight specific aspects of the overall message." Iedema (2003) states that "the term multimodality, as used here, is a technical one aiming to highlight that the meaning work we do at all times exploits various semiotics" (p. 39). Jewitt (2002) explains "multimodality" by observing that "in the move from page to screen a range of representational modes (including image, movement, gesture and voice) are available as meaning-making resources" (p. 171). Thibault (2006) expounds that "multimodality refers to the diverse ways in which a number of distinct semiotic resource systems are both codeployed and co-contextualized in the making of a text-specific meaning" (p. 21).

From these definitions, multimodal discourse analysis can be seen as a method that takes into account multiple modes of communication and how they interact with one another.

1.2 Multimodal Corpus Linguistics

Multimodal corpora are data collections involving two or more different semiotic modes, such as video, gesture, music and so on. Through the use of multimodal corpora, researchers can explore the relationships between different modes and how these modes work together to convey certain meanings. Accordingly, a multimodal corpus can be defined as "an annotated collection of coordinated content on communication channels including speech, gaze, hand gesture and body language, and is generally based on recorded human behaviour" (Foster & Oberlander, 2007, pp. 307-308).

Multimodal corpus analysis aims to verify hypotheses about meaning generation in the field of multimodal discourse research. Bateman (2014), a representative of this school, believes that in the early stage of multimodal discourse analysis the study of individual cases was of great importance, but as the theory matures it becomes necessary for researchers to explore its scope and general principles through the quantitative study of corpora. At present, the most commonly used tools for building multimodal corpora are Anvil, ELAN and the like. However, because the annotation of multimodal discourse is time-consuming and laborious, studies based on multimodal corpora are still few.

Literature Review of Multimodal Corpora

2.1 Studies Abroad

New multimodal communication environments are changing the traditional notion of human interaction, and recent trends in linguistics now treat non-linguistic elements as indispensable to the representation and transfer of meaning in discourse. Therefore, in order to better study and analyze real human interactions, it is necessary to construct multimodal corpora.

Baldry and Thibault are two leading scholars in the development of multimodal corpora. They (2006) hold that "a multimodal perspective in corpus linguistics entails a much wider and more complex vision of meaning-making than the language-only perspective" (p. 12). However, research on multimodal corpora in their period was mostly theoretical. Baldry (2006) carried out research on film text analysis and developed a new analytical tool, the Multimodal Corpus Authoring System (MCA). His studies of film text analysis by means of MCA form a new way of studying multimodal discourses. In this way, Baldry connects computer technology with multimodal discourse analysis and proposes the concept of multimodal corpora. Considering that existing concordances cannot answer the question of how different semiotic modes are used together to construct meaning, Baldry and Thibault (2006) also propose new concordances suitable for multimodal discourses.

In the last decade, multimodal corpora have been applied in many fields abroad. Carletta (2007) constructs a multimodal corpus of meeting recordings (the AMI Meeting Corpus) in order to develop remote meeting systems and improve meeting productivity. The AMI Meeting Corpus contains 100 hours of recordings of design team meetings from three different meeting rooms. It also contains manual orthographic transcriptions and annotations covering both linguistic and non-linguistic features.

Kipp (2007) studies human gestures through a multimodal corpus. Although various annotation schemes for analysing gestures exist, most of them concentrate on only one aspect, such as gesture generation for animation systems or purely manual annotation, which makes the resulting information either too informal or too imprecise. Therefore, Kipp "makes a conscious compromise between purely descriptive, high-resolution approaches and abstract interpretative approaches" (Kipp, 2007, p. 327). The scheme uses an animated character to imitate the generation of a speaker's gestures, and it can be used to encode hand shape, separate hand gestures as well as dynamic gestures.

Mostefa et al. (2007) construct a corpus to analyse interactive lectures and meetings in smart rooms. The corpus is annotated with various kinds of human activities in lectures and meetings, such as acoustic events and visual events. Based on the framework of the European project "Computers in the Human Interaction Loop (CHIL)", which tries to create a new human-computer interaction approach to facilitate human communication in lectures and meetings, the paper aims at acquiring a corpus for developing audiovisual technologies in smart rooms.

Snaith et al. (2021) apply a multimodal corpus to the study of consultations between patients and healthcare professionals. The authors collect video and audio recordings of seven simulated consultations between three different patients (played by an actor) and five healthcare professionals to construct the Patient Consultation Corpus. According to the authors, the Patient Consultation Corpus allows researchers to analyse its data from various aspects; for instance, the annotation of non-linguistic features will facilitate the development of intelligent agents used in the medical domain.

From the above analysis, it is obvious that studies of multimodal corpora abroad cover a wide range. More and more foreign scholars recognize the importance of different semiotic modes and pay increasing attention to the relationships between various modes in meaning construction.

2.2 Domestic Studies

In fact, multimodality was an important feature of ancient Chinese artistic works, in which poems, paintings and calligraphy coexisted. Although people have long recognized the coexistence of visual and auditory symbols in various forms of discourse, the multimodality of discourse did not become an object of academic research in China until recently, owing to its complexity.

At present, research on multimodal corpora in China is a new branch of study and still in its infancy, so this field needs more scholars to explore it. Gu Yueguo (2006) is the first domestic scholar to bring a corpus-linguistic approach to multimodal text analysis. In his paper, he constructs a corpus-based framework for analyzing multimodal discourse. Gu proposes that every multimodal text contains two layers: the content layer (doing-behaviour) and the medium layer (a logical holder of the content unit). The analysis of a multimodal text therefore involves both the analysis of the content layer and the analysis of the medium layer. For the content layer, he regards a multimodal text as a social situation, which is a configuration of activity types, which in turn is a configuration of tasks/episodes, which is in turn a configuration of participants' individual behaviour. Accordingly, the analytical schema can be divided into two layers: a socio-psychological layer, which includes social situation, activity type and task/episode, and an individual-behaviour layer, which includes talking, doing, act and the prosodic unit of illocutionary force. Gu suggests several clues for segmentation: spatiotemporal clues; clues from social roles and functions, goals and goal-attaining schemas; clues from situational goal congruence; clues from the asymmetry of goal-attaining schemas; clues from human bodies; and clues from voice qualities. For the annotation of the analytic schema, Gu uses DTD content modeling, so that the analytic units become machine-readable elements that can be stored and searched in a corpus.
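
To make the layered schema more concrete, the following is a minimal sketch, in Python with the standard xml.etree.ElementTree module, of how such a nesting of social situation, activity type, task/episode and individual behaviour could be rendered as machine-readable elements. The element and attribute names are illustrative assumptions, not Gu's actual DTD.

# A minimal illustrative sketch of the layered analytic schema expressed as XML.
# The element and attribute names below are hypothetical, not Gu's actual DTD.
import xml.etree.ElementTree as ET
# Socio-psychological layer: a social situation configures an activity type,
# which in turn configures a task/episode.
situation = ET.Element("social_situation", {"id": "s1", "setting": "classroom"})
activity = ET.SubElement(situation, "activity_type", {"name": "lecturing"})
episode = ET.SubElement(activity, "task_episode", {"name": "explaining_a_concept"})
# Individual-behaviour layer: talking and doing units, each anchored to a time
# span in the recording so that they can be aligned with the media files.
talking = ET.SubElement(episode, "talking", {"start": "00:01:05", "end": "00:01:12"})
talking.text = "Now look at this diagram."
doing = ET.SubElement(episode, "doing", {"start": "00:01:06", "end": "00:01:09"})
doing.text = "points at the screen"
# Serialise the annotation so it can be stored and queried like any corpus file.
print(ET.tostring(situation, encoding="unicode"))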

Feng Dezheng and Zhang Delu (2014) point out that new research on multimodal discourse analysis which applies digital technology, such as computer simulation and corpus annotation, is developing quickly. In their paper, the authors introduce three software tools that help construct multimodal corpora. The first is Semiomix, developed by Kay O'Halloran; its dynamic transcription and zoom functions make the transcription unit flexible. The second is the Multimodal Corpus Authoring System (MCA), developed by Baldry and Thibault; it is based on XML, and with it multimodal data can be searched, annotated and quantitatively analyzed. The third is the SCCSD together with agent-oriented modeling (AOM), proposed by Gu Yueguo; AOM can segment and annotate multimodal discourses based on XML and build database models for the social activities and interpersonal relationships contained in multimodal discourses.
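
As a rough illustration of the kind of cross-modal search that such XML-based tools support, the short Python sketch below retrieves gesture units whose time spans overlap with speech units in one annotated clip. The element layout is invented for the example and does not reproduce the actual file formats of MCA or AOM.

# An invented XML layout used only to illustrate cross-modal searching;
# it does not reproduce the file formats of MCA or AOM.
import xml.etree.ElementTree as ET
SAMPLE = """
<clip id="c1">
  <unit mode="speech" start="1.0" end="3.5">we turn to the next figure</unit>
  <unit mode="gesture" start="2.8" end="4.0">points to the right</unit>
  <unit mode="gesture" start="6.0" end="6.8">nods</unit>
</clip>
"""
def overlapping_pairs(clip):
    """Yield (speech, gesture) pairs whose time spans overlap."""
    units = clip.findall("unit")
    speech = [u for u in units if u.get("mode") == "speech"]
    gestures = [u for u in units if u.get("mode") == "gesture"]
    for s in speech:
        for g in gestures:
            if float(s.get("start")) < float(g.get("end")) and float(g.get("start")) < float(s.get("end")):
                yield s, g
clip = ET.fromstring(SAMPLE)
for s, g in overlapping_pairs(clip):
    print(f"speech '{s.text}' co-occurs with gesture '{g.text}'")

A query of this kind is what turns a collection of annotated clips into a searchable corpus rather than a set of isolated transcriptions.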

With the development of the analysis and application of multimodal corpora, Liu Jian (2017) summarizes and evaluates both foreign and domestic research on multimodal corpora. In his view, although many foreign studies on the construction and application of multimodal corpora have made important progress, several problems remain. First, the collected data may be unnatural and unreliable, because facing microphones and cameras may influence participants' speech, actions and emotions and mask their true reactions to the context; the representativeness and validity of the collected data are therefore doubtful. Second, problems also exist in corpus annotation: the annotation of multimodal corpora is still mainly manual, the construction of such corpora costs much time, and annotation standards are not unified. Third, manual annotation may be subjective: researchers from different cultural backgrounds may have different attitudes towards and judgements of the same multimodal text, which may result in different annotations.

As for the application of multimodal corpora, it mainly focuses on fields such as artificial intelligence, medical diagnosis and rehabilitation, foreign language education, and translation. Peng Yuan (2016) discusses the relationship between EFL teachers' speech and the frequency of the gestures that accompany it in classrooms. In the research, she takes real classroom teaching videos as her data and uses the multimodal annotation software ELAN to annotate powerful language forms and the gestures accompanying the dialogue between teachers and students. After annotation, the paper compares the frequencies of powerful language forms and accompanying gestures in successful and unsuccessful classroom dialogues. Through this multimodal corpus analysis, the paper finds that, in the same teaching situation, the best communicative effect is achieved when the ratio of powerful language forms to accompanying gestures stays close to roughly 1.3:1; ratios that deviate markedly from this have a negative effect on communication.
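
As a rough sketch of how such a frequency comparison could be computed, the following Python fragment counts the annotations on two tiers of a file exported from ELAN (its XML-based .eaf format) and reports their ratio. The tier names and the file name are hypothetical and are not those used in Peng's study.

# Count annotation units on two ELAN tiers and report their ratio.
# The tier names "PowerfulLanguage" and "Gesture" and the file name are
# hypothetical examples, not the labels used in Peng's study.
import xml.etree.ElementTree as ET
def count_tier_annotations(eaf_path, tier_id):
    """Count the annotation units on a single tier of an ELAN .eaf file."""
    root = ET.parse(eaf_path).getroot()
    for tier in root.iter("TIER"):
        if tier.get("TIER_ID") == tier_id:
            return len(tier.findall("ANNOTATION"))
    raise ValueError(f"Tier {tier_id!r} not found in {eaf_path}")
if __name__ == "__main__":
    eaf = "lesson01.eaf"  # a hypothetical annotation file exported from ELAN
    speech_forms = count_tier_annotations(eaf, "PowerfulLanguage")
    gestures = count_tier_annotations(eaf, "Gesture")
    # The ratio of powerful language forms to speech-accompanying gestures,
    # the quantity compared across successful and unsuccessful dialogues.
    print(f"ratio = {speech_forms / gestures:.2f} : 1")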

Conclusion

The utility of corpus-based research and methods is in fact being recognized in a range of academic disciplines and fields of research far beyond linguistics. For example, the process of corpus construction itself is of interest to computer scientists, while the tools developed can be used to answer questions posed by behaviourists, psychologists, social scientists and ethnographers. This means that multimodal corpora, corpus-based methods and related projects, which are often necessarily interdisciplinary and collaborative, receive ever-increasing support from academic researchers as well as people from other fields, support which is likely to be sustained well into the future.

Multimodal corpus linguistics is a new branch of corpus linguistics. Language research based on multimodal corpora embodies rich methodological ideas and reflects novel views of language, which can effectively enrich the field of language research and promote new developments in linguistic theory. Proper use of multimodal corpora can also support research in other humanities and social science fields and has great potential for the development of applied technology. I believe that the construction, study and application of multimodal corpora have a promising future.

References

Baldry, A., & Thibault, P. J. (2006). Multimodal transcription and text analysis. London: Equinox Publishing.

Bateman, J. A., & Schmidt, K. H. (2012). Multimodal film analysis. London: Routledge.

Carletta, J. (2007). Unleashing the killer corpus: Experiences in creating the multi-everything AMI meeting corpus. Language Resources and Evaluation, 41(2), 181-190.

Foster, M. E., & Oberlander, J. (2007). Corpus-based generation of head and eyebrow motion for an embodied conversational agent. Language Resources and Evaluation, 41(3/4), 305-323.

Gu, Y. (2006). Multimodal text analysis—A corpus linguistic approach to situated discourse. Text and Talk, 26(2), 127-167.

Iedema, R. (2003). Multimodality, resemiotization: Extending the analysis of discourse as multi-semiotic practice. Visual Communication, 2(1).

Jewitt, C. (2002). The move from page to screen: The multimodal reshaping of school English. Visual Communication, 1(2).

Kipp, M., et al. (2007). An annotation scheme for conversational gestures: How to economically capture timing and form. Language Resources and Evaluation, 41(3/4), 325-339.

Kress, G., & van Leeuwen, T. (2002). Colour as a semiotic mode: Notes for a grammar of colour. Visual Communication, 1(3).

Mostefa, D., et al. (2007). The CHIL audiovisual corpus for lecture and meeting analysis inside smart rooms. Language Resources and Evaluation, 41(3/4), 389-407.

O'Halloran, K. L. (2004). Multimodal discourse analysis: Systemic functional perspectives. New York: Continuum International Publishing Group.

O'Toole, M. (1994). The language of displayed art. USA: Fairleigh Dickinson University Press.

Royce, T. (2002). Multimodality in the TESOL classroom: Exploring visual-verbal synergy. TESOL Quarterly, 36(2), 191-205.

Snaith, M. et al. (2021). A multimodal corpus of simulated consultations between a patient and multiple healthcare professionals. Language Resources and Evaluation, 1-16.

Feng, D., & Zhang, D. (2014). 多模态语篇分析的进展与前沿 [Advances and frontiers in multimodal discourse analysis]. 当代语言学 [Contemporary Linguistics], (16), 88-99.

Gu, Y. (2010). 多媒体、多模态初步剖析 [A preliminary analysis of multimedia and multimodality]. 外语电化教学 [Technology Enhanced Foreign Language Education], (2), 3-12.

Liu, J. (2017). 国外多模态语料库建设及相关研究述评 [A review of multimodal corpus construction and related research abroad]. 外语教学 [Foreign Language Education], (4), 40-45.

Peng, Y. (2016). 多模态语料库驱动的中国大学EFL教师课堂语伴手势的产出量研究 [A multimodal corpus-driven study of the production of speech-accompanying gestures by Chinese university EFL teachers in the classroom]. 外语教学理论与实践 [Foreign Language Learning Theory and Practice], (2), 62-69.
