Beyond ‘Learning Styles’: What is Multimodality?
Multimodality is an interdisciplinary concept that understands representation, meaning-making and communication to be about more than language and traditional print literacies. Multimodality has been developed over recent decades to systematically address much-debated questions about changes in society, especially in relation to new media, technology, and new learning.
• Written Language: writing (representing meaning to another) and reading (representing meaning to oneself)—handwriting, the printed page, the screen, everyday objects and artefacts.
• Oral Language: live or recorded speech (representing meaning to another); listening (representing meaning to oneself).
• Visual Representation: still or moving images, sculpture, craft (representing meaning to another); view, vista, scene, perspective (representing meaning to oneself). This includes how moving images are put together (editing, montage, film genre).
• Audio Representation: music, ambient sounds, noises (representing meaning to another); hearing, listening (representing meaning to oneself).
• Tactile Representation: touch, smell and taste: the representation to oneself of bodily sensations and feelings or representations to others which ‘touch’ them bodily. Forms of tactile representation include kinaesthesia, physical contact, skin sensations (heat/cold, texture, pressure), grasp, manipulable objects, artefacts, aromas.
• Gestural Representation: movements of the hands and arms, expressions of the face, eye movements and gaze, demeanours of the body, clothing and fashion, hair style, action sequences (Scollon 2001), ceremony and ritual, as well as dance and embodied performance. Here gesture is understood broadly and metaphorically as a physical act of signing (as in ‘a gesture to …’), rather than the narrower literal meaning of hand and arm movement. Representation to oneself may take the form of feelings and emotions or rehearsing action sequences in one’s mind’s eye.
• Spatial Representation: proximity, spacing, layout, design, placing images together in sequence, juxtaposing images and sounds, architecture/building, streetscape, cityscape, landscape; graphics, maps and cartographic representation; layout and combinations of other modes in space (on a poster, page, screen, digital book, etc.) and in time (sequences of voice, sound and images in film).
Debunking Learning Styles: Multimodality – and multimodal literacies – should not be confused or conflated with older notions of ‘learning styles’ or ‘multiple intelligences’ (Gardner, 1983). Whereas the notion of ‘learning style’ assumes that students have preferred ways of, or innate dispositions for, learning, these so-called ‘styles’ can trap both teachers and students into believing that a learner can only effectively learn in one ‘style’. This can become a self-fulfilling prophecy that forecloses learning opportunities in advance, in turn constraining rich interdisciplinary engagements and deeper (multiliteracy) learning adventures. As Cope & Kalantzis (2011) argue, ‘a pedagogy which restricts learning to one artificially segregated mode will favour some types of learners over others. It also means that the starting point for meaning in one mode may be a way of extending one’s representational repertoire by shifting from favoured modes to less comfortable ones’ (p. 181).
…Multimodality is not about working in – or learning through – distinct and separated modes, but is rather about how we combine, connect, translate, and intertwine a multiplicity of modal forms (pictures and photographs, sounds and music, gestures, texts, oral speech, moving images) to enrich and deepen meaning-making, as well as make, design, create, and share (new) knowledge and learning...Multimodality is often understood in terms of supporting diverse learners, especially in culturally/linguistically diverse communities, where dominant ‘text’ and ‘print-based’ materials may establish boundaries for access and inclusion.
Multimodal approaches to learning are concerned with the multiple material and intersemiotic connections – and synergisms – when multiple modes of experience and communicative forms are ‘in play’ together…as well as embodied action and material-centric forms of inquiry… Instead of single ‘styles’ (the so-called ‘visual learner’), multimodality and multimodal literacies refer to the dynamic ensembles of modes and genres we use to show and tell, to both ‘read and write’, our worlds in 21st-century media – and material – landscapes. – Researching in the Wild
Multimodality: Theoretical Assumptions
Three interconnected theoretical assumptions underpin multimodality and multimodal literacies as forms of communication.
First, multimodality assumes that representation and communication always draw on a multiplicity of modes, all of which contribute to meaning. It focuses on analyzing and describing the full repertoire of meaning-making resources that people use (visual, spoken, gestural, written, three-dimensional, and others, depending on the domain of representation and interests of the meaning-maker/designer) in different contexts, and on developing means that show how these modes are organized to signify and make meaning.
Second, multimodality assumes that resources are socially shaped over time into meaning-making resources that articulate the meanings required by different communities and by the contexts and purposes of communication. These organized sets of semiotic (sign-making) resources for making meaning are referred to as modes, which realize communicative work in distinct ways; choosing a mode (or modes) is therefore a central aspect of interaction, meaning, and design. The more a set of resources has been used in the social life of a particular community, the more fully and finely articulated it will have become (for example, how a digital textbook combines or ‘lays out’ text, images, graphics, maps, videos, fonts […]; or how videos and films ‘work’ through modal ensembles that combine image, text, voice, voice-over, action, actor gesture, music, sound, visual metaphor or textual symbol, and technical features such as camera angle, background and foreground, and editing/sequencing, in relation to genre conventions, e.g., action, horror, comedy, documentary, ‘new wave’, ‘indie’, etc.).
In order for something to ‘be a mode’ there needs to be a shared cultural sense within a community of a set of resources and how these can be organized to realize and communicate meanings.
Third, people orchestrate meaning through their selection and configuration of modes, foregrounding the significance of the interaction between modes (in time and in space). Spatial images, pictures or text, placed within the temporal (time-based) form of a video document (with sounds, music, or voice-over), demonstrate how modal resources are brought – or edited – together as an ensemble. All communicational acts are thus shaped by the norms and rules operating at the moment of sign-making and design, and influenced by the motivations, purposes and meaning-making interests of people in a specific social context.
Core Concepts: What is a Mode?
Four core concepts are common across multimodal research: mode, semiotic resource, modal affordance, and intersemiotic relations. Within social semiotics, a Mode is understood as an outcome of the cultural shaping of a material through its use in the daily social interaction of people. The Semiotic Resources of a mode come to display regularities through the ways in which people use them, and can be thought of as the connection between representational resources and what people do with them (create graphic narratives, videos, arguments, etc.). The term Modal Affordance refers to the material and cultural aspects of modes: what it is possible to express and represent easily with a mode. It is a concept connected to the material as well as to the cultural and social-historical use of a mode. Modal affordance raises the question of which mode is ‘best’ [for specific communication goals or audiences]. This in turn leads to the concept of Intersemiotic Relations: how modes are configured in particular contexts (multimodal ensembles).
These four concepts provide the starting point for multimodal analysis.
Multimodality can be used to build inventories of the semiotic resources, organizing principles, and cultural references that modes make available to people in particular places and times: the actions, materials and artifacts people [learn through] and communicate with. This has included contributions to mapping the semiotic resources of visual communication and colour, gesture and movement, gaze, voice and music, to name a few.
References Used to Assemble (Edited)
1. Jewitt, C. (ed.) (2009) The Routledge Handbook of Multimodal Analysis. London: Routledge.
2. Kress, G. (2009) Multimodality: A Social Semiotic Approach to Contemporary Communication. London: Routledge.
3. Bezemer, J. and Mavers, D. (2011) ‘Multimodal Transcription as Academic Practice’, International Journal of Social Research Methodology, 14(3), 191–206.
4. X. (2016) Researching in the Wild.
5. Cope, B. and Kalantzis, M. (2009) Multiliteracies.