Rethinking mechanisms of iconicity - proprioception

Some thoughts on mechanisms of iconicity, inspired by Jarkko Keränen’s excellent recent article in Cognitive Linguistics!

First, the article that inspired this post, which you can find (and cite) below:

Keränen, J. (2023). Cross-modal iconicity and indexicality in the production of lexical sensory and emotional signs in Finnish Sign Language. Cognitive Linguistics 34(3-4): 333-369.

I wanted to write a post on this fantastic new paper by my friend and colleague Jarkko Keränen! Jarkko is doing his PhD thesis on cross-modal iconicity in Finnish Sign Language—which is such a cool topic!—at the University of Jyväskylä in Finland, and we are basically in the same year too 😄

There is a lot to love in this paper (and I’ve included my full set of notes on all the most interesting things at the end of this post), but probably my favourite thing is Jarkko’s discussion of iconicity, and in particular proprioceptive iconicity.

As Jarkko points out, analyses of iconicity in language are most often from the point of view of the perception, rather than the production, of signs. As also discussed in Emmorey et al. (2009), the most important difference between production and perception is that in production we also have articulatory feedback, which can give rise to a sense of proprioceptive iconicity. Jarkko illustrates this with the FinSL sign HAMMERING. When producing this sign, the signer perceives not only the visual iconicity of the handshape, which resembles the shape of a hammer, but also a proprioceptive iconicity in how producing the sign feels, one that extends across the entire body as the sign mimics the action of hammering.

As a comparable example from spoken languages, I thought about the Japanese ideophone netyanetya [netʃa.netʃa], which means ‘sticky’, and which feels sticky to say in your mouth as well as sounding sticky. Try saying it yourself and you’ll see. Interestingly, in a study of iconicity ratings and sensory experience ratings for English words (Winter et al. 2017), tactile words were found to be the most iconic after words for sounds. The same has been found in signed languages: iconicity is even stronger in signs for haptic meanings than in signs for visual meanings (Perlman et al. 2018).


Iconicity of English words by sensory modality (modality data are from Lynott & Connell, 2009, 2013; Winter, 2016a); points indicate linear model fits with 95% confidence intervals. Taken from Winter et al. 2017 (p.444)

The authors attributed this finding to the close relationship between our senses of sound and touch, but I am wondering now whether proprioceptive iconicity might also have an explanatory role here.

The diagram at the top of this post is inspired by Figure 1 in Jarkko’s paper, and is supposed to illustrate the different mechanisms for iconicity in spoken language (Jarkko’s diagram is for FinSL). With iconic words in spoken languages, and particularly ideophones, we have iconicity in:

  • The auditory modality; acoustic iconicity
  • The visual modality; from the articulatory gestures involved (e.g. consider the iconicity of bilabials in words like bulbous, bumpy and burst, which is at least as much about the actions of the lips as about the sound itself), plus accompanying manual gestures (there is a very strong link between ideophones and manual gestures, e.g. Kita 1997; Dingemanse 2013; Dingemanse & Akita 2017)
  • Proprioception; from the articulatory gestures involved, from the perspective of the producer. For example, one study showed that English speakers were better able to guess the meanings of Japanese ideophones when they pronounced the ideophones themselves than when they only heard them (Oda 2000).

According to Emmorey et al. (2009), proprioceptive iconicity would only be available to the producer of the sign, but I wonder whether that is really true, since we know that ‘mirror neurons’ are activated both when we perform an action ourselves and when we witness others performing the same action. Proprioceptive iconicity is probably stronger in production, but it may not be entirely absent from perception, especially when perceiving a sign from a language that you yourself use (I am not sure how it would work with a foreign language). In any case, it’s an interesting open question for future research.

That’s the main thing I wanted to talk about from this paper, but I’ve also included some bullet points below of other interesting concepts and findings from the article, because there really are a lot!

Other interesting points

  • I like the distinction used in the paper (from Sonesson 1996; 2014) between performative indexicality and abductive indexicality. With performative indexicality, the sign itself causes the contiguity between it and its object. Examples are signs involving finger points, or, for spoken languages, a word like nose, which itself contains a nasal sound. With abductive indexicality, on the other hand, prior knowledge of the relationship between two things is necessary for establishing the contiguity. For example, the use of the Japanese ideophone dokidoki (which depicts a beating heart) to mean ‘afraid’ requires abductive indexicality. The distinction is somewhat analogous to Sonesson’s other distinction between primary and secondary iconicity. An interesting finding of this paper was that 71 of the 118 signs examined (60% of them!) were cross-modally indexical, and if you look at Appendix B you can see that this indexicality is in most cases abductive. I don’t know of any similar studies for ideophones, but my intuition is that if you looked into it you would find similar patterns. So abductive indexicality is hugely important to sensory language, and cross-modal indexicality is actually much more common than cross-modal iconicity (which was found for only 10 signs in the sample)! I do feel that, on the whole, the importance of indexicality in language is often underestimated or backgrounded, as there is so much focus on the importance of iconicity (which is actually much more complex and difficult to acquire than indexicality).
  • Another important finding: only one sign in the paper was analysed as exhibiting metaphorical iconicity. Jarkko points out that researchers may be overly eager to ascribe metaphor in their analyses of signs, perhaps because they assume that emotions are abstract (I’ve been guilty of this myself). However, Jarkko shows that “emotions can be expressed by various intramodally iconic strategies such as showing actions (e.g., a fist up for punching indicates hate) and drawing (e.g., an outline of smile on mouth indicates happy) or less frequently by cross-modally – or cross-experientially – iconic strategies (e.g., the sign INTERESTING in Section 4.5) without metaphorical processes.” (Keränen 2023, p.356) It reminds me of a similarly interesting argument by Bodo Winter about synaesthetic metaphors (Winter 2019).
  • I really like the use of phenomenological triangulation in this paper, something I never understood very well before reading it, but this study illustrates the usefulness of the method very clearly. There is a basic four-step method (originating with Edmund Husserl) for phenomenological triangulation: (1) epoché, (2) phenomenological reduction, (3) eidetic variation, and (4) intersubjective corroboration. You can read more about these steps in Jarkko’s paper and the references therein. Of the four, I have found (3) particularly helpful when analysing iconicity in ideophones, and I never knew there was a specific term for this way of thinking about things before, so that was nice to find out!


  1. Emmorey, Karen, Rain Bosworth, and Tanya Kraljic. 2009. ‘Visual Feedback and Self-Monitoring of Sign Language’. Journal of Memory and Language 61(3): 398–411.
  2. Keränen, Jarkko. 2023. ‘Cross-Modal Iconicity and Indexicality in the Production of Lexical Sensory and Emotional Signs in Finnish Sign Language’. Cognitive Linguistics 34(3–4): 333–69.
  3. Kita, Sotaro. 1997. ‘Two-Dimensional Semantic Analysis of Japanese Mimetics’. Linguistics 35(2): 379–416.
  4. Perlman, Marcus, Hannah Little, Bill Thompson, and Robin L Thompson. 2018. ‘Iconicity in Signed and Spoken Vocabulary: A Comparison Between American Sign Language, British Sign Language, English, and Spanish’. Frontiers in psychology 9: 1433.
  5. Winter, Bodo, Marcus Perlman, Lynn K Perry, and Gary Lupyan. 2017. ‘Which Words Are Most Iconic?’ Interaction Studies 18(3): 443–64.
  6. Winter, Bodo. 2019. ‘Synaesthetic Metaphors Are Neither Synaesthetic nor Metaphorical’. In Perception Metaphors, eds. Laura J. Speed, Carolyn O’Meara, Lila San Roque, and Asifa Majid. Amsterdam & Philadelphia: John Benjamins Publishing Company, 105–26.
  7. Sonesson, Göran. 1996. ‘Indexicality as Perceptual Mediation’. In Christiane Pankow (ed.), Indexicality: Papers from the Third Biannual Meeting of the Swedish Society for Semiotic Studies, 127–43. Gothenburg: Gothenburg University SSKKII Report 9604.
  8. Sonesson, Göran. 2014. ‘The Cognitive Semiotics of the Picture Sign’. In David Machin (ed.), Visual Communication, 23–50. Berlin & Boston: Walter de Gruyter.