Assessing sound symbolism: Investigating phonetic forms, visual shapes and letter fonts in an implicit bouba-kiki experimental paradigm

PLoS One. 2018 Dec 21;13(12):e0208874. doi: 10.1371/journal.pone.0208874. eCollection 2018.

Abstract

Classically, in the bouba-kiki association task, a subject is asked to find the best association between one of two shapes (a round one and a spiky one) and one of two pseudowords (bouba and kiki). Numerous studies report that spiky shapes are associated with kiki, and round shapes with bouba. This task is likely the most prevalent in the study of non-conventional relationships between linguistic forms and meanings, also known as sound symbolism. However, associative tasks are explicit in the sense that they highlight phonetic and visual contrasts and require subjects to establish a crossmodal link between stimuli of different natures. Additionally, recent studies have raised the question of whether visual resemblances between the target shapes and the letters explain the pattern of association, at least in literate subjects. In this paper, we report a more implicit testing paradigm of the bouba-kiki effect, using a lexical decision task with character strings presented in round or spiky frames. Pseudowords and words are, furthermore, displayed in either an angular or a curvy font to investigate a possible graphemic bias. Innovative analyses of response times are performed with GAMLSS models, which offer a large range of possible distributions of error terms, and a generalized Gamma distribution is found to be the most appropriate. No sound symbolic effect appears to be significant, but, in particular, an interaction effect between spiky shapes and angular letters is observed, leading to faster response times. We discuss these results with respect to the visual saliency of angular shapes, priming, brain activation, synaesthesia and ideasthesia.
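The distribution-selection step described above (choosing a generalized Gamma over simpler candidates for right-skewed response times) can be illustrated with a minimal sketch. Note: the study itself used GAMLSS models (an R framework); the snippet below is a simplified Python analogue using `scipy.stats.gengamma`, with simulated response times and an AIC comparison standing in for the full regression machinery. All variable names and the simulated data are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical response times in seconds: positive and right-skewed,
# as lexical-decision latencies typically are.
rt = rng.gamma(shape=4.0, scale=0.15, size=500) + 0.2

# Fit a generalized Gamma distribution (location fixed at 0),
# leaving 3 free parameters: a, c, scale.
a, c, loc_gg, scale_gg = stats.gengamma.fit(rt, floc=0)
ll_gg = np.sum(stats.gengamma.logpdf(rt, a, c, loc=loc_gg, scale=scale_gg))
aic_gg = 2 * 3 - 2 * ll_gg

# Fit an ordinary Gamma (nested in the generalized Gamma at c = 1)
# with 2 free parameters: shape, scale.
g_a, loc_g, g_scale = stats.gamma.fit(rt, floc=0)
ll_g = np.sum(stats.gamma.logpdf(rt, g_a, loc=loc_g, scale=g_scale))
aic_g = 2 * 2 - 2 * ll_g

# Lower AIC indicates the better trade-off between fit and complexity.
print(f"generalized Gamma AIC: {aic_gg:.1f}")
print(f"ordinary Gamma AIC:    {aic_g:.1f}")
```

In a GAMLSS analysis the same comparison is made while the distribution's parameters are simultaneously modelled as functions of the experimental predictors (frame shape, font, lexicality), which is what allows the interaction effects reported in the abstract to be tested.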

Publication types

  • Clinical Trial
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adolescent
  • Adult
  • Female
  • Humans
  • Language*
  • Male
  • Models, Biological*
  • Pattern Recognition, Visual / physiology*
  • Phonetics
  • Speech Perception / physiology*

Grant support

For their financial support, the authors are grateful to the Université du Québec à Chicoutimi (UQAC), as well as to the LABEX ASLAN (ANR-10-LABX-0081) of Université de Lyon within the program Investissements d’Avenir (ANR-11-IDEX-0007) of the French government operated by the National Research Agency (ANR). Université du Québec à Chicoutimi: https://www.uqac.ca/. Labex ASLAN: http://aslan.universite-lyon.fr/. ANR: http://www.agence-nationale-recherche.fr/.