Linguistic Prosody and Melodic Characteristics of Korean Emotion Vocabulary: A Musical-Linguistic Analysis

Authors

  • Soo Yon Yi

    Department of Music Education, The Graduate School, Gachon University, Seongnam-si 13120, Korea

  • Hyun Ju Chong

    Department of Music Therapy, The Graduate School, Ewha Womans University, Seoul 03760, Korea

DOI:

https://doi.org/10.30564/fls.v7i10.11138
Received: 18 July 2025 | Revised: 25 July 2025 | Accepted: 4 August 2025 | Published Online: 24 September 2025

Abstract

This study explores the intersection between linguistic prosody and melodic characteristics in Korean emotion vocabulary, aiming to quantify emotional articulation in spoken language and to inform melodic construction in songwriting and lyrical composition. Thirty Korean female participants (ages 19–23) were asked to speak emotion-related words representing three target emotions: happiness, anger, and sadness. Acoustic analyses were conducted to examine prosodic features, including fundamental frequency (Hz) and articulation time (ms), which were then translated into musical parameters such as pitch register, pitch range, melodic contour, and tempo. Statistical analyses identified significant differences in prosodic and melodic characteristics across emotional categories. Results showed that the mean pitch corresponded to B3 (253.7 Hz) for happy words, A3 (213.1 Hz) for angry words, and G3 (211.6 Hz) for sad words. Happy words featured high pitch registers, wide ranges, and descending contours; angry words had mid-range registers with rising-falling or descending contours; and sad words exhibited low registers, narrow ranges, and either descending or unisonous contours. In terms of tempo, angry words were articulated most quickly (172 ms), followed by happy (191 ms) and sad (210 ms) words. Significant differences were found in frequency between happy and angry words (18.5 Hz), and in articulation time between happy and sad words (0.02 s) and between angry and sad words (0.03 s). These findings suggest that the prosodic expression of emotion can be meaningfully translated into melodic representation, with potential applications in music composition, song therapy, and affective computing. This framework establishes a foundation for future interdisciplinary exploration linking language, music, and emotion.

Keywords:

Linguistic Prosody; Melodic Characteristics; Korean Emotion Vocabulary; Pitch; Pitch Range

References

[1] Plutchik, R., 2000. A psychoevolutionary theory of emotion. In: Plutchik, R. (ed.). Emotion in the practice of psychotherapy: Clinical implications of affect theories. American Psychological Association: Washington, DC, USA. pp. 59–79.

[2] Juslin, P.N., Västfjäll, D., 2008. Emotional responses to music: The need to consider underlying mechanisms. Behavioral and Brain Sciences. 31(5), 559–575. DOI: https://doi.org/10.1017/s0140525x08006079

[3] Eerola, T., Vuoskoski, J.K., 2011. A comparison of the discrete and dimensional models of emotion in music. Psychology of Music. 39(1), 18–49. DOI: https://doi.org/10.1177/0305735610362821

[4] Ekman, P., 1992. An argument for basic emotions. Cognition and Emotion. 6, 169–200. DOI: https://doi.org/10.1080/02699939208411068

[5] Zentner, M., Grandjean, D., Scherer, K.R., 2008. Emotions evoked by the sound of music: Characterization, classification, and measurement. Emotion. 8(4), 494–521. DOI: https://doi.org/10.1037/1528-3542.8.4.494

[6] Bänziger, T., Scherer, K.R., 2005. The role of intonation in emotional expressions. Speech Communication. 46, 252–267. DOI: https://doi.org/10.1016/j.specom.2005.02.016

[7] Kent, R., 2004. Normal aspects of articulation. In: Bernthal, J., Bankson, N. (eds.). Articulation and phonological disorders. Prentice Hall: Englewood Cliffs, NJ, USA.

[8] Scherer, K.R., Bänziger, T., 2004. Emotional expression in prosody: A review and an agenda for future research. In Proceedings of the 1st International Conference on Speech Prosody; pp. 1–14. Available from: https://sprosig.org/sp2004/PDF/Scherer-Baenziger.pdf

[9] Banse, R., Scherer, K.R., 1996. Acoustic profiles in vocal emotion expression. Journal of Personality and Social Psychology. 70, 614–636. DOI: https://doi.org/10.1037/0022-3514.70.3.614

[10] Cowie, R., Douglas-Cowie, E., Tsapatsoulis, N., et al., 2001. Emotion recognition in human-computer interaction. IEEE Signal Processing Magazine. 18(1), 32–80. Available from: https://ieeexplore.ieee.org/document/911197

[11] Juslin, P.N., Laukka, P., 2003. Communication of emotions in vocal expression and music performance: Different channels, same code? Psychological Bulletin. 129, 770–814. DOI: https://doi.org/10.1037/0033-2909.129.5.770

[12] Lee, S.H., 2002. K-ToBI Labelling system: A study of description system of Korean prosodic structure. The Linguistics Association of Korea Journal. 10(2), 1–18. Available from: https://scispace.com/pdf/korean-intonation-patterns-from-the-viewpoint-of-f-0-45v4hgx295.pdf

[13] Park, M.Y., Park, M.K., 2007. An analysis of emotional speech using distribution chart of f(0). Korean Linguistics. 27, 233–254. DOI: https://doi.org/10.17790/kors.2007..27.233

[14] Cho, S., Ban, T., Choi, J., et al., 2021. Study on the phonological characteristics of Korean anger speech-acts of responses. Journal of East Asian Cultures. (85), 151–174. DOI: https://doi.org/10.16959/jeachy..85.202105.151

[15] Yi, S.P., 2014. An acoustical analysis of emotional speech using close-copy stylization of intonation curve. Phonetics and Speech Science. 6(3), 131–138. DOI: https://doi.org/10.13064/ksss.2014.6.3.131

[16] Guo, X., Lai, Q., 2021. Semantic relations and prosodic features of ranhou in spontaneous Mandarin conversation. Forum for Linguistic Studies. 3(1). DOI: https://doi.org/10.18063/fls.v3i1.1250

[17] Jiang, H., 2019. Intonational pitch features of interrogatives and declaratives in Chengdu dialect. Forum for Linguistic Studies. 1(1). DOI: https://doi.org/10.18063/fls.v1i1.1082

[18] Boxill, E.H., 1985. Music therapy for the developmentally disabled. Aspen Systems Corporation: Rockville, MD, USA.

[19] Robarts, J.Z., 2003. The healing function of improvised songs in music therapy with a child survivor of early trauma and sexual abuse. In: Hadley, S. (ed.). Psychodynamic music therapy: Case studies. Barcelona Publishers: Gilsum, NH, USA.

[20] Bruscia, K.E., 1998. The dynamics of music psychotherapy. Barcelona Publishers: Gilsum, NH, USA.

[21] Baker, F.A., Wigram, T., 2005. Songwriting: Methods, techniques and clinical applications for music therapy clinicians, educators and students. Jessica Kingsley Publishers: London, UK.

[22] Gerardi, G.M., Gerken, L., 1995. The development of affective responses to modality and melodic contour. Music Perception. 12, 279–290. DOI: https://doi.org/10.2307/40286184

[23] Juslin, P.N., Sloboda, J.A., 2010. Handbook of music and emotion: Theory, research, applications. Oxford University Press: London, UK.

[24] Thompson, W.F., Robitaille, B., 1992. Can composers express emotions through music? Empirical Studies of the Arts. 10, 79–89. DOI: https://doi.org/10.2190/nbny-akdk-gw58-mtel

[25] Coutinho, E., Dibben, N., 2013. Psychoacoustic cues to emotion in speech prosody and music. Cognition and Emotion. 27(4), 658–684. DOI: https://doi.org/10.1080/02699931.2012.732559

[26] Ilie, G., Thompson, W.F., 2006. A comparison of acoustic cues in music and speech for three dimensions of affect. Music Perception. 23(4), 319–330. DOI: https://doi.org/10.1525/mp.2006.23.4.319

[27] Rodero, E., 2011. Intonation and emotion: Influence of pitch levels and contour type on creating emotions. Journal of Voice. 25(1), 25–34. DOI: https://doi.org/10.1016/j.jvoice.2010.02.002

[28] Ross, D., Choi, J., Purves, D., 2007. Musical intervals in speech. Proceedings of the National Academy of Sciences. 104(23), 9852–9857. DOI: https://doi.org/10.1073/pnas.0703140104

[29] Friend, M., Bryant, J.B., 2000. A developmental lexical bias in the interpretation of discrepant messages. Merrill-Palmer Quarterly: A Journal of Developmental Psychology. 46(2), 140–167. Available from: https://pmc.ncbi.nlm.nih.gov/articles/PMC5502114/

[30] Doh, S., Kim, J., Lee, H., 2023. Textless speech-to-music retrieval using emotion similarity. Preprint. arXiv:2303.10539. DOI: https://doi.org/10.48550/arXiv.2303.10539

[31] Jansen, N., Harding, E., Loerts, H., et al., 2023. The relation between musical abilities and speech prosody perception: A meta-analysis. Journal of Phonetics. 101, 101278. DOI: https://doi.org/10.1016/j.wocn.2023.101278

[32] Brown, S., 2017. A joint prosodic origin of language and music. Frontiers in Psychology. 8, 1894. DOI: https://doi.org/10.3389/fpsyg.2017.01894

[33] Kraxenberger, M., Menninghaus, W., Roth, A., et al., 2018. Prosody-based sound-emotion associations in poetry. Frontiers in Psychology. 9, 1284. DOI: https://doi.org/10.3389/fpsyg.2018.01284

[34] Larrouy-Maestri, P., Poeppel, D., Pell, M., 2024. The sound of emotional prosody: Nearly three decades of research and future directions. Perspectives on Psychological Science. 20(4), 623–638. DOI: https://doi.org/10.1177/17456916231217722

[35] Patel, A.D., Daniele, J.R., 2003. An empirical comparison of rhythm in language and music. Cognition. 87(1), B35–B45. DOI: https://doi.org/10.1016/S0010-0277(02)00187-7

[36] Cho, T., 2021. Linguistic functions of prosody and its phonetic encoding with special reference to Korean. Japanese/Korean Linguistics. 29, 1–22. Available from: https://site.hanyang.ac.kr/documents/11105253/13205893/Cho_2022_JK-29-Chapter+1.pdf

[37] Zhao, D., Chang, W.H., 2025. Acoustic features of emotional prosody: A cross-linguistic analysis of English, Chinese, and Korean. Language and Linguistics. 107, 23–56. DOI: http://dx.doi.org/10.20865/202510702

[38] Takahashi, O., Tanaka, K., Kobayashi, K., et al., 2021. Melodic emotional expression increases ease of talking to spoken dialog agents. In Proceedings of the 9th International Conference on Human-Agent Interaction (HAI ’21), Kyoto, Japan, 9–11 November 2021; pp. 84–92. DOI: https://doi.org/10.1145/3472307.3484180

[39] Yi, S.Y., Oh, J.H., Chong, H.J., 2016. Tonal characteristics based on intonation pattern of the Korean emotion words. Journal of Music and Human Behavior. 13(2), 67–83. DOI: https://doi.org/10.21187/jmhb.2016.13.2.67 (in Korean)

[40] Choi, J.H., Park, J.M., 2024. Acoustic analysis and melodization of Korean intonation for language rehabilitation. Journal of Music and Human Behavior. 21(1), 49–68. DOI: https://doi.org/10.21187/JMHB.2024.21.1.049 (in Korean)

[41] Oh, E.H., 2024. Acoustic cues for emotions in Korean vocal expressions. Linguistic Research. 41(4), 611–633. DOI: https://doi.org/10.17250/khisli.41.3.202412.012 (in Korean)

[42] Lee, E.Y., Yoo, G.E., Lee, Y., 2024. Analysis of semantic attributes of Korean words for sound quality evaluation in music listening. Journal of Music and Human Behavior. 21(2), 107–134. DOI: https://doi.org/10.21187/JMHB.2024.21.2.107 (in Korean)

[43] Clopper, C.G., Smiljanic, R., 2011. Effects of gender and regional dialect on prosodic patterns in American English. Journal of Phonetics. 39(2), 237–245. DOI: https://doi.org/10.1016/j.wocn.2011.02.006

[44] Roy, S., 2016. Future directions of modelling the uncertainty in the cognitive domain. In: Vaidya, A. (ed.). Decision Making and Modelling in Cognitive Science. Springer: New Delhi, India.

[45] Kim, J.Y., 2014. A study on direction of emotional vocabulary education for building emotional literacy. Journal of Cheong Ram Korean Language Education. 49, 321–345. DOI: https://doi.org/10.26589/jockle..49.201403.321 (in Korean)

[46] Seong, C.J., 2013. Script_toneLabler_cj.praat [computer program]. Available from: http://blog.naver.com/cj_seong (cited 1 December 2024).

[47] Lindsay, P.H., Norman, D.A., 2013. Human Information Processing: An Introduction to Psychology, 2nd ed. Academic Press: London, UK.

[48] Rumelhart, D.E., 2017. Schemata: The building blocks of cognition. In: Spiro, R.J., Bruce, B.C., Brewer, W.F. (eds.). Theoretical Issues in Reading Comprehension. Routledge: New York, NY, USA. pp. 33–58.

[49] Scherer, K.R., 2003. Vocal communication of emotion: A review of research paradigms. Speech Communication. 40(1–2), 227–256. DOI: https://doi.org/10.1016/S0167-6393(02)00084-5

[50] Scherer, K.R., Ellgring, H., 2007. Multimodal expression of emotion: Affect programs or componential appraisal patterns? Emotion. 7(1), 158–171. DOI: https://doi.org/10.1037/1528-3542.7.1.158

[51] Klasen, M., Chen, Y.H., Mathiak, K., et al., 2014. Multimodal emotion processing in autism spectrum disorders: An integrative review. Frontiers in Human Neuroscience. 8, 897. DOI: https://doi.org/10.1016/j.dcn.2012.08.005

[52] Ko, H.J., 2012. Reconsideration of the methodology of Korean accent study. Journal of North-east Asian Cultures. 33, 257–268. DOI: https://doi.org/10.17949/jneac.1.33.201212.014 (in Korean)

How to Cite

Yi, S. Y., & Chong, H. J. (2025). Linguistic Prosody and Melodic Characteristics of Korean Emotion Vocabulary: A Musical-Linguistic Analysis. Forum for Linguistic Studies, 7(10), 301–317. https://doi.org/10.30564/fls.v7i10.11138