Disembodied Meaning? Generative AI and Understanding
DOI: https://doi.org/10.30564/fls.v7i3.8060

Abstract
This study explores the cognitive and philosophical implications of Large Language Models (LLMs), focusing on their ability to generate meaning without embodiment. Grounded in the coherence-based semantics framework, the research challenges traditional views that emphasize the necessity of embodied cognition for meaningful language comprehension. Through a theoretical and comparative analysis, this paper examines the limitations of embodied cognition paradigms, such as the symbol grounding problem and critiques like Searle’s Chinese Room, and evaluates the practical capabilities of LLMs. The methodology integrates philosophical inquiry with empirical evidence, including case studies on LLM performance in tasks such as medical licensing exams, multilingual communication, and policymaking. Key findings suggest that LLMs simulate meaning-making processes by leveraging statistical patterns and relational coherence within language, demonstrating a form of operational understanding that rivals some aspects of human cognition. Ethical concerns, such as biases in training data and societal implications of LLM applications, are also analyzed, with recommendations for improving fairness and transparency. By reframing LLMs as disembodied yet effective cognitive systems, this study contributes to ongoing debates in artificial intelligence and cognitive science. It highlights their potential to complement human cognition in education, policymaking, and other fields while advocating for responsible deployment to mitigate ethical risks.
Keywords: Large Language Models; Semantic Competence; Disembodied Meaning
License
Copyright © 2025 Jordi Vallverdú, Iván Redondo

This is an open access article under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) License.