The Efficacy of English Academic Writing Improvement of Doctoral Students of Humanities and Social Sciences in China

Authors

  • Yuwei Huang

    School of Foreign Studies, University of Science & Technology Beijing, Beijing 100083, China

  • Xiaohui Liang

    School of Foreign Studies, University of Science & Technology Beijing, Beijing 100083, China

  • Chang Li

    Department of Education, The Catholic University of Korea, Bucheon 14662, Republic of Korea

DOI: https://doi.org/10.30564/fls.v7i7.10444
Received: 10 June 2025 | Revised: 16 June 2025 | Accepted: 19 June 2025 | Published Online: 15 July 2025

Abstract

Over the past 30 years, the expansion of doctoral enrollment in China has produced a substantial increase in the number of doctoral students in the humanities and social sciences. Most of these students, however, publish predominantly in Chinese-language journals and often have underdeveloped English academic writing skills, so identifying effective strategies to enhance their English academic writing proficiency is imperative. The emergence of AI tools has introduced both challenges and opportunities for students and educators in writing training. This paper investigates how writing feedback provided by AI systems differs from that provided by human instructors for this group's English academic writing. The findings indicate that the students engaged more with AI-generated feedback than with feedback from human instructors. Analysis revealed that human instructors delivered feedback characterized by greater Accuracy and more Essential Feedback (p = 0.023 and p = 0.013, respectively), whereas AI feedback excelled in offering clearer Direction for Improvement and a more Supportive Tone (p < 0.001 and p = 0.009, respectively). No statistically significant difference was identified between AI and human instructors concerning Criterion-Based Comments (p = 0.323). Based on these empirical findings, the paper proposes recommendations for integrating AI and human instructor feedback mechanisms to strengthen the English academic writing skills of doctoral students, a large demographic within China with an urgent need for such training.

Keywords:

Writing Feedback; AI; Human Instructors; English Academic Writing; Humanities and Social Sciences



How to Cite

Huang, Y., Liang, X., & Li, C. (2025). The Efficacy of English Academic Writing Improvement of Doctoral Students of Humanities and Social Sciences in China. Forum for Linguistic Studies, 7(7), 565–578. https://doi.org/10.30564/fls.v7i7.10444