Lecture: Linguistic Complexity as an Indicator of Writing Quality
Exploring correlations between written essay grades and measures of syntactic and lexical complexity.
This work examines correlations between measures of linguistic complexity and writing quality in eighth-grade students' persuasive essays from the ASAP dataset. These correlations can indicate useful features for automated essay scoring and contribute to personalized feedback for students. The lecture will describe the various measures of syntactic and lexical complexity used, and present the results of an analysis of the relationship between these measures and essay grades. The results highlight several useful indicators of writing quality, as well as potential pitfalls and weaknesses of certain complexity measures. At the end of the lecture, syntactic and lexical complexity will be compared as predictors of quality, and applications of this work will be discussed.
The development of writing skills is essential to success in many areas, both in education and the workplace (McNamara et al., 2010). Despite their importance, these skills are slow to develop and generally of a low standard (Ferretti and Graham, 2019). This work explores the relationship between linguistic complexity and writing quality in persuasive essays written by eighth-grade students. More specifically, the study aims to identify correlates of awarded essay grade among various measures of syntactic and lexical complexity. Understanding the linguistic features that correlate with writing quality could find application in areas such as personalized pedagogical feedback and the development of automated essay scoring systems (McNamara et al., 2014; Kumar and Boulanger, 2020). With these applications in mind, the set of essays was partitioned into three groups, high, medium, and low quality, based on the essays' grades. As a result, characteristics of essays belonging to specific groups should emerge, highlighting areas for targeted improvement in low-quality essays and features to be emulated from high-quality essays.
Several measures of syntactic complexity were calculated using the L2SCA system (Lu, 2010). For lexical complexity, MTLD, HD-D, and the Maas TTR (McCarthy and Jarvis, 2010), together with word- and syllable-count statistics, were used in the analysis. The lecture will describe these measures of complexity in detail.
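As a rough sketch of the three lexical diversity measures (not the exact implementations used in the study), the following Python functions compute MTLD, HD-D, and the Maas TTR from a token list. The 0.72 TTR threshold and the 42-token sample size are the conventional defaults described by McCarthy and Jarvis (2010).

```python
import math


def _mtld_pass(tokens, threshold=0.72):
    """One directional MTLD pass: count TTR 'factors'."""
    factors = 0.0
    types, count = set(), 0
    for tok in tokens:
        count += 1
        types.add(tok)
        if len(types) / count <= threshold:
            factors += 1          # segment's TTR fell to the threshold
            types, count = set(), 0
    if count > 0:                 # partial credit for the leftover segment
        ttr = len(types) / count
        factors += (1 - ttr) / (1 - threshold)
    return len(tokens) / factors if factors > 0 else float(len(tokens))


def mtld(tokens, threshold=0.72):
    """MTLD: mean of a forward and a backward pass over the tokens."""
    return (_mtld_pass(tokens, threshold)
            + _mtld_pass(tokens[::-1], threshold)) / 2


def hdd(tokens, sample_size=42):
    """HD-D: expected proportion of types in a random 42-token sample,
    computed exactly from the hypergeometric distribution."""
    n = len(tokens)
    if n < sample_size:
        raise ValueError("text shorter than the sample size")
    freqs = {}
    for tok in tokens:
        freqs[tok] = freqs.get(tok, 0) + 1
    expected_types = 0.0
    for f in freqs.values():
        # P(this type is absent from a random sample of `sample_size` tokens)
        p_absent = math.comb(n - f, sample_size) / math.comb(n, sample_size)
        expected_types += 1 - p_absent
    return expected_types / sample_size


def maas_ttr(tokens):
    """Maas index a^2 = (log N - log V) / (log N)^2; lower = more diverse."""
    n, v = len(tokens), len(set(tokens))
    return (math.log(n) - math.log(v)) / (math.log(n) ** 2)
```

Unlike the raw type-token ratio, all three measures are designed to be comparatively stable across texts of different lengths, which matters when essays vary widely in word count.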
It was found that, in general, the measures of lexical complexity used were better predictors of essay quality than the syntactic ones. In particular, MTLD and HD-D showed strong positive correlations with essay grade. Reliance on correct punctuation emerged as a weakness of many measures of syntactic complexity, a problem that is particularly acute for students with weaker writing skills. Clause-based measures of syntactic complexity, which do not depend on punctuation, proved more useful and robust in these analyses.
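The punctuation pitfall can be illustrated with a toy example (this is an illustration, not L2SCA's actual implementation): a naive mean-length-of-sentence measure that delimits sentences by terminal punctuation is badly distorted when a writer omits periods, whereas the underlying words are unchanged.

```python
import re


def mean_length_of_sentence(text):
    """Mean sentence length in words, with sentences delimited by
    terminal punctuation (. ! ?), as sentence-based measures assume."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return sum(len(s.split()) for s in sentences) / len(sentences)


punctuated = "I like dogs. They are loyal. They are fun."
unpunctuated = "I like dogs they are loyal they are fun"  # same words, no periods
```

The same nine words yield a mean sentence length of 3.0 when punctuated and 9.0 when the periods are dropped, so a low-skill writer's missing punctuation can masquerade as high syntactic complexity. Clause-based measures sidestep this by counting parsed clauses rather than punctuation-delimited sentences.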
Ferretti, R. P. and Graham, S. (2019). Argumentative writing: Theory, assessment, and instruction. Reading and Writing, 32(6):1345–1357.
Kumar, V. and Boulanger, D. (2020). Explainable automated essay scoring: Deep learning really has pedagogical value. In Frontiers in Education, volume 5, page 186. Frontiers.
Lu, X. (2010). Automatic analysis of syntactic complexity in second language writing. International Journal of Corpus Linguistics, 15(4):474–496.
McCarthy, P. M. and Jarvis, S. (2010). MTLD, vocd-D, and HD-D: A validation study of sophisticated approaches to lexical diversity assessment. Behavior Research Methods, 42(2):381–392.
McNamara, D. S., Crossley, S. A., and McCarthy, P. M. (2010). Linguistic features of writing quality. Written Communication, 27(1):57–86.
McNamara, D. S., Graesser, A. C., McCarthy, P. M., and Cai, Z. (2014). Automated evaluation of text and discourse with Coh-Metrix. Cambridge University Press.