
A Comparative Study of Different Feedback Methods on University Students' English Writing Ability

Fengxiang Lyu

Abstract


The study takes process writing theory, sociocultural theory, and the noticing hypothesis as its theoretical background and uses questionnaires, pre-tests, and post-tests as research methods to conduct empirical research on teacher feedback and automatic feedback in university English writing teaching, carrying out an experiment with seniors from a university in Hong Kong to answer the research questions. The experiment lasts 16 weeks, and a total of 300 students from 10 classes will participate, with 200 samples in the experimental and control classes respectively. During the writing instruction, students in the experimental group receive feedback that combines online automatic feedback and teacher feedback, while students in the control group receive teacher feedback only. Each participating student completes two writing tasks, both drawn from original IELTS writing prompts, and two English teachers score every submission against the grading criteria after each task is completed. This research will use SPSS 25 to analyze the experimental data collected in the previous semester, then summarize and reflect on the experimental results to offer pedagogical implications for second language teaching, particularly regarding which feedback mode to adopt.
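To illustrate the between-group comparison the abstract plans to run in SPSS 25, the sketch below performs an equivalent independent-samples t-test on the two groups' post-test scores using Python and SciPy. This is only an assumed analogue of the SPSS analysis; the score lists, variable names, and the 0.05 threshold for Levene's test are hypothetical and not taken from the study.

```python
# A minimal sketch of the group comparison described in the abstract, using
# Python/SciPy as a stand-in for SPSS 25 (the tool the study actually names).
# The score lists below are hypothetical placeholders, not data from the study.
from scipy import stats

experimental_scores = [6.5, 7.0, 6.0, 7.5, 6.5, 7.0]  # hypothetical post-test IELTS band scores
control_scores = [6.0, 6.5, 5.5, 6.5, 6.0, 6.0]       # hypothetical post-test IELTS band scores

# Levene's test checks the equal-variance assumption before choosing the t-test variant.
levene_stat, levene_p = stats.levene(experimental_scores, control_scores)

# Student's t-test when variances look equal, otherwise Welch's t-test.
t_stat, p_value = stats.ttest_ind(
    experimental_scores,
    control_scores,
    equal_var=levene_p > 0.05,
)

print(f"Levene p = {levene_p:.3f}, t = {t_stat:.3f}, p = {p_value:.3f}")
```

The same comparison can be run on the pre-test scores to confirm that the experimental and control groups start from a comparable baseline before the feedback treatment begins.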

Keywords


Teacher feedback; Online machine feedback; English writing; University students; Second language acquisition







DOI: http://dx.doi.org/10.70711/eer.v2i6.5432
