Using GradeMark® to improve feedback and engage students in the marking process

29 April 2018 - By: Alison Graham and Sara Marsham, Newcastle University

Many of us will be familiar with the frustrations that students frequently express about the assessment and feedback process: marking criteria are too generic, feedback is too negative and not useful, and so on. In response, we embarked on a project, backed by funding from our university’s Learning and Teaching Innovation Fund, to improve the clarity of marking criteria, link feedback more explicitly to those criteria, and help students understand the criteria and apply them to their own work. We also wanted to produce a system that gave consistent marks and feedback even when work was marked by different assessors.

Our project therefore had two components: a series of tutorials that gave students the opportunity to practise using the criteria to mark exemplars, and a trial of GradeMark® (part of Turnitin®) as an electronic platform for providing feedback on coursework. GradeMark® provides in-built and customisable comment banks that can make marking more efficient and more consistent between different markers.

The first step was to write assignment-specific marking criteria based on students’ prior knowledge. In some modules this required running preliminary focus groups to better understand students’ level of prior knowledge. We then ran a series of tutorials to engage students with these assignment-specific criteria. This involved talking through the criteria, picking out key words for discussion, encouraging students to see the differences between performances in different grade ranges, and asking students to grade and rank exemplars. Depending on the size of the group, students might discuss with their peers, discuss with the session leader, or vote using anonymous audience participation software.

In the first iteration of this project, we chose modules with summative in-course assignments that were somewhat unfamiliar to students, e.g. a laboratory report at Stage 1, a reflective log at Stage 2 and a grant application at Stage 3. This ensured that students had few preconceived ideas about the structure and content of the assessment and little or no expectation of the format of feedback to be received. When asked, students frequently said that the most useful aspect of these tutorials was seeing examples of other students’ work, rather than talking through the criteria or having the opportunity to mark work themselves. This was perhaps a result of the types of assignment chosen and the students’ unfamiliarity with those formats. After engaging students with the marking criteria, markers used GradeMark®, including custom-built assignment-specific comment libraries, to provide feedback and an overall mark.

Once feedback was returned, we collected students’ thoughts on the tutorials and on the use of GradeMark® as a feedback tool. Ninety-seven percent of students found it helpful when feedback was linked to marking criteria. Anecdotally, students reported that it was helpful to see the grade range achieved by individual parts of the assessment as well as the overall mark, as this indicated areas to work on in the future and increased transparency. The majority of students preferred electronic feedback to feedback on a pro-forma or mark sheet, and over 80% would like to have received more electronic feedback in other modules. One concern was that students might find the library comments too generic and not sufficiently specific to their piece of work; however, over half of the group indicated that they found the comments specific. Overall, students indicated that, compared with other feedback, the electronic comments were more positive, fairer, more thorough, more helpful, easier to understand and more specific. It was also clear that students valued a variety of feedback: they wanted “summary” comments as well as specific comments.

GradeMark® allows analysis of the feedback given to students: for instance, the number of students who receive different types of grammatical comments, or the number who fall into each mark range for a given criterion. This allows lecturers to identify common errors, which can be helpful when planning the following year’s classes and deciding where class time would be most beneficially spent. GradeMark® also indicates when students have viewed their feedback. In one module, we found that 84% of students obtaining 70-100% viewed their feedback within 3.5 weeks of the marks being released, whereas only 14% of those obtaining a failing mark (0-39%) did so. Between 46% and 49% of students obtaining 40-69% viewed their feedback within the same time frame.

Reflecting on the project overall, we saw numerous benefits from the students’ perspective: online feedback is easier to read, is automatically saved online, and can be accessed in private. From the markers’ perspective, there is no need to print work or scan it for retention, more detailed comments can be given efficiently, and a library bank of comments helps to avoid repetition. Electronic marking also complements online submission and return of work, and links to originality checking. A comment library can help to increase consistency where multiple markers are involved, for example when postgraduate demonstrators or teaching assistants mark work from a large cohort.

Since our initial use of engagement tutorials and GradeMark® in a limited number of modules, we have embedded GradeMark® in other modules we teach, and several colleagues both within our departments and across the University have done the same. We continue to develop the marking criteria each year and devise new activities to engage students with the marking process. One aspect we wanted to include from the outset was to engage students in writing the marking criteria themselves. It quickly became apparent that this was too ambitious at the time, but we still hope to incorporate it. There is also a question about how best to support students who continue to struggle with marking criteria, particularly with higher levels of comparative judgement. In-class time is precious, and some staff may be reluctant to use face-to-face time to discuss marking criteria rather than subject content. Could tools for independent learning be useful here?

If any readers would like to know more about our project, we encourage them to get in touch. We are happy to share the materials we have created throughout this project and to offer advice on how to implement similar activities.
 
Category: Teaching and Learning