Page 34 - In_at_the_Deep_End_Document

Assessment and feedback: two central drivers for successful learning
Nothing we do affects students more. If we get our assessment wrong, students' whole lives or careers could be jeopardised. Developmental, dialogic feedback is equally vital
to students, so that they can be praised for what they do well, learn from their mistakes, and improve their next piece of work on the basis of our feedback.
We may have ‘lecturer’ in our job title,
but most of us actually spend a significant part of our time designing student assignments and exams, marking students’ work, and giving students feedback on their progress. For many new to teaching in higher education, our roles
in assessment and feedback are real ‘in at the deep end’ experiences, and we can feel very much out of our depth. Sometimes it can feel as if we’re expected automatically to be skilled at making assessment decisions, and at letting students know why and how we made them!
‘Summative’ assessment is normally measured at the end of an element of learning - for example, end-of-course exams. Students usually get the results as marks or grades, and may sometimes not get any further feedback (for example on their exam performance). The purpose of summative assessment is to make a judgement, to ‘sum up’ performance outcomes, so it is normally an end point. ‘Formative’ assessment is often used throughout the programme, and even though the marks or grades may count towards students’ overall awards, the feedback they receive is intended to help them to identify weaknesses, and build on strengths, so as to make their next piece of assessed work better.
With large classes, the time it takes us to give students effective formative feedback increases, and the danger is that the quality of the feedback is reduced by the pressure on assessors. The principal purpose of formative feedback is to form, shape and transform students’ performance, so it tends to be incremental.
Of course, both formative and summative assessment can be blended, so formative feedback often includes numbers or grades and summative assessments, like a major project, will often include formative commentaries.
Students are often quite strategic about their learning - if it counts towards their overall qualifications they will engage fully with what
is being asked - if it doesn’t, many won’t pay much attention to
it! This, in fact, is an intelligent response to the situation students often find themselves in - a heavy burden of coursework assessment and looming exams, frequently with other significant calls on their time beyond study, requiring them to prioritise whatever they deem essential.
Yet assessment and feedback are very often the areas where students are least satisfied with their experiences of higher education, as shown (for example) by the data from national students’ surveys in Ireland, the U.K. and many other nations.
It may be the case that students who are highly successful in assessment tend to be perfectly satisfied with the feedback they get, and that much of the student dissatisfaction with assessment and feedback is attributable to students who fare less well, and perhaps rightly believe they could have done better if they had been given more formative feedback early enough to improve their performance.
Because assessment is so important and personal to students, emotions can run high. Students can be very sensitive to the language we use when we give them feedback, particularly when our feedback is just words on paper or on-screen, without the encouraging smiles, warmth and humanity which can accompany face-to-face feedback. It is all too easy for us, despite our best intentions, to damage students’ motivation in our attempts to give them constructive feedback on weaknesses in their work. This danger is exacerbated if we have large piles of work to mark, and not enough time to phrase our feedback carefully or to comment on positive features of the work as well.
Assessment is at the sharp end for us too: if things go wrong we can have very unhappy students, our assessment judgements can be challenged, and we are likely to be under the scrutiny of external examiners. For academics, it can also be one of the key causes of excessive stress and pressure when it feels unmanageable.
Validity and reliability
These terms are widely used in higher education professional practice, but what do they actually mean?
Validity is about making sure that we’re using assessment to measure exactly what we set out to measure - students’ evidence of achievement of what we said they should be able to know and do in the intended learning outcomes. We need therefore to make sure that we know exactly which intended learning outcomes each element of assessment is addressing. But sometimes validity can be compromised by the form of assessment we choose - for example traditional exams sometimes end up measuring how well students can write about what they know, rather than how well they can use and apply the information.
Reliability is about making sure that we’re being fair and consistent, and that each mark or grade is accurate and realistic. If two different assessors marked the same piece of work using the agreed criteria, they should end up with broadly the same mark/grade (inter-assessor reliability). Fairness really matters to students, and they’re very quick to notice any irregularities in our practice. This means that we’ve got to make a well-honed marking scheme for each element of assessed work (whether it is an exam question,
