an essay, a report, a presentation, or many other possibilities) so that we can be sure that we’re being equally fair to all of our students. When there’s a really good marking scheme, different assessors will usually agree readily on the marks to be awarded for particular exam answers or assignments. Also, there won’t be any significant variation in the standard of marks you give from the first piece of work to the last piece of work in the pile - this is actually quite difficult to achieve, even for very well-practised assessors, but it can be done if we regularly scrutinise evidence against the criteria!
Transparency means we have to make sure that our students know how assessment works: it shouldn’t involve guessing games about what is actually being sought.
They need to know what we’re looking for in an excellent answer. They need to know what they must do to reach a pass mark. They need to know what would not get them a pass. In other words, we need to help our students to see that what is being assessed is their evidence of achievement of the intended learning outcomes, and that these outcomes are useful to them as goalposts for their studying.
Authenticity has two sides. We need to be sure that what we are marking is indeed the work of the students concerned - in other words, that they haven’t copied it or downloaded chunks from the web. At least in traditional exam situations, we’re fairly sure about whose work it is, so long as effective invigilation is undertaken. The term ‘contract cheating’ is now used for students commissioning or purchasing work to pass off as their own efforts in coursework assignments; it is a really complex problem to counter, since it isn’t readily detected with anti-plagiarism software like Turnitin. ‘Academic integrity’ needs to be assured. But plagiarism and contract cheating can largely be problems of our (or our institution’s) own making. We need to design assignments that make it really difficult for students to plagiarise, by making what we assess more clearly students’ individual efforts (for example, critical incident accounts, reflective logs and so on), and perhaps include regular checkpoints, especially for major assignments, which require students to discuss progress with us incrementally.
The other side of authenticity is about how ‘real life’ our assessment is in practice. For example, we can’t expect to measure drama performance skills effectively by asking students to sit in an exam room and write about drama performance skills! We must therefore ensure that we choose and use appropriate methodologies and approaches to fit the student cohort, the level, the context and the subject discipline.
Inclusivity in assessment is in many ways the hardest to achieve! It’s about doing everything in our power to avoid discriminating against anyone who may be disadvantaged in any way - whether through physical or mental health difficulties, learning in a second or third language, being in a minority group in a large cohort, and so on. We’ve got to strive to make our assessment tasks as straightforward and transparent as possible for all students, so that they each know what they’re trying to do with them. The better we know all the students we’re assessing, the easier it becomes to avoid disadvantaging any of them in assessment - but it is then difficult to plan assessments before meeting the student cohort, as each cohort will bring its own surprises!
Manageability has two sides - assessment needs to be manageable for us, and for our students. In many countries it can be argued that there’s too much assessment, and that, because of all the pressure this causes, it doesn’t work very well. We need to streamline assessment so that it is of high quality and we’re assessing (making judgements on important things) rather than just marking (merely ticking off routine things, for example spelling, punctuation and grammar). When students themselves are overloaded with assessment, they are often driven into surface-learning mode, learning things rapidly just for the exam or assignment, then forgetting them just as quickly.
BEYOND EXAMS, ESSAYS AND REPORTS
Traditionally, higher education in several countries has long placed too much emphasis on written assessment, and students’ qualifications have depended too much on their skills in quite a narrow range of ways of demonstrating their achievement of the intended learning outcomes: answering exam questions, writing essays and writing reports. There are many alternatives, including:
• Computer-marked multiple-choice tests or exams: once set up, the technology can handle all the marking, and can even print feedback for candidates as they leave the test venue, or give them instant on-screen feedback if the main purpose is feedback rather than testing. It can also help to manage data analysis related to assessment. Care has to be taken, however, when designing multiple-choice questions for testing purposes: this should never be regarded as a quick fix.
• Short-answer exams or tests: these can reduce the effect of students’ slow handwriting speed, and can allow us to cover a greater breadth of the syllabus in a given assessment element than when long answers are required.
• Annotated bibliographies: for example, where students are asked to select (say) the five most relevant sources on a particular idea or topic, then review them critically, comparing and contrasting them in only (say) 300 words. This can cause students to think more deeply about the topic than they might have done when writing a 3,000-word essay (and annotated bibliographies are much faster to mark). Extra value can be added in terms of information literacy if students are also asked to provide a rationale for their choice of sources.
• Portfolios of evidence: these can take even longer to assess than essays or reports if sensible constraints on size or volume are not put in place, but they can test far more than essay- or report-writing skills, since the submitted elements tend to be organised around the learning outcomes demonstrated, and many regard portfolios as a much more ‘rounded’ and authentic form of assessment.
• Oral presentations: these can focus on important skills that would not be addressed or assessed through written formats, whether in individual or group presentations. It’s worth remembering, however, that some students can be quite terrified of speaking under pressure, particularly in front of an audience, however important this may be in their later careers - so if presentations are used, it’s worth checking how they relate to the specified learning outcomes.
• Vivas (oral exams): these can be a better measure of students’ understanding, as their reactions to on-the-spot questions are gauged and there is no doubt about the authenticity of their answers (such doubts can colour the assessment of various kinds of written work). However, some students can be disadvantaged by being terrified of being put on the spot in this way - even though they may need to do this in their future careers. Creating good rapport with the
FURTHER READING
Chapters 7 and 8 of Brown, S. (2015) Learning, teaching and assessment in higher education: global perspectives. London: Palgrave Macmillan, on fit-for-purpose assessment; and, on inclusive assessment, Adams, M. and Brown, S. (2006) Towards inclusive higher education: supporting disabled students. London: Routledge.