The pandemic has prompted universities to initiate or accelerate plans to move exams online. These forced transitions have often been painful, involving stress and burnout. Exams have been a particular pain point.
There are many accounts from the pandemic of widespread cheating in online exams, ranging from the amusing to the depressing. Either way, cheating creates problems for everyone involved.
Read more: Unis are using artificial intelligence to help students take exams honestly. But this creates its own problems
We need to understand student achievement to effectively plan for and support student learning. Assessment exists to inform this understanding.
Exams are high-stakes opportunities to generate large “chunks” of evidence of student achievement. Cheating invalidates this evidence, with ramifications at the individual, course, and program levels.
Academic program reviews, for example, are driven by analyses of that year’s exam results. Exam data helps staff decide what changes to make to a program. If a significant proportion of exam scores are the result of cheating, this can lead to incorrect evaluations of the curriculum and misguided design of future exams.
What happened during the pandemic?
It is understandable, then, why many universities have adopted remote proctoring. This involves using artificial intelligence software to identify and monitor students during exams. The value proposition of remote proctoring is that it lets us virtually replicate the security of an in-person, seated, invigilated exam, wherever our students are. It seemed like a tailor-made solution to the pandemic.
There is some evidence that remote proctoring is working as intended. However, we also need to consider emerging concerns.
Many students are hostile to what they see as intrusive methods of observation. There is also concern about how critically universities assess allegations of cheating arising from the “flags” generated by monitoring software.
On the faculty side, remote proctoring doesn’t necessarily reduce staff workload. It may even increase the workload associated with exams.
Read more: ANU to invigilate exams using remote software, many students unhappy
Working in educational assessment for two decades has taught me that cheating in exams is a serious and complex problem. It defies easy solutions.
Remote proctoring continues to have a role to play. However, it is essential that we define that role critically and carefully.
So why not go back to the old ways?
As enrollments increase and in-person teaching resumes, it can be tempting to return to familiar testing methods. Bringing back traditional tests, however, invites back other well-documented and chronic problems.
Organizing mass, in-person exams is a major logistical challenge. Ensuring the relevance of traditional exams to modern skills is also problematic.
It’s worth asking ourselves: How comfortable were we with testing practices before the pandemic?
Of the many ways we engage learners in higher education, assessment is typically the slowest to change. As exams are high stakes, it is not surprising that they are especially resistant to change.
So we are presented with an extraordinary and timely opportunity. There is now a strong push for systemic improvement of learning, including better assessment.
Read more: COVID has changed student needs and expectations. How will universities respond?
Let me suggest two ways forward related to better exam practice. These are not prescriptive directives. Instead, they are resource-supported ways to open conversations within institutions and teaching teams, so they can explore solutions that make sense for them and their students.
Make informed decisions
Scholarship informs our subjects. It should also inform assessment within our subjects.
The scholarship of teaching and learning (SoTL) is not new in higher education. In my experience, though, SoTL (or SoLT, as the popular forms of the acronym suggest) has often de-emphasized or failed to include assessment.
Increasingly, we need to embrace SoLTA: scholarship that embraces and promotes evidence-based and research-supported assessment methods. Embracing SoLTA involves becoming deeply familiar with the best research on assessment and exam practices, both in higher education generally and in disciplinary contexts. This includes informing practice by consulting highly regarded journals such as Assessment and Evaluation in Higher Education.
As with our disciplines, we must see ourselves not only as consumers of this scholarship but also as creators of it. This gives universities an opportunity to support teachers, including teaching-focused academics, in applying scholarship to their assessment practice.
Read more: What’s the point of assessment in higher education anyway?
Don’t reject exams, make them better
Exploring alternatives to exams is good general advice, but it is not always possible. Programs often have legitimate requirements to retain exams, including the expectations of external accrediting bodies. In these cases, it is better to improve exams than to seek alternatives to them.
One way to improve is to adopt better open-book testing methods. For exams with multiple-choice questions, there are strong guidelines for improving these. There are even approaches that allow multiple-choice questions to elicit cognitively complex responses.
The two main problems I have found with online exam practices are cheating and students using search engines to look up answers. One way to address the first problem is through case-based approaches that use novel material created specifically for the exam.
Collusion is a hard problem to crack, but some are taking novel approaches to it. An unexpected and welcome part of this involves collaborative, staged exams.
Read more: Online learning has changed the way students work – definitions of ‘cheating’ need to change too
Business as usual is not good enough
Changing assessment is challenging. Higher stakes mean greater challenges and greater resistance. As universities find their post-pandemic footing, we have a window of opportunity in which we know we must change.
This allows us to answer the question: What’s next for exams? Clinging to new and hastily adopted practices provides an unsatisfactory answer. A return to business as usual is no better.
Instead, we can take a scholarship-informed approach to developing our tests and ourselves to better face an uncertain and challenging post-pandemic future.