13 January 2009
Following my work related to the Audience Response System (ARS) pilot at the University of Bath, I recently came across a paper by Jeremy B. Williams of Universitas 21 Global, Singapore, entitled Assertion-reason multiple-choice testing as a tool for deep learning: a qualitative analysis (2006). [download]
Assertion-reason questions (ARQs) are a developed form of Multiple Choice Questions (MCQs) which aim to ‘encourage higher-order thinking on the part of the student’. As Berk (1998) remarks, the MCQ format ‘holds world records in the categories of most popular, most unpopular, most used, most misused, most loved, and most hated’. Quite a statement! Williams states that whilst the ARQ format has not been widely used or embraced, it ‘constitutes a useful assessment tool and one that appears to be superior to the traditional MCQ format in terms of student learning outcomes’.
The aim of ARQs, as outlined by Williams’ project team, was to develop a question set that would test reasoning (procedural knowledge) rather than recall (declarative knowledge). The paper refers to Bloom’s taxonomy (Bloom, 1956), placing ARQs at the highest levels of learning within the cognitive domain, namely Application, Analysis and Synthesis.
As with a traditional set of MCQs, ARQs present students with a given set of possible solutions. However, the key difference in ARQs is that they also include a true/false element. ‘Specifically, each item consists of two statements, an assertion and a reason, that are linked by the word “because”.’ Traditional MCQs will usually only test one particular issue or concept – ARQs will test two per question (the assertion and reason statements) as well as the validity of the ‘because’ link between them. As Williams observes, ‘…judging the correctness of two statements must be harder than judging the correctness of one’.
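To make the structure concrete, here is a minimal sketch (not taken from Williams’ paper, and purely illustrative) of how an ARQ item and the standard five answer options might be represented in code; the field names and the example question are my own assumptions.

```python
from dataclasses import dataclass

# The five standard ARQ answer options: the student judges the truth of
# each statement and whether the reason correctly explains the assertion.
OPTIONS = {
    "A": "Assertion true, reason true, and the reason explains the assertion",
    "B": "Assertion true, reason true, but the reason does not explain the assertion",
    "C": "Assertion true, reason false",
    "D": "Assertion false, reason true",
    "E": "Assertion false, reason false",
}

@dataclass
class ARQItem:
    assertion: str       # first statement, to be judged true or false
    reason: str          # second statement, linked to the assertion by 'because'
    correct_option: str  # one of the keys in OPTIONS

    def as_question(self) -> str:
        """Render the item as it would appear to the student."""
        lines = [f"{self.assertion} BECAUSE {self.reason}", ""]
        lines += [f"{key}. {text}" for key, text in OPTIONS.items()]
        return "\n".join(lines)

# A hypothetical example item (content invented for illustration only).
item = ARQItem(
    assertion="Demand for a product falls when its price rises",
    reason="consumers substitute cheaper alternatives where they exist",
    correct_option="A",
)
print(item.as_question())
```

Note that a single item asks the student to make three judgements (assertion, reason, and the link), which is what pushes it beyond simple recall.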
However, the construction and use of ARQs are not without their drawbacks. For example, Connelly (2004) observes that it took some time for the students to become accustomed to this format. In addition, some students (for whom English was not their first language) found that the ARQs were testing their ‘English skills rather than knowledge of the subject being studied’.
Given my work with the ARS at the University of Bath, the ARQ format certainly lends itself to the type of questions that could potentially be asked with the hardware. The aim for any course of ARS-related PowerPoint slides should be to limit the number of slides where students already know the answer, and instead to get them to think about and consider the arguments put in front of them, thereby using ‘higher-order skills’. I wrote a blog post in a similar vein late last year on the ARS project website.
ARQs could well be one way of achieving this, though I feel that I should first undertake additional research to assess the validity of Williams’ claims. For example, has further research been done in this area since 2006? If so, what were the results? Additionally, have ARQs been successfully applied to ARS-related formative (or summative) assessments?