Students’ Views on the Use of Formative Assessment and Feedback for Learning at Higher Education in Singapore During the Covid-19 Pandemic
James Kwan*
Nanyang Technological University, Singapore
Received: September 01, 2024; Published: September 19, 2024
*Corresponding author: James Kwan, Nanyang Technological University, Singapore
The Covid-19 pandemic has caused significant disruption to learning and teaching practices within the
higher education sector in Singapore. This study examines the effectiveness of formative assessment, feedback,
and peer assessment on undergraduate and postgraduate students’ learning outcomes during the pandemic. This
study employed a quantitative method approach where students (N = 251) from an American university with
an Asian campus in Singapore completed an Assessment and Feedback Experience Questionnaire (AFEQ). The
findings revealed significant differences in feedback and peer assessment effectiveness between undergraduates
and postgraduates. However, there were no significant differences in the perceptions of the effectiveness of
formative assessment, feedback, and peer assessment between gender and age groups for both undergraduates
and postgraduates. Regarding the mode of study, there was a significant difference in their perceptions of
feedback between full-time and part-time students. These findings have far-reaching implications for
students, instructors, and the university in the post-pandemic era.
Abbreviations: OTL: Opportunity to Learn; TESTA: Transforming Experience of Students Through Assessment;
AEQ: Assessment Experience Questionnaire; NAEQ: Norwegian Assessment Experience Questionnaire; AFEQ:
Assessment and Feedback Experience Questionnaire; MBA: Master of Business Administration
The Covid-19 pandemic has thrown the world into turmoil and posed an
unprecedented challenge for education systems globally, as more
than 1.7 billion students were affected by school and university
closures in 192 countries (Daniel, et al. [1,2]) and by declining enrolment
of international students at most universities worldwide
(MacKie, et al. [3,4]). The ‘new normality’ (Tesar, et al. [5]) has forced
many higher education institutions, both public and private, to replace
physical classes with online remote learning (Basilaia, et al. [6-
9]) such as digitalised virtual classroom (Mulenga, et al. [10,11]), and
mobile learning (Naciri, et al. [12]). In terms of assessments, many
universities had to grapple with either forgoing all summative
assessments until the situation became more manageable or changing the assessment
structure (Camara, et al. [13,14]). Large-scale examinations
have been replaced by low-stakes online remotely proctored assessments (Jodoin,
et al. [15,16]). Higher education instructors have experienced
many challenges in their teaching, assessment, and feedback practices
during the tumultuous times of the pandemic. The early outbreak
of the pandemic has caused educators to switch from traditional
classroom teaching to a blended learning delivery, which demands a
change in their teaching style from teacher-centric to student-centric
(Tan, et al. [17,18]).
Many instructors have little prior experience in online facilitation
and providing online assessment; an understanding of e-pedagogy
is vital to improving engagement and motivation among students
(Garrison, et al. [19-22]). In Singapore, universities and private higher
education institutions responded swiftly amidst the pandemic by
delivering all learning activities online and converting all summative assessments to online proctored examinations or replacing them with individual
assignments or team projects (Tan, et al. [17]). These changes
occurred between 10 February and 1 June 2020, and many students
expressed anxiety about the sudden transition to fully online learning
and the need to adapt to online assessment. Instructors also felt
the stress of converting the curriculum to online delivery and changing
the assessments to an online format, including peer assessment.
While recognising the importance of having assessments that align
with the learning outcomes, scholars argued that the loss of opportunity to
learn (OTL) is perceived as a threat to the reliability and comparability of test scores
(De Pascale, et al. [23,24]). To minimise OTL loss caused
by Covid-19 and take into consideration the diverse cultural, social,
and learning abilities of students, education assessment scholars reviewed
existing literature to identify operational psychometric procedures
and (re)design assessments that integrate theoretical concepts
and job-related skills, knowledge, and abilities with evidence of fairness,
reliability, and validity (Keng, et al. [24]).
Thus, this study seeks to examine students’ and instructors’ perceptions
of the effectiveness of formative assessment, feedback, and
peer assessment in enhancing students’ learning during the pandemic
in Singapore.
Motivation
Several studies reported that the Covid-19 pandemic had caused
university students to face academic burnout (Fernández-Castillo,
et al. [25-28]) and had affected their wellbeing and their ability to cope with their studies,
mental health, social connectedness, and life issues (Aristovnik et
al. [29-32]). Educational researchers worldwide have been
presenting studies examining the impact of the pandemic and online
learning on students’ academic performance, mental health, social
connectedness, or life issues in Bangladesh (Shuchi, et al. [33]); China
(Cao et al. [34-37]), France (Essadek, et al. [38]), Germany (Händel, et
al. [39]), India (Kapasia, et al. [40,41]), Pakistan (Adnan, et al. [42]),
the Philippines (Labrague, et al. [26,43]), Saudi Arabia (Khan, et al.
[44]), Spain (Odriozola-González, et al. [45]), Switzerland (Elmer, et
al. [46]), Ukraine (Nenko, et al. [47]), the U.K. (Burns, et al. [48-50]),
the U.S. (Bono, et al. [51-57]), and Vietnam (Tran, et al. [58]). While
there were studies examining the impact of the pandemic on students’
academic burnout, resilience level, campus connectedness (Kwan, et
al. [22]), and adoption of online learning and teaching in Singapore
(Tan, et al. [17]), it appears that there is no study examining the use of
formative assessment and feedback on students’ learning in Singapore
during the pandemic.
Against the backdrop of the Covid-19 pandemic, this study aims
to examine the effectiveness of formative assessment, feedback, and
peer assessment on undergraduate and postgraduate students’ learning
approaches, particularly in the higher education sector in Singapore.
This topic is worth investigating for three reasons. First, from
the constructivist theoretical approach, feedback is regarded as one of
the most critical aspects of teaching, learning, and assessment practices
(Carless, et al. [59-62]). As there is no universally accepted definition
and purpose of assessment feedback, and there is an increasing
body of evidence that current feedback practices are poorly
executed in higher education (Bell, et al. [63-66]), this study will shed
some light on the effectiveness of feedback (Hounsell, et al. [67]),
based on the Feedback Mark 2 model propounded by Boud, et al.
[68], on students’ learning from the students’ perspective. Second,
at the practical level, there have been many changes in the teaching
and assessment practices in the higher education sector in Singapore
amid the pandemic, such as the increasing use of hybrid teaching,
blended learning, and online assessment (Ng, et al. [69,70]). Thus, it
is believed that this study may provide further insights to teaching
faculty and policymakers in higher education on the effective use of
formative assessment and feedback in different modes and technology
platforms to improve student learning during and post-pandemic.
Third, the researcher hopes the findings from this study, which
is believed to be the first to examine formative assessment from students’
perspectives in the higher education sector in Singapore during
the pandemic, will gain interest from higher education assessment
scholars in Singapore and other countries to perform comparative
studies, meta-analyses, and longitudinal studies post-pandemic.
Formative Assessment
Formative assessment, or assessment for learning, is “activities
undertaken by educators and their students in assessing themselves
that provide information to be used as feedback to modify
teaching and learning activities” (Black, et al. [71]). This low-stakes
assessment provides an ongoing source of information for teachers
to understand students’ learning progress, develop interventions
to improve students’ learning, and support them in achieving their
learning goals (Shepard, 2006; Stiggins, 1999; Wiliam, et al. [72]). Formative assessments are broadly categorised into spontaneous and
planned (Dixson, et al. [73]). Spontaneous formative assessments are
impromptu and real-time, as when a teacher calls on students to answer
conceptual questions covered in the previous lesson or engages the
class to participate actively in questions raised by students during
the lesson. Planned formative assessments include quizzes, homework
assignments, and group discussions to assess student progress
and improve collaborative learning (Dixson, et al. [73]). Prior studies
reported that formative assessment with quality feedback enhances
learning and achievement (Black, et al. [74-80]). Based on the theory
of constructivism applied to higher education, assessment is a critical
element for learning and teaching for students’ reflective construction
of knowledge (Ion, et al. [81]). This theory suggests that students’
active involvement in formative assessment includes a wide range of
activities, such as understanding the assessment rubrics, collaboration
with instructors in assessment design, peer assessment, and
feedback from instructors to improve their learning.
In their seminal work on assessment and learning, (Black, et al.
[74]) argued that educational policies in many countries see the classroom
as a ‘black box’ where little attention has been paid to what happens
inside the classrooms. Instead, universities pay considerable attention
to raising education quality, which involves changing the inputs such
as regulation of teachers’ qualifications, adjusting student achievement
standards, investment in technology, etc., and evaluating the
outputs, which include standardised testing for summative assessment,
students’ performances, and graduate employability (Stančić,
et al. [82]). Prior studies reported that the quality of students’ learning
may depend on the assessment used (Carless, et al. [83-85]).
(Biggs, et al. [86]) use the term ‘backwash’ to refer to the impact of assessment
on students’ approaches to learning. For instance, formative
assessments appear more inclined to promote deep learning, while
summative assessments are more conducive to surface learning (Lynam,
et al. [84,87,88]). Assessment scholars argued that assessments
that involve case studies, simulations, and team presentations should
emphasise real-world applications to prepare students to succeed in
the workplace in twenty-first-century society (Carless, et al. [89,90]).
Over the past two decades, formative assessment has gained noticeable
attention in the assessment literature, and many universities have
adopted the use of online formative assessment instead of continuing
with the conventional pen-and-paper summative assessments (Cavus,
et al. [91-95]).
In the context of this study, online formative assessment refers
to “the use of information and communication technology to support
the iterative process of gathering and analysing information about
student learning by teachers as well as learners and of evaluating it
about prior achievement and attainment of intended, as well as unintended
learning outcomes” (Pachler, et al. [96]). From the students’
perspective, online formative assessment provides flexibility and accessibility
concerning time and place, enhancing students’ learning
experiences (Kumar, et al. [97,98]). Students also received more timely
feedback from peer assessment and digitally-marked assessment
compared to conventional teacher-marked assessment (Hoo, et al. [99-103]).
Studies also reported that online formative assessment improves
test reliability with machine marking, enhances impartiality, and
permits question styles to be interactive through multimedia (Akib,
et al. [104,105]). Using online multiple-choice questions that permit
multiple attempts improves students’ engagement and motivation
for learning (Furnham, et al. [106-108]). While there are concerns
over the use of multiple-choice questions in promoting deep learning
(Jordan, et al. [109]), assessment scholars argue that well-designed
multiple-choice questions that emphasise critical thinking and analytical
skills benefit students compared to essay-type questions, which
may prompt students to regurgitate and reproduce factual knowledge
(Brady, et al. [110,111]).
The pandemic has opened a floodgate for universities and faculty
to re-examine the use of online assessment and feedback to promote
students’ learning (Zou, et al. [22,112-115]). Online formative assessment
may be more prominent as students take classes remotely with
minimal physical interaction (Senel, et al. [115]) and transform teaching
and learning by removing time, distance, and space constraints
(Cirit, et al. [116,117]). During the pandemic, learning management
systems such as Canvas, Blackboard, SharePoint, and Moodle have
been extensively used for students to access online materials and
submit their assignments. There has been a rise in the use of Zoom,
Microsoft Teams, and WebEx for synchronous classes and interaction
between instructors and students (Koh, et al. [118,119]). These platforms
provide a fertile ground for formative assessment and instant
feedback using online quizzes involving multiple-choice, true-false,
and matching questions (Shrago, et al. [120]). Instructors can use
these platforms to monitor students’ performance and learning commitment
via access rate, the attendance rate for synchronous classes,
and participation time and frequency in forum discussions (Murray,
et al. [121]). The suitability and feasibility of employing these online
platforms largely depend on their availability, compatibility with the
existing information technology infrastructure and network, storage
capacity, and internet connectivity for synchronous sessions (Crawford,
et al. [122]).
Feedback on Student Performance
There has been a growing body of literature that discusses the importance
of feedback to promote student learning in higher education
in recent years (Boud, et al. [68,81,123-126]). Feedback is regarded
as one of the most critical influences on student learning in teaching
and assessment practices (Hattie, et al. [61,62]). As feedback may be
seen as a multifaceted and complex process that deals with evaluating
students’ assessment performance and managing their expectations
(Bloxham et al. [127-131]), the effectiveness of feedback depends on
the teachers’ preferred feedback practices, including the use of online
feedback (Evans, et al. [60,132]), the timeliness of the communication process
(Higgins, et al. [62,133]), depth and quality (Dawson et al. [134,135]),
students’ emotions (Alqassab, et al. [136-138]), students’ perceived
usefulness for improvement, and their ability to understand, interpret,
and act upon it (Sadler, et al. [129,139-141]). Studies have examined
the association between student involvement with feedback
and a deep learning approach (Filius, et al. [142-144]). For instance,
(Filius, et al. [142]) examined the importance of peer feedback intervention
in promoting deep learning for an online course.
They found that students who advocate a deep learning approach
are more likely to seek more quality feedback. Their findings are consistent
with the earlier study by (Geitz, et al. [143]). More recently,
(Leenknecht, et al. [144]) surveyed 80 first-year undergraduates from
a Dutch university to examine their feedback-seeking behaviour and
their antecedents, including goal orientation and a deep learning approach.
They concluded that students with a higher goal orientation to
learn will employ more deep learning strategies and seek more feedback.
(Weaver, et al. [145]) noted four types of feedback perceived as
ineffective for student learning: overly vague or generic comments, feedback that
does not relate to assessment criteria, feedback that does not provide
direction for further improvement (feedforward), and overly negative
feedback. (Boud, et al. [68]) provided two models of feedback: Feedback
Mark 1 and Feedback Mark 2. Feedback Mark 1 focuses on an
engineering approach where feedback involves information used and
not information transmitted. It assumes that students depend highly
on teachers to provide the information they require to learn; thus, the
feedback process appears mechanistic. Feedback Mark 2 uses a sustainable
approach where students respond to the feedback, develop
their informed judgement, and relate their learning beyond the immediate
task (Boud, et al. [146]). Thus, educators and students need
to perceive feedback as a way of promoting self-regulation of learning
and emphasise the need for students to appreciate the feedback as an
essential way of improving their ability to make judgements and act
upon them.
However, studies suggest that students often raise their concerns
and complaints over the quality of feedback received as they find it
not valuable for their learning or they do not comprehend the feedback
given (Weaver, et al. [145,147-149]). Consequently, they are
demotivated toward receiving feedback; worse, if the feedback
appears to be negative, they may experience frustration, low self-esteem,
and negative emotions (Sellbjer, et al. [131,150-152]), which may even lead
them to leave the course (Shaikh, et al. [153]). However, (Walker, et al.
[154]) argues that the effectiveness of feedback may not depend on
the quality or characteristics of the feedback but on the ability of students
to understand and interpret it. Students may be unclear about
the learning objectives and assessment expectations, unable to comprehend
the feedback, or may value the score and grade more
than the feedback received (Jessop, et al. [155,156]). Thus, assessment
feedback may impact students’ emotions, academic resilience,
and buoyancy (Jonsson, et al. [137,157]). Educators need to adopt
a balanced approach when providing feedback that allows students
to see the value and promotes self-efficacy and self-esteem with the
right amount of socio-emotional support (Higgins, et al. [158,159]).
Prior studies using specific instruments measuring students’ views
of the use of formative assessment and feedback practices have been
conducted in Australia (Dawson, et al. [134,160]),
China (Wei, et al. [141]), Serbia (Stančić, et al. [82]), Spain (Ion,
et al. [81]), and the UK (Wu, et al. [95,157,161,162]). For instance,
(Wu, et al. [95]) employ the Assessment Experience Questionnaire
(AEQ) to examine the influence of the assessment system on student
learning in three different universities in the UK. The AEQ uses constructs
developed through the Transforming Experience of Students
Through Assessment (TESTA) adopted by more than 50 UK universities
since its inception in 2009 (Batten, et al. [161,163]). They reported
that formative assessment is the weakest domain across all three
universities. In comparison, students from the new teaching-focused
university provided significantly higher scores in the feedback quality
and student approaches to learning dimensions than the two research-
intensive universities. In Australia, (Dawson, et al. [134]) used
the Feedback for Learning survey to conduct a large-scale study involving
4,514 students and 406 instructors from two Australian universities
to evaluate the effectiveness of feedback on student learning.
They found that instructors strongly emphasised feedback design
while students perceived effective feedback as detailed and with considerable
affection and personalisation. More recently, (Vattøy, et al.
[164]) examined a sample of 182 undergraduates from a Norwegian
university to evaluate students’ feedback engagement and feedback
experiences using a mixed method, including an adapted Norwegian
Assessment Experience Questionnaire (N-AEQ).
They reported that quantity of effort and feedback quality are the
more robust predictors of variance in students’ use of feedback. Results
from prior studies on the effectiveness of online feedback were
mixed (Alvarez, et al. [165-173]). For instance, (Chong, et al. [167])
examined 93 college students’ perceptions of online feedback in Hong
Kong. He found that students were more motivated and responded
more proactively to the instructor’s online feedback as they gained
clarity on annotated comments with tracked changes and highlighting,
which saved time when revising their work. His findings were
also supported by earlier studies conducted by (McCabe, et al. [174])
and (Alvarez, et al. [165]).
Peer Assessment
Peer assessment is defined as an “arrangement in which individuals
consider the amount, level, value, worth, quality or success of the
products or outcomes of learning of peers of similar status” (Topping,
et al. [175]). It is commonly used as a self-regulated learning tool in
higher education (Liu, et al. [176]) and typically requires students
to “provide either feedback or grades (or both) to their peers on a
product, process, or performance, based on the criteria of excellence
for the product” (Falchikov, et al. [150]). Typically, the product may
be written work, portfolios, oral presentations (both individual and
team), and other performance tasks prescribed by the instructors
(Topping, et al. [177]). Peer assessment can be summative (providing
an evaluation and assigning a grade or score) or formative (providing
feedback to support learning and suggest improvements) to promote
collaborative learning (Falchikov, et al. [178-180]) and self-regulation
in learning (Boud, et al. [68,128,180-185]). Students are empowered
to demonstrate their subject knowledge, reflective and evaluation
skills, and critical thinking processes while evaluating their peers’ work,
whether written or oral (Topping, et al. [177,186-190]), which deepens their
learning (Bangert, et al. [191-193]).
Performing a detailed peer assessment enables students to evaluate
other students’ performance from the perspective of an assessor,
improves their work and learning quality to a large extent, and promotes
independence and task ownership (Bong, et al. [194-199]) in a
more varied and timely manner (Boud, et al. [182,200,201]). As peer
assessment enables students to be aware of assessment standards,
make an evaluative judgement and provide feedback with a set of rubrics
and predefined assessment criteria (Carless, et al. [202,203]), it
provides opportunities for students to cultivate a broad range of behavioural,
cognitive, and transferable skills such as verbal and written
communication, team building, self-awareness, critical thinking, and
time management (Nicol, et al. [188,189,202,204-206]). These skills
are precious for students to acquire to be career-ready when they
gain employment upon graduation (Carless, et al. [88,202,207-209]).
While students see the benefits of peer assessment in promoting
self-regulated learning, there are several limitations to peer assessment
(Boud, et al. [68,201,210-213]). For instance, prior studies reported
that students see peer assessment as a time-consuming and
stressful exercise (Bong, et al. [194,214-221]). Students may lack
the skills or motivation to provide peer assessment (Stančić, et al.
[82,216,219,222-228]); they may remain sceptical and distrustful of the
reliability and accuracy of their peers’ assessment compared to their instructors’
assessment (Liu et al. [89,177,221,223,229-231]); and their judgements may be affected by the quality of peer
relationships (Brown, et al. [232,233]), competitive pressure to award
lower grades, or peer pressure to give favourable or
biased feedback (Chen, et al. [234,235]). The advent of digital education
has drawn increasing attention to online learning and the use of online
educational technologies in teaching, assessment, and feedback
in the higher education sector globally (Liu, et al. [176,236,237]).
The use of online peer assessment, in which students evaluate
their peers’ work and provide feedback through online collaboration,
has been employed by many universities as a primary online assessment
format (Liu, et al. [238-241]), and also for large virtual classes
such as massive open online learning (Kulkarni, et al. [236,242])
during the pandemic (Dominiguez-Figaredo, et al. [243-245]). As
educational technology is perceived as an avenue for academics to
design and implement online assessments and feedback, online peer
assessment has become a primary online assessment format with
several distinct benefits over conventional peer assessment (Wang,
et al. [37,176,210]). For instance, online peer assessment permits
anonymity and may be conducted at more flexible times and from
remote locations (Li, et al. [35,242,246-248]), resulting in greater
learning gains (Li, et al. [35,249]). In addition, online peer assessment
may be automatically recorded and stored digitally with the
ease of retrieval by faculty members, thus reducing their workload
(Yang, et al. [242,247,250,251]). Beyond these, prior studies reported
that online peer assessment deepens students’ knowledge construction
and learning reflection (Rosa, et al. [252]) and assists students in
evaluating their affective, behavioural, cognitive, and metacognitive
behaviours about peer assessment and comments (Hou, et al. [253]);
and boosts students’ confidence and comfort in providing anonymous
online peer assessment, minimising adverse peer relationships (Demir,
et al. [254]).
Despite the various advantages documented in the literature, online
peer assessment does have a fair share of limitations (Doiron, et
al. [255]). (Liu, et al. [176]) argue that students may take the online
peer assessment lightly in an online environment since faculty do not
monitor the process regularly. Students may also experience anxiety
or frustration when using online technologies (Bolliger, et al. [256]) or reacting
to criticism from peers (Brindley, et al. [257,258]), and unclear
online guidelines and assessment procedures may compromise reliability
and fairness (Kaufman, et al. [223]). Several
studies have been conducted to investigate students’ attitudes
towards online peer assessment and reported mixed results (Wang,
et al. [37,176,227,231,259-264]). More recently, in the US, (Wang, et
al. [237]) employed a mixed method to examine the factors associated
with online graduate students’ attitude change in online peer assessment.
They found that perceived accurate and specific feedback,
communication about peers’ work, and logistics concerns helped
students develop a positive attitude towards online peer assessment.
Similar positive attitudes towards online peer assessment were reported
in earlier studies by (Liu, et al. [260,262,265]), and another
recent study by (Zheng, et al. [263]).
However, (Kaufman, et al. [223]) reported that university students
exhibited negative attitudes toward fairness issues. (Wen, et
al. [231]) found that students expressed a more positive attitude toward
conventional peer assessment than toward online peer assessment. However,
they failed to explain the possible factors resulting in this difference.
Thus, the mixed results call for further investigation of students’ attitudes
towards online peer assessment.
For this study, the Assessment and Feedback Experience Questionnaire
(AFEQ) was employed, adapted from the latest version of
the AEQ (V.4.0) as it was the best fit to address the first two research
questions. This version comprises 18 items clustered into five factors:
formative assessment, how students learn, student effort, quality of
feedback, and internalisation of standards. The factors ‘how students
learn’ and ‘student effort’ measure learning approaches. However,
this instrument did not include peer assessment and included only
four items relating to feedback. Thus, the AFEQ has six factors comprising
30 items: the existing five factors, expanded to 23 items, and a
new seven-item ‘peer assessment’ factor. The ‘quality of feedback’
factor was expanded by incorporating the relevant items from the Feedback
for Learning survey developed by Monash University, Deakin
University, and the University of Melbourne. A 5-point Likert scale
ranging from 1 (strongly disagree) to 5 (strongly agree) was used to
measure each item. Demographic variables such as gender, age group,
year, and school of study were included in the questionnaire. The
target participants for this study comprised undergraduates and
postgraduates from a US university with a campus in Singapore.
The undergraduates pursued full-time business, accountancy,
engineering, or social sciences degrees. The duration of their degrees
varied between three and four years, and typically, they underwent
internships during their first and second year of study.
The postgraduates were pursuing their first-year or second-year
Master of Business Administration (MBA) degree full-time or part-time.
The participants were ex-students or current students of the
researcher and students referred by other instructors within the
university. The ex-students were recruited randomly via direct contact
with the researcher, where emails were sent to the prospective
participants to invite them to participate. For existing students, the
researcher and other instructors made a verbal announcement after
their lesson on the purpose and duration of the research. An invitation
letter with the Participation Information Sheet and Consent Form
was emailed to 160 undergraduates and 145 postgraduates. A total
of 133 undergraduates and 127 postgraduates responded and agreed
to participate, constituting 83% and 88% response rates, respectively.
A self-administered questionnaire was emailed to these students.
Upon receipt of the completed questionnaire, a participant debrief
letter was emailed to them. Nine students did not reply despite
several follow-ups. The final sample comprised 128 undergraduates
(53 females, 75 males) and 123 postgraduates (42 females, 81
males). The undergraduates are currently in their first (22), second
(53), third (41), and fourth year (12) of study. The majority of the
participants are pursuing their degree in business (65%) and science
(23%), and a small percentage of the participants are in engineering
(7%) or humanities, arts and social sciences (5%). Among the postgraduate
participants, 74 are first-year students, and the remaining 49 are
second-year students. The distribution of full-time and part-time students
is 72 and 51, respectively.
A total of 251 students (128 undergraduates and 123 postgraduates)
participated in the survey, of which 156 were male students
(75 undergraduates and 81 postgraduates) and the remaining were
95 female students (53 undergraduates and 42 postgraduates). Table
1 summarises the students’ profiles by their level of study and
gender. Table 2 summarises the age distribution of the students and
mode of study for postgraduates. All the undergraduates are full-time
students; most of them fall under the 21-24 age group, accounting
for 66% of the undergraduate sample. More male students fall within
the 21-24 and 25-27 age groups than female students. The overall
age distribution is in line with the year of study, where 42% and
32% of the students are in their second and third year (the majority
fall within the 21-24 age group), respectively, and only 17% and 9%
of the undergraduates, respectively are in their first and fourth year
of study. In terms of discipline, the majority of the students are pursuing
Accountancy/Business (65%), while the remaining students
come from science (23%), Engineering (7%), and Humanities, Arts
and Social Science (5%). For the postgraduates, the age groups begin
at 25-29, as the minimum age for MBA admission is 25. The
distribution between Year 1 and Year 2 students is 74 (60%) and 49
(40%). It is evident that there is a higher number of male and female
students aged 35 and below; the majority are full-time students pursuing
postgraduate study, suggesting these students may see an MBA
as a vital credential to gain more job opportunities upon graduation
(Simpson, et al. [266]) and stay competitive in the job market (Edington,
et al. [267-269]). There are more part-time students over 35
years of age pursuing an MBA, who may be considering a career switch (Mark, et
al. [270,271]) or seeking career advancement with their current employers
(Baruch, et al. [272-277]).
Table 1: Sample Distribution – Level of Study and Gender.
Table 2: Sample Distribution – Level of Study, Gender and Age Group.
Descriptive Statistics and Significance
Table 3 summarises the mean score and standard deviation for
each of the 30 items in the AFEQ for undergraduates and postgraduates.
Based on the 5-point Likert scale ranging from 1 to 5, the higher
the score provided by the respondents, the more they agreed with the
statement. The top three items with the highest mean score for the
undergraduates were item 4 (“I had to put the hours in regularly every
week if I wanted to do well.”), item 20 (“I studied things that were
covered in graded assessments.”), and item 27 (“I provided fair
assessment and feedback to my peers.”). It appears that the participants
saw graded assessment as essential and put in more effort on
those “examinable” topics/areas. As these undergraduates were full-time
students, they may be able to commit more time every week than
the part-time postgraduate students. The three items with the lowest
mean score for the undergraduates were item 5 (“I prefer handwritten
feedback on hardcopy documents.”), item 28 (“I prefer typewritten
feedback on hardcopy/scanned copy documents.”), and item 9 (“I enjoyed
the peer assessment process.”). It appears that the undergraduates
had a relatively neutral preference for written feedback. As for
the peer assessment process, the relatively low score may be attributable
to a lack of enthusiasm for carrying out the peer assessment
process, as it may be time-consuming. In addition, respondents may
see the peer assessment as less credible as they are inexperienced
and not trained to conduct these assessments.
Table 3: Descriptive statistics.
Note:
• *= p<0.05
• A higher score suggests students agree with the statement and a score lower than 3 suggests students tend to disagree with the statement.
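For readers who wish to see how the figures in Table 3 could be produced, the sketch below computes per-item means and standard deviations for each cohort and flags undergraduate-versus-postgraduate differences. It is illustrative only: the file name, the column names (level, item_1 to item_30), and the use of Welch’s t-test are assumptions for the sketch, not details taken from the study.

```python
# Illustrative sketch (not the authors' analysis script): per-item descriptive
# statistics by cohort and an independent-samples t-test for each AFEQ item.
import pandas as pd
from scipy import stats

df = pd.read_csv("afeq_responses.csv")          # hypothetical file: one row per student
items = [f"item_{i}" for i in range(1, 31)]     # 30 AFEQ items scored 1 (strongly disagree) to 5 (strongly agree)

rows = []
for item in items:
    ug = df.loc[df["level"] == "undergraduate", item].dropna()
    pg = df.loc[df["level"] == "postgraduate", item].dropna()
    t, p = stats.ttest_ind(ug, pg, equal_var=False)   # Welch's t-test per item (assumed choice)
    rows.append({"item": item,
                 "UG_mean": ug.mean(), "UG_sd": ug.std(ddof=1),
                 "PG_mean": pg.mean(), "PG_sd": pg.std(ddof=1),
                 "p_value": round(p, 3), "sig_at_0.05": p < 0.05})

table3 = pd.DataFrame(rows)
print(table3.to_string(index=False))
```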
Interestingly, two of the top three items with the highest mean
score among the postgraduates are the same as the undergraduates
(items 20 and 27), while the other highest mean score item is “The
feedback helped me to understand my performance better.” (item
10). Like full-time undergraduates, postgraduates adopted the “study
smart” attitude, where they were willing to spend more time only on
“examinable” topics. However, they are less willing to put in more
hours weekly, especially the part-time students who have to juggle
work, personal (or family) and study commitments, as evidenced by a
relatively low score for item 4. It is telling that these students appreciate
the feedback provided by the faculty members more than their
undergraduate counterparts. The reasons for their appreciation may
be twofold. Firstly, many of the assessments are informal peer discussions
and team presentations of case studies where postgraduates
see the importance of feedback to enhance their knowledge and raise
their confidence in applying what they have learned in their current
or future (for the full-time MBA students) workplace. Secondly, several
core modules in the MBA program, such as corporate finance,
organisational behaviour, and marketing, are prerequisites for their
electives, such as advanced corporate finance, leadership development,
and international marketing. Thus, MBA students regard the
feedback provided in the first-year core modules as crucial
for improving their assessment performance in the second year, when
they choose electives based on their specialisation or interest.
The three items with the lowest mean score among the postgraduates
are item 5, item 8 (“I only valued assessments that count towards
my grade.”), and item 11 (“I felt the assessment expectations
were constantly changing, especially during the pandemic.”). The
low score for item 8 may suggest that MBA students prefer formative
assessment over summative assessment as they enjoy peer learning
via team discussion and experiential learning in classrooms or synchronous
online learning. The low mean score for item 11 aligns with
the views gathered from the faculty members, who said that most did
not change their expectations on formative assessments during the
pandemic as they felt that many MBA students enjoy peer interaction
even when attending online classes. While there are differences in the
mean scores between the undergraduates and postgraduates, only 10
out of the 30 items reported significant differences, as indicated in the
last column of Table 3 (p < 0.05). Three items (1, 8, 14) are within the
Feedback factor, and another four items (7, 9, 13, 26) fall under the
Peer Assessment factor. A closer examination of these items indicated
that postgraduates are more participative and engaged in formative
assessments as they felt they learned much more from these assessments.
In addition, these respondents enjoy the peer assessment as
they are more competent in providing peer assessment and feedback
to their classmates. Consequently, they are more motivated after seeing
the peer feedback.
Table 4: Reliability – Cronbach’s Alpha.
Reliability and Inter-Factor Correlation: While the 18-item AEQ
V.4 has five factors, the 30 items in the AFEQ are grouped into six factors:
how students learn, internalisation of standards, feedback
quality, student effort, formative assessment, and peer assessment.
Cronbach’s alpha reliability coefficients were computed to evaluate
the reliability of the items within each factor and to estimate response
consistency. Generally, Cronbach’s alpha reliability coefficients
of 0.70 or higher are acceptable for research purposes (Nunnally,
et al. [278,279]). Table 4 summarises Cronbach’s alpha for the
six factors in the AFEQ. All six factors reported acceptable Cronbach’s
alpha reliability coefficients, ranging from 0.71 to 0.85. Spearman’s
rank order correlation coefficients are employed to ascertain the degree
to which the factors in the questionnaire are related. Questionnaires
may reveal factors that are related to a certain extent, though
the correlations may not be strong even when the factors measure the same broad concept (Byrne,
et al. [280]). Table 5 summarises the bivariate correlations, and there
is no evidence of multicollinearity as all correlations are below 0.80
(Stevens, 1996). All the correlations are significant, though they yield
weak (r = 0.2 – 0.39) and moderate (r = 0.4 – 0.69) levels (Akoglu, et
al. [281]), suggesting the items in the questionnaire indicate sound
psychometric properties and the factors are more distinct than anticipated
(Tabachnik, et al. [282]).
Table 5: Correlations between factors of the AFEQ.
Note: ** = p<0.01
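A brief sketch of how the reliability and inter-factor correlation figures in Tables 4 and 5 could be computed is shown below. The Cronbach’s alpha function follows the standard formula; however, the item-to-factor mapping, file name, and column names are hypothetical placeholders (the text does not list which of the 30 items belongs to each factor), so they would need to be replaced with the actual assignment.

```python
# Illustrative sketch: Cronbach's alpha per AFEQ factor and Spearman
# correlations between factor scores. The item groupings below are assumed.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

df = pd.read_csv("afeq_responses.csv")   # hypothetical file: one row per student
# Hypothetical mapping of the 30 items to the six AFEQ factors (23 + 7 items).
factors = {
    "how_students_learn":           [f"item_{i}" for i in range(1, 6)],    # items 1-5 (assumed)
    "internalisation_of_standards": [f"item_{i}" for i in range(6, 10)],   # items 6-9 (assumed)
    "feedback_quality":             [f"item_{i}" for i in range(10, 16)],  # items 10-15 (assumed)
    "student_effort":               [f"item_{i}" for i in range(16, 20)],  # items 16-19 (assumed)
    "formative_assessment":         [f"item_{i}" for i in range(20, 24)],  # items 20-23 (assumed)
    "peer_assessment":              [f"item_{i}" for i in range(24, 31)],  # items 24-30 (assumed)
}

for name, cols in factors.items():
    print(f"{name}: alpha = {cronbach_alpha(df[cols]):.2f}")

# Factor scores (mean of the items in each factor) and Spearman inter-factor correlations.
scores = pd.DataFrame({name: df[cols].mean(axis=1) for name, cols in factors.items()})
print(scores.corr(method="spearman").round(2))
```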
Undergraduate: Gender, Age Group: Tables 6-11 present the
overall mean scores by gender for undergraduates concerning formative
assessment (Tables 6 & 7), feedback (Tables 8 & 9), and peer
assessment (Tables 10 & 11). Though the male participants recorded
a marginally higher mean score than the female students for formative
assessment (3.84 versus 3.72) and feedback (3.83 versus 3.65),
and both have almost equal mean scores for peer assessment (3.75
versus 3.76), these differences are not statistically significant (p > 0.05). Thus, the
study’s findings indicate no significant differences in the perceptions
of the effectiveness of formative assessment, feedback, and peer assessment
between male and female undergraduates. Tables 12-17
present the overall mean scores by age group for undergraduates concerning
formative assessment (Tables 12 & 13), feedback (Tables 14
& 15), and peer assessment (Tables 16 & 17). The age group with the
highest sample, age 21-24, recorded a marginally higher mean score
than the other age groups for feedback and peer assessment, but it
has the same mean score for formative assessment as those aged 17-
20. However, the ANOVA analysis indicates no significant differences
in the perceptions of the effectiveness of formative assessment, feedback,
and peer assessment between the three age groups of these undergraduates.
Table 11: Independent Samples Test – Peer Assessment: Gender (Undergraduates).
Table 12: Formative assessment: Age Group (Undergraduates).
Table 13: ANOVA – Formative assessment: Age Group (Undergraduates).
Table 14: Feedback: Age Group (Undergraduates).
Table 15: ANOVA – Feedback: Age Group (Undergraduates).
Table 16: Peer Assessment: Age Group (Undergraduates).
Table 17: ANOVA – Peer Assessment: Age Group (Undergraduates).
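The gender and age-group comparisons summarised in Tables 6-17 can be reproduced along the following lines. This is a sketch under assumed column names (level, gender, age_group, and pre-computed factor mean scores); the published analysis may have used different options, such as equal-variance assumptions or post-hoc tests.

```python
# Illustrative sketch: independent-samples t-test by gender and one-way ANOVA
# by age group for each factor score, within the undergraduate subsample.
import pandas as pd
from scipy import stats

df = pd.read_csv("afeq_responses.csv")                 # hypothetical file
ug = df[df["level"] == "undergraduate"]

for factor in ["formative_assessment", "feedback_quality", "peer_assessment"]:
    # Gender: independent-samples t-test on the factor mean score.
    male = ug.loc[ug["gender"] == "male", factor]
    female = ug.loc[ug["gender"] == "female", factor]
    t, p = stats.ttest_ind(male, female)
    print(f"{factor} by gender: t = {t:.2f}, p = {p:.3f}")

    # Age group: one-way ANOVA across the age bands used in the survey.
    groups = [g[factor].values for _, g in ug.groupby("age_group")]
    f, p = stats.f_oneway(*groups)
    print(f"{factor} by age group: F = {f:.2f}, p = {p:.3f}")
```

The same two tests applied to the postgraduate subsample (with mode of study as an additional grouping variable) would yield the comparisons reported in Tables 18-35.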
Postgraduate: Gender, Mode of Study, Age Group: Tables 18-23
present the overall mean scores by gender for postgraduates concerning
formative assessment (Tables 18 & 19), feedback (Tables
20 & 21), and peer assessment (Tables 22 & 23). The female postgraduates
recorded a marginally higher mean score than their male
counterparts in all three factors: formative assessment (3.88 versus
3.85), feedback (3.97 versus 3.75), and peer assessment (4.00 versus
3.90). However, these differences are not statistically significant (p > 0.05). Thus,
the findings of the study indicate that there are no significant differences
in the perceptions of the effectiveness of formative assessment,
feedback, and peer assessment between male and female postgraduates.
Tables 24-29 present the overall mean scores by mode of study
for postgraduates concerning formative assessment (Tables 24 & 25),
feedback (Tables 26 & 27), and peer assessment (Tables 28 & 29). The
full-time postgraduates recorded a relatively higher mean score than
their part-time counterparts in all three factors: formative assessment
(3.97 versus 3.71), feedback (4.00 versus 3.57), and peer assessment
(4.06 versus 3.77). A closer examination of the independent samples t-test results in
Table 26 indicates that the difference in mean score between full-time
and part-time students for feedback is statistically significant
(p < 0.05). Thus, the study’s findings indicate a significant difference
in the perceptions of the feedback between full-time and part-time
postgraduates, but not for formative and peer assessments. Tables 30
- 35 present the overall mean scores by age group for postgraduates
concerning formative assessment (Tables 30 & 31), feedback (Tables
32 & 33), and peer assessment (Tables 34 & 35). The age group with
the largest sample, age 31-35, recorded the highest mean score among the
age groups for formative assessment. However, participants who
fall between 41 and 45 report the highest mean score for feedback
and peer assessment. However, the ANOVA analysis indicates no significant
differences in the perceptions of the effectiveness of formative
assessment, feedback, and peer assessment between the five age
groups of these postgraduates.
The findings in the study revealed that postgraduate students
placed a higher value on formative assessment than undergraduates.
However, there is no significant difference between gender, age group,
and mode of study among students. It suggests that the postgraduates
are more inclined to adopt a deep learning approach where they have
a strong interest in gaining a deeper understanding of the relevant
concepts and theories covered and can relate them to their prior personal
experiences and current workplace (Beattie, et al. [283,284]).
Higher education researchers noted that deep learning contributes
to a more positive and higher quality learning outcome and improved
academic performance as compared to a surface learning approach
(Biggs, et al. [285-291]). This is evident from the “How students
learn” factor, where postgraduates reported a higher mean score
for “I was able to apply learning from my assessments to new situations”,
“Assessments enabled me to explore complex problems facing
the world”, and “Assessments helped me develop skills for graduate
work”. The pandemic may have created anxiety and challenges for
the postgraduates: the full-time postgraduates faced uncertainty
over landing a full-time job that recognises the MBA they were pursuing,
while the part-time MBA students may have been facing retrenchment
and a bleak career path given the poor financial performance of
many firms during the pandemic.
Thus, these students may be more engaged in formative assessment
as they see the value in collaborative learning to reduce anxiety
and expand their professional network with their classmates, which
may translate into many business and career opportunities (Mark, et
al. [270,292]). The findings in the study also revealed that both undergraduates
and postgraduates recognised the importance of feedback
in improving their understanding of their assessment performance.
This aligns with the earlier studies reported by (Vattøy, et al. [164]).
A closer examination of the study’s results revealed that students put
in substantial effort to study regularly given the challenging assessment
demands and hope to achieve better results with more effort.
Thus, it appears that they valued the feedback given by the instructors.
Regarding the mode of feedback, students have a strong preference
for online and face-to-face feedback compared to handwritten
and typewritten feedback, and postgraduate students have a stronger
preference for online and face-to-face feedback than undergraduates.
Prior studies noted that students who appreciated face-to-face
feedback were perceived as having a stronger desire for in-depth and
interactive feedback that allowed immediate responses from instructors
(Henderson, et al. [170,293]). For peer assessment, the findings
revealed that postgraduates reported higher mean scores than undergraduates
for most of the peer assessment items found in the AFEQ.
Possible reasons for the higher mean score for the postgraduates
may be that they held positive attitudes and were more open to providing
and receiving peer assessment (Collimore, et al. [259,294-296]),
saw such feedback as improving their assessment quality and learning
outcomes (Falchikov, et al. [178,297-300]), and regarded it as a fairer way to assign grades
for group projects, which are commonly found in MBA courses (Wang,
et al. [126,301]). In contrast, undergraduates may see peer assessment
as relatively less beneficial as they may lack the skills to perform
such assessment (Liu, et al. [89,223,229]), be sceptical about the reliability
and accuracy of student ratings (Kaufman, et al. [223,230,231]), be wary of
power relations among students (Liu, et al. [89,232]), and lack the
motivation and time to perform such an activity (Liu, et al. [89]). An interesting
finding from the interviews with several instructors confirmed
that peer assessment for most of the MBA modules is voluntary. In
contrast, most of the business-related modules come with mandatory
peer assessment. Prior studies have been conducted on mandatory
peer assessment (Yang, et al. [242,302]) and voluntary peer assessment
(Hafner, et al. [303]).
Thus, the postgraduates reported a higher mean score for peer
assessment, which may be attributable to its voluntary nature,
which fosters stronger interest and makes students more likely to put more effort
and motivation into peer assessment. This is echoed by a recent
study by (Liu, et al. [176]), which found that voluntary peer assessment
provides a better motivation to provide better quality feedback
and improve students’ learning outcomes and rating accuracy than
mandatory peer assessment. In terms of gender, the study reported
there is no significant difference in peer assessment between male
and female students, which is in line with those reported by (Collimore,
et al. [259,304]). With the opening of the Transition Phase by
the Singapore government on 22 November 2021, the university has
reduced the online assessment component as it resumed physical
classes, where up to 75% of the students can be on campus at any
time. Students now have a hybrid of online assessments and in-class
formative assessments. Effective integration of both online and
in-class formative assessment is vital to enhance interaction between
instructors and students, boost students’ confidence in achieving the
learning outcome, and foster the formation of a meaningful learning
community that promotes self-directed and deep learning with effective
utilisation of online technology (Dixson, et al. [305-307]). To
reduce students’ anxiety and be more ready for online assessments,
instructors may provide shorter, low-stakes, bite-sized online assessments
that permit multiple attempts and detailed pre-programmed
online feedback.
To promote collaborative learning and engagement, instructors
may encourage students to form “buddy teams” where they could
meet weekly or fortnightly to share any challenges faced in online
assessment. Instructors may also coach and mentor students
to help them address their concerns. For students unfamiliar
with peer assessment, instructors may also provide scaffolding to
guide them to improve their commitment to providing quality peer
assessment. While the findings suggest that students have a stronger preference
for online feedback over traditional handwritten comments on
manuscripts, there are concerns that instructors need to address. In
the absence of face-to-face discussion, online feedback may lead to
misinterpretation and reduce the opportunity for immediate clarification
(Hattie, et al. [61,308,309]). In addition, students may be delayed in
accessing online feedback, and instructors
may not always be available for clarification, which can have an off-putting
effect resulting in depersonalisation, disengagement, and reduced
self-regulated learning (McCabe, et al. [174,310,311]). Further,
appropriate training and support need to be provided to students unfamiliar
with accessing online feedback via various platforms, especially
new students and mature students who are digital immigrants
(Hast, et al. [309,312,313]). Students must be more adaptable during
ambiguous times such as the pandemic to thrive and develop resilience
and perseverance.
This study is believed to be the first in Singapore to examine the
effectiveness of formative assessment, feedback, and peer assessment
in promoting student learning during the pandemic from the students’
perspective. The findings revealed significant differences in
feedback and peer assessment effectiveness between undergraduates
and postgraduates. However, there were no significant differences in
the perceptions of the effectiveness of formative assessment, feedback,
and peer assessment between gender and age groups for both
undergraduates and postgraduates. Regarding the mode of study,
there was a significant difference in their perceptions of feedback between
full-time and part-time students. The findings and implications
gathered from the quantitative and qualitative approaches are subject to
some limitations. The sample was selected from a single university
and focused mainly on full-time undergraduates and MBA students
whom the researcher has taught or is currently teaching; other instructors
taught only a fraction of the respondents. Thus, the findings may not
represent students from other universities and private higher education
institutions in Singapore and other countries. Second, the study
did not gather data from part-time undergraduates and non-Business
School postgraduate students who may offer a different response to
the AFEQ items. While this study focuses on the students’ perceptions
of the value of formative assessment, feedback, and peer assessment
to students’ learning during the pandemic, other relevant areas have
yet to be fully explored in Singapore.
Firstly, longitudinal studies may be conducted to evaluate the
extent of the perceived benefits of online assessments and feedback on
students’ learning and academic performance during and post-pandemic
(Slack, et al. [314]). Secondly, the study may also be extended to
other countries where factors such as government support, cultural
dimensions such as those propounded by (Hofstede, et al. [315,316])
and (Hampden, et al. [317]), students’ resilience, hybrid learning,
and changes in assessment structure and feedback mechanisms may
have an impact on student’s performance during and post-pandemic.
Thirdly, focused group interviews may be conducted with instructors,
assessment scholars, curriculum specialists, and department heads
from various divisions and schools to gain deeper insights into how
learning and teaching practices may have an impact on assessment
changes in the higher education sector. The pandemic is unprecedented
in its scale and has provided opportunities for higher education
institutions to re-examine their existing learning and teaching, assessment,
and feedback practices. Given the ambiguity in the epidemiological
and economic outlook, predicting when all conventional educational
activities can resume is difficult. Any changes in educational
policies and assessment practices must be supported by the government,
organisations (professional and private), faculty, educational
designers, and educational technologists. Future developments, such
as the introduction of 5G networks and generative AI tools, may enable
universities to implement more sophisticated online learning and assessment
tools that enhance student learning (Thathsara, et al. [18]).
Such technologies may play a pivotal role in online assessment and
feedback in a student-centric learning environment in the higher education
sector in Singapore (Kwan, et al. [31,318-321]). They may be
the new standard in the post-pandemic era for universities.
References
Tesar M (2020) Towards a postCovid-19 “New Normality?”: Physical and social distancing, the move to online and higher education. Policy Futures in Education 18(5): 556-559.
Or C, Chapman E (2022) Development and acceptance of online assessment in higher education: Recommendations for further research. Journal of Applied Learning & Teaching 5(1): 10-26.
Kirschner P (2002) Can we support CSCL? Educational, social and technological affordances for learning. In: P Kirschner (Edt.)., Three worlds of CSCL: can we support CSCL, Open University of the Netherlands, p. 7-47.
Doiron J (2003) The value of online student peer review, evaluation and feedback in higher education. Centre for Development of Teaching and Learning 6(9): 1-2.