ISSN: 2574-1241

Impact Factor: 0.548


Research Article | Open Access

Students’ Views on the Use of Formative Assessment and Feedback for Learning at Higher Education in Singapore During the Covid-19 Pandemic

Volume 58, Issue 4

James Kwan*

Nanyang Technological University, Singapore

Received: September 01, 2024; Published: September 19, 2024

*Corresponding author: James Kwan, Nanyang Technological University, Singapore

DOI: 10.26717/BJSTR.2024.58.009197


ABSTRACT

The Covid-19 pandemic caused significant disruption to learning and teaching practices in the higher education sector in Singapore. This study examines the effectiveness of formative assessment, feedback, and peer assessment on undergraduate and postgraduate students’ learning outcomes during the pandemic. The study employed a quantitative approach in which students (N = 251) from an American university with an Asian campus in Singapore completed the Assessment and Feedback Experience Questionnaire (AFEQ). The findings revealed significant differences between undergraduates and postgraduates in the perceived effectiveness of feedback and peer assessment. However, there were no significant differences in perceptions of the effectiveness of formative assessment, feedback, and peer assessment across gender and age groups for either cohort. Regarding mode of study, there was a significant difference in perceptions of feedback between full-time and part-time students. These findings have far-reaching implications for students, instructors, and the university in the post-pandemic era.

Keywords: Formative Assessment; Feedback; Online Peer Assessment; Online Assessment

Abbreviations: OTL: Opportunity to Learn; TESTA: Transforming Experience of Students Through Assessment; AEQ: Assessment Experience Questionnaire; NAEQ: Norwegian Assessment Experience Questionnaire; AFEQ: Assessment and Feedback Experience Questionnaire; MBA: Master of Business Administration

Introduction

The Covid-19 pandemic has posed an unprecedented challenge for education systems globally: more than 1.7 billion students were affected by school and university closures in 192 countries (Daniel, et al. [1,2]), and enrolment of international students declined at most universities worldwide (MacKie, et al. [3,4]). The ‘new normality’ (Tesar, et al. [5]) forced many higher education institutions, both public and private, to replace physical classes with online remote learning (Basilaia, et al. [6-9]), such as digitalised virtual classrooms (Mulenga, et al. [10,11]) and mobile learning (Naciri, et al. [12]). In terms of assessments, many universities had to grapple with either forgoing all summative assessments until the situation was more controllable or changing the assessment structure (Camara, et al. [13,14]). Large-scale examinations were replaced by low-stakes online remotely proctored assessments (Jodoin, et al. [15,16]). Higher education instructors experienced many challenges in their teaching, assessment, and feedback practices during the tumultuous times of the pandemic. The early outbreak of the pandemic caused educators to switch from traditional classroom teaching to blended delivery, which demanded a change in teaching style from teacher-centric to student-centric (Tan, et al. [17,18]).

Many instructors had little prior experience of online facilitation and online assessment; an understanding of e-pedagogy is vital to improving engagement and motivation among students (Garrison, et al. [19-22]). In Singapore, universities and private higher education institutions responded swiftly amid the pandemic by delivering all learning activities online and converting all summative assessments to proctored online examinations or replacing them with individual assignments or team projects (Tan, et al. [17]). These changes occurred between 10 February and 1 June 2020, and many students expressed anxiety about the sudden transition to fully online learning and the need to adapt to online assessment. Instructors also felt the stress of converting the curriculum to online delivery and changing the assessments to an online format, including peer assessment. While recognising the importance of having assessments that align with the learning outcomes, scholars argued that loss of opportunity to learn (OTL) is perceived as a threat to test scores’ reliability and comparability (De Pascale, et al. [23,24]). To minimise the OTL loss caused by Covid-19 and to take into consideration students’ diverse cultural, social, and learning abilities, education assessment scholars reviewed existing literature to identify operational psychometric procedures and to (re)design assessments that integrate theoretical concepts and job-related skills, knowledge, and abilities with evidence of fairness, reliability, and validity (Keng, et al. [24]).

Thus, this study seeks to examine students’ and instructors’ perceptions of the effectiveness of formative assessment, feedback, and peer assessment in enhancing students’ learning during the pandemic in Singapore.

Motivation

Several studies reported that the Covid-19 pandemic caused university students to experience academic burnout (Fernández-Castillo, et al. [25-28]) and affected their wellbeing and their ability to cope with their studies, mental health, social connectedness, or life issues (Aristovnik, et al. [29-32]). Educational researchers worldwide have presented studies examining the impact of the pandemic and online learning on students’ academic performance, mental health, social connectedness, or life issues in Bangladesh (Shuchi, et al. [33]), China (Cao, et al. [34-37]), France (Essadek, et al. [38]), Germany (Händel, et al. [39]), India (Kapasia, et al. [40,41]), Pakistan (Adnan, et al. [42]), the Philippines (Labrague, et al. [26,43]), Saudi Arabia (Khan, et al. [44]), Spain (Odriozola-González, et al. [45]), Switzerland (Elmer, et al. [46]), Ukraine (Nenko, et al. [47]), the U.K. (Burns, et al. [48-50]), the U.S. (Bono, et al. [51-57]), and Vietnam (Tran, et al. [58]). While there have been studies examining the impact of the pandemic on students’ academic burnout, resilience, and campus connectedness (Kwan, et al. [22]) and the adoption of online learning and teaching in Singapore (Tan, et al. [17]), there appears to be no study examining the use of formative assessment and feedback on students’ learning in Singapore during the pandemic.

Against the backdrop of the Covid-19 pandemic, this study aims to examine the effectiveness of formative assessment, feedback, and peer assessment on undergraduate and postgraduate students’ learning approaches, particularly in the higher education sector in Singapore. This topic is worth investigating for three reasons. First, from a constructivist theoretical approach, feedback is regarded as one of the most critical aspects of teaching, learning, and assessment practices (Carless, et al. [59-62]). There is no universally accepted definition and purpose of assessment feedback, and there is a growing body of evidence that current feedback practices are poorly executed in higher education (Bell, et al. [63-66]). This study will therefore shed some light on the effectiveness of feedback on students’ learning from the students’ perspective (Hounsell, et al. [67]), based on the Feedback Mark 2 model propounded by (Boud, et al. [68]). Second, at the practical level, there have been many changes in teaching and assessment practices in the higher education sector in Singapore amid the pandemic, such as the increasing use of hybrid teaching, blended learning, and online assessment (Ng, et al. [69,70]). Thus, this study may provide further insights to teaching faculty and policymakers in higher education on the effective use of formative assessment and feedback across different modes and technology platforms to improve student learning during and after the pandemic.

Third, the researcher hopes that the findings from this study, which is believed to be the first to examine formative assessment from students’ perspectives in the higher education sector in Singapore during the pandemic, will attract the interest of higher education assessment scholars in Singapore and other countries to perform comparative, meta-analytic, and longitudinal studies post-pandemic.

Literature Review

Formative Assessment and Online Assessment

Formative assessment, or assessment for learning, refers to “activities undertaken by educators and their students in assessing themselves that provide information to be used as feedback to modify teaching and learning activities” (Black, et al. [71]). This low-stakes assessment provides an ongoing source of information for teachers to understand students’ learning progress, develop interventions to improve students’ learning, and support them in achieving their learning goals (Shepard, 2006; Stiggins, 1999; Wiliam, et al. [72]). Formative assessments are broadly categorised as spontaneous or planned (Dixson, et al. [73]). Spontaneous formative assessments are impromptu and happen in real time, as when a teacher calls on students to answer conceptual questions covered in the previous lesson or engages the class to participate actively in questions raised by students during the lesson. Planned formative assessments include quizzes, homework assignments, and group discussions to assess student progress and improve collaborative learning (Dixson, et al. [73]). Prior studies reported that formative assessment with quality feedback enhances learning and achievement (Black, et al. [74-80]). Based on the theory of constructivism applied to higher education, assessment is a critical element of learning and teaching for students’ reflective construction of knowledge (Ion, et al. [81]). This theory suggests that students’ active involvement in formative assessment spans a wide range of activities, such as understanding the assessment rubrics, collaborating with instructors on assessment design, peer assessment, and acting on feedback from instructors to improve their learning.

In their seminal work on assessment and learning, (Black, et al. [74]) argued that educational policies in many countries treat the classroom as a ‘black box’, with little attention paid to what happens inside. Instead, universities pay considerable attention to raising education quality by changing the inputs, such as the regulation of teachers’ qualifications, adjustments to student achievement standards, and investment in technology, and by evaluating the outputs, which include standardised testing for summative assessment, students’ performances, and graduate employability (Stančić, et al. [82]). Prior studies reported that the quality of students’ learning may depend on the assessment used (Carless, et al. [83-85]). (Biggs, et al. [86]) used the term ‘backwash’ to refer to the impact of assessment on students’ approaches to learning. For instance, formative assessments appear more inclined to promote deep learning, while summative assessments are more conducive to surface learning (Lynam, et al. [84,87,88]). Assessment scholars argued that assessments involving case studies, simulations, and team presentations should emphasise real-world applications to prepare students to succeed in the twenty-first-century workplace (Carless, et al. [89,90]). Over the past two decades, formative assessment has gained noticeable attention in the assessment literature, and many universities have adopted online formative assessment instead of continuing with conventional pen-and-paper summative assessments (Cavus, et al. [91-95]).

In the context of this study, online formative assessment refers to “the use of information and communication technology to support the iterative process of gathering and analysing information about student learning by teachers as well as learners and of evaluating it in relation to prior achievement and attainment of intended, as well as unintended learning outcomes” (Pachler, et al. [96]). From the students’ perspective, online formative assessment provides flexibility and accessibility with respect to time and place, enhancing students’ learning experiences (Kumar, et al. [97,98]). Students also received more timely feedback from peer assessment and digitally marked assessment than from conventional teacher-marked assessment (Hoo, et al. [99-103]). Studies also reported that online formative assessment improves test reliability through machine marking, enhances impartiality, and permits interactive question styles through multimedia (Akib, et al. [104,105]). Using online multiple-choice questions that permit multiple attempts improves students’ engagement and motivation for learning (Furnham, et al. [106-108]). While there are concerns over the use of multiple-choice questions in promoting deep learning (Jordan, et al. [109]), assessment scholars argue that well-designed multiple-choice questions that emphasise critical thinking and analytical skills benefit students compared with essay-type questions, which may prompt students to regurgitate and reproduce factual knowledge (Brady, et al. [110,111]).

The pandemic has opened the floodgates for universities and faculty to re-examine the use of online assessment and feedback to promote students’ learning (Zou, et al. [22,112-115]). Online formative assessment may become more prominent as students take classes remotely with minimal physical interaction (Senel, et al. [115]), and it can transform teaching and learning by removing constraints of time, distance, and space (Cirit, et al. [116,117]). During the pandemic, learning management systems such as Canvas, Blackboard, SharePoint, and Moodle were used extensively for students to access online materials and submit their assignments. There was also a rise in the use of Zoom, Microsoft Teams, and WebEx for synchronous classes and interaction between instructors and students (Koh, et al. [118,119]). These platforms provide fertile ground for formative assessment and instant feedback using online quizzes involving multiple-choice, true-false, and matching questions (Shrago, et al. [120]). Instructors can use these platforms to monitor students’ performance and learning commitment via access rates, attendance rates for synchronous classes, and participation time and frequency in forum discussions (Murray, et al. [121]). The suitability and feasibility of employing these online platforms largely depend on their availability, compatibility with the existing information technology infrastructure and network, storage capacity, and internet connectivity for synchronous sessions (Crawford, et al. [122]).

Feedback on Student Performance

There has been a growing body of literature in recent years discussing the importance of feedback in promoting student learning in higher education (Boud, et al. [68,81,123-126]). Feedback is regarded as one of the most critical influences on student learning in teaching and assessment practices (Hattie, et al. [61,62]). As feedback may be seen as a multifaceted and complex process that deals with evaluating students’ assessment performance and managing their expectations (Bloxham, et al. [127-131]), the effectiveness of feedback depends on teachers’ preferred feedback practices, including the use of online feedback (Evans, et al. [60,132]), a timely communication process (Higgins, et al. [62,133]), depth and quality (Dawson, et al. [134,135]), students’ emotions (Alqassab, et al. [136-138]), and students’ perceived usefulness of the feedback for improvement and their ability to understand, interpret, and act upon it (Sadler, et al. [129,139-141]). Studies have examined the association between student involvement with feedback and a deep learning approach (Filius, et al. [142-144]). For instance, (Filius, et al. [142]) examined the importance of peer feedback interventions in promoting deep learning in an online course.

They found that students who adopt a deep learning approach are more likely to seek more quality feedback, consistent with the earlier study by (Geitz, et al. [143]). More recently, (Leenknecht, et al. [144]) surveyed 80 first-year undergraduates from a Dutch university to examine their feedback-seeking behaviour and its antecedents, including goal orientation and a deep learning approach. They concluded that students with a stronger learning goal orientation employ more deep learning strategies and seek more feedback. (Weaver, et al. [145]) noted four types of feedback perceived as ineffective for student learning: feedback that is overly vague or generic, feedback that does not relate to the assessment criteria, feedback that does not provide direction for further improvement (feedforward), and overly negative feedback. (Boud, et al. [68]) proposed two models of feedback: Feedback Mark 1 and Feedback Mark 2. Feedback Mark 1 reflects an engineering approach in which feedback is information transmitted rather than information used. It assumes that students depend highly on teachers to provide the information they require to learn; thus, the feedback process appears mechanistic. Feedback Mark 2 takes a sustainable approach in which students respond to the feedback, develop their own informed judgement, and relate their learning beyond the immediate task (Boud, et al. [146]). Thus, educators and students need to perceive feedback as a way of promoting self-regulation of learning, and students need to appreciate feedback as an essential way of improving their ability to make judgements and act upon them.

However, studies suggest that students often raise concerns and complaints over the quality of the feedback they receive, finding it not valuable for their learning or not comprehensible (Weaver, et al. [145,147-149]). Consequently, they become demotivated toward receiving feedback; worse, if the feedback appears negative, they may become frustrated and suffer low self-esteem and negative emotions (Sellbjer, et al. [131,150-152]), which may even lead to leaving the course (Shaikh, et al. [153]). However, (Walker, et al. [154]) argue that the effectiveness of feedback may depend not on the quality or characteristics of the feedback but on the ability of students to understand and interpret it. Students may be unclear about the learning objectives and assessment expectations, unable to comprehend the feedback, or may value the score and grade more than the feedback received (Jessop, et al. [155,156]). Thus, assessment feedback may impact students’ emotions, academic resilience, and buoyancy (Jonsson, et al. [137,157]). Educators need to adopt a balanced approach when providing feedback, one that allows students to see its value and promotes self-efficacy and self-esteem with the right amount of socio-emotional support (Higgins, et al. [158,159]). Prior studies using specific instruments to measure students’ views of formative assessment and feedback practices have been conducted in Australia (Dawson, et al. [134,160]),

China (Wei, et al. [141]), Serbia (Stančić, et al. [82]), Spain (Ion, et al. [81]), and the UK (Wu, et al. [95,157,161,162]). For instance, (Wu, et al. [95]) employed the Assessment Experience Questionnaire (AEQ) to examine the influence of the assessment system on student learning in three different universities in the UK. The AEQ uses constructs developed through the Transforming Experience of Students Through Assessment (TESTA) programme, adopted by more than 50 UK universities since its inception in 2009 (Batten, et al. [161,163]). They reported that formative assessment was the weakest domain across all three universities, while students from the new teaching-focused university gave significantly higher scores on the feedback quality and student approaches to learning dimensions than those from the two research-intensive universities. In Australia, (Dawson, et al. [134]) used the Feedback for Learning survey in a large-scale study involving 4,514 students and 406 instructors from two Australian universities to evaluate the effectiveness of feedback on student learning. They found that instructors strongly emphasised feedback design, while students perceived effective feedback as detailed and personalised. More recently, (Vattøy, et al. [164]) examined a sample of 182 undergraduates from a Norwegian university to evaluate students’ feedback engagement and feedback experiences using a mixed method, including an adapted Norwegian Assessment Experience Questionnaire (N-AEQ).

They reported that quantity of effort and feedback quality were the strongest predictors of variance in students’ use of feedback. Results from prior studies on the effectiveness of online feedback have been mixed (Alvarez, et al. [165-173]). For instance, (Chong, et al. [167]) examined 93 college students’ perceptions of online feedback in Hong Kong and found that students were more motivated and responded more proactively to the instructor’s online feedback, as annotated comments with tracked changes and highlighting gave them clarity and saved time when revising their work. These findings were supported by earlier studies conducted by (McCabe, et al. [174]) and (Alvarez, et al. [165]).

Peer Assessment

Peer assessment is defined as an “arrangement in which individuals consider the amount, level, value, worth, quality or success of the products or outcomes of learning of peers of similar status” (Topping, et al. [175]). It is commonly used as a self-regulated learning tool in higher education (Liu, et al. [176]) and typically requires students to “provide either feedback or grades (or both) to their peers on a product, process, or performance, based on the criteria of excellence for the product” (Falchikov, et al. [150]). Typically, the product is written work, a portfolio, an oral presentation (individual or team), or another performance task prescribed by the instructors (Topping, et al. [177]). Peer assessment can be summative (providing an evaluation and assigning a grade or score) or formative (providing feedback to support learning and suggest improvement), promoting collaborative learning (Falchikov, et al. [178-180]) and self-regulation in learning (Boud, et al. [68,128,180-185]). Students are empowered to demonstrate their subject knowledge, reflective and evaluation skills, and critical thinking while evaluating their peers’ work, whether written or oral (Topping, et al. [177,186-190]), which deepens their learning (Bangert, et al. [191-193]).

Performing a detailed peer assessment enables students to evaluate other students’ performance from the perspective of an assessor, improves the quality of their own work and learning to a large extent, and promotes independence and task ownership (Bong, et al. [194-199]) in a more varied and timely manner (Boud, et al. [182,200,201]). As peer assessment makes students aware of assessment standards and requires them to make evaluative judgements and provide feedback against a set of rubrics and predefined assessment criteria (Carless, et al. [202,203]), it gives students opportunities to cultivate a broad range of behavioural, cognitive, and transferable skills, such as verbal and written communication, team building, self-awareness, critical thinking, and time management (Nicol, et al. [188,189,202,204-206]). These skills are valuable for students to acquire in order to be career-ready when they gain employment upon graduation (Carless, et al. [88,202,207-209]).

While students see the benefits of peer assessment in promoting self-regulated learning, it has several limitations (Boud, et al. [68,201,210-213]). For instance, prior studies reported that students see peer assessment as a time-consuming and stressful exercise (Bong, et al. [194,214-221]). Students may lack the skills or motivation to provide peer assessment (Stančić, et al. [82,216,219,222-228]); they may remain sceptical and distrustful of the reliability and accuracy of their peers’ assessments compared with their instructors’ (Liu, et al. [89,177,221,223,229-231]); and they may be affected by the quality of peer relationships (Brown, et al. [232,233]), competitive pressure to give lower grades, or peer pressure to give favourable or biased feedback (Chen, et al. [234,235]). The advent of digital education has brought increasing attention to online learning and the use of online educational technologies in teaching, assessment, and feedback in the higher education sector globally (Liu, et al. [176,236,237]).

Online peer assessment, in which students evaluate their peers’ work and provide feedback through online collaboration, has been employed by many universities as a primary online assessment format (Liu, et al. [238-241]), including for large virtual classes such as massive open online courses (Kulkarni, et al. [236,242]) during the pandemic (Dominiguez-Figaredo, et al. [243-245]). As educational technology is perceived as an avenue for academics to design and implement online assessments and feedback, online peer assessment has become a primary online assessment format with several distinct benefits over conventional peer assessment (Wang, et al. [37,176,210]). For instance, online peer assessment permits anonymity and may be conducted at more flexible times and from remote locations (Li, et al. [35,242,246-248]), resulting in more significant learning gains (Li, et al. [35,249]). In addition, online peer assessments may be automatically recorded and stored digitally, with ease of retrieval by faculty members, thus reducing their workload (Yang, et al. [242,247,250,251]). Beyond these, prior studies reported that online peer assessment deepens students’ knowledge construction and learning reflection (Rosa, et al. [252]), assists students in evaluating their affective, behavioural, cognitive, and metacognitive behaviours in relation to peer assessment and comments (Hou, et al. [253]), and boosts students’ confidence and comfort in providing anonymous online peer assessment, minimising adverse effects on peer relationships (Demir, et al. [254]).

Despite the various advantages documented in the literature, online peer assessment has its fair share of limitations (Doiron, et al. [255]). (Liu, et al. [176]) argue that students may take peer assessment lightly in an online environment, since faculty do not monitor the process regularly. Students may also experience anxiety or frustration in using online technologies (Bolliger, et al. [256]), in reacting to criticism from peers (Brindley, et al. [257,258]), or in dealing with unclear online guidelines and assessment procedures that compromise reliability and fairness (Kaufman, et al. [223]). Several studies have investigated students’ attitudes towards online peer assessment and reported mixed results (Wang, et al. [37,176,227,231,259-264]). More recently, in the US, (Wang, et al. [237]) employed a mixed method to examine the factors associated with online graduate students’ attitude change in online peer assessment. They found that perceiving the feedback as accurate and specific, communicating about the peer’s work, and addressing logistics concerns helped students develop a positive attitude towards online peer assessment. Similar positive attitudes towards online peer assessment were reported in earlier studies by (Liu, et al. [260,262,265]) and in another recent study by (Zheng, et al. [263]).

However, (Kaufman, et al. [223]) reported that university students exhibited negative attitudes toward fairness issues. (Wen, et al. [231]) found that students expressed a more positive attitude toward conventional peer assessment than toward online peer assessment, although they did not explain the possible factors behind this difference. These mixed results call for further investigation of students’ attitudes towards online peer assessment.

Method

As noted in the literature review, prior studies using specific instruments to measure students’ views of formative assessment and feedback practices, such as the AEQ and the Feedback for Learning survey, have been conducted in Australia (Dawson, et al. [134,160]), China (Wei, et al. [141]), Serbia (Stančić, et al. [82]), Spain (Ion, et al. [81]), and the UK (Wu, et al. [95,134,160,162]).

For this study, the Assessment and Feedback Experience Questionnaire (AFEQ) was employed, adapted from the latest version of the AEQ (V4.0), as it was the best fit to address the first two research questions. That version comprises 18 items clustered into five factors: formative assessment, how students learn, student effort, quality of feedback, and internalisation of standards. The factors ‘how students learn’ and ‘student effort’ measure learning approaches. However, the instrument does not include peer assessment and contains only four items relating to feedback. Thus, the AFEQ has six factors comprising 30 items: the existing five factors, expanded to 23 items, and a new seven-item factor, ‘peer assessment’. The ‘quality of feedback’ factor was expanded by incorporating the relevant items from the Feedback for Learning survey developed by Monash University, Deakin University, and the University of Melbourne. A 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree) was used to measure each item. Demographic variables such as gender, age group, year of study, and school were included in the questionnaire. The target participants for this study comprised undergraduates and postgraduates from a US university with a campus in Singapore. The undergraduates pursued full-time business, accountancy, engineering, or social sciences degrees. Their degrees lasted between three and four years, and they typically underwent internships during their first and second years of study.
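To make the scoring concrete: an instrument like the AFEQ is typically scored by averaging each respondent’s 1-5 Likert responses within each factor. The Python sketch below is illustrative only; the item-to-factor mapping is hypothetical (and shows only a subset of the 30 items), since the paper does not reproduce the full questionnaire.

```python
import pandas as pd

# Hypothetical item-to-factor mapping; the actual AFEQ assignment of its
# 30 items to six factors is not published in this paper.
FACTORS = {
    "formative_assessment": ["q1", "q2", "q3"],
    "how_students_learn": ["q12", "q15", "q17"],
    "student_effort": ["q4", "q20", "q21"],
    "feedback_quality": ["q5", "q10", "q14", "q28"],
    "internalisation_of_standards": ["q16", "q18", "q22"],
    "peer_assessment": ["q7", "q9", "q13", "q26", "q27"],
}

def factor_scores(responses: pd.DataFrame) -> pd.DataFrame:
    """Average the 1-5 Likert responses within each factor for every student."""
    return pd.DataFrame(
        {factor: responses[items].mean(axis=1) for factor, items in FACTORS.items()}
    )

# Example: three (made-up) students answering every mapped item.
demo = pd.DataFrame({q: [4, 5, 3] for items in FACTORS.values() for q in items})
print(factor_scores(demo))
```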

The postgraduates were pursuing their first-year or second-year Master of Business Administration (MBA) degree full-time or part-time. The participants were former or current students of the researcher, together with students referred by other instructors within the university. The former students were recruited via direct contact with the researcher, with emails sent to prospective participants inviting them to take part. For current students, the researcher and other instructors made a verbal announcement after their lesson on the purpose and duration of the research. An invitation letter with the Participant Information Sheet and Consent Form was emailed to 160 undergraduates and 145 postgraduates. A total of 133 undergraduates and 127 postgraduates responded and agreed to participate, constituting 83% and 88% response rates, respectively. A self-administered questionnaire was emailed to these students, and a participant debrief letter was emailed to each on receipt of the completed questionnaire. Nine students did not reply despite several follow-ups. The final sample comprised 128 undergraduates (53 females, 75 males) and 123 postgraduates (42 females, 81 males). The undergraduates were in their first (22), second (53), third (41), and fourth (12) year of study. The majority of the participants were pursuing a degree in business (65%) or science (23%), with a small percentage in engineering (7%) and humanities, arts and social sciences (5%). Among the postgraduate participants, 74 were first-year students and the remaining 49 were second-year students. The distribution of full-time and part-time students was 72 and 51, respectively.

Findings

Respondent Demographics

A total of 251 students (128 undergraduates and 123 postgraduates) participated in the survey, of which 156 were male (75 undergraduates and 81 postgraduates) and the remaining 95 were female (53 undergraduates and 42 postgraduates). Table 1 summarises the students’ profiles by level of study and gender. Table 2 summarises the age distribution of the students and the mode of study for postgraduates. All the undergraduates are full-time students; most fall within the 21-24 age group, accounting for 66% of the undergraduate sample. More male students fall within the 21-24 and 25-27 age groups than female students. The overall age distribution is in line with the year of study: 42% and 32% of the students are in their second and third year, respectively (the majority falling within the 21-24 age group), while only 17% and 9% of the undergraduates are in their first and fourth year of study, respectively. In terms of discipline, the majority of the students are pursuing Accountancy/Business (65%), while the remaining students come from Science (23%), Engineering (7%), and Humanities, Arts and Social Sciences (5%). For the postgraduates, the age groups begin at 25-29, as the minimum age for MBA entry is 25. The distribution between Year 1 and Year 2 students is 74 (60%) and 49 (40%). There is a higher number of students aged 35 and below, both male and female, the majority of whom are full-time students, suggesting these students may see an MBA as a vital credential to gain more job opportunities upon graduation (Simpson, et al. [266]) and to stay competitive in the job market (Edington, et al. [267-269]). More part-time students over 35 are pursuing an MBA; they may be considering a career switch (Mark, et al. [270,271]) or seeking career advancement from their current employers (Baruch, et al. [272-277]).

Table 1: Sample Distribution – Level of Study and Gender.

Table 2: Sample Distribution – Level of Study, Gender and Age Group.

Descriptive Statistics and Significance

Table 3 summarises the mean score and standard deviation for each of the 30 items in the AFEQ for undergraduates and postgraduates. On the 5-point Likert scale ranging from 1 to 5, the higher the score provided by the respondents, the more they agreed with the statement. The three items with the highest mean scores for the undergraduates were item 4 (“I had to put the hours in regularly every week if I wanted to do well.”), item 20 (“I studied things that were covered in graded assessments.”), and item 27 (“I provided fair assessment and feedback to my peers.”). It appears that the participants saw graded assessment as essential and put more effort into “examinable” topics and areas. As these undergraduates were full-time students, they may have been able to commit more time every week than the part-time postgraduate students. The three items with the lowest mean scores for the undergraduates were item 5 (“I prefer handwritten feedback on hardcopy documents.”), item 28 (“I prefer typewritten feedback on hardcopy/scanned copy documents.”), and item 9 (“I enjoyed the peer assessment process.”). It appears that the undergraduates had a relatively neutral preference for written feedback. As for the peer assessment process, the relatively low score may be attributable to a lack of enthusiasm for carrying out peer assessment, as it may be time-consuming. In addition, respondents may see peer assessment as less credible because they are inexperienced and not trained to conduct such assessments.
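As a minimal sketch of how per-item descriptive statistics of the kind reported in Table 3 can be computed (the data frame, cohort labels, and scores below are illustrative stand-ins, not the study’s data):

```python
import pandas as pd

# Illustrative long-format responses: one row per (student, item) pair,
# with "UG" / "PG" as undergraduate / postgraduate cohort labels.
responses = pd.DataFrame({
    "cohort": ["UG", "UG", "UG", "UG", "PG", "PG", "PG", "PG"],
    "item":   ["item4", "item20", "item4", "item20",
               "item4", "item20", "item4", "item20"],
    "score":  [5, 5, 4, 4, 3, 5, 2, 4],
})

# Mean and standard deviation for each item within each cohort (cf. Table 3).
table3 = (responses
          .groupby(["cohort", "item"])["score"]
          .agg(["mean", "std"])
          .round(2))
print(table3)
```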

Table 3: Descriptive statistics.

Note:
• * = p < 0.05
• A higher score suggests students agree with the statement, and a score lower than 3 suggests students tend to disagree with the statement.

Interestingly, two of the top three items with the highest mean scores among the postgraduates are the same as for the undergraduates (items 20 and 27), while the other highest-scoring item is “The feedback helped me to understand my performance better.” (item 10). Like the full-time undergraduates, the postgraduates adopted a “study smart” attitude, being willing to spend more time only on “examinable” topics. However, they were less willing to put in more hours weekly, especially the part-time students who have to juggle work, personal (or family), and study commitments, as evidenced by the relatively low score for item 4. It is telling that these students appreciate the feedback provided by the faculty members more than their undergraduate counterparts. The reasons for their appreciation may be twofold. Firstly, many of the assessments are informal peer discussions and team presentations of case studies, where postgraduates see the importance of feedback in enhancing their knowledge and raising their confidence in applying what they have learned in their current or future (for the full-time MBA students) workplace. Secondly, several core modules in the MBA program, such as corporate finance, organisational behaviour, and marketing, are prerequisites for electives such as advanced corporate finance, leadership development, and international marketing. Thus, MBA students value the feedback provided in the first-year core modules as crucial for improving their assessment performance in the second year, when they choose electives based on their specialisation or interest.

The three items with the lowest mean scores among the postgraduates are item 5, item 8 (“I only valued assessments that count towards my grade.”), and item 11 (“I felt the assessment expectations were constantly changing, especially during the pandemic.”). The low score for item 8 may suggest that MBA students prefer formative assessment over summative assessment, as they enjoy peer learning via team discussion and experiential learning in classrooms or synchronous online sessions. The low mean score for item 11 aligns with the views gathered from the faculty members, most of whom said they did not change their expectations of formative assessments during the pandemic, as they felt that many MBA students enjoy peer interaction even when attending online classes. While there are differences in the mean scores between the undergraduates and postgraduates, only 10 of the 30 items showed significant differences, as indicated in the last column of Table 3 (p < 0.05). Three items (1, 8, 14) fall within the Feedback factor, and another four items (7, 9, 13, 26) fall under the Peer Assessment factor. A closer examination of these items indicates that postgraduates are more participative and engaged in formative assessments, as they felt they learned much more from them. In addition, these respondents enjoy peer assessment, as they are more competent in providing peer assessment and feedback to their classmates. Consequently, they are more motivated after seeing the peer feedback.

Table 4: Reliability – Cronbach’s Alpha.

Reliability and Inter-Factor Correlation: While the 18-item AEQ V4.0 has five factors, the 30 items in the AFEQ are grouped into six factors: how students learn, internalisation of standards, feedback quality, student effort, formative assessment, and peer assessment. Cronbach’s alpha reliability coefficients were computed to evaluate the reliability of the items within each factor and to estimate response consistency. Generally, Cronbach’s alpha coefficients of 0.70 or higher are acceptable for research purposes (Nunnally, et al. [278,279]). Table 4 summarises Cronbach’s alpha for the six factors in the AFEQ. All six factors reported acceptable coefficients, ranging from 0.71 to 0.85. Spearman’s rank-order correlation coefficients were employed to ascertain the degree to which the factors in the questionnaire are related. Questionnaires may reveal factors that are related to a certain extent, though the relationships may not be strong, when they measure the same concept (Byrne, et al. [280]). Table 5 summarises the bivariate correlations; there is no evidence of multicollinearity, as all correlations are below 0.80 (Stevens, 1996). All the correlations are significant, though only at weak (r = 0.20-0.39) or moderate (r = 0.40-0.69) levels (Akoglu, et al. [281]), suggesting the items in the questionnaire have sound psychometric properties and the factors are more distinct than anticipated (Tabachnik, et al. [282]).
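For readers wishing to reproduce these reliability and correlation statistics on comparable data, a minimal sketch follows. It implements the standard Cronbach’s alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), and a Spearman rank-order correlation; the randomly generated responses are stand-ins for the unpublished AFEQ data.

```python
import numpy as np
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Dummy 1-5 Likert responses (251 students x 5 items) standing in for one factor.
rng = np.random.default_rng(42)
factor_items = pd.DataFrame(rng.integers(1, 6, size=(251, 5)),
                            columns=[f"q{i}" for i in range(1, 6)])
print(f"Cronbach's alpha: {cronbach_alpha(factor_items):.2f}")

# Spearman rank-order correlation between two columns (cf. Table 5, which
# correlates factor scores rather than single items).
rho, p = stats.spearmanr(factor_items["q1"], factor_items["q2"])
print(f"rho = {rho:.2f}, p = {p:.3f}")
```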

Table 5: Correlations between factors of the AFEQ.

Note: ** = p < 0.01

Undergraduate: Gender, Age Group: Tables 6-11 present the overall mean scores by gender for undergraduates concerning formative assessment (Tables 6 & 7), feedback (Tables 8 & 9), and peer assessment (Tables 10 & 11). Though the male participants recorded a marginally higher mean score than the female students for formative assessment (3.84 versus 3.72) and feedback (3.83 versus 3.65), and both recorded almost equal mean scores for peer assessment (3.75 versus 3.76), the differences are not statistically significant (p > 0.05). Thus, the study’s findings indicate no significant differences in the perceptions of the effectiveness of formative assessment, feedback, and peer assessment between male and female undergraduates. Tables 12-17 present the overall mean scores by age group for undergraduates concerning formative assessment (Tables 12 & 13), feedback (Tables 14 & 15), and peer assessment (Tables 16 & 17). The age group with the largest sample, 21-24, recorded a marginally higher mean score than the other age groups for feedback and peer assessment, but the same mean score for formative assessment as those aged 17-20. However, the ANOVA analysis indicates no significant differences in the perceptions of the effectiveness of formative assessment, feedback, and peer assessment between the three age groups of these undergraduates.
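The comparisons reported here correspond to an independent samples t-test (gender) and a one-way ANOVA (age group). A minimal scipy sketch, under the assumption that the data sit in a data frame with one row per undergraduate and hypothetical columns gender, age_group, and feedback (a factor score):

```python
import pandas as pd
from scipy import stats

# Hypothetical factor scores; the study's raw data is not published.
df = pd.DataFrame({
    "gender":    ["M", "M", "M", "F", "F", "F"],
    "age_group": ["17-20", "21-24", "25-27", "17-20", "21-24", "25-27"],
    "feedback":  [3.9, 3.8, 3.7, 3.6, 3.7, 3.6],
})

# Independent samples t-test: male vs female mean feedback scores.
# Student's t-test (equal variances assumed); the paper does not state
# which variant was used.
male = df.loc[df["gender"] == "M", "feedback"]
female = df.loc[df["gender"] == "F", "feedback"]
t, p_t = stats.ttest_ind(male, female)
print(f"t = {t:.2f}, p = {p_t:.3f}")

# One-way ANOVA: feedback scores across the three age groups.
groups = [g["feedback"].to_numpy() for _, g in df.groupby("age_group")]
f_stat, p_f = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_f:.3f}")
```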

Table 6: Formative Assessment: Gender (Undergraduates).

Table 7: Independent Samples Test – Formative Assessment: Gender (Undergraduates).

Table 8: Feedback: Gender (Undergraduates).

Table 9: Independent Samples Test – Feedback: Gender (Undergraduates).

Table 10: Peer Assessment: Gender (Undergraduates).

Table 11: Independent Samples Test – Peer Assessment: Gender (Undergraduates).

Table 12: Formative Assessment: Age Group (Undergraduates).

Table 13: ANOVA – Formative Assessment: Age Group (Undergraduates).

Table 14: Feedback: Age Group (Undergraduates).

Table 15: ANOVA – Feedback: Age Group (Undergraduates).

Table 16: Peer Assessment: Age Group (Undergraduates).

Table 17: ANOVA – Peer Assessment: Age Group (Undergraduates).

Postgraduate: Gender, Mode of Study, Age Group: Tables 18-23 present the overall mean scores by gender for postgraduates concerning formative assessment (Tables 18 & 19), feedback (Tables 20 & 21), and peer assessment (Tables 22 & 23). The female postgraduates recorded a marginally higher mean score than their male counterparts in all three factors: formative assessment (3.88 versus 3.85), feedback (3.97 versus 3.75), and peer assessment (4.00 versus 3.90). However, the differences are not statistically significant (p > 0.05). Thus, the findings of the study indicate that there are no significant differences in the perceptions of the effectiveness of formative assessment, feedback, and peer assessment between male and female postgraduates. Tables 24-29 present the overall mean scores by mode of study for postgraduates concerning formative assessment (Tables 24 & 25), feedback (Tables 26 & 27), and peer assessment (Tables 28 & 29). The full-time postgraduates recorded a relatively higher mean score than their part-time counterparts in all three factors: formative assessment (3.97 versus 3.71), feedback (4.00 versus 3.57), and peer assessment (4.06 versus 3.77). A closer examination of the independent samples t-test results in Table 27 indicates that the difference in mean feedback scores between full-time and part-time students is statistically significant (p < 0.05). Thus, the study’s findings indicate a significant difference in perceptions of feedback between full-time and part-time postgraduates, but not for formative and peer assessments. Tables 30-35 present the overall mean scores by age group for postgraduates concerning formative assessment (Tables 30 & 31), feedback (Tables 32 & 33), and peer assessment (Tables 34 & 35). The age group with the largest sample, 31-35, recorded the highest mean score among the age groups for formative assessment, while participants aged between 41 and 45 reported the highest mean scores for feedback and peer assessment. However, the ANOVA analysis indicates no significant differences in the perceptions of the effectiveness of formative assessment, feedback, and peer assessment between the five age groups of these postgraduates.

Table 18: Formative Assessment: Gender (Postgraduates).

Table 19: Independent Samples Test – Formative Assessment: Gender (Postgraduates).

Table 20: Feedback: Gender (Postgraduates).

Table 21: Independent Samples Test – Feedback: Gender (Postgraduates).

Table 22: Peer Assessment: Gender (Postgraduates).

Table 23: Independent Samples Test – Peer Assessment: Gender (Postgraduates).

Table 24: Formative Assessment: Mode of Study (Postgraduates).

Table 25: Independent Samples Test – Formative Assessment: Mode of Study (Postgraduates).

Table 26: Feedback: Mode of Study (Postgraduates).

Table 27: Independent Samples Test – Feedback: Mode of Study (Postgraduates).

Table 28: Peer Assessment: Mode of Study (Postgraduates).

Table 29: Independent Samples Test – Peer Assessment: Mode of Study (Postgraduates).

Table 30: Formative Assessment: Age Group (Postgraduates).

Table 31: ANOVA – Formative Assessment: Age Group (Postgraduates).

Table 32: Feedback: Age Group (Postgraduates).

Table 33: ANOVA – Feedback: Age Group (Postgraduates).

Table 34: Peer Assessment: Age Group (Postgraduates).

Table 35: ANOVA – Peer Assessment: Age Group (Postgraduates).

Discussion

The findings of the study revealed that postgraduate students placed a higher value on formative assessment than undergraduates, although there were no significant differences by gender, age group, or mode of study. This suggests that the postgraduates are more inclined to adopt a deep learning approach: they have a strong interest in gaining a deeper understanding of the relevant concepts and theories covered and can relate them to their prior personal experiences and current workplace (Beattie, et al. [283,284]). Higher education researchers have noted that deep learning contributes to a more positive, higher-quality learning outcome and improved academic performance compared with a surface learning approach (Biggs, et al. [285-291]). This is evident from the “How students learn” factor, where postgraduates reported higher mean scores for “I was able to apply learning from my assessments to new situations”, “Assessments enabled me to explore complex problems facing the world”, and “Assessments helped me develop skills for graduate work”. The pandemic may have created anxiety and challenges for the postgraduates: the full-time postgraduates faced uncertainty about landing a full-time job that recognises the MBA they were pursuing, while the part-time MBA students may have faced retrenchment and a bleak career path, given the poor financial performance of many firms during the pandemic.

Thus, these students may be more engaged in formative assessment, as they see the value of collaborative learning in reducing anxiety and expanding their professional network with their classmates, which may translate into business and career opportunities (Mark, et al. [270,292]). The findings also revealed that both undergraduates and postgraduates recognised the importance of feedback in improving their understanding of their assessment performance, in line with earlier studies (Vattøy, et al. [164]). A closer examination of the results revealed that students put substantial effort into studying regularly, given the challenging assessment demands, hoping to achieve better results with more effort. Thus, it appears that they valued the feedback given by the instructors. Regarding the mode of feedback, students had a strong preference for online and face-to-face feedback over handwritten and typewritten feedback, and postgraduates had a stronger preference for online and face-to-face feedback than undergraduates. Prior studies noted that students who appreciated face-to-face feedback were perceived as having a stronger desire for in-depth, interactive feedback that allows immediate responses from instructors (Henderson, et al. [170,293]). For peer assessment, the findings revealed that postgraduates reported higher mean scores than undergraduates for most of the peer assessment items in the AFEQ.

Possible reasons for the postgraduates’ higher mean scores may be that they held positive attitudes and were more open to providing and receiving peer assessment (Collimore, et al. [259,294-296]), saw such feedback as improving their assessment quality and learning outcomes (Falchikov, et al. [178,297-300]), and regarded it as a fairer way to assign grades for group projects, which are common in MBA courses (Wang, et al. [126,301]). In contrast, undergraduates may see peer assessment as relatively less beneficial: they may lack the skills to perform such assessment (Liu, et al. [89,223,229]), be sceptical about the reliability and accuracy of student ratings (Kaufman, et al. [223,230,231]), be wary of power relations among students (Liu, et al. [89,232]), and lack the motivation and time to perform such an activity (Liu, et al. [89]). An interesting finding from the interviews with several instructors was that peer assessment is voluntary for most of the MBA modules, whereas most of the business-related undergraduate modules come with mandatory peer assessment. Prior studies have examined both mandatory peer assessment (Yang, et al. [242,302]) and voluntary peer assessment (Hafner, et al. [303]).

Thus, the postgraduates’ higher mean score for peer assessment may be attributable to its voluntary nature, which generates stronger interest and makes students more likely to put effort and motivation into peer assessment. This is echoed by a recent study by (Liu, et al. [176]), which found that voluntary peer assessment provides better motivation to give quality feedback and improves students’ learning outcomes and rating accuracy compared with mandatory peer assessment. In terms of gender, the study found no significant difference in peer assessment between male and female students, in line with the results reported by (Collimore, et al. [259,304]). With the start of the Transition Phase announced by the Singapore government on 22 November 2021, the university reduced the online assessment component as it resumed physical classes, with up to 75% of students allowed on campus at any time. Students now have a hybrid of online assessments and in-class formative assessments. Effective integration of online and in-class formative assessment is vital to enhance interaction between instructors and students, boost students’ confidence in achieving the learning outcomes, and foster a meaningful learning community that promotes self-directed and deep learning with effective use of online technology (Dixson, et al. [305-307]). To reduce students’ anxiety and make them more ready for online assessments, instructors may provide shorter, low-stakes, bite-sized online assessments that permit multiple attempts and provide detailed pre-programmed online feedback.

To promote collaborative learning and engagement, instructors may encourage students to form “buddy teams” in which they meet weekly or fortnightly to share any challenges faced in online assessment. Instructors may also coach and mentor students to help them address their concerns. For students unfamiliar with peer assessment, instructors may provide scaffolding to guide them and improve their commitment to providing quality peer assessment. While the findings suggest that students have a stronger preference for online feedback over traditional handwritten comments on manuscripts, there are concerns that instructors need to address. In the absence of face-to-face discussion, online feedback may lead to misinterpretation and reduce the opportunity for immediate clarification (Hattie, et al. [61,308,309]). In addition, there may be a delay in students accessing the online feedback, and at times instructors may not be available for clarification, which can have an off-putting effect resulting in depersonalisation, disengagement, and reduced self-regulated learning (McCabe, et al. [174,310,311]). Further, appropriate training and support need to be provided to students unfamiliar with accessing online feedback via the various platforms, especially new students and mature students who are digital immigrants (Hast, et al. [309,312,313]). Students must be adaptable during ambiguous times such as the pandemic in order to thrive and develop resilience and perseverance.

Conclusion

This study is believed to be the first in Singapore to examine the effectiveness of formative assessment, feedback, and peer assessment in promoting student learning during the pandemic from the students’ perspective. The findings revealed significant differences in feedback and peer assessment effectiveness between undergraduates and postgraduates. However, there were no significant differences in the perceptions of the effectiveness of formative assessment, feedback, and peer assessment between gender and age groups for both undergraduates and postgraduates. Regarding the mode of study, there was a significant difference in perceptions of feedback between full-time and part-time students. The findings and implications gathered from the quantitative and qualitative approaches are subject to some limitations. The sample was selected from a single university and focused mainly on full-time undergraduates and MBA students whom the researcher has taught or is currently teaching, with other instructors teaching a fraction of the respondents. Thus, the findings may not represent students from other universities and private higher education institutions in Singapore or other countries. Second, the study did not gather data from part-time undergraduates or non-Business School postgraduate students, who may respond differently to the AFEQ items. While this study focuses on students’ perceptions of the value of formative assessment, feedback, and peer assessment for learning during the pandemic, other relevant areas have yet to be fully explored in Singapore.

Firstly, longitudinal studies may be conducted to evaluate the extent to which online assessments and feedback benefit students' learning and academic performance during and after the pandemic (Slack, et al. [314]). Secondly, the study may be extended to other countries, where factors such as government support, cultural dimensions such as those propounded by Hofstede [315,316] and Hampden-Turner and Trompenaars [317], students' resilience, hybrid learning, and changes in assessment structure and feedback mechanisms may affect students' performance during and after the pandemic. Thirdly, focus group interviews may be conducted with instructors, assessment scholars, curriculum specialists, and department heads from various divisions and schools to gain deeper insights into how learning and teaching practices may influence assessment changes in the higher education sector. The pandemic is unprecedented in its scale and has provided opportunities for higher education institutions to re-examine their existing learning and teaching, assessment, and feedback practices. Given the ambiguity in the epidemiological and economic outlook, predicting when all conventional educational activities can resume is difficult. Any changes in educational policies and assessment practices must be supported by the government, organisations (professional and private), faculty, educational designers, and educational technologists. Future developments, such as 5G networks and generative AI tools, may enable universities to implement more sophisticated online learning and assessment tools that enhance student learning (Thathsara, et al. [18]). Such technologies may play a pivotal role in online assessment and feedback in a student-centric learning environment in the higher education sector in Singapore, and may become the new standard for universities in the post-pandemic era (Kwan, et al. [31,318-321]).

References

  1. Daniel J (2020) Education and the Covid-19 pandemic. Prospects 49(1-2): 91-96.
  2. OECD (2020) Education and COVID-19: Focusing on the long-term impact of school closures. Paris, France: OECD.
  3. Mackie C (2020) The Pandemic Drives Unprecedented Decline in International Students. Mobility Trends.
  4. Xiong W, Mok K, Ke G, Cheung J (2020) Impact of COVID-19 Pandemic on International Higher Education and Student Mobility: Student Perspectives from Mainland China and Hong Kong. Int J Educ Res 105: 101718.
  5. Tesar M (2020) Towards a post-Covid-19 “New Normality?”: Physical and social distancing, the move to online and higher education. Policy Futures in Education 18(5): 556-559.
  6. Basilaia G, Kvavadze D (2020) Transition to online education in schools during a SARS-CoV-2 Coronavirus (Covid-19) pandemic in Georgia. Pedagogical Research 5(4): 1-9.
  7. Kuleva M (2020) The impact of Covid-19 pandemic on the evaluation of effectiveness on online distance learning. Pedagogy 92(7): 74-83.
  8. Mishra L, Gupta T, Shree A (2020) Online Teaching-Learning in Higher Education during Lockdown Period of COVID-19 Pandemic. International Journal of Educational Research Open (1): 100012.
  9. Tzivinikou S, Charitaki G, Kagkara D (2020) Distance Education Attitudes (DEAS) During Covid 19 Crisis: Factor Structure, Reliability and Construct Validity of the Brief DEA Scale in Greek Speaking SEND Teachers. Technology, Knowledge and Learning 26: 461-479.
  10. Mulenga EM, Marbán JM (2020) Is Covid-19 the gateway for digital learning in mathematics education? Contemporary Educational Technology 12(2): ep269.
  11. Sintema E (2020) E-learning and smart revision portal for Zambian primary and secondary school learners: A digitalized virtual classroom in the Covid-19 era and beyond. Aquademia 4(2): 1-2.
  12. Naciri A, Baba MA, Achbani A, Kharbach A (2020) Mobile learning in higher education: Unavoidable alternative during Covid-19. Aquademia 4(1): 1-2.
  13. Camara W (2020) Never let a crisis go to waste: Large-scale assessment and the response to Covid-19. Educational Measurement: Issues and Practice 39(3): 10-18.
  14. Richards E, West S, Altavena L (2020) Amid coronavirus, AP exams went online and had tech problems. College Board says it's investigating.
  15. Jodoin M, Rubright JD (2020) When examinees cannot test: The pandemic's assault on certification and licensure. Educational Measurement: Issues and Practice 39(3): 31-33.
  16. Weiner J, Hurtz GM (2017) A Comparative Study of Online Remote Proctored versus Onsite Proctored High-Stakes Exams. Journal of Applied Testing Technology 18(1): 13-20.
  17. Tan S, Rudolph J, Crawford J, Butler Henderson K (2022) Emergency remote teaching or andragogical innovation? Higher education in Singapore. Journal of Applied Learning & Teaching 5(S1): 64-80.
  18. Thathsara D Maddumapatabandi, Gamage KAA (2020) Novel coronavirus (Covid-2019) pandemic: Common challenges and responses from higher education providers. Journal of Applied Learning & Teaching 3(2): 41-50.
  19. Garrison D, Vaughan ND (2008) Blended learning in higher education: Framework, principles, and guidelines. Jossey-Bass: 1-272.
  20. Harris C, Tan H (2020) You can teach old dogs new clicks - the importance of teacher use of online content in a blended higher education course in Singapore. Journal of Applied Learning & Teaching 3(2): 59-70.
  21. Oliver R, Herrington J (2003) Factors influencing quality online learning experiences. In: G D Stacey (Edt.)., Quality Education @ a Distance, Kluwer Academic Publishers, pp. 137-142.
  22. Zou M, Kong D, Lee I (2021) Teacher engagement with online formative assessment in EFL writing during Covid-19 pandemic: The case of China. Asia-Pacific Educational Research 30(6): 487-498.
  23. DePascale C, Gong B (2020) Comparability of individual students’ scores on the “same test”. In: AI Berman, EH Haertel, JW Pellegrino (Eds.)., Comparability of large-scale educational assessments: Issues and recommendations, Washington, DC: National Academy of Education, p. 25-48.
  24. Keng L, Boyer M, Marion S (2020) Into the Unknown: Assessments in Spring 2021. Educational Measurement: Issues and Practice 39(3): 53-59.
  25. Fernández Castillo A (2021) State anxiety and academic burnout regarding university access selective examinations in Spain during and after the COVID-19 lockdown. Front Psychol 12: 621863.
  26. Labrague L, Ballad CA (2020) Lockdown fatigue among college students during the Covid-19 pandemic: Predictive role of personal resilience, coping behaviours, and health. Perspectives in Psychiatric Care 57(4): 1905-1912.
  27. Moreno Fernandez J, Ochoa J, Lopez Aliaga I, Alferez M, Gomez Guzman M, et al. (2020) Lockdown, emotional intelligence, academic engagement and burnout in pharmacy students during the quarantine. Pharmacy 8(4): 194.
  28. Sundarasen S, Chinna K, Kamaludin K, Nurunnabi M, Baloch GM, et al. (2020). Psychological impact of COVID-19 and lockdown among university students in Malaysia: implications and policy recommendations. International Journal of Environmental Research and Public Health 17(17): 6206.
  29. Aristovnik A, Keržič D, Ravšelj D, Tomaževic N, Umek L, et al. (2020) Impacts of the Covid-19 pandemic on life of higher education students: A global perspective. Sustainability 12(20): 1-34.
  30. Hawley S, Thrivikraman JK, Noveck N, St Romain T, Ludy MJ, et al. (2021) Concerns of college students during the Covid-19 pandemic: Thematic perspectives from the United States, Asia, and Europe. Journal of Applied Learning & Teaching 4(1): 11-20.
  31. Kwan J (2022) Academic burnout, resilience level, and campus connectedness among undergraduate students during the Covid-19 pandemic: Evidence from Singapore. Journal of Applied Learning & Teaching 5(1): 52-63.
  32. Yang C, Chen A, Chen Y (2021) College students' stress and health in the Covid-19 pandemic: The role of academic workload, separation from school, and fears of contagion. PLoS ONE 16(2): 1-16.
  33. Shuchi M, Tabassum SC, Toufique MMK (2021) A year of online classes amid COVID-19 pandemic at a Bangladeshi university: Economics students’ experience and suggestions for improvements. Journal of Applied Learning & Teaching 4(2): 37-45.
  34. Cao W, Fang Z, Hou G, Han M, Xu X, et al. (2020) The psychological impact of the Covid-19 epidemic on college students in China. Psychiatry Research 287: 112934.
  35. Li W, Yu H, Miller DJ, Yang F, Rousen C, et al. (2020) Novelty seeking and mental health in Chinese university students before, during, and after the COVID-19 pandemic lockdown: A longitudinal study. Frontiers in Psychology 11: 1-15.
  36. Tang W, Hu T, Yang L, Xu J (2020) The role of alexithymia in the mental health problems of home-quarantined university students during the COVID-19 pandemic in China. Personality and Individual Differences 165: 110131.
  37. Wang C, Zhao H (2020) The Impact of COVID-19 on Anxiety in Chinese University Students. Frontiers in Psychology 11: 1168.
  38. Essadek A, Rabeyron T (2020) Mental health of French students during the Covid-19 pandemic. Journal of Affective Disorders 277: 392-393.
  39. Händel M, Stephan M, Gläser Zikuda M, Kopp B, Bedenlier S, et al. (2020) Digital readiness and its effects on higher education student socio-emotional experiences in the context of Covid-19 pandemic. Journal of Research on Technology in Education 54(2): 267-280.
  40. Kapasia N, Paul P, Roy A, Saha J, Zaveri A, et al. (2020) Impact of lockdown on learning status of undergraduate and postgraduate students during the Covid-19 pandemic in West Bengal, India. Child Youth Serv Rev 116: 105194.
  41. Mahapatra A, Sharma P (2020) Education in times of Covid-19 pandemic: Academic stress and its psychosocial impact on children and adolescents in India. International Journal of Social Psychiatry 57: 1107.
  42. Adnan M, Anwar K (2020) Online learning amid the Covid-19 pandemic: Students' perspectives. Journal of Pedagogical Sociology and Psychology 2(1): 45-51.
  43. Baloran E (2020) Knowledge, attitudes, anxiety, and coping strategies of students during COVID-19 pandemic. Journal of Loss and Trauma 25(8): 635-642.
  44. Khan I (2020) Electronic Learning Management System: Relevance, challenges and preparedness. Journal of Emerging Technologies and Innovation Research 7(5): 471-480.
  45. Odriozola González P, Planchuelo Gómez Á, Irurtia M, de Luis García R (2020) Psychological effects of the Covid-19 outbreak and lockdown among students and workers of a Spanish university. Psychiatry Research 290: 113108.
  46. Elmer T, Mepham K, Stadtfeld C (2020) Students under lockdown: Comparisons of students' social networks and mental health before and during the Covid-19 crisis in Switzerland. PLoS ONE 15(7): 1-22.
  47. Nenko Y, Kybalna N, Snisarenko Y (2020) The COVID-19 Distance Learning: Insight from Ukrainian students. Revista Brasileira de Educação do Campo 5: e8925.
  48. Burns D, Dagnall N, Holt M (2020) Assessing the impact of the Covid-19 pandemic on student wellbeing at universities in the United Kingdom: A conceptual analysis. Frontiers in Education 5: 1-10.
  49. Mulrooney H, Kelly AF (2020) COVID-19 and the move to online teaching: impact on perceptions of belonging in staff and students in a UK widening participation university. Journal of Applied Learning & Teaching 3(2).
  50. Savage M, James R, Magistro D, Donaldson J, Healey L, et al. (2020) Mental health and movement behaviour during the COVID-19 pandemic in U.K. university students: Prospective cohort study. Mental Health and Physical Activity 19: 100357.
  51. Bono G, Reil K, Hescox J (2020) Stress and wellbeing in college students during the COVID-19 pandemic: Can grit and gratitude help? International Journal of Wellbeing 10(3): 39-57.
  52. Browning M, Larson LR, Sharaievska I, Rigolon A, McAnirlin O, et al. (2021). Psychological impacts from Covid-19 among university students: Risk factors across seven states in the United States. PLoS ONE 16(1): 1-27.
  53. Calhoun K, Yale LA, Whipple ME, Allen S, Douglas E Wood, et al. (2020) The impact of COVID-19 on medical student surgical education: Implementing extreme pandemic response measures in a widely distributed surgical clerkship experience. American Journal of Surgery 220: 44-47.
  54. Copeland W, McGinnis E, Bai Y, Adams Z, Nardone H, et al. (2021) Impact of COVID-19 Pandemic on College Student Mental Health and Wellness. Journal of the American Academy of Child & Adolescent Psychiatry 60(1): 134-141.
  55. Duong V, Pham P, Yang T, Wang Y, Luo J, et al. (2020) The Ivory Tower lost: How college students respond differently than the general public to the Covid-19 pandemic. IEEE.
  56. Kecojevic A, Basch CH, Sullivan M, Davi NK (2020) The impact of the Covid-19 epidemic on mental health of undergraduate students in New Jersey, cross-sectional study. PLoS ONE 15(9): e0239696.
  57. Mshigeni S, Sarwar E, Kimunai E (2021) College students’ educational experiences amid COVID-19 pandemic. Journal of Applied Learning & Teaching 4(1): 38-48.
  58. Tran T, Hoang AD, Nguyen YC, Nguyen LC, Ta N, et al. (2020). Toward sustainable learning during school suspension: Socioeconomic, occupational aspirations, and learning behaviour of Vietnamese students during Covid-19. Sustainability 12(10): 1-20.
  59. Carless D, Boud D (2018) The development of student feedback literacy: enabling uptake of feedback. Assessment & Evaluation in Higher Education 43(8): 1315-1325.
  60. Evans C (2013) Making sense of assessment feedback in higher education. Review of Educational Research 83(1): 70-120.
  61. Hattie J, Timperley H (2007) The power of feedback. Review of Educational Research 77(1): 81-112.
  62. Higgins G, Spencer RL, Kane R (2010) A systematic review of the experiences and perceptions of the newly qualified nurse in the United Kingdom. Nurse Educ Today 30(6): 499-508.
  63. Bell A, Brooks C (2018) What makes students satisfied? A discussion and analysis of the UK’s national student survey. Journal of Further and Higher Education 42(8): 1118-1142.
  64. Carless D, Salter D, Yang M, Lam J (2011) Developing sustainable feedback practices. Studies in Higher Education 36(4): 395-407.
  65. Carroll D (2014) Graduate Course Experience 2013: A Report on the Course Experience Perceptions of Recent Graduates. Melbourne: Graduate Careers Australia.
  66. Robinson S, Pope D, Holyoak L (2013) Can we meet their expectations? Experiences and perceptions of feedback in first-year undergraduate students. Assessment & Evaluation in Higher Education 38(3): 260-272.
  67. Hounsell D (2007) Towards more sustainable feedback to students. In: D Boud, N Falchikov (Eds.)., Rethinking Assessment in Higher Education, Routledge, pp. 101-113.
  68. Boud D, Molloy E (2013) Rethinking models of feedback for learning: the challenge of design. Assessment & Evaluation in Higher Education 38(6): 698-712.
  69. Ng P (2021) Timely change and timeless constants: COVID-19 and educational change in Singapore. Educational Research for Policy and Practice 20: 19-27.
  70. Yeo S, Lai CKY, Tan J, Gooley JJ (2021) A targeted e-learning approach for keeping universities open during the COVID-19 pandemic while reducing student physical interactions. PLoS ONE 16(4): e0249839.
  71. Black P, Wiliam D (2010) Inside the Black Box: Raising Standards through Classroom Assessment. Phi Delta Kappan Magazine 92(1): 81-90.
  72. Wiliam D, Thompson M (2008) Integrating Assessment with Instruction: What Will It Take to Make It Work? In: Dwyer CA (Edt.)., The Future of Assessment: Shaping Teaching and Learning, Lawrence Erlbaum Associates, Mahwah, p. 53-82.
  73. Dixson DD, Worrell FC (2016) Formative and Summative Assessment in the Classroom. Theory Into Practice 55(2): 153-159.
  74. Black P, Wiliam D (1998) Assessment and Classroom Learning. Assessment in Education 5(1): 7-74.
  75. Black P, Wiliam D (2009) Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability 21(1): 5-31.
  76. Black P, Wiliam D (2018) Classroom assessment and pedagogy. Assessment in Education: Principles, Policy & Practice 25(6): 551-575.
  77. Boston C (2002) The Concept of Formative Assessment. Practical Assessment, Research, and Evaluation 8(1): 9.
  78. Bransford J, Brown AL, Cocking RR (2000) How People Learn: Brain, Mind, Experience, and School. National Academy Press.
  79. Cowan E (2009) Implementing formative assessment: student teachers' experiences on placements. Teacher Development 13(1): 71-84.
  80. Yorke M (2003) Formative assessment in higher education: Moves towards theory and the enhancement of pedagogic practice. Higher Education 45(4): 477-501.
  81. Ion G, Marti AS, Morell IA (2018) Giving or receiving feedback: which is more beneficial to students' learning? Assessment & Evaluation in Higher Education 44(1): 124-138.
  82. Stančić M (2021) Peer assessment as a learning and self-assessment tool: a look inside the black box. Assessment & Evaluation in Higher Education 46(6): 852-864.
  83. Carless D (2007) Learning-oriented assessment: Conceptual bases and practical implications. Innovations in Education and Teaching International 44(1): 57-66.
  84. Lynam S, Cachia M (2017) Students’ perceptions of the role of assessments at higher education. Assessment & Evaluation in Higher Education 43(2): 223-234.
  85. Raupach T, Brown J, Anders S, Hasenfuss G, Harendza S, et al (2013) Summative assessments are more powerful drivers of student learning than resource intensive teaching formats. BMC Medicine 11(61): 1-10.
  86. Biggs J, Tang C (2011) Teaching for Quality Learning at University. Open University Press.
  87. Alkhaddar R, Wooder T, Sertyesilisik B, Tunstall A (2012) Deep learning approach's effectiveness on sustainability improvement in the UK construction industry. Management of Environmental Quality An International Journal 23(2): 126-139.
  88. Fry H, Ketteridge S, Marshall S (2009) A handbook for teaching and learning (3rd)., Routledge.
  89. Liu NF, Carless D (2006) Peer feedback: The learning element of peer assessment. Teaching in Higher Education 11(3): 279-290.
  90. Libman Z (2010) Alternative assessment in higher education: An experience in descriptive statistics. Studies in Educational Evaluation 36(1-2): 62-68.
  91. Cavus N (2015) Distance learning and learning management systems. Procedia - Social and Behavioral Sciences 191: 872-877.
  92. Diprose M (2013) Learning and assessment credibility: The design of examination strategies in a changing learning environment. Knowledge Management & E-Learning: An International Journal 5(1): 104-116.
  93. Fukuda S, Lander BW, Pope CJ (2020) Formative assessment for learning how to learn: Exploring university student learning experiences. RELC Journal 53(1): 118-133.
  94. Wicking P (2020) Formative assessment of students from a Confucian heritage culture: Insights from Japan. Assessment & Evaluation in Higher Education 45(2): 180-192.
  95. Wu Q, Jessop T (2018) Formative assessment: missing in action in both research-intensive and teaching focused universities. Assessment & Evaluation in Higher Education 43(7): 1019-1031.
  96. Pachler N, Daly C, Mor Y, Mellar H (2010) Formative e-assessment: Practitioner cases. Computers & Education 54(3): 715-721.
  97. Kumar R, Sarukesi K, Uma GV (2013) A framework for formative knowledge. International Journal of Scientific Research 2(5): 242-244.
  98. Rolim C, Isaias P (2019) Examining the use of e‐assessment in higher education: teachers and students’ viewpoints. British Journal of Educational Technology 50(2): 1785-1800.
  99. Hoo HT, Tan K, Deneen C (2020) Negotiating self- and peer-feedback with the use of reflective journals: an analysis of undergraduates' engagement with feedback. Assessment & Evaluation in Higher Education 45(3): 431-446.
  100. Llamas Nistal M, Fernández Iglesias MJ, González Tato J, Mikic Fonte FA (2013) Blended E-Assessment: Migrating classical exams to the digital world. Comput Educ 62: 72-87.
  101. Ogange B, Agak JO, Okelo KO, Kiprotich P (2018) Student perceptions of the effectiveness of formative assessment in an online learning environment. Open Praxis 10(1): 29-39.
  102. Ridgway J, McCusker S, Pead D (2004) Literature Review of E-assessment. A NESTA Futurelab Research report - report 10.
  103. Spivey M, McMillan JJ (2014) Classroom versus online assessment. The Journal of Education for Business 89(8): 450-456.
  104. Akib E, Ghafar MNA (2015) Assessment for learning instrumentation in higher education. International Education Studies 8(4): 166-172.
  105. James M, Black P, Carmichael P, Drummond MJ, Fox A, et al. (2007) Improving Learning How to Learn in classrooms, schools and networks. Routledge.
  106. Furnham A, Cook R, Martin N, Batey M (2011) Mental health literacy among university students. Journal of Public Mental Health 10(4): 198-210.
  107. Marriott P, Lau A (2008) The Use of On-line Summative Assessment in an Undergraduate Financial Accounting Course. Journal of Accounting Education 26(2): 73-90.
  108. Trotter E (2006) Student perceptions of continuous summative assessment. Assessment & Evaluation in Higher Education 31(5): 505-521.
  109. Jordan S (2009) Assessment for learning: pushing the boundaries of computer-based assessment. Practitioner Research in Higher Education 3(1): 11-19.
  110. Brady AM (2005) Assessment of learning with multiple-choice questions. Nurse Education in Practice 5(4): 238-242.
  111. Draper S (2009) Catalytic assessment: understanding how MCQs and EVS can foster deep learning. British Journal of Educational Technology 40(2): 285-293.
  112. Almeida F, Monteiro J (2021) The challenges of assessing and evaluating the students at distance. Journal of Online Higher Education 5(1): 3-10.
  113. Meccawy Z, Meccawy M, Alsobhi A (2021) Assessment in 'survival mode': student and faculty perceptions of online assessment practices in HE during Covid-19 pandemic. International Journal for Educational Integrity 17(16): 1-24.
  114. Or C, Chapman E (2022) Development and acceptance of online assessment in higher education: Recommendations for further research. Journal of Applied Learning & Teaching 5(1): 10-26.
  115. Senel S, Senel HC (2021) Remote assessment in higher education during Covid-19 pandemic. International Journal of Assessment Tools in Education 8(2): 181-199.
  116. Cirit NC (2015) Assessing ELT pre-service teachers via Web 2.0 tools: Perceptions toward traditional, online and alternative assessment. Turkish Online Journal of Educational Technology-TOJET 14(3): 9-19.
  117. Lei S, Gupta RK (2010) College distance education courses: Evaluating benefits and costs from institutional, faculty and students’ perspectives. Education 130(4): 616-631.
  118. Koh J, Kan RYP (2020) Perceptions of learning management system quality, satisfaction, and usage: Differences among students of the arts. Australasian Journal of Educational Technology 36(3): 26-40.
  119. Nyachwaya J (2020) Teaching general chemistry (I) Online during COVID-19. Process, outcomes, and lessons learned: A reflection. Journal of Chemical Education 97(9): 2935-2939.
  120. Shrago J, Smith MK (2006) Online assessment in the K-12 classroom: A formative assessment model for improving student performance on standardized tests. In: SL Howell, M Hricko (Eds.)., Online assessment and measurement: Case studies from higher education, K-12 and corporate, Information Science Publishing, pp. 181-195.
  121. Murray M, Pérez J, Geist D, Hedrick A (2012) Student interaction with online course content: Build it and they might come. Journal of Information Technology Education: Research 11: 125-140.
  122. Crawford J, Butler Henderson K, Rudolph J, Malkawi B, Glowatz M, et al. (2020). COVID-19: 20 countries’ higher education intra-period digital pedagogy responses. Journal of Applied Learning & Teaching 3(1): 9-28.
  123. Pentassuglia M (2018) Inside the ‘body box’: exploring feedback in higher education. Assessment & Evaluation in Higher Education 43(5): 683-696.
  124. Pitt E, Bearman M, Esterhazy R (2020) The Conundrum of Low Achievement and Feedback for Learning. Assessment & Evaluation in Higher Education 45(2): 239-250.
  125. Van Heerden M (2020) It has a purpose beyond justifying a mark: Examining the alignment between the purpose and practice of feedback. Assessment & Evaluation in Higher Education 45(3): 359-371.
  126. Wang S, Zhang D (2020) Perceived teacher feedback and academic performance: the mediating effect of learning engagement and the moderating effect of assessment characteristics. Assessment & Evaluation in Higher Education 45(2): 973-987.
  127. Bloxham S, Hudson B, Price M (2015) External peer review of assessment: an effective approach to verifying standards? Higher Education Research & Development 34(6): 1069-1082.
  128. Nicol D, Macfarlane Dick D (2006) Formative assessment and self‐regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education 31(2): 199-218.
  129. Sadler D (2010) Beyond feedback: developing student capability in complex appraisal. Assessment and Evaluation in Higher Education 35(5): 535-550.
  130. Sadler D (2013) Opening up feedback: Teaching learners to see. In: S Merry, M Price, D Carless, M Taras (Eds.)., Reconceptualising Feedback in Higher Education: developing dialogue with students, Routledge, p. 54-63.
  131. Sellbjer S (2018) “Have you read my comments? It is not noticeable. Change!” An analysis of feedback given to students who have failed examinations. Assessment & Evaluation in Higher Education 43(2): 163-174.
  132. Mulliner E, Tucker M (2017) Feedback on feedback practice: perceptions of students and academics. Assessment & Evaluation in Higher Education 42(2): 266-288.
  133. Gibbs G, Simpson C (2004) Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education (1): 3-31.
  134. Dawson P, Henderson M, Mahoney P, Phillips M, Ryan T, et al. (2018) What makes for effective feedback: staff and student perspectives. Assessment & Evaluation in Higher Education 44(1): 25-36.
  135. Winstone N, Nash RA, Parker M, Rowntree J (2017) Supporting learners' agentic engagement with feedback: A systematic review and a taxonomy of recipience processes. Educational Psychologist 52(1): 17-37.
  136. Alqassab M, Strijbos JW, Ufer S (2017) Training peer-feedback skills on geometric construction tasks: role of domain knowledge and peer-feedback levels. European Journal of Psychology of Education 33: 11-30.
  137. Jonsson A (2012) Facilitating productive use of feedback in higher education. Active Learning in Higher Education 14(1): 63-76.
  138. Poulos A, Mahony MJ (2008) Effectiveness of feedback: the students’ perspective. Assessment & Evaluation in Higher Education 33(2): 143-154.
  139. Adcroft A (2011) The mythology of feedback. Higher Education Research & Development 30(4): 405-419.
  140. Blair A, McGinty S (2012) Feedback-dialogues: exploring the student perspective. Assessment & Evaluation in Higher Education 38(4): 466-476.
  141. Wei W, Xie Y (2018) University teachers’ reflections on the reasons behind their changing feedback practice. Assessment & Evaluation in Higher Education 43(6): 867-879.
  142. Filius R, de Kleijn RAM, Uijl SG, Prins FJ, van Rijen HVM, et al. (2018) Strengthening dialogic peer feedback aiming for deep learning in SPOCs. Computers & Education 125: 86-100.
  143. Geitz G, Brinke DJT, Kirschner PA (2015) Goal orientation, deep learning, and sustainable feedback in higher business education. Journal of Teaching in International Business 26(4): 273-292.
  144. Leenknecht M, Hompus P, van der Schaaf M (2019) Feedback seeking behaviour in higher education: the association with students’ goal orientation and deep learning approach. Assessment & Evaluation in Higher Education 44(7): 1069-1078.
  145. Weaver M (2006) Do students value feedback? Student perceptions of tutors’ written responses. Assessment & Evaluation in Higher Education 31(3): 379-394.
  146. Boud D (2009) How can practice reshape assessment? In: G Joughin (Edt.)., Assessment, Learning and Judgement in Higher Education, Springer, p. 29-44.
  147. Duncan N (2007) ‘Feed‐forward’: improving students' use of tutors' comments. Assessment & Evaluation in Higher Education 32(3): 271-283.
  148. Price M, Handley K, Millar J, O'Donovan B (2010) Feedback: all that effort, but what is the effect? Assessment & Evaluation in Higher Education 35(3): 277-289.
  149. Van Heerden M (2021) (How) do written comments feed-forward? A translation device for developing tutors’ feedback-giving literacy. Innovations in Education and Teaching International 58(5): 555-564.
  150. Boud D, Falchikov N (2007) Developing assessment for informing judgement. In: D Boud, Falchikov N (Eds.)., Rethinking Assessment for Higher Education: Learning for the Longer Term. Routledge, pp. 181-197.
  151. Dowden T, Pittaway S, Yost H, McCarthy R (2013) Students’ perceptions of written feedback in teacher education: ideally feedback is a continuing two-way communication that encourages progress. Assessment & Evaluation in Higher Education 38(3): 349-362.
  152. Shields S (2015) My work is bleeding: Exploring students’ emotional responses to first-year assignment feedback. Teaching in Higher Education 20(6): 614-624.
  153. Shaikh U, Asif Z (2022) Persistence and dropout in higher online education: Review and categorization of factors. Front Psychol 13: 1-14.
  154. Walker M (2015) The quality of written peer feedback on undergraduates’ draft answers to an assignment and the use made of the feedback. Assessment & Evaluation in Higher Education 40(2): 232-247.
  155. Jessop T, El Hakim Y, Gibbs G (2014) The whole is greater than the sum of its parts: a large-scale study of students’ learning in response to different programme assessment patterns. Assessment & Evaluation in Higher Education 39(1): 73-88.
  156. Wojtas O (1998) Feedback? No, just give us the answers. Times Higher Education.
  157. Shafi A, Hatley J, Middleton T, Millican R (2018) The role of assessment feedback in developing academic buoyancy. Assessment & Evaluation in Higher Education 43(3): 415-427.
  158. Higgins R, Hartley P, Skelton A (2002) The conscientious consumer: reconsidering the role of assessment feedback in student learning. Studies in Higher Education 27(1): 53-64.
  159. Lizzio A, Wilson K (2008) Feedback on assessment: students’ perceptions of quality and effectiveness. Assessment & Evaluation in Higher Education 33(3): 263-275.
  160. Ryan T, Henderson M (2018) Feeling feedback: Students’ emotional responses to educator feedback. Assessment & Evaluation in Higher Education 43(6): 880-892.
  161. Batten J, Jessop T, Birch P (2019) Doing what it says on the tin? A psychometric evaluation of the Assessment Experience Questionnaire. Assessment & Evaluation in Higher Education 44(2): 309-320.
  162. McCallum S, Milner MM (2021) The effectiveness of formative assessment: student views and staff reflections. Assessment & Evaluation in Higher Education 46(1): 1-16.
  163. Jessop T, Tomás C (2017) The implications of programme assessment patterns for student learning. Assessment & Evaluation in Higher Education 42(6): 990-999.
  164. Vattøy K, Gamlem SM, Rogne WM (2021) Examining students’ feedback engagement and assessment experiences: a mixed study. Studies in Higher Education 46(11): 2325-2337.
  165. Alvarez I, Espasa A, Guasch T (2012) The value of feedback in improving collaborative writing assignments in an online learning environment. Studies in Higher Education 37(4): 387-400.
  166. Brearley F, Cullen WR (2012) Providing students with formative audio feedback. Bioscience Education 20(1): 22-36.
  167. Chong S (2019) College students’ perception of e-feedback: a grounded theory perspective. Assessment & Evaluation in Higher Education 44(7): 1090-1105.
  168. Deeley S (2018) Using technology to facilitate effective assessment for learning and feedback in higher education. Assessment & Evaluation in Higher Education 43(3): 439-448.
  169. Ellegaard M, Damsgaard L, Bruun J, Johannsen BF (2018) Patterns in the form of formative feedback and student response. Assessment & Evaluation in Higher Education 43(5): 727-744.
  170. Henderson M, Ryan T, Phillips M (2019) The challenges of feedback in higher education. Assessment & Evaluation in Higher Education 44(8): 1237-1252.
  171. Ice P, Curtis R, Phillips P, Wells J (2007) Using asynchronous audio feedback to enhance teaching presence and students' sense of community. Journal of Asynchronous Learning Networks 11(2): 3-25.
  172. Morris C, Chikwa G (2016) Audio versus written feedback: exploring learners’ preference and the impact of feedback format on students’ academic performance. Active Learning in Higher Education 17(2): 125-137.
  173. Shang HF (2022) Exploring online peer feedback and automated corrective feedback on EFL writing performance. Interactive Learning Environments 30(1): 4-16.
  174. McCabe J, Doerflinger A, Fox R (2011) Student and faculty perceptions of E-feedback. Teaching of Psychology 38(3): 173-179.
  175. Topping K (1998) Peer assessment between students in colleges and universities. Review of Educational Research 68(3): 249-276.
  176. Liu J, Guo X, Guo R, Fram P, Ling Y, et al. (2019) Students’ learning outcomes and peer rating accuracy in compulsory and voluntary online peer assessment. Assessment & Evaluation in Higher Education 44(6): 835-847.
  177. Topping K (2010) Peers as a source of formative assessment. In: H Andrade, G Cizek (Eds.)., Handbook of formative assessment, Routledge, p. 61-74.
  178. Falchikov N, Goldfinch J (2000) Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks. Review of Educational Research 70(3): 287-322.
  179. Panadero E, Alqassab M (2019) An empirical review of anonymity effects in peer assessment, peer feedback, peer review, peer evaluation and peer grading. Assessment & Evaluation in Higher Education 44(8): 1253-1278.
  180. Panadero E, Andrade H, Brookhart SM (2018) Fusing self-regulated learning and formative assessment: A roadmap of where we are, how we got here, and where we are going. The Australian Educational Researcher 45(1): 13-31.
  181. Andrade H, Valtcheva A (2009) Promoting learning and achievement through self-assessment. Theory into Practice 48(1): 12-19.
  182. Boud D (1995) Enhancing Learning Through Self-Assessment. Kogan Page, pp. 1-256.
  183. Orsmond P, Merry S, Reiling K (2000) The use of student-derived marking criteria in peer and self-assessment. Assessment & Evaluation in Higher Education 25(1): 23-38.
  184. Panadero E, Alonso Tapia J, Reche E (2013) Rubrics vs. self-assessment scripts effect on self-regulation, performance and self-efficacy in pre-service teachers. Studies In Educational Evaluation 39(3): 125-132.
  185. Topping K (2013) Peers as a source of formative and summative assessment. In: J McMillan (Edt.)., SAGE Handbook of Research on Classroom Assessment, SAGE Publications, pp. 395-412.
  186. Barak M, Dori YJ (2009) Enhancing higher-order thinking skills among in-service science teachers via embedded assessment. Journal of Science Teacher Education 20(5): 459-474.
  187. Bolzer M, Strijbos JW, Fischer F (2015) Inferring mindful cognitive-processing of peer feedback via eye-tracking: role of feedback-characteristics, fixation-durations and transitions. Journal of Computer Assisted Learning 31(5): 422-434.
  188. Nicol D, Thomson A, Breslin C (2014) Rethinking feedback practices in higher education: a peer review perspective. Assessment & Evaluation in Higher Education 39(1): 102-122.
  189. Reinholz D (2016) The assessment cycle: A model for learning through peer assessment. Assessment & Evaluation in Higher Education 41(2): 301-315.
  190. Thomas G, Martin D, Pleasants K (2011) Using self-and peer-assessment to enhance students’ future learning in higher education. Journal of University Teaching and Learning Practice 8(1): 52-69.
  191. Bangert Drowns R, Kulik JA, Kulik CLC (1991) Effects of frequent classroom testing. The Journal of Educational Research 85(2): 89-99.
  192. Man D, Xu Y, O'Toole JM (2018) Understanding autonomous peer feedback practices among postgraduate students: a case study in a Chinese university. Assessment & Evaluation in Higher Education 43(4): 527-536.
  193. Van Lehn K, Chi MTH, Baggett W, Murray RC (1995) Progress report: Towards a theory of learning during tutoring. Learning Research and Development Center.
  194. Bong J, Park MS (2020) Peer assessment of contributions and learning processes in group projects: an analysis of information technology undergraduate students’ performance. Assessment & Evaluation in Higher Education 45(8): 1155-1168.
  195. Cassidy S (2006) Developing employability skills: Peer assessment in higher education. Education + Training 48(7): 508-517.
  196. Johnston L, Miles L (2004) Assessing contributions to group assignments. Assessment & Evaluation in Higher Education 29(6): 751-768.
  197. Kirschner P (2002) Can we support CSCL? Educational, social and technological affordances for learning. In: P Kirschner (Edt.)., Three worlds of CSCL: can we support CSCL, Open University of the Netherlands, p. 7-47.
  198. McMahon T (2010) Peer feedback in an undergraduate programme: using action research to overcome students' reluctance to criticise. Educational Action Research 18(2): 273-287.
  199. Sluijsmans D, Brand Gruwel S, Van Merriënboer J (2002) Peer assessment training in teacher education: Effects on performance and perceptions. Assessment & Evaluation in Higher Education 27(5): 443-454.
  200. Boud D, Cohen R, Sampson J (1999) Peer learning and assessment. Assessment & Evaluation in Higher Education 24(4): 413-426.
  201. Carnell B (2016) Aiming for autonomy: formative peer assessment in a final-year undergraduate course. Assessment & Evaluation in Higher Education 41(8): 1269-1283.
  202. Carless D (2013) Sustainable feedback and the development of student self-evaluation capacities. In: S Merry, M Price, D Carless, M Taras (Eds.)., Reconceptualising feedback in higher education: developing dialogue with students. Routledge, pp. 117-126.
  203. Tai J, Ajjawi R, Boud D, Dawson P, Panadero E, et al. (2018) Developing evaluative judgement: enabling students to make decisions about the quality of work. Higher Education 76: 467-481.
  204. Tighe Mooney S, Bracken M, Dignam B (2016) Peer assessment as a teaching and learning process: The observations and reflections of three facilitators on a first-year undergraduate critical skills module. All Ireland Journal of Teaching and Learning in Higher Education 8(2): 2831-2847.
  205. Wu C, Chanda E, Willison J (2014) Implementation and outcomes of online self and peer assessment on group-based honours research projects. Assessment & Evaluation in Higher Education 39(1): 21-37.
  206. Yucel R, Bird FL, Young J, Blanksby T (2014) The road to self-assessment: exemplar marking before peer review develops first-year students’ capacity to judge the quality of a scientific report. Assessment & Evaluation in Higher Education 39(8): 971-986.
  207. Boud D, Soler R (2016) Sustainable assessment revisited. Assessment & Evaluation in Higher Education 41(3): 400-413.
  208. Kearney S (2013) Improving engagement: the use of ‘Authentic self-and peer-assessment for learning’ to enhance the student learning experience. Assessment & Evaluation in Higher Education 38(7): 875-891.
  209. Weaver D, Esposto A (2012) Peer assessment as a method of improving student engagement. Assessment & Evaluation in Higher Education 37(7): 805-816.
  210. Adachi C, Tai JHM, Dawson P (2018) Academics’ perceptions of the benefits and challenges of self and peer assessment in higher education. Assessment & Evaluation in Higher Education 43(2): 294-306.
  211. Boud D, Lawson R, Thompson DG (2015) The calibration of student judgement through self-assessment: Disruptive effects of assessment patterns. Higher Education Research & Development 34(1): 45-59.
  212. Murdoch J (2015) Using self- and peer assessment at honours level: Bridging the gap between law school and the workplace. The Law Teacher 49(1): 73-91.
  213. Winstone N, Boud D (2022) The need to disentangle assessment and feedback in higher education. Studies in Higher Education 47(3): 656-667.
  214. Ashenafi M (2017) Peer-assessment in higher education – twenty-first century practices, challenges and the way forward. Assessment & Evaluation in Higher Education 42(2): 226-251.
  215. Falchikov N (1986) Product comparisons and process benefits of collaborative peer group and self-assessments. Assessment & Evaluation in Higher Education 11(2): 146-166.
  216. Hanrahan S, Isaacs G (2001) Assessing self- and peer-assessment: The students' views. Higher Education Research & Development 20(1): 53-70.
  217. Nortcliffe A (2012) Can students assess themselves and their peers? a five-year study. Student Engagement and Experience Journal 1(2): 1-17.
  218. Patton C (2012) “Some kind of weird, evil experiment”: student perceptions of peer assessment. Assessment & Evaluation in Higher Education 37(6): 719-731.
  219. Wanner T, Palmer E (2018) Formative self-and peer assessment for improved student learning: the crucial factors of design, teacher participation and feedback. Assessment & Evaluation in Higher Education 43(7): 1032-1047.
  220. Topping K, Smith E, Swanson I, Elliot A (2000) Formative peer assessment of academic writing between postgraduate students. Assessment & Evaluation in Higher Education 25(2): 149-169.
  221. Zhou J, Zheng Y, Tai JHM (2020) Grudges and gratitude: the social-affective impacts of peer assessment. Assessment & Evaluation in Higher Education 45(3): 345-358.
  222. Gurbanov E (2016) The challenge of grading in self and peer-assessment (undergraduate students' and university teachers' perspectives). Journal of Education in Black Sea Region 1(2): 97-107.
  223. Kaufman J, Schunn CD (2011) Students’ perceptions about peer assessment for writing: their origin and impact on revision work. Instructional Science 39: 387-406.
  224. Lladó A, Soley L, Sansbelló R, Pujolras G, Planella J, et al. (2014) Student perceptions of peer assessment: an interdisciplinary study. Assessment & Evaluation in Higher Education 39(5): 592-610.
  225. Orsmond P, Merry S, Reiling K (1996) The importance of marking criteria in the use of peer assessment. Assessment & Evaluation in Higher Education 21(3): 239-250.
  226. Peterson E, Irving SE (2008) Secondary school students' conceptions of assessment and feedback. Learning and Instruction 18(3): 238-250.
  227. To J, Panadero E (2019) Peer assessment effects on the self-assessment process of first-year undergraduates. Assessment & Evaluation in Higher Education 44(6): 920-932.
  228. Wanner T, Palmer E (2015) Personalising learning: Exploring student and teacher perceptions about flexible learning and assessment in a flipped university course. Computers & Education 88: 354-369.
  229. Cheng W, Warren M (2005) Peer assessment of language proficiency. Language Testing 22(1): 93-121.
  230. Walvoord M, Hoefnagels MH, Gaffin DD, Chumchal MM, Long DA, et al. (2008) An analysis of calibrated peer review (CPR) in a science lecture classroom. Journal of College Science Teaching 37(4): 66-73.
  231. Wen M, Tsai CC (2006) University students' perceptions of and attitudes toward (online) peer assessment. The International Journal of Higher Education and Educational Planning 51(1): 27-44.
  232. Brown S, Knight P (1994) Assessing Learners in Higher Education. Routledge.
  233. Panadero E, Alonso Tapia J (2013) Self-assessment: Theoretical and practical connotations. When it happens, how is it acquired and what do we do to develop it in our students? Electronic Journal of Research in Educational Psychology 11(2): 551-576.
  234. Chen C (2010) The implementation and evaluation of a mobile self- and peer-assessment system. Computers & Education 55(1): 229-236.
  235. Sluijsmans D, Moerkerke G, Van Merriënboer J, Dochy F (2001) Peer assessment in problem-based learning. Studies in Educational Evaluation 27(2): 153-173.
  236. Kulkarni C, Bernstein MS, Klemmer S (2015) PeerStudio: Rapid peer feedback emphasizes revision and improves performance. Proceedings of the Second ACM Conference on Learning @ Scale, p. 75-84.
  237. Wang J, Gao R, Gao X, Liu J (2020) Factors associated with students’ attitude change in online peer assessment – a mixed methods study in a graduate-level course. Assessment & Evaluation in Higher Education 45(5): 714-727.
  238. Liu E, Lee C (2013) Using peer feedback to Improve learning via online peer assessment. Turkish Online Journal of Educational Technology 12(1): 187-199.
  239. Liu X, Li L, Zhang Z (2018) Small group discussion as a key component in online assessment training for enhanced student learning in web-based peer assessment. Assessment & Evaluation in Higher Education 43(2): 207-222.
  240. Seifert T, Feliks O (2019) Online self-assessment and peer-assessment as a tool to enhance student-teachers’ assessment skills. Assessment & Evaluation in Higher Education 44(2): 169-185.
  241. Zong Z, Schunn C, Wang Y (2022) What makes students contribute more peer feedback? The role of within-course experience with peer feedback. Assessment & Evaluation in Higher Education 47(6): 972-983.
  242. Yang YF, Tsai CC (2010) Conceptions of and approaches to learning through online peer assessment. Learning & Instruction 20(1): 72-83.
  243. Domínguez Figaredo D, Gil Jaurena I, Morentin Encina J (2022) The impact of rapid adoption of online assessment on students’ performance and perceptions: Evidence from a distance learning university. The Electronic Journal of e-Learning 20(3): 224-241.
  244. Halaweh M (2021) Are universities using the right assessment tools during the pandemic and crisis time? Higher Learning Research Communications 11: 1-9.
  245. Lee V, Lam PLC, Lo JTS, Lee JLF, Li JTS, et al. (2022) Rethinking online assessment from university students’ perspective in COVID-19 pandemic. Cogent Education 9(1): 1-13.
  246. Lin S, Liu EZF, Yuan SM (2001) Web-based peer assessment: Feedback for students with various thinking styles. Journal of Computer Assisted Learning 17(4): 420-432.
  247. Tsai CC, Liang JC (2009) The development of science activities via online peer assessment: the role of scientific epistemological views. Instructional Science 37(3): 293-310.
  248. Tsai CC, Liu EZF, Lin SSJ, Yuan SM (2001) A network peer assessment system based on a Vee heuristic. Innovations in Education and Training International 38(3): 220-230.
  249. Panadero E, Broadbent J, Boud D, Lodge JM (2019) Using formative assessment to influence self- and co-regulated learning: the role of evaluative judgement. European Journal of Psychology of Education 34(1): 535-557.
  250. Bouchoucha S, Wozniak H (2010) Is peer assessment of asynchronous group discussions fostering skills relevant to our future graduates? Sydney Ascilite, pp. 113-118.
  251. Dominguez C, Gonçalo Cruz AM, Pedrosa D, Grams G (2012) Online peer assessment: an exploratory case study in a higher education civil engineering course. IEEE.
  252. Rosa S, Coutinho CP, Flores MA (2016) Online peer assessment: Method and digital technologies. Procedia - Social and Behavioral Sciences 228: 418-423.
  253. Hou HT, Chang KE, Sung YT (2007) An analysis of peer assessment online discussions within a course that uses project-based learning. Interactive Learning Environments 15(3): 237-251.
  254. Demir M (2018) Using online peer assessment in an instructional technology and material design course through social media. Higher Education 75: 399-414.
  255. Doiron J (2003) The value of online student peer review, evaluation and feedback in higher education. Centre for Development of Teaching and Learning 6(9): 1-2.
  256. Bolliger D, Halupa C (2012) Student perceptions of satisfaction and anxiety in an online doctoral program. Distance Education 33(1): 81-98.
  257. Brindley C, Scoffield S (1998) Peer assessment in undergraduate programmes. Teaching in Higher Education 3(1): 79-90.
  258. Smith H, Cooper A, Lancaster L (2002) Improving the quality of undergraduate peer assessment: A case for student and staff development. Innovations in Education and Teaching International 39(1): 71-81.
  259. Collimore L, Paré DE, Joordens S (2015) SWDYT: So, What Do You Think? Canadian students’ attitudes about PeerScholar, an Online Peer-Assessment Tool. Learning Environments Research 18(1): 33-45.
  260. Liu C, Tsai CM (2005) Peer assessment through web-based knowledge acquisition: tools to support conceptual awareness. Innovations in Education and Teaching International 42(1): 43-59.
  261. Mercader C, Ion G, Díaz Vicario A (2020) Factors influencing students' peer feedback uptake: instructional design matters. Assessment & Evaluation in Higher Education 45(8): 1169-1180.
  262. Paré D, Joordens S (2008) Peering into large lectures: examining peer and expert mark agreement using peerScholar, an online peer assessment tool. Journal of Computer Assisted Learning 24(6): 526-540.
  263. Wen M, Tsai CC (2008) Online peer assessment in an in-service science and mathematics teacher education course. Teaching in Higher Education 13(1): 55-67.
  264. Zheng L, Zhang X, Cui P (2020) The role of technology-facilitated peer assessment and supporting strategies: a meta-analysis. Assessment & Evaluation in Higher Education 45(3): 372-386.
  265. Wen M, Tsai CC (2008) Online peer assessment in an in-service science and mathematics teacher education course. Teaching in Higher Education 13(1): 55-67.
  266. Simpson R, Sturges J, Woods A, Altman Y (2005) Gender, age, and the MBA: An analysis of extrinsic and intrinsic career benefits. Journal of Management Education 29(2): 218-247.
  267. Edgington R, Bruce G (2003) 2003 mba.com Registrant Survey Executive Summary. Graduate Management Admission Council.
  268. Powell M (2010) Professional MBA Student Expectations Survey. Chicago, Illinois: 17th Annual PMBA Conference at DePaul University.
  269. Williams A, Mujtaba BG (2008) Comparative Outcomes Assessment of Students in the United States, Jamaica and the Bahamas in the Economic Thinking course in the MBA Program. International Journal of Education Research 3(3): 78-90.
  270. Mark J, Edgington R (2006) Motivations and barriers for women in the pursuit of an MBA degree. Graduate Management Admission Council.
  271. Thompson E, Gui Q (2000) Hong Kong executive business students' motivations for pursuing an MBA. Journal of Education for Business 75(4): 236-240.
  272. Baruch Y, Leeming A (2001) The added value of MBA studies. Personnel Review 30(5): 589-608.
  273. Dailey L, Anderson M, Ingenito C, Duffy D, Krimm P, et al. (2006) Understanding MBA consumer needs and the development of marketing strategy. Journal of Marketing for Higher Education 1(16): 143-158.
  274. Heslop L, Nadeau J (2010) Branding MBA programs: The use of target market desired outcomes for effective brand positioning. Journal of Marketing for Higher Education 20(1): 85-117.
  275. Lewis J (1992) Student expectations on the Open Business School's MBA. Target Management Development Review 5(2): 16-23.
  276. Mihail D, Elefterie KA (2006) Perceived effects of an MBA degree on employability and career advancement: The case of Greece. Career Development International 11(4): 352-361.
  277. Zhao J, Truell AD, Alexander MW, Hill IB (2006) Less success than meets the eye? The impact of master of business administration education on graduates' careers. Journal of Education for Business 81(5): 261-268.
  278. Nunnally J, Bernstein IH (1994) Psychometric theory (3rd)., McGraw-Hill.
  279. Taber K (2018) The use of Cronbach’s alpha when developing and reporting research instruments in science education. Research in Science Education 48: 1273-1296.
  280. Byrne BM (2010) Structural Equation Modeling with Amos: Basic Concepts, Applications, and Programming (2nd)., New York: Taylor and Francis Group.
  281. Akoglu H (2018) User’s Guide to Correlation Coefficients. Turkish Journal of Emergency Medicine 18(3): 91-93.
  282. Tabachnick BG, Fidell LS (2001) Using Multivariate Statistics (4th)., Allyn and Bacon, Boston.
  283. Beattie V, Collins B, McInnes B (1997) Deep and surface learning: a simple or simplistic dichotomy? Accounting Education 6(1): 1-12.
  284. Entwistle N, McCune V, Walker P (2000) Conceptions, styles and approaches within higher education: analytic abstractions and everyday experience. In: RJ Sternberg, LF Zhang (Eds.)., Perspectives on Thinking, Learning, and Cognitive Styles, Lawrence Erlbaum Associates, pp. 103-136.
  285. Biggs J (1993) What do inventories of students' learning processes really measure? A theoretical review and clarification. British Journal of Educational Psychology 63(1): 3-19.
  286. Diseth A, Pallesen S, Hovland A, Larsen S (2006) Course experience, approaches to learning and academic performance. Education + Training 48(2): 156-169.
  287. Felder R, Brent R (2005) Understanding student differences. Journal of Engineering Education 94(1): 57-72.
  288. Gijbels D, Dochy F, Van den Bossche P, Segers M (2005) Effects of problem-based learning: A meta-analysis from the angle of assessment. Review of Educational Research 75(1): 27-61.
  289. Smith N, Miller RJ (2005) Learning approaches: examination type, discipline of study, and gender. Educational Psychology 25(1): 43-53.
  290. Spicer D (2004) The impact of approaches to learning and cognition on academic performance in business and management. Education + Training 46(4): 194-205.
  291. Tiwari A, Chan S, Wong E, Wong D, Chui C, et al. (2006) The effect of problem-based learning on students' approaches to learning in the context of clinical nursing education. Nurse Education Today 26(5): 430-438.
  292. Teowkul K, Seributra NJ, Sangkaworn C, Denvilai S, Mujtaba B, et al. (2009) Motivational factors of graduate Thai students pursuing master's and doctoral degrees in business. Ramkhamhaeng University International Journal 3(1): 25-56.
  293. Ajjawi R, Boud D (2017) Researching feedback dialogue: an interactional analysis approach. Assessment & Evaluation in Higher Education 42(2): 252-265.
  294. Gatfield T (1999) Examining student satisfaction with group projects and peer assessment. Assessment & Evaluation in Higher Education 24(4): 365-377.
  295. Roskams T (1999) Chinese EFL students' attitudes to peer feedback and peer assessment in an extended pairwork setting. RELC Journal 30(1): 79-106.
  296. Schunn C (2016) Writing to learn and learning to write through SWoRD. In: SA Crossley, DS McNamara (Eds.)., Adaptive educational technologies for literacy instruction, Routledge, pp. 244-259.
  297. Li H, Xiong Y, Hunter CV, Guo X, Tywoniw R, et al. (2020) Does peer assessment promote student learning? Assessment & Evaluation in Higher Education 45(2): 193-211.
  298. Sanchez C, Atkinson KM, Koenka AC, Moshontz H, Cooper H, et al (2017) Self-grading and peer-grading for formative and summative assessments in 3rd through 12th grade classrooms: A meta-analysis. Journal of Educational Psychology 109(8): 1049-1066.
  299. Sung YT, Chang KE, Chiou SK, Hou HT (2005) The design and application of a Web-based self- and peer-assessment system. Computers & Education 45(2): 187-202.
  300. Tsai CC, Lin SSJ, Yuan SM (2002) Developing science activities through a networked peer assessment system. Computers and Education 38(1-3): 241-252.
  301. Zhang B, Ohland MW (2009) How to assign individualized scores on a group project: An empirical evaluation. Applied Measurement in Education 22(3): 290-308.
  302. Bouzidi L, Jaillet A (2009) Can Online Peer Assessment Be Trusted? Educational Technology & Society 12(4): 257-268.
  303. Hafner J, Hafner P (2003) Quantitative analysis of the rubric as an assessment tool: an empirical study of student peer‐group rating. International Journal of Science Education 25(12): 1509-1528.
  304. Cheng KH, Hou HT, Wu SY (2014) Exploring students’ emotional responses and participation in an online peer assessment activity: A case study. Interactive Learning Environments 22(3): 271-287.
  305. Dixson M (2015) Measuring student engagement in the online course: The Online Student Engagement Scale (OSE). Online Learning 19(4): 1-15.
  306. Geng S, Law KMY, Niu B (2019) Investigating self-directed learning and technology readiness in blending learning environment. International Journal of Educational Technology in Higher Education 16(17): 1-22.
  307. Tekkol I, Demirel M (2018) An investigation of self-directed learning skills of undergraduate students. Frontiers in Psychology 9: 2324.
  308. Andrade H (2018) Feedback in the context of self-assessment. In: AA Lipnevich, JK Smith (Eds.)., The Cambridge handbook of instructional feedback. Cambridge University Press, pp. 376-408.
  309. Hast M, Healy C (2018) “It's like fifty-fifty”: Using the student voice towards enhancing undergraduates’ engagement with online feedback provision. Journal of Teaching and Learning with Technology 7(1): 139-151.
  310. Hast M (2017) Supporting student transition to higher education feedback: An evaluation of an online feedback training approach. Journal of Learning Development in Higher Education (12): 1-5.
  311. Hast M (2021) Higher education in times of Covid-19: Giving online feedback. Higher Education Studies 11(1): 1-7.
  312. Gale T, Parker S (2014) Navigating change: a typology of student transition in higher education. Studies in Higher Education 39(5): 734-753.
  313. Kift S, Nelson K, Clarke J (2010) Transition pedagogy: A third-generation approach to FYE – A case study of policy and practice for the higher education sector. International Journal of the First Year in Higher Education 1(1): 1-20.
  314. Slack H, Priestley M (2022) Online learning and assessment during the Covid-19 pandemic: exploring the impact on undergraduate student well-being. Assessment & Evaluation in Higher Education 48(3): 333-349.
  315. Hofstede G (1994) The business of international business is culture. International Business Review 3(1): 1-14.
  316. Hofstede G (2001) Culture's Consequences: Comparing Values, Behaviors, Institutions and Organizations Across Nations (2nd )., SAGE Publications.
  317. Hampden Turner C, Trompenaars F (2000) Building Cross-Cultural Competence: How to Create Wealth from Conflicting Values. Yale University Press.
  318. Boud D, Holmes H (1995) Self and peer marking in a large technical subject. In: D Boud (Edt.)., Enhancing learning through self-assessment Kogan Page Ltd, p. 63-78.
  319. Cheng W, Warren M (1997) Having second thoughts: student perceptions before and after a peer assessment exercise. Studies in Higher Education 22(2): 233-239.
  320. Hattie J, Clarke S (2019) Visible Learning: Feedback (1st)., Routledge.
  321. Patchan M, Schunn CD, Clark RJ (2018) Accountability in peer assessment: examining the effects of reviewing grades on peer ratings and peer feedback. Studies in Higher Education 43(12): 2263-2278.