These Boards, though independent, are closely interconnected at the provincial level, and every year thousands of students from private and public schools/colleges take the exams administered by these Boards. The compulsory English paper, which carries 18% of the total marks, has many essay-type questions.
The study shows a great deal of variability amongst markers, in their actual scores as well as in the criteria they use to assess English essays.
Even when they apply the same evaluation criteria, markers differ in the relative weight they give to them.
For the purposes of this study, markers were not provided with any rating scale, so as to replicate current practices.
Qualitative data came from semi-structured interviews with the selected markers and from short written commentaries in which the markers rationalized their scores on the essays.
Research has shown that, in contexts where essays are assessed by more than one rater, discrepancies often exist among the different raters because they do not apply scoring criteria consistently (Hamp-Lyons 1989; Lee 1998; Vann et al.). This study examines this issue in Pakistan, a context where composition writing is a standard feature of English assessment systems at the secondary and post-secondary levels, but where no research has been conducted into the criteria raters use in assessing written work.
The particular focus of this project is a large-scale high-stakes examination conducted by the Board of Intermediate and Secondary Education (BISE) in the Punjab province of Pakistan.
One factor that makes this context particularly interesting is that raters are not provided with formal criteria to guide their assessment, which makes variation in the criteria they use even more likely.
Language testers and researchers emphasize the importance of reliability in scoring, since scorer reliability is central to test reliability (Hughes 1989; Lumley 2002). As one researcher argues, "rating discrepancy between raters may cause a very serious impediment to assuring test validation, thereby incurring the mistrust of the language assessment process itself." Bachman and Alderson (2004), while openly acknowledging the difficulties raters face in assessing essays, consider writing to be one of the most difficult areas of language to assess.
The context for the study is the Higher Secondary School Certificate (HSSC) examination conducted by the BISE in the Punjab province of Pakistan.
Out of a total of nine BISEs in the Punjab, three are responsible for conducting examinations in South Punjab (SP).