Linguapeda 2019, Helsinki, Finland, 17-18 May 2019, p. 20
The transition from discrete-point tests to performance-based assessment has brought changes that all
stakeholders in language education need to take into account. Performance-based assessment of writing
(hereafter PBAW) is no exception in this respect. It is complex and multi-faceted (Hamp-Lyons, 1995,
2016a, 2016b), since a variety of components in the assessment process (such as raters, test takers,
and/or the rubric) are assumed to affect test scores in systematic ways (Eckes, 2011). This subjective nature
of PBAW necessitates the construction of reliable, valid, and fair measures of test takers' ability. One central
aspect of PBAW in this regard is the rubric, and the way raters interpret it represents the de facto test
construct (Knoch, 2011: 81), particularly in the diagnostic assessment of writing (Knoch, 2007, 2009, 2011).
The purpose of this mixed-methods study is to investigate whether a rubric for the diagnostic assessment of
writing, extended with two additional categories (coherence and cohesion) to assess discourse competence,
would yield reliable ratings in an intensive English program at a Turkish state university. A psychometric
modelling approach, the many-facet Rasch model (MFRM), was used for the analysis of the quantitative
data gathered through rater-mediated assessment of test takers' writing performance: eight raters
assessed ten A2-level student essays according to the seven categories in the rubric (a standard
formulation of the model is sketched below). A further aim of the study is to explore raters' perspectives
on the rubric through an open-ended questionnaire, which yields qualitative data. The results of the study
will be discussed in light of the extant literature on PBAW and the advantages of MFRM over methods
based on Classical Test Theory.
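
For illustration, a minimal sketch of an MFRM for this three-facet design (examinees, raters, rubric categories), in the standard rating-scale formulation presented by Eckes (2011), is:

\log \left( \frac{P_{nijk}}{P_{nij(k-1)}} \right) = \theta_n - \beta_i - \alpha_j - \tau_k

where P_{nijk} is the probability that examinee n receives a rating of k (rather than k-1) from rater j on rubric category i, \theta_n is the ability of examinee n, \beta_i the difficulty of category i, \alpha_j the severity of rater j, and \tau_k the threshold of rating-scale category k. The exact specification used in the study may differ (e.g., a partial-credit parameterisation with category-specific thresholds \tau_{ik}).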