
Analysing tests of reading and listening in relation to the Common European Framework of Reference: The Experience of the Dutch CEFR Construct Project.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Analysing tests of reading and listening in relation to the Common European Framework of Reference: The Experience of the Dutch CEFR Construct Project. / Alderson, J. Charles; Figueras, N.; Kuiper, H. et al.
In: Language Assessment Quarterly, Vol. 3, No. 1, 01.01.2006, p. 3-30.



Vancouver

Alderson JC, Figueras N, Kuiper H, Nold G. Analysing tests of reading and listening in relation to the Common European Framework of Reference: The Experience of the Dutch CEFR Construct Project. Language Assessment Quarterly. 2006 Jan 1;3(1):3-30. doi: 10.1207/s15434311laq0301_2

BibTeX

@article{09fffb1582784afd96da25a532992493,
title = "Analysing tests of reading and listening in relation to the Common European Framework of Reference: The Experience of the Dutch CEFR Construct Project.",
abstract = "The Common European Framework of Reference (CEFR) is intended as a reference document for language education including assessment. This article describes a project that investigated whether the CEFR can help test developers construct reading and listening tests based on CEFR levels. If the CEFR scales together with the detailed description of language use contained in the CEFR are not sufficient to guide test development at these various levels, then what is needed to develop such an instrument? The project methodology involved gathering expert judgments on the usability of the CEFR for test construction, identifying what might be missing from the CEFR, developing a frame for analysis of tests and specifications, and examining a range of existing test specifications and guidelines to item writers and sample test tasks for different languages at the 6 levels of the CEFR. Outcomes included a critical review of the CEFR, a set of compilations of CEFR scales and of test specifications at the different CEFR levels, and a series of frameworks or classification systems, which led to a Web-mounted instrument known as the Dutch CEFR Grid. Interanalyst agreement in using the Grid for analyzing test tasks was quite promising, but the Grids need to be improved by training and discussion before decisions on test task levels are made. The article concludes, however, that identifying separate CEFR levels is at least as much an empirical matter as it is a question of test content, either determined by test specifications or identified by any content classification system or grid.",
author = "Alderson, {J. Charles} and N. Figueras and H. Kuiper and G. Nold",
note = "Alderson was the leader of this group and main author of the paper. RAE_import_type : Journal article RAE_uoa_type : Linguistics",
year = "2006",
month = jan,
day = "1",
doi = "10.1207/s15434311laq0301_2",
language = "English",
volume = "3",
pages = "3--30",
journal = "Language Assessment Quarterly",
issn = "1543-4303",
publisher = "Routledge",
number = "1",
}

RIS

TY - JOUR

T1 - Analysing tests of reading and listening in relation to the Common European Framework of Reference: The Experience of the Dutch CEFR Construct Project.

AU - Alderson, J. Charles

AU - Figueras, N.

AU - Kuiper, H.

AU - Nold, G.

N1 - Alderson was the leader of this group and main author of the paper. RAE_import_type : Journal article RAE_uoa_type : Linguistics

PY - 2006/1/1

Y1 - 2006/1/1

N2 - The Common European Framework of Reference (CEFR) is intended as a reference document for language education including assessment. This article describes a project that investigated whether the CEFR can help test developers construct reading and listening tests based on CEFR levels. If the CEFR scales together with the detailed description of language use contained in the CEFR are not sufficient to guide test development at these various levels, then what is needed to develop such an instrument? The project methodology involved gathering expert judgments on the usability of the CEFR for test construction, identifying what might be missing from the CEFR, developing a frame for analysis of tests and specifications, and examining a range of existing test specifications and guidelines to item writers and sample test tasks for different languages at the 6 levels of the CEFR. Outcomes included a critical review of the CEFR, a set of compilations of CEFR scales and of test specifications at the different CEFR levels, and a series of frameworks or classification systems, which led to a Web-mounted instrument known as the Dutch CEFR Grid. Interanalyst agreement in using the Grid for analyzing test tasks was quite promising, but the Grids need to be improved by training and discussion before decisions on test task levels are made. The article concludes, however, that identifying separate CEFR levels is at least as much an empirical matter as it is a question of test content, either determined by test specifications or identified by any content classification system or grid.

AB - The Common European Framework of Reference (CEFR) is intended as a reference document for language education including assessment. This article describes a project that investigated whether the CEFR can help test developers construct reading and listening tests based on CEFR levels. If the CEFR scales together with the detailed description of language use contained in the CEFR are not sufficient to guide test development at these various levels, then what is needed to develop such an instrument? The project methodology involved gathering expert judgments on the usability of the CEFR for test construction, identifying what might be missing from the CEFR, developing a frame for analysis of tests and specifications, and examining a range of existing test specifications and guidelines to item writers and sample test tasks for different languages at the 6 levels of the CEFR. Outcomes included a critical review of the CEFR, a set of compilations of CEFR scales and of test specifications at the different CEFR levels, and a series of frameworks or classification systems, which led to a Web-mounted instrument known as the Dutch CEFR Grid. Interanalyst agreement in using the Grid for analyzing test tasks was quite promising, but the Grids need to be improved by training and discussion before decisions on test task levels are made. The article concludes, however, that identifying separate CEFR levels is at least as much an empirical matter as it is a question of test content, either determined by test specifications or identified by any content classification system or grid.

U2 - 10.1207/s15434311laq0301_2

DO - 10.1207/s15434311laq0301_2

M3 - Journal article

VL - 3

SP - 3

EP - 30

JO - Language Assessment Quarterly

JF - Language Assessment Quarterly

SN - 1543-4303

IS - 1

ER -