Authors: Adewuni, D. Adebayo, Busari, Y. Taiwo
Journal Article | Publish Date: 01/December/2021
Abstract: This study analyzed the Standard Error of Measurement (SEM) of the 2019 May/June West African Senior School Certificate Examination (WASSCE) multiple-choice objective tests in Economics at three different confidence intervals (CI). A quantitative research design of the descriptive type was adopted for the study. The sample comprised three hundred and two (302) Senior Secondary School Three (SSS 3) students offering Economics, selected from twelve (12) schools across the three senatorial districts of Osun State, Nigeria, using a multi-stage sampling technique. The 2019 May/June WASSCE and Nov/Dec (GCE) WASSCE multiple-choice objective tests in Economics were adopted as instruments for the study. Data collected were analyzed using descriptive statistics. The findings revealed that students' performance on the 2019 May/June WASSCE Economics multiple-choice objective test yielded an SEM of 4 (±4), while performance on the 2019 GCE WASSCE Economics multiple-choice objective test yielded an SEM of 12 (±12), both at the 68% confidence interval. It was concluded that the 2019 May/June WASSCE Economics multiple-choice objective test is more precise and reliable than the 2019 GCE WASSCE Economics multiple-choice objective test at all the confidence intervals. The study recommended that educators consider the magnitude of SEMs for students across the achievement distribution. It was also recommended that test practitioners adopt Classical Test Theory (CTT) in test scoring and in estimating test precision.

Keywords: Standard Error of Measurement (SEM), Confidence Interval, Economics
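Under Classical Test Theory, the SEM is computed from a test's score standard deviation and reliability as SEM = SD·√(1 − r), and a confidence band around an observed score is obtained as the score ± z·SEM (z = 1.0 for ~68%, 1.96 for 95%, 2.58 for 99%). The sketch below illustrates this calculation; the standard deviation, reliability, and observed score shown are illustrative values chosen for the example, not figures reported by this study.

```python
import math

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """Classical Test Theory estimate: SEM = SD * sqrt(1 - r)."""
    return sd * math.sqrt(1.0 - reliability)

def score_band(observed: float, sem: float, z: float = 1.0):
    """Confidence band around an observed score: observed ± z * SEM.
    z = 1.0 gives ~68% CI, 1.96 gives 95%, 2.58 gives 99%."""
    return observed - z * sem, observed + z * sem

# Illustrative (hypothetical) values: SD = 10, reliability r = 0.84.
sem = standard_error_of_measurement(10.0, 0.84)   # sqrt(0.16) * 10 = 4.0
low, high = score_band(50.0, sem, z=1.0)          # 68% band: 46.0 to 54.0
print(f"SEM = {sem}, 68% band = ({low}, {high})")
```

A smaller SEM therefore means a narrower band around each observed score, which is the sense in which the study describes one test form as more precise than the other.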