THE BEST-ACHIEVING ONLINE STUDENTS ARE OVERREPRESENTED IN COURSE RATINGS

Ricardo Tejeiro, Alexander Whitelock-Wainwright, Alina Perez, Miguel Angel Urbina-Garcia

Abstract


Student ratings are the most widely used and influential measure of performance in higher education, and an integral component of formative and summative decision making. This may be particularly relevant in the relatively new online courses, where the pedagogical model is still developing. However, student ratings face strong controversy and several notable challenges, one of which stems from the fact that not all students provide ratings. Nonresponse bias, the lack of representativeness among the providers of ratings, has been measured and discussed in traditional courses, but to date no study has analysed nonresponse bias in the online evaluation of a fully online higher education course. Our study aims to close this gap. We analysed archival data for the students completing the intake module of four online psychology postgraduate programmes over a two-year period (June 2014 to May 2016; n = 457). Statistical analyses included correlation, the chi-square test, the Mantel-Haenszel test of trend, the Mann-Whitney U test and regression analysis; effect size was measured with odds ratios, Cramér's V, and r. We found that the likelihood of providing ratings was not associated with sex, age, educational background, or familiarity with the British higher education system; however, respondents scored significantly higher than nonrespondents on the key variable used to measure their learning experience: final mark. The implications of this finding are discussed in relation to Groves' (2006) causal models for nonresponse bias, as well as the validity and leniency hypotheses.
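The abstract names the Mann-Whitney U test among the analyses used to compare respondents and nonrespondents on final mark. As an illustration only, a minimal pure-Python sketch of the statistic follows; the function name and the invented marks are ours, not the authors' data or code.

```python
def mann_whitney_u(group_a, group_b):
    """Mann-Whitney U statistic for group_a via rank sums, averaging ranks for ties."""
    combined = sorted((value, index) for index, value in enumerate(group_a + group_b))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        # Extend j over the run of values tied with combined[i].
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        average_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[combined[k][1]] = average_rank
        i = j + 1
    n1 = len(group_a)
    rank_sum_a = sum(ranks[:n1])  # group_a occupies original indices 0..n1-1
    return rank_sum_a - n1 * (n1 + 1) / 2


# Illustrative comparison on invented final marks (not the study's data):
respondents = [72, 68, 75, 81, 64, 70]
nonrespondents = [58, 63, 66, 61, 69]
u = mann_whitney_u(respondents, nonrespondents)
print(f"U = {u}")  # a larger U indicates that respondents tend to rank higher
```

In practice one would obtain the U statistic and its p-value from a statistical package rather than computing it by hand; the sketch only makes the rank-sum logic behind the test explicit.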

 


Keywords


student ratings; learning analytics; teaching quality; nonresponse; online education


References


Adams, M. J. D., and Umbach, P. D. (2012). Nonresponse and Online Student Evaluations of Teaching: Understanding the Influence of Salience, Fatigue, and Academic Environments. Research in Higher Education, doi: 10.1007/s11162-011-9240-5

Ardalan, A., Ardalan, R., Coppage, S., and Crouch, W. (2007). A comparison of student feedback obtained through paper-based and web-based surveys of faculty teaching. British Journal of Educational Technology, doi: 10.1111/j.1467-8535.2007.00694.x

Ary, D., Jacobs, L., and Razavieh, A. (1996). Introduction to research in education (5th ed.). Ft. Worth, TX: Holt, Rinehart and Winston, Inc.

Avery, R. J., Bryant, W. K., Mathios, A., Kang, H., and Bell, D. (2006). Electronic course evaluations: Does an online delivery system influence student evaluations? Journal of Economic Education, doi: 10.3200/JECE.37.1.21-37

Berk, R. A. (2005). Survey of 12 strategies to measure teaching effectiveness. International Journal of Teaching and Learning in Higher Education, 17(1), 48-62.

Blackhart, G. C., Peruche, B. M., DeWall, C. N., and Joiner, T. E. (2006). Factors influencing teaching evaluations in higher education. Teaching of Psychology, 33(1), 37–39.

Campanelli, P., Sturgis, P., and Purdon, S. (1997). Can You Hear Me Knocking: An Investigation into the Impact of Interviewers on Survey Response Rates. London: S.C.P.R.

Crumbley, D. L., and Reichelt, K. J. (2009). Teaching effectiveness, impression management, and dysfunctional behavior: Student evaluation of teaching control data. Quality Assurance in Education, doi: 10.1108/09684880910992340

Darby, J. A. (2006). The effects of the elective or required status of courses on student evaluations. Journal of Vocational Education and Training, doi: 10.1080/13636820500507708

Denson, N., Loveday, T., and Dalton, H. (2010). Student evaluation of courses: what predicts satisfaction. Higher Education Research and Development, doi: 10.1080/07294360903394466

Dillman, D. A. (1991). The design and administration of mail surveys. Annual Review of Sociology, doi: 10.1146/annurev.so.17.080191.001301

Donovan, J., Mader, C., and Shinsky, J. (2007). Online vs. traditional course evaluation formats: student perceptions. Journal of Interactive Online Learning, 6(3), 158-180.

Ellis, R. A., Endo, C. H., and Armer, J. M. (1970). The use of potential nonrespondents for studying nonresponse bias. Pacific Sociological Review, doi: 10.2307/1388313

Emery, C. R., Kramer, T. R., and Tian, R. G. (2003). Return to academic standards: A critique of students’ evaluations of teaching effectiveness. Quality Assurance in Education: An International Perspective, doi: 10.1108/09684880310462074

Estelami, H. (2015). The effects of survey timing on student evaluation of teaching (SET) measures obtained using online surveys. Journal of Marketing Education, doi: 10.1177/0273475314552324

Gaillard, F. D., Mitchell, S. P., and Kavota, V. (2006). Students, faculty, and administrators’ perception of students’ evaluations of faculty in higher education business schools. Journal of College Teaching and Learning, doi: 10.19030/tlc.v3i8.1695

Gall, M. D., Borg, W. R., and Gall, J. P. (1996). Educational research: An introduction (6th ed.). White Plains, NY: Longman.

Gee, N. (2015). A study of student completion strategies in a Likert-type course evaluation survey. Journal of Further and Higher Education. Advance online publication. doi: 10.1080/0309877X.2015.1100717

Gomez-Mejia, L. R., and Balkin, D. B. (1992). Determinants of Faculty Pay: An Agency Theory Perspective. Academy of Management Journal, 35(5), 921-955.

Gravestock, P., and Gregor-Greenleaf, E. (2008). Student course evaluations: Research, models and trends. Toronto: Higher Education Quality Council of Ontario.

Groves, R. M., and Couper, M. P. (1998). Nonresponse in Household Interview Surveys. New York: Wiley.

Groves, R. M. (2006). Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly, doi:10.1093/poq/nfl033

Groves, R. M., and Peytcheva, E. (2008). The impact of nonresponse rates on nonresponse bias: A metaanalysis. Public Opinion Quarterly, doi: 10.1093/poq/nfn011

Groves, R. M., Couper, M., Presser, S., Singer, E., Tourangeau, R., Piani Acosta, G., et al. (2006). Experiments in producing nonresponse bias. Public Opinion Quarterly, doi: 10.1093/poq/nfl036

Groves, R. M., Presser, S., and Dipko, S. (2004). The role of topic interest in survey participation decisions. Public Opinion Quarterly, doi: 10.1093/poq/nfh002

Guder, F., and Malliaris, M. (2013). Online course evaluations response rates. American Journal of Business Education, 6(3), doi: 10.19030/ajbe.v6i3.7813

Haladyna, T., and Amrein-Beardsley, A. (2009). Validation of a research-based student survey of instruction in a college of education. Educational Assessment, Evaluation and Accountability, doi:10.1007/s11092-008-9065-8

Heckert, T. M., Latier, A., Ringwald, A., and Drazen, C. (2006). Relations among student effort, perceived class difficulty appropriateness, and student evaluations of teaching: Is it possible to "buy" better evaluations through lenient grading? College Student Journal, 40(3), 588-596.

Heine, P., and Maddox, N. (2009). Student perceptions of the faculty course evaluation process: An exploratory study of gender and class differences. Research in Higher Education Journal, 3, 1–10.

Hochstim, J. R., and Athanasopoulos, D. A. (1970). Personal follow-up in a mail survey: Its contribution and its cost. Public Opinion Quarterly, doi: 10.1086/267774

Liegle, J., and McDonald, D. S. (2005). Lessons Learned From Online vs. Paper-based Computer Information Students’ Evaluation System. Information Systems Education Journal, 3(37). Retrieved from http://isedj.org/3/37/. ISSN: 1545-679X.

Lindner, J. R., Murphy, T. H., and Briers, G. H. (2001). Handling nonresponse in social science research. Journal of Agricultural Education, doi: 10.5032/jae.2001.04043

Marcus, B., and Schutz, A. (2005). Who are the people reluctant to participate in research? Personality correlates of four different types of nonresponse as inferred from self- and observer ratings. Journal of Personality, doi: 10.1111/j.1467-6494.2005.00335.x

Marsh, H. W. (2007). Students’ evaluations of university teaching: Dimensionality, reliability, validity, potential biases and usefulness. In R. P. Perry and J. C. Smart (Eds.), The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 319–383). Dordrecht: Springer.

Marsh, H. W., and Roche, L. A. (1999). Rely upon SET research. American Psychologist, doi: 10.1037/0003-066X.54.7.517

Mau, R. R., and Opengart, R. A. (2012). Comparing ratings: In-class (paper) versus out of class (online) student evaluations. Higher Education Studies, doi: 10.5539/hes.v2n3p55

McDaniel, C., Jr., and Gates, R. (2012). Marketing research (9th Ed.). Hoboken, NJ: John Wiley.

McInnis, E. D. (2006). Nonresponse Bias in Student Assessment Surveys: A Comparison of Respondents and Non-Respondents of the National Survey of Student Engagement at an Independent Comprehensive Catholic University (Doctoral dissertation, Marywood University). Retrieved from http://nsse.indiana.edu/pdf/research_papers/Nonresponse%20Bias%20in%20Student%20Assessment%20Surveys%20-%20Elizabeth%20McInnis.pdf

McPherson, M. A. (2006). Determinants of how students evaluate teachers. Journal of Economic Education, doi: 10.3200/JECE.37.1.3-20

Micklewright, J., Schnepf, S. V., and Skinner, C. (2012). Non-response biases in surveys of schoolchildren: the case of the English Programme for International Student Assessment (PISA) samples. Journal of the Royal Statistical Society: Series A (Statistics in Society), doi: 10.1111/j.1467-985X.2012.01036.x

Millea, M., and Grimes, P. W. (2002). Grade expectations and student evaluation of teaching. College Student Journal, 36(4), 582–591.

Miller, L. E., and Smith, K. L. (1983). Handling nonresponse issues. Journal of Extension, 21(5), 45-50.

Murray, H. G. (2005, June). Student Evaluation of Teaching: Has It Made a Difference? Paper presented at the Annual Meeting of the Society for Teaching and Learning in Higher Education, Charlottetown, Canada. Retrieved from https://www.stlhe.ca/wp-content/uploads/2011/07/Student-Evaluation-of-Teaching1.pdf

Nowell, C., Gale, L. R., and Handley, B. (2010). Assessing faculty performance using student evaluations of teaching in an uncontrolled setting. Assessment and Evaluation in Higher Education, doi: 10.1080/02602930902862875

Nowell, C., Gale, L. R., and Kerkvliet, J. (2014). Non-response bias in student evaluations of teaching. International Review of Economics Education, doi: 10.1016/j.iree.2014.05.002

Nulty, D.D. (2008). The adequacy of response rates to online and paper surveys: What can be done? Assessment and Evaluation in Higher Education, doi: 10.1080/02602930701293231

Olsen, D. (2008). Teaching patterns: A pattern language for improving the quality of instruction in higher education settings (Doctoral dissertation, Utah State University). Retrieved from http://digitalcommons.usu.edu/cgi/viewcontent.cgi?article=1050&context=etd

Porter, S. R., and Umbach, P. D. (2006). Student survey response rates across institutions: Why do they vary? Research in Higher Education, doi: 10.1007/s11162-005-8887-1

Porter, S. R., and Whitcomb, M. E. (2005). Non-response in student surveys: The role of demographics, engagement, and personality. Research in Higher Education, doi: 10.1007/s11162-004-1597-2

Pritchard, R. E., and Potter, G. C. (2011). Adverse changes in faculty behavior resulting from use of student evaluations of teaching: A case study. Journal of College Teaching and Learning, doi: 10.19030/tlc.v8i1.980

Reisenwitz, T. H. (2016). Student Evaluation of Teaching: An Investigation of Nonresponse Bias in an Online Context. Journal of Marketing Education, doi: 10.1177/0273475315596778

Sax, L. J., Gilmartin, S. K., and Bryant, A. N. (2003). Assessing response rates and nonresponse bias in web and paper surveys. Research in Higher Education, doi: 10.1023/A:1024232915870

Sax, L. J., Gilmartin, S. K., Lee, J. J., and Hagedorn, L. S. (2008). Using web surveys to reach community college students: An analysis of response rates and response bias. Community College Journal of Research and Practice, doi: 10.1080/10668920802000423

Sosdian, C. P., and Sharp, L. M. (1980). Nonresponse in mail surveys: Access failure or respondent resistance. Public Opinion Quarterly, doi: 10.1086/268606

Spencer, K. J., and Schmelkin, L. P. (2002). Student perspectives on teaching and its evaluation. Assessment and Evaluation in Higher Education, doi: 10.1080/0260293022000009285

Stehle, S., Spinath, B., and Kadmon, M. (2012). Measuring Teaching Effectiveness: Correspondence Between Students’ Evaluations of Teaching and Different Measures of Student Learning. Research in Higher Education, doi: 10.1007/s11162-012-9260-9

Steiner, S., Holley, L. C., Gerdes, K., and Campbell, H. H. (2006). Evaluating teaching: Listening to students while acknowledging bias. Journal of Social Work Education, doi: 10.5175/JSWE.2006.200404113

Stronge, J. H., Tucker, P. D., and Hindman, J. L. (2004). Handbook for Qualities of Effective Teachers. Alexandria, VA: Association for Supervision and Curriculum Development.

Theall, M., and Franklin, J. L. (2001). Looking for bias in all the wrong places: A search for truth or a witch hunt in student ratings of instruction? In M. Theall, P. C., Abrami, and L. A. Mets (Eds.), The student ratings debate: Are they valid? How can we best use them? (New Directions for Institutional Research, No. 109) (pp. 45–56). San Francisco, CA: Jossey-Bass.

Tuckman, B. W. (1999). Conducting education research (5th ed.). Fort Worth, TX: Harcourt Brace.




Copyright © 2015-2018. European Journal of Open Education and E-learning Studies (ISSN 2501-9120) is a registered trademark of Open Access Publishing Group. All rights reserved.
