In this edition of Higher Learning Research Communications (HLRC), our authors present research on quality in the context of higher education, on using e-portfolios to encourage responsible feedback, and on student gender bias in teaching evaluations.

Authors Laura Schindler, Sarah Puls-Elvidge, Heather Welzant, and Linda Crawford present the findings of their literature review on quality in higher education. The authors focus on the difficulties of defining quality, revealing four broad ways of conceptualizing it: purposeful, exceptional, transformative, and accountable. They also identify four distinct categories of quality indicators in higher education: administrative, student support, instructional, and student performance. One of the main difficulties in defining quality assurance is regional context: in some areas quality assurance and accreditation are distinct terms, while in other regions they are considered synonyms. After their review and analysis, the authors put forth recommendations for defining quality according to the existing state of quality assurance at an institution, depending on whether it has both a definition of quality and a set of quality indicators, only one of the two, or neither.

As technology becomes increasingly present in the classroom, new approaches are needed to assess students and provide them with responsible feedback. Educators Lucia Morales and Amparo Soler-Dominguez have found, through research and experience, that electronic portfolios are an effective way to provide students with responsible feedback and promote self-regulated learning, encouraging a student-centered approach.
Morales and Soler-Dominguez argue that, given the positive way in which e-portfolios seem to promote two-way communication between teacher and student, they encourage responsible feedback in the classroom and self-regulated learning in postgraduate finance students.

Finally, researchers Narissra Punyanunt-Carter and Stacy L. Carter present the findings of a preliminary study assessing whether teaching evaluations are biased by students' gender. Since most research on this topic was published during the 1980s and 1990s, Punyanunt-Carter and Carter based their research on a previous study published in 2000 in order to validate their findings. Analyzing their results, they found what appears to be a slight gender bias in teaching evaluations; however, they found no indication that its magnitude affects the evaluations significantly. This contrasts with current research that has found higher incidences of other types of gender bias in academia, such as biased hiring practices, fewer women in STEM, and citation gaps. Since the gender bias in many of those other cases seems to come from peers rather than from students, a generational gap may be at play. The authors recognize that more studies are needed in this area, especially with larger student samples and with qualitative research tools, given that most research in this area relies on quantitative survey questions that may fail to capture students' perspectives on the matter.
Higher Learning Research Communications, 5 (3).