As a software supplier and partner to over 100 universities, Lorensbergs is exploring how universities are responding to the challenges of measuring and understanding student satisfaction levels. Being able to enhance the student experience is key when developing effective student-facing software - it's a topic of high importance to Lorensbergs and our university customers. In this post, we explore how universities are remaining focused on student satisfaction and teaching quality measurement but identify some wide variations in how it’s being approached. We hope you enjoy reading our findings.
What’s really getting measured?
It’s no exaggeration to say that established student satisfaction ratings, like the yearly Wall Street Journal/Times Higher Education College rankings, have become compulsory reading for all university administrators and faculty. Yet these surveys are not without their critics.
Are they reliable barometers of teaching quality and academic merit, or are they really measuring a different set of variables altogether? A study of U.S. student satisfaction survey data by the University of Groningen provides some intriguing insights.
The study suggests that the data captures student enjoyment (influenced to some extent by how lenient a teacher is or how little effort is required to take the class) more often than it does learning value, yielding a worryingly misleading evaluation of the real academic merits of institutions in the Higher Education sector.
Another problem exists though, and that's growing student despondency toward the many attempts to measure their satisfaction. There's certainly a need to review the approach and develop one that's more sustainable.
Acknowledging the problem
Can we really obtain a clear understanding of the quality of student experience and the true value of their academic pursuits in one annual survey? Critics suggest that we can’t, and that a more agile and responsive approach to harvesting student feedback is sorely needed.
This isn’t to decry the inherent value of garnering comprehensive student opinion. But it is to question whether data obtained via survey machinery with such lengthy lead times each year is the best way of gauging it. Important information invariably gets lost with such approaches, no matter how many ‘pillars’ of student satisfaction are solicited.
Growing numbers of institutions in the Higher Education sector are becoming increasingly critical of the reductionistic approach to college rankings published in magazines like US News, which wield enormous influence over where students choose to attend.
What if, instead, we were to sample student experience on a more continuous basis, using more organic and easy-to-access methods of collecting student feedback? Some key initiatives from ‘across the pond’ in the UK suggest that significant improvements in accuracy can indeed be achieved.
Examples of good practice
An initiative from the University of Edinburgh points to a means of obtaining student satisfaction feedback which is vastly more reliable and valid. After launching its “Student Experience Project” in 2012, the university appointed a student survey team which devised a nimble, easy solution: a central point of contact for student feedback which fed the results directly back to staff. This resulted in a series of interventions, with successful projects being developed into mainstream practices. Key to this work was the Student Communications Team, whose members kept the feedback loop alive by demonstrating that student opinion was being acted upon.
Communication and agility in responsiveness are important factors, and so is time. A major study of 600 current students and graduates of U.S. universities found that institutions that focused on enhancing student satisfaction and attachment on a long-term basis outperformed those that focused predominantly on prestige, significantly strengthening their brand reputation and recruitment attractiveness in the process. Published in the August 2016 edition of the Journal of Business Research, the study (The role of brand attachment in higher education) encompassed the satisfaction and attachment not only of current students but also of graduates who had progressed into their careers.
Teaching quality was a significant parameter in student ratings, but so too were campus facilities, social life, atmosphere and employment opportunities. These deepened students’ feelings of connectedness to their university, which was sustained amongst graduates whose universities continued to make efforts to solicit their views and keep them connected to the institution.
Certainly, universities that view students as ‘customers’ who experience the ‘brand’ of an institution are going to greater lengths in tracking their opinions. For example, one institution employs a customer experience consultancy and uses touchscreen kiosks, countertop tablets and online surveys to get same-day feedback on events and new initiatives. In this way, a near-constant stream of opinion is obtained and relayed to relevant departments.
Clearly, technology can help in this long-term endeavour: disposing of cumbersome paper questionnaires and emails, and opting instead for rapid-response methods.
Can there be too much measurement?
Anecdotal evidence from faculty members suggests that it’s not uncommon for students to be asked to respond to more than 20 questionnaires a year. Is it useful to repeatedly ask students for their opinion like this? Or is it more likely to result in survey fatigue?
But when students see rapid responses and changes arising directly from their feedback, the evidence from Edinburgh suggests that such fatigue evaporates. Their Student Communications Team emerges as exemplary, winning sustained student participation and interest throughout the process.
The way forward
Agile, easily accessible and rapidly responsive sampling techniques enabled by digital technologies offer much promise, as results discussed above demonstrate. With these techniques, valid feedback can continue to be pooled and acted upon while survey fatigue becomes a thing of the past. Most importantly, student satisfaction and accomplishment can both continue to rise.