Learning gain and student engagement: opportunity or threat?
British higher education, like British politics, has its buzzwords: key ideas that ebb and flow like the tides, as the ever-shifting policy landscape sets different priorities for higher education institutions. In the last ten years alone, discourse has shifted from a focus on research excellence and reputation to “enhancing the student experience”, and then to “student engagement and partnership” (which we hope will stay current for some time). The latest emerging priority, in light of the Teaching Excellence Framework (TEF) proposals, is the idea of learning gain, and specifically finding ways to measure it.
Learning gain is defined in the 2014 RAND report, commissioned by the Higher Education Funding Council for England (HEFCE), as “the ‘distance travelled’, or the difference between the skills, competencies, content knowledge and personal development demonstrated by students at two points in time” (RAND Europe, 2014). The report notes that the concept has been established in US higher education for many years but has received little attention in English HE, despite being commonplace in the British school sector.
Being able to demonstrate in a measurable way the impact higher education has on students’ knowledge, skills and competencies is, prima facie, a useful thing: for informing potential students; for directing institutional enhancement initiatives; and for public accountability purposes. After all, as institutions of learning, universities and colleges should be able to demonstrate that they do in fact facilitate students’ learning. But, as ever, the devil is in the detail, and the selection of metrics to measure learning gain is highly political and contested.
Those of us with an interest in student engagement will naturally take a keen interest in the metrics used to measure learning gain, but even more interesting is the prospect of being able to demonstrate the impact of student engagement on tangible outcomes for students. The US/Australian model of student engagement as individual “participation in educationally effective practices, both inside and outside the classroom” (Kuh, Kinzie, Buckley, Bridges, & Hayek, 2007) has shown clear impacts on students’ educational outcomes. This has led to improved practice in the area of individual engagement in learning, with recommendations such as Chickering and Gamson’s Seven Principles for Good Practice in Undergraduate Education (Chickering & Gamson, 1987) solidly based on evidence of improved outcomes. By contrast, the more UK-focused dimensions of student engagement (governance and the student voice) have little measurable evidence of impact on students’ development beyond qualitative accounts from those taking on these roles (Trowler, 2010).
The 13 learning gain pilot projects funded by HEFCE will test a range of proposed metrics, including grades, standardised tests, surveys and qualitative measures. Many of the projects approach learning gain from an explicit employability perspective, which strongly suggests that “learning gain” will encompass all aspects of the higher education experience, rather than just classroom-based learning and “teaching excellence”. The danger of this approach is that employability is not necessarily an appropriate proxy for learning gain, and the metrics we currently have for measuring “employability” are flawed. A simplistic measure of graduate salary ten years after graduation does show differences between subjects and institutions, but it also shows that gender and parental income have a strong impact (IFS, 2016). It is not difficult to make the case that the link between graduate salary and the learning gain experienced at university or college is tenuous at best, and certainly could not be described as causal.
A better approach to measuring “employability” may be reflective surveys or standardised tests, since these measure either students’ perceptions of their skill development or that development directly. Measures such as these, if designed correctly, could be useful indicators of effective educational practice and would allow us to measure the tangible impact of student engagement initiatives. We must be wary, however, of the risks: with metrics like these included in the TEF, and thereby linked to fees and institutional reputation, there is a real danger of gaming on the part of students or institutions. It is to be hoped that HEFCE and the Department for Business, Innovation and Skills (BIS) can ensure that the metrics they eventually propose for measuring learning gain are valid proxies that help institutions to enhance their provision. If so, a whole new area of research on the impact of student engagement initiatives will open up. If not, improving employability statistics as defined by the government may become the new key priority, pushing student engagement and partnership to the sidelines.
References
Chickering, A., & Gamson, Z. (1987). Seven principles for good practice in undergraduate education. American Association for Higher Education Bulletin, 39(7), 3–7.
IFS. (2016, April). How English domiciled graduate earnings vary with gender, institution attended, subject and socio-economic background. Retrieved from Institute for Fiscal Studies: http://www.ifs.org.uk/publications/8233
Kuh, G., Kinzie, J., Buckley, J., Bridges, B., & Hayek, J. (2007). Piecing together the student success puzzle: Research, propositions, and recommendations. ASHE Higher Education Report, 32(5).
RAND Europe. (2014). Learning gain in higher education. Retrieved from HEFCE.ac.uk: http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/Learning,gain,in,HE/Learning_gain.pdf
Trowler, V. (2010). Student engagement literature review. Retrieved from Higher Education Academy: https://www.heacademy.ac.uk/sites/default/files/studentengagementliteraturereview_1.pdf