PRISM network comments on the assessment of People, Culture and Environment in REF 2028

These comments were submitted to REF 2028 on 1 December 2023 in response to its call for feedback on the assessment of People, Culture and Environment (PCE). We’re publishing them here to amplify the voices of research-adjacent staff at UK HEIs in the development of REF 2028. These perspectives come from an online discussion held with members in late summer, plus an additional written feedback survey. We welcome discussions with other actors in the UK’s research system to develop these ideas further.
 

Introduction to PRISMs

The PRISM network is a grassroots organisation of Professional Research Investment and Strategy Managers (currently c.500 members). PRISMs predominantly enable the delivery and growth of large research investments, working across the whole grant cycle: ideation, application, setup, delivery, monitoring and impact assessment. PRISM roles are highly varied and typically require an academic background in research and/or teaching as well as stakeholder, project management and operational skills. Although only a minority of PRISMs produce traditional research outputs, many more produce non-traditional outputs, and all build community and impact. This submission represents the views of network members.
 

Inclusive definitions of quality

The greater weighting for the PCE element, and the inclusion of people and culture alongside environment, would in theory require institutions to value the work of PRISMs more explicitly, as most of us spend our time delivering PCE and impact rather than contributing to outputs. We welcome this, as many of our members report a lack of professional recognition at their institutions. However, whether the work of PRISMs (and other research-enabling professionals) contributes in practice to this measure depends very much on which metrics are chosen and how they are interpreted by institutions and reviewers.

Many members are rather cynical about this change. In a REF discussion session in September, several described developing impact case studies for the last REF, where there was a gulf between the PRISM and an academic in professional opinion on what constituted good-quality work. The academic’s choice of impact case study carried the day; the PRISM wrote it up, and academic referees in the REF scored it highly, even though the PRISM (who had significant professional impact experience) considered it relatively poor quality. We fear a similar gap in the next REF for PCE, in which an academic’s idea of good PCE scores highly with other academics, even though it describes a situation, structure or process that a PRISM (or other non-academic staff member) would consider to be poor PCE.

If PRISMs and other categories of non-academic, non-research, non-teaching staff get to contribute meaningfully to the choice of PCE metrics AND are clearly required by the REF to be part of an institution’s PCE contribution process AND sit on REF PCE review panels, then things could be good! The middle criterion – that these staff are empowered within institutions – is essential to avoid a situation like the one the HiddenREF is trying to combat with its 5% manifesto for outputs. Institutions have already demonstrated (unsurprisingly) that they are highly risk-averse in the REF and reluctant to submit material that isn’t the norm, hence the dominance of “traditional” research outputs in submissions. This suggests they will be equally reluctant to move away from valuing academics’ definitions of good PCE.
 

Choice of metrics

Measuring people and culture in British research institutions is new and unproven. How is the REF going to pilot and validate its metrics? Doing so is essential before rollout.

The focus on demonstrable outcomes risks minimising the importance of work which is hard to measure, as has already been shown for impact work. A lot of people and culture work is hard to measure because it is relatively subjective and depends on people’s lived experience, which may not be reflected in their behaviour. But that is no reason not to attempt to measure that experience. There is an existing literature on measuring psychological safety which would be useful here, though it needs adaptation (and validation).

Some suggestions for metrics which might be relevant to PRISMs’ experiences of PCE are listed below; a sketch of how the proportion-based metrics could be computed follows the list.

  • Number/proportion of non-academic/research staff applying for grants.

  • Number/proportion of non-academic/teaching staff developing teaching/training.

  • Number/proportion of non-academic staff in senior management.

  • Existence of promotion pathways for non-academic staff.

  • Professional development spend per head, and/or training days used per annum, for non-academic staff.

  • Length of retention of non-academic staff on fixed term contracts.

  • Number of transitions of staff from fixed term contracts to permanent contracts.

  • Recruitment of academic/research/teaching staff with a professional background that is not 100% academic/research/teaching (e.g. as at Fraunhofer institutes).

  • Recruitment of professional staff from outside the HE sector into middle and senior roles.

  • Measures of psychological safety for all professional roles, at all levels of seniority. 
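As a worked illustration of the proportion-based metrics above, here is a minimal sketch in Python. The staff records, field names and numbers are entirely hypothetical; real HR data and role classifications would differ.

```python
from dataclasses import dataclass

# Hypothetical staff records; fields and values are invented for illustration.
@dataclass
class StaffRecord:
    role_family: str         # e.g. "academic", "research", "professional"
    is_senior: bool          # holds a senior management role
    applied_for_grant: bool  # applied for a grant this cycle

staff = [
    StaffRecord("academic", True, True),
    StaffRecord("professional", False, True),
    StaffRecord("professional", True, False),
    StaffRecord("research", False, True),
]

non_academic = [s for s in staff if s.role_family == "professional"]

# Metric: proportion of non-academic staff applying for grants.
grant_rate = sum(s.applied_for_grant for s in non_academic) / len(non_academic)

# Metric: non-academic share of senior management roles.
senior = [s for s in staff if s.is_senior]
senior_share = sum(s.role_family == "professional" for s in senior) / len(senior)

print(f"Non-academic staff applying for grants: {grant_rate:.0%}")  # 50%
print(f"Non-academic share of senior roles: {senior_share:.0%}")    # 50%
```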
     

Assessment unit

Assessing PCE at institutional and discipline levels is likely to erase the interdisciplinary and inter-institutional work that many PRISMs carry out, funded by large research council grants such as networks and hubs. These carry a disproportionate burden of community building and project complexity. There needs to be a specific plan to ensure that this work is not omitted from assessment, since it represents a large part of the UK’s R&D investment.

Many PRISM network members report that their recognition depends on individual academics’ attitudes and is not an institution-wide or discipline-wide phenomenon. The proposed reporting structure is therefore likely to miss pockets of good PCE practice within an overall poor institution and/or discipline (and vice versa). Reporting the variability of satisfaction with culture, not just averages, might capture some of this, as the sketch below illustrates.
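To make this concrete, here is a minimal sketch with hypothetical survey scores: two units can report identical average satisfaction while one conceals a pocket of very poor experience that only the spread reveals.

```python
from statistics import mean, stdev

# Hypothetical satisfaction scores on a 1-5 scale; values are invented.
unit_a = [3, 3, 3, 3, 3, 3]  # uniformly middling experience
unit_b = [5, 5, 5, 1, 1, 1]  # a pocket of very poor experience

for name, scores in [("A", unit_a), ("B", unit_b)]:
    print(f"Unit {name}: mean={mean(scores):.1f}, stdev={stdev(scores):.1f}")

# Unit A: mean=3.0, stdev=0.0  -> averages alone look identical
# Unit B: mean=3.0, stdev=2.2  -> variability exposes the hidden pocket
```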
 

Giving full credit

At a meta-level, it would be helpful if there were transparent reporting of who contributed, and in what way, to the PCE submission, for example using the Contributor Roles Taxonomy (CRediT).
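For illustration only, a sketch of what CRediT-style attribution on a PCE submission might look like. The names and role assignments are invented; the role labels are standard CRediT terms.

```python
# Hypothetical CRediT-style attribution for a PCE submission.
# Names and assignments are invented; role labels are standard CRediT terms.
pce_contributors = {
    "A. Professor (academic lead)": ["Conceptualization", "Supervision"],
    "B. Manager (PRISM)": ["Project administration",
                           "Writing - original draft"],
    "C. Officer (impact officer)": ["Data curation",
                                    "Writing - review & editing"],
}

for person, roles in pce_contributors.items():
    print(f"{person}: {', '.join(roles)}")
```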
 

Conclusion

PRISMs can be empowered to develop and deliver good PCE, given our role in building processes and anchoring communities. More than the other areas of assessment, PCE is not wholly academically driven. The metrics and the processes need to reflect this.

Last edited: 8 December 2023

Contact: i.von-holstein@imperial.ac.uk