What is the REF?

The Research Excellence Framework (REF) is the UK’s system for assessing the excellence of research in UK higher education institutions (HEIs). The REF outcomes are used to inform the allocation of around £2 billion per year of public funding for universities’ research.

The REF was first carried out in 2014, replacing the previous Research Assessment Exercise. Research England manages the REF on behalf of all four UK higher education funding bodies:

  • Research England
  • Scottish Funding Council
  • Medr, Wales’ Commission for Tertiary Education and Research
  • Department for the Economy, Northern Ireland

The funding bodies’ shared policy aim for research assessment is to secure a world-class, dynamic and responsive research base across the full academic spectrum within UK higher education.

The REF objectives are to:

  • provide accountability for public investment in research and produce evidence of the benefits of this investment
  • provide benchmarking information and establish reputational yardsticks, for use in the higher education sector and for public information
  • inform the selective allocation of funding for research

King's College London

Research England has announced that the next Research Excellence Framework will take place in 2029, with submissions due in late 2028. Further information can be found here: REF2029

King’s is developing its plans for REF2029.

The Research Excellence Framework 2021

[Infographic: the REF 2021 results confirmed King’s as one of the UK's top research universities: 6th in the UK for research power, 3rd among UK multidisciplinary institutions for impact, and 6th in the UK for world-leading (4*) research.]

The Research Excellence Framework (REF) is the system for assessing the quality of research in UK universities and higher education colleges and replaces the Research Assessment Exercise (RAE).

The first REF exercise was run in 2014, and the second in 2021.

King’s made an institutional submission to REF, which is broken down into disciplinary units known as Units of Assessment (UOAs).

The key purposes of the REF are:

  • To inform the funding bodies’ selective allocation of funding for research (QR funding)
  • To provide accountability for public investment in research
  • To provide benchmarking information for use in the higher education sector and for public information

King's preparations for REF 2021 were led by Professor Reza Razavi, Vice President and Vice Principal (Research), and co-ordinated by Jo Lakey (REF and Research Impact Director) and Adelah Bilal (REF Policy Officer).

King's College London's REF2021 Submission

King's achieved excellent results in REF 2021, sustaining the College's position amongst the world’s best universities for research excellence and power.

King’s Ranking Summary


Some of the key data is highlighted below:

  • Overall, 55.1% of the work submitted was rated 4* (world leading).
  • On Power¹ rankings (weighted quality multiplied by the size of the submission), King's maintained its position at 6th.
  • On Impact, King's ranked 3rd amongst multi-faculty universities.

Achievements by individual Units of Assessment:

  • Allied Health (including the Florence Nightingale Faculty of Nursing, Midwifery & Palliative Care, the Faculty of Dentistry, Oral & Craniofacial Sciences, as well as the School of Life Course Sciences and the Institute of Pharmaceutical Sciences in the Faculty of Life Sciences & Medicine) is 1st in the country for quality of research and has a 100 percent 4* ranking for research environment;
  • Business and Management is 9th in the country for quality of research;
  • Chemistry has a 100 percent 4* ranking for impact, and is 5th in the country for quality of research;
  • Classics (part of the Faculty of Arts & Humanities) is 1st in the country for quality of research;
  • Clinical Medicine has a 100 percent 4* ranking for research environment;
  • Engineering is 12th in the country for quality of research;
  • Modern Languages has a 100 percent 4* ranking for research environment;
  • Psychology, Psychiatry and Neuroscience is 2nd in the country for power¹ and achieved a 100 percent 4* ranking for research environment;
  • Politics and International Studies is 1st for power¹;
  • Sport and Exercise Science has a 100 percent 4* ranking for impact;
  • Theology and Religious Studies has a 100 percent 4* ranking for research environment.

The full REF results can be accessed at ref.ac.uk and examples of our impact case studies can be seen at https://www.kcl.ac.uk/news/spotlight.

¹ The Power score takes into account both the quality and the quantity of research activity, with a weighting applied only to research at 4* and 3*.

² The Quality Index is similar to the GPA but gives additional weight to the proportion of research at the higher star level. The index that the College has used is (% 4* × 9 + % 3* × 3) / 9. Different league tables may use different weightings.

³ GPA (Grade Point Average) represents an average score (out of four) for the submission to a unit of assessment. It is derived by multiplying the percentage of the submission at each level (4*, 3*, 2*, 1*) by the number of the star rating, summing the results, and dividing by 100.
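
As a worked illustration of these measures, here is a minimal sketch in Python using a hypothetical quality profile. Note that the Power definition in note 1 weights only 4* and 3* work; the simpler GPA × FTE form shown here is the one used in the Times Higher Education methodology reproduced later on this page, so this is an illustration rather than King's exact calculation.

    def gpa(profile):
        # GPA: (percentage at each star level x star level) / 100
        return sum(pct * stars for stars, pct in profile.items()) / 100

    def quality_index(profile):
        # The index described in note 2: (%4* x 9 + %3* x 3) / 9
        return (profile[4] * 9 + profile[3] * 3) / 9

    def power(profile, fte):
        # Quality weighted by scale: GPA multiplied by submitted FTE
        return gpa(profile) * fte

    # Hypothetical profile: 40% 4*, 45% 3*, 12% 2*, 3% 1*
    profile = {4: 40, 3: 45, 2: 12, 1: 3}
    print(gpa(profile))             # -> 3.22
    print(quality_index(profile))   # -> 55.0
    print(power(profile, 250))      # -> 805.0 (floating point may vary slightly)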

How did REF2021 assess Higher Education Institutions?

REF assessments are a process of peer review, carried out by an expert sub-panel in each UOA. Every element submitted to the REF was graded on a five-point scale from unclassified (did not meet REF criteria) to 4* (world leading).

REF 2021 submissions consisted of three main elements:

Outputs:

  • This consists of research outputs produced by the College during the assessment period (1 January 2014 to 31 December 2020).
  • The output assessment contributes 60% of the overall assessment.
  • 2.5 outputs were required per FTE of Category A submitted staff.

Impact:

  • This consists of Impact Case Studies evidencing the benefits derived from our research.
  • Impact contributes 25% of the overall assessment.

Environment:

  • Each UOA produces an environment statement describing how the Unit supports research.
  • The environment statement contributes 15% of the overall assessment.
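
Putting the three weightings together, here is a minimal sketch in Python (with made-up percentages) of how the element quality profiles combine into an overall quality profile:

    # Combine element quality profiles (% at each star level) using
    # the REF 2021 weights: outputs 60%, impact 25%, environment 15%.
    def overall_profile(outputs, impact, environment):
        return {
            stars: 0.60 * outputs[stars]
                 + 0.25 * impact[stars]
                 + 0.15 * environment[stars]
            for stars in outputs
        }

    # Hypothetical element profiles (% at 4*, 3*, 2*, 1*)
    outputs     = {4: 45, 3: 45, 2: 9, 1: 1}
    impact      = {4: 60, 3: 30, 2: 10, 1: 0}
    environment = {4: 75, 3: 25, 2: 0, 1: 0}
    print(overall_profile(outputs, impact, environment))
    # -> approximately {4: 53.25, 3: 38.25, 2: 7.9, 1: 0.6}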

The REF 2021 Oversight Group (chaired by Professor Reza Razavi) had strategic oversight of King’s submission to the REF; it was responsible for ensuring that progress was made, and it oversaw and signed off arrangements relating to the university’s policy for submission. The Group reported to the Senior Management Team and to the College Research Committee.

Submissions to each of the four REF Main Panels were coordinated through a Main Panel Co-ordination group, whose responsibility was to determine the internal policy for submissions across all the Units of Assessment related to that Panel.

College Impact Committee

The College Impact Committee (CIC) brings together the expertise of academic and professional services staff across the College to support the development of research impact activities and to share best practice for impact development and evaluation in all research carried out at the College.

The Chair of the CIC is the Dean of Research Impact, Professor Nigel Pitts. Membership of the CIC includes the Academic Impact Leads for each faculty and the Professional Services Impact Leads.

Data collection and further information

  • REF data collection: King’s lawful basis under GDPR for collecting and using personal data.
  • REF 2021 website: The REF is the UK's system for assessing the quality of research in UK...
  • Research impact: read about the impact King’s has on the world's greatest challenges; our research tackles global issues, adding value to society and the economy.

UCL Research

Research Excellence Framework

The Research Excellence Framework (REF) is the system for assessing the quality of research in UK higher education institutions (HEIs).

The REF is carried out approximately every six to seven years to assess the quality of research across 157 UK universities and to share how this research benefits society both in the UK and globally. It is implemented by Research England, part of UK Research and Innovation.

Submissions for REF 2021 closed on 31 March 2021 and the results were announced on 12 May 2022. See the REF 2021 results and visit the REF Hub to read over 170 impact case studies about how UCL is transforming lives.

The previous REF was REF 2014. The next REF will be REF 2029, with results published in December 2029.

For queries, contact the UCL REF team.

Purpose of the Framework

The main objectives of the REF are:

  • To provide accountability for public investment in research
  • To provide benchmarking information for use within the Higher Education sector and for public information.
  • To inform the selective allocation of quality-related (QR) funding for research

The REF assesses three distinct elements:

  • quality of research outputs
  • impact of research beyond academia
  • the environment that supports research.

Assessment details and process

The REF is a process of expert review, with discipline-based expert panels assessing submissions made by HEIs in 34 Units of Assessment (UOAs). Additional measures, including specific advisory panels, were introduced in REF 2021 to support the implementation of equality and diversity, and the submission and review of interdisciplinary research, during assessment. The main panels were as follows:

  • Panel A: Medicine, health and life sciences
  • Panel B: Physical sciences, engineering and mathematics
  • Panel C: Social sciences
  • Panel D: Arts and humanities 

The REF submission comprises three elements: research outputs, research impact and research environment. Sub-panels for each unit of assessment use their expertise to grade each element of the submission from 4 stars (outstanding work) through to 1 star (with an unclassified grade awarded if the work falls below the standard expected or is deemed not to meet the definition of research). The scores are weighted 60% (outputs), 25% (impact) and 15% (environment).


REF 2021: Times Higher Education’s table methodology

How we analyse the results of the research excellence framework.


The data published today by the four UK funding bodies present the proportion of each institution’s Research Excellence Framework submission, in each unit of assessment, that falls into each of five quality categories.

For output and overall profiles, these are 4* (world-leading), 3* (internationally excellent), 2* (internationally recognised), 1* (nationally recognised) and unclassified (below nationally recognised or fails to meet the definition of research).

For impact, they are 4* (outstanding in terms of reach and significance), 3* (very considerable), 2* (considerable), 1* (recognised but modest) and unclassified (little or no reach or significance).

For environment, they are 4* (“conducive to producing research of world-leading quality and enabling outstanding impact, in terms of its vitality and sustainability”), 3* (internationally excellent research/very considerable impact), 2* (internationally recognised research/considerable impact), 1* (nationally recognised research/recognised but modest impact) and unclassified (“not conducive to producing research of nationally recognised quality or enabling impact of reach and significance”).

For the overall institutional table, Times Higher Education aggregates these profiles into a single institutional quality profile based on the number of full-time equivalent staff submitted to each unit of assessment. This reflects the view that larger departments should count for more in calculating an institution’s overall quality.

Institutions are, by default, ranked according to the grade point average (GPA) of their overall quality profiles. GPA is calculated by multiplying an institution’s percentage of 4* research by 4, its percentage of 3* research by 3, its percentage of 2* research by 2 and its percentage of 1* research by 1; those figures are added together and then divided by 100 to give a score between 0 and 4.

We also present research power scores. These are calculated by multiplying the institution’s GPA by the total number of full-time equivalent staff submitted, and then scaling that figure such that the highest score in the ranking is 1,000. This is an attempt to produce an easily comparable score that takes into account volume as well as GPA, reflecting the view that excellence is, to some extent, a function of scale as well as quality. Research power also gives a closer indication of the relative size of the research block grant that each institution is likely to receive on the basis of the REF results.

However, block grants are actually calculated according to funding formulas that currently take no account of any research rated 2* or below. The formula is slightly different in Scotland, but in England, Wales and Northern Ireland, the “quality-related” (QR) funding formula also accords 4* research four times the weighting of 3* research. Hence, we also offer a market share metric. This is calculated by using these quality weightings, along with submitted FTEs, to produce a “quality-related volume” score; each institution’s market share is the proportion of all UK quality-related volume accounted for by that institution.
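
Read literally, the calculations described above (GPA, research power rescaled so the top institution scores 1,000, and QR-weighted market share) could be sketched as follows. The institutions and figures are hypothetical, borrowing the article's own example names, and this is an illustration of the stated formulas rather than THE's actual code:

    # GPA of a quality profile (% at each star level).
    def gpa(profile):
        return sum(pct * stars for stars, pct in profile.items()) / 100

    # QR-style "quality-related volume": 4* weighted four times 3*,
    # 2* and below ignored, scaled by submitted FTE.
    def qr_volume(profile, fte):
        return (4 * profile[4] + profile[3]) / 100 * fte

    # Hypothetical institutional quality profiles and submitted FTEs.
    institutions = {
        "University of Applemouth": ({4: 50, 3: 40, 2: 9, 1: 1}, 1200.0),
        "University of Dayby":      ({4: 30, 3: 50, 2: 18, 1: 2}, 400.0),
    }

    raw = {name: gpa(p) * fte for name, (p, fte) in institutions.items()}
    scale = 1000 / max(raw.values())          # top institution rescaled to 1,000
    research_power = {name: v * scale for name, v in raw.items()}

    volume = {name: qr_volume(p, fte) for name, (p, fte) in institutions.items()}
    total = sum(volume.values())
    market_share = {name: v / total for name, v in volume.items()}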


The 2014 figures are largely taken from THE’s published rankings for that year – although research power figures have been retrospectively indexed.

Note that a small number of institutions may have absorbed other institutions since 2014. In these cases, rather than attempting to calculate a 2014 combined score for the merged institutions, we list only the main institution’s 2014 score.

We exclude from the main tables specialist institutions that entered only one unit of assessment (UoA); these are listed instead in the relevant unit of assessment table.

Note that the figure for number of UoAs entered by an institution counts multiple submissions to the same unit of assessment separately.

For data on the share of eligible staff submitted, note that some institutions have a figure greater than 100 per cent. According to the UK funding bodies, this is due to some research staff not being registered in official statistics due to internal employment structures at certain institutions. 

The separate tables for outputs, impact and environment are constructed in a similar way, but they take account solely of each institution’s quality profiles for that specific element of the REF; this year, those elements account for 60, 25 and 15 per cent of the main score, respectively. These tables exclude a market share measure.

The subject tables rank institutional submissions to each of the 34 units of assessment based on the GPA of the institution’s overall quality profiles in that unit of assessment, as well as its research power. GPAs for output, impact and environment are also provided.

Where a university submitted fewer than four people to a UoA, the funding bodies suppress its quality profiles for impact, environment and outputs, so it is not possible to calculate a GPA. This is indicated in the table by a dash.


As before, 2014 scores for the subject tables are taken from THE’s 2014 scores. However, there are a small number of cases where UoAs from 2014 have merged or split in 2021. Whereas there were four separate UoAs for engineering in 2014, there is only one in 2021. For reasons of comparability, we list 2014 scores for the engineering table based on the combined scores of the engineering UoAs in 2014, weighted according to the FTE submitted to each.

Conversely, while geography, environmental studies and archaeology was a single UoA in 2014, archaeology is a separate UoA in 2021. Since it is not possible to separate out archaeology scores from 2014, the 2014 scores listed for both the archaeology UoA and the geography and environmental studies UoA are the same.

Where an institution did not submit to the relevant unit of assessment in 2014, the relevant fields are marked “n/a”. Where an institution made multiple submissions to a UoA in 2014 and only one in 2021, the 2014 fields are also marked "n/a". 

In some UoAs, single institutions have made multiple submissions. These are listed separately and are distinguished by a letter: eg, “University of Applemouth A: Nursing” and “University of Applemouth B: Pharmacy”.

Where two universities have made joint submissions, these are listed on separate lines and indicated accordingly: eg, “University of Applemouth (joint submission with University of Dayby)”. By default, the institution with the higher research power is listed first.

On the landing page for each subject table, we also give GPA and FTE submission figures for the UoA as a whole, based on the “national profile” provided by the funding bodies.



The University of Manchester


Research Excellence Framework 2021

The University of Manchester's position as a research powerhouse has been confirmed in the results of the 2021 Research Excellence Framework (REF).


Key results

  • We have retained fifth place for research power¹.
  • Overall, 93% of the University’s research activity was assessed as ‘world-leading’ (4*) or ‘internationally excellent’ (3*).
  • We ranked in 10th place in terms of grade point average² (an improvement from 19th in the previous exercise, REF 2014).
  • The Times Higher Education places us even higher, at eighth on GPA (up from 17th place), as its analysis excludes specialist HE institutions.
  • We ranked in the top three nationally for nine subjects (by Unit of Assessment grade point average or research power).

The Research Excellence Framework (REF) is the system for assessing the quality of research in UK higher education institutions. Manchester made one of the largest and broadest REF submissions in the UK, entering 2,249 eligible researchers across 31 subject areas.

The evaluation encompasses the quality of research impact, the research environment, research publications and other outputs.

REF results

Overall, 93% of the University’s research activity was assessed as ‘world-leading’ (4*) or ‘internationally excellent’ (3*). The evaluation encompasses the quality of research impact (96% 3* or 4*), the research environment (99% 3* or 4*), and research publications and other outputs (90% 3* or 4*).

We ranked in 10th place in terms of grade point average, an improvement from 19th in the previous exercise, REF 2014. The Times Higher Education places us even higher, at eighth on GPA (up from 17th place), as its analysis excludes specialist HE institutions. This result was built upon a significant increase in research assessed as ‘world-leading’ (4*) between REF 2014 and REF 2021.

The University came in the top three for the following subjects (Unit of Assessment by grade point average or research power):

  • Allied Health Professions, Dentistry, Nursing and Pharmacy
  • Business and Management Studies
  • Drama, Dance, Performing Arts, Film and Screen Studies
  • Development Studies
  • Engineering

The University had 19 subjects in the top ten overall by grade point average and 15 when measured by research power.

Research impact

Social responsibility underpins research activity at Manchester, and we combine expertise across disciplines to deliver pioneering solutions to the world’s most urgent problems.

We’re ranked as one of the top ten universities in the world for delivering against the UN’s Sustainable Development Goals ( Times Higher Education Impact Rankings) and our research impact showcase includes examples of the positive impact we’ve made across culture and creativity, economic development and inequalities, health and wellbeing, innovation and commercialisation, and sustainability and climate change.

Professor Dame Nancy Rothwell, former President and Vice-Chancellor of The University of Manchester, said: "These comprehensive and independent results confirm Manchester's place as a global powerhouse of research.

“We create an environment where researchers can thrive and exchange ideas. Most importantly the quality and impact of our research is down to the incredible dedication and creativity of our colleagues who work every day to solve significant world problems, enrich our society and train the next generation of researchers.

“The fact that our REF results are accompanied by examples of the real difference we’ve made in the world, all driven from this city makes me very proud.”

Research environment

The REF exercise also evaluated the University’s work to provide a creative, ambitious and supportive research environment , in which researchers at every career stage can develop and thrive as leaders in their chosen field.

In this category, the University achieved a result of 99% ‘internationally excellent’ or ‘world-leading’, making it one of the best places in the country to build a research career.

¹ Research power is calculated as grade point average multiplied by the number of FTE (full-time equivalent) staff submitted, and gives a measure of scale as well as quality. Grade point average (GPA) measures the overall or average quality of research and takes no account of the FTE submitted.

² Grade point average is a measure of the overall or average quality of research, calculated by multiplying the percentage of research in each grade by its rating, adding the results together and dividing by 100.
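
As a worked example of these two measures (with a hypothetical quality profile), a submission with 50 per cent of activity at 4*, 43 per cent at 3*, 6 per cent at 2* and 1 per cent at 1* would have a GPA of (50 × 4 + 43 × 3 + 6 × 2 + 1 × 1) / 100 = 3.42; if 2,249 FTE staff were submitted, the research power score would be 3.42 × 2,249 ≈ 7,692 before any rescaling applied by a given league table.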

REF 2021 results

View the University’s full set of results by unit of assessment.

REF 2021 submissions

View the University’s full list of 160 submissions.


Research impact showcase

Find out how we're solving the world's most urgent problems.


REF2014 Impact Case Studies

WHAT IS THE RESEARCH EXCELLENCE FRAMEWORK (REF)?

The REF is a system for assessing the quality of research in UK higher education institutions implemented for the first time in 2014.

The primary purpose of REF 2014 was to assess the quality of research and produce outcomes for each submission made by institutions:

  • The four higher education funding bodies will use the assessment outcomes to inform the selective allocation of their grant for research to the institutions which they fund, with effect from 2015-16.
  • The assessment provides accountability for public investment in research and produces evidence of the benefits of this investment.
  • The assessment outcomes provide benchmarking information and establish reputational yardsticks, for use within the higher education (HE) sector and for public information.

For background information on the development of the REF, please visit: http://www.ref.ac.uk/about/background/

The introduction of the ‘Impact’ element is a key change to research assessment in the UK. As part of the REF, HEIs were required to showcase the impact of research beyond academia via impact case studies (REF3b) and statements on the HEI’s approach to research impact (impact template, REF3a).

Other key features of REF 2014 that differ from RAE 2008 include the following:

  • The RAE units of assessment (67 sub-panels under the guidance of 15 main panels) were amalgamated to form 36 REF units of assessment, under four main panel discipline areas. This has resulted in broader discipline areas being covered by one sub-panel.
  • Research environment continues to be assessed. However the criteria and structure of this part of the assessment changed and it is therefore not directly comparable with the same element in the RAE.
  • Additional measures to support equality and diversity were developed for the REF. This included the submission of Codes of Practice on the selection of staff, approved in advance by the REF Equality and Diversity Panel; and a systematic approach to considering individual staff circumstances that constrained the ability of staff to produce four research outputs.

For further information on REF requirements please see:

Assessment framework and guidance on submissions (available at: http://www.ref.ac.uk/pubs/2011-02/)

Panel criteria and working methods (available at: http://www.ref.ac.uk/pubs/2012-01/)

WHAT ARE HIGHER EDUCATION INSTITUTIONS?

Higher education institution (HEI) is a term from the Further and Higher Education Act 1992. According to the Act, it means any provider which is one or more of the following: a UK university; a higher education corporation; a designated institution. HEFCE may choose to fund higher education institutions for teaching and research if they meet the conditions of grant. Higher education institutions are also required to subscribe to the Office of the Independent Adjudicator.

154 Higher Education Institutions submitted to the REF from across the UK. A full list of these is available on the REF website at: http://results.ref.ac.uk/Results/SelectHei

WHAT ARE UNITS OF ASSESSMENT?

Institutions were invited to make REF submissions in 36 subject areas, called units of assessment (UOAs). The REF submissions were assessed by an expert sub-panel for each UOA.

For further information see: http://www.ref.ac.uk/panels/unitsofassessment/

WHAT ARE THE MAIN PANELS?

The expert sub-panels who assessed the REF submissions were grouped into broad subject areas and worked under the guidance of four main panels.

For further information see: http://www.ref.ac.uk/panels/


WHAT IS REF IMPACT?

In the REF, impact is defined as an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia.

REF impact was assessed in the form of impact case studies and impact templates, where HEIs provided further information about their approach to supporting and enabling impact. For access to the impact templates please see full REF submissions at: http://results.ref.ac.uk/Results

WHAT IS A REF IMPACT CASE STUDY?

Each HEI submitted a selection of impact case studies for assessment in the REF. An impact case study is a four-page document describing the impact of research undertaken within the submitting department. It also contains information about the research that underpins the impact. Further information about the criteria for the submission of impact case studies can be found in the two key REF guidance documents listed above.

WHAT IS A SUBMITTING INSTITUTION?

A submitting institution is a Higher Education institution that submitted to the REF. A full list of the 154 Higher Education Institutions that submitted to the REF from across the UK is available on the REF website at: http://results.ref.ac.uk/Results/SelectHei

HOW MANY CASE STUDIES ARE SUBMITTED PER INSTITUTION?

Required number of case studies, by number of Category A staff submitted (FTE):

  • Up to 14.99 FTE: 2 case studies
  • 15–24.99 FTE: 3 case studies
  • 25–34.99 FTE: 4 case studies
  • 35–44.99 FTE: 5 case studies
  • 45 FTE or more: 6 case studies, plus 1 further case study per additional 10 FTE
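
The banding is a simple step function. A minimal sketch in Python, assuming the ‘additional 10 FTE’ rule counts complete blocks of 10 above 45:

    def required_case_studies(fte):
        # FTE banding for REF 2014 impact case studies (table above).
        if fte < 15:
            return 2
        if fte < 25:
            return 3
        if fte < 35:
            return 4
        if fte < 45:
            return 5
        # 45 or more: 6, plus 1 per additional complete 10 FTE
        return 6 + int((fte - 45) // 10)

    print(required_case_studies(14.5))   # -> 2
    print(required_case_studies(45.0))   # -> 6
    print(required_case_studies(72.0))   # -> 8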

Joint submissions

Higher Education Institutions were able to make submissions with one or more other UK HEIs where this was the most appropriate way of describing research they had developed or undertaken collaboratively. Within a joint submission, all impact case studies were submitted jointly and could stem from research undertaken collaboratively or at either of the submitting HEIs. The case studies are not designated to a particular HEI within the joint submission.

Multiple submissions

Institutions would normally make one submission in each unit of assessment (UOA) they entered. They could, by exception, make multiple submissions to one UOA (with prior approval). An HEI might have wanted to make more than one submission to a UOA if it had two bodies of research that fell within the scope of the assessment panel but were clearly academically distinct. Case studies in the same unit of assessment are not indexed by separate multiple submissions within this database.

For further information, see Assessment framework and guidance on submissions (available at: http://www.ref.ac.uk/pubs/2011-02/)

WHAT IS IN THIS DATABASE?

The REF Impact case studies database includes 6,637 documents (at 18 November 2015) submitted by UK Higher Education Institutions (HEIs) to the 2014 Research Excellence Framework (REF2014). The documents have been processed by Digital Science, normalised to a common format to improve searchability and tagged against select fields.

WHAT IS THE TEMPLATE OF THE ORIGINAL DOCUMENTS?

REF impact case studies generally follow a template set by the REF criteria (see ‘Assessment framework and guidance on submissions’, Annex G, for the impact case study template and guidance: http://www.ref.ac.uk/pubs/2011-02/). This template has a Title and five main text sections, plus the name of the Submitting Institution and the Unit of Assessment.

Some submitting institutions omitted some of this information or modified the template. The name of the Submitting Institution and the Unit of Assessment has therefore been added as metadata tags.

In addition to the Title of the case study, the text sections of the template, and the indicative lengths recommended in the REF criteria, are:

  • 1. Summary of the impact: 100 words
  • 2. Underpinning research: 500 words
  • 3. References to the research: six references
  • 4. Details of the impact: 750 words
  • 5. Sources to corroborate the impact: 10 references

In some case studies the sections vary considerably from the indicative word lengths; however, all case studies were restricted to four pages in total.

Some case studies include non-text items such as institutional shields, photographs, other images, tables and embedded links.

DO CASE STUDIES AS DISPLAYED DIFFER FROM SUBMITTED ORIGINALS?

The format of all original documents has been modified in the database to bring them to a similar format on the website and to enable the content to be searched more easily.

Links have been added to references, which may be slightly modified for clarity, to enable users to link to commercial databases for the full article record.

Metadata has been added to associate the impact case studies with research subject areas, impact locations and impact type.

The PDF of each original document, as submitted, can be downloaded for comparison from the page displaying the individual case study. All case studies are also available as part of the REF submissions data: http://results.ref.ac.uk/

Text has been removed (or ‘redacted’) from some case studies by the submitting institutions because it is commercially sensitive or otherwise needs to be restricted. Generally this is indicated by the standard phrase “[text removed for publication]”, which was recommended in the REF guidance, but variants of this phrase do occur.

WHAT IS THE SOURCE OF THE CASE STUDY TITLE?

REF impact case study titles in this database are drawn from the HEFCE REF database. They are the titles inserted by the HEI into the REF submissions form when they submitted their case studies to the REF.

ARE ALL THE REF IMPACT CASE STUDIES IN THE DATABASE?

HEIs were able to notify the REF team that certain case studies were ‘not for publication’. These have not been included in the Database. For further information, please see: www.ref.ac.uk/about/guidance/datamanagement/confidentialimpactcasestudies/

There are also some impact case studies that are not in the Database in order to satisfy re-use and licensing arrangements. These were removed where all other case studies from a particular HEI have been made available under a CC BY 4.0 license.

The total number of Impact case studies submitted to the REF is 6,975.

The number of Impact case studies in the Database is 6,637 (at 18 November 2015).

WHAT IS REDACTION?

Redaction is the censoring or obscuring of part of a text.

Some case studies have parts of the text removed (redacted) for confidentiality reasons (for instance, commercial sensitivity). These case studies are mostly included in the database but with this text removed. Where HEIs notified us of the need to remove elements of the text after 1 October 2014, the entire document has been removed from the database.

WHAT DOES ‘VIEW BY REGION’ MEAN?

Submitting institutions can be grouped by UK region (Northern Ireland, Scotland, Wales, and nine regions for England). This is shown as part of the ‘browse by index’ on the website.

WHAT DOES ‘VIEW BY INCOME CATEGORY’ MEAN?

Submitting institutions can be grouped according to their relative and absolute research income. The UK Higher Education Statistics Agency (HESA) has assigned them to economic peer groups on the basis of income data available in 2004-05 (see www.hesa.ac.uk). This is shown as part of the ‘browse by index’ on the website.

WHAT IS SUMMARY IMPACT TYPE?

Case studies are assigned to a single ‘Summary Impact Type’ by text analysis of the ‘Summary of the impact’ (Section 1 of the impact case study template). This is an indicative guide to aid text searching and is not a definitive assignment of the impact described.

There are eight Summary Impact Types. These follow the PESTLE convention (Political, Economic, Societal, Technological, Legal, and Environmental) widely used in Government policy development. For the purposes of introductory guidance in REF impact searching, Health and Cultural impact types (otherwise subsumed within Societal) have been added to the six standard categories.

The category names have a particular meaning for the purposes of analysis, and their interpretation may vary between users. For example, JISC suggests:

Political: worldwide, European and UK national and local Government directives, public body policies, national and local organizations’ requirements, institutional policy.

Economic: funding mechanisms and streams, business and enterprise directives, internal funding models, budgetary restrictions, income generation.

Societal: societal attitudes to and impacts of education, government directives and employment opportunities, lifestyle changes, changes in populations, distributions and demographics, the societal impact of different cultures.

Most REF impact case studies relate at some level to more than one type of impact. Some case studies arguably cover all eight. Tagging supports rapid initial searching that will reveal a deeper and more diverse range of impact; user perspectives on this will vary.

Some analysts would assign REF impact case studies differently from the categorization applied here. For example, many REF impact case studies refer to spin-outs. What is the research impact of such research where it leads to a commercially valuable device of medical benefit? It has a proximate technological impact that, once developed, might have economic impact for a company and later lead to health impact for society. If the research is relatively recent and the spin-out is new, then in this database the Summary Impact is tagged as technological, since the economic and health impacts remain latent.

See also: http://www.jiscinfonet.ac.uk/tools/pestle-swot/

WHAT IS RESEARCH SUBJECT AREA?

The REF impact case studies are assigned to one or more Research Subject Areas (to a maximum of three) by text analysis of the ‘Underpinning research’ section (Section 2 of the impact case study template). This is an indicative guide to aid text searching via a more fine-grained disciplinary structure than is immediately available in the 36 REF Units of Assessment. It is not a definitive assignment of research discipline.

The Research Subject Area is equivalent to the 4-digit Group level of granularity in the Fields of Research of the Australian and New Zealand Standard Research Classification (http://www.arc.gov.au/pdf/ANZSRC_FOR_codes.pdf). This is hierarchical, with 22 Divisions at the 2-digit level and 157 Groups at the 4-digit level (there are also 1,238 Fields at the 6-digit level, but these are not used here).

In search result lists, the impact case study details for Research Subject Area display the 2-digit Division name in bold and 4-digit Group name in normal type. Some case studies can be associated with multiple Research Subject Areas drawn from different 2-digit Divisions. For example, a case study might link Statistics (0104) in Mathematical Sciences (01) with Ecology (0602) in Biological Sciences (06). Such instances are tagged in the database as Interdisciplinary and can be filtered in search results.

WHAT IS IMPACT UK LOCATION?

REF impact case studies are tagged with one or more UK locations on the basis of places (UK cities and towns, as found in the GeoNames database http://www.geonames.org ) referenced in the text of either Section 1 (Summary of impact) or Section 4 (Details of the impact) of the document. This is an indicative guide to aid text searching. It is not a definitive identification of where UK impact has occurred as some text makes passing references to associated locations; other text references impact beneficiaries without a specific location.

It should be noted that the automated indexing cannot distinguish between e.g. Dover as a town, as the name of a street and/or as a person’s surname.

WHAT IS IMPACT GLOBAL LOCATION?

REF impact case studies are tagged with one or more global locations on the basis of places (place names, as found in the GeoNames database http://www.geonames.org ) referenced in the text of either Section 1 (Summary of impact) or Section 4 (Details of the impact) of the document. Global locations outside the UK are grouped by country. This is an indicative guide to aid text searching. It is not a definitive identification of where the impact has occurred as some text makes passing reference to associated locations, while other text references impact beneficiaries without a specific location.

It should be noted that the automated indexing cannot distinguish between e.g. Brazil as a country, as the name of a street and/or as a person’s surname.
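
Mechanically, this kind of tagging amounts to matching a gazetteer of place names against the case study text. The sketch below is hypothetical (the database's actual pipeline draws on GeoNames and is not published here), but it illustrates both the approach and the kind of false positive described above:

    import re

    # Hypothetical mini-gazetteer; the real database draws on GeoNames.
    GAZETTEER = {"Dover", "Manchester", "Brazil"}

    def tag_locations(text):
        # Naive whole-word matching; cannot tell a place from a surname.
        return {p for p in GAZETTEER
                if re.search(r"\b" + re.escape(p) + r"\b", text)}

    print(tag_locations("Trials ran in Manchester; the PI was Jane Dover."))
    # -> {'Manchester', 'Dover'}; 'Dover' here is a surname, not a place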

WHAT IS THE SEQUENCE FOR SEARCH RESULTS?

WHAT ARE INTERDISCIPLINARY CASE STUDIES?

If a REF impact case study can be associated with multiple Research Subject Areas that are drawn from different broad Divisions, then it is tagged in the database as Interdisciplinary. This assignment is an indicative guide to aid text searching via a more fine-grained disciplinary structure than is immediately available in REF Units of Assessment. It is not a definitive assignment of a case study’s interdisciplinary nature.

REF impact case studies that are interdisciplinary in the terms of this database can be filtered in search results.

WHAT ARE SIMILAR DOCUMENTS?

The similarity of REF impact case studies is estimated by text analysis of Section 2 (Underpinning research), using Latent Semantic Analysis (LSA). This gives a compact representation of key semantic concepts contained in documents as defined by the co-occurrence of words within documents. Similar documents are defined as those that refer to the same semantic concepts.

The “view similar case studies” button allows users to see associations between REF impact case studies that use similar source research or work in the same research area, though the impacts may differ. This is an indicative guide to aid text searching. It is not a definitive indicator of a specific aspect of similarity between case studies.
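
The database's exact implementation is not described beyond this, but the general LSA technique can be sketched as follows (hypothetical corpus; scikit-learn's TruncatedSVD is one common way to compute the low-rank semantic space):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical 'Underpinning research' texts, one per case study.
    docs = [
        "statistical models of species distribution in marine ecology",
        "Bayesian statistics applied to ecological population surveys",
        "medieval manuscript digitisation and archival imaging methods",
    ]

    # LSA: a TF-IDF term-document matrix reduced to a low-rank space
    # in which documents sharing semantic concepts sit close together.
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
    lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

    # 'Similar documents' are nearest neighbours by cosine similarity.
    sims = cosine_similarity(lsa)
    print(sims[0].argsort()[::-1])  # most similar to doc 0 first (itself, then doc 1)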

CAN I CHECK REFERENCES?

The references to the research (Section 3 of the impact template) have been extracted and, where they can be disambiguated and unequivocally identified (usually by Digital Object Identifier, DOI), linked to external citation databases. Not all authors provided DOIs.

Thomson Reuters processed all article records, and some other documents, for the HEFCE REF impact case studies database so as to address the deficit in DOI links. Where possible, Thomson Reuters matched the author-provided information with article records in the Web of Science™. This has created a significant improvement in DOI-linked coverage, which benefits information recovery for users.

A DOI link is indicated by a small icon identifying the external source and placed below the reference. If the user is not a subscriber to the source then the link will take them to a preview page containing partial information rather than the fully annotated article record.

Where the case study refers to a letter of support for a claim made, this is held by the submitting HEI. These were required by the REF team where it was deemed necessary to verify the claim made and have not been systematically collected during the assessment process. Such letters are not available within this database.

WHAT IS ALTMETRIC?

Research references in REF impact case studies include journal articles. Where possible, the number of times each of these has been mentioned on mainstream and social media is indicated by a link to http://www.altmetric.com/, which collates and indexes these data and also displays the context for the mention. Altmetric mentions are complementary to, and not necessarily correlated with, conventional citations.

WHAT ARE PROJECT FUNDERS?

  • Research Councils UK
  • Arts and Humanities Research Council
  • Biotechnology and Biological Sciences Research Council
  • Economic and Social Research Council
  • Engineering and Physical Sciences Research Council
  • Medical Research Council
  • Natural Environment Research Council
  • Particle Physics and Astronomy Research Council
  • Council for the Central Laboratory of the Research Councils
  • Science and Technology Facilities Council
  • Royal Society
  • Royal Academy of Engineering
  • British Academy
  • UK Space Agency
  • Innovate UK

The Wellcome Trust co-funded the development of this database. They are also listed as a searchable research funder.

WHY ARE SOME CATEGORIES ABSENT FROM RESULTS?

CAN I DOWNLOAD CASE STUDIES?

Sets of case studies can be downloaded in the following formats:

  • Excel spreadsheet
  • HTML document
  • Zipped PDF files

In the case of Excel and HTML downloads, there is no limit to the number of case studies that may be downloaded in a single session.

In the case of PDF files, it is recommended that you restrict your download to the set of case studies that would be returned from a single index selection or specific text search. There is an absolute maximum limit of 300 case studies in any one download, and such a download may take an appreciable time.

WHAT CAN I DO WITH THE DATABASE?

The full Terms of Use for this database are available.

This summary is not designed to replace the Terms of Use.

Most HEIs represented in the Database have agreed to license their case studies under a CC BY 4.0 licence. The following use is permitted under these licence conditions: http://creativecommons.org/licenses/by/4.0/legalcode

A more user friendly version is available here but does not replace the full legal code: http://creativecommons.org/licenses/by/4.0/

31 HEIs were not in a position to license their case studies under CC BY 4.0; these are listed under Terms of Use. For such case studies, users are able to search the Database and undertake their own analysis, but must comply with all relevant laws, including fair dealing provisions. In order to comply with fair dealing provisions you should only copy as much text as is needed to make the point, and you may not use the material for commercial purposes (this includes income generation without profit); you must attribute the source. Incidental copying for text or data mining is permitted for non-commercial research, as is text and data analysis by researchers for the purpose of carrying out computational analysis of the work. You are able to do this without having to obtain additional permission from the rights holder(s).

For case studies from HEIs that have not agreed that they can be used under a CC BY 4.0 licence, any user who wishes to copy or re-use material from the Database for another purpose will need to seek permission from the relevant rights holder (the HEI who submitted the case study, in the first instance). For the avoidance of doubt, you will need to seek their permission should you wish to make multiple copies, for example to put the work on a shared drive, computer network, intranet or website; to send the material by email to multiple recipients; or to put it on a discussion list.

IF AN HEI WOULD LIKE TO FURTHER REDACT OR REMOVE A CASE STUDY FROM THIS DATABASE, WILL HEFCE ACCEPT REPRESENTATIONS TO DO THIS?

It will be possible for impact case studies to be completely removed from the Database but we are unable to redact case studies and then replace the original with the redacted version.

Please note that all case studies are also available as part of the REF submissions data at: http://results.ref.ac.uk/ . Requests for case studies to be further redacted or removed will be considered for the main REF website, although we cannot guarantee they will be accepted. To make such a request please email: [email protected]


  • Open access
  • Published: 18 October 2024

A study of implementation factors for a novel approach to clinical trials: constructs for consideration in the coordination of direct-to-patient online-based medical research

  • Peter F. Cronholm 1,2,3,
  • Janelle Applequist 4,
  • Jeffrey Krischer 5,
  • Ebony Fontenot 1,
  • Trocon Davis 1,
  • Cristina Burroughs 5,
  • Carol A. McAlear 6,
  • Renée Borchin 5,
  • Joyce Kullman 7,
  • Simon Carette 8,
  • Nader Khalidi 9,
  • Curry Koening 10,
  • Carol A. Langford 11,
  • Paul Monach 12,
  • Larry Moreland 13,
  • Christian Pagnoux 8,
  • Ulrich Specks 14,
  • Antoine G. Sreih 6,
  • Steven R. Ytterberg 14,
  • Peter A. Merkel 15 &

Vasculitis Clinical Research Consortium

BMC Medical Research Methodology, volume 24, Article number: 244 (2024)


Background

Traditional medical research infrastructures relying on the Centers of Excellence (CoE) model (an infrastructure or shared facility providing high standards of research excellence and resources to advance scientific knowledge) are often limited in geographic reach regarding patient accessibility, presenting challenges for study recruitment and accrual. Thus, the development of novel, patient-centered (PC) strategies (e.g., the use of online technologies) to support recruitment and streamline study procedures is necessary. This research focused on an implementation evaluation of a design innovation, with implementation outcomes as communicated by study staff and patients for the CoE and PC approaches in a randomized controlled trial (RCT) for patients with vasculitis.

Methods

In-depth qualitative interviews were conducted with 32 individuals (17 study team members, 15 patients). Transcripts were coded using the Consolidated Framework for Implementation Research (CFIR).

Results

The following CFIR elements emerged: characteristics of the intervention, inner setting, characteristics of individuals, and process. From the staff perspective, communication within the PC approach was a major challenge, which should have been treated as an opportunity to identify one “point person” in charge of all communicative elements among the study team. Study staff from both arms were highly supportive of the PC approach and saw its promise, particularly regarding online consent procedures. Patients reported high self-efficacy in reference to the PC approach and the utilization of online technologies. Local physicians were integral to making patients feel comfortable about participation in research studies.

Conclusions

The complexity of replicating the interpersonal nature of the CoE model in the virtual setting is substantial, meaning the PC approach should be viewed as a hybrid strategy that integrates online and face-to-face practices.

Trial registrations

1) Name: The Assessment of Prednisone In Remission Trial – Centers of Excellence Approach (TAPIR).

Trial registration number: ClinicalTrials.gov NCT01940094.

Date of registration: September 10, 2013.

2) Name: The Assessment of Prednisone In Remission Trial – Patient Centric Approach (TAPIR).

Trial registration number: ClinicalTrials.gov NCT01933724.

Date of registration: September 2, 2013.


Contributions to the literature

Research has documented the variety of challenges that clinical trials have faced regarding recruitment and engagement. Rare diseases face additional obstacles when accounting for smaller populations.

One novel solution is the use of social media recruitment, coupled with a web-based platform where patients with vasculitis could participate in a clinical trial virtually (consent/enroll, report their symptoms/self-report measures, and taper their prednisone dosage) – deemed the patient-centered (PC) approach.

Our comparison of the implementation of the PC approach to the traditional Center of Excellence (CoE) approach has important implications, including different types of studies that may be best suited for virtual design.

Background

Establishing the evidence base for treating rare diseases is a challenging but critical area of focus. A rare disease is defined as a condition with a prevalence of less than one in 2,000 (Europe) or affecting fewer than 200,000 individuals (United States) [1]. The ability to conduct trials and advance treatments for populations with rare diseases is limited by access to patients [2]. Advances in rare disease research have included consortium building, which leverages economies of scale by linking loci of clinical expertise and patient access to support a broader research infrastructure targeting the needs of patients with rare diseases [3].

Parallel to the focal aggregation of clinical resources at academic health centers in responding to the needs of patients with rare diseases is the traditional research infrastructure of Centers of Excellence (CoE) for a given condition. CoEs, each with a concentration of patients with certain rare diseases, can work together to increase the sample size of natural history studies and clinical trials. While the CoE approach to research leverages hubs of clinical treatment and research infrastructure, CoEs are often geographically and economically isolated from the majority of affected patients, limiting access to cutting-edge care and participation in research studies [4]. Developing novel strategies to increase participation in clinical trials presents a challenging opportunity for the field of implementation science, as consistent evaluations of the performance of novel methods must be assessed in order to determine best practices for future integration in clinical trial settings [5, 6, 7].

To improve the recruitment of patients, a novel approach was designed to address patient recruitment and ongoing engagement, capitalizing on social media and web-based platforms to overcome common barriers to participation in traditional clinical trials (e.g., travel distance and small patient recruitment pools). A key innovation of the approach was the ability to recruit patients and collect clinical outcomes without the need for office visits, using direct-to-consumer advertising and marketing principles similar to those utilized by the pharmaceutical industry [8, 9]. Thus, the PC approach featured a process that was conducted primarily online, with a direct-to-patient website created where participants could enroll in the study, access online informed consent, and view a personalized portal that housed all study materials.

To capture the process and product of the study efforts, the project utilized a hybrid effectiveness-implementation framework, with quantitative assessments of recruitment and retention mixed with qualitative assessments of key stakeholder perspectives on the development and implementation of the designed approach. Quantitative results published in a separate study reflect the greater success of enrollment of those confirmed eligible by their physician in the CoE arm (96%) when compared to the PC approach (77%), with no significant difference found regarding subject eligibility and provider acceptance for each approach [10]. While the quantitative portion of such assessment is critical, qualitative feedback on the drivers of implementation process and outcomes is necessary to develop a greater understanding of why the PC approach was not as successful.

As patients become more active participants in their healthcare, it is important that research investigates not only the study team members’ perspectives on implementation, but also the patients’ perspectives, to ensure the needs and preferences of all stakeholders are met [11]. Previous research analyzing the engagement of patients with vasculitis in research confirmed that involvement in research design and development positively impacts both the patients collaborating on the study team and the study investigators [12]. However, there remains a lack of deep understanding of the factors involved in the successful and challenging aspects of novel recruitment methods. The specific aims of this study were to describe qualitative findings from interviews with study team members and patients involved in the clinical trial, to assess the drivers of success and challenges to the implementation of the PC model of patient recruitment and engagement.

Study design

To evaluate the drivers of implementation for this novel approach to supporting recruitment of participants into a randomized controlled trial (RCT), the study team designed an organizational structure as the key implementation component to develop and implement a direct-to-patient recruitment (i.e., patient-centric, PC) arm of the trial, to be compared to the traditional CoE approach. The clinical trial was designed to test the effectiveness of low-dose prednisone (i.e., 5 mg daily) as a maintenance regimen, compared to no prednisone, for patients with granulomatosis with polyangiitis. A notable component of the study involved the strategic inclusion of team members with expertise in vasculitis, clinical trials, and process evaluation/implementation to provide an implementation assessment that included contrasting more traditional approaches. This expertise was further organized to create distinct teams (marketing, recruitment, and retention; protocol implementation; protocol oversight/management; novel consent/regulatory), where individual skillsets were utilized across the study (see Fig. 1). Team structures also incorporated patient advocacy group representatives, clinicians, and technology support staff, who were central to the outreach approach utilized. Incorporation of collaborative teams was a central reflexivity strategy, as individuals in varying roles allowed for assumptions to be challenged and a diversity of perspectives to be considered at multiple points, particularly during creation of the interview guide [12, 13]. Interviews were used to explore stakeholder perspectives regarding the process and product of the PC arm strategies and implementation, in comparison to the more traditional CoE arm of the trial. The consolidated criteria for reporting qualitative research (COREQ) checklist was used throughout all phases of this research to ensure that data were explicitly and comprehensively reported [13].

Fig. 1. Organizational structure and directive framework for the development and implementation of the study. Notes: CoE = Center of Excellence; DMCC = Data Management and Coordinating Center; PI = Principal Investigator; NIH = National Institutes of Health; VCRC = Vasculitis Clinical Research Consortium; SAE = Serious Adverse Events; * = VCRC Lead; § = DMCC Lead; # = Work ended upon finalization of consent process

As of June 30, 2018 (the date the PC arm was closed), a total of 61 patients in the CoE arm and all 10 patients in the PC arm had completed the study, either by meeting a study endpoint or by completing six months on study.

The qualitative sample for the current research comprised semi-structured interviews with 32 total participants. Interviews were conducted purposively across stakeholder groups: 17 study team members (10 in the CoE arm, including site physician leads and research coordinators, and seven in the PC arm, representing the various committees described in Fig. 1) and 15 patients (11 in the CoE arm, two in the PC arm, and two who dropped out of the study). Two patients participated in both study entry and follow-up interviews, which were included in the dataset. For demographic information on patients interviewed, see Table 1.

Data collection

The multi-disciplinary evaluation team, including experts in program evaluation, qualitative methods, and clinical trials research, developed semi-structured interview guides for faculty and staff involved in the study infrastructure. In-depth interviews were used, as they permit the collection of the rich, detailed data needed to make sense of complex processes [14, 15].

Interview guides were developed based on our team’s methodological experience and experience with the recruitment and retention of participants with rare diseases in randomized trials. The guides were piloted, and feedback was sought from collaborating faculty and staff. Each guide was tailored to elicit the study-related experiences of its stakeholder group (i.e., patients, study faculty, and staff). Patient interview guides additionally explored how patients heard about the study, their drivers for participation and perceptions of risk, the influence of others on their decision to participate, reflections on their randomization arm, and their experience with the consenting process. Study team interview questions asked respondents to reflect on their role in study development and implementation, patient recruitment efforts, and the consenting process. See Additional File 1 for the final interview guides. This research focused on identifying implementation outcomes through evaluation of a novel approach to RCTs. Importantly, data were also collected and analyzed from the CoE arm, where participants were asked to react to elements of the PC arm, capturing patient perspectives on those elements regardless of arm assignment. This tactic supported assessing, as communicated by patients and team members, how well the design was carried out in practice.

Interviews were conducted by phone with all participants in 2018. Interviews were audio recorded, labeled with a confidential unique identifier, de-identified and professionally transcribed, and entered into NVivo 10.0 (QSR NVivo), a software package used to support qualitative data coding and analysis.

Framework for implementation analysis

For the initial review of early interview transcripts, the study team used a phronetic iterative approach to develop a set of codes to apply to all interview data. This process moves back and forth between existing theory, predefined questions, and the qualitative findings emerging from the data [16, 17], permitting an analysis of the patterns, themes, and constructs present within a given phenomenon.

To provide a structure for organizing findings, the study team augmented its iterative approach with the Consolidated Framework for Implementation Research (CFIR). The CFIR provides a useful framework for exploring the primary domains driving the development and implementation of innovative approaches to improving health and healthcare systems [18]. The CFIR elements applied here include characteristics of the intervention, the inner setting, characteristics of individuals, and process. These CFIR domains have been widely employed to help researchers and practitioners understand the complex elements that affect the execution and long-term viability of implementation endeavors. As such, the CFIR was used to guide the evaluation of the novel approach.
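To make the shape of such a deductive coding structure concrete, the sketch below shows one plausible way an a priori CFIR codebook could be organized, with each code carrying a definition and a decision rule as the Methods describe. The construct names follow the four domains used in this study; the definitions and decision rules are illustrative paraphrases, not the study’s actual codebook entries.

```python
# A minimal, illustrative structure for an a priori CFIR codebook.
# Construct names follow the domains used in the study; definitions and
# decision rules are invented examples, not the actual codebook.
from dataclasses import dataclass

@dataclass
class Code:
    domain: str         # CFIR domain the construct belongs to
    definition: str     # what the code is meant to capture
    decision_rule: str  # when coders should (or should not) apply it

CODEBOOK = {
    "complexity": Code(
        domain="Characteristics of the intervention",
        definition="Perceived difficulty or intricacy of the PC approach",
        decision_rule="Apply to descriptions of effort or intricacy; do not "
                      "apply to general workload complaints.",
    ),
    "available_resources": Code(
        domain="Inner setting",
        definition="Staff, time, money, and skills available for rollout",
        decision_rule="Apply to comments on resource sufficiency or gaps.",
    ),
    "self_efficacy": Code(
        domain="Characteristics of individuals",
        definition="Confidence in one's ability to complete study tasks",
        decision_rule="Apply to first-person statements of (in)ability.",
    ),
    "reflecting_and_evaluating": Code(
        domain="Process",
        definition="Appraisal of implementation progress and experience",
        decision_rule="Apply when respondents assess how rollout went.",
    ),
}
```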

Two coders (who also served as interviewers) used a constant comparative approach when analyzing all data. Two coding schemes were applied to the transcribed qualitative data: 1) open and axial codes that emerged from close, line-by-line readings of the data; and 2) the a priori set of codes representing the CFIR constructs most relevant to the grounded data. Each code was defined, and decision rules for its appropriate application were developed and included in the codebook. The two coders double-coded 20% of the data, using the inter-rater agreement assessments available in NVivo to identify areas of disagreement and resolve them through group consensus, ensuring coding reliability and accuracy. To enhance rigor, a third coder with expertise in health communication and recruitment methods (who did not serve as an interviewer) independently analyzed 100% of the data to confirm the findings as represented within the CFIR constructs, enhancing investigator triangulation.
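The study relied on NVivo’s built-in inter-rater agreement assessments; as a rough illustration of the arithmetic behind such a check, the sketch below computes percent agreement and Cohen’s kappa on a hypothetical 20% subsample of doubly coded segments. All segment labels and code assignments here are invented placeholders.

```python
# Illustrative reliability check on doubly coded segments. NVivo
# performed this assessment in the study; the underlying computation is
# shown here with invented placeholder data for a 20% subsample.
from collections import Counter
import random

random.seed(0)
codes = ["complexity", "adaptability", "trialability", "relative_advantage"]
segments = [f"seg_{i}" for i in range(100)]
sample = random.sample(segments, k=int(0.2 * len(segments)))  # 20% double-coded

# Hypothetical code assignments by the two coders (coder B agrees ~85%).
coder_a = {s: random.choice(codes) for s in sample}
coder_b = {s: (coder_a[s] if random.random() < 0.85 else random.choice(codes))
           for s in sample}

n = len(sample)
observed = sum(coder_a[s] == coder_b[s] for s in sample) / n

# Chance agreement, estimated from each coder's marginal code frequencies.
freq_a, freq_b = Counter(coder_a.values()), Counter(coder_b.values())
expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in codes)

kappa = (observed - expected) / (1 - expected)
print(f"percent agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")
```

Segments flagged as disagreements would then be discussed to consensus, mirroring the group-consensus step described above.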

The findings below have been categorized according to applicable constructs of the CFIR. An overview of major findings can be seen in Table  2 .

Characteristics of the intervention: complexity, adaptability, trialability, and relative advantage

For recruitment purposes, individual Facebook, Twitter, Google+, and YouTube accounts were created to disseminate advertisements for the study to various populations (targeted by age, location, etc.). The primary element driving the novel method (i.e., the PC approach) for clinical trial innovation was a website designed for patient recruitment, engagement, and consent.

While the use of social media and internet technologies was a central aspect of recruitment for the PC arm, all study staff noted considerable complexity in matching engagement strategies with the social media platforms patients were already using. Web-based recruitment strategies supporting outreach across multiple stakeholder attributes (e.g., age and disease specifics [10]) required presenting complex information about the study’s intention and process. Ensuring that all recruitment materials were “user-friendly” and comprehensible for patients was a focus of the PC arm strategy; to accomplish this, designing a clear, linear patient consent process became a focal point of the PC arm’s recruitment approach. Approximately half of PC arm staff respondents indicated that their enthusiasm for a web-based platform for patient engagement throughout the study was tempered by the recognition that an intervention designed around technology-based outreach would struggle to incorporate patients with technology access issues.

“…I think you need to know your audience…and then you need to know what technology is at hand and what’s available to that audience…I think especially with the fact that we only have ten centers, they’re in densely populated places, but getting the people that live out in the middle of nowhere…I think in one way we are getting them. But on the other hand, there are some patients regardless of what means you use (even if it’s the means they already use), they’re just totally disinterested still.” (PC Staff A)

Participants reported that, when considering how clinical information would be assessed, it is important to ensure that a trial chosen for this method is conducive to web-based technologies. The adaptability of this novel method was thus possible because the study had well-defined self-report measures and procedures for patients that supported virtual engagement.

“…so this had to be data that we could collect directly from patients. And I think that we designed a trial and chose end points that are very likely to be accurate when it’s reported by patients. …so I think that’s the key issue. You can only do a study like this if it’s amenable to accurate data collection directly from patients.” (CoE Staff A)

PC staff often reflected on the tough-to-answer questions their teams had to consider concerning the trialability of this approach (i.e., how easily potential adopters can explore an innovation), such as how patients could best be guided through the novel process. PC arm staff often felt they had “one shot” to present the web-based information to patients, as opposed to the CoE arm, which benefits from the consistent dialogue available during standard recruitment approaches and explanations.

Many PC arm staff also identified patient understanding, and the nuances of the language used to recruit patients, as central to the trialability of this process, often prompting discussions about the importance of further developing patient-centered language for such web-based approaches. Soliciting patient perspectives and feedback was mentioned as an important step toward ensuring appropriate literacy levels. For example, a group of patients from the Vasculitis Foundation, the major patient advocacy group for vasculitis, participated in user acceptance testing, providing feedback on the study description, consent forms, and registration process. This information was used in subsequent design phases to help create a series of FAQs for the study website where patients could seek further clarification.

Representation from patient partners, patient advocacy group representatives, and clinicians within the team was a central consideration in developing an outreach approach that would connect with patients not channeled into recruitment through traditional CoE approaches. The involvement of the Vasculitis Foundation provided a communication venue for patients who use the Foundation as an information source, as well as a linkage for non-academic feedback on implementation approaches and tools.

Patients interviewed from the PC arm reported that the use of web-based tools provided a relative advantage over the traditional CoE model. These patients reported previous engagement with online resources and social media, which appeared to be a common venue for capturing participants’ attention, providing study-related information, and supporting the enrollment process. As one PC arm patient reported:

“I belong to the Facebook, Wegener’s granuloma vasculitis Facebook chat page, for everybody who has it, or somebody they know who has it, and they’ve joined. Then we compare things that happened to us versus what we take and how we’re treated. And then, of course, seeing that link, that’s how I found it, and I decided to try it.” (PC Patient A)

Yet most patients interviewed in the CoE arm described in-person conversations with their physicians or study coordinators as integral to their joining the study. Thus, from the perspective of CoE staff, the relative advantage (i.e., the advantage of implementing the intervention versus an alternative solution) lay in all interactions being face-to-face conversations with patients, as opposed to the PC arm’s online approach. CoE arm patients often cited physicians as trusted sources of information about the study, providing details about the process and eligibility that influenced how comfortable patients felt about enrolling.

“Well, the doctor and his assistant in this case, [NAME]… they were very, very knowledgeable and they kept assuring me. They said, do this only if you want to be involved, but they thoroughly went over the agenda for the study and the purpose.” (CoE Patient A)

Inner setting: available resources, culture and climate, and readiness for implementation

All PC arm staff discussed how developing a novel approach required broadening the study team to include expanded skillsets in information technologies and mass communications, in addition to the expertise expected for standard implementation and evaluation of clinical trials. This often presented a challenge, as the experience available on the team did not always reflect the identified specialty areas, particularly advertising and marketing.

“You have to be able to present something [online marketing] that looks real, that looks…like a real study. I mean, this is different than any other study most physicians probably have seen having a patient bring them study materials in. So how do you make it look real, look valid, but also make it very simple that there’s not a lot of – they don’t have to do a lot of research to understand the study.” (CoE Staff B)

Obtaining feedback from patients as to why they did or did not sign up for the study was cited as an important step to incorporate. One PC staff member raised a related point about resources, commenting that the resources for online recruitment (time and money) felt sufficient, but that proper implementation procedures related to recruitment and retention, reviewed during study team meetings, were crucial for addressing patient concerns and questions. Adequate resources were thus tied to the culture and climate around the availability of feedback and communication among staff members.

“…we launched the study on February 17 th and I had social media, a document with social media messages and for the website and things like that for our newsletter, but I’ve gotten no further direction on that. And I don’t know if there’s supposed to be more…I’m just wondering if there’s a phase 2 of that. And I emailed [NAME] about it a couple of weeks ago but I haven’t heard from her. So I – I just need to follow-up with her and say, you know, is there a phase 2 of the materials. Do we need to change the message, is there round 1, round 2, round 3 of messaging…if people have signed up or have not signed up for one reason or another, if they’ve given us, can we adjust our message to address their problems – their questions.” (PC Staff B)

The culture and climate of the PC arm was described by study team respondents as one with an overall readiness for implementation, with respondents expressing their commitment to the novel approach. Yet these respondents reiterated the importance of all team members communicating effectively with one another, ideally with one individual appointed as responsible for the chain of communication. PC arm staff often reported that the “point person” for all communication needs to be responsible for garnering feedback from staff members, but that deadlines for this feedback need to be firm; waiting on feedback from others was sometimes cited as a source of frustration within PC arm communications.

“…if you had the teams that, or committees that, stuck to the deadlines and then had one person to sign off on everything, I think that would have increased the concept to implementation time by at least six months.” (PC Staff A).

Characteristics of individuals: knowledge and beliefs about the intervention, self-efficacy, and personal attributes

Though committed to developing and testing novel approaches to clinical trials, half of all staff respondents focused on the dominance of the CoE model, reflecting the knowledge and beliefs surrounding the implementation process. However, all PC and CoE staff respondents also recognized great promise in finding an appropriate balance of novel methods used in conjunction with the traditional CoE model.

“…this is something that wouldn’t have been possible, even ten years ago. So it provides an opportunity to do certain types of research in a way that we’ve never done before. But again, there are certain studies you will never be able to do through this pathway. So the types, it’s not going to replace as far as that type of study, that still needs to be done on a [trial] basis, either in practices or in academic centers, where you have people receiving a medication and having a face-to-face consenting and data collection aspect. But it may provide a way to do certain types of studies that are very feasible and hopefully very time-efficient and cost-efficient and allows us to research even better.” (CoE Staff C)

Nearly one third of all study team respondents voiced concern about the perceived self-efficacy of their patients to complete study-related tasks, particularly that the older patient population associated with granulomatosis with polyangiitis would be “left behind” due to gaps in technology use. One study team respondent noted that the technology-based nature of the PC arm would inherently limit participation to those with the access and capacity to navigate the technology.

While study team members often voiced concerns about using technology to recruit study participants, some PC arm patients expressed high self-efficacy and willingness to learn more about a study via social media or the internet, highlighting a discrepancy between perceived and actual experiences. PC arm patients also described the PC approach as one that still felt personalized, which they saw as a benefit of study participation. Positive feelings, however, continued to be commonly discussed by CoE patients in association with their experiences of face-to-face communication with study members.

“I mean the emails were very personable and once I accepted it online, I’m always being contacted through an email and being checked up on and things like that so that’s always reassuring. It’s kind of like you’re not just a number kind of thing.” (PC Patient B)

“Again, my kudos to my specialist and his staff, who have walked me through the process, the administrative side of this process more than the doctor has. They’ve been very honest, very open, they’ve, even to the point where if I had a question that they did not know the answer to, they were honest enough to say, I do not know the answer to that, and then tried to, you know, find an answer for me.” (CoE Patient B)

The study team identified barriers associated with the absence of local (non-CoE) physician recruitment of participants in the PC arm. Traditional CoE roles were somewhat inverted in the PC arm: interested patients would bring study information to their local rheumatologist, who would consider the appropriateness of the study for the enrolling patient and provide testing results to confirm eligibility. The protocol was designed not to require the local treating physician to be involved in the study team or the IRB process. One staff participant noted that “because patients have so many questions that require the physician’s input to answer them, that this can only be done through face to face visits with the experts.” Several study team participants attributed the low participant follow-through (from registration to consent to randomization) to the lack of involvement of patients’ local treating physicians.

Staff members discussed the nature of the outreach process. Once a patient engaged with the study, they needed to approach their treating physician to validate their eligibility. Several staff assumed the local treating physician would not be immediately supportive, and might feel “threatened,” unsupported, or vulnerable to legal ramifications in the event of poor study outcomes. One staff member explained, “We’ve already convinced the patient if they’ve downloaded it. We’ve got to convince the doctor now.” Direct-to-patient recruitment did not reduce the important role a patient’s treating physician played in their study experience.

“And then, again, of course, to make sure I had the approval and the participation with my doctor here, too. That’s because if he didn’t, then I wouldn’t have enrolled, because he’d have to work with me on this.” (PC Patient A)

Local treating physicians were framed as critical in a shared decision-making process with patients. Although this study was designed to alleviate the burden put on the local treating physician, patients consistently referenced how important their local treating physician was in determining their eligibility and approval for the study.

Process: reflecting and evaluating, key stakeholders, engaging, and opinion leaders

A particular innovation of this clinical trial was a process for online consenting: respondents were provided study-related information about the potential risks and benefits of participation via the web-based platform. When reflecting on and evaluating this study experience, study team respondents varied in their level of commitment to the novel approach, with some feeling that face-to-face discussions were central to the standard consenting process and a few seeing little difference between the information provided face-to-face and online.

Some staff reflected on the importance of team members evaluating how they communicate with patients. Some commented on the responsibility of team members to inform participants that, while web-based approaches may be novel, participants would still “be in good hands” with knowledgeable, experienced individuals who know what they are doing. Given the challenges staff described in engaging with patients via web-based technologies, it became all the more important that such interfaces were designed with the patient perspective in mind.

“…and so it was important for us to make sure that all of the essential components to that informed consent were written in a way that was concise and that was thorough so everything was there, but that really could keep the – I guess keep the attention of someone who was just clicking through and finding this and reading through it – that they weren’t gonna have to scroll through ten pages to get to the end and agree to participate because we felt that we would lose patients if that was the case…” (PC Staff C)

Time was discussed as a primary factor affecting the ability to successfully complete tasks associated with the online recruitment and consent phases; for patients, these tasks included communicating their interest in study participation to the study team, working with their treating physician to validate their study eligibility, and participating in the consent process for randomization into the trial. Constant, iterative communication among team members was integral to a smoother rollout of the novel approach, yet most study staff respondents reflected on the process using “all or nothing” language, describing whether things “worked” or “did not work” rather than emphasizing how lessons learned could be applied to future efforts. Many staff nevertheless reiterated that one of the greatest impacts of this approach was prompting staff members to rethink the way they had previously designed studies for the traditional CoE route. A more holistic view of the process permitted greater understanding of the various perspectives integral to the overall research process. This mindset relied on having a broad understanding of the way clinical trials are typically deployed, and stepping back to evaluate how traditional processes can be adjusted to improve overall workflow.

In cases where patients heard about the study from their healthcare providers, it was clear that treating physicians held considerable authority in helping a patient decide whether to enroll. Respondents expressed feeling more interested in participating when their treating physicians took the time to describe how research helps broader patient populations. Thus, patients framed their local treating physicians as informal opinion leaders.

“…but he [my doctor] was the one that came to me when I was at an appointment with him and said, “…let me tell you about this, would you be interested?” …and he said “if you participate in this study, I mean, it will help other people and it will also help me further my understanding of whether or not to keep people on a certain dosage or to taper you off completely.” So I was very happy to do it because I want to help people in the future and I want to help myself in the future, too.” (PC Patient C)

Using a lens of implementation evaluation, this manuscript explored the development and evaluation of a novel method of recruitment and engagement in a clinical trial: one that leveraged technology and web-based platforms to support virtual recruitment, engagement, retention, and assessment, thereby increasing access for patients beyond the traditional CoE-based approach to trials. The primary findings include a rich description of the challenges faced in designing and managing the study team needed to orchestrate such a complicated endeavor, as well as study team and patient-level perspectives on the components driving implementation. While technology continues to enhance the ability to model and study rare diseases, web-based and social media-enhanced recruitment and engagement strategies proved quite challenging for the model developed within the context of the population sampled. Just as with email blasts sent to contact registries, social media approaches appeared to have limited reach in the study: study team members noted the low number of social media followers, little traffic on pages, and few comments on Facebook posts.

Low participation rates in clinical trials are due in part to the time and costs associated with frequent in-person visits to clinical sites [19]. The burden is heightened for rare diseases, where patients may be geographically and economically isolated from the few centers of excellence providing cutting-edge care for their condition [20]. Technology has the potential to provide broad-based information sharing through social media and other web-based technologies, increasing awareness of clinical trial availability far beyond the locus of centers of excellence and supporting not only recruitment and enrollment/consent but also care provision and testing within patients’ local care networks.

As is true for any trial design, while the focus was on developing a novel approach to recruitment, engagement, and data collection, it was critical to develop processes and workflows that were highly feasible and linked to patient preferences. There are many variables to consider when distilling the conclusions of this study. In assessing the described implementation methodology, it is difficult to separate the capacities and challenges within the inner setting driving study team function from the nuances of patient engagement with the studied clinical trial innovations. Reflecting on the implementation broadly, the data illustrate key components and considerations related to study team factors not often explored in traditional implementation studies, laid out across the CFIR domains of characteristics of the intervention, inner setting, characteristics of individuals, and process.

While this manuscript considered the implementation of a novel approach to clinical trial recruitment for a specific rare disease, its findings are useful across a range of conditions and disease areas where increasing recruitment and accrual rates is of interest. It is clear that more research, and expanded implementation timelines compared with a traditional CoE approach, are necessary for such novel methods to be effective. An important consideration for future research will be incorporating a more holistic approach to reaching patients in specialized populations. Web-based approaches alone may not be sufficient, but combined with other channels such as phone-based texting and app platforms, they could prove quite useful for reaching a broader pool of potential participants. Thus, rather than envisioning this novel method as a replacement for the traditional CoE model, future research should explore hybrid approaches whereby patients may be recruited, screened, and consented online, but are then provided with face-to-face treatment and interaction at physical clinical sites. As one participant explained:

“I mean, I don’t know if I could choose one arm over another. I mean, I definitely think there is value to both. And I think there’s a lot of potential in the patient-centric, the internet arm of the study, just because we are becoming a very computer-dependent society and more people are doing things that way. So, you know, there’s no reason why we shouldn’t have a presence there; otherwise, I mean, we’re kind of gonna be – I don’t know. I think we need to modernize as much as anything else, just because that’s the way the world is starting to work now. But due to the nature of the fact that we are talking about clinical care, medical care, people’s health. I mean, there are also – it can’t be completely computerized. There has to be a human element present all the time, especially since, I mean, we’re talking about someone’s health. So, I mean, I think there’s room – I don’t know that there necessarily means both approaches will keep existing separately, or whether they’ll kind of combine somehow. But, I mean, yeah, it’ll be interesting to see. I think they both have their value, and maybe they might kind of morph into one hybrid or completely new thing. I don’t know that, at this point, I could say that one is better over the other.” (PC Staff D).

Our goal was to fill an important gap in the literature (to date, we have not been able to find a comparison study of this nature), and existing research on individual PC and CoE approaches to RCTs supports our findings. One systematic review of 30 PC approaches to RCTs emphasized the importance of health professionals improving their qualitative communication skills in order to facilitate the decision-making process for patients and team members, consistent with our findings that patients often sought feedback from their physician or support staff [21]. While this study was designed not to rely on the treating physician, it is important that all study staff be trained in communicative skills and all study elements so as to be best equipped to engage patients. In the context of web-based approaches to PC innovations for RCTs, one study that used an online platform for patients with chronic myeloid leukemia to test patient-centered innovation capabilities found patient medication adherence and physician guideline adherence to be challenging [22]. While that web-based platform resulted in increased patient empowerment and guideline adherence, the study showed how such innovations serve as a value-added feature of more traditional models of patient care, consistent with our findings [22].

Limitations

In interpreting the results of this research, some limitations should be considered, and the qualitative data presented should be used for hypothesis generation. First, not every study team member was interviewed, and the low number of patients in the PC arm participating in interviews may limit the generalizability of our findings; additional feedback might further advance understanding of the drivers of implementation for this study design. Given the small sample sizes, issues of generalizability and transferability should be considered, including possible over-representation of the perspectives of patients who chose to participate in the study. It should be noted, however, that saturation was reached for all constructs. Finally, member checking was not performed for this research, which could have increased validity by allowing participants the opportunity to confirm findings.

Future efforts should explore how existing online patient networks may be utilized to reach more patients via social media. For example, one patient commented that their interest in this study was piqued because it was shared within a vasculitis support group on Facebook. Source credibility for web-based messages should therefore be a consideration when developing novel approaches to clinical trial recruitment via the internet. This argument extends beyond social media: mutually beneficial relationships with relevant patient advocacy groups should be established and maintained to leverage the various communication channels available for disseminating study calls.

Existing research has provided suggestions for ways to enhance and refine novel approaches to trial activities, for patients and providers alike. Understandably, patients are a stakeholder group that needs to be educated on at-home RCT tasks, and communication with study staff and providers is key to the success of such endeavors. Research has recommended communicative training for providers and study staff to strengthen qualitative skills in PC approaches, with previous results highlighting increased shared decision-making among patients and team members alike [21, 23, 24]. This is consistent with our results, which show a need for clearer, more frequent communication among all stakeholders.

It is important to note the rise of decentralized clinical trials (DCTs). While this research utilized the RCT model in order to compare PC and CoE approaches to implementation, the DCT relies on a patient-centric approach with an emphasis on new technologies, advanced analytics, and innovative procedures to maintain communication with study participants in their home environments [24, 25]. The Drug Information Association Innovative Design Working Group has provided a comprehensive list of considerations for the planning and conduct of DCTs (and PC approaches) [24]. The Group’s recommendation to operationalize novel roles for study staff as technological needs are added and adapted is particularly relevant to our findings. Additional guidelines include incorporating plans for patient monitoring and safety and using statistical analysis plans that meet the needs of a study [24].

Also consistent with our study’s structure, previous research has recommended the creation and involvement of a variety of stakeholder groups to develop communication and logistics in PC approaches (see Fig. 1) [24]. Additionally, it is important to acknowledge that technological advances come with challenges for patients; one of our participants raised patient access to, and familiarity with, technology as areas of concern. Study budgets should reflect the need to provide equipment and resources that encourage participation across a variety of scenarios. If remote participation cannot include access to reasonably reliable web-based platforms and internet, the option should not be utilized. Regarding the navigability of web-based content, studies should be built with the patient experience in mind: clear audio-visual instructions for online content, 24/7 support systems for technical difficulties, and chatbots pre-loaded with answers to frequently asked questions about the study are all ways to increase patient engagement (a minimal sketch of the last idea follows below) [26]. As web-based platforms allow for increased patient engagement, more burden is transferred to participants as they carry out research procedures and input data. The informed consent process is therefore critical, highlighting another opportunity for communication between patients and providers to be clear and consistent.
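As a concrete illustration of the chatbot suggestion, a pre-loaded FAQ responder can be as simple as keyword matching against a curated question bank. The sketch below is a hypothetical minimal version: the questions, answers, and keywords are invented examples, and a real deployment would use study-team-approved content and route unmatched queries to a coordinator.

```python
# Hypothetical minimal FAQ responder pre-loaded with study questions.
# All entries are invented examples; a production system would use
# content approved by the study team and escalate unmatched queries.
FAQS = {
    ("eligib", "qualify"): "Eligibility is confirmed with your treating "
                           "physician after you register on the study site.",
    ("consent",): "Consent is completed online; you can review each section "
                  "and ask questions before agreeing to participate.",
    ("visit", "travel"): "Most assessments are completed remotely; check the "
                         "study FAQ page for any required local testing.",
}

def answer(question: str) -> str:
    """Return the first FAQ answer whose keywords appear in the question."""
    q = question.lower()
    for keywords, reply in FAQS.items():
        if any(k in q for k in keywords):
            return reply
    return ("I don't have an answer for that yet; a study coordinator "
            "will follow up with you.")

if __name__ == "__main__":
    print(answer("How do I know if I qualify?"))
    print(answer("Do I have to travel for study visits?"))
```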

This research demonstrated both the ability to recruit participants for a clinical trial using direct-to-patient techniques via social media and web-based approaches, and the challenges to these strategies. Future research needs to explore further the ways that virtual platforms can increase access to clinical trials for patients experiencing rare conditions. As technologies continue to expand, it is important that research practices evolve to address recruitment challenges. Beyond social media, augmented and virtual realities in particular present future research directions that should be explored for rare disease research recruitment, considering each technology’s ability to facilitate screening, eligibility, record management, and remote data collection [27]. The realities of rare conditions drive nuances specific to their study, including the role of local providers in patient access to, and engagement in, clinical trials. The same realities that result in barriers to care (e.g., geographic isolation from CoEs) demand increased attention to the opportunities that social media and virtual platforms provide in improving access to information and further tethering remote communities to the resources available within CoEs.

Data availability

The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.

Rare Disease Day. What is a rare disease? https://www.rarediseaseday.org/article/what-is-a-rare-disease. 2018. Accessed 21 Feb 2022.

Garrino L, Picco E, Finiguerra I, Rossi D, Simone P, Roccatello R. Living with and treating rare diseases: experiences of patients and professional health care providers. Qual Health Res. 2015;25:6. https://doi.org/10.1177/1049732315570116.

Austin CP, Cutillo CM, Lau LPL, Jonker AH, Rath A, Julkowska D, et al. Future of rare diseases research 2017–2027: an IRDiRC perspective. Clin Transl Sci. 2018;11:1. https://doi.org/10.1111/cts.12500.

Gray S. Reducing barriers to participation in clinical trials for rare diseases. ACRP. 2021;35:6. https://acrpnet.org/2021/08/16/reducing-barriers-to-participation-in-clinical-trials-for-rare-diseases/. Accessed 10 May 2022.

Paul J, Seib R, Prescott T. The internet and clinical trials: background, online resources, examples and issues. J Med Internet Res. 2005. https://doi.org/10.2196/jmir.7.1.e5.

Santoro E, Nicolis E, Franzosi MG, Tognoni G. Internet for clinical trials: past, present, and future. Control Clin Trials. 1999;20:2. https://doi.org/10.1016/S0197-2456(98)00060-9.

Applequist J, Burroughs C, Ramirez A, Merkel P, Rothenberg ME, Trapnell B, et al. A novel approach to conducting clinical trials in the community setting: utilizing patient-driven platforms and social media to drive web-based patient recruitment. BMC Med Res Methodol. 2020;20:58. https://doi.org/10.1186/s12874-020-00926-y.

Applequist J. Broadcast pharmaceutical advertising in the United States: primetime pill pushers. Lanham: Lexington; 2016.

Applequist J, Ball JG. An updated analysis of direct-to-consumer television advertisements for prescription drugs. Ann Fam Med. 2018;16(3). https://doi.org/10.1370/afm.2220.

Krischer J, Cronholm PF, Burroughs C, McAlear CA, Borchin R, Easley E, et al. Experience with direct-to-patient recruitment for enrollment into a clinical trial in a rare disease: a web-based study. J Med Internet Res. 2017. https://doi.org/10.2196/jmir.6798.

Hareendran A, Gnanasakthy A, Winnette R, Revicki D. Capturing patients’ perspectives of treatment in clinical trials/drug development. Contemp Clin Trials. 2012;33. https://doi.org/10.1016/j.cct.2011.09.015.

Olmos-Vega FM, Stalmeijer RE, Varpio L, Kahlke R. A practical guide to reflexivity in qualitative research: AMEE Guide No. 149. Med Teach. 2023;45(3). https://doi.org/10.1080/0142159X.2022.2057287.

Linabary JR, Corple DJ, Cooky C. Of wine and whiteboards: enacting feminist reflexivity in collaborative research. Qual Res. 2021;21(5). https://doi.org/10.1177/1468794120946988.

Young K, Kaminstein D, Olivos A, Burroughs C, Castillo-Lee C, Kullman J, et al. Patient involvement in medical research: what patients and physicians learn from each other. Orphanet J Rare Dis. 2019;14(21). https://doi.org/10.1186/s13023-018-0969-1.

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19:6. https://doi.org/10.1093/intqhc/mzm042.

Creswell JW. Qualitative inquiry and research design: choosing among five traditions. Thousand Oaks: Sage; 1998.

Lindlof TR, Taylor BC. Qualitative communication research methods. 2nd ed. Thousand Oaks: Sage; 2002.

Tracy SJ. A phronetic iterative approach to data analysis in qualitative research. J Qual Res. 2018;19:2. https://doi.org/10.22284/qr.2018.19.2.61.

Tracy SJ. Qualitative research methods: collecting evidence, crafting analysis, communicating impact. Hoboken: Wiley Blackwell; 2020.

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50. https://doi.org/10.1186/1748-5908-4-50.

Richesson RL, Lee HS, Cuthbertson D, Young K, Krischer JP. An automated communication system in a contact registry for persons with rare diseases: scalable tools for identifying and recruiting clinical research participants. Contemp Clin Trials. 2009;30:1. https://doi.org/10.1016/j.cct.2008.09.002.

Rees CA, Pica N, Monuteaux MC, Bourgeois FT. Noncompletion and nonpublication of trials studying rare diseases: a cross-sectional analysis. PLoS Med. 2019;16(11). https://doi.org/10.1371/journal.pmed.1002966.

McMillan SS, Kendall E, Sav A, King MA, Whitty JA, Kelly F, et al. Patient-centered approaches to health care: a systematic review of randomized controlled trials. Med Care Res Rev. 2013;70(6). https://doi.org/10.1177/1077558713496318.

Ector GICG, Westerweel PE, Hermens RPMG, Braspenning KAE, Heeren BCM, Vinck OMP, et al. The development of a web-based, patient-centered intervention for patients with chronic myeloid leukemia (CMyLife): design thinking development approach. J Med Internet Res. 2020;22(5):e15895. https://doi.org/10.2196/15895.

Morgan S, Yoder LH. A concept analysis of person-centered care. J Holist Nurs. 2012;30(1). https://doi.org/10.1177/0898010111412189.

Ghadessi M, Di J, Wang C, Toyoizumi K, Shao N, Mei C, et al. Decentralized clinical trials and rare diseases: a Drug Information Association Innovative Design Scientific Working Group (DIA-IDSWG) perspective. Orphanet J Rare Dis. 2023;18(79). https://doi.org/10.1186/s13023-023-02693-7.

Van Norman GA. Decentralized clinical trials: the future of medical product development? JACC Basic Transl Sci. 2021;6(4). https://doi.org/10.1016/j.jacbts.2021.01.011.


Acknowledgements

The authors gratefully acknowledge the contribution to this study of the many patients who helped review aspects of the trial communications and processes and the patients who participated in the interviews associated with this project. A full list of Vasculitis Clinical Research Consortium members, whom we would like to acknowledge, has been provided.

Renée Borchin 5, Cristina Burroughs 5, Jeffrey Krischer, PhD 5, Julie Martin, RN, MEd 5, Leah Madden 6, Carol McAlear, MA 6, Brian Rice 6, Antoine Sreih, MD 6, Jacquelyn Thomas, MPH 6, Joyce Kullman 7, Simon Carette, MD 8, Julia Farquharson 8, Samyukta Jagadeesh 8, Christian Pagnoux, MD, MSc, MPH 8, Nader Khalidi, MD 9, Sandra Messier 9, Diana Robins 9, Martha Finco, MA 10, Jennifer Godina 10, Jessica Gonzalez 10, Curry Koening, MS, MD 10, Julieanne Nielsen 10, Sonya Crook, RN 11, Kathleen Gartner 11, Elizabeth Kisela 11, Carol Langford, MHS, MD 11, Lori Strozniak 11, Paul Monach, MD, PhD 12, Laurie Hope, RN 13, Dawn McBride, RN 13, Larry Moreland, MD 13, Cynthia Beinhorn 14, Yeoniee Kim 14, Kathleen Mieras 14, Ulrich Specks, MD 14, Steven Ytterberg, MD 14, Peter Merkel, MPH, MD 15

This project was supported by the National Heart, Lung, and Blood Institute (R01HL115041). The Vasculitis Clinical Research Consortium (VCRC) has received support from the National Institute of Arthritis and Musculoskeletal and Skin Diseases (U54 AR057319), the National Center for Research Resources (U54 RR019497), the Office of Rare Diseases Research (ORDR), and the National Center for Advancing Translational Science (NCATS). The VCRC was part of the Rare Diseases Clinical Research Network (RDCRN), an initiative of ORDR and NCATS. The RDCRN Data Coordinating Center (U01TR001263) was also supported by ORDR, NCATS. The funding bodies played no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript.

Author information

Authors and affiliations

Department of Family Medicine and Community Health, University of Pennsylvania, 51 North 39th Street, 6th Floor Mutch Building, Philadelphia, PA, 19104, USA

Peter F. Cronholm, Ebony Fontenot & Trocon Davis

Center for Public Health, University of Pennsylvania, Philadelphia, PA, USA

Peter F. Cronholm

Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA, USA

Zimmerman School of Advertising & Mass Communications, University of South Florida, Tampa, FL, USA

Janelle Applequist

Health Informatics Institute, University of South Florida, Tampa, FL, USA

Jeffrey Krischer, Cristina Burroughs & Renée Borchin

Division of Rheumatology, University of Pennsylvania, Philadelphia, PA, USA

Carol A. McAlear & Antoine G. Sreih

Vasculitis Foundation, Kansas City, MO, USA

Joyce Kullman

Mount Sinai Hospital, Toronto, ON, Canada

Simon Carette & Christian Pagnoux

St. Joseph’s Healthcare Hamilton, Hamilton, ON, Canada

Nader Khalidi

University of Utah, Salt Lake City, UT, USA

Curry Koening

Cleveland Clinic Foundation, Cleveland, OH, USA

Carol A. Langford

Boston University School of Medicine, Boston, MA, USA

Paul Monach

University of Pittsburgh, Pittsburgh, PA, USA

Larry Moreland

Mayo Clinic, Rochester, MN, USA

Ulrich Specks & Steven R. Ytterberg

Division of Rheumatology, Department of Medicine, and Division of Epidemiology, Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania, Philadelphia, PA, USA

Peter A. Merkel


Contributions

Peter A Merkel, Jeffrey Krischer, and Peter Cronholm conceptualized the study. Peter Cronholm and Janelle Applequist wrote the main manuscript text and prepared figures and supplementary tables. Jeffrey Krischer, Ebony Fontenot, Trocon Davis, Cristina Burroughs, Carol A McAlear, Renée Borchin, Joyce Kullman, Simon Carette, Nader Khalidi, Curry Koening, Carol A Langford, Paul Monach, Larry Moreland, Christian Pagnoux, Ulrich Specks, Antoine G Sreih, Steven R. Ytterberg and Peter A Merkel reviewed the manuscript and provided substantive edits.

Corresponding author

Correspondence to Peter F. Cronholm.

Ethics declarations

Ethics approval and consent to participate

In accordance with the Declaration of Helsinki, the study was reviewed and approved by the Institutional Review Boards (IRBs) at the University of South Florida (protocol number: Pro0001324) and the University of Pennsylvania (protocol number: 818769). Informed consent was obtained from staff members and patients prior to starting interviews at the time of data collection.

Consent for publication

In cases where interview quotes have been reported in this study, pseudonyms have been used to protect participant identity. Thus, consent for publication of data is not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information


Additional file 1. Interview Guides. Interview guides for: patient participants (PC and CoE arms); NMCT research staff, CoE PIs, study coordinators; physicians (PC and CoE arm); opt-out interviews (CoE and PC arms).

Additional file 2.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Cite this article

Cronholm, P.F., Applequist, J., Krischer, J. et al. A study of implementation factors for a novel approach to clinical trials: constructs for consideration in the coordination of direct-to-patient online-based medical research. BMC Med Res Methodol 24, 244 (2024). https://doi.org/10.1186/s12874-024-02352-w


Received: 25 March 2023

Accepted: 25 September 2024

Published: 18 October 2024

DOI: https://doi.org/10.1186/s12874-024-02352-w

Keywords

  • Clinical trial
  • Research subject recruitment
  • Social media
  • Direct-to-consumer
  • Advertising
  • Granulomatosis with polyangiitis




Department of Health & Social Care

Review into the operational effectiveness of the Care Quality Commission: full report

Updated 17 October 2024

Applies to England


© Crown copyright 2024

This publication is licensed under the terms of the Open Government Licence v3.0 except where otherwise stated. To view this licence, visit nationalarchives.gov.uk/doc/open-government-licence/version/3 or write to the Information Policy Team, The National Archives, Kew, London TW9 4DU, or email: [email protected] .

Where we have identified any third party copyright information you will need to obtain permission from the copyright holders concerned.

This publication is available at https://www.gov.uk/government/publications/review-into-the-operational-effectiveness-of-the-care-quality-commission-full-report/review-into-the-operational-effectiveness-of-the-care-quality-commission-full-report

I was asked to carry out a review of the Care Quality Commission (CQC) in May 2024. Over the last 4 months, I have spoken to around 200 senior managers, caregivers and clinicians working in the health and care sector, as well as with a wide range of patient and user [footnote 1] groups, and over 100 people within CQC - the majority of the executive team, all non-executive directors, senior managers, operational staff, the staff forum, unions and national professional advisers. I have received over 125 emails from members of staff within CQC.

Virtually all have shared with me considerable concerns about the functioning of the organisation, with a high degree of consistency in the comments made. At the same time, they recognise the need for a strong, credible and effective regulator of health and social care services.

I particularly want to mention, and thank, the large number of people within CQC, at all levels of the organisation, who have been open in sharing their concerns. The last couple of years have not been an easy time for them, seeing changes imposed that they recognised would be problematic. They have been, and continue to be, professional in their approach to work and committed to ensuring high-quality health and care services.

An interim report of my work, providing a high-level summary of my emerging findings, was published in July 2024 to inform thinking around changes needed to start the process of improving CQC . This full report reflects the recent conversations I have had with user groups and a larger number of staff than had been possible previously, and provides greater depth to my findings and recommendations.

There is an urgent need for a rapid turnaround of CQC - a process that has already started with the appointment of an interim chief executive in June 2024 and the announcement of further changes following the publication of my interim report. I am pleased to see the openness and honesty with which the organisation has begun to address the changes required.

The health and care sector accounts for around 12% of the economy [footnote 2] and 21% of public expenditure [footnote 3], and is one of the most significant drivers of health, public satisfaction and economic growth [footnote 4]. It needs - and deserves - a high-performing regulator.

Dr Penelope Dash, Independent Lead Reviewer

Executive summary

CQC is the independent regulator of healthcare and adult social care in England. A review of CQC started in May 2024. The review has heard from over 300 people from across the health and care sectors (providers, user and patient groups, and national leaders) and within CQC, and has analysed CQC’s performance data.

In 2021, CQC launched a major change programme to make the assessment process for health and social care providers simpler and more insight driven, drawing on a wide range of data about quality of care, with the ability to prioritise assessments and inspections. The change programme included new IT systems, changes to how operational (inspection) teams are structured and a new regulatory approach - the single assessment framework (SAF).

The review’s findings

The review has found significant failings in the internal workings of CQC, which have led to a substantial loss of credibility within the health and social care sectors, a deterioration in the ability of CQC to identify poor performance and support a drive to improve quality - and a direct impact on the capacity and capability of both the social care and the healthcare sectors to deliver much-needed improvements in care.

The conclusions of the review are summarised around 10 topics.

Conclusion 1: poor operational performance 

There has been a stark reduction in activity, with just 6,700 inspections and assessments carried out in 2023 compared with almost 15,800 in 2019. This has resulted in:

  • a backlog in new registrations of health and care providers
  • delays in re-inspecting after a ‘requires improvement’ or ‘inadequate’ rating
  • increasing age of ratings

The review has concluded that poor operational performance is impacting  CQC ’s ability to ensure that health and social care services provide people with safe, effective and compassionate care, negatively impacting the opportunity to improve health and social care services, and, in some cases, for providers to deliver services at all. 

Conclusion 2: significant challenges with the provider portal and regulatory platform

New IT systems were introduced at  CQC  from 2021 onwards. However, the deployment of new systems resulted in significant problems for users and staff.

The review has concluded that poorly performing IT systems are hampering  CQC ’s ability to roll out the  SAF , and cause considerable frustration and time loss for providers and  CQC  staff.

Conclusion 3: delays in producing reports and poor-quality reports

All sectors told the review that they can wait for several months to receive reports and ratings following assessments. The review has heard multiple comments about poor-quality reports - these have come from providers and from members of the public. 

Poor-quality and delayed reports hamper users’ ability to access information, and limit the credibility and impact of assessments for providers.

Conclusion 4: loss of credibility within the health and care sectors due to the loss of sector expertise and wider restructuring, resulting in lost opportunities for improvement

CQC  underwent an internal restructuring in 2023, alongside the introduction of the  SAF  and new IT systems. The restructuring moved operational staff from 3 directorates with a focus on specific sectors into integrated teams operating at a local level, resulting in a loss of expertise.

The review has found that the current model of generalist inspectors and a lack of expertise at senior levels of  CQC , combined with a loss of relationships across  CQC  and providers, is impacting the credibility of  CQC , resulting in a lost opportunity to improve health and social care services.

Conclusion 5: concerns around the single assessment framework ( SAF ) and its application

The  SAF has set out 34 areas of care quality (called ‘quality statements’) that could be applied to any provider of health or social care with a subset applied to assessments of integrated care systems ( ICSs ) and local authorities. These align to the 5 domains of quality used for many years and referred to as ‘key questions’ within the  SAF . For each of the 34 quality statements, there are 6 ‘evidence categories’. These are: people experience, staff experience, partner experience, observations, processes and outcomes.

The review has identified 7 concerns with the  SAF as follows:

  • the way in which the  SAF  is described is poorly laid out on the  CQC  website, not well communicated internally or externally, and uses vague language
  • there is limited information available for providers and users or patients as to what care looks like under each of the ratings categories, resulting in a lack of consistency in how care is assessed and a lost opportunity for improvement
  • there are questions about how data on user and patient experience is collected and used
  • more could be done to support and encourage innovation in care delivery
  • there is insufficient attention paid to the effectiveness of care and a lack of focus on outcomes (including inequalities in outcomes)
  • there is no reference to use of resources or the efficient and economic delivery of care, which is a significant gap
  • there is little reference to, or acknowledgement of, the challenges in balancing risk and ensuring high-quality care across an organisation or wider health and care system

Conclusion 6: lack of clarity regarding how ratings are calculated and concerning use of the outcome of previous inspections (that often took place several years ago) to calculate a current rating

The review has learnt that overall ratings for a provider may be calculated by aggregating the outcomes from inspections over several years. This cannot be credible or right. Furthermore, providers do not understand how ratings are calculated and, as a result, believe it is a complicated algorithm, or a “magic box”.

Ratings matter - they are used by users and their friends and family, they are used by commissioning bodies (the NHS, private health insurers and local authorities), and they drive effective use of capacity in the sector. They are a significant factor in staff recruitment and retention.
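
To make this concern concrete, the sketch below shows how aggregating per-question outcomes gathered years apart can yield a headline rating that looks current. Everything here - the aggregation rule, the ratings scale indices and the example data - is an illustrative assumption for exposition, not CQC's actual method:

```python
from datetime import date

# Illustrative only: the review reports that overall ratings may aggregate
# inspection outcomes from several years apart. The rule and data below are
# hypothetical, not CQC's actual method.
RATING_SCALE = ["inadequate", "requires improvement", "good", "outstanding"]

inspections = [  # (key question, rating, date the evidence was gathered)
    ("safe", "requires improvement", date(2016, 3, 1)),
    ("effective", "good", date(2019, 6, 1)),
    ("caring", "good", date(2023, 11, 1)),
]

def overall_rating(results, today=date(2024, 7, 30)):
    """Aggregate the latest rating per key question into one overall rating."""
    latest = {}
    for question, rating, when in results:
        if question not in latest or when > latest[question][1]:
            latest[question] = (rating, when)
    scores = [RATING_SCALE.index(r) for r, _ in latest.values()]
    overall = RATING_SCALE[round(sum(scores) / len(scores))]
    # Flag key questions whose evidence is over 2 years old (arbitrary cut-off).
    stale = [q for q, (_, when) in latest.items() if (today - when).days > 730]
    return overall, stale

print(overall_rating(inspections))
# ('good', ['safe', 'effective']) - a "current" rating resting on old evidence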

Conclusion 7: there are opportunities to improve  CQC ’s assessment of local authority Care Act duties

The  Health and Care Act 2022  gave powers to  CQC  to assess local authorities’ delivery of their adult social care duties after several reports and publications identified a gap in accountability and oversight of adult social care. The review found broad support for the overall assessment framework but also heard feedback that the assessment process and reporting could be improved. 

Conclusion 8:  ICS assessments are in early stages of development with a number of concerns shared

The Health and Care Act 2022 introduced a new duty for CQC  to review and assess  ICSs . Statute sets out 3 priority areas for CQC to look at: leadership, integration and quality of care; and the Secretary of State can set priorities on other themes. CQC  developed a methodology for these assessments, which was tested in pilots in Dorset and Birmingham and Solihull, but wider rollout has been paused as a result of a number of concerns shared with the review.

Conclusion 9:  CQC  could do more to support improvements in quality across the health and care sector

The review heard a consistent comment that  CQC  should not be an improvement body per se, but, at the same time, could do more to support the health and care sectors to improve. It could do this, for example, through the description of best practice and greater sharing of new models of care delivery, leading international examples of high-quality care and more innovative approaches - particularly the use of technology. 

Governance structures within organisations are crucial to improvement. A greater focus on how organisations are approaching and delivering improvement, rather than looking at input metrics, could enable more significant improvements in quality of care.

Conclusion 10: there are opportunities to improve the sponsorship relationship between  CQC  and the Department of Health and Social Care ( DHSC )

DHSC ’s sponsorship of  CQC  should promote and maintain an effective working relationship between the department and  CQC , which should, in turn, facilitate high-quality, accountable, efficient and effective services to the public.

The review has found that  DHSC  could do more to ensure that  CQC  is sponsored effectively, in line with the government’s  Arm’s length body sponsorship code of good practice .

The review’s recommendations

The health and care sector is one of the most significant drivers of health, public satisfaction and economic growth. It needs - and deserves - a high-performing regulator.

In order to restore confidence and credibility and support improvements in health and social care, there is a need to:

  • rapidly improve operational performance, fix the provider portal and regulatory platform, and improve the quality of reports
  • rebuild expertise and relationships with providers
  • review the  SAF  to make it fit for purpose with clear descriptors and a far greater focus on effectiveness, outcomes and use of resources
  • clarify how ratings are calculated and make the results more transparent
  • continue to evolve and improve local authority assessments
  • formally pause  ICS  assessments
  • strengthen sponsorship arrangements

A second review considering the wider landscape for quality of care , with an initial focus on safety, will be published in early 2025.

Background and context

Introduction.

CQC  is the independent regulator of healthcare and adult social care in England. It was established in 2009 under the  Health and Social Care Act 2008  and brought together the Commission for Social Care Inspection ( CSCI ), the Mental Health Act Commission and the Healthcare Commission. It is an executive non-departmental public body sponsored by DHSC .

CQC  monitors, inspects and regulates services to make sure they:

  • meet fundamental standards of safety
  • provide effective care to maximise outcomes
  • are caring and responsive to all users
  • are well led with robust governance structures [footnote 5] and processes in place

It takes action to protect those who use services. It conducts performance assessments and rates providers of services (with some exceptions). Assessments and ratings are publicly available.

While this review is focused on the SAF , it should be noted that CQC has a range of other statutory responsibilities - for example, its role in market oversight, monitoring the Mental Health Act 1983 and publishing the annual State of Care report.

In its Principles of effective regulation , the National Audit Office ( NAO ) states that regulation plays a crucial role in many aspects of public policy, serving diverse purposes such as protecting and benefiting individuals, businesses and the environment, as well as supporting economic growth. Regulation can take various forms, ranging from strict, prescriptive rules and enforcement measures to lighter-touch approaches that include guidance and codes of practice. To ensure regulatory effectiveness, NAO outlines principles in 4 main areas: design, analysis, intervention and learning.

These principles include:

  • defining a clear overall purpose based on a good understanding of the issues that the regulation aims to address
  • setting specific regulatory objectives
  • ensuring accountability
  • designing an appropriate organisational structure 

Before the early 1990s, there was little objective assessment of the quality of health and care, despite the early attempts of notable figures such as Florence Nightingale and Ernest Codman [footnote 6] . With increasing recognition of the high levels of variation in quality of care, combined with high-profile exposés of very poor outcomes, such as the Bristol heart scandal, the decision was taken to set up an independent reviewer of quality.

The National Care Standards Commission was established by the Care Standards Act 2000 as a non-departmental public body to regulate independent health and social care services, and improve the quality of those services in England.

In 2004, 2 separate bodies were established by the Health and Social Care (Community Health and Standards) Act 2003 . CSCI ’s role was to inspect and regulate social care services. The Healthcare Commission was responsible for assessing and reporting on the performance of both NHS and independent healthcare organisations to ensure they provided high standards of care. In parallel, the Mental Health Act Commission had been overseeing mental health services since the early 1980s.

CQC was established in the Health and Social Care Act 2008 and was operational from 1 April 2009. It replaced the Healthcare Commission, Mental Health Act Commission and CSCI . The formation of CQC streamlined oversight and introduced inspection frameworks.

CQC has been through a number of iterations since its formation. The first phase was based on a generalist approach to inspection and the emphasis was on compliance or non-compliance with standards.

In 2013, CQC introduced a new approach to inspections and ratings following numerous critical reports. The new approach used larger and more expert inspection teams to produce provider ratings for 5 quality domains:

Three chief inspectors were appointed who led both the development and delivery of the new inspection programmes. 

In 2021, CQC launched a strategy, A new strategy for the changing world of health and social care , to drive improvements and encourage innovation across the health and care system, and tackle health inequalities. Alongside this strategy, CQC embarked on an ambitious transformation programme. The wide-ranging changes implemented from 2021 to 2022 included:

  • introducing a new regulatory approach for health and care providers, ICSs and local authorities, a core component of which was the single assessment framework ( SAF ) . The new framework was intended to make the assessment process simpler and more insight driven by drawing on a wide range of data about quality of care, with the ability to prioritise assessments and inspections
  • establishing a new regulatory leadership team to shape CQC ’s priorities and drive improvement
  • changing how operational (inspection) teams are structured
  • implementing a new provider portal and regulatory platform

The Health and Care Act 2022 gave CQC powers to assess care at local authority and ICS level. Assessment of local authorities began in December 2023, while ICS assessments have been paused following the initial pilots.

The history of quality regulation in the NHS is shown in ‘Appendix 1: quality and safety history and context’ below.

Different sectors

All providers of regulated healthcare and adult social care activities in England must register with CQC. These include:

  • acute and specialist hospitals
  • mental health and community care services
  • social care providers
  • the independent sector

The size and scale of these sectors vary considerably. Social care providers are the largest sector by number of organisations (over 50% of all organisations regulated by CQC ), while NHS trusts comprise the largest by revenue (total revenue of NHS trusts is around £121.2 billion [footnote 7] compared with around £38 billion [footnote 8] for social care providers). More information is shown in the accompanying ‘Analysis of Care Quality Commission data on inspections, assessments and ratings, 2014 to 2024’ report.

While there are many commonalities across healthcare and social care, and the sub-sectors within these, there are also important differences, particularly around the model of commissioning (self-pay, private health insurance, or local authority or NHS commissioned) and the structure of provision (large private or independent sector, small private or independent sector, or NHS provider). The different sectors are described in more detail below.

Social care

Social care is partly a consumer market - 37% of residential social care is purchased privately by individuals or their families or carers [footnote 9] and 23% of the community care (largely domiciliary care) market is funded privately [footnote 10] . Local authorities have responsibility for ensuring the quality and safety of care, and care availability for all care users, as well as responsibility for commissioning the majority of social care packages (in residential or community settings).

Users and their carers, family or friends look for objective data on the quality of care [footnote 11] and assessment ratings can play a significant role in this. In the CareAware survey carried out by CQC , 76% of respondents considered an inspection report and rating for a care home when choosing a home. One in 10 people changed their choice of care home after checking the CQC inspection report and rating.

Social care is predominantly privately provided. There are around 19,000 care providers (organisations) [footnote 12] . Of the 6,000-plus care home providers (organisations), the largest 10 comprise about 18% of the market [footnote 12] .

Across these, CQC has the power to assess and rate approximately 40,900 locations. There has been an increase in the number of social care locations that CQC has the power to assess and rate over the last 5 years, with the total number increasing by around 14% to reach around 40,900 in 2024. This is largely driven by an increase in domiciliary care providers, from around 9,700 in 2019 to around 13,600 in 2024 (see the accompanying ‘Analysis of Care Quality Commission data on inspections, assessments and ratings, 2014 to 2024’ report and ‘Analysis of Care Quality Commission data on inspections, assessments and ratings, 2014 to 2024: supplementary data tables’).

Around 10% of the market is private equity backed [footnote 13] with this concentrated in the largest providers. The larger providers will typically have their own quality assurance processes, reviewing and monitoring quality of care, putting in processes to improve, and incentivising high-quality care - but this may not be the case for smaller organisations.

Independent healthcare providers

Independent healthcare providers also operate in a mixed market. The independent healthcare sector is worth around £21 billion [footnote 14] with direct consumer self-pay accounting for around £1.4 billion in 2021 [footnote 15] and private health insurance commissioning (or paying for) around £7 billion in 2022 [footnote 16]. Separate sources provide an estimate of the value of NHS commissioning from the private sector in England of around £13 billion in 2022 to 2023 [footnote 17]. The private health insurance sector is dominated by the 2 largest insurers - BUPA (35%) and AXA PPP (32%), according to 2020 figures [footnote 16].

In the self-pay market, users are fully reliant on an independent regulator that can assure minimum safety standards and provide information on the wider quality of care, particularly the outcomes of care, in order to support patients and users to make informed choices based on reliable information. 

Some private health insurers have systems in place to vet and monitor the healthcare providers within their network. They may require providers to meet specific safety and wider quality criteria, undergo regular audits, and adhere to best practice guidelines. They may also engage in selective contracting, partnering only with those providers who meet their quality standards, thereby providing an informal form of regulation within their networks.

NHS commissioners (largely integrated care boards ( ICBs )) are responsible for assuring the quality of care purchased from independent providers on behalf of local residents for whom they commission care.

The provision of private healthcare is, in parts, a fragmented market with a small number of large providers in mental health care provision and the surgical inpatient and diagnostic sectors, but an increasing number of smaller providers in the wider sector. As in social care, larger providers are likely to have their own quality assurance mechanisms in place, but this is less likely in smaller providers.

Dentists and GPs

Dentists and GPs are independent contractors with much of their income from the NHS (particularly in the case of GPs), and are increasingly overseen by ICBs that contract for services and actively manage their performance in order to ensure high-quality care and improve efficiency in the provision of care.

NHS providers

For NHS providers, there are substantive governance structures in place with quality committees in all providers and a requirement for these to report to boards (with opportunities for detailed scrutiny). They are predominantly commissioned by ICBs , which have a statutory requirement to assure quality of care in all providers. This is supplemented by regional and national governance structures within NHS England.

Local authorities

Local authorities are fundamentally different. They are responsible for direct work with people through social work and integrated delivery with local NHS services, alongside a role in assessing needs, commissioning services and/or arranging services for those who need help and advice. They also have a local safeguarding and quality assurance role.

CQC is looking at both the regulated services provided and commissioned by local authorities, and the quality of adult social care more widely.

Integrated care systems ( ICSs )

Similarly, ICSs are not providers per se. Rather, an ICS is a coalition of health and social care providers along with their commissioners (local authorities and ICBs ) within a defined geography.

ICBs are accountable to NHS England. Local authorities are accountable to their local populations through 4-yearly voting.

These differences in governance structures and oversight mechanisms across sectors are significant. The regulation of services should reflect these differences.

The impact of CQC on the improvement of health and social care providers

Over the past 20 years, the quality of the health and care sector in England has seen significant improvement driven by:

  • advances in technology
  • increasing transparency about variation in the delivery and quality of care
  • an increased emphasis on seeing patients and users as consumers who can both choose where and by whom to receive care, and play a more active role in considering what care they receive

Since its creation in 2009, CQC has established quality standards, conducted inspections, published the outcomes of those inspections and enforced compliance - all with the aim of providing more information to users and patients, holding providers accountable and fostering a culture of continuous improvement. CQC has produced guidance for providers, and shared best practice, through targeted projects in sectors and across themes, and aggregated insights to show systemic trends across health and social care.

Examples include:

  • spotlighting systemic issues around oral health in care homes - see Smiling matters: oral health care in care homes
  • an in-depth review into restraint, segregation and seclusion, convening partners to drive improvement in specific services such as supported living - see Out of sight - who cares?
  • working with clinicians to identify best practice in urgent and emergency care - see Patient FIRST and  PEOPLE FIRST

One notable example of CQC ’s achievement, through its regulatory activities, is its involvement in the investigation of poor-quality maternity care at Morecambe Bay. The Morecambe Bay Investigation was triggered by concerns over a series of maternal and neonatal deaths at Furness General Hospital, part of the University Hospitals of Morecambe Bay NHS Foundation Trust, between 2004 and 2013.

CQC ’s inspections uncovered poor clinical practices, inadequate staffing, and weak leadership, leading to significant scrutiny of the hospital’s management. This prompted a detailed investigation with recommendations for improvement and resulted in substantial policy and practice changes, improved safety protocols, enhanced staff training and better governance.

A report into the impact of CQC on provider performance carried out by Alliance Manchester Business School and the King’s Fund, published in September 2018, examined how CQC ’s regulatory activities affect health and social care providers. The report developed a framework that outlines 8 mechanisms through which regulation can impact provider performance: anticipatory, directive, organisational, relational, informational, stakeholder, lateral and systemic.

Overall, the report found that providers support the need for quality regulation and viewed CQC’s approach as an improvement over the previous system. The report was able to identify impact through qualitative data, and noted that regulatory interactions can lead to internal reflections and improvements in team dynamics, leadership and organisational culture. Positive ongoing interactions between CQC inspectors and providers can facilitate continuous improvement and better communication. However, the report found it difficult to identify a significant impact from inspections and ratings, while noting that CQC’s conceptualisation of the quality of care in the 5 domains - safe, effective, caring, responsive and well led - had been embraced and had become a pervasive framing of the quality of care.

In 2019, CQC commissioned a further report from the Alliance Manchester Business School into its impact on quality of care as part of the evaluation of CQC’s 5-year strategy, which had been published in May 2016. CQC’s impact on the quality of care: an assessment on CQC’s contribution and suggestions for improvement (PDF, 893KB), published in 2020, found that CQC could be an important driver for change, but more work needed to be done to enable this to happen more consistently and across the whole of CQC.

CQC continues to carry out a number of research and evaluation projects, which are outlined in its annual reports .

Methodology

The terms of reference for this review were to examine the suitability of the  SAF  methodology for inspections and ratings with specific questions regarding the degree to which CQC supports innovation in care delivery and the efficient and economic delivery of health and care services. The full terms of reference are in ‘Appendix 2: review terms of reference’.

While  CQC  has responsibility for assessing the provision of care in a very wide range of settings - for example, in prisons, children’s homes and defence medical services - this review has been limited to the overwhelming majority of its work, which includes:

  • social care providers (residential and community based)
  • NHS providers (trusts and GPs)
  • independent sector (private and charitable) healthcare providers

The review has been informed by one-to-one interviews and roundtable discussions with around 300 people. This includes:

  • patients and users, and their representative groups
  • social care providers (both smaller and larger providers)
  • executive directors and regional directors of NHS England
  • ICB CEOs and chairs
  • NHS trust CEOs, chairs, medical directors and nurse directors
  • the British Dental Association
  • independent sector healthcare providers
  • senior managers of local authorities

At CQC , the review has spoken to:

  • the majority of the executive team (including the chief executive who served from 2018 to June 2024)
  • the chair and non-executive directors
  • national professional advisers
  • most of the wider leadership team
  • members of the staff forum
  • trade union representatives

A list of all participants is shown in ‘Appendix 3: list of people spoken to for this review’.

The interim report was informed by a snapshot of data and summary tables provided by CQC to the review team in June 2024. The review team has since constructed a historical data set, using management information provided by CQC in August and September 2024, to independently analyse data on CQC performance. A full summary is available in the accompanying ‘Analysis of Care Quality Commission data on inspections, assessments and ratings, 2014 to 2024’ report with significant findings referenced in the report.

Conclusions

This review has found significant failings in the internal workings of  CQC , which have led to a substantial loss of credibility within the health and social care sectors, deterioration in the ability of  CQC  to identify poor performance and support a drive to improved quality, and a direct impact on the capacity and capability of both the social care and healthcare sectors to deliver much-needed improvements in care.

The conclusions are summarised around 10 topics:

  • poor operational performance
  • significant challenges with the provider portal and regulatory platform
  • delays in producing reports and poor quality of reports
  • loss of credibility within the health and care sectors due to the loss of sector expertise and wider restructuring
  • the SAF and its application
  • lack of clarity regarding how ratings are calculated and concerning use of the outcome of previous inspections (often carried out several years ago) to calculate a current rating
  • CQC’s assessment of local authorities’ Care Act duties
  • ICS assessments
  • supporting providers to improve quality across the health and care sector
  • the sponsorship relationship between CQC and DHSC

The review has heard that many people within CQC tried to raise concerns about the changes made over the last few years - the introduction of the SAF , the new provider portal and regulatory platform, and the organisational restructure. Members of CQC ’s staff forum, trade union representatives at CQC , senior professional advisers and a number of executives all told the review that they gave feedback and raised concerns about the changes being implemented but did not feel listened to:

We expressed numerous concerns around how the SAF /new ways of working would present challenges for hospitals/mental health service inspections as the SAF didn’t seem to consider the complexities of these. We said how the changes would hold us back from having oversight of risk within services, not being able to inspect as regularly as leaders envisioned, problems with the new IT system, as well as issues with staffing and morale. We weren’t listened to, leaders went ahead anyway, and now we are where we are.

– CQC staff member

Staff engagement sessions, ‘task and finish’ groups, and invitations for staff to provide feedback all took place, but did not appear to have any impact on the decisions being taken:

We’ve had willing people in the organisation who wanted to make a real contribution to these changes that were just point blank ignored and isolated.
There wasn’t really any meaningful consultation process in place at the time.

The review has received more than 125 emails from CQC staff members, providing a consistent account of the last few years, and has seen 2 letters from the recognised trade unions of CQC to former Secretaries of State for Health and Social Care informing them of significant issues.

The review acknowledges that there will always be internal resistance to any change or transformation programme, but the scale and consistency of the comments are striking.

Conclusion 1: poor operational performance

In order to ensure that health and social care services provide people with high-quality care, CQC registers organisations that apply to carry out regulated activities, carries out inspections of services and publishes information on its judgements.

Operational performance was impacted by the COVID-19 pandemic. In March 2020,  CQC  paused routine inspections and focused activity on where there was a risk to people’s safety . Despite this, and almost 2 and a half years since the steps in the previous government’s  Living with COVID-19 strategy  were implemented, the review has heard that  CQC ’s operational performance is still not where it should be.

There has been a stark reduction in activity with just 6,700 inspections and assessments carried out in 2023 to 2024, partly due to the rollout of the  SAF [footnote 18] . This compares with around 15,800 inspections conducted in 2019 to 2020 (see the accompanying ‘Analysis of Care Quality Commission data on inspections, assessments and ratings, 2014 to 2024’ report and ‘Analysis of Care Quality Commission data on inspections, assessments and ratings, 2014 to 2024: supplementary data tables’). CQC ’s unpublished business plan for 2024 to 2027 includes a target for 16,000 assessments to be carried out in 2024 to 2025.

The reduction in activity has resulted in considerable delays in re-inspecting providers after a ‘requires improvement’ or ‘inadequate’ rating. While there is a risk-based approach to prioritising assessments, which means providers with poorer ratings are inspected more frequently, a number of providers are currently “stuck in ‘requires improvement’” despite improving services and making the changes required by CQC. Data from CQC shows that the time taken to carry out a re-inspection after an ‘inadequate’ rating has increased from 87 days in 2015 to 136 days in 2024, and the time to carry out a re-inspection following a ‘requires improvement’ rating has risen from 142 days to 360 days in the same time frame (see the accompanying ‘Analysis of Care Quality Commission data on inspections, assessments and ratings, 2014 to 2024’ report and ‘Analysis of Care Quality Commission data on inspections, assessments and ratings, 2014 to 2024: supplementary data tables’). Interviewees told the review that this could result in hospital discharge teams refusing to discharge people to these providers, or local authorities refusing to contract with them, with a further knock-on impact on capacity, staff morale within providers and the overall ability of providers to operate in a sustainable way:

LAs [local authorities]/ ICBs won’t contract with us, banks won’t lend money to care homes without a CQC approval, investors get put off…

– Head of quality at a care home provider

The reduction in activity has resulted in some organisations not being inspected for several years. Data provided by  CQC  suggests that the oldest rating for a social care organisation is from February 2016 (over 8 years old) and the oldest rating for an NHS hospital (acute non-specialist) is from June 2014 (around 10 years old). The average age of current ratings across all locations is 3.9 years (or 3 years and 10 months) as of 30 July 2024, although this varies by location type. Furthermore, of the locations  CQC  has the power to inspect, CQC estimates that around 19% have never been rated (see the accompanying ‘Analysis of Care Quality Commission data on inspections, assessments and ratings, 2014 to 2024’ report).

As well as carrying out fewer inspections overall, CQC has moved away from ‘comprehensive’ inspections to more ‘focused’ inspections examining a few service areas or key questions (quality domains). Of inspections carried out between 1 January 2024 and 30 July 2024, 43% were assessments under the new SAF , a fifth were ‘comprehensive’ and 36% were ‘focused’. In 2023, of 6,734 inspections, 3,107 were ‘comprehensive’ and 3,598 were ‘focused’. This compares with years 2015 to 2019 where around 90% of all inspections were ‘comprehensive’ (see the accompanying ‘Analysis of Care Quality Commission data on inspections, assessments and ratings, 2014 to 2024’ report and ‘Analysis of Care Quality Commission data on inspections, assessments and ratings, 2014 to 2024: supplementary data tables’).

The reduction in activity, and the resulting delayed or absent inspections and ratings, means that patients and users are unable to compare services with others in order to help them choose care, and providers do not have the insights from an expert inspection, resulting in a missed opportunity for improvement. In some sectors, alternative quality ratings are beginning to come to the fore to fill the gap [footnote 19] - care home quality leads told the review that they end up relying on local authority provider assessment and market management solution reports to provide an objective view of quality.

As well as delays in carrying out inspections, there is a backlog in registrations of new providers, though this does need to be set against the increase in demand for registration over the last few years. The total number of locations CQC has the power to rate has increased by around 11% from 2019 to 2024. According to CQC ’s ‘Corporate performance report (2023/24 year end)’, at the end of 2023 to 2024, 54% of applications pending completion were more than 10 weeks old [footnote 18] .  CQC  has a key performance indicator ( KPI ) to reduce this proportion, but it increased from 22% at the end of 2022 to 2023. The review heard that the backlog in registrations was a particular problem for small providers trying to set up a new care home, domiciliary care service or a healthcare service, and could result in lost revenues and investment, which had a knock-on impact on capacity.

The performance of CQC ’s call centre is poor, with interviewees telling the review that calls took a long time to be answered. Data from  CQC  shows the average time for calls in relation to registration (the most common reason for calling) to be answered between January and June 2024 was 19 minutes.  CQC  does have a  KPI  to achieve a 60% to 80% response rate on its national customer service centre call lines and this was achieved in 2023 to 2024 with a response rate of between 63% and 76% across the 4 lines. This means that between a quarter and a third of calls were dropped before they were answered. For calls related to registrations, only 66% were answered with 34% abandoned, equivalent to nearly 19,000 calls. Almost a third of safeguarding-related calls were abandoned despite an average wait of only around 7 minutes. More recent data shared by  CQC showed that there has been some improvement over recent months, with 79% of general enquiry-related calls and 76% of mental health-related calls answered before the caller rang off.
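
As a quick arithmetic cross-check of the registration-line figures quoted above (a back-of-envelope reconstruction from the published percentages, not CQC source data):

```python
# Back-of-envelope check: 34% of registration calls abandoned was said to be
# "equivalent to nearly 19,000 calls", which implies the totals below.
abandoned_calls = 19_000
abandoned_rate = 0.34

total_calls = abandoned_calls / abandoned_rate   # ~55,900 registration calls
answered_calls = total_calls - abandoned_calls   # ~36,900 answered (66%)

print(f"implied registration calls: ~{total_calls:,.0f}")
print(f"of which answered:          ~{answered_calls:,.0f}")
```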

Poor operational performance has not been helped by poor data management within CQC . In developing this report, the review has found it challenging to use CQC data sets to analyse trends and patterns over time.

Conclusion 2: significant challenges with the provider portal and the regulatory platform

New IT systems were introduced into  CQC  from 2021 onwards. The provider portal started in July 2023 but was not used in significant numbers until April 2024. The regulatory platform started in November 2023 for assessment, and included registration and enforcement by April 2024.

They were implemented with the intention of:

  • improving operations and communications with providers
  • enabling a move to a much more insight-driven approach to regulation
  • highlighting emerging risks
  • supporting more risk-informed, responsive assessments and inspections

However, the deployment of new systems resulted in significant problems for users. The review has heard that they cannot easily upload documents, there are problems if the named user is away or off sick and it can take hours to receive a password reset. This takes staff away from delivering or supporting frontline care and causes considerable frustration:

There are so many practical problems with the portal.

– Care provider

… Simple things like not having an auto response to let you know the data has been uploaded correctly are not there and that adds to the stress and difficulties.

These problems are not limited to providers, with CQC staff also telling the review about numerous problems with the system on their side. The review heard that uploading evidence to the platform is the single biggest issue:

It is demoralising, it takes so long and literally is frustrating.
… the time it takes to gather information and upload it to the system has meant that it now takes much longer to carry out an inspection, upload information and publish a report. We produced far more inspection reports with our old ways of working. I am not able to inspect/assess anywhere near the same frequency as I did before.

The review has heard of a number of issues with how the new system and regulatory platform is managing safeguarding concerns and reports of serious untoward incidents. It is not the role of CQC to investigate safeguarding concerns - responsibility for this sits with local authorities under the Care Act 2014 . However, if CQC is the first recipient of safeguarding information, it should be classified as a ‘priority 1’ and a referral made to the local authority. Changes made during development of the regulatory platform have led to unintended consequences with referrals not containing adequate information for the local authority to act on and challenges for CQC staff in ensuring concerns raised are being managed in a timely manner. 

The review has concluded that poorly performing IT systems are hampering  CQC ’s ability to roll out the  SAF and appropriately manage concerns raised with CQC , and cause considerable frustration and time loss for providers and CQC staff.

Conclusion 3: delays in producing reports and poor-quality reports

As well as delays in carrying out assessments, all sectors told the review that they can wait for several months to receive reports and ratings following assessments. This increases the burden on, and stress of, staff and results in lost time when quality improvements could have been made.

The review has heard multiple comments about poor-quality reports - these have come from providers and from members of the public. Specific issues raised include:

  • poor structuring of reports, which are hard to read and follow
  • different messages in summaries than in the main report
  • some sections copied from other providers
  • disparity between the tone and evidence used in a report, and the subsequent rating awarded
  • difficulty for users in understanding assessment reports, which lack an explanation of the type of residents and give limited information on the number of beds
  • lack of clarity in dates of assessments - for example, providing a recent date for an update when information was in fact gathered by CQC often years before, without clear explanation
  • a lengthy and complicated scoring system that is not easy to understand

The review has heard that some of this is due to challenges with the information technology within CQC . Irrespective of cause, poor-quality and delayed reports hamper users’ ability to access information, and limit the credibility and impact of assessments for providers. There should be greater consistency in the quality of reports and learning from examples of better-quality outputs that have been published.

Conclusion 4: loss of credibility within the health and care sectors due to the loss of sector expertise and wider restructuring, resulting in lost opportunities for improvement

CQC underwent an internal restructuring in 2023, alongside the introduction of the SAF and the new IT systems. This was an ambitious combination:

… Changing team structure, changing IT and introducing a new framework all at the same time was bonkers.

– Senior CQC executive

The restructuring moved operational staff from 3 directorates with a focus on specific sectors into integrated teams operating at a local level. These integrated assessment and inspection teams ( IAITs ) include a combination of inspectors and assessors, with those moving from the previous model retaining their previous specialisms. Under the previous model, ‘inspection managers’ worked within the sector-based directorates. Now ‘operational managers’ oversee an integrated team of inspectors and assessors with different specialisms.

When recruiting new staff, CQC does try to recruit individuals with a health or social care background, although the type and level of experience will vary. On recruitment, new inspectors and assessors are allocated to a specialism - for example, ‘inspector, adult social care’ or ‘assessor, primary care’. An adult social care inspector will then, for the most part, do adult social care inspections and not generally be expected to work outside of their sector.

However, in practice, local teams may have an unequal balance of inspectors with different specialisms, with the composition of IAITs dictated by individuals’ geographical locations, regardless of their previous specialism. Where there are vacancies within a team, or recruitment, capacity or demand challenges, inspectors are required to inspect in an area that is new to them and does not match their specialism (for example, an adult social care inspector will need to conduct inspection work for a mental health service if that is a priority activity for their team). Similarly, some operational managers provide assurance or support for the activity of inspectors for a sector in which they may not have previous experience.

In these cases, CQC seeks to ensure that inspectors are supported by others with greater experience in the sector, ‘experts by experience’ (people who have recent personal experience of using or caring for someone who uses health or social care services), senior specialists or national professional advisers, as well as ensuring that inspectors have a strong understanding of the regulations and fundamental standards.

However, the review heard that:

… the new structure may result in situations where there is an isolated single person, with a mental health background or knowledge of mental health, on an inspection team looking at a mental health unit.

– Senior executive at CQC

… [staff] lost their professional home, they lost their sense of team, they lost their expertise.

– Former senior inspector at  CQC

While the review recognises that prior experience of a sector is not a prerequisite for someone to be a credible assessor and that experience does not, in itself, bring credibility, the review has heard consistently from providers about a reduction in expertise and seniority of inspection teams over the last couple of years:

… [it is] not clear they have the expertise/capabilities to carry out well led reviews - we often have very different views of the performance of people and boards, having worked with them over long periods of time.

– NHS England regional director

The loss of sector expertise has been compounded by changes to the roles of chief inspector. Where previously there were 3 roles - chief inspectors of social care, primary care and hospitals, each headed up by highly respected national figures with deep expertise in their sector - there are now 2 chief inspectors: one for adult social care and integrated care, and one for healthcare.

The healthcare leadership team consists of a mental health nurse, a pharmacist and an NHS manager. They are supported by a medical director and national professional advisers who are drawn from a wide range of backgrounds. The previous chief inspector of healthcare was unfortunately taken ill over a year ago and has been unable to work since. A request to bring in a new chief inspector for healthcare was first discussed with DHSC in September 2023, with a formal request made in February 2024.

The lack of sector expertise results in providers not trusting the outcomes of reviews and not feeling they have the opportunity to learn from others, especially from highly regarded peers.

This lack of expertise has been compounded by a reduction in ongoing relationships between  CQC  staff and providers. The chief executive in post prior to 2018 and chief inspectors would spend considerable time with senior members of the health and care sectors, building relationships, hearing their perspectives on care delivery, and explaining and sharing the insights  CQC  was gathering. This has been described on both sides as invaluable and has been largely lost.

At the local level, inspection teams would similarly build relationships with senior leaders from across the sectors to build confidence and support early awareness of emerging problems:

We’ve lost local relationships - lost the ability to speak to people who understand what’s happening locally.

Conclusion 5: concerns around the single assessment framework (SAF) and its application

The SAF was developed as part of CQC’s 2021 strategy, A new strategy for the changing world of health and social care, to address the duplication and inefficiency inherent in the predecessor assessment frameworks, and more fully realise its 2016 strategic ambition to develop a single, shared view of quality for health and care across all sectors.

It was hoped that data and insights could be collected in advance across all areas of care in order to have an ‘always on’ assessment model. Where new evidence was collected or received by CQC , it could be considered (based on professional judgement) with quality statements and ratings updated accordingly. Data and insights are supplemented by the evidence collected by CQC ’s operations teams and support CQC to have a continued focus on the risk of poor care, so that emerging problems can be identified at an early stage, and organisations with data suggesting poorer-quality care can be assessed and inspected more frequently.

This review considers this concept and approach to be sensible and in line with regulation in other sectors, but it is clearly dependent on reviewing robust data and insights in a timely manner.

In 2022,  CQC  shifted its planned launch of the new  SAF  from January 2023 to later in 2023 because of internal delays and feedback from providers. The  SAF  was subsequently rolled out in November 2023 to a small number of providers across sectors as part of the early adopter programme with rollout continuing in a phased manner.

The framework has set out 34 areas of care quality (called ‘quality statements’) that could be applied to any provider of health or social care and a subset to assessments of ICSs and local authorities. These align to the 5 domains of quality used for many years and referred to as ‘key questions’ within the SAF .  CQC describes these in their documentation on the SAF as below:

Safe:

  • safety is a priority for everyone and leaders embed a culture of openness and collaboration
  • people are always safe and protected from bullying, harassment, avoidable harm, neglect, abuse and discrimination
  • people’s liberty is protected where this is in their best interests and in line with legislation

Effective:

  • people and communities have the best possible outcomes because their needs are assessed
  • their care, support and treatment reflects these needs and any protected equality characteristics
  • services work in harmony, with people at the centre of their care. Leaders instil a culture of improvement, where understanding current outcomes and exploring best practice is part of everyday work

Caring:

  • people are always treated with kindness, empathy and compassion
  • people understand that they matter and that their experience of how they are treated and supported matters
  • their privacy and dignity is respected
  • every effort is made to take their wishes into account and respect their choices, to achieve the best possible outcomes for them. This includes supporting people to live as independently as possible

Responsive:

  • people and communities are always at the centre of how care is planned and delivered
  • the health and care needs of people and communities are understood and they are actively involved in planning care that meets these needs
  • care, support and treatment is easily accessible, including physical access
  • people can access care in ways that meet their personal circumstances and protected equality characteristics

Well led:

  • there is an inclusive and positive culture of continuous learning and improvement. This is based on meeting the needs of people who use services and wider communities, and all leaders and staff share this
  • leaders proactively support staff and collaborate with partners to deliver care that is safe, integrated, person-centred and sustainable, and to reduce inequalities

For each of these 34 quality statements, there are 6 ‘evidence categories’ where information about quality of care is collected. These are:

  • people experience
  • staff experience
  • partner experience
  • observations
  • processes
  • outcomes

However, the SAF is intended to be flexible and not all evidence categories are considered for all 34 quality statements. It is tailored to different sectors by determining which evidence categories are relevant for each quality statement for each sector. Within the SAF, there are priority quality statements identified for each sector, which are the starting point of a planned assessment. Additional quality statements may then be chosen based on the risk and improvement context for providers.

The number of priority quality statements and priority evidence categories assessed varies across sectors and sub-sectors. For the sectors reviewed, the quality statements assessed under each key question vary considerably. The average number of quality statements currently used in SAF assessments is 9.2 (as of 30 July 2024) or under a third of the total 34 quality statements. The most commonly assessed quality statement under ‘safe’ was ‘safe and effective staffing’, with 98% of ‘safe’ assessments across the 6 sectors being assessed against this quality statement. This compares with only 30% of ‘safe’ assessments looking at ‘safe systems, pathways and transitions’. The accompanying ‘Analysis of Care Quality Commission data on inspections, assessments and ratings, 2014 to 2024’ report illustrates which evidence categories are considered for each quality statement and which ones are priority quality statements for different sectors.
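
To see the structure this implies, the shape of the SAF can be sketched as a simple data model. The quality statement names below are the ones mentioned in this report; the sector tailoring shown is a hypothetical example for illustration, not the full mapping (which is set out in the accompanying data report):

```python
# Illustrative data model of the SAF's shape: 34 quality statements grouped
# under 5 key questions, each assessed through a subset of 6 evidence
# categories, with sector-specific priority statements as the starting point.
EVIDENCE_CATEGORIES = [
    "people experience", "staff experience", "partner experience",
    "observations", "processes", "outcomes",
]

# A handful of the 34 quality statements, keyed by key question (examples only).
QUALITY_STATEMENTS = {
    "safe": [
        "safeguarding",
        "safe and effective staffing",
        "safe systems, pathways and transitions",
    ],
    # ... the remaining key questions and statements are omitted here
}

# Hypothetical sector tailoring: which statements a planned assessment for a
# given sector starts from.
PRIORITY_STATEMENTS = {
    "adult social care": ["safeguarding", "safe and effective staffing"],
}

def planned_assessment(sector, risk_based_additions=()):
    """Priority statements for a sector plus any risk-based additions."""
    return list(PRIORITY_STATEMENTS.get(sector, [])) + list(risk_based_additions)

print(planned_assessment(
    "adult social care",
    risk_based_additions=["safe systems, pathways and transitions"],
))
```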

The review has identified 7 concerns with the  SAF :

  • The way in which the  SAF  is described is poorly laid out on the  CQC  website, not well communicated internally or externally and uses vague language.
  • There is limited information available for providers and users or patients as to what care looks like under each of the ratings categories, resulting in a lack of consistency in how care is assessed and a lost opportunity for improvement.
  • There are questions about how data on user and patient experience is collected and used.
  • More could be done to support and encourage innovation in care delivery.
  • There is insufficient attention paid to the effectiveness of care and a lack of focus on outcomes (including inequalities in outcomes).
  • There is no reference to use of resources or the efficient and economic delivery of care in the SAF , which is a significant gap, despite this being stated in  section 3 of the Health and Social Care Act 2008 .
  • There is little reference to, or acknowledgement of, the challenges in balancing risk and ensuring high-quality care across an organisation or wider health and care system.

Concern 1: the way in which the  SAF  is described is poorly laid out on the  CQC  website, not well communicated internally or externally and uses vague language

The summary of the  SAF  (as set out at the start of this section) should be easy for any health or care provider, or any member of the public, to find and access. However, the  SAF  is instead described on  CQC ’s website  in a way that the review found confusing. Poorly laid-out pages lack structure and numbering, and do not have a summary as per the section above.

Furthermore, the descriptions of each of the 5 key questions (safe, effective, responsive, caring and well led) are extremely generic, use vague language, and lack tangible, objective measures of quality and of their impact on users and patients:

The SAF , although [it] supposedly reduces repetition, duplicates statements, is confusing, [and] badly worded.
The 117 pages of the woolly SAF are unwieldy, hard to use, difficult to comprehend and purport to cover all care services. The style is off-putting with the ‘we’ and ‘I’ statements.

– Representative of a patient or user group

Furthermore, the review has found that, across the executive team, few were able to describe the 34/6 framework (34 quality statements and 6 evidence categories), the rationale for prioritising particular quality statements in each sector, the rationale for which evidence categories are used for different quality statements, the way in which ratings are calculated and so on. This should be widely understood in  CQC  irrespective of role or background.

Concern 2: there is limited information available for providers and users or patients as to what care looks like under each of the ratings categories, resulting in a lack of consistency in how care is assessed and a lost opportunity for improvement

The descriptions as to what ‘good’ looks like for each quality statement do exist. For example, for the ‘safeguarding’ quality statement, the description of ‘good’ says: “We work with people to understand what being safe means to them as well as with our partners on the best way to achieve this. We concentrate on improving people’s lives while protecting their right to live in safety, free from bullying, harassment, abuse, discrimination, avoidable harm and neglect. We make sure we share concerns quickly and appropriately.”

There is more detail available in guides to providers - for example, that shown in ‘Appendix 4: example of a rating descriptor’ and on the CQC website. However, while there used to be tangible, clear and objective descriptions of each area assessed in the old model for each of the 4 ratings categories - see ‘Appendix 5: examples of rating descriptors in previous assessment model’ - this has not yet been developed for the new framework.

The lack of clear descriptions does not support organisations to improve. The review heard time and time again from providers that they struggle to know what inspectors are looking for, that they are not learning from inspections and that, as a result, they do not know what they need to do to be better:

What does outstanding look like? We don’t have it.

– CEO at an NHS foundation trust

They should not be an improvement body but should have a clear view as to what good/outstanding care looks like.

– NHS England senior executive

[There is a] failure from CQC to clearly articulate what it means to be each rating.

– Senior executive at a care provider

The review also heard that inspectors struggled to articulate the SAF and the definitions they should be working to. It was told that, in some instances, operational staff lacked guidance on how to inspect or assess providers under the SAF, with previous information on the intranet no longer accessible. Strict limits on the number of words that could be written for each evidence category could mean that issues cannot be reported in enough detail to help organisations improve:

We previously had detailed ratings guidance and would carefully assess each rating against that. Deputy chief inspectors would assess each rating to check.

– Previous senior inspector at CQC

It’s not clear an assessor/inspector knows how to allocate a score of 1 to 4 - it is very subjective.
[The] lack of a clear description for each quality statement and evidence category means it’s hard to know why CQC decides to prosecute one provider and not another.

– Lawyer working in the healthcare sector

Many providers referred to a lack of consistency in the ratings awarded. This is not a new issue for CQC, but the review found that it has, so far, not been effectively addressed by the move to the SAF. Those who work across multiple sites (for example, a large care home provider or a large group of GP practices) report ratings differing from one site to another when they know, having spent far more time with those sites, that performance does not align with the reports. This applies in all directions - for example, their poorer-quality sites receive better ratings than their top sites and vice versa:

There is [a] major problem at the moment with consistency - even within a single large provider, one home can be rated as ‘requires improvement’ while another is ‘outstanding’. [The] difference may be down to very small things such as signage. That’s not credible. CQC needs to be much clearer on what good looks like - which would then make it easier for assessors/inspectors.

– Senior executive at a large social care provider

Concern 3: there are questions about how data on user and patient experience is collected and used

With the development of the SAF, CQC has sought to place a greater emphasis on people’s experience of the care they receive. Indeed, a senior inspector stated: “People’s experience is the most important thing we look at.”

CQC has a range of methods for gathering feedback on people’s experience. Data is drawn from national surveys (for example, the NHS Patient Survey Programme, the NHS England GP Patient Survey and the Personal Social Services Adult Social Care Survey, also known as the Adult Social Care Survey), all of which have some statistical analysis applied and exclude results relating to fewer than 30 users at any one site.

Surveys are supplemented by interviews with service users. The review heard concerns from providers that relatively small numbers of users or patients are interviewed - sometimes only tens of users of a service that looks after thousands of people a year (in the case of a GP practice) or hundreds of thousands (in the case of a hospital).

The review heard from users, patients and their representative groups that there is confusion about the mechanisms through which patients and users can give feedback to CQC about individual providers. While CQC can use complaints as evidence, it is not CQC’s role to investigate or respond to complaints.

The ways in which complaints are managed across health and social care will be further considered in a second review, looking at the wider safety landscape. That review will also consider the breadth of bodies currently collecting patient and user experience feedback, and how this could be more effectively channelled and used as a basis for assessment and improvement.

CQC does assess providers on whether they are actively seeking, listening to and responding to the views of people who have had, or are most likely to have, a poorer experience of care, or who face more barriers to accessing services.

There is similarly a need to ensure representative surveys of all staff. CQC has long used NHS Staff Survey data as a source of evidence in its assessments and ratings of NHS trusts. This is established in dashboards available to operations teams, and used in preparatory analysis carried out for specific NHS trust inspections. It is supplemented by interviews with staff on site during inspections, with similar concerns expressed about the representativeness of views collected from a small proportion of staff.

Concern 4: more could be done to encourage and support innovation in care delivery

The review was asked to examine how CQC considers innovation in care delivery.

Innovation is considered within the SAF under the ‘learning, innovation and improvement’ and ‘governance, management and sustainability’ quality statements under the ‘well led’ key question, and the ‘delivering evidence-based care and treatment’ quality statement under the ‘effective’ key question. There is also some CQC guidance on using and sharing innovation to reduce health inequalities.

Furthermore, CQC is actively seeking to develop operational guidance on the inspection of new approaches, such as virtual wards and vision-based monitoring systems in inpatient mental health units, to help operational colleagues understand how to assess these appropriately. The Capturing innovation to accelerate improvement project has developed 4 innovation case studies and a resources map to support services. CQC has sought to support digitisation of social care records by including the adoption of digital records as a measure of best practice in the SAF, and by communicating future expectations for providers on the role of digitisation in improving and maintaining care quality.

There is some feedback from provider surveys: CQC’s own annual survey of providers showed that just under 50% of providers agreed or strongly agreed that CQC’s information supports their service’s efforts to innovate or adopt innovation from elsewhere. In 2024, NHS Providers published research, Good quality regulation: how CQC can support trusts to deliver and improve, that highlighted the need for CQC to do more to “support and encourage improvement and innovations”.

The review heard from providers that there is insufficient focus on encouraging, driving and supporting innovation in care delivery:

Regulator doesn’t have a great tradition of innovation. Used to hear from members: ‘inspector didn’t like this, that or the other innovation’.
While the then-chief inspector of adult social care recognised and applauded our innovation, the local inspectorate teams did not have the same response.

While CQC is seeking to identify some innovations that can be woven into its assessment framework, the opportunities to endorse and embed change and improvement are far greater. For example, a recent report highlighted multiple opportunities for greater adoption of technology and wider innovation in care delivery [footnote 20].

Supporting innovation could be a galvanising factor in driving better-quality, more efficient and more responsive care across all sectors. There is scope for CQC to further expand its work to ensure a greater focus on driving innovation - and sharing best practices. 

In comparison, for example, Ofsted works closely with the Department for Education to consider what changes and improvements to schools are planned, and incorporates those into its inspection framework. While CQC does work closely with DHSC to incorporate new legislative requirements into the assessment framework, CQC and DHSC should consider how to ensure a greater focus on innovation and new models of care.

Concern 5: there is insufficient attention paid to the effectiveness of care and a lack of focus on outcomes (including inequalities in outcomes)

The 5 key questions of quality (previously called quality domains) - safe, effective, caring, responsive and well led - are well established and intended to be used as a holistic overview of quality. However, there has been an increasing focus on the ‘safe’ key question with a surprising lack of attention to ‘effective’. While this may reflect an increasing focus on organisations where there are concerns about safety, it fails to recognise the crucial importance of effectiveness, which looks at the impact (or outcomes) of care on improving people’s health and wellbeing.

Between 4 December 2023 and 30 July 2024, CQC completed 980 assessments under the new framework (this figure includes unpublished assessments). Of these, 75% looked at the ‘safe’ key question, whereas only 38% looked at the ‘effective’ key question. The proportion of assessments looking at each key question varies across sectors, with 90% of adult social care assessments looking at ‘safe’ compared with 67% of secondary and specialist care assessments (which includes NHS acute hospitals). No community health assessments had looked at the key questions ‘effective’ and ‘responsive’, and no secondary and specialist care assessments had looked at the key question ‘caring’.

Of all the quality statements considered during the same time period, 40% were within the ‘safe’ key question and 13% within the ‘effective’ key question. Among secondary and specialist care assessments carried out, 63% of quality statements considered were within the ‘safe’ key question and 9% within the ‘effective’ key question.

The number of quality statements focused on during assessments also varied from sector to sector. On average, 9.2 of the 34 quality statements were used, ranging from 2.1 for mental health assessments to 11.6 for adult social care assessments. Of the assessments that looked at the key question ‘effective’, only 34% considered the quality statement ‘monitoring and improving outcomes’.

Within the framework, there is an evidence category for outcomes data, but little attention is given to it. For example, within primary care, only 2 of 100 (2%) evidence categories considered across 34 quality statements refer to outcomes of care. Of the 34 quality statements considered for primary care, only 6% have outcomes as an evidence category (compared with 68% looking at people’s experience). Even when outcomes are considered, a very narrow set of metrics is routinely reviewed. This is despite considerable data being available about outcomes of care in primary care, such as how well controlled an individual’s diabetes is, and about the impact of that care - for example, rates of admission to hospital with the complications of a long-term condition (renal failure in people with diabetes, acute cardiovascular events in people with high blood pressure or atrial fibrillation).

A number of providers commented that inspections tended to concentrate on more trivial issues. For example, at a primary care providers roundtable, the review heard that the failure to calibrate 1 out of 15 sets of weighing scales was reported on. Other providers referred to an overemphasis on documentation of matters such as employment practices, which, while important, should not be assessed at the expense of all key questions, including the outcomes of care:

[It is] not clear they focus on the real things that matter.

– NHS England director

200 pieces of ‘evidence’ were asked for but it was not clear what they were used for.

– Social care provider

While there is limited nationally consistent data available on the outcomes of social care at a care provider level, CQC does collect data from providers on adverse events in care, including serious injuries, safeguarding incidents and deaths. At a local authority level, slightly more outcomes data [footnote 21] is available, and assessments do take into account measures from the Adult Social Care Outcomes Framework, including social care-related user quality of life.

Within healthcare, NHS England publishes over 30 national clinical audits and runs the Getting It Right First Time (GIRFT) programme - both of which could be drawn on more to provide comparisons of outcomes across providers. The GIRFT programme provides data on mortality rates, length of stay, post-surgical infection rates and hospital re-admission rates in more than 40 surgical and medical specialties, but is not used by CQC.

The review has heard positive feedback about the work of the primary medical services team in CQC in introducing clinical searches of GP records. These allow for reviews of prescribing, monitoring of high-risk medicines, management of long-term conditions and potential missed diagnoses. An initial evaluation of the searches found that 97% of inspectors, 91% of specialist clinical advisers and 65% of providers believed they helped to identify risks around safe and effective patient care.

Across effectiveness and outcome data, the review has struggled to find reference to measures looking at outcomes by different population groups, in particular the Core20PLUS5 groups, though the review understands this is being considered in CQC’s developing approach to assessing ICSs.

The review was similarly surprised to see the lack of measures of outcomes in independent sector providers, particularly given the emphasis on this in the Paterson Inquiry report. The Paterson case exposed significant gaps in the oversight of private healthcare providers and across the NHS - the accompanying review found:

  • a lack of effective oversight
  • the need for a regulatory framework that not only enforces standards but also actively monitors and audits clinical practices and outcomes within the independent sector
  • a need for more accessible whistleblowing mechanisms within the private sector

Following the Paterson case, there has been significant work in the private sector, supported by CQC, to strengthen clinical governance across the industry, including the development of the Medical Practitioners Assurance Framework (MPAF), which has been embedded in both the CQC inspection regime and the NHS Standard Contract. The MPAF is a contemporary consensus view of medical best practice designed as a framework for clinical governance across the sector.

Concern 6: there is no reference to use of resources or efficient and economic delivery of care in the SAF, despite this being stated in section 3 of the Health and Social Care Act 2008

The review was asked to consider how CQC supports the efficient, effective and economic delivery of care. This is part of the scope of CQC and was set out in the Health and Social Care Act 2008. However, within the SAF, there is no quality statement that considers use of resources or efficient delivery of care. The review understands that ‘use of resources’ assessments used to be conducted by NHS England, but were paused during the pandemic and are no longer done.

The lack of an objective assessment of the efficient and economic delivery of care is disappointing, as effective use of resources is one of the most impactful ways of improving quality of care for any provider. More efficient deployment of staff and more efficient use of assets (such as beds, diagnostics and theatres) enable more people to be cared for and better care to be provided to individual patients. Furthermore, a number of recognised metrics of high-quality care are also good metrics of efficient services and good use of resources - for example, length of stay in an inpatient facility [footnote 22].

The quality statement on safe and effective staffing assesses staffing levels based on whether the provider has the staff numbers recommended in guidance from national bodies, but does not independently consider whether services could be delivered in a more efficient and effective way:

We could just add staff and we would get a better rating, but it wouldn’t necessarily improve care.

The review understands that inspectors are encouraged to discuss staffing levels with providers before making an assessment, but there appears to be a disconnect between CQC policy on this issue and providers’ experience of inspections:

NHS organisations are continually doing dynamic assessments of staffing levels/risk levels, but CQC come in and simply say you’re below establishment and need more staff.
CQC has a tendency to always say you need more staff.

The review understands that ‘safe staffing’ levels are set out by NHS England and other regulatory or professional bodies. It is not clear how the ‘safe staffing’ levels in the guidance are set, whether they are grounded in an efficient model of care, or whether they promote scenarios where technology, scale and optimal process management are used to best effect. CQC should clarify the evidence base, or the lack thereof.

While ‘safe staffing’ can be a valid and useful concept, when bluntly applied, ‘safe staffing’ ratios can be a significant constraint on ensuring efficient, effective and economic delivery of care, restricting innovation and improvement in care delivery, diverting resources to specific areas, and putting quality of care at risk.

An independent regulator should not assess against provider-stipulated inputs, but should focus on overall outcomes for patients and users and guard against provider capture, as other national regulators, such as Ofgem, do.

Concern 7: there is little reference to, or acknowledgement of, the challenges in balancing risk and ensuring high-quality care across an organisation or wider health and care system

The review has heard from a number of providers - and commissioners of care - about the lack of recognition within the SAF of the challenges in balancing risk in the health and care sectors. Providers referred to a number of examples where one aspect of care is assessed in isolation from the consequences for other areas of care and, therefore, for other patients and users.

For example, within the social care sector, the review heard of a requirement that staff do not lift someone following a fall. This can then result in an ambulance being called and a person being taken to hospital with implications for their own wellbeing (recognising the negative impact of hospital admissions on older, frail people) and for others who may then incur a delay waiting for an ambulance.

Within the healthcare sector, providers referred to ‘safe staffing’ assessments and CQC requirements to add staff to one area of care (for example, an inpatient ward) that can then result in other services (for example, community services) not being available and a greater number of patients or users being compromised [footnote 23] [footnote 24].

Across health and care systems, the review heard of care homes refusing to accept users after 4pm as “it will be marked down by CQC”, and this in turn leading to further time in hospital, with implications for the user and a knock-on impact on others awaiting admission to hospital. CQC has said that none of these examples are stipulated by CQC or are regulatory requirements, so it is not clear why these perceptions persist.

Health and social care are both complex and have inherent risks. Leaders and staff working in the sectors need to balance risks within and across services on a daily basis. This should be given greater recognition within the CQC assessment framework to ensure a far greater focus on improving outcomes for users, patients and populations.

The review has been concerned to find that overall ratings for a provider may be calculated by aggregating the outcomes from inspections over several years. Examples of this are shown in ‘Appendix 6: examples of combined ratings’. This cannot be credible or right.

The review understands that this approach is longstanding and did not change as a result of the introduction of the SAF, but may not have been transparent before.

The SAF was intended to prevent the use of inspections (and associated ratings) from previous years, as more frequent assessments would be undertaken based on emerging data and intelligence but, because CQC is not doing the number of assessments required to update ratings, the problem continues. CQC intends to mitigate this by using individual quality statement and key question scores instead of aggregated ratings, and by assessing more quality statements to improve robustness.

Providers do not understand how ratings are calculated and, as a result, believe it is a complicated algorithm, or a “magic box”. This results in a sense among providers that it is “impossible to change ratings”:

During the assessment and the assessment feedback, it was mentioned by the inspectors that there was evidence that ‘well led’ had improved significantly and, when queried why more quality statements were not assessed, they indicated that this wasn’t an option available to them, while also noting that it was probable that the rating would have changed with the improvements noted.

– CEO of a care provider

A similar theme was heard from people working within CQC , with some inspectors and senior professional advisers also commenting that they didn’t know how ratings were calculated.

CQC is seeking to bring greater clarity to how ratings are calculated, and is developing materials to facilitate communication and build transparency.

Ratings matter - they are used by users and their friends and family, they are used by commissioning bodies (NHS, private health insurers and local authorities), and they drive effective use of capacity in the sector. They are a significant factor in staff recruitment and retention:

CQC ratings have significant implications for care homes - if CQC says something isn’t ‘safe’, then the home is at risk of being prosecuted.

Ratings need to be credible and transparent.

Conclusion 7: there are opportunities to improve CQC’s assessment of local authority Care Act duties

The Health and Care Act 2022 gave powers to CQC to assess local authorities’ delivery of their adult social care duties, after several reports and publications identified a gap in accountability and oversight of adult social care [footnote 25].

Local authorities are assessed across 4 themes (working with people, providing support, ensuring safety within the system and leadership), with a total of 9 quality statements across these 4 themes, which are a subset of the 34 quality statements in the SAF.

Formal assessment commenced in December 2023 following a pilot phase involving 5 volunteer local authorities. CQC aims to complete a baseline assessment of all 153 local authorities by December 2025. The review spoke to all 9 local authorities that have already been through the entire assessment process (with reports published), as well as representative bodies within the sector. The review found broad support for the assessment framework as it is designed, in line with a wider sector response [footnote 26]. However, it also heard feedback that the assessment process and reporting could be improved; both were regarded less favourably than Ofsted reviews of children’s social care.

A number of areas were commented on, including:

  • the relatively small number of cases reviewed. Typically, 6 individual cases are tracked - a small number in proportion to the number of long-term users of care supported by local authorities (the average number of long-term users supported over a year is around 5,600, ranging from 30 in the Isles of Scilly to over 22,000 in Lancashire [footnote 27]). The review understands that CQC also gathers feedback on people’s experiences
  • engagement between the CQC team and local authority staff. This received mixed feedback - some reported that this worked well, others felt a stronger relationship could have been built. This was contrasted with Ofsted’s ‘keeping in touch’ meetings. Some local authorities commented that there was insufficient opportunity to discuss and reflect on CQC’s findings during the assessment, which was felt to be a missed opportunity for learning and improvement. CQC recognises that there is more to be done to build relationships with individual local authorities and is considering the introduction of relationship owners and annual engagement meetings, in addition to current feedback meetings following an assessment
  • the expertise of teams. There were fewer comments than in other sectors, but there was a perception among some local authorities that the assessment teams lacked expertise in, and insight into, how local authorities work in adult social care - there were reports that very few had social work experience. Views on executive reviewers were mixed and, in some cases, there was concern about their seniority
  • insufficient descriptors of what ‘good’ or ‘outstanding’ looks like to enable local authorities to improve
  • the length of time between the request for information and the inspection, which was difficult to manage and had an impact on capacity and morale within local authorities
  • challenges in the process for factual accuracy checking, with local authorities stating that this was burdensome and that conclusions were sometimes based on data that was significantly out of date

A particular gap noted was in the commissioning of social care. The review was told that ministers who initiated the review of local authorities had expected that CQC would look at how effectively they were commissioning services. This means:

  • understanding residents’ needs (now and into the future)
  • building a deep knowledge of different models of care provision, including more innovative models of care
  • having analytical insights into the most efficient and effective models of care delivery (taking into account scale, technology and different workforce models)
  • agreeing and negotiating contracts with providers over longer periods of time (to avoid spot purchasing and competing against other local authorities)
  • ensuring a stable and robust social care market

There are concerns as to how well local authorities carry out these functions, with the lack of effective commissioning highlighted in a recent report on care for older people [footnote 20] and in comments made to the review:

Commissioning is a major gap everywhere - fragmented across too many bodies - too many commissioning LAs [local authorities], not linked in enough with ICBs.

– Senior policy leader

It is not clear from current assessments how comprehensively commissioning functions are assessed, which misses an opportunity to improve commissioning capabilities and, as a result, the quality and efficiency of care.

Given that the approach to reassessment is still to be designed, it is difficult to comment on the role of CQC in local authority improvement. The review understands that CQC is in the process of designing reassessment with the sector. At a roundtable for local authority representatives, the review heard that:

Ofsted’s ILACs [Inspecting local authority children’s services] allows for a proportionate reassessment/review period, so shorter for those that have been deemed ‘requiring improvement’ or ‘poor’ [and] longer period for those rated ‘good’. CQC could consider adoption of a similar approach.

Conclusion 8: integrated care system (ICS) assessments are in the early stages of development, with a number of concerns shared

Under the Health and Care Act 2022, CQC was given the duty to review and assess ICSs. The assessment was intended to provide independent assurance to the public and Parliament about how well health and social care organisations within an ICS area collaborate to plan and deliver high-quality care.

CQC is required by statute to look at 3 areas: leadership, integration and quality of care - the latter, presumably, to pull together information from individual providers within an ICS. CQC developed a methodology for these assessments, which was tested in pilots in Dorset and in Birmingham and Solihull, but wider rollout has been paused.

The NHS Confederation and CQC have both shared feedback to date, as have a number of people spoken to as part of this review. 

The following concerns were highlighted:

  • questions as to whether processes or outcomes are being assessed. ICSs have 4 objectives (improving outcomes in population health and healthcare; reducing inequalities in outcomes, experience and access; enhancing productivity and value for money; and helping the NHS support broader social and economic impact) - however, the assessment process has not particularly focused on these objectives and it is not clear how progress against them will be measured
  • lack of descriptors as to what ‘good’ looks like, particularly recognising different structures or arrangements across ICSs
  • questions as to what specific data (metrics) should be considered to enable a meaningful assessment of leadership, integration and quality across an ICS - and the performance of the ICS against its 4 objectives
  • concerns around duplication of provider assessments - some data requests related to provider data that could or should have been considered in provider assessments rather than ICS assessments
  • the need for CQC to recognise the challenges of clinical risk management in working effectively across multiple providers
  • difficulties in meaningfully hearing from residents about their views of quality of care across a whole system
  • the time taken to prepare for the CQC assessment and the associated costs - both direct (CQC fees) and indirect (staff time) - given this is on top of the costs for providers within an ICS
  • overlaps with the NHS England Oversight Framework, which sets out how NHS England will assess ICBs and ICSs

Conclusion 9: CQC could do more to support improvements in quality across the health and care sector

The review heard a consistent comment that CQC should not be an improvement body per se but, at the same time, could do more to support the health and care sectors to improve.

CQC has been, and continues to be, an impactful prompt for improvement, with many providers highlighting that preparation for an inspection can be a positive opportunity for self-reflection and learning - and that a poor rating from CQC is a very strong incentive to improve:

What CQC say[s] does matter. People do take notice - there is credibility and people do take it seriously.

There are opportunities for far more than this. Specifically:

  • as highlighted previously, the description of best practice, and of what ‘good’ and ‘outstanding’ delivery of care looks like, should be a central source of guidance during any inspection. While there are some examples of this on CQC’s website, there could be far more - such as around new models of care delivery, leading international examples and more innovative approaches, particularly the use of technology. CQC could be a substantive repository of high-quality and innovative models of care
  • reports need to be clearer, setting out opportunities to improve and encouraging the development of clear action plans. Ideally, these would then be followed up in a timely manner
  • inspection teams should inspire as much as instruct - helping organisations to understand where there are opportunities for improvement and setting out what a better model of care could look like. This requires inspections to be carried out by high-calibre teams who are seen as credible and knowledgeable
  • many organisations, both within and outside the health and care sectors, take a very proactive approach to the collection and use of user feedback - this could be far more systematic and comprehensive in the health and care sector
  • governance structures within organisations are crucial to improvement - with clear roles, responsibilities and accountabilities, all aimed at continuously innovating and improving. A greater focus on how organisations are approaching and delivering improvement (becoming self-improving organisations), rather than on input metrics, could enable more significant improvements in quality of care. The review notes CQC’s research into improvement approaches within organisations - see the Rapid literature review: improvement cultures in health and adult social care settings

NHS Providers’ report Good quality regulation: how CQC can support trusts to deliver and improve explores how CQC’s approach could become more supportive and constructive in the future. The report advocates for CQC to actively support improvement and innovation by sharing best practices and engaging in improvement-focused conversations. As the report says: “CQC should make the most of its privileged observer position by sharing good practice, engaging in improvement-focused conversations with providers, and working with organisations that have a direct role in improvement.”

Conclusion 10: there are opportunities to improve the sponsorship relationship between CQC and DHSC

DHSC’s sponsorship of CQC should promote and maintain an effective working relationship between the department and CQC, which should, in turn, facilitate high-quality, accountable, efficient and effective services to the public.

The review has found that DHSC could do more to ensure that CQC is sponsored effectively, in line with the government’s Arm’s length body sponsorship code of good practice. For example, DHSC should do more to ensure that CQC is meeting its KPIs in terms of operational performance, and hold it to account through regular discussion of timely management information linked to those KPIs. The review has heard of long delays in DHSC responding to requests from CQC - for example, to replace senior roles. This has not helped CQC and should be addressed.

The National Quality Board (NQB) is responsible for ensuring high-quality health and social care services. As such, it could be expected to have extensively reviewed the SAF prior to its launch. The review understands that the NQB had an initial presentation on the SAF in 2021, but this was a very high-level description of the aims of the SAF, including the intention to focus on user experience. It appears there was no further discussion, and so the opportunity was lost to consider the details of the 34 quality statements, the 6 evidence categories, and the specific measures that would be used to ensure the regulator was able to support high-quality, efficient and effective care.

The NQB should consider its role in agreeing:

  • definitions of high-quality care
  • how to measure and quantify the effectiveness of care - in particular, what outcome metrics to use
  • how to weave innovation - particularly the use of technology and new models of care - into CQC’s assessment framework
  • how to quantify use of resources to support optimal allocation of resources to improve health and wellbeing, and optimal deployment of resources within providers
  • how to assess trade-offs or balance risk considerations
  • use of resources, and identifying how and where limited public funds could be spent most effectively

Other areas for further consideration

A number of areas have been raised with the review team but not yet considered in detail. These are:

  • One-word ratings.
  • Finances within CQC - both how CQC is funded, and the costs of running the organisation efficiently and effectively.
  • The need to ensure the NHS Federated Data Platform results in a single ‘data lake’ across the health and social care sectors.
  • The wider regulatory landscape and the burden of regulation, including the relationship between CQC and the NHS England Oversight Framework.

More details on each of these follow.

1. One-word ratings

While the pros and cons of one-word ratings have not been raised much during conversations carried out for the review, the government recently announced that Ofsted would end the use of one-word ratings, so it would be reasonable to similarly consider their use in health and social care. A recent review of regulation by NHS Providers, A pivotal moment for regulation: regulation and oversight survey 2024, raised concerns about the use of one-word ratings among a wider range of topics, as referred to earlier. Following the Ofsted announcement, the Local Government Association called for CQC to scrap the use of single-word ratings in its assessments of local authorities’ adult social care services.

Changes to one-word ratings could be beneficial in allowing greater clarity to be brought to the different key questions of quality, allowing a ‘balanced scorecard’ approach across ‘safe’, ‘effective’, ‘responsive’, ‘caring’ and ‘well led’. Consideration could also be given to providing greater transparency of ratings across different service lines and sites in larger providers. All this needs to be set against the need for a straightforward narrative that is accessible to users and patients.

2. CQC finances

CQC is currently funded largely through fees charged to providers. Under HM Treasury guidance Managing public money, it is required to recover the full cost of its regulatory services through fees charged to registered providers - so-called full chargeable cost recovery. CQC must consult on any changes to its statutory scheme of fees, and both HM Treasury agreement and Secretary of State consent are required before changes to fee levels can come into effect.

The current funding model presents a number of challenges, namely how to:

  • ensure efficient and effective service delivery from CQC when providers are obligated to pay
  • decide on fee levels
  • ensure that resources available match the requirements of CQC - while still remaining efficient and effective

While this is presumably a role for DHSC as the sponsor organisation, greater consideration could be given to how to quantify the above - and where or how decisions regarding resourcing are made.

3. Single ‘data lake’ across the health and care sectors

The review has heard that there is a significant opportunity to build a single repository of data on quality of care (including use of resources) across the health and care sectors. This would benefit CQC (and all organisations in the health and care sector), and would bring a more streamlined and efficient approach to performance management and improvement across all services.

The Federated Data Platform was established to create a “single version of the truth” approach to data within healthcare. Consideration should be given to how best to build on this to create a common set of data about quality of care, across all domains and sectors, that could be used by CQC and others.

4. The wider regulatory landscape

The wider regulatory landscape is extensive, with a growth in the number of bodies over recent years. Overlapping responsibilities of these bodies can create confusion for providers and an unhelpfully large burden of regulation:

With over 12 different regulatory/inquiry body frameworks for maternity care - resulting in, for example, well over 100 recommendations for Shrewsbury and Telford maternity unit - it’s an extremely complex and crowded landscape.

– Senior manager at NHS England

A mapping of the regulatory landscape of healthcare identified over 100 organisations that exert some regulatory influence on NHS provider organisations, and recommended a review to ensure more effective and responsive regulation [footnote 28].

Within the NHS, ICBs are responsible for ensuring high-quality provision (by NHS trusts, GPs and services commissioned from independent providers), and NHS England is responsible for overseeing ICBs to ensure they are delivering against the 4 objectives outlined above.

As a result, there is significant overlap between the roles of NHS England, ICBs and CQC.

Recommendations

There are 7 recommendations:

  • Rapidly improve operational performance, fix the provider portal and regulatory platform, improve use of performance data within CQC, and improve the quality and timeliness of reports.
  • Rebuild expertise within the organisation and relationships with providers in order to resurrect credibility.
  • Review the SAF and how it is implemented to ensure it is fit for purpose, with clear descriptors, and a far greater focus on effectiveness, outcomes, innovative models of care delivery and use of resources.
  • Clarify how ratings are calculated and make the results more transparent.
  • Continue to evolve and improve local authority assessments.
  • Formally pause ICS assessments.
  • Strengthen sponsorship arrangements to facilitate CQC’s provision of accountable, efficient and effective services to the public.

Recommendation 1: rapidly improve operational performance, fix the provider portal and regulatory platform, improve use of data, and improve the timeliness and quality of reports

The interim chief executive of CQC is already making progress towards redressing poor operational performance, including bringing in more staff, particularly those with prior experience of working in CQC. CQC should agree operational performance targets or KPIs in high-priority areas, in conjunction with DHSC, to drive and track progress.

CQC will need to set out how, and by when, it will make the changes required to the provider portal and regulatory platform. CQC should also ensure that there is far more consideration given to working with providers to seek feedback on progress.

Urgent action is needed to ensure a timely and appropriate response to concerns raised around safeguarding and serious untoward incidents.

The quality of reports needs to be significantly improved with clear structure, labelling and findings.

Recommendation 2: rebuild expertise within the organisation and relationships with providers in order to resurrect credibility

There is an urgent need to appoint highly regarded senior clinicians as Chief Inspector of Hospitals and Chief Inspector of Primary and Community Care. Working closely with the chief inspectors and the national professional advisers, there should be rapid moves to rebuild sector expertise in all teams.

The review heard a strong message from providers across sectors about the opportunity to create a sense of pride and incentive in working as a specialist adviser with CQC. Consideration should be given to a programme whereby top-performing managers, carers and clinicians from across health and social care are appointed, or apply, to become assessors for 1 to 2 weeks a year, with acceptance onto the programme regarded as a high accolade.

The executive leadership team of CQC - which should include the 3 chief inspectors - should rebuild relationships across the health and care sectors, share progress being made on improvements to CQC and continually seek input.

Recommendation 3: review the SAF and how it is implemented to make it fit for purpose

There needs to be a wholescale review of the SAF to address the concerns raised. Professor Sir Mike Richards is now working with CQC to initiate this.

Specifically, to:

  • improve the quality of documentation on the CQC website
  • appropriately describe each key question
  • set out clear definitions of what ‘outstanding’, ‘good’, ‘requires improvement’ and ‘inadequate’ look like for each evidence category and for each quality statement, as per the previous key lines of enquiry
  • request credible sector experts to revisit which quality statements to prioritise and how to assess and measure them
  • give greater emphasis to the ‘effective’ key question
  • give greater emphasis to, and use of, outcome measures. CQC should build on the work it has done with partners, particularly the Healthcare Quality Improvement Partnership, GIRFT and national clinical audits, to expand the range of outcome measures it uses
  • give greater emphasis and prominence to use of resources within the ‘effective’ and ‘well led’ key questions - and build the skills and capabilities to assess this
  • build recognition and understanding of the balance of risk within and across organisations, and adapt the SAF accordingly
  • significantly improve transparency and robustness of the data used for patient, user and staff experience
  • build greater knowledge and insights into innovation in healthcare and social care - including new models of care - and weave these into the quality statements
  • ensure inspectors and other operational staff are fully trained in how to conduct inspections under the SAF and have access to all relevant materials

Recommendation 4: clarify how ratings are calculated and make the results more transparent, particularly where multi-year inspections and ratings have been used

The approach used to calculate ratings should be transparent and clearly explained on CQC’s website. It should be clear to all providers and users. The use of multi-year assessments in calculating ratings and in reports should be reconsidered, and greater transparency given to how these are being used in the meantime.

Recommendation 5: continue to evolve and improve local authority assessments

CQC has been clear that the assessment process for local authorities will evolve during baselining. CQC is now 9 months into the 2-year baselining period. It should consider the feedback it has received, alongside the findings in this review, to improve the assessment process, continuously improving its robustness and the experience of local authorities.

Recommendation 6: pause ICS assessments

Given the difficulties to date in agreeing how best to assess ICSs, the need to ensure alignment with the NHS England Oversight Framework and the considerable challenges within CQC, it is recommended that ICS assessments be paused for now, with the nascent ICS assessment team redeployed within CQC.

Recommendation 7: strengthen sponsorship arrangements to facilitate CQC ’s provision of accountable, efficient and effective services to the public

Given the need for DHSC support, CQC and DHSC should work together to strengthen DHSC’s arrangements for sponsorship of CQC, reaching an advanced level of maturity against the Arm’s length body sponsorship code of good practice. This should be underpinned by more regular performance reviews between DHSC and CQC to reinforce and check progress against the recommendations in this report.

Metrics for performance review should be enhanced, with clear performance targets set for the next 6 to 12 months. Meetings should include senior civil servants at DHSC (ideally the relevant directors general) and should take place on a monthly basis. CQC should consider strengthening partnerships with those it regulates, including setting out more clearly what providers can expect from the regulator. Strengthened sponsorship arrangements will further reinforce accountability.

It is recognised that a number of the recommendations made within this report will require wider system consideration - for example, how to ensure a sufficient focus on effectiveness and outcomes, and that use of resources is woven through all quality assessments and recommendations. DHSC will need to lead or co-ordinate this work. As part of it, the terms of reference of the NQB should be reviewed.

DHSC should support CQC in progressing the next steps.

Over the next 4 months, a second review will report on proposed improvements to the wider landscape for quality of care, with a focus on patient safety.

Over the next 6 months, there needs to be:

  • rapid improvements to operational performance within CQC
  • significant steps taken towards rebuilding expertise within CQC
  • significant steps taken towards fostering stronger relationships with providers and the wider sectors in order to resurrect credibility

Over the next 12 months, the SAF needs to be fundamentally enhanced and improved with:

  • a review of quality statements
  • far greater emphasis on effectiveness, outcomes, innovation and use of resources
  • clear descriptors for each quality statement or evidence category

Appendix 1: quality and safety history and context

This list is not exhaustive.

Increasing interest in quality of care accompanied by increasing role of clinical audit.

Under the Labour administration, the Department of Health published ‘Quality in the NHS’ in 1998, which was seen as a step change in focusing on systematic improvement in the quality of care.

Investigation into the Bristol heart scandal.

Establishment of the Commission for Health Improvement (CHI). The statutory functions conferred on CHI were set out in section 20 of the Health Act 1999 and the Commission for Health Improvement (Functions) Regulations 1999 to 2000.

Establishment of the National Care Standards Commission by the Care Standards Act 2000 as a non-departmental public body to regulate independent health and social care services, and improve the quality of those services in England. It was set up on 9 April 2001.

Creation of the National Patient Safety Agency (NPSA).

Establishment of the Shipman Inquiry.

Creation of Council for Healthcare Regulatory Excellence.

Establishment of Medicines and Healthcare products Regulatory Agency.

Establishment of the Commission for Social Care Inspection by the Health and Social Care (Community Health and Standards) Act 2003. Under the terms of the 2003 act, the commission assumed responsibility for the functions previously exercised by CHI.

Creation of the National Patient Safety Forum.

Establishment of the NQB.

Establishment of CQC, replacing the Healthcare Commission, the Mental Health Act Commission and the Commission for Social Care Inspection.

NPSA publishes the first version of the Never Events policy and framework.

NPSA publishes a National Framework for Reporting and Learning from Serious Incidents Requiring Investigation (the Serious Incident Framework).

Council for Healthcare Regulatory Excellence becomes the Professional Standards Authority for Health and Social Care.

NPSA transferred to NHS England under provisions in the Health and Social Care Act 2012.

Robert Francis QC’s Report of the Mid Staffordshire NHS Foundation Trust Public Inquiry published in February 2013.

CQC introduced its new regulatory model.

Introduction of Regulation 20 (the duty of candour) for trusts through the Health and Social Care Act 2008 (Regulated Activities) Regulations 2014.

Establishment of Patient Safety Collaboratives .

CQC rolls out its new approach to regulating adult social care in England.

Ongoing implementation of comprehensive inspections and ratings for all NHS and care providers by CQC, and a focus by CQC on patient safety in response to the Mid Staffordshire NHS Foundation Trust Public Inquiry.

Launch of the GIRFT programme.

CQC publishes its review into learning from deaths, Learning, candour and accountability.

DHSC establishes the Healthcare Safety Investigation Branch.

CQC publishes Opening the door to change, looking at why patient safety incidents like ‘never events’ were still occurring.

The first NHS Patient Safety Strategy is published by NHS England.

CQC introduces the single assessment framework.

New duty on CQC to assess local authorities’ delivery of their duties under part 1 of the Care Act 2014.

Health Services Safety Investigations Body is established as an arm’s length body.

Appendix 2: review terms of reference

To examine the suitability of CQC’s new SAF methodology for inspections and ratings. In particular, to:

  • ensure the new approach supports the efficient, effective and economic provision of health and care services
  • contrast the previous approach, which prioritised routine and/or some reactive inspections, with the new, more sophisticated approach informed by emerging risks and data. The new inspection methodology for ICSs - both the NHS and social care components - will also be reviewed
  • consider what can be done to ensure appropriate alignment between the NHS Oversight Framework and CQC inspection and ratings
  • consider what can be done to ensure trusts respond effectively, efficiently and economically to CQC inspections and ratings
  • consider whether CQC is appropriately set up, in both its leadership and staffing, to ensure that its new statutory role of assuring local government social care functions is as effective as possible alongside its wider responsibilities, and how it will review and monitor this over time
  • examine how senior NHS leaders can be more involved in CQC inspections, and the actions they can take to positively support CQC activity, to ensure CQC’s work is translated into strong outcomes for patients
  • examine how social care inspections and ratings make use of the user voice and capture patient experience

Areas of focus

The main areas of focus include:

  • staffing and service innovation - is CQC inspection an actual or perceived barrier to workforce reform and change, and service innovation? If so, how are these barriers being addressed?
  • provider responses - do CQC’s investigations and ratings drive the correct responses among providers, in respect of ensuring the delivery of safe and efficient, effective and economic services?
  • data quality and learning - what more could be done to ensure that regulated bodies understand the importance of ensuring that the data they produce, on which regulation activity is based, is of sufficient quality?
  • patient satisfaction and access - do inspections and ratings take account not only of access statistics but of patient experience of service access more broadly? Are patients’ voices being heard, both in health and social care?
  • ICSs (health and social care aspects)
  • local government social care functions
  • how will the new scoring system affect ratings and the 5 key areas (safe, caring, responsive, effective and well led)?
  • are CQC’s regulations and processes fit for an age of digital healthcare?

Appendix 3: list of people spoken to for this review

Over the course of the review, we spoke to:

  • 6 CQC executives including the previous CEO
  • the CQC chair and 8 CQC non-executive directors
  • 16 members of the CQC senior leadership team
  • 11 CQC organisational leads
  • 12 CQC specialist advisers and national professional advisers
  • 34 wider CQC staff members
  • 7 representatives from trade unions
  • 8 DHSC senior civil servants
  • 1 senior civil servant from the Ministry of Housing, Communities and Local Government (previously the Department for Levelling Up, Housing and Communities)
  • 9 NHS England national directors including chair
  • 7 NHS England regional directors
  • 52 NHS trust leaders including chairs, chief executives and medical directors of NHS trusts (spread across acute and specialist, community and mental health trusts, and foundation trusts)
  • 3 members, 1 director and 1 chief executive of NHS-related bodies
  • 16 general practitioners and general practitioner leaders
  • 3 senior members of the British Dental Association
  • 11 senior members of organisations in the independent sector
  • 6 senior members of social care provider representative organisations
  • 8 members of individual adult social care providers
  • 22 quality leaders of social care providers
  • 8 local authority directors or chief executives of adult social care
  • 9 ICB chairs and chief executives
  • 6 chairs and chief executives of statutory and quality-related health bodies
  • 3 contributors from academia or think tanks
  • 14 senior members from user voice organisations
  • 4 people from London councils
  • 10 other individuals

A subset of the people listed above participated in an advisory board to review emerging findings and recommendations, meeting twice during the development of this report.

Appendix 4: example of a rating descriptor

CQC defines the ‘safeguarding’ quality statement as:

  • there is a strong understanding of safeguarding and how to take appropriate action
  • people are supported to understand safeguarding, what being safe means to them, and how to raise concerns when they don’t feel safe, or they have concerns about the safety of other people
  • there are effective systems, processes and practices to make sure people are protected from abuse and neglect
  • there is a commitment to taking immediate action to keep people safe from abuse and neglect. This includes working with partners in a collaborative way
  • people are appropriately supported when they feel unsafe or experience abuse or neglect
  • where applicable, there is a clear understanding of the Deprivation of Liberty Safeguards (DoLS), which are only used when in the best interests of the person
  • safeguarding systems, processes and practices mean that people’s human rights are upheld and they are protected from discrimination
  • people are supported to understand their rights, including their human rights, rights under the Mental Capacity Act 2005 and their rights under the Equality Act 2010

Appendix 5: examples of rating descriptors in previous assessment model

We have taken this from page 26 of CQC’s ‘Key lines of enquiry for healthcare services’, which includes ratings characteristics.

We have reproduced, for ease of reference, the rating characteristics for ‘safe’ in healthcare services below.

By safe, we mean people are protected from abuse and avoidable harm.

Note: abuse can be physical, sexual, mental or psychological, financial, neglect, institutional or discriminatory abuse.

The ratings are:

  • outstanding: people are protected by a strong comprehensive safety system, and a focus on openness, transparency and learning when things go wrong
  • good: people are protected from avoidable harm and abuse. Legal requirements are met
  • requires improvement: there is an increased risk that people are harmed or there is limited assurance about safety. Regulations may or may not be met
  • inadequate: people are not safe or at high risk of avoidable harm or abuse. Normally some regulations are not met

S1 (CQC code for safe): how do systems, processes and practices keep people safe and safeguarded from abuse?

Outstanding

Outstanding means:

  • there are comprehensive systems to keep people safe, which take account of current best practice. The whole team is engaged in reviewing and improving safety and safeguarding systems. People who use services are at the centre of safeguarding and protection from discrimination
  • innovation is encouraged to achieve sustained improvements in safety and continual reductions in harm

Good

Good means:

  • systems, processes and standard operating procedures:
    • are reliable and minimise the potential for error
    • reflect national, professional guidance and legislation
    • are appropriate for the care setting and address people’s diverse needs
    • are understood by all staff and implemented consistently
    • are reviewed regularly and improved when needed
  • staff have received up-to-date training in all safety systems, processes and practices
  • safeguarding adults, children and young people at risk is given sufficient priority. Staff take a proactive approach to safeguarding and focus on early identification. They take steps to prevent abuse or discrimination that might cause avoidable harm, respond appropriately to any signs or allegations of abuse and work effectively with others, including people using the service, to agree and implement protection plans. There is active and appropriate engagement in local safeguarding procedures and effective work with other relevant organisations, including when people experience harassment or abuse in the community

Requires improvement

Requires improvement means:

  • systems, processes and standard operating procedures are not always reliable or appropriate to keep people safe
  • monitoring whether safety systems are implemented is not robust. There are some concerns about the consistency of understanding and the number of staff who are aware of them
  • safeguarding is not given sufficient priority at all times. Systems are not fully embedded, staff do not always respond quickly enough, or there are shortfalls in the system of engaging with local safeguarding processes and with people using the service
  • there is an inconsistent approach to protecting people from discrimination

Inadequate

Inadequate means:

  • safety systems, processes and standard operating procedures are not fit for purpose
  • there is wilful or routine disregard of standard operating or safety procedures
  • there is insufficient attention to safeguarding children and adults. Staff do not recognise or respond appropriately to abuse or discriminatory practice
  • care premises, equipment and facilities are unsafe

Appendix 6: examples of combined ratings

The tables below show how locations can hold ‘combined ratings’. A combined rating is where the overall ratings are aggregated from 2 or more inspections on different dates.

This can happen at the domain level (from focused inspections on a subset of domains), at the service level (from focused inspections of individual services, which are more common for a larger location such as an acute hospital), or both.
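The mechanics are easier to see with a small sketch. The Python below is purely illustrative and is not CQC’s actual aggregation methodology: it assumes a location keeps the most recent rating for each service and domain, and that the location-level domain rating follows a simple ‘worst recent rating’ rule. That rule happens to match the St Mary’s Hospital example below, but not every aggregation CQC performs (Wythenshawe’s ‘safe’ rating, for instance, settles at ‘requires improvement’ rather than the worst service rating). All names, and the rule itself, are our assumptions.

```python
from datetime import date

# Ratings from best to worst; a higher index means a worse rating.
SCALE = ["Outstanding", "Good", "Requires improvement", "Inadequate"]

def combine(inspections):
    """Merge dated inspections into a single 'most recent rating' table.

    `inspections` is a list of (date, ratings) pairs, where each ratings
    dict maps (service, domain) -> rating. Later inspections overwrite
    earlier ones, so a focused re-inspection of one service updates only
    that service's entries while older ratings elsewhere are retained.
    """
    latest = {}
    for _when, ratings in sorted(inspections, key=lambda pair: pair[0]):
        latest.update(ratings)
    return latest

def location_domain_rating(latest, domain):
    """Illustrative location-level rating for one domain: the worst of
    the most recent service-level ratings held for that domain."""
    ratings = [r for (_service, d), r in latest.items() if d == domain]
    return max(ratings, key=SCALE.index) if ratings else "Not rated"

# Mirrors the St Mary's Hospital example below: a 2019 full inspection
# rated 'safe' as Good for both services; a 2023 focused inspection of
# maternity alone pulled the location's 'safe' rating down.
full_2019 = {("maternity", "safe"): "Good",
             ("neonatal services", "safe"): "Good"}
focused_2023 = {("maternity", "safe"): "Inadequate"}

latest = combine([(date(2019, 3, 1), full_2019),
                  (date(2023, 7, 1), focused_2023)])
print(location_domain_rating(latest, "safe"))  # Inadequate (compare table 5d)
```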

The data used to create these tables was sourced from CQC inspection reports, found under ‘Find and compare services’, for the following care providers:

  • Manchester University NHS Foundation Trust March 2019 report, which includes data on the ratings for Wythenshawe Hospital and St Mary’s Hospital
  • Wythenshawe Hospital and St Mary’s Hospital July 2023 reports
  • Aire Valley Surgery March 2016 report
  • Aire Valley Surgery January 2020 report
  • Ranelagh House October 2019 report
  • Ranelagh House March 2023 report

Wythenshawe Hospital

Wythenshawe Hospital had a full inspection across all services and domains in March 2019, except that the ‘effective’ domain was not rated for outpatients.

In July 2023, only the maternity service was inspected, and its ‘safe’, ‘well led’ and overall ratings were all downgraded.

This meant Wythenshawe Hospital’s overall location ratings for ‘safe’, ‘well led’ and overall were downgraded to ‘requires improvement’, even though several services were not re-inspected.

Table 5a: ratings for Wythenshawe Hospital, published March 2019

Service | Safe rating | Effective rating | Caring rating | Responsive rating | Well led rating | Overall rating
Urgent and emergency care | Requires improvement | Requires improvement | Good | Requires improvement | Good | Requires improvement
Medical care (including older people’s care) | Good | Good | Good | Requires improvement | Good | Good
Surgery | Good | Good | Good | Good | Good | Good
Critical care | Good | Good | Outstanding | Outstanding | Good | Outstanding
Maternity | Good | Good | Good | Good | Good | Good
Services for children and young people | Good | Good | Good | Outstanding | Requires improvement | Good
End of life care | Good | Good | Outstanding | Good | Good | Good
Outpatients | Good | Not rated | Good | Good | Good | Good
Overall | Good | Good | Outstanding | Requires improvement | Good | Good

Table 5b: ratings for Wythenshawe Hospital, published July 2023

Service | Safe rating | Well led rating | Overall rating
Maternity | Inadequate | Requires improvement | Requires improvement
Overall | Requires improvement | Requires improvement | Requires improvement

St Mary’s Hospital

Similarly, St Mary’s Hospital had a full inspection across all services and domains in March 2019.

In July 2023, only the maternity service was inspected, with ‘safe’ rated ‘inadequate’ and ‘well led’ rated ‘requires improvement’. This led to the overall location ratings for these 2 domains and the overall rating being downgraded: to ‘requires improvement’ for overall and ‘well led’, and to ‘inadequate’ for ‘safe’.

Table 5c: ratings for St Mary’s Hospital, published March 2019

Service | Safe rating | Effective rating | Caring rating | Responsive rating | Well led rating | Overall rating
Maternity | Good | Good | Good | Good | Good | Good
Neonatal services | Good | Good | Outstanding | Good | Good | Good
Overall | Good | Good | Outstanding | Good | Good | Good

Table 5d: ratings for St Mary’s Hospital, published July 2023

Service | Safe rating | Well led rating | Overall rating
Maternity | Inadequate | Requires improvement | Requires improvement
Overall | Inadequate | Requires improvement | Requires improvement

Aire Valley Surgery and Ranelagh House

These combined ratings do not occur exclusively for service-level inspections in hospitals: Aire Valley Surgery (a GP practice) and Ranelagh House (a care home) also had focused domain-level inspections, in January 2020 and March 2023 respectively.

For Ranelagh House, the re-inspection and downgrading of the ‘well led’ domain to ‘requires improvement’ caused the overall location rating to be downgraded to ‘requires improvement’.

Table 5e: ratings for Aire Valley Surgery in March 2016 and January 2020

Publication date | Safe rating | Caring rating | Effective rating | Responsive rating | Well led rating | Overall rating
9 January 2020 | Not inspected | Not inspected | Good | Not inspected | Good | Good
21 March 2016 | Good | Good | Good | Good | Good | Good

Table 5f: ratings for Ranelagh House in October 2019 and March 2023

Publication date | Safe rating | Caring rating | Effective rating | Responsive rating | Well led rating | Overall rating
22 March 2023 | Requires improvement | Not inspected | Not inspected | Not inspected | Requires improvement | Requires improvement
17 October 2019 | Requires improvement | Good | Good | Good | Good | Good

Footnotes

Note: some groups refer to users of social care as “those who draw on care and support in social care”.

Total UK public and private expenditure on health and social care is taken from tables 1a and 7a of the Office for National Statistics (ONS) UK Health Accounts data set, which states that health-related expenditure totalled £292.476 billion in 2023 and long-term care (social care) £12.069 billion in 2022 - the most up-to-date data available. UK GDP is estimated at around £2.5 trillion in 2022 to 2023, according to HM Treasury’s ‘GDP deflators at market prices, and money GDP June 2024 (Quarterly National Accounts)’. Taken together, this leads to an estimate of approximately 12% of GDP.

Total public expenditure on health in 2022 to 2023 was 18.4% of total managed expenditure (£213.3 billion on health and £1,157.4 billion total managed expenditure), as stated in HM Treasury’s ‘Public spending statistics: May 2024’. Total managed expenditure includes all outgoings from government, comprising resource and capital spending on services, and all benefits and debt repayments. Total social care spending is £27.3 billion for the UK. This is the total of ‘sickness and disability - of which personal social services’ and ‘old age - of which personal social services’ - see HM Treasury’s ‘Public Expenditure Statistical Analyses 2024’, page 74, rows 10.1 and 10.2 of table 5.2. Taken together, health and social care public expenditure is estimated to be approximately 21% of total public expenditure.
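For ease of reference, the percentages quoted in the 2 footnotes above can be reproduced directly from the figures they cite. The short sketch below simply redoes that arithmetic (the variable names are ours; the figures are as quoted):

```python
# Quick check of the expenditure arithmetic in the two footnotes above.

# Share of GDP (ONS UK Health Accounts + HM Treasury GDP deflators)
health_spend_bn = 292.476    # UK health expenditure, 2023 (£bn)
long_term_care_bn = 12.069   # UK long-term care expenditure, 2022 (£bn)
gdp_bn = 2_500               # UK GDP, 2022 to 2023 (~£2.5 trillion, in £bn)
print(f"{(health_spend_bn + long_term_care_bn) / gdp_bn:.1%}")  # ~12.2% -> 'approximately 12%'

# Share of total managed expenditure (HM Treasury public spending statistics)
public_health_bn = 213.3     # public spending on health, 2022 to 2023 (£bn)
social_care_bn = 27.3        # personal social services spending (£bn)
tme_bn = 1_157.4             # total managed expenditure (£bn)
print(f"{public_health_bn / tme_bn:.1%}")                       # 18.4%
print(f"{(public_health_bn + social_care_bn) / tme_bn:.1%}")    # ~20.8% -> 'approximately 21%'
```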

NHS Confederation. ‘Creating better health value: understanding the economic impact of NHS spending by care setting.’ 2023.

For example, the Nolan Principles, which provide a framework for ethical behaviour and good governance in public life.

Healthcare Quality Improvement Partnership. ‘Clinical audit: a manual for lay members of the clinical audit team.’ 2012.

NHS England. ‘Consolidated NHS provider accounts: annual report and accounts 2022 to 2023’, page 8. 2024.

DHSC and Government Office for Science. ‘Evidence review for adult social care reform’, pages 10 to 11, paragraph 2.7, figure 2. 2021.

ONS. ‘Care homes and estimating the self-funding population, England: 2022 to 2023.’ 2023.

ONS. ‘Estimating the size of the self-funding population in the community, England: 2022 to 2023.’ 2023.

Competition and Markets Authority (CMA). ‘Care homes market study.’ 2016.

DHSC analysis of the care provider directory (available under ‘Using CQC data’), accessed August 2024, filtered to the location type ‘social care organisation’ for a broad definition of adult social care and treating each brand as a single provider.

‘Care Homes for Older People UK Market Report’, 34th edition. 2023. LaingBuisson, London.

Due to definitional issues, no source has a UK or England-based value on the same definition of ‘the independent sector’ as regulated by CQC.

‘Private Healthcare Self-Pay UK Market Report’, 5th edition, page 2. 2023. LaingBuisson, London.

‘UK Healthcare Market Review’, 34th edition. LaingBuisson, London.

DHSC. ‘DHSC annual report and accounts: 2022 to 2023.’ 2024: page 5, table 69. This states that total independent sector expenditure of £12.7 billion consists of £11.454 billion from independent sector providers and £1.264 billion from voluntary sector or not-for-profit providers.

CQC. ‘CQC Board meeting: 22 May 2024.’ 2024: see ‘Corporate performance report (2023/24 year end) - appendix’.

Such as Norfolk County Council’s Provider assessment and market management solution (PAMMS), an online assessment tool used to help assess the quality of care delivered by providers of adult social care services in Norfolk.

Cavendish C, Moberg G and Freedman J. ‘A better old age? Improving health and care outcomes for the over-65s in the UK.’ 2024: M-RCBG Associate Working Paper No. 236, Mossavar-Rahmani Center for Business and Government, Harvard Kennedy School.

Han TS, Murray P, Robin J, Wilkinson P, Fluck D and Fry CH. ‘Evaluation of the association of length of stay in hospital and outcomes.’ International Journal for Quality in Health Care 2022: volume 34, issue 2.

NHS Confederation and CF. ‘Unlocking the power of health beyond the hospital: supporting communities to prosper.’ 2023.

DHSC. ‘Independent investigation of the NHS in England.’ 2024.

For example, NAO’s 2021 report ‘The adult social care market in England’ and CMA’s 2017 ‘Care homes market study: summary of final report’ called for “greater accountability for local authorities in delivering on their care obligations”.

Association of Directors of Adult Social Services (ADASS). ‘ADASS Spring Survey 2024.’ 2024.

NHS England. ‘Adult Social Care Statistics in England: An Overview.’ 2024.

Oikonomou E, Carthey J, Macrae C and others. ‘Patient safety regulation in the NHS: mapping the regulatory landscape of healthcare.’ BMJ Open 2019: volume 9, issue 7.
