How we reach a rating (for assessments before December 2024)

Page last updated: 10 January 2025


How we reach a rating

To support the transparency and consistency of our judgements, we have introduced a scoring framework into our assessments.

Where appropriate, we’ll continue to describe the quality of care using our 4 ratings: outstanding, good, requires improvement, or inadequate.

When we assess evidence, we assign scores to the relevant evidence categories for each quality statement that we’re assessing. We then build these scores up from quality statements to key question ratings and an overall rating.

This approach makes clear the type of evidence that we have used to reach decisions.

Some types of services are exempt from CQC's legal duty to provide a rating. Read our guidance for non-rated services.

Scoring

Using scoring as part of our assessments will help us be clearer and more consistent about how we’ve reached a judgement on the quality of care in a service. The score will indicate a more detailed position within the rating scale. This will help us to see if quality or performance is moving up or down within a rating.

For example, for a rating of good, the score will tell us if this is either:

  • at the upper end of the range, nearing outstanding
  • at the lower end of the range, nearer to requires improvement.

Similarly, for a rating of requires improvement, the score would tell us if it was either:

  • at the upper end of the range, nearing good
  • at the lower end of the range, nearer to inadequate.

Our quality statements clearly describe the standards of care that people should expect.

To assess a specific quality statement, we will take into account the evidence we have in each relevant evidence category. This will vary depending on the type of service or organisation. For example, the evidence we will collect for GP practices will be different to what we’ll have available to us in an assessment of a home care service.

Evidence could be information that we either:

  • already have, for example from statutory notifications
  • actively look for, for example from an on-site inspection.

Depending on what we find, we give a score for each evidence category that is part of the assessment of the quality statement. All evidence categories and quality statements are normally weighted equally.

Scores for evidence categories relate to the quality of care in a service:

  • 4 = evidence shows an exceptional standard
  • 3 = evidence shows a good standard
  • 2 = evidence shows some shortfalls
  • 1 = evidence shows significant shortfalls

As we have moved away from assessing at a single point in time, we aim to assess different areas of the framework on an ongoing basis. This means we can update scores for different evidence categories at different times.

The first time we assess a quality statement, we score all the relevant evidence categories. After this, we can update our findings by updating individual evidence category scores. Any changes in evidence category scores can then update the existing quality statement score.
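As a loose illustration of this ongoing model, the evidence category scores for a quality statement could be held with the date each was last assessed, so that a single category can be refreshed without re-scoring the rest. This sketch is ours, not CQC's; the structure, names, dates and scores are illustrative only:

    # Evidence category scores for one quality statement (here, infection
    # prevention and control), each held with the date it was last assessed.
    # All names, dates and scores below are illustrative.
    ipc_scores = {
        "people's experiences":            {"score": 3, "assessed": "2024-03-01"},
        "feedback from staff and leaders": {"score": 2, "assessed": "2024-03-01"},
        "observation":                     {"score": 3, "assessed": "2024-03-01"},
        "processes":                       {"score": 3, "assessed": "2023-11-20"},
    }

    # A later assessment updates one category without touching the others;
    # the quality statement score is then recalculated from the full set.
    ipc_scores["observation"] = {"score": 4, "assessed": "2024-09-15"}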

We will follow these initial 3 stages for services that receive a rating:

  1. Review evidence within the evidence categories we’re assessing for each quality statement.
  2. Apply a score to each of these evidence categories.
  3. Combine these evidence category scores to give a score for the related quality statement.

After these stages, the quality statement scores are combined to give a total score and then a rating for the relevant key question (safe, effective, caring, responsive, and well-led).

We then aggregate the scores for key questions to give a rating for our view of quality at an overall service level. See how we aggregate ratings for different types of services.

How we calculate quality statement scores

When we combine evidence category scores to give a quality statement score, we calculate this as a percentage. This provides more detailed information at evidence category and quality statement level. See the worked example below.

To calculate the percentage, we divide the total evidence category scores by the maximum possible score. This maximum score is the number of relevant evidence categories multiplied by the highest score for each category, which is 4. This gives a percentage score for the quality statement.

We then convert this back to a score. This makes it easier to understand and combine with other quality statement scores to calculate the related key question score.

We use these thresholds to convert percentages to scores (a short sketch of the calculation follows this list):

  • 25 to 38% = 1
  • 39 to 62% = 2
  • 63 to 87% = 3
  • over 87% = 4
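
To make the arithmetic concrete, here is a minimal sketch in Python. It is not CQC's implementation: the function name is ours, and how fractional percentages at the band boundaries are treated is our assumption.

    def quality_statement_score(category_scores):
        """Combine evidence category scores (each 1 to 4) into a single
        quality statement score, using the published percentage method."""
        # Maximum possible score = number of evidence categories x 4
        max_possible = len(category_scores) * 4
        percentage = sum(category_scores) / max_possible * 100
        # Published bands; boundary handling for fractional values is assumed
        if percentage > 87:
            return 4
        if percentage >= 63:
            return 3
        if percentage >= 39:
            return 2
        return 1  # the 25 to 38% band (25% is the minimum possible)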

Our scoring model treats all evidence categories as equal when calculating the quality statement score. In some circumstances it is not appropriate to weight all scores equally, for example where we have served a Warning Notice that relates to a specific quality statement. In those cases, we set our judgement at the quality statement level to reflect the findings across the evidence categories.

How we calculate key question scores

We then use the quality statement score to give us an updated view of quality at key question level.

Again, we calculate a percentage score. We divide the total by the maximum possible score. This is the number of quality statements under the key question multiplied by the highest score for each statement, which is 4. This gives a percentage score for the key question.

At key question level, we translate this percentage into a rating rather than a score, using these thresholds:

  • 25 to 38% = inadequate
  • 39 to 62% = requires improvement
  • 63 to 87% = good
  • over 87% = outstanding

By using the following rules, we can make sure any areas of poor quality are not hidden (a short sketch applying them follows the list):

  • If the key question score is within the good range, but one or more of the quality statement scores is 1, the rating is limited to requires improvement.
  • If the key question score is within the outstanding range, but one or more of the quality statement scores is 1 or 2, the rating is limited to good.
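
The key question calculation and the two limiting rules could be sketched in the same way. This is again illustrative; in particular, whether the two rules cascade when both could apply is our reading, as the guidance states them separately.

    def key_question_rating(statement_scores):
        """Combine quality statement scores (each 1 to 4) into a key
        question rating, applying the rules above so that areas of poor
        quality are not hidden."""
        max_possible = len(statement_scores) * 4
        percentage = sum(statement_scores) / max_possible * 100
        if percentage > 87:
            rating = "outstanding"
        elif percentage >= 63:
            rating = "good"
        elif percentage >= 39:
            rating = "requires improvement"
        else:
            rating = "inadequate"
        # A score of 1 or 2 anywhere limits an outstanding rating to good
        if rating == "outstanding" and min(statement_scores) <= 2:
            rating = "good"
        # A score of 1 anywhere limits a good rating to requires improvement
        if rating == "good" and min(statement_scores) == 1:
            rating = "requires improvement"
        return rating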

Our judgements go through quality assurance processes.

For services that have not previously been inspected or rated, we will need to assess all quality statements in a key question before we publish the rating. For newly registered services, we’ll usually assess all quality statements within 12 months.

How we aggregate ratings using the rating principles

Overall location ratings are produced on the basis of the following principles (a rough sketch follows the list):

  1. The 5 key questions are all equally important and are weighted equally when aggregating.
  2. At least 2 of the 5 key questions would normally need to be rated outstanding, and the other 3 rated good, before an aggregated rating of outstanding can be awarded.
  3. There are a number of ratings combinations that will lead to a rating of good. The overall rating will normally be good if there are no key question ratings of inadequate and no more than one key question rating of requires improvement.
  4. If 2 or more of the key questions are rated as requires improvement, then the overall rating will normally be requires improvement.
  5. If 2 or more of the key questions are rated as inadequate, then the overall rating will normally be inadequate.
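
These principles could be sketched as follows. The guidance says 'normally' throughout, so this is indicative only; the reading of principle 2 and the fallback for combinations the principles do not cover (such as a single inadequate rating) are our assumptions.

    from collections import Counter

    def overall_rating(key_question_ratings):
        """Aggregate the 5 key question ratings into an indicative
        overall location rating, following the published principles."""
        counts = Counter(key_question_ratings)
        if counts["inadequate"] >= 2:
            return "inadequate"
        if counts["requires improvement"] >= 2:
            return "requires improvement"
        # Principle 2, read as: at least 2 outstanding and the rest good
        if counts["outstanding"] >= 2 and counts["outstanding"] + counts["good"] == 5:
            return "outstanding"
        if counts["inadequate"] == 0 and counts["requires improvement"] <= 1:
            return "good"
        return "requires improvement"  # assumption: combinations not covered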



Example: how we reach a rating

To assess quality against a particular quality statement, operational colleagues will look at the relevant evidence categories. In this example, we are just looking at the 'infection prevention and control' quality statement.

For this service, the key evidence categories for this quality statement are:

  • People's experiences
  • Feedback from staff and leaders
  • Observation
  • Processes

We would look at individual pieces of evidence under each evidence category and, based on the strength of what we find, give a score of 1 to 4.

For example, in the ‘people's experiences’ evidence category, we may look at:

  • patient surveys
  • complaints and compliments

To gather evidence in the ‘feedback from staff and leaders’ and ‘observation’ categories, we might schedule:

  • an inspection to look at the care environment
  • a call to speak with staff at the service.

We would then combine this new evidence with what we already hold on ‘processes’ to help us form a view of quality.

Example: combining evidence category scores to give a quality statement score

Evidence category                                | Score | Existing or updated score
People's experiences                             |     3 | updated
Feedback from staff and leaders                  |     2 | updated
Observation                                      |     3 | updated
Processes                                        |     3 | existing
Total score for the combined evidence categories |    11 |

We calculate this as a percentage so that we have more detailed information at evidence category and quality statement level.

To calculate the percentage, we divide the total (in this case 11) by the maximum possible score. This maximum score is the number of relevant evidence categories multiplied by the highest score for each category, which is 4. In this case, the maximum score is 16. This gives a percentage score for the quality statement of 69% (11 divided by 16 is 68.75%, rounded to 69%).

We convert this back to a score. This makes it easier to understand and combine with other quality statement scores to calculate the related key question score.

We use these thresholds to convert percentages to scores:

  • 25 to 38% = 1
  • 39 to 62% = 2
  • 63 to 87% = 3
  • over 87% = 4

In this case, the percentage score of 69% converts to a score of 3.
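
Using the sketch function from earlier, the example's numbers give the same result:

    scores = [3, 2, 3, 3]  # people's experiences, feedback, observation, processes
    # 11 / 16 = 68.75%, which falls in the 63 to 87% band
    print(quality_statement_score(scores))  # prints 3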

We then use this score to give us an updated view of quality at key question level. In this case it is for the safe key question:

Example: combining quality statement scores to give a key question rating

Quality statement                      | Score | Existing or updated score
Learning culture                       |     2 | existing
Safe systems, pathways and transitions |     3 | existing
Safeguarding                           |     3 | existing
Involving people to manage risks       |     2 | existing
Safe environments                      |     3 | existing
Infection prevention and control       |     3 | updated
Safe and effective staffing            |     2 | existing
Medicines optimisation                 |     3 | existing
Total score for the safe key question  |    21 |

Again, we calculate a percentage score. We divide the total (in this case 21) by the maximum possible score. For the safe key question, this is 8 quality statements multiplied by the highest score for each statement, which is 4. So the maximum score is 32. This gives a percentage score for the key question of 65.6% (21 divided by 32 is 65.625%, rounded to 65.6%).

At key question level we translate this percentage into a rating rather than a score, using these thresholds:

  • 25 to 38% = inadequate
  • 39 to 62% = requires improvement
  • 63 to 87% = good
  • over 87% = outstanding

Therefore, the rating for the safe key question in this case is good.
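
Checked against the earlier key question sketch (no quality statement scored 1, so the limiting rules do not change the result):

    scores = [2, 3, 3, 2, 3, 3, 2, 3]  # the 8 safe quality statements above
    # 21 / 32 = 65.625%, which falls in the 63 to 87% band
    print(key_question_rating(scores))  # prints: good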



Calculating the first scores using our new approach

When we assess services using our new approach, we will need to apply scores for each quality statement to decide the ratings. This page explains how we will do this.

Services with an existing rating or findings about compliance

When we carry out our first assessment of your service, we will select which quality statements to look at. We select these based on national priorities for each type of service, together with the information we hold about your service.

For each of the quality statements selected, we will collect evidence and give a score for all the relevant evidence categories. This means the scores for those quality statements will be based entirely on our new assessment.

For the remaining quality statements, the scores will be based on our previous findings, and we will show the date of that assessment alongside them. We derive these scores from the current published rating for the relevant key question. These scores will be:

  • 4 for each quality statement where the key question is rated as outstanding
  • 3 for each quality statement where the key question is rated as good
  • 2 for each quality statement where the key question is rated as requires improvement
  • 1 for each quality statement where the key question is rated as inadequate

There are 4 exceptions to this approach, for specific topics that have either moved from one key question to another or are new to our assessment framework.

For all services these exceptions are as follows:

  • The initial scores for the 'workforce wellbeing and enablement' quality statement will be based on the rating for the well-led question. This is because this topic area has moved from the well-led key question to the caring key question in our new framework.
  • We will not apply an initial score for the 'environmental sustainability' quality statement. This is because it is a new area in our framework.

For services previously inspected using the adult social care framework only:

  • The initial scores for the 'care provision, integration and continuity' quality statement will be based on the rating for the well-led key question.
  • The initial scores for the 'providing information' quality statement will be based on the rating for the effective key question. This is because this topic area has moved from the effective key question to the responsive key question in our new framework.
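
Taken together, a rough sketch of how initial scores might be derived. The quality statement names come from the lists above; the function name, parameter names and the `asc_framework` flag are hypothetical, not CQC's own:

    # Published mapping from a key question rating to an initial score
    RATING_TO_SCORE = {"outstanding": 4, "good": 3,
                       "requires improvement": 2, "inadequate": 1}

    def initial_score(statement, key_question, published_ratings, asc_framework=False):
        """Derive an initial quality statement score from the current
        published ratings. `key_question` is the key question the statement
        sits under in the new framework; `published_ratings` maps key
        question names to ratings; `asc_framework` marks services previously
        inspected using the adult social care framework only."""
        if statement == "environmental sustainability":
            return None  # new to the framework, so no initial score
        # Topics that moved key question take the rating of the one they came from
        if statement == "workforce wellbeing and enablement":
            return RATING_TO_SCORE[published_ratings["well-led"]]
        if asc_framework and statement == "care provision, integration and continuity":
            return RATING_TO_SCORE[published_ratings["well-led"]]
        if asc_framework and statement == "providing information":
            return RATING_TO_SCORE[published_ratings["effective"]]
        # Otherwise: the rating of the key question the statement sits under
        return RATING_TO_SCORE[published_ratings[key_question]]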

Services that have not yet been inspected

If your service has not previously been inspected when we assess using our new approach, we will not apply initial scores as there are no previous findings to base these on.

For these services, we will normally collect evidence for all the quality statements within the first year.

Services we do not rate

For some types of service, we do not have the legal ability to give a rating.

We will assess these services using the new framework. However, unlike services we rate, there are no scores for evidence categories, quality statements or key questions, and no overall rating. Read our guidance for non-rated services.
