Friday, November 21, 2014

GP Ratings - The Dangers of Flawed Data


This week the health watchdog, the Care Quality Commission (CQC), published a list of rankings for the risk of poor care in general practice. The aim is to give the public information about their surgeries and the quality of the services they offer. But are the rankings accurate, or will they cause unnecessary anxiety?

No serious professional would question the right of the public to know exactly what quality they are getting from their services, and this is even more the case when that service may mean the difference between life and death. General practice is an important part of many people’s lives, providing most of us with our entry point into the NHS. In fact, for the vast majority, it is the only contact we have with the health service from one year’s end to the next, as general practice deals with 90% of cases in-house without any need for onward referral to hospital. For over 50 million of us, general practice is the only experience we will have of the NHS in the next 12 months, so these figures matter. The big question is: are they reliable?

To answer this question we need to understand what the figures mean. They are an expression of the raw data gathered on each practice across a range of performance indicators, which CQC inspectors then use to guide their questions and inquiries when they visit the practice on an inspection. The data is bundled together and given a simple score from 1 to 6, with a score of 1 marking a significant potential worry. In short, these rankings are the raw data that helps to guide the inspection process, but they are not the results of the inspection itself. The data is not set in any context and is therefore largely meaningless on its own. The CQC is clear about this in its declaration that these rankings “do not amount to a judgement of practices”, but the danger, of course, is that this is exactly how the rankings will be seen by public and professionals alike.

In my everyday work as a GP I see many people who present with symptoms that are a very clear cause for concern. The likelihood is that the person sitting before me has some minor illness that is very simple and of no significant threat to them. However, their symptoms and the results of my examination may leave open the possibility of some much more serious diagnosis. I therefore arrange further, more detailed tests to establish the diagnosis definitively before making any pronouncement. In ordering these tests I am at pains to explain to the patient what we are doing and why. I take time to set out clearly the extremely unlikely but potentially devastating diagnosis that we seek to exclude. What I do not do is scare them half to death with the blunt statement “You’ll need a test. You might have cancer.”

This is effectively what the CQC has done in publishing this data before the inspections have taken place and without any attempt to establish its validity. Of course there will be some practices within the cohort giving concern that do in fact turn out, after inspection, to have genuine problems. Some may even be deemed unsafe. For many, however, there will be perfectly good reasons to explain why they have recorded apparently low scores when the care they deliver is in fact excellent. There are 101 reasons why data can be skewed, and it is not until one drills down into the detail that it can be established whether the cause for concern is genuine.

For example, in Somerset this year an innovative deal has been reached that frees practices from the box-ticking bureaucracy linked to a scheme called QOF (the GP Quality and Outcomes Framework). This has allowed practices to do even more of their exemplary work, as resources can be focused where they really matter: at the sharp end, with the patients who need them. In return a few boxes are left un-ticked. Better care, fewer ticks. Crucially, the CQC indicators are driven by those ticks, which raises the possibility that the rankings are completely inaccurate in Somerset. A similar scheme operated last year in Devon and Cornwall, raising doubts about the rankings’ accuracy there also.

The public deserves an accurate and robust inspection regime to assure quality in general practice. What it has been given in these misleading figures is a half-baked and distorted picture that is likely to cause serious anxiety. General practice is close to collapse, with chronic underfunding and severe workforce shortages. It is likely that, for the first time ever, 2015 will see significant numbers of people simply unable to find a GP as some surgeries close their doors forever. Against this background, the public and the profession deserve better than this shoddy approach from a watchdog that is supposed to be the guardian of quality.

Dr Mark Sanford-Wood
Devon Local Medical Committee

20th November 2014
