Beware of Samples

Thank you for the positive feedback and support on our first blog. We received almost unanimous support which is really encouraging, therefore we’ll continue and I’ll update you as time goes by on my own QI project to reduce my cholesterol levels.

As promised I wanted this blog to focus on the pitfalls of analysing and interpreting data based on samples as I feel this is an important skill that is often overlooked.

Resources within clinical audit are stretched, as in other areas of the NHS, at a time when there is an increased need to provide assurance or improvement. This combination sometimes pressures project teams to "make good" with whatever information has been collected and report to meet deadlines, even if that isn't in line with the project plan.

As analysts it's easy to switch on the autopilot: calculate the percentages, create the pie charts and send back the analysis as requested, presenting the results as matters of fact regardless of how the data was obtained. However, it is our duty to critique the data and present it accordingly, detailing its limitations. It is really important for everyone involved (clinicians, patients, the organisation and your own reputation) that we get this right. Acting on sub-standard data can be worse than doing nothing at all: it can provide false assurance, or initiate a change in practice when there wasn't actually a problem.

So how do we know that the dataset we are analysing is fit for purpose? I’m sure answers to this question would vary greatly, as I’m not entirely sure there is enough support and training provided on this at present.

The Health Foundation recently highlighted that there is a current lack of skilled analysts within the NHS so this presents a problem which will take time to address.

So to help in the interim – I wish to share my knowledge on this matter via a set of 5 tips to highlight key aspects of analysing data based on samples and where to highlight possible limitations. This has been gleaned from over 15 years analysing healthcare data since graduating (with a joint statistics and business BSc). My wife is currently reading Davina McCall’s autobiography – which is entitled ‘Lessons I’ve Learned – I’ve made mistakes so you don’t have to’ so this is something along those lines (Other autobiographies are available but not as aptly entitled :-)).

Tip 1 – Make sure you are clear what the aims and plan of the project are. Is the project for assurance, improvement or research?

Experience over time has taught me the importance of understanding the reasoning behind the project, so keep your analysis focused around the aims. Assurance and improvement data analysis require different approaches, so it's essential to understand the differences between the two. Sometimes a pragmatic approach is required to find a balance between what is needed and what is possible, but this needs to be explained.

Tip 2 – Understand what sampling techniques have been used

A) Is it clear what population you're trying to draw conclusions about?

B) Is it clear how the sample was selected? Is the reasoning explained?

C) How confident do we need to be in this data? What margin of error can you accept?

I have provided links to sampling websites below that can help with this, so that you can identify how representative a sample is and its margin of error – both when planning and when critiquing retrospectively. There are also links to guides on sampling for clinical audit, which are worth referring to, especially for the techniques you can use to select your sample. The most important things to assess are: have we introduced bias through how the sample was drawn, and do we have enough data to be confident that our sample reflects the true result for our population?
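The calculators linked below do the arithmetic for you, but the underlying calculation is straightforward. As a rough sketch (the function name and the example figures are illustrative, not from any specific project), this is the standard formula for the sample size needed to estimate a proportion within a given margin of error, with a finite population correction applied:

```python
import math

def sample_size(population, margin_of_error=0.05, confidence=0.95, p=0.5):
    """Sample size needed to estimate a proportion within the given
    margin of error, adjusted for a finite population."""
    # z-values for the common confidence levels
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]
    # Size for an effectively infinite population
    # (p = 0.5 is the most conservative assumption)
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    # Finite population correction
    n_adj = n / (1 + (n - 1) / population)
    return math.ceil(n_adj)

# e.g. 500 eligible case notes in the audit period,
# aiming for a ±5% margin of error at 95% confidence
print(sample_size(500))  # → 218
```

Note how quickly the required sample grows as the margin of error tightens, and how little the population size matters once it is large: the same ±5% margin needs around 385 cases whether the population is 10,000 or 10 million.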

Tip 3 – Have SMART questions been used?

It's very important to use questions that are SMART (Specific, Measurable, Achievable, Realistic & Timely) when collecting data, to minimise individual interpretation, especially if more than one person is collecting data or completing the survey. Non-compliance or satisfaction levels can sometimes appear lower if your response options have no opt-out (N/A, contra-indication or no opinion). If you collect information that is not linked to your aims, this is potentially an information governance concern (we plan to write future blogs on this matter).

Tip 4 – What checks and validation have been undertaken to ensure the data has been collected accurately and consistently?

Who has collected the data? If it was more than one person, check for consistency – it is usually best to collect the first few cases together and discuss the interpretation of questions and answers to resolve any misunderstandings before going it alone. Pilots also highlight variations you may have missed in your planning. Inter-rater reliability checks are ideal for testing this, but are often not used due to limited time and resources. It is also best practice to carry out a senior review of any cases recorded as non-compliant, to ensure there wasn't a valid reason that had not previously been included as an option.

Tip 5 – Use confidence intervals


A sample only provides us with a point estimate of the true result for the population. Adding a confidence interval to our sample value gives us a range of compliance / satisfaction that we can be confident (often at the 95% level) contains the true population result. In clinical audit we usually look at proportions (%) of compliance, so make sure you use the right formula – there is a useful link below if you want to skip the calculation part.
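As an illustration (the function name and the 45-of-50 example are mine, not from a real audit), here is one way to compute a confidence interval for a proportion. This uses the Wilson score interval, which behaves better than the simple textbook (Wald) interval when compliance is close to 0% or 100% or the sample is small – both common in clinical audit:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a proportion.
    z = 1.96 gives the usual 95% level."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half_width = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half_width, centre + half_width

# e.g. 45 of 50 audited cases were compliant
lo, hi = proportion_ci(45, 50)
print(f"{45/50:.1%} compliant (95% CI {lo:.1%} to {hi:.1%})")
# → 90.0% compliant (95% CI 78.6% to 95.7%)
```

Reporting "90% compliant (95% CI 79% to 96%)" tells a very different story from a bare "90% compliant": the true population compliance could plausibly be anywhere in that range, which is exactly the kind of limitation we should be detailing.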

In Summary

A blog can obviously only go into so much detail, but hopefully this stimulates some further thinking and a review of your own data analysis and interpretation skills. I have added a few references below for additional reading if you're interested, and where training is available. HQIP have produced a very good guide to ensuring data quality in clinical audits, and I recommend reading this as a starting point. Please add a comment below if you have anything relevant to add or share.

These views are my own and do not reflect any NQICAN discussion other than general experience obtained. I hope this will start the conversation across our clinical audit networks (and beyond) & perhaps provide a basis for future network training sessions.

The Clinical Audit Support Centre recently shared some results from their annual 'state of clinical audit' survey. I tweeted that the survey had limitations and promised to explain why, so hopefully I have now indirectly done this. It would be great to see some of these tips applied to the final analysis, to give the report and future surveys increased value for the clinical audit community and beyond.

Thanks for taking the time to read this blog – well done if you’ve made it to the end 🙂

Feedback as always very welcome.

Useful resources

Understanding analytical capability in health care – Do we have more data than insight? The Health Foundation

How To: Set an Audit Sample & Plan Your Data Collection – University Hospitals Bristol NHS Foundation Trust

http://www.uhbristol.nhs.uk/files/nhs-ubht/5%20How%20To%20Sample%20Data%20Collection%20and%20Form%20v3.pdf

How to Select an Audit Sample – NHS Blood & Transplant

http://hospital.blood.co.uk/media/26844/select-audit-sample.pdf

HQIP Guide to Ensuring Data Quality in Clinical Audits

http://www.hqip.org.uk/public/cms/253/625/19/191/HQIP-Guide-to-Ensuring-Data-Quality-in-CA-Reviewed%202011.pdf?realName=Zmh8bI.pdf

Sampling websites:

https://www.checkmarket.com/sample-size-calculator/

http://www.raosoft.com/samplesize.html

Confidence interval for proportions calculator

http://www.sample-size.net/confidence-interval-proportion/

 

12 comments

  1. I read this with interest. I am particularly keen to know more about your approach in Tip 1 – “Assurance and improvement data analysis require different approaches”
    Could you provide a couple of examples of how you analyse for each approach please?

  2. I might be a sample size of one but this blog isn’t for me. If it was entitled ‘Beware of poor quality data collection methods’ then fair enough. The blog doesn’t cover the basics in sampling e.g. simple random, stratified, cluster etc. The mandatory national audits that we have to take part in employ some very varied and dubious sampling approaches. Perhaps the author could give personal reflections on those NCAs demonstrating best practice and those that employ questionable methodologies? Thank you.

    1. Thanks for the feedback. As mentioned the blog was written as an overview on the subject providing tips from my experience along with signposts to further reading. In hindsight I should have added links to the relevant guides in the text as well as at the end – we will do this for future blogs.
      We have a blog planned on best practice in NCAs soon around the work our Yorkshire network have been doing to address these points so I hope you continue to give the blog a read.
      If you wish to discuss further please drop me a line.

  3. Really good blog. Would be interested in reading something further down the line on Clinical Audit feeding Quality Improvement – it seems very topical at the moment.

  4. Well done Carl on another interesting read. Given the reference to Clinical Audit Support Centre in the blog we thought it was best to reply here. We are pleased our survey has in part led to this blog.

    In many respects it isn’t possible to compare our survey with this blog as this relates to clinical audit. For example, we can’t control the return rates for our annual survey whereas we can usually apply sampling techniques to audit projects.

    Our survey has limitations and we always highlight these in the relevant report. For example, respondents are sending information direct to CASC and therefore one would expect questions relating to CASC products to be biased. We’d also agree that a few questions could be SMARTer so we will work with you and NQICAN in the future to improve these.

    However, overall our survey has 7 years of data and the results never bounce around much across the questions. We’ve had over 100 respondents every year since 2010 with a new high of well over 200 respondents in Dec 2016. Respondents reply anonymously and can opt out of questions they choose not to answer. We are also pleased that the likes of NAGCAE chair Prof Black has repeatedly presented data from our survey (from years when responses were far fewer than 2016) and HSJ have also recently included results from the most recent survey in their recent article on NCAs.

    Our survey is not perfect and we would never claim otherwise, but few clinical audits or healthcare surveys are. What we would welcome is a wider debate. What about data (sometimes not current) from NCAs? As implied in the comment above, how strong are sampling techniques in NCAs? Are NCA questions all SMART? Hopefully, our survey that in part led to this blog could be a springboard for further discussion. We are happy to share the spotlight and be critiqued alongside others, especially publicly funded projects.

    We look forward to blog 3!

  5. Well done Carl. My comment is not so much about the methods of sampling as about the lack of quality assurance around them, as you have also reminded me of another pitfall: the motive for the audit – quality improvement or compliance evidence? If the audit is being done to tick a box for compliance (CQC evidence, NHS LA evidence), we have seen first-cycle audits meet compliance standards for the likes of the CQC or NHS LA, and this has been used by some as an argument not to re-audit. This happens a lot because of the pressures on audit teams in the NHS. But in reality the re-audit – even on a clinical project that scored 100% compliance on every criterion in the first cycle – is in large part a validation of your sampling in the first audit cycle and assurance against its findings.
    Even if it's a national audit linked to a CQUIN submission, you would think the quality of the samples would be all the more important. I have experienced this, but I have also been aware of the opposite, because the CQUIN payment is of more local trust importance than being a point or two down on a national audit SPC chart.
    Where there is pressure to demonstrate high-quality compliance to the CQC, the evidence or the CQUIN payment linked to a national audit submission can become the target, and the quality of the data becomes secondary. Those of us who, like yourself, have worked in clinical audit for many years have seen this bias and countered it, and educated people in the process, but we are also very aware of how clinical audit teams can be toothless tigers.
    However, my concern is that national audits form part of the 150 different pieces of data evidence the CQC turns to when choosing whom to inspect – another external pressure linked to national audits – so you can ask where the quality assurance around the sampling at local sites is; often the generic tools do not work with local systems (square pegs in round holes). For me some national audits do achieve this, such as MINAP, which has been around since the 1990s and does have local validation audits of its submitted data, or some of the national cancer audits – but then again those national audits have had cancer information teams grow up around them.
    HQIP can't do this – it would be a conflict of interest. NQICAN have done some quality assurance around national audit at a national level, but I would argue there is a role for clinical audit networks to do peer review of national audits in the trusts of neighbouring clinical audit networks (funded, of course). For me, local audits which had the support of a clinical team always provided a better indication of quality and standards of care, but the burden of national audits is reducing this. When I started working in clinical audit in the 1990s, clinical audit training would include an essential section on sampling. When you evaluated an audit project, you first looked at the data-collection form and the sampling, and from this you could decide whether the audit report had any credibility. Now you see clinical audit training in some trusts that does not even mention it. So thank you very much for this back-to-basics but essential blog.

    cheers
    John

  6. Hi
    My feeling is this, leaving aside the technicalities of sampling you need to think carefully about the following
    1. Does the data you collect actually answer the audit criterion, or is it merely what is available, or a proxy measure?
    2. If the audit is for quality improvement – I believe the original and primary purpose – then would you change your practice based on the sample size / strategy? If not, it is insufficient, irrespective of whether it is technically adequate.
    3. Clinical audit is not a particularly good way of providing assurance, it really only measures what happened, or happens in one place, at one time. Past performance may be a guide to future performance but it may well not be.
    P
