
Why Rigorous Survey Design Matters: Lessons from the Royal College of Physicians’ 2024 EGM

  • Writer: Neil Howie
  • Sep 15, 2024
  • 8 min read

Surveys are a popular method for healthcare researchers and students alike (and it is that time of year when master's students start thinking about dissertation ideas) to gather data, measure opinions, and inform decisions. Whether in academic research or practical healthcare settings, surveys offer a quick and effective way to capture a wide range of perspectives. However, the ease of conducting surveys can sometimes lead to a lack of rigour in their design and execution. This can result in misleading data, poor decision-making, and even reputational damage.


The recent issues surrounding the Royal College of Physicians’ (RCP) survey on physician associates, conducted in the lead-up to their 2024 Extraordinary General Meeting (EGM), serve as a timely reminder of the importance of conducting surveys rigorously. When surveys are poorly designed, hastily executed, or analysed with bias, they can do more harm than good.


This educational post is informed by the findings of the publicly available independent learning review, commissioned by the RCP and conducted by The King’s Fund, to examine the survey carried out around the 2024 EGM. By focusing on survey design and analysis, we aim to provide insights into how survey methodology can shape results and decision-making.


 

The Context: A Survey Gone Wrong


In early 2024, the RCP conducted a survey to gather opinions on physician associates—a group of healthcare professionals working in multidisciplinary teams across the National Health Service (NHS). The results of this survey were to be used in an important debate during their EGM, where members would discuss and vote on motions that could shape the future role of physician associates.


However, rather than providing clarity, the survey process became the subject of widespread criticism. From questionable design to biased analysis and poor presentation of results, the survey failed to meet the standards expected of a professional organisation like the RCP. Let’s break down the lessons from this incident, pairing each shortcoming with a best practice that addresses it.


 

Lesson 1: Survey Design Must Be Clear and Objective


One of the fundamental issues with the RCP’s survey was its lack of a clear and objective design. Surveys should begin with well-defined goals: What do you need to know, and why? If your goals aren’t clear, the questions you ask may not accurately capture the data you need. In the RCP case, some members voiced concerns that the survey design might have been steered towards supporting a particular narrative rather than gathering impartial data.


Surveys should aim to avoid bias at every step. This starts with crafting neutral, clear, and concise questions. Leading or double-barrelled questions can steer respondents toward a particular answer, resulting in skewed data. For example, “Do you agree that physician associates improve patient access and reduce physician workload?” asks about two distinct outcomes at once and should be split into two items. Every question should address a single topic or issue to prevent confusion or misleading results.


Best Practice Example: The American Association for Public Opinion Research (AAPOR) highlights that survey design must prioritise simplicity and clarity. Avoiding leading questions and ensuring each question addresses only one concept at a time reduces the risk of bias. The OECD guidelines also emphasise the importance of defining clear objectives for each survey and pilot testing the questions with target respondents to refine them before widespread distribution [1][2].


Figure one: The Survey Design Process


 

Lesson 2: Sampling Matters—Ensure Representation


A survey is only as good as the sample it captures. If the respondents don’t represent the broader population, the results will be skewed, leading to poor or inaccurate conclusions. In the case of the RCP, the response rate was only 17.7%, which is far below the expected rate for similar surveys. While response rates can vary, a higher response rate typically leads to more reliable, representative data.


A key concern raised by RCP members was the exclusion of nearly half of the survey’s respondents: those who had worked with physician associates. This cohort had direct experience with the topic at hand, making their input critical for a fair and comprehensive debate. Excluding this group biased the data, raising concerns about the transparency and objectivity of the survey results [3].


Best Practice Example: According to A Quick Guide to Survey Research by Jones, Baxter, and Khanduja, surveys in healthcare settings often face challenges in achieving high response rates, but several strategies can be employed to improve them. These include offering incentives, sending follow-up reminders, and ensuring that surveys are short and relevant to the target group. Implementing such strategies could have helped the RCP improve its response rate and collect more representative data [4].


Healthcare journals often set benchmarks for acceptable survey response rates. For instance, BMJ Open suggests a minimum response rate of 60% for surveys distributed to professionals and organisational members [5].
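To make the sampling point concrete, here is a minimal Python sketch (the membership figure is hypothetical, not the RCP’s actual number) of the textbook margin-of-error calculation for a proportion. The caveat in the comments is the real lesson: the formula assumes random sampling, so an apparently small margin of error cannot rescue a 17.7% self-selected response from nonresponse bias.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion p estimated from n random respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical membership figure, for illustration only.
invited = 30_000
response_rate = 0.177                          # the 17.7% rate reported in the review
respondents = round(invited * response_rate)   # 5,310

print(f"{respondents} respondents -> ±{margin_of_error(respondents):.1%} at 95% confidence")

# Caveat: the formula above assumes a simple random sample. When 82.3% of those
# invited do not respond, self-selection can shift the headline figures far more
# than the nominal ±1.3%, which is why representativeness matters more than size.
```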


 

Lesson 3: Timeliness Is Key, But Not at the Expense of Quality


The RCP’s survey timeline was rushed. Respondents had less than two weeks to complete it, and the survey itself was hastily assembled in the lead-up to the EGM, so the quality of the data inevitably suffered. Designing and executing a survey takes time. Rushing the process can lead to poorly thought-out questions, limited respondent engagement, and incomplete data analysis.


Best Practice Example: A real-life example is the National Health and Nutrition Examination Survey (NHANES) conducted by the Centers for Disease Control and Prevention (CDC) in the United States. This survey, which has been running since the 1960s, is well regarded for its methodological rigour [6]. NHANES collects data on the health and nutritional status of the U.S. population and, unlike many surveys, allows ample time for designing, piloting, and executing the study. It operates on a continuous basis, with data collection spread across the year, allowing for careful planning, quality control, and the ability to adjust if issues arise during fieldwork. This extended timeline ensures that the data collected are robust and reliable, without the compromises that come from rushing the process.


 

Lesson 4: Transparency Builds Trust


Surveys are only useful if their results are trusted. This trust is built through transparency at every step. Respondents need to understand how their data will be used, and stakeholders need to see that the survey process is fair and rigorous.


Some members felt there was a lack of transparency in the RCP’s handling of the survey process from the outset, particularly in how decisions were communicated and how the survey results were presented [3]. Worse still, the results that were presented appeared skewed to support a particular agenda. This lack of transparency eroded trust and led to accusations of bias.


Best Practice Example: The COVID-19 Social Study conducted by University College London (UCL) offers an excellent example of transparency in survey research [7]. This study, which tracked the mental health and wellbeing of participants during the pandemic, regularly published the full data and analysis online. UCL was transparent about how the data were collected, how they would be used, and what conclusions were drawn from them. This openness helped to build trust in the findings and foster public confidence.



 

Lesson 5: Analysing Survey Data Requires Skill and Objectivity


Survey data is only useful if it is analysed accurately and objectively. In the RCP’s case, the data analysis process fell short of these standards. The analysis was conducted by a small group, without the input of impartial experts, which led to questionable and potentially misleading conclusions. A lack of transparency around the methodology raised concerns among members about the objectivity of the results.


One significant issue was the exclusion of responses from nearly half of the survey’s participants: those who had worked with physician associates. This group, with direct experience of the subject under debate, represented a critical data set, and its exclusion raised suspicions about the integrity of the analysis [3]. By excluding relevant responses, the RCP undermined the reliability of its findings.
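As a rough illustration of why excluding a cohort matters, here is a short Python sketch comparing the headline “positive” share with and without a subgroup whose views differ systematically. The counts are invented for illustration and do not reproduce the RCP’s data or the direction of any real difference.

```python
from collections import Counter

# Invented 5-point Likert counts (1 = very negative ... 5 = very positive).
# These are illustrative only, not the RCP's actual data.
worked_with_pas = Counter({1: 30, 2: 60, 3: 90, 4: 140, 5: 80})  # the excluded cohort
never_worked = Counter({1: 90, 2: 120, 3: 110, 4: 60, 5: 20})    # the retained cohort

def pct_positive(counts: Counter) -> float:
    """Share of respondents choosing option 4 or 5."""
    return (counts[4] + counts[5]) / sum(counts.values())

combined = worked_with_pas + never_worked  # Counter addition sums the tallies
print(f"All respondents positive:     {pct_positive(combined):.0%}")      # 38%
print(f"Excluding experienced cohort: {pct_positive(never_worked):.0%}")  # 20%
```

Whatever the true direction in the RCP data, the point stands: a headline percentage is only meaningful relative to who was allowed to count.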


Another point of criticism was the way the data were analysed. The RCP grouped neutral responses (option 3 on the survey) with positive responses (options 4 and 5), which artificially inflated the perception of support for physician associates. Conventionally, neutral responses are not counted as positive, as they do not express clear agreement with the statement at hand. This decision distorted the results, leading to a skewed portrayal of the membership’s opinions [3]. Additionally, key negative responses, such as concerns over the impact of physician associates on training opportunities, were downplayed or omitted entirely, further skewing the analysis towards a more favourable narrative [3].
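The effect of folding neutrals into the positive column is just as easy to demonstrate. This sketch, again with invented counts, contrasts the conventional “top-two-box” reading of a 5-point scale with the grouping the review criticised.

```python
from collections import Counter

# Invented counts for a single 5-point Likert question (illustration only).
responses = Counter({1: 120, 2: 180, 3: 250, 4: 150, 5: 100})
total = sum(responses.values())

# Conventional reporting: only options 4 and 5 count as positive ("top two box").
positive = responses[4] + responses[5]

# The criticised grouping: neutral (option 3) folded into the positive column.
positive_plus_neutral = positive + responses[3]

print(f"Positive, options 4-5 only:  {positive / total:.0%}")               # 31%
print(f"Neutral counted as positive: {positive_plus_neutral / total:.0%}")  # 62%
```

The underlying responses are identical in both rows; only the reporting convention changes, which is exactly why the grouping rule should be declared before the data are seen.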

These issues led to significant criticism of the survey’s objectivity, raising questions about the integrity of the entire process.


Best Practice Example: The UK Biobank Study is an example of how to analyse data objectively and rigorously. The data from this extensive study are reviewed by teams of experts who follow strict protocols for accuracy. Furthermore, consultation with specialists and peer review ensure that the analysis remains unbiased. Such thorough procedures ensure that the conclusions drawn from the data are robust, impartial, and scientifically sound [8].


Figure two: Survey Data Analysis Process


 

Lesson 6: Grounding Surveys in Existing Literature for True Context


One key issue with the RCP’s survey was its failure to reference the wealth of existing literature on physician associates (PAs) in healthcare. Surveys should not exist in isolation but should be grounded in the broader context of research. By failing to acknowledge previous studies on PAs, the RCP missed the opportunity to analyse concerns raised by respondents in relation to established research.


There is substantial evidence indicating that PAs positively contribute to healthcare teams by improving access to care, reducing workloads for physicians, and supporting continuity of care. Several reports have also indicated that PAs generally have a neutral or positive impact on medical training [3][8]. However, the RCP survey did not place concerns about PAs, such as their perceived negative impact on training opportunities, within this research context.


For instance, 44% of survey respondents expressed concerns that PAs negatively impacted training. Rather than being analysed in light of existing research, which suggests that PAs can complement medical training, these concerns were amplified without proper contextualisation. By failing to incorporate the broader literature, the RCP’s analysis unintentionally reinforced misconceptions, fuelling negative perceptions among members.

Surveys that lack grounding in existing research can lead to ill-informed decisions and polarised debates, as was evident with the RCP. When data is not contextualised within the larger body of evidence, it can exacerbate misconceptions and hinder productive, informed discussion.


Best Practice Example: The UK Biobank Study demonstrates how surveys can be effectively grounded in existing research. By consistently cross-referencing new findings with prior studies and data, the study ensures that its conclusions are aligned with established knowledge. This helps build trust in the findings and contributes to a more nuanced understanding of healthcare issues [8].


 

Conclusion


The issues that arose from the Royal College of Physicians’ 2024 survey serve as a powerful reminder of the importance of thoughtful and rigorous survey design. Surveys are vital tools for gathering opinions and informing decision-making in healthcare, but they must be handled with care. From setting clear objectives and ensuring representative sampling, to conducting objective data analysis and grounding findings in existing literature, there are numerous opportunities for bias to creep in if best practices are not followed.


By learning from the shortcomings of the RCP survey, healthcare professionals, researchers, policymakers, and students can ensure that future surveys contribute to more transparent, reliable, and data-driven decision-making. In doing so, we can foster better-informed healthcare policy decisions that benefit both healthcare providers and patients.



 

Bibliography

  1. American Association for Public Opinion Research (AAPOR). Best Practices for Survey Research. Available from: https://www.aapor.org/Standards-Ethics/Best-Practices.aspx

  2. OECD. Guidelines for Designing Surveys. Available from: https://www.oecd.org/sdd/statistical-methodology.htm

  3. Buckingham H, Perera K. Independent learning review following the Royal College of Physicians’ Extraordinary General Meeting 2024. The King's Fund. Available from: https://www.kingsfund.org.uk

  4. Jones TL, Baxter MAJ, Khanduja V. A quick guide to survey research. Ann R Coll Surg Engl. 2013 Jan;95(1):5–7. Available from: https://doi.org/10.1308/003588413X13511609956372

  5. Edwards P, Roberts I, Clarke M, DiGuiseppi C, Pratap S, Wentz R, Kwan I. Methods to increase response rates to postal and electronic questionnaires. Cochrane Database Syst Rev. 2009 Jul 8;(3):MR000008. Available from: https://pubmed.ncbi.nlm.nih.gov/19588449/

  6. Centers for Disease Control and Prevention (CDC). National Health and Nutrition Examination Survey. CDC. Available from: https://www.cdc.gov/nchs/nhanes/index.htm

  7. Fancourt D, Steptoe A, Bu F, Wright L. Covid-19 Social Study. University College London (UCL). Available from: https://www.covidsocialstudy.org

  8. Sudlow C, Gallacher J, Allen N, et al. UK Biobank: an open access resource for identifying the causes of a wide range of complex diseases of middle and old age. PLoS Med. 2015 Mar 31;12(3):e1001779. Available from: https://pubmed.ncbi.nlm.nih.gov/25826379/


 
 
 
