Understanding and Evaluating Survey Research

  • December 2015
  • Journal of the Advanced Practitioner in Oncology 6(2):168-171

Survey Research Methods

Survey Research Methods is the official peer-reviewed journal of the European Survey Research Association (ESRA). The journal publishes articles in English that discuss methodological issues related to survey research.

Three types of papers are in-scope:

  • Papers discussing methodological issues in substantive research using survey data
  • Papers that discuss methodological issues that are more or less independent of the specific field of substantive research
  • Replication studies of studies published in SRM

Topics of particular interest include survey design, sample design, question and questionnaire design, data collection, nonresponse, data capture, data processing, coding and editing, measurement errors, imputation, weighting and survey data analysis methods.

Survey Research Methods focuses on data collection methods for large-scale surveys, but papers on special populations are welcome. We do not publish purely mathematical papers or simulations. We also normally do not publish papers based on student samples or on small-scale experiments. Papers on larger experiments, or papers based on data from non-probability samples, are welcome provided that the authors take steps to assess external validity or discuss limitations concerning population estimates in detail.

Survey Research Methods publishes replication studies of articles published in the journal. Replication studies undergo the same reviewing process as original journal articles. If published, replication studies are marked as such.

Survey Research Methods is indexed by the Social Sciences Citation Index (SSCI), Scopus, and the Directory of Open Access Journals (DOAJ). The journal has signed the Transparency and Openness Promotion Guidelines of the Center for Open Science; see the Author Guidelines for SRM’s adoption of these guidelines.

Find out more about the journal’s editorial team here.

A Comprehensive Guide to Survey Research Methodologies

For decades, researchers and businesses have used survey research to produce statistical data and explore ideas. The survey process is simple: ask questions and analyze the responses to make decisions. Data is what makes the difference between a valid and an invalid statement, and as the American statistician W. Edwards Deming said:

“Without data, you’re just another person with an opinion.” - W. Edwards Deming

In this article, we will discuss what survey research is, its brief history, types, common uses, benefits, and the step-by-step process of designing a survey.

What is Survey Research

A survey is a research method that is used to collect data from a group of respondents in order to gain insights and information regarding a particular subject. It’s an excellent method to gather opinions and understand how and why people feel a certain way about different situations and contexts.

Brief History of Survey Research

Survey research may have its roots in the American and English “social surveys” conducted around the turn of the 20th century. These surveys were mainly conducted by researchers and reformers to document the extent of social issues such as poverty. (1) Despite being a relatively young field compared to many scientific domains, survey research has experienced three stages of development (2):

-       First Era (1930-1960)

-       Second Era (1960-1990)

-       Third Era (1990 onwards)

Over the years, survey research adapted to the changing times and technologies. By exploiting the latest technologies, researchers can gain access to the right population from anywhere in the world, analyze the data like never before, and extract useful information.

Survey Research Methods & Types

Survey research can be classified into seven categories based on objective, concept testing, data source, research method, deployment method, distribution channel, and frequency of deployment.

Surveys based on Objective

Exploratory Survey Research

Exploratory survey research is aimed at diving deeper into research subjects and finding out more about their context. It’s important for marketing or business strategy and the focus is to discover ideas and insights instead of gathering statistical data.

Generally, exploratory survey research is composed of open-ended questions that allow respondents to express their thoughts and perspectives. The final responses present information from various sources that can lead to fresh initiatives.

Predictive Survey Research

Predictive survey research is also called causal survey research. It’s preplanned, structured, and quantitative in nature. It’s often referred to as conclusive research as it tries to explain the cause-and-effect relationship between different variables. The objective is to understand which variables are causes and which are effects and the nature of the relationship between both variables.

Descriptive Survey Research

Descriptive survey research is largely observational and is ideal for gathering numeric data. Due to its quantitative nature, it’s often compared to exploratory survey research. The difference between the two is that descriptive research is structured and pre-planned.

The idea behind descriptive research is to describe the mindset and opinion of a particular group of people on a given subject. The questions are typically everyday multiple-choice questions, and respondents must choose from predefined categories. With predefined choices you don’t get unique insights; rather, you get statistically inferable data.

Survey Research Types based on Concept Testing

Monadic Concept Testing

Monadic testing is a survey research methodology in which respondents are split into multiple groups and each group is asked questions about a separate concept in isolation. Generally, monadic surveys are hyper-focused on a particular concept and shorter in duration. The important thing in monadic surveys is to avoid getting off-topic or exhausting the respondents with too many questions.

Sequential Monadic Concept Testing

Another approach to monadic testing is sequential monadic testing. In sequential monadic surveys, groups of respondents are still surveyed in isolation. However, instead of surveying separate groups on different concepts, the researchers survey the same group of people on several distinct concepts one after another. In a sequential monadic survey, at least two topics are included (in random order), and the same questions are asked for each concept to eliminate bias.

Based on Data Source

Primary Data

Data obtained directly from the source or target population is referred to as primary survey data. When it comes to primary data collection, researchers usually devise a set of questions and invite people with knowledge of the subject to respond. The main sources of primary data are interviews, questionnaires, surveys, and observation methods.

 Compared to secondary data, primary data is gathered from first-hand sources and is more reliable. However, the process of primary data collection is both costly and time-consuming.

Secondary Data

Survey research is generally used to collect first-hand information from a respondent. However, surveys can also be designed to collect and process secondary data, which is data gathered from third-party sources or collected from primary sources in the past.

 This type of data is usually generic, readily available, and cheaper than primary data collection. Some common sources of secondary data are books, data collected from older surveys, online data, and data from government archives. Beware that you might compromise the validity of your findings if you end up with irrelevant or inflated data.

Based on Research Method

Quantitative Research

Quantitative research is a popular research methodology that is used to collect numeric data in a systematic investigation. It’s frequently used in research contexts where statistical data is required, such as the natural or social sciences. Quantitative research methods include polls, systematic observations, and face-to-face interviews.

Qualitative Research

Qualitative research is a research methodology where you collect non-numeric data from research participants. In this context, the participants are not restricted to a specific system and provide open-ended information. Some common qualitative research methods include focus groups, one-on-one interviews, observations, and case studies.

Based on Deployment Method

Online Surveys

With technology advancing rapidly, the most popular method of survey research is an online survey. With the internet, you can not only reach a broader audience but also design and customize a survey and deploy it from anywhere. Online surveys have outperformed offline survey methods as they are less expensive and allow researchers to easily collect and analyze data from a large sample.

Paper or Print Surveys

As the name suggests, paper or print surveys use the traditional paper and pencil approach to collect data. Before the invention of computers, paper surveys were the survey method of choice.

Though many would assume that surveys are no longer conducted on paper, they remain a reliable method of collecting information during field research and data collection. However, unlike online surveys, paper surveys are expensive and require extra human resources.

Telephonic Surveys

Telephonic surveys are conducted over the telephone: a researcher asks a series of questions to the respondent on the other end. Contacting respondents over the telephone requires less effort and fewer human resources, and is less expensive.

What makes telephonic surveys debatable is that people are often reluctant to give information over a phone call. Additionally, the success of such surveys depends largely on whether people are willing to invest their time in answering questions over the phone.

One-on-one Surveys

One-on-one surveys, also known as face-to-face surveys, are interviews in which the researcher and respondent interact directly. Interacting directly with the respondent introduces the human factor into the survey.

Face-to-face interviews are useful when the researcher wants to discuss something personal with the respondent. Response rates in such surveys tend to be higher because the interview is conducted in person. However, these surveys are quite expensive, and their success depends on the knowledge and experience of the researcher.

Based on Distribution

Email Surveys

The easiest and most common way of conducting online surveys is sending out an email. Sending out surveys via email yields a higher response rate, as your target audience already knows about your brand and is likely to engage.

Buy Survey Responses

Purchasing survey responses also yields a higher response rate, as the respondents have signed up to take surveys. Businesses often purchase survey samples to conduct extensive research. Here, the target audience is often pre-screened to check whether they are qualified to take part in the research.

Embedding Survey on a Website

Embedding surveys on a website is another excellent way to collect information. It allows your website visitors to take part in a survey without ever leaving the website and can be done while a person is entering or exiting the website.

Post the Survey on Social Media

Social media is an excellent medium for reaching a broad range of audiences. You can publish your survey as a link on social media, and people who follow the brand can take part and answer questions.

Based on Frequency of Deployment

Cross-Sectional Studies

Cross-sectional studies are administered to a small sample from a large population within a short period of time. This provides researchers a peek into what the respondents are thinking at a given time. The surveys are usually short, precise, and specific to a particular situation.

Longitudinal Surveys

Longitudinal surveys are an extension of cross-sectional studies where researchers make an observation and collect data over extended periods of time. This type of survey can be further divided into three types:

-       Trend surveys are employed to allow researchers to understand the change in the thought process of the respondents over some time.

-       Panel surveys are administered to the same group of people over multiple years. These are usually expensive and researchers must stick to their panel to gather unbiased opinions.

-       In cohort surveys, researchers identify a specific category of people and regularly survey them. Unlike panel surveys, the same people do not need to take part over the years, but each individual must fall into the researcher’s primary interest category.

Retrospective Survey

Retrospective surveys allow researchers to ask questions that gather data about respondents’ past events and beliefs. Because retrospective surveys can also cover data spanning years, they resemble longitudinal surveys, but they are shorter and less expensive to run.

Why Should You Conduct Research Surveys?

“In God we trust. All others must bring data” - W. Edwards Deming

In the information age, survey research is of utmost importance and essential for understanding the opinion of your target population. Whether you’re launching a new product or conducting a social survey, surveys can be used to collect specific information from a defined set of respondents. The data collected via surveys can then be used by organizations to make informed decisions.

Furthermore, compared to other research methods, surveys are relatively inexpensive even if you’re giving out incentives. Compared to older methods such as telephonic or paper surveys, online surveys cost less and yield more responses.

What makes surveys useful is that they describe the characteristics of a large population. With a larger sample size, you can rely on getting more accurate results. However, you also need honest and open answers for accurate results. When surveys are anonymous and responses remain confidential, respondents are more likely to provide candid and accurate answers.

Common Uses of a Survey

Surveys are widely used in many sectors, but the most common uses of survey research include:

-       Market research : surveying a potential market to understand customer needs, preferences, and market demand.

-       Customer Satisfaction: finding out your customer’s opinions about your services, products, or companies .

-       Social research: investigating the characteristics and experiences of various social groups.

-       Health research: collecting data about patients’ symptoms and treatments.

-       Politics: evaluating public opinion regarding policies and political parties.

-       Psychology: exploring personality traits, behaviors, and preferences.

6 Steps to Conduct Survey Research

An organization, person, or company conducts a survey when they need the information to make a decision but have insufficient data on hand. Following are six simple steps that can help you design a great survey.

Step 1: Objective of the Survey

The first step in survey research is defining an objective. The objective helps you define your target population and samples. The target population is the specific group of people you want to collect data from and since it’s rarely possible to survey the entire population, we target a specific sample from it. Defining a survey objective also benefits your respondents by helping them understand the reason behind the survey.

Step 2: Number of Questions

The number of questions or the size of the survey depends on the survey objective. However, it’s important to ensure that there are no redundant queries and the questions are in a logical order. Rephrased and repeated questions in a survey are almost as frustrating as in real life. For a higher completion rate, keep the questionnaire small so that the respondents stay engaged to the very end. The ideal length of an interview is less than 15 minutes. ( 2 )

Step 3: Language and Voice of Questions

While designing a survey, you may feel compelled to use fancy language. However, remember that difficult language is associated with higher survey dropout rates. You need to speak to the respondent in a clear, concise, and neutral manner, and ask simple questions. If your survey respondents are bilingual, then adding an option to translate your questions into another language can also prove beneficial.

Step 4: Type of Questions

In a survey, you can include any type of question, both closed-ended and open-ended. However, opt for the question types that are the easiest for respondents to understand and that offer the most value. For example, compared to open-ended questions, people prefer to answer closed-ended questions such as MCQs (multiple-choice questions) and NPS (Net Promoter Score) questions.
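
For example, an NPS question asks respondents how likely they are to recommend something on a 0-10 scale, and the score is conventionally calculated as the percentage of promoters (ratings of 9-10) minus the percentage of detractors (ratings of 0-6). The minimal sketch below illustrates that calculation; it is not tied to any particular survey tool.

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 'likelihood to recommend' ratings.

    Promoters score 9-10, detractors 0-6; NPS is the percentage of
    promoters minus the percentage of detractors (range -100 to +100).
    """
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Example: 4 promoters, 4 passives, and 2 detractors out of 10 -> NPS = 20
print(net_promoter_score([10, 9, 9, 10, 8, 7, 8, 7, 3, 5]))
```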

Step 5: User Experience

Designing a great survey is about more than just questions. A lot of researchers underestimate the importance of user experience and how it affects their response and completion rates. An inconsistent, difficult-to-navigate survey with technical errors and poor color choices is unappealing to respondents. Make sure that your survey is easy for everyone to navigate and that, if you’re using rating scales, they remain consistent throughout the research study.

Additionally, don’t forget to design a good survey experience for both mobile and desktop users. According to Pew Research Center, nearly half of the smartphone users access the internet mainly from their mobile phones and 14 percent of American adults are smartphone-only internet users. ( 3 )

Step 6: Survey Logic

Last but not least, logic is another critical aspect of the survey design. If the survey logic is flawed, respondents may not continue in the right direction. Make sure to test the logic to ensure that selecting one answer leads to the next logical question instead of a series of unrelated queries.
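
To make the idea of survey logic concrete, the sketch below (a simplified, hypothetical example rather than any real survey platform's API) models skip logic as a mapping from each answer to the next question, so a respondent only sees questions that are relevant to them.

```python
# Hypothetical skip-logic definition: each answer maps to the ID of the
# next question to show. Question IDs and wording are illustrative only.
questions = {
    "q1": {"text": "Have you used our product in the last month?",
           "next": {"yes": "q2", "no": "q3"}},
    "q2": {"text": "How satisfied were you with it?", "next": {}},
    "q3": {"text": "What stopped you from using it?", "next": {}},
}

def next_question(current_id, answer):
    """Return the ID of the next question, or None to end the survey."""
    return questions[current_id]["next"].get(answer)

print(next_question("q1", "yes"))  # -> q2 (satisfaction follow-up)
print(next_question("q1", "no"))   # -> q3 (barriers follow-up)
```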

How to Effectively Use Survey Research with Starlight Analytics

Designing and conducting a survey is almost as much science as it is art. To craft great survey research, you need technical skills, an understanding of the psychological elements at play, and a broad knowledge of marketing.

The ultimate goal of the survey is to ask the right questions in the right manner to acquire the right results.

Bringing a new product to the market is a long process and requires a lot of research and analysis. In your journey to gather information or ideas for your business, Starlight Analytics can be an excellent guide. Starlight Analytics' product concept testing helps you measure your product's market demand and refine product features and benefits so you can launch with confidence. The process starts with custom research to design the survey according to your needs, execute the survey, and deliver the key insights on time.

  • (1) Survey research in the United States: roots and emergence, 1890-1960. https://searchworks.stanford.edu/view/10733873
  • (2) How to create a survey questionnaire that gets great responses. https://luc.id/knowledgehub/how-to-create-a-survey-questionnaire-that-gets-great-responses/
  • (3) Internet/broadband fact sheet. https://www.pewresearch.org/internet/fact-sheet/internet-broadband/

Doing Survey Research | A Step-by-Step Guide & Examples

Published on 6 May 2022 by Shona McCombes. Revised on 10 October 2022.

Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyse the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research .

Table of contents

  • What are surveys used for?
  • Step 1: Define the population and sample
  • Step 2: Decide on the type of survey
  • Step 3: Design the survey questions
  • Step 4: Distribute the survey and collect responses
  • Step 5: Analyse the survey results
  • Step 6: Write up the survey results
  • Frequently asked questions about surveys

What are surveys used for?

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research: Investigating the experiences and characteristics of different social groups
  • Market research: Finding out what customers think about products, services, and companies
  • Health research: Collecting data from patients about symptoms and treatments
  • Politics: Measuring public opinion about parties and policies
  • Psychology: Researching personality traits, preferences, and behaviours

Surveys can be used in both cross-sectional studies , where you collect data just once, and longitudinal studies , where you survey the same sample several times over an extended period.

Step 1: Define the population and sample

Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • University students in the UK
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18 to 24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalised to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every university student in the UK. Instead, you will usually survey a sample from the population.

The sample size depends on how big the population is. You can use an online sample calculator to work out how many responses you need.
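
As a rough illustration of what such calculators do, the sketch below applies Cochran's formula for estimating the sample size needed for a proportion, with a finite population correction; the population figure and margin of error are purely illustrative.

```python
import math

def sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Estimate the sample size needed to estimate a proportion.

    Uses Cochran's formula with a finite population correction.
    z = 1.96 corresponds to 95% confidence; p = 0.5 is the most
    conservative assumption about the true proportion.
    """
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# Example: a population of 20,000, 5% margin of error, 95% confidence
print(sample_size(20_000))  # roughly 377 respondents
```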

There are many sampling methods that allow you to generalise to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions.
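
As a minimal illustration of probability sampling, the sketch below draws a simple random sample from a sampling frame; the frame of fictional student IDs and the sample size are hypothetical.

```python
import random

# Hypothetical sampling frame: a list identifying every member of the
# target population we can select from (here, fictional student IDs).
sampling_frame = [f"student_{i:05d}" for i in range(1, 25_001)]

# Draw a simple random sample without replacement, so every member of
# the frame has an equal chance of being selected.
random.seed(2024)  # fixed seed so the draw is reproducible
sample = random.sample(sampling_frame, k=377)

print(len(sample), sample[:3])
```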

Step 2: Decide on the type of survey

There are two main types of survey:

  • A questionnaire , where a list of questions is distributed by post, online, or in person, and respondents fill it out themselves
  • An interview , where the researcher asks a set of questions by phone or in person and records the responses

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by post is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g., residents of a specific region).
  • The response rate is often low.

Online surveys are a popular choice for students doing dissertation research , due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms .

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyse.
  • The anonymity and accessibility of online surveys mean you have less control over who responds.

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping centre or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g., the opinions of a shop’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations.

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data : the researcher records each response as a category or rating and statistically analyses the results. But they are more commonly used to collect qualitative data : the interviewees’ full responses are transcribed and analysed individually to gain a richer understanding of their opinions and feelings.

Step 3: Design the survey questions

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g., yes/no or agree/disagree )
  • A scale (e.g., a Likert scale with five points ranging from strongly agree to strongly disagree )
  • A list of options with a single answer possible (e.g., age categories)
  • A list of options with multiple answers possible (e.g., leisure interests)

Closed-ended questions are best for quantitative research . They provide you with numerical data that can be statistically analysed to find patterns, trends, and correlations .

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an ‘other’ field.
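
One way to make this concrete is to represent a closed-ended question as a simple data structure whose option list aims to be exhaustive and ends with an ‘other’ field; the field names and options below are hypothetical, not taken from any particular survey platform.

```python
# Hypothetical representation of a closed-ended question. The option list
# tries to cover all realistic cases, and an "Other (please specify)"
# entry catches anything the predefined categories miss.
employment_question = {
    "id": "q_employment",
    "text": "What is your current employment status?",
    "type": "single_choice",
    "options": [
        "Employed full-time",
        "Employed part-time",
        "Self-employed",
        "Unemployed and looking for work",
        "Student",
        "Retired",
        "Other (please specify)",
    ],
    "allow_other_text": True,  # show a free-text box if "Other" is chosen
}
```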

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic.

Use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no bias towards one answer or another.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.

Step 4: Distribute the survey and collect responses

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by post, online, or in person.

Step 5: Analyse the survey results

There are many methods of analysing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also cleanse the data by removing incomplete or incorrectly completed responses.

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organising them into categories or themes. You can also use more qualitative methods, such as thematic analysis , which is especially suitable for analysing interviews.

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.
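
As an alternative to dedicated statistics packages, the same processing steps can be sketched in Python with pandas; the file name and column names below are hypothetical, so adapt them to your own export.

```python
import pandas as pd

# Load a hypothetical CSV export of survey responses with columns
# "age_group", "satisfaction" (1-5), and "would_recommend" (yes/no).
responses = pd.read_csv("survey_responses.csv")

# Cleanse the data: drop responses with missing answers to key questions.
responses = responses.dropna(subset=["age_group", "satisfaction", "would_recommend"])

# Descriptive statistics: response counts and mean satisfaction per age group.
print(responses.groupby("age_group")["satisfaction"].agg(["count", "mean"]))

# Cross-tabulation: share of each age group that would recommend.
print(pd.crosstab(responses["age_group"], responses["would_recommend"], normalize="index"))
```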

Step 6: Write up the survey results

Finally, when you have collected and analysed all the necessary data, you will write it up as part of your thesis, dissertation , or research paper .

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyse it. In the results section, you summarise the key results from your analysis.

Frequently asked questions about surveys

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data , because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.
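
To illustrate how individual Likert items are combined into an overall scale score, the minimal sketch below sums item responses, first flipping any reverse-coded (negatively worded) items so that higher values always point in the same direction; the example responses are hypothetical.

```python
def likert_scale_score(item_responses, n_points=5, reverse_coded=()):
    """Combine individual Likert item responses (e.g., 1-5) into one scale score.

    Items listed in reverse_coded (by index) are flipped so that a high
    score always indicates the same direction of the attitude.
    """
    total = 0
    for i, response in enumerate(item_responses):
        if i in reverse_coded:
            response = (n_points + 1) - response
        total += response
    return total

# Four items measuring one attitude; the third item is negatively worded.
print(likert_scale_score([4, 5, 2, 4], reverse_coded={2}))  # -> 17
```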

The type of data determines what statistical tests you should use to analyse your data.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.

PERSPECTIVE article

Methodological considerations for survey-based research during emergencies and public health crises: improving the quality of evidence and communication.

Eric B Kennedy

  • 1 Disaster and Emergency Management, School of Administrative Studies, York University, Toronto, Canada
  • 2 Institute for Methods Innovation (IMI), Eureka, CA, United States

The novel coronavirus (COVID-19) outbreak has resulted in a massive amount of global research on the social and human dimensions of the disease. Between academic researchers, governments, and polling firms, thousands of survey projects have been launched globally, tracking aspects like public opinion, social impacts, and drivers of disease transmission and mitigation. This deluge of research has created numerous potential risks and problems, including methodological concerns, duplication of efforts, and inappropriate selection and application of social science research techniques. Such concerns are more acute when projects are launched under the auspices of quick response, time-pressured conditions–and are magnified when such research is often intended for rapid public and policy-maker consumption, given the massive public importance of the topic.

Introduction

The COVID-19 pandemic has unfortunately illustrated the deadly consequences of ineffective science communication and decision-making. Globally, millions of people have succumbed to scientific misinformation about mitigation and treatment of the virus, fuelling behaviors that put themselves and their loved ones in mortal danger. 1 Nurses have told stories of COVID-19 patients, gasping for air, and dying, while still insisting the disease was a hoax (e.g., Villegas 2020 ). While science communication has always had real world implications, the magnitude of the COVID-19 crisis illustrates a remarkable degree of impact. Moreover, the crisis has demonstrated the complexity and challenge of making robust, evidence-informed policy in the midst of uncertain evidence, divergent public views, and heterogenous impacts. This adds urgency to seemingly abstract or academic questions of how the evidence that informs science communication practice and decision-making can be made more robust, even during rapidly evolving crises and grand challenges.

There has been a massive surge of science communication-related survey research projects in response to the COVID-19 crisis. These projects cover a wide range of topics, from assessing psychosocial impacts to attempting to evaluate different interventions and containment measures. Many of the issues being investigated connect to core themes in science communication, including (mis)information on scientific issues (e.g., Gupta et al., 2020 ; Pickles et al., 2021 ), trust in scientific technologies and interventions, including vaccines (e.g., Jensen et al., 2021a ; Kennedy et al., 2021a ; Kwok et al., 2021 ; Ruiz and Ball 2021 ), and more general issues of scientific literacy (e.g., Biasio et al., 2021 )—themes being investigated in a context of heightened public interest, significant pressure for effectiveness in interventions, and with highly polarized and contentious debate. Such survey research can be instrumental in informing effective government policies and interventions, for example, by evaluating the acceptability of different mitigation strategies, identifying vulnerable populations experiencing disproportionate negative effects, and clarifying information needs ( Van Bavel et al., 2020 ).

However, the rush of COVID-19 survey research has exposed challenges in using questionnaires in emergency contexts, such as methodological flaws, duplication of efforts, and lack of transparency. These issues are especially apparent when projects are launched under time-pressured conditions and conducted exclusively online. Addressing these challenges head on is essential to reduce the flow of questionable results into the policymaking process, where problematic methods can go undetected. To truly succeed at evidence-based science communication (see Jensen and Gerber 2020 )—and to support evidence-based decision-making through good science communication—requires that survey-based research in emergency settings be conducted according to the best feasible practices.

In this article, we highlight the utility of questionnaire-based research in COVID-19 and other emergencies, outlining best practices. We offer guidance to help researchers navigate key methodological choices, including sampling strategies, validation of measures, harmonization of instruments, and conceptualization/operationalization of research frameworks. Finally, we provide a summary of emerging networks, remaining gaps, and best practices for international coordination of survey-based research relating to COVID-19 and future disasters, emergencies, and crises.

Suitability of Survey-Based Research

Social and behavioural sciences have much to offer in terms of understanding emergency situations broadly, including the COVID-19 crisis, and informing policy responses (see Van Bavel et al., 2020 ) and post-disaster reactions ( Solomon and Green, 1992 ). Questionnaires have unique advantages and limitations in terms of the information that can be gathered and the insights that can be generated when used in isolation from other research approaches (e.g., see Jensen and Laurie, 2016 ). For these reasons, researchers should carefully assess the suitability of survey-based methods for addressing their research questions.

In emergency contexts, survey research can offer several advantages. Questionnaire-based work can:

• Allow for relatively straightforward recruitment and consenting procedures with large numbers of participants, as well as increasing the geographical scale that researchers can target (versus, for example, interview or observational research).

• Gather accurate data about an individual’s subjective memories or personal accounts, knowledge, attitudes, appraisals, interpretations, and perceptions about experiences.

• Allow for many mixed or integrated strategies for data collection, including both qualitative/quantitative; cross-sectional/longitudinal; closed-/open-ended; among others.

• Integrate effectively with other research methods (e.g., interviews, case study, biosampling) as supplemental or complementary (see Morgan, 2007 ) approaches to maximise strengths and offset weaknesses that allow for data triangulation.

• Allow for consistent administration of questions across a sample, as well as carefully crafted administration across multi-lingual contexts (e.g., validating multiple languages of a survey for consistent results).

• Enable highly complicated back-end rules (“survey logic”) for tailoring the user experience to ensure only relevant questions are presented.

• Create opportunities for carefully-crafted experimental designs, such as manipulating a variable of interest or comparing responses to different scenarios across a population (a minimal sketch of such random assignment follows this list).

• Deploy with relatively low costs and rapid timeframes compared to in-person methodologies.
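
As a minimal sketch of the experimental-design point above, the code below assigns each respondent to one condition at random but reproducibly; the condition labels and respondent IDs are illustrative, not drawn from any specific study.

```python
import random

# Hypothetical between-subjects conditions (e.g., which message framing
# a respondent sees before answering the outcome questions).
CONDITIONS = ["control_message", "gain_framed_message", "loss_framed_message"]

def assign_condition(respondent_id, salt="study-1"):
    """Deterministically assign a respondent to one condition.

    Seeding the generator with the respondent ID keeps the assignment
    reproducible while remaining roughly balanced across conditions.
    """
    rng = random.Random(f"{salt}:{respondent_id}")
    return rng.choice(CONDITIONS)

for rid in ["r001", "r002", "r003"]:
    print(rid, assign_condition(rid))
```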

At the same time, surveys can have significant limitations in the context of crisis research that can undermine their reliability or create temptations for methodological shortcuts. For example:

• Surveys face important limits in terms of what information can be reliably obtained. For example, respondents generally cannot accurately report about the attitudes, experiences, and behaviors of other people in their social groups. Likewise, self-reports can be systematically distorted by psychological processes, especially when it comes to behavioural intentions and projected future actions. Retrospective accounts can also be unreliable, particularly in cases of complex event sequences or events that took place long ago (e.g., Wagoner and Jensen 2015 ).

• The quality of survey data can degrade rapidly when there is low ecological validity (i.e., participants are not representative of the broader population), whether through sampling problems, systematic patterns in attrition for longitudinal research, or other factors.

• Seemingly simple designs may require extensive methodological or statistical expertise to maximise questionnaire design and data analysis (i.e., ensuring valid measures, maximizing best practice, and avoiding common mistakes).

• The limited ability to adjust measures once a survey has been released, without compromising the ability to develop inferences from comparable data, can be challenging in rapidly evolving crisis contexts where relevant issues are changing rapidly.

• Cross-sectional surveys can give a false impression of personal attributes that are prone to change if assumptions of cross-situational consistency are applied (e.g., factors that are expected to remain stable across time) (e.g., Hoffman, 2015 ).

Given these advantages and limitations, there are several appropriate targets for survey research in crises and emergencies. Alongside other methods—including observational, ethnographic, and interview-based work, depending on the specific research questions formulated—surveys can help to gather reliable data on:

• Knowledge: What people currently believe to be true about the disease (e.g., origin of the coronavirus, how could they catch it, or how they could reduce exposure).

• Trust: Confidence in different political and government institutions/actors, media and information sources, and other members of their community (e.g., neighbors, strangers) (e.g., see Jensen et al., 2021 ).

• Opinions: Approval of particular interventions to slow the spread; belief about whether policies or behaviours have been effective or changed the emergency outcome; or personal views about perceptions of vaccine efficacy or safety.

• Personal impacts: Reports from individuals who are exposed or negatively affected, such as with chronic stress or loss of loved ones, employment, health, and stigmatization.

• Risk perceptions: Hopes and fears related to the disease, end points of the emergency, and return to normalcy.

Even for researchers aware of these limitations, launching and conducting survey research is a specialized skill that requires training, experience, and mentorship. This expertise is comparable to that needed for epidemiological, biomedical, or statistical research. Even when questionnaires appear ‘simple’ because of the skillful use of plain language and straightforward user interfaces, there are substantial methodological learning curves associated with proper research design. In the following sections, we provide project design, coordination, and methodological recommendations for researchers launching or conducting rapid-response research projects in emergency contexts, for COVID-19 and beyond.

Project Design

Researchers face important choices when designing survey-based research within the fast-moving context of disasters and emergencies. There can be a substantial pressure to conduct research quickly , including funder timelines, the perceived race to publish, or pressure to collect ephemeral data. Each of these factors can necessitate difficult decisions about project and research designs. At a high level, we recommend that survey-based projects on COVID-19 adopt the following standards ( Table 1 ):

TABLE 1. Key factors for effective COVID-19 survey-based research.

Methodological Considerations

In emergency situations, avoiding common pitfalls in methodological designs can be challenging because of temporal pressures and unique emergency contexts. We recommend the following standards in methodological designs for COVID-19 research ( Table 2 ):

TABLE 2. Key methodological considerations for COVID-19 survey research.

We also encourage readers to explore other resources for supporting methodological rigour in emergency contexts. In particular, the CONVERGE program associated with the Natural Hazards Center at the University of Colorado Boulder maintains a significant community resource via tutorials and “check sheets” to support method design and implementation (see https://converge.colorado.edu/resources/check-sheets/ ).

Research Coordination

Research coordination during emergencies requires pragmatic strategies to maximise the impact of evidence from rapid-response research. Despite massive government attention and the resulting funding schemes, the available funds for social science research are outstripped by research needs, a situation made worse by duplication of research, overproduction, and inefficient use of resources on some topics. This results in fewer topics and populations receiving research attention, and in investigations spanning shorter periods. It also generates a “wave profile” of investigation that is temporary and transient, disappearing as funds become limited due to economic constraints or as attention is displaced to new topics.

We recommend the following practical considerations to maximize the efficiency, coordination, and effectiveness of survey-based research efforts ( Table 3 ):

TABLE 3. Primary considerations for coordination of survey-based COVID-19 research.

Evidence-based science communication and decision-making depends on the reliability and robustness of the underlying research. Survey-based research can be valuable in supporting communication and policy-making efforts. However, it can also be vulnerable to significant limitations and common mistakes in the rush of trying to deploy instruments in an emergency context. The best practices outlined above not only help to ensure more rigorous data, but also serve as valuable intermediate steps when developing the project (e.g., meta-analysis helping to inform more robust question formulations; methodological transparency allowing more scrutiny of instruments before deployment). For example, by drawing on existing survey designs prepared by well-qualified experts, you can both help to enable comparability of data and reduce the risk of using flawed survey questions and response options.

In this article, we have presented a series of principles regarding effective crisis and emergency survey research. We argue that it is essential to begin by assessing the suitability of questionnaire-based approaches (including the unique strengths of surveys, potential limitations related to design and self-reporting, and the types of information that can be collected). We then laid out best practices essential to reliable research such as open access designs, engaging requisite social science expertise, using longitudinal and repeated measure designs, and selecting suitable sampling strategies. We then discussed three methodological issues (validation of items, use of standardized items, and alignment between concepts and operationalizations) that can prove challenging in rapid response contexts. Finally, we highlighted best practices for funding and project management in crisis contexts, including de-duplication, coordination, harmonization, and evidence synthesis.

Survey research is challenging work requiring methodological expertise. The best practices cannot be satisfactorily trained in the immediate race to respond to a crisis. Indeed, even for those with significant expertise in survey methods, issues like open access, de-duplication of projects, and harmonization between designs can pose significant challenges. Ultimately, the same principles hold true in emergency research as in more “normal” survey operations, and “the quality of a survey is best judged not by its size, scope, or prominence, but by how much attention is given to dealing with all the many important problems that can arise” ( American Statistical Association, 1998 , p. 11).

The emergency context should not weaken commitments to best practice principles, given the need to provide robust evidence that can inform policy and practice during crises. For researchers, this means creating multidisciplinary teams with sufficient expertise to ensure methodological quality. For practitioners and policy makers, this means being conscientious consumers of survey data–and seeking ways to engage expert perspectives in critical reviews of best available evidence. And, for funders of such research, it means redoubling a commitment to rigorous approaches and building the infrastructure that supports pre-crisis design and implementation, as well as effective coordination during events. Building resilience for future crises requires investment in survey methodology capacity building and network development before emergencies strike.

Author Contributions

All three authors contributed to the drafting and editing of the manuscript, with EBK as lead.

Funding

This project is supported in part by funding from the Social Sciences and Humanities Research Council (1006-2019-0001). This project was also supported through the COVID-19 Working Group effort supported by the National Science Foundation-funded Social Science Extreme Events Research (SSEER) Network and the CONVERGE facility at the Natural Hazards Center at the University of Colorado Boulder (NSF Award #1841338). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of SSHRC, NSF, SSEER, or CONVERGE.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

1 As just one example, Loomba et al. (2021) found that misinformation resulted in a decline of over 6% in vaccine intentions in the United States, equivalent to approximately 21 million prospective American vaccine recipients.

American Statistical Association (1998). Judging the Quality of a Survey. ASA: Section Surv. Res. Methods , 1–13.

Ballantyne, N. (2019). Epistemic Trespassing. Mind 128, 367–395. doi:10.1093/mind/fzx042

Biasio, L. R., Bonaccorsi, G., Lorini, C., and Pecorelli, S. (2021). Assessing COVID-19 Vaccine Literacy: a Preliminary Online Survey. Hum. Vaccin. Immunother. 17 (5), 1304–1312. doi:10.1080/21645515.2020.1829315

Dong, E., Du, H., and Gardner, L. (2020). An Interactive Web-Based Dashboard to Track COVID-19 in Real Time. Lancet Infect. Dis 20 (5). doi:10.1016/s1473-3099(20)30120-1

Gupta, L., Gasparyan, A. Y., Misra, D. P., Agarwal, V., Zimba, O., and Yessirkepov, M. (2020). Information and Misinformation on COVID-19: a Cross-Sectional Survey Study. J. Korean Med. Sci. 35 (27), e256. doi:10.3346/jkms.2020.35.e256

Hoffman, L. (2015). Longitudinal Analysis: Modeling Within-Person Fluctuation and Change . New York: Routledge . doi:10.4324/9781315744094

Jensen, E. A., and Gerber, A. (2020). Evidence-based Science Communication. Front. Commun. 4 (78), 1–5. doi:10.3389/fcomm.2019.00078

Jensen, E. A., Kennedy, E. B., and Greenwood, E. (2021a). Pandemic: Public Feeling More Positive about Science. Nature 591, 34. doi:10.1038/d41586-021-00542-w

Jensen, E. A., Pfleger, A., Herbig, L., Wagoner, B., Lorenz, L., and Watzlawik, M. (2021b). What Drives Belief in Vaccination Conspiracy Theories in Germany. Front. Commun. 6. doi:10.3389/fcomm.2021.678335

Jensen, E. A., Kennedy, E., and Greenwood, E. (2021). Pandemic: Public Feeling More Positive About Science (Correspondence). Nature 591, 34.

Jensen, E., and Laurie, C. (2016). Doing Real Research: A Practical Guide to Social Research . London: SAGE .

Jensen, E., and Wagoner, B. (2014). “Developing Idiographic Research Methodology: Extending the Trajectory Equifinality Model and Historically Situated Sampling,” in Cultural Psychology and its Future: Complementarity in a New Key . Editors B. Wagoner, N. Chaudhary, and P. Hviid.

Kennedy, E. B., Daoust, J. F., Vikse, J., and Nelson, V. (2021a). “Until I Know It’s Safe for Me”: The Role of Timing in COVID-19 Vaccine Decision-Making and Vaccine Hesitancy. Under Review. doi:10.3390/vaccines9121417

Kennedy, E. B., Nelson, V., and Vikse, J. (2021b). Survey Research in the Context of COVID-19: Lessons Learned from a National Canadian Survey. Working Paper.

Kennedy, E. B., Vikse, J., Chaufan, C., O’Doherty, K., Wu, C., Qian, Y., et al. (2020). Canadian COVID-19 Social Impacts Survey - Summary of Results #1: Risk Perceptions, Trust, Impacts, and Responses. Technical Report #004. Toronto, Canada: York University Disaster and Emergency Management . doi:10.6084/m9.figshare.12121905

Kwok, K. O., Li, K.-K., Wei, W. I., Tang, A., Wong, S. Y. S., and Lee, S. S. (2021). Influenza Vaccine Uptake, COVID-19 Vaccination Intention and Vaccine Hesitancy Among Nurses: A Survey. Int. J. Nurs. Stud. 114, 103854. doi:10.1016/j.ijnurstu.2020.103854

Loomba, S., de Figueiredo, A., Piatek, S. J., de Graaf, K., and Larson, H. J. (2021). Measuring the Impact of COVID-19 Vaccine Misinformation on Vaccination Intent in the UK and USA. Nat. Hum. Behav. 5 (3), 337–348. doi:10.1038/s41562-021-01056-1

Mauss, I. B., and Robinson, M. D. (2009). Measures of Emotion: A Review. Cogn. Emot. 23 (2), 209–237. doi:10.1080/02699930802204677

Morgan, D. L. (2007). Paradigms Lost and Pragmatism Regained: Methodological Implications of Combining Qualitative and Quantitative Methods. Journal of Mixed Methods Research 2007, 1–48. doi:10.1177/2345678906292462

Pickles, K., Cvejic, E., Nickel, B., Copp, T., Bonner, C., Leask, J., et al. (2021). COVID-19 Misinformation Trends in Australia: Prospective Longitudinal National Survey. J. Med. Internet Res. 23 (1), e23805. doi:10.2196/23805

Ruiz, J. B., and Bell, R. A. (2021). Predictors of Intention to Vaccinate against COVID-19: Results of a Nationwide Survey. Vaccine 39 (7), 1080–1086. doi:10.1016/j.vaccine.2021.01.010

Schwarz, N., Kahneman, D., Xu, J., Belli, R., Stafford, F., and Alwin, D. (2009). “Global and Episodic Reports of Hedonic Experience,” in Using Calendar and Diary Methods in Life Events Research , 157–174.

Smith, B. K., and Jensen, E. A. (2016). Critical Review of the United Kingdom's “gold Standard” Survey of Public Attitudes to Science. Public Underst. Sci. 25, 154–170. doi:10.1177/0963662515623248

Smith, B. K., Jensen, E., and Wagoner, B. (2015). “The International Encyclopedia of Communication Theory and Philosophy,” in International Encyclopedia of Communication Theory and Philosophy. Editors K. B. Jensen, R. T. Craig, J. Pooley, and E. Rothenbuhler (New Jersey: Wiley-Blackwell). doi:10.1002/9781118766804

Solomon, S. D., and Green, B. L. (1992). Mental Health Effects of Natural and Human-Made Disasters. PTSD. Res. Q. 3 (1), 1–8.

Tourangeau, R., Rips, L., and Rasinski, K. (2000). The Psychology of Survey Response . Cambridge: Cambridge University Press .

Van Bavel, J. J., Baicker, K., Boggio, P., Capraro, V., Cichocka, A., Cikara, M., et al. (2020). Using Social and Behavioural Science to Support COVID-19 Pandemic Response. Nat. Hum. Behav. 4, 460–471. doi:10.1038/s41562-020-0884-z

Villegas, Paulina. (2020). South Dakota Nurse Says many Patients Deny the Coronavirus Exists – Right up until Death . Washington, DC: Washington Post . Available at: https://www.washingtonpost.com/health/2020/11/16/south-dakota-nurse-coronavirus-deniers .

Wagoner, B., and Jensen, E. (2015). “Microgenetic Evaluation: Studying Learning in Motion,” in The Yearbook of Idiographic Science. Volume 6: Reflexivity and Change in Psychology . Editors G. Marsico, R. Ruggieri, and S. Salvatore (Charlotte, N.C.: Information Age Publishing ).

Wagoner, B., and Valsiner, J. (2005). “Rating Tasks in Psychology: From Static Ontology to Dialogical Synthesis of Meaning,” in Contemporary Theorizing in Psychology: Global Perspectives . Editors A. Gülerce, I. Hofmeister, G. Saunders, and J. Kaye (Toronto, Canada: Captus ), 197–213.

Keywords: survey, questionnaire, research methods, COVID-19, emergency, crises

Citation: Kennedy EB, Jensen EA and Jensen AM (2022) Methodological Considerations for Survey-Based Research During Emergencies and Public Health Crises: Improving the Quality of Evidence and Communication. Front. Commun. 6:736195. doi: 10.3389/fcomm.2021.736195

Received: 04 July 2021; Accepted: 18 October 2021; Published: 15 February 2022.


Copyright © 2022 Kennedy, Jensen and Jensen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Eric B Kennedy, [email protected]

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.


Survey Research: Definition, Examples and Methods


Survey research is a quantitative research method used for collecting data from a set of respondents. It has been one of the most widely used methodologies in industry for years because of the many benefits it offers for collecting and analyzing data.


In this article, you will learn everything about survey research, such as types, methods, and examples.

Survey Research Definition

Survey research is defined as the process of conducting research using surveys that researchers send to survey respondents. The data collected from surveys is then statistically analyzed to draw meaningful research conclusions. In the 21st century, every organization is eager to understand what its customers think about its products or services so it can make better business decisions. Researchers can conduct research in multiple ways, but surveys have proven to be one of the most effective and trustworthy research methods. An online survey is a method for extracting information about a significant business matter from an individual or a group of individuals. It consists of structured survey questions that motivate participants to respond. Credible survey research can give these businesses access to a vast information bank. Organizations in media, other companies, and even governments rely on survey research to obtain accurate data.

The traditional definition of survey research is a quantitative method for collecting information from a pool of respondents by asking multiple survey questions. This research type includes the recruitment of individuals, the collection of data, and the analysis of that data. It is useful for researchers who aim to communicate new features or trends to their respondents.

Generally, it is the primary step towards obtaining quick information about mainstream topics; more rigorous and detailed quantitative research methods like surveys/polls, or qualitative research methods like focus groups and on-call interviews, can follow. There are many situations where researchers can conduct research using a blend of both qualitative and quantitative strategies.


Survey Research Methods

Survey research methods can be classified based on two critical factors: the survey research tool and the time involved in conducting the research. There are three main survey research methods, divided based on the medium used to conduct the survey:

  • Online/Email: Online survey research is one of the most popular survey research methods today. The cost involved in online survey research is minimal, and the responses gathered tend to be accurate.
  • Phone: Survey research conducted over the telephone (CATI survey) can be useful for collecting data from a more extensive section of the target population. However, phone surveys tend to cost more than other mediums and require more time.
  • Face-to-face: Researchers conduct face-to-face in-depth interviews in situations where there is a complicated problem to solve. The response rate for this method is the highest, but it can be costly.

Further, based on the time taken, survey research can be classified into two methods:

  • Longitudinal survey research: Longitudinal survey research involves conducting survey research over a continuum of time, spread across years or even decades. The data collected using this method from one time period to another may be qualitative or quantitative. Respondent behavior, preferences, and attitudes are continuously observed over time to analyze reasons for a change in behavior or preferences. For example, suppose a researcher intends to learn about the eating habits of teenagers. In that case, he/she will follow a sample of teenagers over a considerable period to ensure that the collected information is reliable. Often, cross-sectional survey research follows a longitudinal study.
  • Cross-sectional survey research:  Researchers conduct a cross-sectional survey to collect insights from a target audience at a particular time interval. This survey research method is implemented in various sectors such as retail, education, healthcare, SME businesses, etc. Cross-sectional studies can either be descriptive or analytical. It is quick and helps researchers collect information in a brief period. Researchers rely on the cross-sectional survey research method in situations where descriptive analysis of a subject is required.

Survey research is also bifurcated according to the sampling method used to form the sample: probability and non-probability sampling. Ideally, every individual in a population should have an equal chance of being considered for the survey research sample. Probability sampling is a sampling method in which the researcher chooses the elements based on probability theory. There are various probability sampling methods, such as simple random sampling, systematic sampling, cluster sampling, stratified random sampling, etc. Non-probability sampling is a sampling method where the researcher uses his/her knowledge and experience to form samples (a brief code sketch contrasting the two approaches follows the list below).


The various non-probability sampling techniques are:

  • Convenience sampling
  • Snowball sampling
  • Consecutive sampling
  • Judgemental sampling
  • Quota sampling
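To make the distinction concrete, here is a minimal Python sketch contrasting probability and non-probability selection on a hypothetical sampling frame; the `respondents` data frame and its column names are invented for illustration and are not part of any survey platform.

```python
import pandas as pd

# Hypothetical sampling frame of potential respondents
respondents = pd.DataFrame({
    "id": range(1, 1001),
    "region": ["north", "south", "east", "west"] * 250,
})

# Probability sampling: simple random sample of 100 respondents,
# where every individual has an equal chance of selection.
simple_random = respondents.sample(n=100, random_state=42)

# Probability sampling: stratified random sample, drawing 25 per region
# so each stratum is represented.
stratified = (
    respondents.groupby("region", group_keys=False)
    .apply(lambda g: g.sample(n=25, random_state=42))
)

# Non-probability (convenience) sampling: take the first 100 respondents
# who are easiest to reach - selection probabilities are unknown.
convenience = respondents.head(100)

print(len(simple_random), len(stratified), len(convenience))
```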

Process of implementing survey research methods:

  • Decide on survey questions: Brainstorm and put together valid survey questions that are grammatically and logically appropriate. Understanding the objective and expected outcomes of the survey helps a lot. In many surveys, the details of responses are not as important as gaining insights about what customers prefer from the provided options. In such situations, a researcher can include multiple-choice or closed-ended questions. If researchers need to obtain details about specific issues, they can include open-ended questions in the questionnaire. Ideally, a survey should include a smart balance of open-ended and closed-ended questions. Use survey question formats like the Likert scale, semantic scale, Net Promoter Score question, etc., to avoid fence-sitting.


  • Finalize a target audience: Send out relevant surveys to the target audience and filter out irrelevant questions as required. Survey research is most useful when the sample is drawn from a well-defined target population; this way, results reflect the desired market and can be generalized to the entire population.


  • Send out surveys via the decided mediums: Distribute the surveys to the target audience and patiently wait for the feedback and comments; this is the most crucial step of the survey research. The survey needs to be scheduled keeping in mind the nature of the target audience and its regions. Surveys can be conducted via email, embedded in a website, shared via social media, etc., to gain maximum responses.
  • Analyze survey results: Analyze the feedback in real time and identify patterns in the responses which might lead to a much-needed breakthrough for your organization. GAP, TURF analysis, conjoint analysis, cross tabulation, and many other survey feedback analysis methods can be used to spot and shed light on respondent behavior (a brief cross-tabulation sketch follows this list). Use good survey analysis software. Researchers can use the results to implement corrective measures that improve customer and employee satisfaction.
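The cross tabulation mentioned above can be illustrated with a short, hedged Python sketch; the response data and column names below are invented purely for demonstration.

```python
import pandas as pd

# Hypothetical survey responses
responses = pd.DataFrame({
    "age_group": ["18-29", "30-44", "18-29", "45-60", "30-44", "45-60"],
    "would_recommend": ["yes", "no", "yes", "yes", "yes", "no"],
})

# Cross tabulation: how recommendation intent varies by age group,
# expressed as row percentages.
crosstab = pd.crosstab(
    responses["age_group"], responses["would_recommend"], normalize="index"
)
print(crosstab)
```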

Reasons to conduct survey research

The most crucial reason for conducting market research using surveys is that you can collect answers to specific, essential questions. You can ask these questions in multiple survey formats depending on the target audience and the intent of the survey. Before designing a study, every organization must define the objective of carrying it out so that the study can be structured, planned, and executed well.


Questions that need to be on your mind while designing a survey are:

  • What is the primary aim of conducting the survey?
  • How do you plan to utilize the collected survey data?
  • What type of decisions do you plan to take based on the points mentioned above?

There are three critical reasons why an organization must conduct survey research.

  • Understand respondent behavior to get solutions to your queries: If you have carefully curated a survey, the respondents will provide insights about what they like about your organization as well as suggestions for improvement. To motivate them to respond, be very clear about how securely their responses will be handled and how you will use the answers. This encourages them to be honest in their feedback, opinions, and comments. Online and mobile surveys have established a reputation for protecting privacy, and because of this, more and more respondents feel free to put forth their feedback through these mediums.
  • Present a medium for discussion: A survey can be the perfect platform for respondents to provide criticism or applause for an organization. Important topics, such as product quality or the quality of customer service, can be put on the table for discussion. One way to do this is by including open-ended questions where respondents can write their thoughts. This will make it easy for you to relate the survey findings to what you intend to do with your product or service.
  • Strategy for continuous improvement: An organization can establish the target audience's attributes from the pilot phase of survey research. Researchers can use the criticism and feedback received from this survey to improve the product or service. Once the company successfully makes the improvements, it can send out another survey to measure the change in feedback, keeping the pilot phase as the benchmark. By doing this, the organization can track what was effectively improved and what still needs improvement.

Survey Research Scales

There are four main scales for the measurement of variables:

  • Nominal Scale:  A nominal scale associates numbers with variables for mere naming or labeling, and the numbers usually have no other relevance. It is the most basic of the four levels of measurement.
  • Ordinal Scale:  The ordinal scale has an innate order within the variables along with labels. It establishes the rank between the variables of a scale but not the difference value between the variables.
  • Interval Scale:  The interval scale is a step ahead in comparison to the other two scales. Along with establishing a rank and name of variables, the scale also makes known the difference between the two variables. The only drawback is that there is no fixed start point of the scale, i.e., the actual zero value is absent.
  • Ratio Scale: The ratio scale is the most advanced measurement scale; its variables are labeled, ordered, and have calculable differences between them. In addition to the properties of the interval scale, this scale has a fixed starting point, i.e., a true zero value is present. (A short coding sketch illustrating the four levels follows this list.)
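To make the four levels concrete, the following is a small illustrative Python sketch (with invented variables) showing how responses at each level of measurement might be encoded and which summaries are meaningful.

```python
import pandas as pd

# Nominal: labels with no order (only counts/mode make sense)
department = pd.Series(["sales", "support", "sales", "engineering"], dtype="category")

# Ordinal: ordered labels (rank comparisons make sense, differences do not)
satisfaction = pd.Categorical(
    ["low", "high", "medium", "high"],
    categories=["low", "medium", "high"],
    ordered=True,
)

# Interval: numeric with no true zero (differences meaningful, ratios not)
temperature_c = pd.Series([18.5, 21.0, 19.5, 22.0])

# Ratio: numeric with a true zero (differences and ratios both meaningful)
purchase_count = pd.Series([0, 2, 5, 1])

print(department.value_counts())   # nominal: frequency counts
print(satisfaction.max())          # ordinal: highest ranked category
print(temperature_c.mean())        # interval: means are meaningful
print(purchase_count.sum() / len(purchase_count))  # ratio: rates are meaningful
```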

Benefits of survey research

When survey research is used for the right purposes and implemented properly, marketers gain useful, trustworthy data that they can use to improve the organization's ROI.

Other benefits of survey research are:

  • Minimum investment: Mobile and online surveys require minimal investment per respondent. Even with gifts and other incentives provided to participants, online surveys are extremely economical compared to paper-based surveys.
  • Versatile sources for response collection: You can conduct surveys via various mediums like online and mobile surveys. You can further classify them into qualitative mediums, like focus groups and interviews, and quantitative mediums, like customer-centric surveys. Thanks to offline response collection options, researchers can conduct surveys in remote areas with limited internet connectivity. This makes data collection and analysis more convenient and extensive.
  • Reliable for respondents: Surveys are extremely secure, as respondent details and responses are kept safeguarded. This anonymity encourages respondents to answer the survey questions candidly and honestly. An organization seeking explicit responses for its survey research must state that responses will be kept confidential.

Survey research design

Researchers implement a survey research design in cases where cost is limited and details need to be accessed easily. This method is often used by small and large organizations to understand and analyze new trends, market demands, and opinions. Collecting information through a tactfully designed survey can be much more effective and productive than a casually conducted survey.

There are five stages of survey research design:

  • Decide an aim of the research:  There can be multiple reasons for a researcher to conduct a survey, but they need to decide a purpose for the research. This is the primary stage of survey research as it can mold the entire path of a survey, impacting its results.
  • Filter the sample from the target population: "Who to target?" is an essential question that a researcher should answer and keep in mind while conducting research. The precision of the results is driven by who the members of the sample are and how useful their opinions are. The quality of respondents in a sample matters more for the research results than the quantity. If a researcher seeks to understand whether a product feature will work well with their target market, he/she can conduct survey research with a group of market experts for that product or technology.
  • Zero-in on a survey method:  Many qualitative and quantitative research methods can be discussed and decided. Focus groups, online interviews, surveys, polls, questionnaires, etc. can be carried out with a pre-decided sample of individuals.
  • Design the questionnaire:  What will the content of the survey be? A researcher is required to answer this question to be able to design it effectively. What will the content of the cover letter be? Or what are the survey questions of this questionnaire? Understand the target market thoroughly to create a questionnaire that targets a sample to gain insights about a survey research topic.
  • Send out surveys and analyze results:  Once the researcher decides on which questions to include in a study, they can send it across to the selected sample . Answers obtained from this survey can be analyzed to make product-related or marketing-related decisions.

Survey examples: 10 tips to design the perfect research survey

Picking the right survey design can be the key to gaining the information you need to make crucial decisions for all your research. It is essential to choose the right topic, choose the right question types, and pick a corresponding design. If this is your first time creating a survey, it can seem like an intimidating task. But with QuestionPro, each step of the process is made simple and easy.

Below are 10 Tips To Design The Perfect Research Survey:

  • Set your SMART goals: Before conducting any market research or creating a particular plan, set your SMART goals. What is it that you want to achieve with the survey? How will you measure it in a timely manner, and what results are you expecting?
  • Choose the right questions:  Designing a survey can be a tricky task. Asking the right questions may help you get the answers you are looking for and ease the task of analyzing. So, always choose those specific questions – relevant to your research.
  • Begin your survey with a generalized question:  Preferably, start your survey with a general question to understand whether the respondent uses the product or not. That also provides an excellent base and intro for your survey.
  • Enhance your survey: Choose the 15–20 most relevant questions. Frame each as a different question type based on the kind of answer you would like to gather. Create the survey using different types of questions such as multiple-choice, rating scale, open-ended, etc. Look at more survey examples and the four measurement scales every researcher should remember.
  • Prepare yes/no questions:  You may also want to use yes/no questions to separate people or branch them into groups of those who “have purchased” and those who “have not yet purchased” your products or services. Once you separate them, you can ask them different questions.
  • Test all electronic devices:  It becomes effortless to distribute your surveys if respondents can answer them on different electronic devices like mobiles, tablets, etc. Once you have created your survey, it’s time to TEST. You can also make any corrections if needed at this stage.
  • Distribute your survey: Once your survey is ready, it is time to share and distribute it to the right audience. You can distribute it as handouts or share it via email, social media, and other industry-related offline/online communities.
  • Collect and analyze responses:  After distributing your survey, it is time to gather all responses. Make sure you store your results in a particular document or an Excel sheet with all the necessary categories mentioned so that you don’t lose your data. Remember, this is the most crucial stage. Segregate your responses based on demographics, psychographics, and behavior. This is because, as a researcher, you must know where your responses are coming from. It will help you to analyze, predict decisions, and help write the summary report.
  • Prepare your summary report: Now is the time to share your analysis. At this stage, you should present all the responses gathered from the survey in a fixed format. The reader/customer must also get clarity about the goal you were trying to achieve with the study. Address questions such as: has the product or service been used and preferred? Do respondents prefer one product over another? Are there any recommendations?

Having a tool that helps you carry out all the necessary steps of this type of study is a vital part of any project. At QuestionPro, we have helped more than 10,000 clients around the world carry out data collection in a simple and effective way, in addition to offering a wide range of solutions to take advantage of this data in the best possible way.

From dashboards, advanced analysis tools, automation, and dedicated functions, in QuestionPro, you will find everything you need to execute your research projects effectively. Uncover insights that matter the most!



Research Methods | Definitions, Types, Examples

Research methods are specific procedures for collecting and analyzing data. Developing your research methods is an integral part of your research design . When planning your methods, there are two key decisions you will make.

First, decide how you will collect data . Your methods depend on what type of data you need to answer your research question :

  • Qualitative vs. quantitative : Will your data take the form of words or numbers?
  • Primary vs. secondary : Will you collect original data yourself, or will you use data that has already been collected by someone else?
  • Descriptive vs. experimental : Will you take measurements of something as it is, or will you perform an experiment?

Second, decide how you will analyze the data .

  • For quantitative data, you can use statistical analysis methods to test relationships between variables.
  • For qualitative data, you can use methods such as thematic analysis to interpret patterns and meanings in the data.

Table of contents

  • Methods for collecting data
  • Examples of data collection methods
  • Methods for analyzing data
  • Examples of data analysis methods
  • Other interesting articles
  • Frequently asked questions about research methods

Data is the information that you collect for the purposes of answering your research question . The type of data you need depends on the aims of your research.

Qualitative vs. quantitative data

Your choice of qualitative or quantitative data collection depends on the type of knowledge you want to develop.

For questions about ideas, experiences and meanings, or to study something that can’t be described numerically, collect qualitative data .

If you want to develop a more mechanistic understanding of a topic, or your research involves hypothesis testing , collect quantitative data .

You can also take a mixed methods approach , where you use both qualitative and quantitative research methods.

Primary vs. secondary research

Primary research is any original data that you collect yourself for the purposes of answering your research question (e.g. through surveys , observations and experiments ). Secondary research is data that has already been collected by other researchers (e.g. in a government census or previous scientific studies).

If you are exploring a novel research question, you’ll probably need to collect primary data . But if you want to synthesize existing knowledge, analyze historical trends, or identify patterns on a large scale, secondary data might be a better choice.


Descriptive vs. experimental data

In descriptive research , you collect data about your study subject without intervening. The validity of your research will depend on your sampling method .

In experimental research , you systematically intervene in a process and measure the outcome. The validity of your research will depend on your experimental design .

To conduct an experiment, you need to be able to vary your independent variable , precisely measure your dependent variable, and control for confounding variables . If it’s practically and ethically possible, this method is the best choice for answering questions about cause and effect.



Research methods for collecting data, by whether they are primary or secondary and qualitative or quantitative:

  • Experiment (primary, quantitative): to test cause-and-effect relationships.
  • Survey (primary, quantitative): to understand the general characteristics of a population.
  • Interview/focus group (primary, qualitative): to gain a more in-depth understanding of a topic.
  • Observation (primary, either): to understand how something occurs in its natural setting.
  • Literature review (secondary, either): to situate your research in an existing body of work, or to evaluate trends within a research topic.
  • Case study (either, either): to gain an in-depth understanding of a specific group or context, or when you don’t have the resources for a large study.

Your data analysis methods will depend on the type of data you collect and how you prepare it for analysis.

Data can often be analyzed both quantitatively and qualitatively. For example, survey responses could be analyzed qualitatively by studying the meanings of responses or quantitatively by studying the frequencies of responses.
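For instance, here is a minimal Python sketch of the quantitative route, counting the frequencies of closed-ended responses (the response data are invented for illustration):

```python
import pandas as pd

# Invented closed-ended survey responses
answers = pd.Series(["agree", "agree", "neutral", "disagree", "agree"])

# Quantitative angle: frequency and percentage of each response option
print(answers.value_counts())
print(answers.value_counts(normalize=True) * 100)
```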

Qualitative analysis methods

Qualitative analysis is used to understand words, ideas, and experiences. You can use it to interpret data that was collected:

  • From open-ended surveys and interviews , literature reviews , case studies , ethnographies , and other sources that use text rather than numbers.
  • Using non-probability sampling methods .

Qualitative analysis tends to be quite flexible and relies on the researcher’s judgement, so you have to reflect carefully on your choices and assumptions and be careful to avoid research bias .

Quantitative analysis methods

Quantitative analysis uses numbers and statistics to understand frequencies, averages and correlations (in descriptive studies) or cause-and-effect relationships (in experiments).

You can use quantitative analysis to interpret data that was collected either:

  • During an experiment .
  • Using probability sampling methods .

Because the data is collected and analyzed in a statistically valid way, the results of quantitative analysis can be easily standardized and shared among researchers.

Research methods for analyzing data, by whether they are qualitative or quantitative:

  • Statistical analysis (quantitative): to analyze data collected in a statistically valid manner (e.g. from experiments, surveys, and observations).
  • Meta-analysis (quantitative): to statistically analyze the results of a large collection of studies. Can only be applied to studies that collected data in a statistically valid manner.
  • Thematic analysis (qualitative): to analyze data collected from interviews, focus groups, or textual sources, and to understand general themes in the data and how they are communicated.
  • Content analysis (either): to analyze large volumes of textual or visual data collected from surveys, literature reviews, or other sources. Can be quantitative (i.e. frequencies of words) or qualitative (i.e. meanings of words).


If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Chi square test of independence
  • Statistical power
  • Descriptive statistics
  • Degrees of freedom
  • Pearson correlation
  • Null hypothesis
  • Double-blind study
  • Case-control study
  • Research ethics
  • Data collection
  • Hypothesis testing
  • Structured interviews

Research bias

  • Hawthorne effect
  • Unconscious bias
  • Recall bias
  • Halo effect
  • Self-serving bias
  • Information bias

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
  • If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.



"I thought AI Proofreading was useless but.."

I've been using Scribbr for years now and I know it's a service that won't disappoint. It does a good job spotting mistakes”


Open Access

Study Protocol

Assessing the fragility index of randomized controlled trials supporting perioperative care guidelines: A methodological survey protocol

Roles Conceptualization, Investigation, Writing – original draft, Writing – review & editing

* E-mail: [email protected]

Affiliations Department of Anesthesiology and Pain Medicine, Toronto General Hospital, University Health Network, University of Toronto, Toronto, ON, Canada, Department of Clinical Epidemiology and Biostatistics, School of Medicine, Pontificia Universidad Javeriana, Bogotá D.C, Colombia


Roles Supervision, Writing – review & editing

Affiliations Department of Clinical Epidemiology and Biostatistics, School of Medicine, Pontificia Universidad Javeriana, Bogotá D.C, Colombia, Department of Anesthesiology, San Ignacio University Hospital, School of Medicine, Pontificia Universidad Javeriana, Bogotá D.C, Colombia

Affiliation Department of Clinical Epidemiology and Biostatistics, School of Medicine, Pontificia Universidad Javeriana, Bogotá D.C, Colombia

Affiliations Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada, Biostatistics Unit, St Joseph’s Healthcare Hamilton, Hamilton, ON, Canada, Faculty of Health Sciences, University of Johannesburg, Johannesburg, South Africa

  • Margarita Otalora-Esteban, 
  • Martha Beatriz Delgado-Ramirez, 
  • Fabian Gil, 
  • Lehana Thabane


  • Published: September 12, 2024
  • https://doi.org/10.1371/journal.pone.0310092


Introduction

The Fragility Index (FI) and the FI family are statistical tools that measure the robustness of randomized controlled trials (RCT) by examining how many patients would need a different outcome to change the statistical significance of the main results of a trial. These tools have recently gained popularity in assessing the robustness or fragility of clinical trials in many clinical areas and analyzing the strength of the trial outcomes underpinning guideline recommendations. However, it has not been applied to perioperative care Clinical Practice Guidelines (CPG).

This study aims to survey clinical practice guidelines in anesthesiology to determine the Fragility Index of RCTs supporting the recommendations, and to explore trial characteristics associated with fragility.

Methods and analysis

A methodological survey will be conducted using the target population of RCTs referenced in the recommendations of the CPGs of the North American and European societies from 2012 to 2022. The FI will be assessed for statistically significant and non-significant trial results. A Poisson regression analysis will be used to explore factors associated with fragility.

This methodological survey aims to estimate the Fragility Index of RCTs supporting perioperative care guidelines published by North American and European societies of anesthesiology between 2012 and 2022. The results of this study will inform the methodological quality of RCTs included in perioperative care guidelines and identify areas for improvement.

Citation: Otalora-Esteban M, Delgado-Ramirez MB, Gil F, Thabane L (2024) Assessing the fragility index of randomized controlled trials supporting perioperative care guidelines: A methodological survey protocol. PLoS ONE 19(9): e0310092. https://doi.org/10.1371/journal.pone.0310092

Editor: Stefano Turi, IRCCS: IRCCS Ospedale San Raffaele, ITALY

Received: August 9, 2023; Accepted: August 24, 2024; Published: September 12, 2024

Copyright: © 2024 Otalora-Esteban et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: No datasets were generated or analyzed during the current study. All relevant data from this study will be made available upon study completion.

Funding: The author(s) received no specific funding for this work.

Competing interests: The authors have declared that no competing interests exist.

1. Introduction

The use of statistically significant findings in the scientific literature has come under scrutiny in recent years, as have the misuse of statistical tests and the misinterpretation of results [ 1 , 2 ]. A large and seemingly high-quality portion of evidence-based medicine consists of randomized trials. Limitations of Randomized Clinical Trials (RCTs) include incorrect statistical inference, low internal or external validity, misinterpretation of statistical approaches, publication bias, and difficulty applying findings to individual patients [ 3 ]. Conclusions drawn from trial findings may be questioned because of their fragility, particularly when small adjustments have a significant impact on the result [ 4 ]. The fragility index (FI) measures the minimum number of conversions from non-event to event in a treatment group needed to shift the P-value over the 0.05 threshold [ 4 ]. The FI family of indices includes the FI, the Reverse Fragility Index (rFI), the Fragility Quotient (FQ), the Incidence Fragility Index (FIq), and the Generalized Fragility Index (GFIq) [ 5 ]. These measures can provide valuable information on the statistical stability of trial results and their susceptibility to misinterpretation [ 6 – 10 ].
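As a purely illustrative sketch of the FI mechanics (not the authors' analysis code, which will use the R packages named later in the protocol), the following Python function converts non-events to events in one arm of a 2x2 table and recomputes Fisher's exact test until the p-value crosses 0.05; the counts used in the example are invented.

```python
from scipy.stats import fisher_exact

def fragility_index(events_a, total_a, events_b, total_b, alpha=0.05):
    """Count event-status changes in group A needed to push p over alpha.

    In practice the arm with the smaller number of events is usually the
    one modified; here group A is modified for simplicity.
    """
    _, p = fisher_exact([[events_a, total_a - events_a],
                         [events_b, total_b - events_b]])
    if p >= alpha:
        return None  # result not statistically significant to begin with
    flips = 0
    while p < alpha and events_a < total_a:
        events_a += 1  # convert one non-event to an event in group A
        flips += 1
        _, p = fisher_exact([[events_a, total_a - events_a],
                             [events_b, total_b - events_b]])
    return flips

# Invented example: 10/100 events vs 25/100 events
print(fragility_index(10, 100, 25, 100))
```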

The limitations of conducting RCTs in perioperative medicine, including frequently non-statistically significant results and susceptibility to spin bias, have prompted more comprehensive research methods to facilitate comparison between studies [ 11 ]. The use of fragility assessment as a complementary measure to determine the stability of study results has been proposed as a potential solution [ 12 , 13 ]. Its use has been extended to evaluate the robustness of RCT results in Clinical Practice Guidelines (CPG) [ 6 , 14 , 15 ].

Previous studies evaluating fragility in anesthesiology RCTs have produced similar results to those reported in other clinical areas, with a median FI of 3 (IQR, 1–7) [ 11 , 16 – 19 ].

A recent assessment of the evidence supporting the North American Society of Anesthesiology and the European Society of Anesthesiology CPG found that less than one-fifth of the recommendations are supported by Grade A evidence [ 20 ].

Spin in the abstracts of RCTs published in high-impact anesthesia journals has been reported in 40–54% of cases, misrepresenting validity and potentially impacting clinical decisions [ 16 , 21 ].

These findings underscore the importance of meta-research initiatives in identifying methodological strengths and weaknesses and in promoting evidence-based science by eliminating ineffective research practices [ 22 , 23 ].

a. Primary objective.

To assess the fragility of RCTs with dichotomous outcomes supporting perioperative care guidelines of the North American and European societies of anesthesiology between 2012 and 2022.

b. Secondary objective.

To explore randomized controlled trial attributes influencing statistical fragility.

2. Methods and analysis

Study design.

This study corresponds to a methodological survey of RCTs supporting perioperative care guidelines published by North American and European societies of anesthesiology between 2012 and 2022.

This protocol is registered in OSF registries (Registration DOI: https://doi.org/10.17605/OSF.IO/8KBPE ).

Eligibility criteria

CPGs will be selected based on the following eligibility criteria:

  • Perioperative evidence-based guidelines for perioperative care medicine interventions aimed at anesthesiologists.
  • English-written guidelines from the North American societies (the United States, Canada, and the United Kingdom) and European societies.
  • Released between 2012 and 2022.
  • Must include an explicit statement identifying it as a "guideline."

The following will be excluded:

  • Perioperative evidence-based guidelines for critical care and chronic pain medicine.
  • Practice recommendations, practice advisories, or consensus statements.
  • Older versions of the same guidelines, based on the year of publication.

Selected CPGs will be reviewed to identify all possible RCTs supporting the recommendations within each guideline, without language limitations. Each of the trials will be screened for eligibility using the following criteria:

  • Human clinical trials with two arms using a 1:1 allocation ratio
  • The trial uses a binary outcome.
  • Intervention studies in the adult, obstetric, and pediatric populations.

The following trials will be excluded:

  • Trials not using a two-parallel-arm trial design or a two-by-two factorial RCT design.
  • Non-inferiority trials.

Search strategy

Before conducting the comprehensive search, a preliminary search was made to ensure study viability and to identify the list of eligible anesthesia societies (S1 Appendix). The search strategy was developed with the assistance of a medical librarian.

We will conduct a two-step comprehensive search strategy. In the first step, we will search for CPGs published by the North American and European Societies of Anesthesiology between 2012 and 2022. In the second step, we will identify RCTs supporting the recommendations in these guidelines.

The search will be conducted in MEDLINE, Embase, TRIP Database, and the North American and European Societies of Anesthesiology websites. We will use controlled vocabulary and free text terms, with field labels, Boolean, and proximity operators tailored to each search engine. The evaluation period will run from January 2012 to December 2022.

Data extraction

Since we will use CPGs as the primary source of trials, the data extraction process will consist of two steps.

In the first step, two independent reviewers will screen the search results to identify the CPGs that meet the eligibility criteria. Any disagreements will be resolved through discussion and, if necessary, with the involvement of a third reviewer. Rayyan, a web and mobile app for systematic reviews [ 24 ], will be used for the screening process.

In the second step, two independent reviewers will screen the trials identified from each of the included CPGs and data extraction will be conducted using a standardized extraction form in REDCap®.

Since there is the possibility that an RCT is cited in more than one guideline, we will remove duplicates from the data set before the extraction, to avoid taking into account the information of a trial more than once in the analysis.

The extracted data will encompass general information about the CPGs, such as title, year of publication, target audience, total number of recommendations, the recommendation classification system used to determine the level of evidence, the quality of the studies, and the strength of recommendations. Additionally, we will collect individual study characteristics: publication year, type of trial, unit of allocation, number of participating centers, type of blinding, method of allocation concealment, ethical approval, source of funding, and data sharing agreement. Furthermore, we will extract outcome-related information: outcome name, outcome definition, imputation method, sample size, number of patients randomized per group, number of patients who experienced the endpoint per group, level of statistical significance, and number of patients lost to follow-up.

This study complies with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines for methodological studies [ 25 ] (S1 Checklist).

Sample size estimation

The reference population corresponds to all RCTs identified from the guidelines published between 2012 and 2022, and the unit of analysis is the RCT report.

As this is a methodological survey, a sample of the reference population will be calculated using a “rule of thumb” of 10 events per predictor variable (EPP).

The sample size and EPP calculations were implemented in R using the sampler package [ 26 ].

The proportions of RCTs across categories (general, cardiovascular, pediatric, regional, obstetric, and neuroanesthesia), which the sampler package requires to calculate a stratified sample, will be obtained from the survey results.
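The calculations themselves will be performed in R with the sampler package, as stated above. Purely as an illustration of the 10-events-per-predictor rule of thumb and of proportional stratified allocation, a hypothetical sketch (with invented strata proportions) might look like the following.

```python
# Rule-of-thumb sample size: 10 events per predictor variable (EPP)
n_predictors = 9           # e.g. the nine tentatively proposed covariates
events_per_predictor = 10
required_events = n_predictors * events_per_predictor  # 90 RCT reports

# Proportional stratified allocation across anesthesia subspecialties
# (proportions are invented placeholders, to be replaced by survey results)
strata_proportions = {
    "general": 0.40, "cardiovascular": 0.15, "pediatric": 0.15,
    "regional": 0.12, "obstetric": 0.10, "neuroanesthesia": 0.08,
}
allocation = {s: round(required_events * p) for s, p in strata_proportions.items()}
print(required_events, allocation)
```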

Data analysis

Primary analysis..

A descriptive analysis will be conducted to characterize the CPGs and the RCTs that support the recommendations. Counts and percentages will be used as summary measures for categorical variables. For continuous variables, means or medians will be reported, with standard deviations, interquartile ranges, or ranges employed as appropriate. The Statistical Analyses and Methods in the Published Literature (SAMPL) guidelines for reporting descriptive statistics will be followed [ 27 ].

For RCTs with statistically significant results (p < 0.05), the FI will be calculated as the smallest number of event-status changes required to obtain p ≥ 0.05 [ 4 ]. For non-significant RCTs (p ≥ 0.05), the rFI will be the smallest number of event-status changes required to obtain p < 0.05 [ 5 ]. For the analysis, priority will be given to the outcome(s) supporting the guideline recommendation. Medians and interquartile ranges will be used to describe the results.

The overall FI will tentatively be reported for the following subgroups, based on the target population of the identified RCTs: general anesthesia, regional, pediatric, obstetric, cardiovascular, and neuroanesthesia.

R will be used as the statistical software [ 28 ], along with the R packages Fragility Index [ 29 ] and FragilityTools [ 30 ].

Secondary analysis.

The exploratory analysis of the factors associated with the overall Fragility Index (FI) will be addressed using a Poisson regression analysis. In case of over-dispersion, a negative binomial regression will be employed. The overall FI will serve as the dependent variable in the analysis. The following independent variables are tentatively proposed to be included in the analysis: (1) type of trial, (2) type of blinding, (3) allocation concealment, (4) patients lost-to-follow-up, (5) source of funding, (6) ethical approval, (7) type of intervention (drug-related/non-drug-related), (8) open data/transparency agreement, and (9) type of imputation method used.

Before fitting the model, the assumptions of Poisson distribution and the absence of over-dispersion will be verified. The maximum likelihood estimation method will be utilized for model fitting, and goodness-of-fit and deviance will be used for model evaluation. Additionally, variance inflation factors (VIF) will be calculated to assess multicollinearity, defined as VIF > 10. The analysis results will be reported as estimated β coefficients with 95% confidence intervals (CIs), p-values for each included variable, and overall model fit statistics. All estimates will be reported to two decimal places, and p-values will be reported to three decimal places.

The R statistical software [ 28 ] will be utilized for all analyses. Two-sided hypothesis testing will explore associations between factors and fragility index, with a significance level set at alpha = 0.05.
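As a hedged illustration of this analysis plan (the protocol specifies R; the statsmodels-based sketch below is only a stand-in, with invented variable names and toy data), a Poisson model of the FI on trial characteristics with a crude over-dispersion check could be set up as follows.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical extracted dataset: one row per RCT (toy values)
df = pd.DataFrame({
    "fi": [2, 5, 1, 8, 3, 0, 4, 6],
    "blinding": ["double", "open", "double", "double",
                 "open", "open", "double", "double"],
    "lost_to_followup": [3, 10, 0, 2, 7, 12, 1, 4],
})

# Poisson regression of the fragility index on trial characteristics
poisson_model = smf.glm("fi ~ blinding + lost_to_followup", data=df,
                        family=sm.families.Poisson()).fit()
print(poisson_model.summary())

# Crude over-dispersion check: Pearson chi-square / residual df much
# greater than 1 suggests switching to a negative binomial model.
dispersion = poisson_model.pearson_chi2 / poisson_model.df_resid
print(round(dispersion, 2))
```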

Refer to Table 1 for a summary of the analysis plan.


https://doi.org/10.1371/journal.pone.0310092.t001

Updates and amendments

Updates and amendments to this protocol (if applicable) will be summarized in the final manuscript.

Ethics and dissemination

This study is a methodological survey and does not involve human subjects. Nevertheless, it has institutional board approval (Act N° 6/2023).

3. Discussion

Perioperative medicine faces unique challenges in conducting randomized controlled trials (RCTs) and generating statistically significant results [ 11 ]. These limitations have led to the proposal of complementary measures, such as fragility assessment, to determine the robustness of RCT results.

Since the fragility index was first described, there have been efforts to expand its applicability to other types of outcomes and to meta-analyses [ 5 , 31 , 32 ].

Previous FI assessments of RCTs in CPGs have been limited either to studies published in high-impact journals or to evaluations of a single guideline and its respective RCTs [ 11 , 17 , 18 ].

This proposal, unlike previous research on fragility, focuses on conducting a comprehensive and reproducible search. It is not restricted to general anesthesia guidelines, to specific types of publications (such as Q1 journals), or to trials with statistically significant results. Consequently, the full spectrum of recommendations and their corresponding randomized controlled trials (RCTs) will be included in the sampling frame. This approach aligns with established guidelines for conducting methodological research [ 23 ].

The results of this study will provide valuable insight into the use of fragility assessment, highlighting the weakness of relying solely on statistical significance and emphasizing the ability of fragility measures to support RCT comparability in perioperative medicine.

Supporting information

S1 Checklist. PRISMA-P checklist.

https://doi.org/10.1371/journal.pone.0310092.s001

S1 Appendix. Preliminary search.

https://doi.org/10.1371/journal.pone.0310092.s002

  • 28. R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing; 2022.
  • Open access
  • Published: 13 September 2024

Prevalence, predictors and outcomes of self-reported feedback for EMS professionals: a mixed-methods diary study

  • Caitlin Wilson 1 , 2 , 3 ,
  • Luke Budworth 3 , 4 ,
  • Gillian Janes 5 ,
  • Rebecca Lawton 1 , 3 , 4 &
  • Jonathan Benn 1 , 3 , 4  

BMC Emergency Medicine volume 24, Article number: 165 (2024)


Providing feedback to healthcare professionals and organisations on performance or patient outcomes may improve care quality and professional development, particularly in Emergency Medical Services (EMS) where professionals make autonomous, complex decisions and current feedback provision is limited. This study aimed to determine the content and outcomes of feedback in EMS by measuring feedback prevalence, identifying predictors of receiving feedback, categorising feedback outcomes and determining predictors of feedback efficacy.

An observational mixed-methods study was used. EMS professionals delivering face-to-face patient care in the United Kingdom’s National Health Service completed a baseline survey and diary entries between March-August 2022. Diary entries were event-contingent and collected when a participant identified they had received feedback. Self-reported data were collected on feedback frequency, environment, characteristics and outcomes. Feedback environment was measured using the Feedback Environment Scale. Feedback outcomes were categorised using hierarchical cluster analysis. Multilevel logistic regression was used to assess which variables predicted feedback receipt and efficacy. Qualitative data were analysed using content analysis.

299 participants completed baseline surveys and 105 submitted 538 diary entries. 215 (71.9%) participants had received feedback in the last 30 days, with patient outcome feedback the most frequent ( n  = 149, 42.8%). Feedback format was predominantly verbal ( n  = 157, 73.0%) and informal ( n  = 189, 80.4%). Significant predictors for receiving feedback were a paramedic role (aOR 3.04 [1.14, 8.00]), a workplace with a positive feedback-seeking culture (aOR 1.07 [1.04, 1.10]) and white ethnicity (aOR 5.68 [1.01, 29.73]). Feedback outcomes included: personal wellbeing (closure, confidence and job satisfaction), professional development (clinical practice and knowledge) and service outcomes (patient care and patient safety). Feedback-seeking behaviour and higher scores on the Feedback Environment Scale were statistically significant predictors of feedback efficacy. Solicited feedback improved wellbeing (aOR 3.35 [1.68, 6.60]) and professional development (aOR 2.58 [1.10, 5.56]) more than unsolicited feedback.

Feedback for EMS professionals was perceived to improve personal wellbeing, professional development and service outcomes. EMS workplaces need to develop a culture that encourages feedback-seeking to strengthen the impact of feedback for EMS professionals on clinical decision-making and staff wellbeing.


The National Health Service (NHS) staff survey [ 1 ] consistently identifies Emergency Medical Services (EMS) professionals as the group with the highest work-related stress (55.7%), burnout (49.3%) and leaving intentions (42.9%) – with ~ 25% having applied for non-NHS jobs post COVID-19 [ 2 ]. Receiving feedback on patient outcomes and personal performance may improve job support for EMS professionals and enhance staff wellbeing, job satisfaction and patient care [ 3 , 4 ].

Across healthcare settings, including EMS, clinical performance feedback has been demonstrated to improve quality of care and professional development [ 5 , 6 ]. However, recent reviews of existing literature and current practice [ 7 ] recommend further research on the provision of patient outcome feedback and the impact of feedback on staff wellbeing in EMS.

EMS professionals could particularly benefit from feedback as their work environment is characterised by complexity, uncertainty and extreme stressors [ 8 , 9 ]. EMS professionals work autonomously, making complex decisions including assessing and treating patients at home to avoid unnecessary hospital attendance and reduce demand on emergency departments [ 10 , 11 ]. Nevertheless, providing and accessing EMS feedback on decision-making is difficult due to constraints such as a mobile workforce, disconnected digital technology [ 12 ] and data-sharing governance issues [ 4 ].

When feedback is provided for EMS professionals this is typically through formal initiatives, such as performance feedback during appraisals, patient outcome feedback from “post-box” schemes and patient-experience feedback through thank-you letters [ 3 , 7 ]. However, qualitative research suggests that EMS professionals desire more and better feedback, especially concerning patient outcomes [ 3 , 4 , 13 ]. When formal feedback initiatives are lacking, EMS professionals informally approach ED staff seeking feedback on patient outcomes [ 3 ]. However, informal feedback is limited by patient confidentiality issues, information quality, verbal format and geographical barriers [ 4 , 13 ]. While systematic reviews [ 6 ] and current practice [ 7 ] suggest formal feedback to EMS professionals positively affects patient care and clinical performance, it is unknown whether informal feedback or actively solicited feedback have similar outcomes.

In the United States (US), it is estimated that feedback is provided to EMS professionals in just 24% of encounters [ 14 ] with 50–69% of paramedics self-reporting having received feedback in the previous month [ 15 , 16 ]. Particular recipient and contextual characteristics appear associated with increased feedback, including staff with higher level certifications, fewer years’ experience and working in busier or hospital-based organisations [ 15 ].

Learning more about how the context and format of feedback impacts outcomes, as well as the mechanisms through which feedback influences outcomes, could be an important step in enhancing feedback effectiveness in EMS [ 17 ]. In this vein, Clinical Performance Feedback Intervention Theory, which has good face validity in the prehospital setting [ 3 ], offers 42 hypotheses of when feedback is more effective e.g. when feeding back to staff with positive beliefs about feedback [ 18 ]. Feedback effectiveness is also predicted by the extent to which an organisation encourages, provides and uses feedback, i.e. the ‘feedback environment’ [ 19 , 20 ], whereby a positive feedback environment predicts positive outcomes for individuals and organisations [ 21 , 22 , 23 , 24 ].

Despite increasing research interest in prehospital feedback, no studies have explored the content and outcomes of prehospital feedback prospectively, or assessed feedback prevalence and predictors amongst EMS professionals in the United Kingdom. International studies have been limited by not drawing upon existing theory and potential recall bias [ 15 , 16 ]. This study aimed to address these gaps by answering the following research questions:

1. How prevalent is feedback for UK EMS professionals and what types of feedback do they receive?

2. What individual and contextual factors predict EMS professionals receiving feedback in the previous 30 days?

3. What are the perceived outcomes of feedback for EMS professionals?

4. What predicts instances of self-reported feedback being perceived as improving outcomes?

Study design

This observational mixed-methods study consisted of a baseline survey followed by diary entries. Collecting diary entries in real time is known to reduce recall bias by collecting data at the level of feedback events and therefore not relying on generalised reflections of feedback provision over a period of time, whilst enabling analysis of within- and between-person variability [ 25 ]. Diary entries were event-contingent and collected when a participant identified they had received feedback. Diary entries on desired feedback and a follow-up survey were part of the study but are not reported here.

This mixed-methods study followed the approach defined by Creswell and Plano Clark [ 26 ] as ‘triangulation design: quantitative data model’. The primary emphasis of data collection was quantitative survey data, which was supported by open-ended questions in the baseline survey and diary entry form to contextualise and expand upon quantitative results.

Ethical approval was granted from the University of Leeds ethics committee (PSYC-406 04/01/2022) and the Health Research Authority (ID: 295645).

STROBE [ 27 ] and LEVEL recommendations [ 28 ] were followed.

Setting and selection of participants

Eligible participants were EMS clinicians (i.e. paramedics) and non-registered professionals (e.g. Emergency Medical Technicians [EMTs]) delivering face-to-face patient care, employed by an NHS ambulance trust in the United Kingdom.

An opportunistic sample was recruited via social media and organisations’ internal communications. Informed consent was obtained in the baseline survey after providing study information. Access to the baseline survey was via an anonymous link, with individual diary study links issued to participants who provided their email address in their survey response. Participants completing all study elements were enrolled in a prize draw for three £50 vouchers to aid recruitment and reduce drop-out.

Data collection

Data were collected using Qualtrics (Qualtrics, Provo, UT) between March and August 2022. The survey and diary study measures were developed for this study (Additional file 1 ). They were piloted with three EMS professionals and refined based on their feedback.

Baseline survey

The baseline survey covered demographics, feedback frequency and feedback environment. Demographic questions included professional role, years of EMS experience, sex, age and ethnicity. The feedback frequency questions were adapted from a large-scale US EMS feedback survey [ 15 ]. They included items such as ‘In the past 30 days, did you receive any feedback on the medical care you provided to a patient?’ scored on a dichotomous scale (‘yes/no’). If answered positively, it was followed by ‘How was this feedback provided? Verbal, by email, by text, written on paper, other’.

The feedback environment measure was based upon the shortened Feedback Environment Scale (FES) [ 29 ], which demonstrated excellent reliability for nurses (Cronbach’s alpha 0.90) [ 30 ]. The questions were adapted for the prehospital setting and reworded so as not to refer to a specific feedback source. Participants were asked to respond on a Likert-type scale ranging from 1 to 7 (strongly disagree-strongly agree) to statements such as ‘I receive useful feedback at work’ and ‘When I want feedback, this is readily available’. Once respondents provided ratings for each of the 14 items, the scores were aggregated. A high score on the FES generally indicates a positive perception of the feedback environment [ 29 ].
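As a rough illustration of how a summated FES score and its internal consistency could be computed, here is a minimal Python sketch; the file name and item columns (fes_1 to fes_14) are hypothetical, and the paper reports alpha from its own data.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

survey = pd.read_csv("baseline_survey.csv")              # hypothetical file
fes_items = survey[[f"fes_{i}" for i in range(1, 15)]]   # 14 Likert items scored 1-7

# Aggregate to a total score; higher scores indicate a more positive feedback environment
survey["fes_total"] = fes_items.sum(axis=1)
print(f"Cronbach's alpha: {cronbach_alpha(fes_items):.2f}")
```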

Diary entries

Immediately after completing the baseline survey, participants were sent a link to access their diary which remained open until the end of the data collection period. Participants were instructed to complete diary entries whenever they received feedback and were advised to log these entries as soon as possible to ensure accurate and timely recording. When logging a feedback event, participants were asked a series of multiple choice and structured response questions informed by Clinical Performance Feedback Intervention Theory [ 18 ], including, for example, ‘How quickly after the incident was the feedback provided?’ and ‘What effect do you think receiving this feedback had on your clinical practice/knowledge/confidence/sense of closure/job satisfaction/patient care/patient safety? Positive, negative or no effect’.

In this study, we differentiate between ‘negative feedback’ and ‘positive feedback’ based on the content and delivery of the feedback itself, i.e. the sign, nature or direction of feedback. ‘Negative feedback’ refers to feedback that highlights areas for improvement or points out errors, whereas ‘positive feedback’ focuses on reinforcing successful performance or praising achievements. Conversely, ‘feedback with a negative impact’ or ‘feedback with a positive impact’ refers to the subjective perception of the feedback’s effect on the recipient, as reported by the EMS professionals in their diaries. Thus, the same feedback can be perceived to have different impacts by different individuals.

Data analysis

Quantitative analyses were undertaken in R (Version 4.1.3, R Core Team) [ 31 ] within RStudio [ 32 ], and qualitative analyses in NVivo (Version 12 Plus, QSR International). The detailed multilevel data analysis plan [ 33 ], study hypotheses and research models are described in Additional file 2 .

Free-text qualitative responses in the baseline survey and diary entries were analysed using content analysis by an early-career paramedic researcher (CW) with input from the wider team of senior health services researchers (GJ, RL, JB). For the prevalence and predictors objectives, content analysis enabled the categorisation of free-text responses that participants had submitted under ‘other’ to either an existing category (e.g. ‘patient outcome feedback’) or the development of a new category (e.g. ‘incident-reported feedback’). Within the hierarchical cluster analysis, qualitative insights enriched the interpretation of the quantitative results by providing contextual examples of perceived feedback impact among EMS professionals.

Assuming 50/50 balanced binary predictors and normally distributed continuous predictors, 325 participants were required to detect any significant predictors of a medium-sized effect (i.e. Cohen’s d = 0.5) for prehospital feedback perceived as improving outcomes with 80% power, after adjustment for other variables [ 34 ]. The level-1 sample size was pre-specified by the research team at 10 diary entries per participant, which was deemed an acceptable burden during stakeholder consultation. The power analysis was based on the basic research model, which included two level-2 predictors (role – binary, length in service – continuous) and two level-1 predictors (feedback content – categorical, solicited/unsolicited – binary).

Statistical methods

Data on the individual-level variables (role, length in service, FES score) were collected during the baseline survey. Data on the diary-level independent variables (feedback content, feedback-seeking behaviour, formal/informal, source, sign, format, lag-time) and dependent variable (feedback outcome) were collected for each diary entry. FES scale reliability was examined using Cronbach’s alpha.

To describe feedback prevalence, descriptive statistics for baseline quantitative data were produced.

To identify predictors of receiving feedback in the last 30 days, baseline survey data were analysed using binary logistic regression (via ‘lme4’) [ 35 ]. Univariable logistic regression assessed individual associations between each predictor (e.g. role) and the outcome (i.e. having received feedback in the previous 30 days). Multivariable logistic regression included all predictors simultaneously that formed part of the simple or extended research model.
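A minimal sketch of this step, in Python/statsmodels rather than the R packages used in the study; the outcome and predictor names are hypothetical stand-ins for the study variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

baseline = pd.read_csv("baseline_survey.csv")   # hypothetical file

# Univariable model: one predictor at a time
uni = smf.logit("received_feedback ~ C(role)", data=baseline).fit()

# Multivariable model: all predictors of the research model entered simultaneously
multi = smf.logit(
    "received_feedback ~ C(role) + years_experience + fes_total + C(ethnicity)",
    data=baseline,
).fit()

# Adjusted odds ratios with 95% confidence intervals
aor = np.exp(multi.params).rename("aOR")
ci = np.exp(multi.conf_int())
ci.columns = ["2.5%", "97.5%"]
print(pd.concat([aor, ci], axis=1))
```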

To identify predictors of perceived feedback efficacy, data generated via feedback-received diary entries were analysed using multilevel logistic regression with random intercepts to account for multiple recorded feedback instances per participant. The variables of interest were chosen based on Clinical Performance Feedback Intervention Theory [ 18 ] and qualitative exploratory studies of prehospital feedback [ 3 , 7 ], for example feedback type, feedback-seeking behaviour and formal/informal. Continuous variables were grand-mean centred to improve the interpretation of the intercept values [ 36 ].
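The random-intercept structure can be sketched as follows. The study fitted these models with lme4 in R; as one possible Python analogue, this sketch uses statsmodels' approximate Bayesian mixed GLM, and all column names (participant_id, positive_impact, solicited, formal, feedback_type) are hypothetical.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

diary = pd.read_csv("diary_entries.csv")   # hypothetical file, one row per feedback event

# Grand-mean centre the continuous predictor so the intercept refers to an average respondent
diary["fes_c"] = diary["fes_total"] - diary["fes_total"].mean()

# Random intercept per participant accounts for repeated feedback events per person
model = BinomialBayesMixedGLM.from_formula(
    "positive_impact ~ solicited + formal + C(feedback_type) + fes_c",
    {"participant": "0 + C(participant_id)"},
    data=diary,
)
result = model.fit_vb()    # approximate (variational Bayes) fit
print(result.summary())
```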

Akaike Information Criterion (AIC) [ 37 ] was used to compare models with the same outcome based on goodness-of-fit, whereby smaller AIC values indicate better fit. We did not adjust alpha for multiple comparisons due to deliberately favouring a higher Type I error rate relative to the potential for Type II error, as this was an exploratory study [ 38 ]. Analyses were conducted using complete cases, followed by sensitivity analyses dealing with missing data using the ‘mice’ R package [ 39 ].
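For illustration only, an AIC comparison between a 'basic' and an 'extended' model might look like the sketch below, here using ordinary logistic regression on complete cases and ignoring the multilevel structure for brevity; the predictor sets are hypothetical approximations of the research models.

```python
import pandas as pd
import statsmodels.formula.api as smf

diary = pd.read_csv("diary_entries.csv")   # hypothetical file

cols = ["positive_impact", "role", "years_service", "feedback_type",
        "solicited", "formal", "fes_total"]
complete = diary.dropna(subset=cols)       # complete-case analysis

basic = smf.logit(
    "positive_impact ~ C(role) + years_service + C(feedback_type) + solicited",
    data=complete,
).fit()
extended = smf.logit(
    "positive_impact ~ C(role) + years_service + C(feedback_type) + solicited"
    " + formal + fes_total",
    data=complete,
).fit()

# Smaller AIC indicates better fit among models for the same outcome
print(f"basic AIC: {basic.aic:.1f}, extended AIC: {extended.aic:.1f}")
```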

To categorise perceived outcomes of receiving feedback, hierarchical cluster analysis was performed on the baseline data (using ‘ClustOfVar’ [ 40 ]). Cluster analysis is an exploratory analysis that identifies structures within the data and visualises them in a dendrogram (tree diagram) with outcomes that co-occur most frequently placed on branches closer together [ 41 ]. Clusters were labelled by the research team using thematic classification informed by previous research [ 3 , 6 ].
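The paper used the R package ClustOfVar; as a rough stand-in, the sketch below clusters hypothetical binary outcome indicators with a simple correlation-based distance in scipy and draws the dendrogram.

```python
import pandas as pd
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster
from scipy.spatial.distance import squareform

# Hypothetical binary indicators: 1 = positive impact reported for that outcome
outcomes = pd.read_csv("baseline_outcomes.csv")[
    ["clinical_practice", "knowledge", "closure", "confidence",
     "job_satisfaction", "patient_care", "patient_safety"]
]

# Distance between outcome variables: 1 - |correlation|, so co-occurring outcomes sit close together
dist = 1 - outcomes.corr().abs()
link = linkage(squareform(dist.values, checks=False), method="average")

dendrogram(link, labels=dist.columns.tolist())
plt.tight_layout()
plt.show()

# Cut the tree into three clusters, mirroring the three labelled clusters reported in the paper
print(fcluster(link, t=3, criterion="maxclust"))
```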

Characteristics of study participants

Two hundred and ninety-nine participants completed the baseline survey representing 13 of the 14 UK ambulance trusts (median 19, range 4–88 participants per trust). Of these, 105 completed 538 feedback-received diary entries (range 1–16, median 4).

Table  1 summarises participants’ baseline characteristics. Ethnicity was collapsed into a binary variable (white n  = 290, minoritised ethnic group n  = 8) to avoid identifying participants. Inferential statistics did not indicate that participants’ characteristics significantly differed between the baseline survey and diary entry stages. Comparison with national data for UK ambulance services [ 42 ] using chi-square tests at 0.05 significance level indicated that our study sample was representative in terms of ethnicity ( p  = 0.771), sex ( p  = 0.124) and age ( p  = 0.886).
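The representativeness check can be illustrated with a chi-square goodness-of-fit test against national proportions, as sketched below; the counts and proportions are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import chisquare

observed = np.array([120, 179])           # hypothetical sample counts (female, male)
national_props = np.array([0.45, 0.55])   # hypothetical national workforce proportions
expected = national_props * observed.sum()

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p:.3f}")  # p > 0.05: no evidence the sample differs from national data
```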

The FES was found to have excellent internal consistency (alpha = 0.85 [95% CI 0.81 to 0.88]).

Of 299 baseline surveys, 78 (26.1%) were incomplete. Missing values varied from 0.3 to 25.1%.

Feedback prevalence and types

Table  2 describes the characteristics of feedback prevalence from the baseline data and diary entries.

Of the 299 participants completing the baseline survey, 215 (71.9%) indicated that they had received feedback in the last 30 days, with patient outcome feedback being the most frequently received ( n  = 149, 42.8%). Feedback was predominantly provided in verbal format ( n  = 157, 73.0%) and was informal ( n  = 189, 80.4%).

Predicted likelihood of receiving feedback

The likelihood of receiving feedback in the past 30 days was higher for those with a supportive feedback environment (aOR 1.07 [1.04, 1.10]), meaning that each one-point increase in FES increased the odds of receiving feedback by 7% (see Fig.  1 ). Participants in paramedic roles had three times the estimated odds of receiving feedback than EMTs (aOR 3.04 [1.14, 8.00]). Those of white ethnicity had five times the estimated odds of receiving feedback compared with minoritised ethnic group participants (aOR 5.68 [1.01, 29.73]); although, the wide confidence interval indicates a high level of uncertainty in this estimate. The sensitivity analysis (Additional file 3 ) indicated that when missing data was imputed, ethnicity did not predict the likelihood of receiving feedback (aOR 3.34 [0.71, 15.71]).

Figure 1. Forest plot of factors associated with receiving feedback in the past 30 days

Perceived outcomes of feedback

Feedback outcomes were categorised into three clusters following a visual inspection of the dendrogram from the hierarchical cluster analysis and the stability of the partitions (Additional file 4 ). Cluster 1 (‘professional development’) encompassed clinical practice and knowledge, Cluster 2 (‘personal wellbeing’) encompassed closure, confidence and job satisfaction, and Cluster 3 (‘service outcomes’) encompassed patient care and patient safety.

Figure  2 describes the count of perceived positive, negative, mixed and no impact within each feedback outcome cluster and contextual examples from qualitative findings. Overall, feedback was perceived to have a positive impact. The 33 feedback events resulting in negative affective responses were reported by 25 participants, who had lower FES scores and received punitive feedback that was predominantly negative, unsolicited and provided by EMS professionals.

Figure 2. Perceived impact within each outcome cluster

Predicted likelihood of feedback efficacy

Additional file 5 summarises the results of the univariable and multivariable multilevel analyses identifying predictors of feedback efficacy. Sensitivity analyses (Additional file 6 ) indicated that missing data had some effect in the univariable analyses but little effect in the multivariable multilevel analyses. The intraclass correlation coefficients (ICC: professional development = 0.25, personal wellbeing = 0.19, service outcomes = 0.24) indicated that a moderate amount of the variability in feedback having a positive impact was explained at the participant level, rather than at the level of individual feedback events.

Comparing the AICs for the basic and extended research model suggested that the extended research model was the best fit for all three outcome clusters. The extended research model indicated that feedback-seeking behaviour and FES were statistically significant predictors of feedback efficacy. Solicited feedback was more likely to improve professional development (aOR 3.35 [1.68, 6.69]) and personal wellbeing (aOR 2.58 [1.19, 5.56]) than unsolicited feedback. A one-point increase in FES led to a predicted 4% increase in the odds of feedback positively affecting personal wellbeing (aOR 1.04 [1.01, 1.07]) and a 3% increase for service outcomes (aOR 1.03 [1.00, 1.06]).

In total, 215 (71.9%) participants indicated that they had received feedback in the last 30 days, with patient outcome feedback the most frequently received ( n  = 149, 42.8%). Significant predictors for receiving feedback were a paramedic role and a workplace with a positive feedback-seeking culture. Participants reported that feedback affected personal wellbeing (closure, confidence, job satisfaction), professional development (clinical practice, knowledge) and service outcomes (patient care, patient safety). Solicited feedback was more likely to positively affect personal wellbeing and professional development than unsolicited feedback.

Compared to US studies, our participants reported a slightly higher prevalence of receiving feedback in the past 30 days: 71.9% compared to 50.0% [ 16 ] and 69.4% [ 15 ]. This could be because our study provided clearer specification of feedback through definitions provided to participants.

Consistent with other studies, feedback was mostly received in verbal format (73.0%) and provided by a mixture of EMS professionals (39.3%), non-ambulance healthcare professionals (33.9%) and patients or relatives (25.3%) [ 15 , 16 ]. Patient outcome feedback was the type most frequently received by our participants (42.8%), which differed from the largest US study on this topic in which receipt of clinical performance feedback dominated [ 15 ].

The limited reporting of debriefing in our study was surprising given that recent research identified debriefing as a prehospital feedback type. Post-event debriefing is designed to help staff process and learn from unusual or critical events [ 43 ]. Although some ambulance services have implemented debriefing programs to support staff [ 44 ], these sessions – which focus on understanding and making sense of events [ 45 ] – were less commonly reported in our study. This discrepancy may be explained by the rarity of critical incidents requiring post-event debriefing and the perception that debriefing is distinct from routine feedback on clinical performance or patient outcomes.

In contrast to previous studies of prehospital feedback [ 14 , 15 , 16 ], years of experience were not a significant predictor of receiving feedback in our study. However, we did identify several novel predictors of receiving feedback, such as paramedic role and a workplace with a supportive feedback culture as indicated by high FES scores. Paramedics may receive more feedback compared with EMTs because they take the lead on more acute cases and are therefore in a better position to actively seek feedback, as indicated by 38.6% ( n  = 180) of feedback for paramedics being solicited compared with only 31.9% ( n  = 23) for EMTs. It may also be that paramedics have become used to receiving enhanced feedback during undergraduate training or the newly qualified paramedic period and are therefore continuing to seek enhanced feedback provision [ 3 ]. The broader feedback literature offers theoretical support regarding feedback exchanges being affected by social categories such as race, gender, age and sexual orientation, in that staff with minority characteristics are less likely to actively seek feedback [ 46 ]. Further understanding how personal characteristics influence EMS feedback interactions is vital to promote equity and inclusion within feedback theory and practice.

Our analysis indicates that solicited feedback was more likely to improve professional development and personal wellbeing than unsolicited feedback. This may be because solicited feedback is timelier, more relevant and originates from a more credible source, as the recipient has some control over whom they approach, compared with unsolicited feedback. Overall, this probably reflects the limitations of existing prehospital feedback provision with regard to timeliness, relevance and credibility, rather than solicited feedback being an ultimately desirable goal [ 7 ].

The positive effects of prehospital feedback on quality of care and professional development were synthesised in a recent systematic review [ 6 ], but EMS professionals in our study also perceived that feedback positively affects personal outcomes such as closure (68.8%), confidence (83.1%) and job satisfaction (81.8%). This confirms suggestions from qualitative and survey studies that feedback for EMS professionals can support staff wellbeing and job satisfaction [ 3 , 4 , 7 , 16 ].

Our study also highlights the importance of feedback delivery, demonstrating that the perceived negative impacts of feedback are influenced not only by its content (e.g. a negative patient outcome), but also by how it is delivered (“ made me feel uncomfortable ”) and the credibility of the feedback source (“ not genuine ”). In the broader audit and feedback literature, credibility of the feedback source is known to influence feedback effectiveness [ 5 , 47 ]. Brehaut et al. [ 47 ] emphasize that credible feedback is less likely to provoke defensive reactions and more likely to be effective. Additionally, a strong relationship between the feedback provider and recipient encourages feedback-seeking behaviour [ 48 ]. Thus, the manner of delivery and the provider’s credibility are crucial for minimising negative emotional responses and improving feedback outcomes.

Implications for research and practice

Further research should include developing theory-informed measures to evaluate how prehospital feedback initiatives impact professional practice, personal wellbeing and service outcomes. Observational studies within EMS should be conducted to deepen our understanding of solicited and unsolicited feedback, the delivery of negative feedback and the influence of personal characteristics on EMS feedback interactions and engagement. A particular area in need of further research is the experience of minoritised ethnic EMS professionals. Further research should also focus on what feedback EMS professionals want to receive.

Change in clinical practice should focus on designing and robustly evaluating feedback provision for EMS professionals. All EMS professionals should be enabled to make better use of the feedback they have access to. EMTs should be supported to actively seek feedback to address the current feedback inequity, which places them at a disadvantage when it comes to development of professional competency and performance. Care should be taken in feeding back service level outcomes to frontline EMS professionals to ensure that the feedback is relevant and actionable at their level.

Feedback interventions tailored to support personal wellbeing are more likely to be perceived by EMS professionals as having positive impacts than those targeting professional development or service outcomes. The benefits of feedback for staff wellbeing should be formally recognised by ambulance services given the potential to mitigate workforce challenges, such as burnout, retention and recruitment. Feedback targeting personal wellbeing may also do harm, and organisations should adequately support EMS professionals when they receive feedback.

Strengths and limitations

This was the first study to assess feedback prevalence within the UK EMS population and to explore the associated contextual factors and outcomes. This study was limited by the high drop-out rate ( n  = 299 participants at baseline, n  = 105 logging diary entries), though this is typical of diary studies generally [ 49 ]. A further limitation is that while participants were instructed to log diary entries whenever they received feedback, delays in entry completion likely led to omissions and contributed to lower participation rates. To combat high dropout in future diary studies, researchers could offer greater incentives or further reduce survey length. However, using diary methods was a novel way to assess feedback prevalence that reduced recall bias and provided reliable within-person data. Testing for differences between the prospective diary entries and retrospective baseline data to quantify recall bias indicated significantly shorter lag times ( p  < 0.001) and a higher proportion of unsolicited feedback ( p  = 0.018) for the prospectively collected data, suggesting that retrospective data collection may not be reliable for feedback in EMS.

Despite data collection taking place during the early post-pandemic period, when the backlog of health needs was emerging, the large number of NHS staff who participated and of feedback events reported indicates an appetite for feedback research among EMS professionals. However, this study was unable to recruit to target. Challenges related to the demanding schedules and limited availability for research participation of the target NHS staff group, combined with reliance on voluntary participation, are likely to have contributed to the relatively low response rate. Future research should explore alternative recruitment strategies to enhance participation rates within this professional context.

Comparison with national data for UK ambulance services [ 42 ] indicated that our study sample was representative of UK EMS but it remains unclear to what extent these findings might be replicated in the health systems of other countries. We acknowledge that collapsing our ethnicity variable into binary categories limits our conclusions regarding specific minoritised ethnic groups. The divergence between the complete case analysis and the multiple imputation sensitivity analysis regarding whether ethnicity predicted the likelihood of receiving feedback suggests this predictor may not be very robust. However, as feedback is mostly positive, this is a potential inequality and needs further investigation. Future studies should specifically target minority group participation, particularly as the literature suggests that social identity and race influence feedback-seeking behaviour [ 46 ].

Another limitation is the absence of triangulation of sources. Feedback is a two-way process [ 15 , 50 ], and relying solely on self-reported data from EMS professionals may not fully capture its dynamics. Including perspectives from feedback providers could have provided a more comprehensive understanding of the feedback process. Future research should incorporate multiple sources to enhance the depth and accuracy of findings.

Conclusions

In conclusion, our study provides valuable insights into the prevalence, predictors and outcomes of feedback provision within the UK EMS context. Our findings underscore the importance of feedback in enhancing not only clinical practice and service outcomes but also personal wellbeing and job satisfaction among EMS professionals. However, the delivery of feedback emerged as a critical factor influencing its effectiveness, highlighting the need for attention to credibility and sensitivity in feedback delivery. Addressing feedback inequities, particularly among non-registered EMS professionals and minoritised groups, is crucial for promoting workforce development and ensuring equitable access to development opportunities. Overall, this study suggests that EMS workplaces need to develop a culture that encourages feedback-seeking by ensuring high-quality positive and negative feedback is readily available and provided by a credible source to strengthen the impact of feedback for EMS professionals on clinical decision-making and staff wellbeing.

Data availability

The datasets generated and analysed during the current study are not publicly available, as sharing the raw data would violate the agreement to which participants consented; however, the datasets are available from the corresponding author on reasonable request.

Abbreviations

AIC: Akaike Information Criterion

ED: Emergency Department

EMS: Emergency Medical Services

EMT: Emergency Medical Technician

FES: Feedback Environment Scale

NHS: National Health Service

References

1. NHS England. NHS staff survey 2022 – National results briefing. NHS; 2023.

2. Weyman A, Glendinning R, O’Hara R, Coster J, Roy D, Nolan P. Should I stay or should I go? NHS staff retention in the post COVID-19 world: challenges and prospects – IRR report. University of Bath; 2023.

3. Wilson C, Howell A-M, Janes G, Benn J. The role of feedback in emergency ambulance services: a qualitative interview study. BMC Health Serv Res. 2022;22.

4. Eaton-Williams P, Mold F, Magnusson C. Exploring paramedic perceptions of feedback using a phenomenological approach. Br Paramedic J. 2020;5(1):7–14.

5. Ivers NM, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, O’Brien MA, Johansen M, Grimshaw J, Oxman AD. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012(6).

6. Wilson C, Janes G, Lawton R, Benn J. Types and effects of feedback for emergency ambulance staff: a systematic mixed studies review and meta-analysis. BMJ Qual Saf. 2023;32(10):573–88.

7. Wilson C, Janes G, Lawton R, Benn J. Feedback for emergency ambulance staff: a national review of current practice informed by realist evaluation methodology. Healthcare. 2023;11(16).

8. Fisher JD, Freeman K, Clarke A, Spurgeon P, Smyth M, Perkins GD, Sujan MA, Cooke MW. Patient safety in ambulance services: a scoping review. Health Services and Delivery Research. Southampton (UK): NIHR Journals Library; 2015.

9. Lawn S, Roberts L, Willis E, Couzner L, Mohammadi L, Goble E. The effects of emergency medical service work on the psychological, physical, and social well-being of ambulance personnel: a systematic review of qualitative research. BMC Psychiatry. 2020;20(1):348.

10. Paulin J, Kurola J, Koivisto M, Iirola T. EMS non-conveyance: a safe practice to decrease ED crowding or a threat to patient safety? BMC Emerg Med. 2021;21(1):115.

11. Blodgett JM, Robertson DJ, Pennington E, Ratcliffe D, Rockwood K. Alternatives to direct emergency department conveyance of ambulance patients: a scoping review of the evidence. Scand J Trauma Resusc Emerg Med. 2021;29(1):4.

12. Porter A, Badshah A, Black S, Fitzpatrick D, Harris-Mayes R, Islam S, Jones M, Kingston M, LaFlamme-Williams Y, Mason S, et al. Electronic health records in ambulances: the ERA multiple-methods study. Health Serv Deliv Res. 2020;8(10).

13. Morrison L, Cassidy L, Welsford M, Chan TM. Clinical performance feedback to paramedics: what they receive and what they need. AEM Educ Train. 2017;1(2):87–97.

14. Mock EF, Wrenn KD, Wright SW, Eustis TC, Slovis CM. Feedback to emergency medical services providers: the good, the bad, and the ignored. Prehosp Disaster Med. 1997;12(2):145–8.

15. Cash RE, Crowe RP, Rodriguez SA, Panchal AR. Disparities in feedback provision to emergency medical services professionals. Prehosp Emerg Care. 2017;21(6):773–81.

16. McGuire SS, Luke A, Klassen AB, Myers LA, Mullan AF, Sztajnkrycer MD. It’s time to talk to prehospital providers: feedback disparities among ground-based emergency medical services providers and its impact on job satisfaction. Prehosp Disaster Med. 2021;36(4):486–94.

17. Hysong SJ, Kell HJ, Petersen LA, Campbell BA, Trautner BW. Theory-based and evidence-based design of audit and feedback programmes: examples from two clinical intervention studies. BMJ Qual Saf. 2017;26(4):323.

18. Brown B, Gude WT, Blakeman T, van der Veer SN, Ivers N, Francis JJ, Lorencatto F, Presseau J, Peek N, Daker-White G. Clinical Performance Feedback Intervention Theory (CP-FIT): a new theory for designing, implementing, and evaluating feedback in health care based on a systematic review and meta-synthesis of qualitative research. Implement Sci. 2019;14(1):40.

19. Rife GL. The influence of feedback orientation and feedback environment on clinician processing of feedback from client outcome measures. University of Akron; 2016.

20. London M, Smither JW. Feedback orientation, feedback culture, and the longitudinal performance management process. Hum Resour Manage Rev. 2002;12(1):81–100.

21. Norris-Watts C, Levy PE. The mediating role of affective commitment in the relation of the feedback environment to work outcomes. J Vocat Behav. 2004;65(3):351–65.

22. Rosen CC, Levy PE, Hall RJ. Placing perceptions of politics in the context of the feedback environment, employee attitudes, and job performance. J Appl Psychol. 2006;91(1):211–20.

23. Sparr JL, Sonnentag S. Feedback environment and well-being at work: the mediating role of personal control and feelings of helplessness. Eur J Work Organ Psychol. 2008;17(3):388–412.

24. Whitaker BG, Dahling JJ, Levy P. The development of a feedback environment and role clarity model of job performance. J Manag. 2007;33(4):570–91.

25. Bolger N, Davis A, Rafaeli E. Diary methods: capturing life as it is lived. Annu Rev Psychol. 2003;54:579–616.

26. Creswell JW, Plano Clark VL. Designing and conducting mixed methods research. 3rd ed. Thousand Oaks, CA: SAGE; 2017.

27. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP. Strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. BMJ. 2007;335(7624):806–8.

28. Monsalves MJ, Bangdiwala AS, Thabane A, Bangdiwala SI. LEVEL (logical explanations & visualizations of estimates in linear mixed models): recommendations for reporting multilevel data and analyses. BMC Med Res Methodol. 2020;20(1):3.

29. Steelman LA, Levy PE, Snell AF. The Feedback Environment Scale: construct definition, measurement, and validation. Educ Psychol Meas. 2004;64(1):165–84.

30. Giesbers APM, Schouteten RLJ, Poutsma E, van der Heijden BIJM, van Achterberg T. Towards a better understanding of the relationship between feedback and nurses’ work engagement and burnout: a convergent mixed-methods study on nurses’ attributions about the ‘why’ of feedback. Int J Nurs Stud. 2021;117.

31. R Core Team. R: A Language and Environment for Statistical Computing. https://www.R-project.org/

32. Posit team. RStudio: Integrated Development Environment for R. Boston, MA: Posit Software, PBC; 2023.

33. Sommet N, Morselli D. Keep calm and learn multilevel logistic modeling: a simplified three-step procedure using Stata, R, Mplus, and SPSS. Int Rev Soc Psychol. 2017.

34. Olvera Astivia OL, Gadermann A, Guhn M. The relationship between statistical power and predictor distribution in multilevel logistic regression: a simulation-based approach. BMC Med Res Methodol. 2019;19(1):97.

35. Bates D, Mächler M, Bolker B, Walker S. Fitting linear mixed-effects models using lme4. J Stat Softw. 2015;67(1):1–48.

36. Centering in multilevel regression. http://web.pdx.edu/~newsomj/mlrclass/ho_centering.pdf

37. Bozdogan H. Model selection and Akaike’s information criterion (AIC): the general theory and its analytical extensions. Psychometrika. 1987;52(3):345–70.

38. Gelman A, Hill J, Yajima M. Why we (usually) don’t have to worry about multiple comparisons. J Res Educ Eff. 2012;5(2):189–211.

39. van Buuren S, Groothuis-Oudshoorn K. mice: multivariate imputation by chained equations in R. J Stat Softw. 2011;45(3):1–67.

40. Chavent M, Kuentz-Simonet V, Liquet B, Saracco J. ClustOfVar: an R package for the clustering of variables. J Stat Softw. 2012;50(13):1–16.

41. Tullis T, Albert B. Chapter 9 – Special topics. In: Tullis T, Albert B, editors. Measuring the User Experience. 2nd ed. Boston: Morgan Kaufmann; 2013. p. 209–36.

42. NHS Workforce Statistics – June 2022. https://digital.nhs.uk/data-and-information/publications/statistical/nhs-workforce-statistics/june-2022

43. Phillips EC, Smith SE, Tallentire V, Blair S. Systematic review of clinical debriefing tools: attributes and evidence for use. BMJ Qual Saf. 2023.

44. Sharp M-L, Harrison V, Solomon N, Fear N, King H, Pike G. Assessing the mental health and wellbeing of the emergency responder community in the UK. London: Open University and King’s College London; 2020.

45. Fanning RM, Gaba DM. The role of debriefing in simulation-based learning. Simul Healthc. 2007;2(2):115–25.

46. Flores C, Elicker JD, Cubrich M. The importance of social identity in feedback seeking: a race perspective. In: Steelman LA, Williams JR, editors. Feedback at Work. 1st ed. Cham, Switzerland: Springer; 2019. p. 141–62.

47. Brehaut JC, Colquhoun HL, Eva KW, Carroll K, Sales A, Michie S, Ivers N, Grimshaw JM. Practice feedback interventions: 15 suggestions for optimizing effectiveness. Ann Intern Med. 2016;164(6):435–41.

48. Anseel F, Beatty AS, Shen W, Lievens F, Sackett PR. How are we doing after 30 years? A meta-analytic review of the antecedents and outcomes of feedback-seeking behavior. J Manag. 2015;41(1):318–48.

49. Ohly S, Sonnentag S, Niessen C, Zapf D. Diary studies in organizational research: an introduction and some practical recommendations. J Pers Psychol. 2010;9(2):79–93.

50. Archer JC. State of the science in health professional education: effective feedback. Med Educ. 2010;44(1):101–8.


Acknowledgements

The authors would like to thank the study participants for taking the time to complete the survey and log diary entries, as well as the research departments of participating ambulance trusts for their support with advertisement and recruitment. Thank you to Professor Helen Snooks, University of Swansea, and Professor Graham Law, University of Lincoln, for peer-reviewing the study protocol.

This research was funded by the National Institute for Health Research (NIHR) Yorkshire and Humber Patient Safety Translational Research Centre (NIHR Yorkshire and Humber PSTRC). The views expressed are those of the authors and not necessarily those of the NIHR or the Department of Health and Social Care. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Author information

Gillian Janes

Present address: Faculty of Health, Medicine and Social Care, Anglia Ruskin University, Chelmsford, CM1 1SQ, UK

Authors and Affiliations

School of Psychology, University of Leeds, Leeds, LS2 9JT, UK

Caitlin Wilson, Rebecca Lawton & Jonathan Benn

Yorkshire Ambulance Service Research Institute, Yorkshire Ambulance Service NHS Trust, Wakefield, WF2 0XQ, UK

Caitlin Wilson

Yorkshire Quality and Safety Research Group, Bradford Institute for Health Research, Bradford, BD9 6RJ, UK

Caitlin Wilson, Luke Budworth, Rebecca Lawton & Jonathan Benn

NIHR Yorkshire & Humber Patient Safety Research Collaboration, Bradford Teaching Hospitals NHS Foundation Trust, Bradford, BD9 6RJ, UK

Luke Budworth, Rebecca Lawton & Jonathan Benn

Faculty of Health and Education, Manchester Metropolitan University, Manchester, M15 6BH, UK

Gillian Janes

Contributions

CW conceived the study, developed the study protocol, obtained relevant ethics and governance approvals, collected the data, analysed the data and drafted the manuscript under supervision from GJ, RL and JB. LB provided guidance on the statistical analysis plan, sample size calculation and data analysis. CW drafted the article and all authors contributed substantially to its revision and approved the final version. CW takes responsibility for the paper as a whole.

Corresponding author

Correspondence to Caitlin Wilson .

Ethics declarations

Ethics approval and consent to participate.

The study was carried out in accordance with the UK Policy Framework for Health and Social Care Research (Health Research Authority, 2017) and was approved by the Health Research Authority (IRAS project ID 295645) and the University of Leeds ethics committee (PSYC-406 04/01/2022). Informed consent was obtained in the baseline survey after providing participants with study information.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1: Survey and diary study measures

Supplementary Material 2: In-depth data analysis plan, study hypotheses and theoretical models

Supplementary Material 3: Sensitivity analysis for the predicted likelihood of receiving feedback

Supplementary Material 4: Clustering dendrogram and stability of the partitions

Supplementary Material 5: Results of the univariable and multivariable analyses (including basic and extended research models)

Supplementary Material 6: Sensitivity analyses of the predicted likelihood of feedback efficacy

Rights and permissions.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article.

Wilson, C., Budworth, L., Janes, G. et al. Prevalence, predictors and outcomes of self-reported feedback for EMS professionals: a mixed-methods diary study. BMC Emerg Med 24 , 165 (2024). https://doi.org/10.1186/s12873-024-01082-y

Download citation

Received : 04 March 2024

Accepted : 28 August 2024

Published : 13 September 2024

DOI : https://doi.org/10.1186/s12873-024-01082-y


  • Prehospital care
  • Emergency medical services
  • Professional development
  • Staff wellbeing
  • Diary methods
  • Multilevel modelling


Journal of Survey Statistics and Methodology

The Journal of Survey Statistics and Methodology is published on Oxford Academic in association with the American Association for Public Opinion Research and the American Statistical Association. Recent and advance-access articles include:

  • The Impact of Mail, Web, and Mixed-Mode Data Collection on Participation in Establishment Surveys
  • Longitudinal Nonresponse Prediction with Time Series Machine Learning
  • Linking Survey and LinkedIn Data: Understanding Usage and Consent Patterns
  • Innovating Web Probing: Comparing Written and Oral Answers to Open-Ended Probing Questions in a Smartphone Survey
  • Evaluating Item Response Format and Content Using Partial Credit Trees in Scale Development
  • Measuring Expenditure with a Mobile App: Do Probability-Based and Nonprobability Panels Differ
  • The Use of QR Codes to Encourage Participation in Mail Push-to-Web Surveys: An Evaluation of Experiments from 2015 and 2022
  • Area-Level Model-Based Small Area Estimation of Divergence Indexes in the Spanish Labour Force Survey
  • Optimal Allocation under Anticipated Nonresponse
  • Frequent Survey Requests and Declining Response Rates: Evidence from the 2020 Census and Household Surveys
  • Bayesian Multisource Hierarchical Models with Applications to the Monthly Retail Trade Survey
  • Optimal Predictors of General Small Area Parameters under an Informative Sample Design Using Parametric Sample Distribution Models
  • Optimal Conformal Prediction for Small Areas
  • Estimation of a Population Total under Nonresponse Using Follow-Up
  • Reconsidering Sampling and Costs for Face-to-Face Surveys in the 21st Century
  • The Effects of Placement and Order on Consent to Data Linkage in a Web Survey
  • The Prevalence and Nature of Cognitive Interviewing as a Survey Questionnaire Evaluation Method in the United States
  • Investigating Respondent Attention to Experimental Text Lengths
  • Improving Donor Imputation Using the Prediction Power of Random Forests: A Combination of SwissCheese and MissForest
  • Using Auxiliary Information in Probability Survey Data to Improve Pseudo-Weighting in Nonprobability Samples: A Copula Model Approach
  • Total Bias in Income Surveys When Nonresponse and Measurement Errors Are Correlated
  • Survey Consent to Administrative Data Linkage: Five Experiments on Wording and Format

  • Open access
  • Published: 13 September 2024

Can social media encourage diabetes self-screenings? A randomized controlled trial with Indonesian Facebook users

  • Manuela Fritz 1 , 2 ,
  • Michael Grimm 1 , 3 , 4 ,
  • Ingmar Weber   ORCID: orcid.org/0000-0003-4169-2579 5 ,
  • Elad Yom-Tov   ORCID: orcid.org/0000-0002-2380-4584 6 &
  • Benedictus Praditya 7  

npj Digital Medicine volume  7 , Article number:  245 ( 2024 ) Cite this article


  • Health care economics
  • Population screening
  • Risk factors

Nudging individuals without obvious symptoms of non-communicable diseases (NCDs) to undergo a health screening remains a challenge, especially in middle-income countries, where NCD awareness is low but the incidence is high. We assess whether an awareness campaign implemented on Facebook can encourage individuals in Indonesia to undergo an online diabetes self-screening. We use Facebook’s advertisement function to randomly distribute graphical ads related to the risk and consequences of diabetes. Depending on their risk score, participants receive a recommendation to undergo a professional screening. We were able to reach almost 300,000 individuals in only three weeks. More than 1400 individuals completed the screening, inducing costs of about US $ 0.75 per person. The two ads labeled “diabetes consequences” and “shock” outperform all other ads. A follow-up survey shows that many high-risk respondents have scheduled a professional screening. A cost-effectiveness analysis suggests that our campaign can diagnose an additional person with diabetes for about US $ 9.


Introduction

Non-communicable diseases (NCDs), such as cardiovascular diseases, diabetes, and cancer, have overtaken infectious diseases as the leading cause of death worldwide 1 . Screening for metabolic NCD risk factors, such as high blood sugar and blood pressure, provides an effective tool to prevent more severe long-term health consequences. Behavioral risk factors, such as smoking, drinking, unhealthy diets, and a lack of physical activity, can also be addressed once an individual is aware of their personal risk. Yet, nudging individuals to undergo such a screening in the absence of apparent symptoms remains a challenge. This holds especially true in low- and middle-income countries (LMICs), where health literacy and the awareness of and screening for NCDs remain limited 2 , 3 , 4 . At the same time, NCDs are increasing at an unprecedented rate in many LMICs, requiring innovative solutions to increase NCD screening 5 , 6 , 7 .

To increase NCD awareness and screening in LMICs, the World Health Organization (WHO) promotes mass media awareness campaigns as a cost-effective instrument 8 , 9 . Yet, their focus is largely on traditional media such as TV, radio and print, whereas public health campaigns via social media advertising remain unmentioned. Social media public health campaigns and health advertisements have been shown to be promising to address a variety of health aspects and health behaviors. For example, social media public health campaigns have been used to address vaccination rates 10 , 11 , 12 , 13 , 14 , Covid-19 infections 15 , drinking during pregnancy 16 , smoking cessation 17 , sexual behaviors 18 , food choices and physical activity 19 , 20 .

Our study adds to this literature, but goes beyond these studies in multiple aspects. First, the major share of these campaigns is implemented and evaluated in high-income countries and addresses health topics of which the general public is broadly aware. The question of whether such social media health campaigns work similarly well in LMIC contexts, especially if they address a disease for which there is little knowledge and awareness 4 , 21 , 22 , remains unanswered, and we address this research gap. Thereby, we also directly speak to the literature that evaluates which other means and nudges (e.g., messages through community leaders or reminders) are effective in LMICs in encouraging better health-related outcomes and behavior 23 , 24 .

Second, most campaigns are limited to the pure provision of information and do not observe and engage viewers in concrete measurable actions other than those happening online (e.g., clicks or likes). Instead, users in our campaign were redirected to our campaign website, on which they could engage in an actual screening activity. Moreover, through a follow-up survey with part of the participants, we elicited behavior that happened (offline) after the campaign exposure. Notable exceptions to mention here are a study on Covid-19 infections 15 , which also expands the research design to offline measurements of user mobility and actual infection rates, and a study on HPV vaccination 14 which measures actual vaccination rates.

Lastly, only a limited number of studies address cost-effectiveness, despite the major advantage of online campaigns being cheap in comparison to other mass media campaigns. More specifically, while some studies evaluate the cost per person reached or the cost per person recruited with such campaigns 25 , 26 , they do not go as far as evaluating the cost per case actually diagnosed or prevented. Hence, we conduct a cost-effectiveness analysis of our campaign to provide insights into the cost-saving potential of social media public health campaigns (beyond the cost per person reached), which is especially relevant in contexts of limited public health budgets, as is the case in Indonesia and in many other LMICs 27 .

We design, implement, and evaluate a diabetes health campaign and assess whether health advertisements (“ads”) distributed via Facebook can serve as a promising instrument to foster the individual decision to undergo a diabetes risk screening in Indonesia. Indonesia is a relevant setting for our campaign since diabetes is currently the third leading cause of death 28 . Moreover, the country ranks fifth in the list of absolute numbers of diabetes cases and third among the countries with the highest number of undiagnosed cases worldwide. In 2021, more than 19 million individuals were estimated to be living with the disease in Indonesia, with more than 70% of the cases remaining undiagnosed 29 . At the same time, Facebook usage is high, making Indonesia a suitable setting in which to study whether social media campaigns can encourage people to engage in preventive health behavior such as diabetes screening. Given this setting, our results are relevant for many other middle-income countries with similarly high rates of diabetes and large numbers of Facebook users, such as other countries in Southeast Asia as well as, for example, India, Brazil, Mexico, or Pakistan.

Facebook is becoming an increasingly relevant tool for scientific research, especially in terms of implementing randomized controlled trials (RCTs) with a large outreach 12 , 13 , 15 , 30 . The platform’s ability to specify concrete population targeting criteria, combined with Facebook’s A/B split test function, allows us to target our campaign to Facebook users in the cities with the highest diabetes rates in Indonesia (Jakarta and Yogyakarta) and to provide causal evidence on the effectiveness of different ad designs. Specifically, we use an RCT on Facebook and distribute ads that differ in their framing, i.e., in their message and graphical design, but equally invite viewers to visit our campaign website and to complete a diabetes self-screening. We are especially interested in whether loss-framed, i.e., shocking, messages work better than more neutral ads. Theoretical work by Rothman et al. 31 , 32 suggests that loss-framed or shocking messages should be more effective in inducing health behaviors that might be perceived as risky (i.e., have an uncertain outcome), such as disease detection activities. Following this argument, we hypothesize that a diabetes awareness campaign that encourages diabetes screening might be most effective if a shocking or loss-framed perspective is taken and investigate this proposition experimentally. In doing so, we also add to the empirical literature that explores what kind of information, framings or pictorial content drive health-related decisions 33 , 34 , 35 , 36 , 37 , 38 , 39 , in particular health screening activities 40 , 41 , 42 . Specifically, we provide evidence about which ads can effectively nudge individuals to learn about their risk of having or developing diabetes in a country where general disease awareness is low.

We then assess whether the most persuasive ad is good enough to design a cost-effective awareness campaign. Hence, in this second part of our analysis, we are interested in whether a campaign based on the cost and effectiveness parameters of the best-performing ad can be considered a cost-effective public health intervention. To this end, we follow up with a subset of participants who completed the self-screening and investigate their compliance rate with the recommendation to schedule an appointment for a professional screening if they were found to be at high risk.

Campaign outreach and engagement

From March 15 until April 5, 2022, we ran a diabetes health campaign entitled “Ada Gula, Ada Diabetes”. The title is related to the traditional Indonesian saying “Ada gula, ada semut”, which literally means “When there is sugar, there must be ants”. Figuratively, the saying means that for every action there is an equal and opposite reaction. Our adapted campaign name hence figuratively interprets diabetes as the reaction to too much sugar – also in relation to the fact that diabetes is known as “Sakit Gula” (“sugar disease” or “sugar sickness”) in Indonesia. We ran the campaign in Jakarta and Yogyakarta and used five different ads, two of which took on a loss-framed and rather disquieting perspective, with the remaining three referring to the family, religion, and the local diabetes prevalence rate (see Methods for a detailed description of the campaign and ads). After clicking on one of the ads, users were re-directed to our campaign website, where they were offered the opportunity to complete a diabetes risk screening questionnaire similar to the diabetes risk test of the American Diabetes Association and the diabetes FINDRISC (Finnish Diabetes Risk Score) screening test but adapted to the Indonesian population (see Methods section for details and Supplementary Tables 1 and 2 for the complete questionnaire). Based on the individual answers, a risk score between 0 and 16 points was calculated and participants received an assessment of their personal risk. Additionally, the assessment contained recommendations on how to keep the risk low, how the diabetes risk can be reduced and to visit a health center or a physician if the risk score was too high. Six weeks after the end of the campaign we sent a follow-up survey to (voluntarily left) e-mail addresses to elicit information about actual compliance with the recommendations received.

Table 1 presents the Facebook engagement statistics of our campaign by age, gender, and location (statistics by ad are presented in Supplementary Table 3 ). These descriptive statistics show that our Facebook campaign can be deemed effective in distributing diabetes-related ads and reaching the general public: Within only three weeks, we reached in total 286,776 individuals with our campaign, generated 758,977 impressions (distinct views of the ads) and 5274 link clicks. This amounts to a click rate of 1.84% (relative to the number of reached individuals), which is higher than the rates achieved in studies with a similar setup, for example in Tjaden et al. 12 (1.7%), Choi et al. 43 (1.4%) or Orazi 30 (0.2%). Overall, we spent approximately US $ 1060 and the campaign resulted in 2052 started and 1469 completed screening questionnaires, implying a conversion-to-reach rate of 0.51% (1469/286,776) and a conversion-to-click rate of 27.85% (1469/5274). Moreover, this relates to a cost of around US $ 0.75 per person conducting such a self-screening. The age and gender patterns reflect the Indonesian Facebook user rates, with slightly more men than women using the platform and the elderly having the lowest user rates 44 , 45 .
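These ratios follow directly from the reported totals. The short sketch below recomputes them in Python from the figures above, entering the spend as the approximate US $1060 mentioned in the text:

```python
# Back-of-the-envelope engagement metrics from the campaign totals reported above.
reach = 286_776                 # unique individuals reached
impressions = 758_977           # total ad impressions
link_clicks = 5_274
completed_screenings = 1_469
spend_usd = 1_060               # approximate total spend

click_to_reach = link_clicks / reach                        # ~1.84%
conversion_to_reach = completed_screenings / reach          # ~0.51%
conversion_to_click = completed_screenings / link_clicks    # ~27.9%
cost_per_screening = spend_usd / completed_screenings       # consistent with the "around US $0.75" above

print(f"click-to-reach:               {click_to_reach:.2%}")
print(f"conversion-to-reach:          {conversion_to_reach:.2%}")
print(f"conversion-to-click:          {conversion_to_click:.2%}")
print(f"cost per completed screening: US$ {cost_per_screening:.2f}")
```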

Due to changes in Apple’s data policy, Facebook is unable to track users who opted out of tracking under iOS 14 or users who prohibit tracking in any other form and therefore relies on statistical modeling to estimate the total number of conversions 46 . Moreover, Facebook is unable to differentiate by age or gender once the users leave the platform and thus only provides aggregated data on conversions. Hence, for the results in terms of conversions (Column (5)), we rely on the more accurate data that was collected directly on our campaign website from which we could extract – without any loss or modeling – the absolute number of completed (and started) screening questionnaires by age, gender, and location.

Screening participation

Once redirected to our campaign website, participants could fill out the screening questionnaire. We used Facebook’s dynamic URL parameters 47 to generate ad-specific referrer links containing information about the ad id, ad name, and ad placement. These URL parameters could then be read out whenever an individual started to fill out the screening questionnaire. For those individuals using an Apple device who opted out of tracking, the ad-specific URL parameters within the referrer link would not be displayed. However, given that a vast majority of smartphone users in Indonesia rely on an Android system, only 26 (out of 1469) completed screening questionnaires could not be linked to the ad from which users were redirected to our campaign website.
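As an illustration of how such referrer information can be read out on a landing page, the following is a minimal sketch; the parameter names (ad_id, ad_name, placement) and the example URL are assumptions for the sketch rather than the campaign's actual configuration.

```python
from urllib.parse import urlparse, parse_qs

def extract_ad_metadata(referrer_url: str) -> dict:
    """Read ad-identifying URL parameters from a referrer link.

    The parameter names used here are illustrative; in practice they are
    whatever dynamic URL parameters were configured in the ad setup.
    """
    query = parse_qs(urlparse(referrer_url).query)
    return {key: query.get(key, [None])[0] for key in ("ad_id", "ad_name", "placement")}

# Hypothetical referrer link as it might arrive at a campaign landing page:
example = "https://example-campaign.org/?ad_id=123&ad_name=consequences&placement=feed"
print(extract_ad_metadata(example))
```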

Respondents had the possibility to complete the screening questionnaire multiple times on our website, either for themselves or for other relatives and friends. This was to allow for possible spillover effects, for example, if a user, after completion of the screening questionnaire, re-did the screening for another person. This, however, also implies that the same person could fill out the screening questionnaire multiple times with different information, for example, to check for related changes in the obtained diabetes risk score. The individual link id together with the IP address and browser information, however, allowed us to identify repeated survey questionnaires that were completed from the same device. We therefore construct a data sample in which we drop the observations stemming from repeated questionnaires, i.e., for each link id × IP address combination we keep only the first completed observation in our sample. We use this first observation based on the assumption that a person filling out the questionnaire multiple times would do so first for him- or herself and only afterward for another person. Similarly, we assume that if it was filled out multiple times simply out of curiosity, the respondent would enter the true data the first time and hypothetical data only afterward. This procedure led to a reduction from 1533 completed questionnaires (with duplicates) to an individual sample containing the 1469 completed screening questionnaires presented in the summary statistics.
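A minimal sketch of this deduplication rule, assuming a tabular export with illustrative column names (the study additionally used browser information to identify devices, which is omitted here):

```python
import pandas as pd

# Illustrative data: two questionnaires from the same link id and IP address
# (a repeated submission) plus two distinct respondents.
df = pd.DataFrame({
    "link_id":      ["a1", "a1", "b2", "b2"],
    "ip_address":   ["10.0.0.1", "10.0.0.1", "10.0.0.2", "10.0.0.3"],
    "submitted_at": pd.to_datetime(["2022-03-16 09:00", "2022-03-16 09:20",
                                    "2022-03-17 12:00", "2022-03-18 08:00"]),
    "risk_score":   [7, 3, 5, 9],
})

# Keep only the first completed questionnaire per link id x IP address combination.
deduplicated = (df.sort_values("submitted_at")
                  .drop_duplicates(subset=["link_id", "ip_address"], keep="first"))
print(deduplicated)
```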

Table 2 presents the summary statistics of the completed screening questionnaires for the main sample. Summary statistics, including the information for all started questionnaires and for the sample of completed questionnaires including any duplicates are presented in Supplementary Tables 4 and 5 .

The greatest proportion of users completing the risk screening questionnaire on our campaign website was in the 45–54 age group, the average BMI was about 26, and users had, on average, a high diabetes risk with a risk score of 6.4. Sixty-one percent of them were found to be at high risk of diabetes, indicating that we were indeed able to reach persons who could benefit from such a self-screening. Men and women are almost equally represented. Half of the respondents report ever having been told that they have high blood sugar levels and one-third report ever having been diagnosed with high blood pressure. In terms of smoking, 34% of participants report being ever-smokers (i.e., either currently smoking or having smoked previously). This average smoking rate, however, obscures a strong gender heterogeneity, with 8% of women and 57% of men in our sample being ever-smokers; a pattern well in line with the tobacco consumption observed in the Indonesian Basic Health Research (RISKESDAS 48 , with 3.2% female and 65% male ever-smokers, respectively, for the total Indonesian population above the age of 10). Sixty percent of the respondents report doing at least 30 minutes of physical activity per day, while only 45% report consuming fruit or vegetables on a daily basis. Thirty percent of the respondents report consuming sugary beverages every day.

The summary statistics of the started screening questionnaires (Supplementary Table 4 ) reveal that a large share of survey starters dropped out after the first question (9%) and another large share before the question about participants’ weight and height (10%). Overall, 75% of started screening questionnaires were completed. Of all completers, 205 (14%) left their e-mail address to be contacted for further study activities. We sent a follow-up survey to this sub-sample six weeks after the end of the campaign. The full workflow and the number of observations at each step are presented in Fig. 1 .

Figure 1: Study workflow and the number of observations at each step. The number 1533 in parentheses at Step 4 refers to the number of completed questionnaires when duplicated questionnaires are also counted.

Results from the follow-up survey

Of the 205 participants who left their e-mail addresses and agreed to be re-contacted for further research activities, 53 participated in the follow-up survey. The primary aim of the follow-up survey was to elicit whether individuals with a high risk of diabetes complied with the recommendation they received to schedule an appointment in a primary healthcare facility or with their physician to undergo a blood test for diabetes. Also, if they reported not planning to schedule an appointment, we were interested in the reasons. Of the 53 individuals participating in this survey, 32 (60%) had received a high-risk score in the screening, 15 (28%) a medium-risk score, and 6 (11%) a low-risk score. We must, of course, assume that the group of respondents is not necessarily representative of the overall sample of 1469 individuals who participated in the screening, as survey participation was voluntary. However, when comparing their observable characteristics with those of the overall sample, we did not find any statistically significant differences, as displayed in Supplementary Table 6 . The power of these tests is of course limited, given the small sample size, but even the absolute size of the differences is in most cases surprisingly small. Moreover, we cannot detect any selection in terms of the ad the individual was exposed to (Supplementary Table 7 ), i.e., we do not find any significant effects of the different ads or the final risk score on the probability of participating in the follow-up survey.

We asked those individuals who either were at high risk according to their screening results or who recalled having received a high-risk result about their plans for a professional appointment (n = 35). Of those individuals, 12 (34%) reported that they had already been aware that they had diabetes and hence no further professional test was needed, 13 (37%) reported that they did not plan to schedule a professional appointment, and 10 (28%) reported that they had already scheduled an appointment after participating in our screening or that they intended to do so in the next month (Supplementary Fig. 1 ). Hence, almost one-third of those deemed to be at high risk, corresponding to 43% of those who were unaware of their disease status, appear to comply with the recommendation to undergo a professional blood test for diabetes. If we extrapolate this share to the full sample, it amounts to about 250 complying individuals at high risk. These numbers suggest that the campaign not only attracted individuals who were already aware that they had diabetes but that it also reached a substantial share of individuals at high risk of diabetes who were not aware of their status.
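As a rough check of this extrapolation, using the shares reported above (1469 completers, 61% of whom received a high-risk score, and 10 of the 35 high-risk follow-up respondents complying):

$$ 1469 \times 0.61 \approx 896 \ \text{high-risk completers}, \qquad 896 \times \tfrac{10}{35} \approx 256 \approx 250. $$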

To account for a potential desirability bias in our survey, i.e., individuals simply reporting complying with the received recommendation because they expected this to be the socially desirable answer, we randomized two different framings of the same question. One highlighted the importance of scheduling a professional appointment given the possible severe health consequences of diabetes, the other implied that the time that had passed since the screening was probably too short to already have scheduled a meeting (the exact framings are shown in Supplementary Material 3 ). Whereas the first framing should increase the psychological cost of admitting to not having made an appointment, the second framing makes it psychologically rather easy to admit to not having made an appointment. If both framings lead to a comparable share of respondents who report having made an appointment, we can interpret this as evidence that a desirability bias is not at work. Indeed, we do not find any significant differences in the response pattern to the questions, which increases our trust in the reported answers (Supplementary Table 8 ).

Individuals reporting not intending to schedule an appointment for a professional blood test were further asked for the main reasons keeping them from doing so (Supplementary Fig. 2 ). More than half of the respondents reported being afraid of the possible costs of such a test. Given the small sample size for this question, the results have to be interpreted carefully. Yet, since preventive health care visits, including tests for chronic diseases, are free of charge for those covered by the JKN national health insurance scheme (which covers around 80% of our sample), a potentially promising strategy to increase screening rates could be to distribute detailed information about the services covered by the scheme.

Ad performance

Beyond assessing the outreach of and engagement with our campaign, we were interested in which ad design and framing would be most effective in creating clicks and conversions (completed screening questionnaires). In particular, we were interested in whether the two loss-framed ads would outperform the more neutrally framed ads (see the Methods section for the different designs). To assess ad performance, we estimate the following logistic regression models:

$$\text{Click}_{i} = \lambda \Big( \beta_{0} + \sum_{j=1}^{4} \beta_{j} \, Ad_{i}^{j} + Z_{i}^{\prime}\gamma \Big) + u_{i}, \qquad \text{Conversion}_{i} = \lambda \Big( \delta_{0} + \sum_{j=1}^{4} \delta_{j} \, Ad_{i}^{j} + Z_{i}^{\prime}\theta \Big) + e_{i},$$

where λ is the logistic function, \(\sum_{j=1}^{4} Ad_{i}^{j}\) is a set of four dummy variables that are equal to one whenever person i saw ad j (the ad “family” serves as the reference group), \(Z_{i}\) is a vector of control variables (age, gender, region), and \(u_{i}\) (\(e_{i}\)) is the error term. Note that the coefficients \(\beta_{j}\) and \(\delta_{j}\) can be interpreted as causal effects since the ads were randomly assigned to Facebook users. Additionally, we investigate the effects separately by gender, since previous empirical evidence suggests that the effects of framing and information differ significantly for men and women 49 , 50 , 51 , 52 .
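For readers who want to reproduce this type of estimation, the following is a minimal sketch of the click model in Python (statsmodels), run on synthetic placeholder data rather than the study's user-level data; the variable names and the data-generating step are illustrative assumptions only. The conversion model is analogous, with the completed-screening indicator as the outcome.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic placeholder data: one row per reached Facebook user.
# The real user-level data are not public; values here are random.
rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "ad": rng.choice(["family", "religion", "geography", "shock", "consequences"], n),
    "age_group": rng.choice(["35-44", "45-54", "55-64", "65+"], n),
    "gender": rng.choice(["male", "female"], n),
    "region": rng.choice(["Jakarta", "Yogyakarta"], n),
    "clicked": rng.binomial(1, 0.02, n),   # placeholder outcome
})

# Logit of clicking on the ad dummies plus controls; "family" is the reference ad.
formula = ("clicked ~ C(ad, Treatment(reference='family'))"
           " + C(age_group) + C(gender) + C(region)")
click_model = smf.logit(formula, data=df).fit(disp=False)
print(click_model.summary())

# Average marginal effects, analogous to those reported in the supplementary tables.
print(click_model.get_margeff().summary())
```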

Figures 2 and 3 together with Supplementary Tables 9 and 10 in Supplementary Material 4 show the results for link clicks and conversions for the total sample and separately for men and women. Figures 2 and 3 show the relative increases in comparison to the “family” ad, which implies a reference click-to-reach-ratio of 1.7% and a reference conversion-to-reach-ratio of 0.4%. Supplementary Tables 9 and 10 display the regression coefficients and marginal effects (with and without controls and by gender) from the logit model, together with the p -values of pairwise Wald tests for the different coefficients.

Figure 2: Effectiveness of the different ads in terms of link clicks for (a) the full sample and (b) by gender. Effects are presented relative to the “family” ad, which serves as the reference category. Black whiskers represent the 95% confidence intervals.

Figure 3: Effectiveness of the different ads in terms of conversions for (a) the full sample and (b) by gender. Effects are presented relative to the “family” ad, which serves as the reference category. Black whiskers represent the 95% confidence intervals.

Graph (a) for the full sample in Fig. 2 shows a clear hierarchy in ad effectiveness for generating link clicks, with the two loss-framed ads clearly outperforming the "family" and "geography" ads. Only the effect of the "religion" ad is not statistically different from that of the "shock" ad. The effect of the "consequences" ad is somewhat larger than that of the "shock" ad, yet this difference is only significant at the 10% level (see also Supplementary Table 9 ).

In terms of the effect size, a user seeing one of the two loss-framed ads “shock” or “consequences” was 15% and 23%, respectively, more likely to click on the ad compared to someone who saw the least performing “family” ad. In absolute terms, this implies an increase to a click-to-reach-ratio of 1.9% and 2.1%. Those seeing the “shock” or “consequences” ads were also 3% and 11% more likely to click on the ads in comparison to the “religion” ad, though the differential effect between the “shock” and “religion” ads is not statistically significant. The magnitudes of these effects are comparable to those found in a study with a similar set-up, also based on Facebook’s A/B split function: Tjaden et al. 12 test several ads to increase Covid-19 vaccination rates in Germany and vary the pictured messenger (doctor, governmental representative, religious leader). They report an increase between 20% and 40% in clicks of the best-performing versus other ads.

Differentiating the ads’ effectiveness by gender (Graph (b)), however, shows that the effectiveness of the “consequences” and “shock” ads in terms of link clicks seems to be driven by women, whereas men reacted to all ads in a rather similar manner. In fact, while the effect is still the largest for the two loss-framed ads in qualitative terms, we cannot reject the hypothesis of equal performance of all five ads for the male audience.

Turning to conversions, Fig. 3 , Graph (a) shows a slightly different picture. While the “consequences” ad is again the best-performing ad in generating conversions (significantly different from all but the “shock” ad), the performance of the “religion” ad, which was the one that came closest to the performance of the loss-framed ads in terms of creating link clicks, is no longer significantly different from the least performing “family” ad. This might be a sign that the “religion” ad did not sufficiently relate to the topic of diabetes and viewers of the ad did not proceed to the screening once they realized that the website did not contain religious content.

In contrast, the "geography" ad is significantly more effective than the "family" and "religion" ads and as effective as the "shock" ad in generating finalized risk screening tests. Differentiating by gender (Graph (b)) reveals, however, that the effectiveness of the "geography" ad is again driven solely by female users. For men, responsiveness to the "consequences" ad was greatest, and this ad performed significantly better than all other ads with the exception of the "shock" ad ( p -value 0.188).

The effect magnitudes are somewhat larger than those for link clicks when comparing the best-performing ad against the others: an individual exposed to the "consequences" ad was 57%, 48%, 19% and 10% more likely to complete the self-screening than someone seeing the "family", "religion", "geography" or "shock" ad, respectively.

Conditional on seeing any of the ads, women were also more likely overall (+25%) to complete a screening questionnaire than their male counterparts. Yet, given that the number of women seeing an ad on Facebook was lower in absolute terms (since there are generally fewer female than male Facebook users in Indonesia 45 ), the sample of completed questionnaires is balanced in its gender distribution. Although the oldest age group (65+) was more likely to click on the ads than users below the age of 45, they were about as likely to complete the questionnaire as the youngest age group, which is driven by a higher attrition rate in the oldest age group. Specifically, when we regress the probability of attrition on participants’ characteristics (conditional on having started the screening questionnaire), we find that elderly respondents above the age of 65 were 34 percentage points more likely to drop out in the course of the questionnaire than the youngest age groups. This effect is larger for older men, though not statistically different from the effect for older women (results shown in Supplementary Table 11 ).

Overall, we can confirm the hypothesis that an ad with a loss-framed perspective, i.e., highlighting the adverse health consequences of diabetes, performs significantly better than ads referring to the family, religion, or local prevalence rates. Only the second loss-framed and "shocking" ad comes close to the performance of the "consequences" ad in our health awareness campaign. Hence, an online diabetes awareness campaign focusing on the health consequences of diabetes can be an effective tool to induce diabetes self-screenings. When we assess whether the diabetes risk level of the screening completers differs in relation to the ad they saw, we find that those who saw one of the loss-framed ads had a risk score that was on average higher by 0.28 ( p -value 0.039) than the score of those who saw one of the other three ads. This supports the hypothesis of Rothman et al. 31 , 32 by showing that those who do indeed have a higher diabetes risk, and might also perceive it as such, were more responsive to the loss-framed ads than those with a lower risk.

Our campaign also shows that the content and framing of the ads are particularly important when targeting women. Women reacted more differently across the ads, whereas men responded to the ads more uniformly, especially for the outcome of link clicks. Yet, even for men, the "consequences" ad performed significantly better than the "family", "geography" and "religion" ads for the conversion outcome, indicating that the loss perspective was successful in engaging men in the actual self-screening activity.

While such gender-heterogeneous responses are in line with previous research highlighting the moderating effect of gender in loss- versus gain-framing experiments (e.g., 49 , 50 , 51 , 52 ), we must refrain from a more extensive analysis of the drivers of this effect due to data limitations. We did not collect any information on underlying characteristics that could explain such differential behavior. Yet, the literature suggests that gender differences in risk perceptions 49 , avoidance orientation 50 or trust 41 can shape these gender-specific responses. Also, we did not explicitly test loss- versus gain-framing but rather loss-focused versus differently focused ads, which limits the comparability of our results with more precise gain- versus loss-framed campaigns. Nevertheless, our results provide important insights into what types of ads can effectively be used to enhance preventive health behavior and how responsiveness differs between men and women.

Comparison of the sample and benchmark populations

A valid concern that might arise at this point is that we were only able to reach a particular population group with our Facebook campaign. While the distribution of the ads was random conditional on being in the pre-specified target group, the actual selection into completing the screening questionnaire is endogenous, and hence the results concerning the effectiveness of our campaign might not generalize to other population groups. To investigate the importance of such selection effects, we compare our sample of participants who completed the screening questionnaire with the universe of people who met our eligibility criteria in Jakarta and Yogyakarta. This comparison is presented in detail in Supplementary Material 5 and Supplementary Table 12 . It suggests that the sample generated by our experiment is slightly skewed toward the 45-55 age group and toward those who seem to be significantly more at risk of having or developing diabetes compared to the total population above the age of 35 in Jakarta and Yogyakarta. We interpret this self-selection as an indication that our campaign was very effective in reaching people at high risk who could potentially benefit from such online screening. Since we also showed above, in the results of the follow-up survey, that only one-third of the individuals who were found to have a high risk and who self-selected into the follow-up survey had already been aware that they have diabetes, we deem this evidence that our campaign was indeed able to reach a large number of individuals who were unaware of their high risk and to effectively engage them in the diabetes self-screening.

Cost-effectiveness

Having identified that ads focusing on the detrimental health consequences of diabetes are a particularly well-suited approach to encouraging diabetes risk screening among those with a comparatively high diabetes risk, we are now interested in the cost-effectiveness of such an online campaign. We analyze the cost-effectiveness of our Facebook health campaign under the assumption that it would be scaled up to a one-year health campaign across the whole island of Java. This implies a target population of about 25 million Facebook users above the age of 35. We perform a simple cost-effectiveness calculation based on the cost and effectiveness parameters derived from our study and enrich it with a repeated decision-tree model. The final cost parameter of interest is the cost per newly diagnosed person.

The assumptions, results, and sensitivity analysis of the cost-effectiveness analysis are presented in Supplementary Material 6 (Supplementary Tables 13 and 14 and Supplementary Fig. 3 ). We show that the hypothetical up-scaling of the campaign to the whole of Java over the period of one year could lead to about 1.7 million users participating in the online screening, of whom about 250,000 would continue with the professional follow-up screening, and finally to the diagnosis of almost 170,000 previously undetected diabetes cases. This corresponds to an increase from 25% to 29% of diagnosed cases relative to all cases, i.e., an increase of 16%. While the share might still seem small, the absolute number is large, especially in light of the low cost and low effort needed to implement an online health campaign. This low cost is further confirmed when we look at the total cost of the proposed intervention (including the professional follow-up screening), which is slightly higher than US $ 1.5 million. Dividing the total cost by the 170,000 newly diagnosed cases, the cost of detecting one more previously undiagnosed person amounts to approximately US $ 9 (with a lower bound of US $ 5.20 in a best-case scenario and an upper bound of US $ 37 in a worst-case scenario).
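The headline cost figure can be traced back from the rounded numbers reported above; the following is a simplified back-of-the-envelope sketch, not the study's full decision-tree model or its sensitivity analysis.

```python
# Rounded figures for the hypothetical one-year, Java-wide scale-up reported above.
online_screeners     = 1_700_000   # users completing the online self-screening
follow_up_screenings =   250_000   # professional follow-up screenings
newly_diagnosed      =   170_000   # previously undetected cases diagnosed
total_cost_usd       = 1_500_000   # campaign plus professional follow-up screenings

implied_follow_up_rate = follow_up_screenings / online_screeners   # ~15% of online screeners
implied_detection_rate = newly_diagnosed / follow_up_screenings    # ~68% of follow-ups
cost_per_new_case      = total_cost_usd / newly_diagnosed          # ~US$ 9 per newly diagnosed case

print(f"implied follow-up rate:        {implied_follow_up_rate:.1%}")
print(f"implied detection rate:        {implied_detection_rate:.1%}")
print(f"cost per newly diagnosed case: US$ {cost_per_new_case:.2f}")
```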

Contrasting these amounts to the cost of long-term diabetes care in Indonesia suggests a large cost-saving potential. Hidayat et al. 53 estimate the direct medical costs for a patient in the Indonesian healthcare system with severe diabetes health consequences at US $ 930 per person per year, whereas a patient without severe diabetes consequences costs the healthcare system only US $ 420. Under the premise that early diagnosis reduces the probability of severe diabetic health consequences, an online diabetes health campaign offers the possibility of reducing healthcare expenditures in the long term. Further, the cost per detected case is lower in comparison to other screening strategies, for example, screening with a similar diabetes risk questionnaire during annual health check-ups in Thailand (~US $ 30 per detected case, counting only direct medical cost) 54 .

NCDs are the leading cause of death worldwide. In LMICs, the health and economic burden due to NCDs is rising rapidly and innovative solutions to increase screening activities and encourage healthy lifestyles could counteract this problem. Public health campaigns can help to increase awareness of NCDs and encourage populations at risk to change unhealthy lifestyles, inform them about important preventive health measures such as screening, ensure adequate treatment in the event of a positive diagnosis, and thereby reduce health care costs and productivity losses in the long run.

We show that using social media platforms, such as Facebook, for such health campaigns opens up new opportunities to increase awareness and screening for diabetes in LMICs. Such campaigns can generate high exposure and engagement rates at very low cost. With our campaign, we were able to reach almost 300,000 individuals in only three weeks and with a budget of less than US $ 1100. More than 1400 individuals completed the offered online diabetes risk screening on our campaign website, implying a cost of less than US $ 0.75 per person screened in that way. We also relied on insights from psychology and assessed whether such a campaign should rely on ads with a focus on a loss-framed or shocking perspective to effectively induce preventive health screenings. Our randomized experiment shows that this is indeed a promising approach and that ads focusing on the adverse health consequences of diabetes are most effective in nudging viewers to click on the ads and to carry out a diabetes self-screening. In particular, we find that an ad highlighting the risk of losing eyesight or developing heart and kidney diseases as a consequence of diabetes outperformed all other ads in the number of link clicks and completed screening questionnaires. Only the second loss-framed ad, which focused on the fact that diabetes can result in death, came near the performance of the "consequences" ad. Yet, this framing effect was more pronounced for the female sample in our study. Men responded more evenly across all ads. These gender differences should be considered by policymakers aiming to design an effective public health campaign.

We also find that such a campaign is especially well-suited for reaching out to the population in the 45–55 age range. This is an encouraging finding, given that the risk of diabetes increases after the age of 45 and a diagnosis of elevated blood sugar at this age offers the opportunity for early treatment to prevent further adverse health consequences.

However, while we can establish that loss-framed or more shocking ads are more effective in terms of creating link clicks and completed self-screenings, it is beyond the scope of our study to assess whether such negatively framed ads could have longer-term negative consequences. A potential adverse effect could arise, for example, if individuals exposed to the loss-framed ads were to engage in information avoidance. In the context of our study, we can show that individuals exposed to the loss-framed ads were more likely to participate in the self-screening and equally likely to participate in the follow-up survey, indicating that they did not engage in information avoidance in the short term. Yet, we cannot rule out that long-term health behavior after having received a high-risk result in the self-screening could be adversely affected by the prospect of negative health consequences. Moreover, while shocking content spreads easily on social media, it could also induce anxiety or trigger mental health consequences. A recent study in the context of Covid-19 55 , for example, shows that loss-framed ads increased anxiety levels. Together with the fact that a diabetes diagnosis can lead to diabetes distress 56 and affected individuals are at increased risk for mental health disorders 57 , our results call for further research on the longer-term consequences of using loss-framed ads in public health campaigns, especially when implemented at scale.

A remaining limitation of our study is that both our measure of compliance with the recommendation to visit a physician and the report of an existing diagnosis are self-reported. Even though we control for social desirability bias, we are limited in our ability to measure whether individuals claiming to have scheduled an appointment indeed follow through with the professional screening, or whether an individual had indeed already been diagnosed with diabetes before. This leaves ample room for future studies in which actual compliance rates are measured. This could be done, for example, by cooperating directly with local health centers that verify whether a person was referred via an online campaign (e.g., via a referral voucher). Moreover, to confirm the self-reported diabetes diagnoses, it would be interesting to set up a study that verifies existing diagnoses through medical records. Yet, privacy concerns and data protection rules pose a substantial hurdle for such a study design.

While we ran our campaign in Indonesia, many other middle-income countries are equally experiencing a rapidly increasing diabetes burden and have high social media usage rates. This suggests that the insights from our campaign and study should be transferable not only to other countries in Southeast Asia but also to countries such as India, Brazil, Mexico, and Pakistan.

Overall, our study suggests that a health awareness campaign implemented on the social media platform Facebook is a useful tool to increase awareness of and (self-)screenings for diabetes, and loss-framed ads work particularly well. Policymakers in Indonesia and comparable countries should consider using such social media health campaigns as an innovative tool to address the increasing diabetes burden.

Campaign and ad design

From March 15 until April 5, 2022, we ran a diabetes health campaign on Facebook, targeting Indonesian Facebook users in Jakarta and Yogyakarta – the two cities with the highest diabetes rates in Indonesia 48 . In Indonesia’s urban areas, which also have higher diabetes prevalence rates than rural areas, internet penetration rates and usage of social media platforms are high. As of January 2022, the internet penetration rate in Indonesia stood at 74%, with 94% of all users accessing the internet via smartphones. Around 190 million Indonesians are active social media users, of whom 130-135 million are active Facebook users, according to the audience size that can be reached with Facebook’s advertising tool 58 , 59 .

We implemented the campaign via Facebook’s advertisement function which permits the distribution of self-designed ads to Facebook users while using specific demographic and geographic targeting criteria. This advertisement tool was originally developed for businesses to boost their customer base and increase sales, but it is also increasingly used by scientific researchers to recruit survey participants 43 , 60 , 61 , 62 , 63 . While using the tool for the recruitment of survey participants is indisputably practical, it also offers an even more sophisticated and scientifically valuable function that allows researchers to implement randomized controlled trials. Facebook’s A/B split test allows for a random distribution of two or more ads to evenly split and statistically comparable audiences to test which ad performs best in terms of a pre-specified campaign target 64 . The ads can thus differ in their design or placement, depending on which variable is being tested. This A/B test design also ensures that the same budget is allocated to each ad and hence avoids Facebook’s algorithm determining the budget allocation, something which could generate unbalanced Facebook user exposure rates across ads.

We designed five different ads, two of which took on a loss-framed and rather disquieting perspective, with the remaining three referring to the family, religion, and the local diabetes prevalence rate. The two loss-framed ads were entitled “diabetes consequences” and “shock”. The non-loss-framed ads were entitled “family”, “religion” and “geography”. These non-loss-framed ads were inspired by different strands of the literature that link religion and health 65 , family and health 66 , and information about local health conditions and health behavior 67 . While this design does not allow us to infer the effects of loss- versus gain-framing (since we do not include a specifically gain-framed ad), it allows us to compare the effect of loss-framed ads with ads that rely on different psychological channels that have been shown to affect health-related behavior. The ads and their displayed message are described in more detail below and presented in Fig. 4 .

Consequences: The consequences ad contained a statement about the possible health consequences of diabetes, including blindness, kidney disease, and heart disease. The graphic showed a wooden mannequin on which the body parts that can be affected by diabetes were marked with a black cross.

Shock: The shocking ad pictured a man in front of a coffin and contained the message that diabetes can have deadly consequences.

Family: The family ad pictured three generations of an Indonesian family and contained the message that every family can be affected by diabetes.

Geography: One geography ad was designed for each of the two regions in our study (Jakarta and Yogyakarta). The graphics showed a landmark of each of the two cities (the National Monument in Jakarta and the Yogyakarta Monument in Yogyakarta, respectively) covered in sweets. The message referred to the local prevalence rate of diabetes in each of the regions.

Religion: The religion ad presented an Indonesian woman in hijab cooking and contained a statement from the Quran that conveyed the message that one should not live a potentially self-harming life.

Figure 4: The five campaign ads and their displayed messages. (a) Diabetes consequences – Diabetes can cause blindness, heart diseases, and kidney failure. Learn about your diabetes risk now! (b) Shock – Diabetes can have deadly consequences. Diabetes can be prevented and controlled. Learn about your diabetes risk now! (c) Family – Diabetes can affect every family. Diabetes can be prevented and controlled. Learn about your diabetes risk now! (d) Geography (Jakarta) – Jakarta is the city with the highest diabetes prevalence rate in Indonesia. Learn about your diabetes risk now! (e) Geography (Yogyakarta) – Yogyakarta is one of the cities with the highest diabetes prevalence rates. Learn about your diabetes risk now! (f) Religion – “and do not throw [yourselves] with your [own] hands into destruction” (Q.S. Al-Baqarah, 2:195). Diabetes can be prevented and controlled. Learn about your diabetes risk now!

In addition to the messages outlined above, each ad carried the statement “Learn about your diabetes risk now” (“Pelajari tentang risiko diabetes Anda sekarang”) to encourage the ad viewers to click on the ad and visit the campaign website on which they could conduct the risk screening test. Technically, we ran two different campaigns, one for each of the targeted regions, and then pooled the data for the analysis. Each ad received an equal budget of US $ 5 per day, summing to a total daily budget of US $ 50 for both cities. In terms of the target population, we restricted the audience demographically to Facebook users above the age of 35 and geographically to users living in either Jakarta or Yogyakarta.

The campaign objective was chosen to optimize “conversions”, with a conversion defined as the completion of the screening questionnaire. Setting “conversions” as the campaign objective (instead of the other two possibilities, “awareness” or “consideration”) allowed us to focus on possible screening questionnaire completers who would thus gain from the campaign, while avoiding showing the ads to seemingly uninterested users. This conversion objective required the generation of a so-called Facebook Pixel code, which had to be embedded in the code of the website to which the ad viewers were redirected. Facebook could then use this Pixel to track user actions taking place on our website and optimize accordingly. This implies that after a learning phase, Facebook’s algorithm aimed to show the ads to individuals more likely to click on the ads and to complete the screening questionnaire, based on the characteristics of earlier completers. The success of the algorithm is confirmed by the positive trend in the number of daily clicks and conversions over time as presented in Fig. 5 . After a first peak in link clicks, most likely driven by immediate reactions from viewers who readily respond to such ads, the learning phase sets in and translates into a positive trend in clicks and conversions. While this internal algorithm can introduce ad-specific selection bias when the conversion objective is used in regular campaigns 68 , the use of the A/B split test ensured that, conditional on being in the target audience, the ad version the user saw was random. This randomization procedure allowed us to compare the different ads based on their effectiveness in generating clicks and conversions, i.e., completed screening questionnaires.

Figure 5: Time trend in (a) daily link clicks and (b) daily conversions (i.e., completed screening questionnaires). Vertical gray dashed lines indicate Sundays.

Campaign website

After clicking on one of the ads in Facebook, individuals were redirected to the landing page of the campaign website. Before being able to browse further on the website, the participants were informed about our privacy policy and that data generated on the website were used for an academic study. For both, they had to indicate their informed consent. Individuals were then offered the opportunity to complete a diabetes risk screening questionnaire on this website similar to the diabetes risk test of the American Diabetes Association and the diabetes FINDRISC (Finnish Diabetes Risk Score) screening test. The questionnaire version we used is an adapted and translated version specifically for the Indonesian population. The original FINDRISC questionnaire was developed to identify individuals at risk of diabetes using a Finnish population sample 69 . Since then, the questionnaire has been evaluated and validated many times and has been adjusted to different populations and country samples 70 , 71 , 72 , 73 . The original diabetes risk test of the American Diabetes Association dates back to 1993 and has likewise been adapted multiple times 74 . The version we used is based on the diabetes risk test of the American Diabetes Association 75 , 76 , the ModAsian FINDRISC for Asia, the FINDRISC Bahasa Indonesia 77 , and the Malay version of the American Diabetes Association diabetes risk test 74 . It consisted of eleven questions which could be answered in approximately 90 seconds. Based on the individual answers, a risk score between 0 and 16 points was calculated and participants received an assessment of their personal risk rated as low risk (0-3 points), medium risk (4-5 points), or high risk (6 or more points). Additionally, the assessment contained recommendations on how to keep the risk low, how the diabetes risk can be reduced and to visit a health center or a physician if the risk score was too high.
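To illustrate how the final assessment maps onto the reported score, the following is a minimal sketch of the risk banding described above; the per-question point values of the adapted questionnaire are given in Supplementary Tables 1 and 2 and are not reproduced here, and the function name is illustrative.

```python
def risk_category(score: int) -> str:
    """Map a screening score (0-16 points) to the risk bands described above."""
    if not 0 <= score <= 16:
        raise ValueError("score must lie between 0 and 16")
    if score <= 3:
        return "low risk"       # 0-3 points
    if score <= 5:
        return "medium risk"    # 4-5 points
    return "high risk"          # 6 or more points

# Example: a score of 6 (close to the sample average of 6.4) falls in the high-risk band.
print(risk_category(6))
```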

The website also included a page with factual information on diabetes in Indonesia, including the distribution of prevalence rates across the country, behavioral risk factors, as well as information about how diabetes can be diagnosed and how it can be treated. Furthermore, we provided detailed information about the institutions involved in the research activities, the aim of the campaign, and the notification that the campaign was purely educational and could not replace a professional health visit or screening. We also asked participants to leave their e-mail addresses so that they could get follow-up information and continue to be involved in the study.

Follow-up survey

Six weeks after the end of the campaign, we sent a follow-up survey to all of these addresses to elicit information about actual compliance with the recommendations received. Since providing the e-mail address was voluntary, and the sub-sample of respondents was hence subject to potential self-selection bias, we describe the sample that completed this follow-up survey and contrast it with the profile of the entire sample (see Results section). In this follow-up survey, we asked the respondents about their plans to comply with the received recommendations. Specifically, we asked whether they planned to schedule a professional medical screening (or had already done so); if yes, when and where they planned to go; and if not, what their reasons were for not doing so. We also asked several questions about diabetes risk factors, symptoms, and health consequences, whether the respondent had health insurance, whether this was the first time they had conducted a diabetes risk test, whether they had already been diagnosed with diabetes, and whether they were currently on medication.

IRB approval and RCT registration

This study received ethical approval from the University of Passau Research Ethics Committee (15.03.2022, IRB Approval Number I-07.5090/2022). Informed consent was obtained from all participants who browsed our website. Informed consent for the experiment on Facebook is covered by Facebook’s data use policy. The identifiable persons shown in our ads are not patients, and no written consent was required since we designed the ads ourselves using pictures from openly accessible, license-free stock images. The study was pre-registered at the AEA RCT Registry (0008781, https://doi.org/10.1257/rct.8781 ). In the final manuscript, we deviated in some features from our initial analysis plan, partly for technical reasons, and marginally adjusted our hypotheses after the pilot study. These changes are explained in detail in an appendix to our pre-analysis plan (downloadable under the same registration number). The study was conducted without any support from or connection to Facebook (Meta group), and Facebook had no access to the responses generated on our website or during the follow-up survey.

Data availability

All data underlying this study are available from the authors upon request.

Code availability

All code underlying this study is available from the authors upon request.

Global Burden of Disease Collaborative Network. Global Burden of Disease Study 2019 (GBD 2019) Reference Life Table . https://ghdx.healthdata.org/record/ihme-data/global-burden-disease-study-2019-gbd-2019-reference-life-table (2021).

Geldsetzer, P. et al. The state of hypertension care in 44 low-income and middle-income countries: A cross-sectional study of nationally representative individual-level data from 1.1 million adults. Lancet 394 , 652–662 (2019).

Manne-Goehler, J. et al. Health system performance for people with diabetes in 28 low-and middle-income countries: A cross-sectional study of nationally representative surveys. Plos Med. 16 , e1002751 (2019).

Widyaningsih, V. et al. Missed opportunities in hypertension risk factors screening in Indonesia: A mixed-methods evaluation of integrated health post (Posbindu) implementation. BMJ Open 12 , e051315 (2022).

Lin, X. et al. Global, regional, and national burden and trend of diabetes in 195 countries and territories: an analysis from 1990 to 2025. Sci. Rep. 10 , 14790 (2020).

Abegunde, D. O., Mathers, C. D., Adam, T., Ortegon, M. & Strong, K. The burden and costs of chronic diseases in low-income and middle-income countries. Lancet 370 , 1929–1938 (2007).

Tabassum, R. et al. Untapped aspects of mass media campaigns for changing health behaviour towards non-communicable diseases in Bangladesh. Glob. Health 14 , 1–4 (2018).

World Health Organization. Tackling NCDs: “Best buys” and other recommended interventions for the prevention and control of noncommunicable diseases . https://apps.who.int/iris/bitstream/handle/10665/259232/WHO-NMH-NVI-17.9-eng.pdf?sequence=1&isAllowed=y (2017).

World Health Organization. Package of Essential Noncommunicable (PEN) disease interventions for primary health care in low-resource settings . https://www.who.int/publications/i/item/9789240009226 (2020).

Pereira da Veiga, C. R., Semprebon, E., da Silva, J. L., Lins Ferreira, V. & Pereira da Veiga, C. Facebook HPV vaccine campaign: Insights from Brazil. Hum. Vaccines Immunother. 16 , 1824–1834 (2020).

Krupenkin, M., Yom-Tov, E. & Rothschild, D. Vaccine advertising: Preach to the converted or to the unaware? NPJ Digit. Med. 4 , 23 (2021).

Tjaden, J., Haarmann, E. & Savaskan, N. Experimental evidence on improving COVID-19 vaccine outreach among migrant communities on social media. Sci. Rep. 12 , 16256 (2022).

Ho, L. et al. The impact of large-scale social media advertising campaigns on COVID-19 vaccination: Evidence from two randomized controlled trials. AEA Pap. Proc. 113 , 653–658 (2023).

Mohanty, S., Leader, A. E., Gibeau, E. & Johnson, C. Using Facebook to reach adolescents for human papillomavirus (HPV) vaccination. Vaccine 36 , 5955–5961 (2018).

Breza, E. et al. Effects of a large-scale social media advertising campaign on holiday travel and Covid-19 infections: A cluster randomized controlled trial. Nat. Med. 27 , 1622–1628 (2021).

Parackal, M., Parackal, S., Eusebius, S. & Mather, D. The use of Facebook advertising for communicating public health messages: A campaign against drinking during pregnancy in New Zealand. J. Med. Internet Res.: Public Health Surveill. 3 , e7032 (2017).

Thrul, J., Klein, A. B. & Ramo, D. E. Smoking cessation intervention on Facebook: which content generates the best engagement? J. Med. Internet Res. 17 , e244 (2015).

Bull, S. S., Levine, D. K., Black, S. R., Schmiege, S. J. & Santelli, J. Social media–delivered sexual health intervention: A cluster randomized controlled trial. Am. J. Prev. Med. 43 , 467–474 (2012).

Yom-Tov, E., Shembekar, J., Barclay, S. & Muennig, P. The effectiveness of public health advertisements to promote health: A randomized-controlled trial on 794,000 participants. NPJ Digit. Med. 1 , 24 (2018).

Northcott, C. et al. Evaluating the effectiveness of a physical activity social media advertising campaign using Facebook, Facebook Messenger, and Instagram. Transl. Behav. Med. 11 , 870–881 (2021).

Widyahening, I., Van Der Graaf, Y., Soewondo, P., Glasziou, P. & Van Der Heijden, G. Awareness, agreement, adoption and adherence to type 2 diabetes mellitus guidelines: A survey of Indonesian primary care physicians. BMC Fam. Pract. 15 , 72 (2014).

Bakti, I. G. M. Y., Sumardjo, S., Fatchiya, A. & Syukri, A. F. Public knowledge of diabetes and hypertension in metropolitan cities, Indonesia. Public Health Sci. J. 13 , 1–13 (2021).

Banerjee, A. et al. Messages on COVID-19 prevention in India increased symptoms reporting and adherence to preventive behaviors among 25 million recipients with similar effects on non-recipient members of their communities. National Bureau of Economic Research. Preprint available at https://www.nber.org/system/files/working_papers/w27496/w27496.pdf (2020).

Marcus, M. E., Reuter, A., Rogge, L. & Vollmer, S. The effect of SMS reminders on health screening uptake: A randomized experiment in Indonesia. J. Econ. Behav. Organ. (forthcoming)

Athey, S., Grabarz, K., Luca, M. & Wernerfelt, N. Digital public health interventions at scale: The impact of social media advertising on beliefs and outcomes related to covid vaccines. Proc. Natl Acad. Sci. 120 , e2208110120 (2023).

Tunkl, C. et al. Are digital social media campaigns the key to raise stroke awareness in low-and middle-income countries? A study of feasibility and cost-effectiveness in Nepal. Plos One 18 , e0291392 (2023).

World Bank. Current health expenditure (% of GDP). World Development Indicators . [Dataset]. Washington D.C.: World Bank. https://data.worldbank.org/indicator/SH.XPD.CHEX.GD.ZS (2022).

Centers for Disease Control and Prevention. CDC in Indonesia. Factsheet Indonesia . https://www.cdc.gov/globalhealth/countries/indonesia/pdf/indonesia-fs.pdf (2020).

International Diabetes Federation. IDF Diabetes Atlas . (International Diabetes Federation, Brussels, 2021).

Orazi, D. C. & Johnston, A. C. Running field experiments using Facebook split test. J. Bus. Res. 118 , 189–198 (2020).

Rothman, A. & Salovey, P. Shaping perceptions to motivate healthy behavior: The role of message framing. Psychol. Bull. 121 , 3–19 (1997).

Rothman, A. J., Bartels, R. D., Wlaschin, J. & Salovey, P. The strategic use of gain-and loss-framed messages to promote healthy behavior: How theory can inform practice. J. Commun. 56 , S202–S220 (2006).

Cherry, T. L., James, A. G. & Murphy, J. The impact of public health messaging and personal experience on the acceptance of mask wearing during the COVID-19 pandemic. J. Econ. Behav. Organ. 187 , 415–430 (2021).

Seah, S. S. Y. et al. Impact of tax and subsidy framed messages on high-and lower-sugar beverages sold in vending machines: A randomized crossover trial. Int. J. Behav. Nutr. Phys. Act. 15 , 1–9 (2018).

Kuehnle, D. How effective are pictorial warnings on tobacco products? New evidence on smoking behaviour using Australian panel data. J. Health Econ. 67 , 102215 (2019).

Cil, G. Effects of posted point-of-sale warnings on alcohol consumption during pregnancy and on birth outcomes. J. Health Econ. 53 , 131–155 (2017).

Hall, M. G. et al. The impact of pictorial health warnings on purchases of sugary drinks for children: A randomized controlled trial. Plos Med. 19 , e1003885 (2022).

de Vries Mecheva, M., Rieger, M., Sparrow, R., Prafiantini, E. & Agustina, R. Snacks, nudges and asymmetric peer influence: Evidence from food choice experiments with children in Indonesia. J. Health Econ. 79 , 102508 (2021).

Maclean, J. C. & Buckell, J. Information and sin goods: Experimental evidence on cigarettes. Health Econ. 30 , 289–310 (2021).

Eibich, P. & Goldzahl, L. Health information provision, health knowledge and health behaviours: Evidence from breast cancer screening. Soc. Sci. Med. 265 , 113505 (2020).

Beam, E. A., Masatioglu, Y., Watson, T. & Yang, D. Loss aversion or lack of trust: Why does loss framing work to encourage preventive health behaviors? J. Behav. Exp. Econ. 104 , 102022 (2023).

Bertoni, M., Corazzini, L. & Robone, S. The good outcome of bad news: A field experiment on formatting breast cancer screening invitation letters. Am. J. Health Econ. 6 , 372–409 (2020).

Choi, I. et al. Using different Facebook advertisements to recruit men for an online mental health study: Engagement and selection bias. Internet Interv. 8 , 27–34 (2017).

Statista. Share of Facebook users in Indonesia as of April 2021, by age group . https://www.statista.com/statistics/1235773/indonesia-share-of-facebook-users-by-age/ (2024).

Statista. Share of Facebook users in Indonesia as of April 2021, by gender . https://www.statista.com/statistics/997045/share-of-facebook-users-by-gender-indonesia/ (2024).

Meta. Understand how results are sometimes calculated differently . https://en-gb.facebook.com/business/help/1329822420714248 (2024).

Meta. How to add URL parameters to Meta ads . https://en-gb.facebook.com/business/help/1016122818401732 (2024).

Kementerian Kesehatan Republik Indonesia. RISKESDAS 2018. Laporan Nasional Riskesdas . https://repository.badankebijakan.kemkes.go.id/id/eprint/3514/ (2018).

Toll, B. A. et al. Message framing for smoking cessation: The interaction of risk perceptions and gender. Nicotine Tob. Res. 10 , 195–200 (2008).

Nan, X. Communicating to young adults about HPV vaccination: Consideration of message framing, motivation, and gender. Health Commun. 27 , 10–18 (2012).

Hasseldine, J. & Hite, P. A. Framing, gender and tax compliance. J. Econ. Psychol. 24 , 517–533 (2003).

Kim, H. J. The effects of gender and gain versus loss frame on processing breast cancer screening messages. Commun. Res. 39 , 385–412 (2012).

Hidayat, B. et al. Direct medical cost of type 2 diabetes mellitus and its associated complications in Indonesia. Value Health Reg. Issues 28 , 82–89 (2022).

Srichang, N., Jiamjarasrangsi, W., Aekplakorn, W. & Supakankunti, S. Cost and effectiveness of screening methods for abnormal fasting plasma glucose among Thai adults participating in the annual health check-up at King Chulalongkorn Memorial Hospital. J. Med. Assoc. Thail. 94 , 833–41 (2011).

Dorison, C. A. et al. In COVID-19 health messaging, loss framing increases anxiety with little-to-no concomitant benefits: Experimental evidence from 84 countries. Affect. Sci. 3 , 577–602 (2022).

Sofyan, H. et al. The state of diabetes care and obstacles to better care in Aceh, Indonesia: a mixed-methods study. BMC Health Serv. Res. 23 , 271 (2023).

Ducat, L., Philipson, L. H. & Anderson, B. J. The mental health comorbidities of diabetes. JAMA 312 , 691–692 (2014).

Kepios. Digital 2022: Indonesia . https://datareportal.com/reports/digital-2022-indonesia (2022).

Kepios. Facebook Users, Stats, Data & Trends . https://datareportal.com/essential-facebook-stats (2023).

Kosinski, M., Matz, S. C., Gosling, S. D., Popov, V. & Stillwell, D. Facebook as a research tool for the social sciences: Opportunities, challenges, ethical considerations, and practical guidelines. Am. Psychol. 70 , 543–556 (2015).

Thornton, L. et al. Recruiting for health, medical or psychosocial research using Facebook: Systematic review. Internet Interv. 4 , 72–81 (2016).

Ananda, A. & Bol, D. Does knowing democracy affect answers to democratic support questions? A survey experiment in Indonesia. Int. J. Public Opin. Res. 33 , 433–443 (2021).

Grow, A. et al. Addressing public health emergencies via Facebook surveys: Advantages, challenges, and practical considerations. J. Med. Internet Res. 22 , e20653 (2020).

Meta. About A/B testing . https://en-gb.facebook.com/business/help/1738164643098669 (2024).

Alfano, M. Islamic law and investments in children: Evidence from the Sharia introduction in Nigeria. J. Health Econ. 85 , 102660 (2022).

Fadlon, I. & Nielsen, T. H. Family health behaviors. Am. Econ. Rev. 109 , 3162–3191 (2019).

Haglin, K., Chapman, D., Motta, M. & Kahan, D. How localized outbreaks and changes in media coverage affect Zika attitudes in national and local contexts. Health Commun. 35 , 1686–1697 (2020).

Neundorf, A. & Öztürk, A. How to improve representativeness and cost-effectiveness in samples recruited through meta: A comparison of advertisement tools. Plos One 18 , e0281243 (2023).

Lindstrom, J. & Tuomilehto, J. The diabetes risk score: A practical tool to predict type 2 diabetes risk. Diab. Care 26 , 725–731 (2003).

Nieto-Martínez, R., González-Rivas, J. P., Aschner, P., Barengo, N. C. & Mechanick, J. I. Transculturalizing diabetes prevention in Latin America. Ann. Glob. Health 83 , 432–443 (2017).

Muñoz-González, M. C. et al. FINDRISC modified for Latin America as a screening tool for persons with impaired glucose metabolism in Ciudad Bolívar, Venezuela. Med. Princ. Pract. 28 , 324–332 (2019).

Ku, G. M. & Kegels, G. The performance of the Finnish Diabetes Risk Score, a modified Finnish Diabetes Risk Score and a simplified Finnish Diabetes Risk Score in community-based cross-sectional screening of undiagnosed type 2 diabetes in the Philippines. Prim. Care Diab. 7 , 249–259 (2013).

Lim, H. M., Chia, Y. C. & Koay, Z. L. Performance of the Finnish Diabetes Risk Score (FINDRISC) and Modified Asian FINDRISC (ModAsian FINDRISC) for screening of undiagnosed type 2 diabetes mellitus and dysglycaemia in primary care. Prim. Care Diab. 14 , 494–500 (2020).

Fauzi, N. F. M., Wafa, S. W., Ibrahim, A. M., Raj, N. B. & Nurulhuda, M. H. Translation and validation of American Diabetes Association diabetes risk test: The Malay version. Malays. J. Med. Sci. 29 , 113–125 (2022).

American Diabetes Association. American diabetes alert. Diab. Forecast 46 , 54–55 (1993).

American Diabetes Association. Good to know: Diabetes risk test. Clin. Diab. 37 , 291 (2019).

Rokhman, M. et al. Translation and performance of the Finnish Diabetes Risk Score for detecting undiagnosed diabetes and dysglycaemia in the Indonesian population. Plos One 17 , e0269853 (2022).


Acknowledgements

We thank Ferdyani Yulia Atikaputri, Ayu Paramudita and Mardha Tilla Septiani for their research assistance. We also thank Gerard van den Berg, Annegret Kuhn, Robert Lensink and participants at seminars at the University of Groningen and the University of Passau as well as at the NCDE 2023 in Gothenburg, GDEC 2023 in Dresden and the Web Conference 2023 in Austin, Texas for very useful comments and suggestions. Part of this work was done while IW was at the Qatar Computing Research Institute, HBKU, Doha, Qatar, EYT was at Microsoft Research, Herzliya, Israel and at the Technion Israel Institute of Technology, Faculty of Industrial Engineering and Management, Haifa, Israel, and MF was at the University of Groningen, Department for Economics, Econometrics and Finance, Groningen, The Netherlands. We acknowledge financial support by the Open Access Publication Fund of the University Library Passau.

Open Access funding enabled and organized by Projekt DEAL.

Author information

Authors and Affiliations

University of Passau, Department of Economics, Passau, Germany

Manuela Fritz & Michael Grimm

Technical University Munich, School of Social Science and Technology, Munich, Germany

Manuela Fritz

IZA, Bonn, Germany

Michael Grimm

RWI Research Network, Essen, Germany

Saarland University, Department of Computer Science, Saarbruecken, Germany

Ingmar Weber

Bar Ilan University, Department of Computer Science, Ramat Gan, Israel

Elad Yom-Tov

Xiaomi Indonesia, DKI Jakarta, Indonesia

Benedictus Praditya


Contributions

MF: Conceptualization, Methodology, Software, Data Curation, Formal analysis, Writing - Original Draft, Writing - Review & Editing. MG: Conceptualization, Methodology, Writing - Original Draft, Writing - Review & Editing, Supervision. IW: Conceptualization, Writing - Review & Editing. EYT: Conceptualization, Writing - Review & Editing. BP: Conceptualization, Visualization. All authors have read and approved the final manuscript.

Corresponding author

Correspondence to Manuela Fritz .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Fritz, M., Grimm, M., Weber, I. et al. Can social media encourage diabetes self-screenings? A randomized controlled trial with Indonesian Facebook users. npj Digit. Med. 7 , 245 (2024). https://doi.org/10.1038/s41746-024-01246-x


Received : 18 January 2024

Accepted : 31 August 2024

Published : 13 September 2024

DOI : https://doi.org/10.1038/s41746-024-01246-x


  • Open access
  • Published: 12 September 2024

An intervention study of poly-victimization among rural left-behind children based on the theoretical framework of planned behavior

  • Yandong Luo,
  • Jiajun Zhou,
  • Pan Wen,
  • Ping Chang,
  • Zicheng Cao &
  • Liping Li

Child and Adolescent Psychiatry and Mental Health volume 18, Article number: 116 (2024)


Poly-victimization (PV) not only threatens physical and mental health but also causes a range of social problems. Left-behind children in rural areas are more likely to experience PV problems. However, there have been fewer studies on PV among rural children, and even fewer intervention studies.

The difference-in-differences method was employed to analyze the impact of intervention measures, based on the theory of planned behavior, on PV among left-behind children in rural areas.

The study subjects were left-behind children from six middle schools in two cities in southern China who completed the baseline survey from 2020 to 2021. They were divided into a control group and an intervention group, each consisting of 228 cases, based on their schools. Before and after the intervention, a self-developed victimization-related knowledge, attitude, and practice (KAP) questionnaire, a poly-victimization scale, and a middle school students' coping style scale were used to evaluate the victimization-related KAP, victimization occurrence, and coping styles of left-behind children, respectively. Stata 15.0 was used to fit a difference-in-differences regression model to analyze the impact of the intervention measures on poly-victimization and coping styles.

A mixed ANOVA revealed that after the intervention, the KAP scores of the intervention group were significantly higher than those of the control group ( p  < 0.05). After the intervention, the incidence of child victimization in the intervention group dropped to 9.60% ( n  = 22), lower than in the baseline survey, with a statistically significant difference ( p  < 0.01). The incidence of PV among children in the intervention group was lower than that in the control group, with the difference being statistically significant ( p  < 0.01). The net reduction in the incidence of PV among children was 21.20%. After the intervention, the protection rate against PV among children was 73.33%, and the effectiveness index was 3.75. The intervention improved positive coping styles such as problem-solving and help-seeking, while reducing negative coping styles such as avoidance and venting, with the differences being statistically significant ( p  < 0.05).

Intervention measures based on the theory of planned behavior reduce the occurrence of PV among left-behind children, and the intervention effects on different types of victimization are also different.

Introduction

Left-behind children in rural areas are a unique group among children in China. Rural left-behind children are defined as children under the age of 18 who reside in rural areas and have one or both parents working as migrant laborers in urban areas [ 1 ]. These children are typically left in the care of relatives, such as grandparents, or other guardians. The phenomenon of left-behind children is prevalent in China due to the large-scale internal migration driven by economic opportunities in urban areas. This prolonged separation from their primary caregivers often leads to emotional and social challenges [ 2 ]. As of the latest data, more than 33% of children (6.97 million) residing in rural China are left-behind children [ 3 ]. This phenomenon is not exclusive to China; similar patterns are observed globally wherever significant rural-to-urban or international labor migration occurs. For instance, substantial numbers of left-behind children are also reported in countries such as India [ 4 ], the Philippines (27%, 8 million) [ 5 ], and Thailand (6.6%, 3 million) [ 6 ], facing comparable social and emotional challenges.

These children are more vulnerable to victimization due to the lack of parental supervision and support, making them a critical group for study in the context of poly-victimization (PV). Research has shown that this group has a higher probability of experiencing PV problems [ 7 ]. PV refers to children experiencing multiple types of harm within the past year, including physical victimization, property victimization, child abuse, peer victimization, sexual victimization, witnessing/indirect victimization, and other forms of victimization [ 8 ]. PV not only threatens physical and mental health but also causes a range of social problems such as suicide attempts, post-traumatic stress disorder (PTSD), depressive symptoms, and violent behavior [ 9 , 10 ]. It can also lead to practical problems in children, such as poor school performance, alcohol abuse, involvement in crime, and revictimization [ 11 , 12 ]. According to a survey conducted in Sichuan Province, China [ 13 ], the PV incidence rate among the general child population was found to be 28.3%. In comparison, left-behind children whose parents have migrated for work exhibited a PV incidence of 28.5%. Notably, left-behind children experiencing parental separation or divorce had a significantly higher PV incidence rate of 39.1%. These statistics highlight the elevated risk of victimization among left-behind children, underscoring the importance of targeted interventions to address their unique challenges. Currently, many studies in China have focused on individual types of victimization among rural children [ 14 , 15 , 16 ], but there have been fewer studies on PV in this population, and even fewer intervention studies.

Children use different coping styles when they experience PV as a stressful event. Coping style refers to the cognitive and practical strategies that individuals adopt in the face of frustration and stress, also known as coping mechanisms [ 17 ]. Coping styles can significantly influence how children deal with PV, affecting both their immediate reactions and long-term resilience [ 18 ]. An individual's coping style influences the nature and intensity of the stress response and moderates the relationship between stress and its outcomes. The Theory of Planned Behavior (TPB) [ 19 ] is a practical decision-making model proposed by Icek Ajzen. It is mainly used to predict and understand human behavior, with a core intervention strategy focused on changing behavioral intention, which is jointly determined by attitude, subjective norm, and perceived behavioral control. The theory has been widely applied to many aspects of human life, such as fitness and exercise behavior, healthcare behavior, social learning behavior, and more [ 20 , 21 ]. Meanwhile, the Theory of Planned Behavior has been successfully applied in various interventions aimed at children and adolescents, demonstrating its relevance and effectiveness in this demographic [ 22 , 23 ]. Several studies have used the TPB to investigate or predict children's diets and undesirable behaviors or encounters, among other things. For example, one study used two theories, including the TPB, to predict smoking among Chinese adolescents and found that the TPB was superior [ 24 ]. Another Indian study explored the role of the TPB in predicting areca nut use among adolescents [ 25 ]. Therefore, the coping styles of left-behind children exposed to PV can be regulated and improved by applying this theory, focusing the intervention on attitudes, subjective norms, and perceived behavioral control.

A baseline survey conducted as part of this study in 2020 established the initial prevalence of PV among left-behind children at 23%(626/2722). This data underscores the critical need for targeted interventions to address the high vulnerability of this group. In response, our study designed and implemented an intervention strategy based on the Theory of Planned Behavior (TPB), aimed specifically at mitigating these risks. The baseline study and its results are an integral part of the sample size calculations for subsequent randomized controlled trials and the analysis of intervention effects in this paper.

The difference-in-differences (DID) model [ 26 ] was used to assess the impact of the intervention on the knowledge, attitudes, and practices of left-behind children and the incidence of PV, thereby comprehensively evaluating the effectiveness of the intervention. The DID model is a quasi-experimental research design that helps estimate the causal effect of a treatment or intervention. It does so by comparing the changes in outcomes over time between a treatment group (which receives the intervention) and a control group (which does not). This method helps to account for time-invariant differences between the groups and for trends that would affect both groups similarly, thus isolating the effect of the intervention. DID allows us to make robust causal inferences about the impact of our intervention by comparing the pre- and post-intervention outcomes between the intervention group and the control group. Moreover, the DID model is well suited to the repeated-measures data on victimization-related outcomes collected before and after the intervention. This allows us to control for baseline differences and observe changes over time, providing a clearer picture of the intervention's effectiveness. DID has been widely used in research on adolescents. In a survey with repeated measures of adolescent risk behavior across 41 states in the United States, the study used DID to compare changes in past-year physical TDV (teen dating violence) in states that enacted TDV laws compared to states with no required laws [ 27 ]. Another study, of pedestrian injuries among U.S. schoolchildren, estimated difference-in-differences in injury risk between census tracts with and without the intervention following the changepoint [ 28 ].

The purpose of this study was to design and implement a TPB-based PV intervention strategy for left-behind children and to explore its effectiveness in addressing the PV challenges faced by these children, specifically: (1) whether the TPB-based intervention reduces the incidence of PV among left-behind children; (2) whether children's coping styles improve after the intervention, particularly their ability to manage their emotions and communicate their thoughts and feelings to others; and (3) whether the intervention enhances children's Knowledge, Attitude, and Practice (KAP) related to personal safety, peer interactions, and related domains. Through a community intervention trial, this study evaluated the effectiveness of these interventions to provide a scientific basis for developing victimization intervention programs for left-behind children in other regions.

Participants

A survey was conducted in six middle schools in two cities in southern China (Shantou and Jieyang) between 2020 and 2021 to establish a baseline. Left-behind children from these schools were randomly assigned to intervention and control groups. Eligible participants were students in the first and second grades of junior high school who were in good health, had normal physical mobility, and were willing to undergo the intervention. The intervention phase was carried out between 2021 and 2022. Inclusion criteria: (1) enrolled in the first or second grade of junior high school; (2) identified as left-behind (having one or both parents working away from home). Exclusion criteria: (1) significant cognitive or physical disabilities that could interfere with participation in the intervention or assessment procedures, as reported by school administrators or guardians. We sampled 460 students and, based on the above inclusion and exclusion criteria, ended up with 456 students, who were randomly assigned to the intervention and control groups ( n  = 228 each). As shown in Table  1 , the mean and standard deviation of children's ages in the intervention and control groups were 13.28 ± 0.232 and 13.66 ± 0.342, respectively.

Sample size

Based on the findings of the baseline survey, the initial prevalence of PV among left-behind children was recorded at 23.00%, denoted as P_max = 0.23. Following the implementation of the proposed intervention, the prevalence of PV was expected to decrease to 10.00–13.00%, denoted as P_min = 0.13. Using α = 0.05 and β = 0.20, with a factor k = 2 representing the number of groups, a critical value of λ = 7.85 was obtained from the relevant table of values. Consequently, a minimum sample size of n  = 228 was determined, necessitating at least 228 participants in both the intervention and control groups. The formula for this calculation is provided below:
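One commonly used arcsine-based form of this calculation, with λ taken from critical-value tables, is the following (a reconstruction; the exact variant the authors applied is an assumption):

$$
n=\frac{\lambda}{\left(2\arcsin\sqrt{P_{\max}}-2\arcsin\sqrt{P_{\min}}\right)^{2}},
$$

with the arcsine taken in radians.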

This study selected six middle schools in two cities in southern China as research sites to conduct an intervention study and measure the changes in the knowledge, belief, and behavior levels, as well as the incidence of PV among left-behind children, before and after the intervention.

Research in behavioral psychology suggests that longer interventions are more effective in instilling sustainable changes in behavior and attitudes [ 29 , 30 ]. Twelve months allow for the gradual assimilation and reinforcement of new behaviors and concepts, which is essential for changing established habits and norms. This schedule also aligns with academic calendars, facilitating a structured approach to implementation within school settings and ensuring that interventions do not overwhelm participants while providing regular touchpoints for reinforcement and assessment of progress. The number of intervention sessions was set at 12 per year, rather than more, to account for the possibility of students changing schools and loss to follow-up, and because the poly-victimization questionnaire surveys victimization sustained in the past year [ 8 ].

The study began in September 2021 and continued through August 2022, with thematically relevant content delivered each month to the intervention group and general health and safety education delivered to the control group, for a total of 12 sessions per group. A comprehensive assessment was conducted before and after the intervention to evaluate changes in the incidence of poly-victimization in children as well as changes in participants' knowledge, beliefs, and behaviors.

Intervention group

The intervention themes, grounded in the Theory of Planned Behavior [ 31 ], aim to modify attitudes, subjective norms, and perceived behavioral control regarding poly-victimization. Themes like Types of PV, School Bullying, and Sexual Safety educate on risks and prevention, aiming to change attitudes and enhance behavioral control [ 32 ]. Topics such as Interpersonal Communication and Non-violent Communication foster supportive social norms and improve skills in managing stress and emotional reactions, crucial for preventing poly-victimization [ 33 ].

Attitudes toward the behavior were addressed by explaining to students the types of PV, their characteristics, forms of occurrence, and associated hazards, in order to raise awareness of PV among left-behind children. Subjective norms were addressed primarily by fostering positive external environments, engaging teachers in the intervention to enhance their understanding of PV, and equipping them with the skills to identify individuals at risk and prevent such incidents.

Improvement in perceived behavioral control was achieved by having students record instances of victimization and their responses. Finally, case studies and discussions were conducted to shift the behavioral intentions of left-behind children towards beneficial actions.

Changes in Knowledge (K): Knowledge enhancement was achieved through educational sessions that provided information about PV, its signs, consequences, and prevention strategies [ 9 ]. These sessions were interactive, using multimedia presentations, discussions, and Q&A sessions to ensure participants comprehended and retained the information. Pre- and post-tests were used to measure changes in knowledge levels.

Changes in Attitude (A): Attitude modification was addressed by creating a supportive environment where children could openly discuss their feelings and beliefs about PV [ 12 ]. Activities included role-playing, group discussions, and sharing personal experiences. These activities were designed to challenge negative attitudes and reinforce positive ones. Attitude changes were assessed using questionnaires that gauged shifts in participants’ beliefs and perceptions.

Changes in Practice (P): Behavioral changes were encouraged through practical skill-building exercises. Children were taught and practiced effective coping strategies, assertiveness training, and problem-solving skills [ 34 ]. These practices were reinforced through regular follow-up sessions. Behavioral changes were monitored through self-reports and observations by facilitators.

The intervention group received a 12-month intervention with a one-month gap between each session. The intervention consisted of educational presentations and videos delivered on-site, with each session lasting 30 to 40 min. The content of the intervention was shown in Table  1 .

Control groups

The control group received general health education, which included information on preventing traffic accidents and drowning. To ensure the results were not confounded by extraneous variables, the length of the intervention and the interval between sessions were kept consistent with those of the intervention group.

Ethics approval and consent to participate

All methods were performed in accordance with the relevant guidelines and regulations. This research was approved by the ethics committee of Shantou University Medical College. All the participants and their guardians agreed and provided signed, informed assent or consent on a voluntary basis.

Measurements

Victimization-related KAP scores

“Victimization-Related KAP Scores” is based on the theories developed by G. Cust [ 35 ], a British health educator, as well as the themes of this study on poly-victimization interventions. The questionnaire about victimization contained a total of 10 items for knowledge (K), 20 items for attitude (A), and 10 items for practice (P). The total KAP score was obtained by adding the attitude score, knowledge score, and practice score. For each item, the correct option was determined through prior discussion among experts, and the knowledge rate for each item was calculated for all children. The children’s KAP items on victimization were scored as follows: 1 point was given for each correctly answered item, and 0 points for each incorrectly answered item. The maximum score was 40 points. The overall reliability coefficient was 0.85, indicating high internal consistency. Each subscale also demonstrated acceptable reliability with coefficients above 0.75. Construct validity was assessed using factor analysis, which confirmed that the items loaded appropriately onto their respective factors (knowledge, attitude, and practice) with factor loadings above 0.70.

Occurrence of poly-victimization

Indicators for evaluating the effectiveness of interventions.

The index of effectiveness is calculated as the ratio of the PV incidence rate in the control group to the PV incidence rate in the intervention group. A higher index value indicates a more effective intervention, as it reflects a greater relative reduction in PV incidence in the intervention group compared with the control group. The effectiveness index is compared against a criterion of 1.0: a value greater than 1.0 indicates that the intervention was effective in reducing PV rates relative to the control group; a value equal to 1.0 suggests no difference in PV rates between the two groups; and a value less than 1.0 would imply that the intervention group had higher PV rates than the control group, suggesting ineffectiveness.
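Stated formulaically (the protection rate reported in the Results is included here using its standard definition; read this as an editorial reconstruction rather than the authors' exact notation):

$$
\text{Effectiveness index}=\frac{I_{C}}{I_{T}},\qquad
\text{Protection rate}=\frac{I_{C}-I_{T}}{I_{C}}\times 100\%,
$$

where I_C and I_T denote the post-intervention PV incidence rates in the control and intervention groups, respectively.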

Coping styles questionnaire

Coping styles were evaluated with the Coping Style Questionnaire (CSQ) [ 36 ], which is a 62-item self-report test. Items are rated as 1 (agree) or 0 (disagree). The questionnaire comprises six subscales including both immature and mature coping styles. Immature coping styles include "avoidance", "fantasy" and "venting"; mature coping styles include "problem solving", "help seeking" and "bear". In the original development of the CSQ, the construct validity was established with each factor consisting of entries with factor loadings greater than or equal to 0.35, and the overall reliability was recorded at 0.72. These metrics confirm the tool's ability to reliably and validly measure coping behaviors. In the present study, the Cronbach alpha coefficient for the CSQ was 0.873, and the validity was 0.869, ensuring that the questionnaire remains a reliable and valid instrument for assessing coping styles in this study.

Poly-victimization

Poly-victimization among children was examined by combining the JVQ-R2 (Juvenile Victimization Questionnaire-2nd Revision) developed by Finkelhor et al. [ 37 ] and the Chinese version of the JVQ [ 38 ]. The JVQ comprises six modules: conventional crimes (9 items), caregiver victimization (4 items), peer and sibling victimization (6 items), sexual victimization (6 items), witnessing and indirect victimization (7 items), and electronic victimization (2 items), encompassing 34 items in total. Each item relates to a specific type of harmful event, and subjects are asked to indicate whether such events have occurred. Scoring is binary; "yes" responses are valued at 1 point and "no" responses at 0 points. In the present sample, reliability and validity of the scale were tested; the standardized Cronbach's alpha coefficient for the total score of recent victimization items is 0.875, with a KMO value of 0.793. According to prior studies, this study operationally defines poly-victimization as a JVQ scale score of 4 points or more in the last year [ 39 , 40 ].
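As a concrete illustration of this operational definition, a minimal sketch in Python follows (the item column names and the toy data are hypothetical; the 34 JVQ items are assumed to be coded 1 = yes, 0 = no as described above):

import pandas as pd

# Hypothetical names for the 34 JVQ items, each coded 1 = "yes", 0 = "no".
jvq_items = [f"jvq_{i:02d}" for i in range(1, 35)]

# Toy data: two children, the second endorsing four different victimization items.
df = pd.DataFrame([[0] * 34, [1, 1, 0, 1, 1, 0] + [0] * 28], columns=jvq_items)

# Total number of different harmful events endorsed in the past year.
df["jvq_total"] = df[jvq_items].sum(axis=1)

# Poly-victimization (PV): a JVQ score of 4 or more in the last year.
df["poly_victim"] = (df["jvq_total"] >= 4).astype(int)

print(df[["jvq_total", "poly_victim"]])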

Data analysis

The study assessed the effectiveness of the intervention by comparing the differences in KAP scores and coping style scores before and after the intervention between the intervention group and the control group using a mixed ANOVA in SPSS 25.0. The count data were examined using a chi-square test to identify any variations in rates and composition ratios. The researchers utilized the standardized mean difference (SMD) [ 41 ] metric to evaluate the balance between the groups, with SMD < 0.1 signifying a satisfactory balance. This threshold indicates a minimal disparity between the groups under study. The expression was:
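For a binary covariate, the standardized difference described in the cited reference [ 41 ] can be written as follows (a reconstruction; treat the exact form as an assumption about the variant the authors used):

$$
\mathrm{SMD}=\frac{P_{T}-P_{C}}{\sqrt{\dfrac{P_{T}\left(1-P_{T}\right)+P_{C}\left(1-P_{C}\right)}{2}}}
$$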

In the above formula, P_T and P_C represent the positive rates of a covariate in the intervention group and the control group, respectively.

The efficacy of the intervention was assessed through the application of a double difference model, specifically the Difference-in-Differences (DID) method. This approach was utilized to analyze variations in Knowledge, Attitudes, and Practices (KAP), incidence of victimization, and coping mechanisms both pre- and post-intervention, as well as to determine the impact of the intervention within the intervention and control groups.

The DID model is constructed on the basis of the Ordinary least squares (OLS) method, and its generalized linear model expression for the double difference model is
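A standard two-period specification consistent with the variable definitions below (a reconstruction, not necessarily the authors' exact notation) is:

$$
Y_{it}=\alpha_{0}+\alpha_{1}T_{it}+\alpha_{2}D_{it}+\alpha_{3}\left(T_{it}\times D_{it}\right)+\varepsilon_{it},
$$

where Y_it is the outcome (for example, the KAP score or a PV indicator) for child i in period t, and α_3 is the difference-in-differences estimate of the intervention effect.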

The variable T serves as a categorical placeholder; when an individual ( i ) is impacted by the intervention, they are categorized into the intervention group, denoted by T = 1. Conversely, if the individual ( i ) is not influenced by the intervention, they are categorized into the control group with T = 0. D is a dummy variable for the implementation of the intervention, where D = 0 before the intervention and D = 1 after the intervention. T_it × D_it is the interaction term between the grouping dummy variable and the intervention implementation dummy variable, with coefficient α_3.
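To make the estimation concrete, a minimal sketch of this specification in Python follows (the authors used Stata 15.0; the data frame, variable names, and values here are hypothetical):

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per child and period.
# treated = 1 for the intervention group (T); post = 1 after the intervention (D).
df = pd.DataFrame({
    "kap_score": [20, 24, 21, 22, 19, 30, 20, 23],
    "treated":   [1, 1, 0, 0, 1, 1, 0, 0],
    "post":      [0, 1, 0, 1, 0, 1, 0, 1],
})

# OLS with group, period, and their interaction; the coefficient on
# treated:post is the difference-in-differences estimate (alpha_3).
model = smf.ols("kap_score ~ treated * post", data=df).fit()
print(model.params)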

Baseline characterization

The baseline characteristics of the control and intervention groups are presented in Table  2 . Each group contained 228 subjects. The differences in the distribution proportions of gender, only-child status, health status, unscrupulous behavior of close friends, parental marital status, and parental health status did not exhibit statistical significance between the two groups ( p >0.05). However, the distribution proportions of parental occupation demonstrated statistical significance ( p <0.05). Gender, only-child status, health status, parental marital status, and parental health status had SMD < 0.1, while unscrupulous behavior of close friends and parental occupation had SMD > 0.1.

Variations in KAP related to victimization before and after intervention

A 2 × 2 mixed ANOVA was conducted with group (intervention and control) as the between-subjects factor and time (before and after intervention) as the within-subjects factor. The results showed a significant main effect for intervention, F(1, 454) = 25.905, p  < 0.05: the control group reported significantly lower KAP scores than the intervention group. There was also a significant group × time interaction, F(1, 454) = 42.142, p  < 0.05. Simple-effects tests indicated that the control group reported significantly lower KAP scores than the intervention group before the intervention ( p  < 0.05). The mean and standard deviation of the KAP and its subcategory scores are reported in the following tables (Tables  3 , 4 ).

Variations in PV occurrence before and after the intervention

During the initial survey, the prevalence of PV among children in the intervention group was 20.2% ( n  = 46), while in the control group it was 25.4% ( n  = 58), indicating no statistically significant difference ( p  = 0.18). The incidence of PV in the control group was 25.4% during the initial survey and increased to 36.0% ( n  = 82) following the intervention, demonstrating a statistically significant difference ( p  = 0.10). Following the intervention, the prevalence of PV among children in the intervention group declined to 9.6% ( n  = 22), demonstrating a statistically significant decrease compared to the baseline survey findings ( p  < 0.01). The prevalence of PV among children in the intervention group was found to be significantly lower compared to the control group, with statistical significance ( p  < 0.01). The net reduction in the incidence of PV among children was 21.2%. The post-intervention protection against the occurrence of PV among children was 73.33% with an effectiveness index of 3.75 (Table  5 ).
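As an arithmetic check, substituting the reported post-intervention incidences into the definitions given in the Methods reproduces the stated figures:

$$
\frac{36.0\%}{9.6\%}=3.75,\qquad\frac{36.0\%-9.6\%}{36.0\%}\times 100\%\approx 73.33\%.
$$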

The changes in the distribution of victimization types before and after the intervention are presented in Table  6 . The incidence of various types of victimization changed differently following the intervention. The predominant types of victimization observed in children before and after the intervention primarily consisted of conventional crimes and witnessing and indirect victimization. The most effective interventions were observed in cases of peer victimization and electronic victimization, where protection rates exceeded 60% and the effectiveness indices were recorded at 2.80 and 2.81, respectively.

Subgroup dummy variables (control group d = 0; intervention group d = 1), time dummy variables (after intervention t = 1; before intervention t = 0), and interaction terms between subgroups and time (td) were set according to the requirements of the DID model. The data presented in Table  7 demonstrate that the intervention resulted in a statistically significant decrease in the occurrence of victimization among children.

Variations in coping styles before and after the intervention

A 2 × 2 mixed ANOVA was conducted with group (intervention and control) as the between-subjects factor and time (before and after intervention) as the within-subjects factor. The results showed a significant main effect for intervention, F(1, 454) = 9.554, p  < 0.05: the control group reported a significantly lower total coping style score than the intervention group. There was also a significant group × time interaction, F(1, 454) = 20.004, p  < 0.05. Simple-effects tests indicated that the control group reported significantly lower coping style scores than the intervention group before the intervention ( p  < 0.05). The mean and standard deviation of the coping style scores and their subcategory scores are reported in the following table (Table  8 ).

Dummy variables were created to represent subgroups (control group d = 0 and intervention group d = 1), time points (before intervention t = 0 and after intervention t = 1), and the interactions between subgroups and time (td), as specified by the DID model. The intervention led to an increase in children’s competence in positive coping styles when facing victimization, including problem-solving and help-seeking, while concurrently reducing competence in negative coping styles, such as avoidance and venting. The observed difference was statistically significant ( p  < 0.05) (Table 9 ).

The primary purpose of this study was to evaluate the effectiveness of an intervention program based on the Theory of Planned Behavior in reducing poly-victimization among left-behind children in rural China. Additionally, we aimed to assess the impact of the intervention on the Knowledge, Attitudes, and Practice (KAP) related to victimization and the coping styles of the participants. The results of the study showed that after the intervention, the incidence of PV in left-behind children decreased by 21.20%. Therefore, the intervention implemented in this study may have been effective in reducing the incidence of PV in left-behind children. After the intervention, the KAP scores related to victimization among left-behind children were significantly improved compared to the baseline survey. Therefore, such interventions can lead to enhanced levels of KAP related to PV among children. The results of this study showed that the intervention increased children’s competence in positive coping styles when facing victimization, such as problem-solving and help-seeking, while decreasing competence in negative coping styles, such as avoidance and venting, with statistically significant differences ( p <0.05). This suggests that interventions based on the TPB framework can improve the coping styles of left-behind children and thus reduce the occurrence of PV.

Our findings align with existing literature that underscores the effectiveness of interventions based on cognitive-behavioral frameworks like TPB in changing behavior and improving psychological outcomes [ 31 ]. Research has shown that structured interventions can significantly impact knowledge and attitudes towards complex issues like victimization [ 42 ]. Furthermore, the improvement in coping strategies observed in our study reflects findings from prior studies, which indicate that positive coping styles can mitigate the impacts of adverse experiences [ 43 ]. The reduction in PV rates and improvement in KAP scores in our study are consistent with the theoretical predictions of TPB, which suggest that changes in attitudes, subjective norms, and perceived behavioral control can lead to behavior change. The enhancement of coping strategies supports literature emphasizing the role of adaptive coping in reducing the occurrence and impact of negative life events [ 44 ].

Prior studies have indicated that experiences of PV can impact the coping styles of children and adolescents when faced with challenges. Coping refers to the cognitive and practical strategies employed by an individual in response to stressful situations to reduce or alleviate the negative impacts of stress [ 45 ]. Academics widely recognize coping as a dynamic combination of cognition and behavior. Coping styles may also impact the occurrence of PV, with positive coping styles mitigating the negative effects of adverse events [ 46 , 47 ]. Some studies have also suggested that the more frequently negative coping styles are used, the higher the frequency of adverse life events. Appropriate coping styles may not only reduce negative affect but also reduce the frequency of individual injurious events and the number of types of victimization, thereby preventing the occurrence of PV [ 48 , 49 , 50 ].

The study employed a rigorous difference-in-differences (DID) analysis to evaluate the intervention’s effectiveness, providing robust causal inference. The intervention was based on the well-established Theory of Planned Behavior, which has been shown to be effective in various health-related behavior change interventions. This robust statistical approach allows us to confidently attribute observed changes in poly-victimization and coping strategies directly to the intervention. This analytical rigor is complemented by the theoretical foundation of the intervention, which is based on the well-established Theory of Planned Behavior (TPB). The study included a sizable sample of left-behind children from multiple schools, focusing on this particular group. Additionally, the focus on left-behind children, a particularly vulnerable population, not only enhances the generalizability of our findings within this demographic but also addresses a significant gap in the literature. By integrating a sizable sample from multiple schools, the study aims to mitigate challenges faced by these children. However, this study is not without several limitations. First, some of the study data relied on self-reported measures, which may be subject to social desirability bias and recall bias. Second, the follow-up period lasted for a relatively short period of about one year, limiting the assessment of the long-term sustainability of the intervention effects. Third, the study did not compare the intervention group with a group receiving a different type of intervention, such as social-emotional learning, which could have provided a more comprehensive understanding of the relative effectiveness of different intervention strategies. Fourth, some participants in the intervention group transitioned from PV to non-PV, while others moved from non-PV to PV status. However, our study did not focus on those who transitioned from non-PV status (having experienced 0, 1, 2, or 3 victimization incidents) to PV status (experiencing 4 or more incidents), nor on the differences in protection rates among groups subjected to varying numbers of victimization. Future studies could conduct long-term follow-up studies to assess the sustainability of the intervention effects, or explore the potential mediating and moderating factors that influence the effectiveness of interventions in reducing PV among left-behind children, to promote the healthy growth of the special group of left-behind children.

Interventions based on the Theory of Planned Behavior reduced the incidence of poly-victimization in left-behind children, with effects varying across different types of victimization. The implementation of an intervention program based on the TPB should consider the prevalence characteristics of poly-victimization among left-behind children in the region and the influencing factors obtained from the baseline survey, and then formulate evidence-based, rational, and comprehensive interventions. Therefore, by adopting measures that consider the types of victimization in the intervention program, the occurrence of the main types of victimization can be effectively reduced.

Data availability

No datasets were generated or analysed during the current study.

Wen YJ, Hou WP, Zheng W, Zhao XX, Wang XQ, Bo QJ, et al. The neglect of left-behind children in China: a meta-analysis. Trauma Violence Abuse. 2021;22(5):1326–38.


Qu X, Wang X, Huang X, Ashish KC, Yang Y, Huang Y, et al. Socio-emotional challenges and development of children left behind by migrant mothers. J Glob Health. 2020;10(1):010806.


Zhang X, Hong H, Hou W, Liu X. A prospective study of peer victimization and depressive symptoms among left-behind children in rural China: the mediating effect of stressful life events. Child Adolesc Psychiatry Ment Health. 2022;16(1):56.

Viet Nguyen C. Does parental migration really benefit left-behind children? Comparative evidence from Ethiopia, India, Peru and Vietnam. Soc Sci Med. 2016;153:230–9.

Dominguez GB, Hall BJ. The health status and related interventions for children left behind due to parental migration in the Philippines: a scoping review. Lancet Reg Health West Pac. 2022;28:100566.


Thailand's Left-Behind Children. Global Health NOW. https://globalhealthnow.org/2017-11/thailands-left-behind-children

Chen M, Chan KL. Parental absence, child victimization, and psychological well-being in rural China. Child Abuse Negl. 2016;59:45–54.

Finkelhor D, Ormrod RK, Turner HA, Hamby SL. Measuring poly-victimization using the Juvenile victimization questionnaire. Child Abuse Negl. 2005;29(11):1297–312.

Finkelhor D, Ormrod RK, Turner HA. Polyvictimization and trauma in a national longitudinal cohort. Dev Psychopathol. 2007;19(1):149–66.

Chan KL, Chen M, Chen Q, Ip P. Can family structure and social support reduce the impact of child victimization on health-related quality of life? Child Abuse Negl. 2017;72:66–74.

Ford JD, Delker BC. Polyvictimization: Adverse impacts in childhood and across the lifespan. Routledge; 2020. p. 143.

Ford JD, Delker BC. Polyvictimization in childhood and its adverse impacts across the lifespan: introduction to the special issue. J Trauma Dissociation. 2018;19(3):275–88.

Chen M, Chan KL. Parental absence, child victimization, and psychological well-being in rural China. Child Abuse Negl. 2016;59. https://pubmed.ncbi.nlm.nih.gov/27500387/

Zhang H, Zhou H, Cao R. Bullying victimization among left-behind children in Rural China: Prevalence and Associated Risk factors. J Interpers Violence. 2021;36(15–16):NP8414–30.

Zhang H, Chi P, Long H, Ren X. Bullying victimization and depression among left-behind children in rural China: roles of self-compassion and hope. Child Abuse Negl. 2019;96:104072.

Xiao J, Su S, Lin D. Trajectories of peer victimization among left-behind children in rural China: the role of positive school climate. J Res Adolesc. 2024.

Lazarus RS, Folkman S. Stress, appraisal, and coping. Cham: Springer; 1984. p. 460.

Guerra C, Pereda N, Guilera G. Poly-victimization and coping profiles: Relationship with externalizing symptoms in adolescents. J Interpers Violence. 2021;36(3–4):1865–82.

Ajzen I. The theory of planned behavior. Org Behav Hum Decis Process. 1991;50(2):179–211.


Haubenstricker JE, Lee JW, Segovia-Siapco G, Medina E. The theory of planned behavior and dietary behaviors in competitive women bodybuilders. BMC Public Health. 2023;23(1):1716.

Lareyre O, Gourlan M, Stoebner-Delbarre A, Cousson-Gélie F. Characteristics and impact of theory of planned behavior interventions on smoking behavior: a systematic review of the literature. Prev Med. 2021;143:106327.

St-Pierre RA, Temcheff CE, Derevensky JL, Gupta R. Theory of planned behavior in school-based adolescent problem gambling prevention: a conceptual framework. J Prim Prev. 2015;36(6):361–85.

Tapera R, Mbongwe B, Mhaka-Mutepfa M, Lord A, Phaladze NA, Zetola NM. The theory of planned behavior as a behavior change model for tobacco control strategies among adolescents in Botswana. PLoS ONE. 2020;15(6):e0233462.


Guo Q, Johnson CA, Unger JB, Lee L, Xie B, Chou CP, et al. Utility of the theory of reasoned action and theory of planned behavior for predicting Chinese adolescent smoking. Addict Behav. 2007;32(5):1066–81.

Gupte HA, Chatterjee N, Mandal G. Using the theory of Planned Behavior to explain and predict Areca Nut Use among adolescents in India: an exploratory study. Subst Abuse Rehabil. 2022;13:47–55.

Lechner M. The estimation of causal effects by difference-in-difference methods. Foundations and Trends in Econometrics. 2011;4(3):165–224. https://www.nowpublishers.com/article/Details/ECO-014

Adhia A, Roy Paladhi U, Ellyson AM. State laws addressing teen dating violence in US high schools: a difference-in-differences study. Prev Med. 2024;182:107937.

DiMaggio C, Chen Q, Muennig PA, Li G. Timing and effect of a safe routes to school program on child pedestrian injury risk during school travel hours: bayesian changepoint and difference-in-differences analysis. Inj Epidemiol. 2014;1(1):17.

Jepson RG, Harris FM, Platt S, Tannahill C. The effectiveness of interventions to change six health behaviours: a review of reviews. BMC Public Health. 2010;10:538.

Michie S, West R, Sheals K, Godinho CA. Evaluating the effectiveness of behavior change techniques in health-related behavior: a scoping review of methods used. Transl Behav Med. 2018;8(2):212–24.

Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Process. 1991;50(2):179–211.

Finkelhor D, Shattuck A, Turner HA, Ormrod R, Hamby SL. Polyvictimization in developmental context. J Child Adol Trauma. 2011;4(4):291–300.

Adriani PA, Hino P, Taminato M, Okuno MFP, Santos OV, Fernandes H. Non-violent communication as a technology in interpersonal relationships in health work: a scoping review. BMC Health Serv Res. 2024;24(1):289.

Provide Psychosocial Skills Training and Cognitive Behavioral Interventions. CDC. 2024. https://www.cdc.gov/healthyyouth/mental-health-action-guide/provide-psychosocial-skills-training-and-cognitive-behavioral-interventions.html

Cust G. Why health education? Dist Nurs. 1966;9(7):162–4.


Xiao JH, Xu XF. Study on validity and reliability of coping style questionnaire. Chin Ment Health J. 1996;4:164–8.


Finkelhor D, Hamby SL, Ormrod R, Turner H. The Juvenile victimization questionnaire: reliability, validity, and national norms. Child Abuse Negl. 2005;29(4):383–412.

Chan KL, Fong DY, Yan E, Chow CB, Ip P. Validation of the Chinese juvenile victimisation questionnaire. Hong Kong J Paediatrics. 2011;16(1):17–24.

Finkelhor D, Ormrod RK, Turner HA. Poly-victimization: a neglected component in child victimization. Child Abuse Negl. 2007;31(1):7–26.

Finkelhor D, Ormrod RK, Turner HA. Re-victimization patterns in a national longitudinal sample of children and youth. Child Abuse Negl. 2007;31(5):479–502.

Austin PC. Using the standardized difference to compare the prevalence of a binary variable between two groups in observational research. Commun Stat-simul C. 2009;38(6):1228–34. https://doi.org/10.1080/03610910902859574 .

Butler N, Quigg Z, Wilson C, McCoy E, Bates R. The mentors in violence Prevention programme: impact on students’ knowledge and attitudes related to violence, prejudice, and abuse, and willingness to intervene as a bystander in secondary schools in England. BMC Public Health. 2024;24(1):729.

Compas BE, Connor-Smith JK, Saltzman H, Thomsen AH, Wadsworth ME. Coping with stress during childhood and adolescence: problems, progress, and potential in theory and research. Psychol Bull. 2001;127(1). https://pubmed.ncbi.nlm.nih.gov/11271757/

Grych JH, Fincham FD. Marital conflict and children’s adjustment: a cognitive-contextual framework. Psychol Bull. 1990;108(2):267–90.


Theodoratou M, Argyrides M. Neuropsychological insights into coping strategies: integrating theory and practice in clinical and therapeutic contexts. Psychiatry Int. 2024;5(1):53–73.

Clemmensen L, Jepsen JRM, van Os J, Blijd-Hoogewys EMA, Rimvall MK, Olsen EM, et al. Are theory of mind and bullying separately associated with later academic performance among preadolescents? Br J Educ Psychol. 2020;90(1):62–76.

van Dijk FA, Schirmbeck F, Boyette LL, de Haan L. For genetic risk and outcome of psychosis (GROUP) investigators. Coping styles mediate the association between negative life events and subjective well-being in patients with non-affective psychotic disorders and their siblings. Psychiatry Res. 2019;272:296–303.

Zheng Y, Fan F, Liu X, Mo L. Life events, coping, and posttraumatic stress symptoms among Chinese adolescents exposed to 2008 Wenchuan Earthquake, China. PLoS ONE. 2012;7(1):e29404.

Heffer T, Willoughby T. A count of coping strategies: a longitudinal study investigating an alternative method to understanding coping and adjustment. PLoS ONE. 2017;12(10):e0186057.

Ren Z, Zhang X, Shen Y, Li X, He M, Shi H, et al. Associations of negative life events and coping styles with sleep quality among Chinese adolescents: a cross-sectional study. Environ Health Prev. 2021;26(1):85.


This work was supported by Science Planning General Project of Guangdong Philosophy Association [grant numbers GD19CSH08].

Author information

Yandong Luo and Jiajun Zhou contributed equally to this work.

Authors and Affiliations

School of Public Health, Shantou University, Shantou, 515041, P.R. China

Yandong Luo, Jiajun Zhou, Pan Wen, Ping Chang, Zicheng Cao & Liping Li

Injury Prevention Research Center, Shantou University Medical College, Shantou, 515041, China

Yandong Luo, Jiajun Zhou, Pan Wen, Ping Chang & Liping Li


Contributions

L.L. and J.Z. designed the study. Y.L. and P.C. collected the data. J.Z., Z.C. and P.W. were involved in the analysis of the data. J.Z. drafted the paper, and J.Z., P.W. and L.L. iterated and commented on drafts. P.W. and L.L. reviewed the manuscript. All authors read and approved the submitted manuscript.

Corresponding author

Correspondence to Liping Li .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Luo, Y., Zhou, J., Wen, P. et al. An intervention study of poly-victimization among rural left-behind children based on the theoretical framework of planned behavior. Child Adolesc Psychiatry Ment Health 18 , 116 (2024). https://doi.org/10.1186/s13034-024-00812-1


Received : 05 June 2024

Accepted : 09 September 2024

Published : 12 September 2024

DOI : https://doi.org/10.1186/s13034-024-00812-1


Keywords

  • Left-behind children
  • Theory of planned behavior
  • Intervention research

Child and Adolescent Psychiatry and Mental Health

ISSN: 1753-2000


A critical look at online survey or questionnaire-based research studies during COVID-19

In view of the restrictions imposed to control the COVID-19 pandemic, there has been a surge in online survey-based studies because of their ability to collect data faster and more easily than traditional methods. However, there are important concerns about the validity and generalizability of findings obtained using online survey methodology, as well as data privacy concerns and ethical issues unique to these studies because of the electronic and online nature of the survey data. Here, we describe some of the important issues associated with the poor scientific quality of online survey findings and provide suggestions for addressing them in future studies.

1. Introduction

Online survey or questionnaire-based studies collect information from participants who respond to the study link using internet-based communication technology (e.g. e-mail, an online survey platform). There has been growing interest among researchers in internet-based data collection methods during the COVID-19 pandemic, reflected in the rising number of studies employing online surveys since the beginning of the pandemic ( Akintunde et al., 2021 ). This could be due to the relative ease of online data collection over traditional face-to-face interviews while following the travel restrictions and distancing guidelines for controlling the spread of COVID-19. Further, it offers a cheaper and faster way of collecting data (with no interviewer requirement and automatic data entry) than other means of remote data collection (e.g. telephonic interviews) ( Hlatshwako et al., 2021 ), both of which are important for getting rapid results to guide the development and implementation of public-health interventions for preventing and/or mitigating the harms related to the COVID-19 pandemic (e.g. mental health effects of COVID-19, misconceptions related to the spread of COVID-19, factors affecting vaccine hesitancy, etc.). However, several concerns have been raised about the validity and generalizability of findings obtained from online survey studies ( Andrade, 2020 ; Sagar et al., 2020 ). Here, we describe some of the important issues associated with the scientific quality of online survey findings and provide suggestions for addressing them in future studies. The data privacy concerns and ethical issues unique to these studies, arising from the electronic and online nature of the survey data, are also briefly discussed.

2. Limited generalizability of online survey sample to the target general population

The findings obtained from online surveys need to be generalizable to the target population in the real world. For this, the online survey population needs to be clearly defined and should be as representative of the target population as possible. This is possible when there is a reliable sampling frame for the online survey and participants can be selected using a randomized or probability sampling method. However, online surveys are often conducted via e-mail or an online survey platform, with the survey link shared on social media platforms, on websites, or through a directory of e-mail addresses accessed by the researchers. Participants might also be asked to share the survey link further with their eligible contacts. As a result, the population from which the study sample is selected is often not clearly defined, and information about response rates (i.e. out of the total number of people who viewed the survey link, how many actually responded) is seldom available to the researcher. This makes generalization of the study findings unreliable.

This problem may be addressed by sending the survey link individually to all the people comprising the study population via e-mail and/or telephone message (e.g. all the members of a professional society through its membership directory, people residing in a housing society through official records, etc.), with a request not to share the survey link with anyone else. Alternatively, the required number of people could be randomly selected from the entire list of potential subjects and approached telephonically for consent. Basic socio-demographic details could be obtained from those who refuse to participate, and the survey link shared with those who agree. However, if the response rates are low or the socio-demographic details of non-responders differ significantly from those of responders, the online survey sample is unlikely to be representative of the target study population. Further, this is a more resource-intensive strategy and might not always be feasible (as it requires a list of contact details for the entire study population before data collection begins). In situations where the area of research is relatively new and/or needs urgent exploration for hypothesis generation or to guide an immediate response, the online survey study should list all attempts made to achieve a representative sample and clearly acknowledge this as a limitation while discussing the study findings ( Zhou et al., 2021 ).
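
As a rough illustration of the alternative described above, the sketch below draws a simple random (probability) sample from a hypothetical contact list; the field names, contact details, and sample size are invented for illustration and are not taken from any specific study.

```python
import random

# Hypothetical sampling frame: contact details for the entire study population
# (e.g. a professional society's membership directory). Entries are illustrative.
sampling_frame = [
    {"id": 101, "phone": "+91-0000000001", "age_group": "25-34", "sex": "F"},
    {"id": 102, "phone": "+91-0000000002", "age_group": "35-44", "sex": "M"},
    {"id": 103, "phone": "+91-0000000003", "age_group": "45-54", "sex": "F"},
    {"id": 104, "phone": "+91-0000000004", "age_group": "25-34", "sex": "M"},
]

random.seed(1)        # fixed seed so the draw is reproducible
required_sample = 2   # required number of participants (toy value)

# Simple random sample drawn from the full frame; selected people are then
# approached telephonically for consent before the survey link is shared.
invited = random.sample(sampling_frame, k=required_sample)

# Socio-demographics of refusals can later be compared with those of responders
# to judge how representative the achieved sample is.
refusals = []  # to be filled in during the consent calls
print([person["id"] for person in invited])
```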

A more recent, innovative solution to this problem involves a partnership between academic institutions (the University of Maryland and Carnegie Mellon University) and Facebook for conducting online COVID-19 related research ( Barkay et al., 2020 ). The COVID-19 Symptom Survey (CSS) conducted using this approach (in more than 200 countries since April 2020) involves an exchange of information between the researchers and Facebook without compromising the privacy of the data collected from survey participants. The survey link is shared on Facebook, and users voluntarily choose to participate in the study. Facebook's active user base is leveraged to provide a reliable sampling frame for the CSS. The researchers select random ID numbers for the users who completed the survey and calculate survey weights for each of them on a given day. The survey weights adjust for both non-response errors (making the sample more representative of Facebook users) and coverage-related errors (allowing findings obtained from the Facebook active user base to be generalized to the general population) ( Barkay et al., 2020 ). A respondent belonging to a demographic group with a high likelihood of responding to the survey might get a weight of 10, whereas a respondent belonging to a demographic group with a low likelihood of responding might get a weight of 50. The weights also account for the proportion or density of Facebook or internet users in a given geographical area. Thus, findings obtained using this approach can be used to draw inferences about the target general population. The survey weights needed for weighted analysis of the global CSS findings for different geographical regions are available to researchers upon request from either of the two above-mentioned academic institutions. For example, a group of Indian researchers used this approach to estimate spatio-temporal trends in COVID-19 vaccine hesitancy across different states of India ( Chowdhury et al., 2021 ).
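
The sketch below shows, with invented numbers, how survey weights of this kind enter a simple weighted (design-based) estimate; it is only a minimal illustration of the idea, not the CSS weighting procedure itself.

```python
# Minimal sketch of a weighted estimate, assuming each respondent carries a
# survey weight of the kind described above. All values are invented.
respondents = [
    {"vaccine_hesitant": 1, "weight": 10},  # high-response demographic group
    {"vaccine_hesitant": 0, "weight": 50},  # low-response demographic group
    {"vaccine_hesitant": 1, "weight": 10},
    {"vaccine_hesitant": 0, "weight": 50},
]

# Weighted prevalence = sum(w_i * y_i) / sum(w_i)
total_weight = sum(r["weight"] for r in respondents)
weighted_prevalence = (
    sum(r["weight"] * r["vaccine_hesitant"] for r in respondents) / total_weight
)

# Unweighted (naive) proportion, for comparison with the weighted estimate
unweighted_prevalence = sum(r["vaccine_hesitant"] for r in respondents) / len(respondents)

print(round(weighted_prevalence, 3))    # 0.167 - the over-represented group is down-weighted
print(round(unweighted_prevalence, 3))  # 0.5
```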

3. Survey fraud and participant disinterest

Survey fraud occurs when a person takes the online survey more than once, with or without malicious intent (e.g. for monetary compensation, or to help the researchers collect the requisite number of responses). A related problem arises when a participant responds to some or all of the survey questions in a casual manner without actually attempting to read and/or understand them, for reasons such as disinterest or survey fatigue. This affects the representativeness and validity of online survey findings and is increasingly being recognized as an important challenge for researchers ( Chandler et al., 2020 ). While providing monetary incentives improves low response rates, it also increases the risk of survey fraud. Similarly, a shorter survey with a few simple questions decreases the chances of survey fatigue, but limits the ability of researchers to obtain meaningful information about relatively complex issues. A researcher can take different approaches to address these concerns. Relatively simple ones include requesting people not to participate more than once, providing different kinds of incentives (e.g. a donation to a charity instead of payment to the participant), or manually checking survey responses for inconsistent responses (e.g. age and date of birth that do not match) or implausible response patterns (e.g. average daily smartphone use of greater than 24 h, an "all or none" response pattern). More complex approaches involve using computer software or online survey platform features to block multiple entries by the same person through IP address and/or internet cookie checks, and analysing response time, latency, or the total time taken to complete the survey to detect fraudulent responses. Several ways of detecting fraudulent or inattentive survey responses have been described in the literature, along with the merits and demerits of each ( Teitcher et al., 2015 ). However, no single method is completely foolproof, and it is recommended to use a combination of different methods to ensure adequate data quality in online surveys.
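
As a purely illustrative example of combining several of these checks, the sketch below flags responses with duplicate IP addresses, implausibly fast completion times, impossible values, and internal inconsistencies; the field names and thresholds are assumptions, not features of any particular survey platform.

```python
from collections import Counter

# Illustrative screening of raw responses for possible fraud or inattention.
responses = [
    {"id": 1, "ip": "203.0.113.5",  "completion_seconds": 412, "daily_smartphone_hours": 5,  "age": 34, "birth_year": 1990},
    {"id": 2, "ip": "203.0.113.5",  "completion_seconds": 38,  "daily_smartphone_hours": 6,  "age": 29, "birth_year": 1995},
    {"id": 3, "ip": "198.51.100.7", "completion_seconds": 365, "daily_smartphone_hours": 26, "age": 41, "birth_year": 1983},
]

SURVEY_YEAR = 2024
MIN_PLAUSIBLE_SECONDS = 120  # responses faster than this are flagged as inattentive (assumed threshold)

ip_counts = Counter(r["ip"] for r in responses)

flagged = []
for r in responses:
    reasons = []
    if ip_counts[r["ip"]] > 1:
        reasons.append("duplicate IP address")                    # possible multiple entries
    if r["completion_seconds"] < MIN_PLAUSIBLE_SECONDS:
        reasons.append("implausibly fast completion")             # response-time check
    if r["daily_smartphone_hours"] > 24:
        reasons.append("impossible smartphone use (> 24 h/day)")  # implausible value
    if abs(SURVEY_YEAR - r["birth_year"] - r["age"]) > 1:
        reasons.append("age inconsistent with birth year")        # internal consistency
    if reasons:
        flagged.append((r["id"], reasons))

print(flagged)
```

In practice these automated flags would be reviewed manually before any response is excluded, since legitimate respondents can share an IP address (e.g. members of the same household).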

4. Possible bias introduced in results by the online survey administration mode

One of the contributory reasons for the surge in online survey studies assessing mental health during the COVID-19 pandemic stems from the general perception that psychiatric research can easily be accomplished through scales or questionnaires administered online, especially since the reliance on physical examination and other investigation findings is much lower or non-existent. However, the reliability and validity of the scales or instruments used in online surveys have traditionally been established in studies administering them in face-to-face settings (often in pen/pencil-and-paper format) rather than in online mode. Different survey administration modes can introduce variation in the results, which is often described as the measurement effect ( Jäckle et al., 2010 ). This could be due to differences in participants' level of engagement, understanding of the questions, or social desirability bias across the administration methods. A few studies using the same study sample or the same sampling frame have compared the results obtained with different survey administration modes (i.e. traditional face-to-face [paper format] vs. online survey), with mixed findings ranging from large significant differences to small or no significant differences ( Determann et al., 2017 , Norman et al., 2010 , Saloniki et al., 2019 ). This suggests the need for further studies before arriving at a final conclusion. Hence, we need to be careful while interpreting the results of online survey studies. Ideally, online survey findings should be compared with those obtained using traditional administration modes, and validation studies should be conducted to establish the psychometric properties of these scales for the online survey mode.
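
Where data from both administration modes are available for the same instrument, one simple (and here purely illustrative) way to quantify a possible mode effect is a standardized mean difference between the two sets of scores, as sketched below with invented values.

```python
from statistics import mean, stdev

# Invented scale scores for the same instrument administered in two modes.
online       = [14, 16, 15, 18, 13, 17, 15, 16]
face_to_face = [12, 15, 13, 14, 12, 16, 13, 14]

n1, n2 = len(online), len(face_to_face)
m1, m2 = mean(online), mean(face_to_face)
s1, s2 = stdev(online), stdev(face_to_face)

# Cohen's d with a pooled standard deviation; values near 0 suggest little
# difference between modes, larger values suggest a measurement (mode) effect.
pooled_sd = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
cohens_d = (m1 - m2) / pooled_sd

print(round(cohens_d, 2))
```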

5. Inadequately described online survey methodology

A recent systematic review assessing the quality of 80 published online survey-based studies on the mental health impact of the COVID-19 pandemic reported that a large majority of them did not adhere to the CHERRIES (Checklist for Reporting Results of Internet E-Surveys) guideline aimed at improving the quality of online surveys ( Eysenbach, 2004 , Sharma et al., 2021 ). Information on parameters such as the view rate (ratio of unique survey visitors to unique site visitors), participation rate (ratio of unique visitors who agreed to participate to unique first-survey-page visitors), and completion rate (ratio of users who finished the survey to users who agreed to participate), which indicate the representativeness of the online study sample as described previously, was not reported in about two-thirds of the studies. Similarly, information about steps taken to prevent multiple entries by the same participant, or about analysis of atypical timestamps to check for fraudulent and inattentive responses, was provided by fewer than 5% of the studies. It is therefore imperative to popularize and emphasize the use of these reporting guidelines for online survey studies, to improve the scientific value of findings obtained from internet-based research.
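
The three CHERRIES rates mentioned above are straightforward to compute and report; the sketch below uses invented counts of the kind that a hosting site's and survey platform's logs would typically provide.

```python
# Invented counts; in practice these come from the access logs of the site
# hosting the survey link and of the survey platform itself.
unique_site_visitors   = 5000  # unique visitors to the site hosting the survey link
unique_survey_visitors = 1800  # unique visitors who opened the first survey page
agreed_to_participate  = 950   # unique visitors who consented to participate
finished_survey        = 760   # participants who completed the survey

view_rate          = unique_survey_visitors / unique_site_visitors
participation_rate = agreed_to_participate / unique_survey_visitors
completion_rate    = finished_survey / agreed_to_participate

print(f"View rate: {view_rate:.1%}")                      # 36.0%
print(f"Participation rate: {participation_rate:.1%}")    # 52.8%
print(f"Completion rate: {completion_rate:.1%}")          # 80.0%
```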

6. Data privacy and ethics of online survey studies

Lastly, most online survey studies either did not mention at all, or mentioned only in passing, how the anonymity and confidentiality of the information obtained from the online survey were maintained. Details about the various steps or precautions taken by the researchers to ensure data safety and privacy were seldom provided (e.g. de-identification of data, encryption or password-protected data storage, use of a HIPAA-compliant online survey form/platform, etc.). The details and limitations of the safety measures taken, and the possibility of a data leak, should be clearly communicated to participants at the time of taking informed consent (rather than simply stating that the anonymity and confidentiality of the information obtained will be ensured, as is done in offline studies). Moreover, obtaining ethical approval prior to conducting an online survey study is a must. The various ethical concerns unique to online survey methodology (e.g. issues with data protection, the informed consent process, survey fraud, online survey administration, etc.) should be adequately described in the protocol and deliberated upon by the review boards ( Buchanan and Hvizdak, 2009 , Gupta, 2017 ).

In conclusion, there is an urgent need to consider the issues described above while planning and conducting an online survey, and also while reviewing the findings obtained from such studies, in order to improve the overall quality and utility of internet-based research during the COVID-19 and post-COVID era.

Financial disclosure

The authors did not receive any funding for this work.

Acknowledgments

Conflict of interest

The authors have no conflict of interest to declare.

References

  • Akintunde T.Y., Musa T.H., Musa H.H., Musa I.H., Chen S., Ibrahim E., Tassang A.E., Helmy M. Bibliometric analysis of global scientific literature on effects of COVID-19 pandemic on mental health. Asian J. Psychiatry. 2021;63. doi: 10.1016/j.ajp.2021.102753.
  • Andrade C. The limitations of online surveys. Indian J. Psychol. Med. 2020;42(6):575–576. doi: 10.1177/0253717620957496.
  • Barkay N., Cobb C., Eilat R., Galili T., Haimovich D., LaRocca S., …, Sarig T. Weights and methodology brief for the COVID-19 symptom survey by University of Maryland and Carnegie Mellon University, in partnership with Facebook. 2020. arXiv preprint arXiv:2009.14675.
  • Buchanan E.A., Hvizdak E.E. Online survey tools: ethical and methodological concerns of human research ethics committees. J. Empir. Res. Hum. Res. Ethics. 2009;4(2):37–48. doi: 10.1525/jer.2009.4.2.37.
  • Chandler J., Sisso I., Shapiro D. Participant carelessness and fraud: consequences for clinical research and potential solutions. J. Abnorm. Psychol. 2020;129(1):49–55. doi: 10.1037/abn0000479.
  • Chowdhury S.R., Motheram A., Pramanik S. Covid-19 vaccine hesitancy: trends across states, over time. Ideas For India, 14 April 2021. Available from: https://www.ideasforindia.in/topics/governance/covid-19-vaccine-hesitancy-trends-across-states-over-time.html (Accessed 4 August 2021).
  • Determann D., Lambooij M.S., Steyerberg E.W., de Bekker-Grob E.W., de Wit G.A. Impact of survey administration mode on the results of a health-related discrete choice experiment: online and paper comparison. Value Health. 2017;20(7):953–960. doi: 10.1016/j.jval.2017.02.007.
  • Eysenbach G. Improving the quality of web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES). J. Med. Internet Res. 2004;6(3):e34. doi: 10.2196/jmir.6.3.e34.
  • Gupta S. Ethical issues in designing internet-based research: recommendations for good practice. J. Res. Pract. 2017;13(2): Article D1.
  • Hlatshwako T.G., Shah S.J., Kosana P., Adebayo E., Hendriks J., Larsson E.C., Hensel D.J., Erausquin J.T., Marks M., Michielsen K., Saltis H., Francis J.M., Wouters E., Tucker J.D. Online health survey research during COVID-19. Lancet Digit. Health. 2021;3(2):e76–e77. doi: 10.1016/S2589-7500(21)00002-9.
  • Jäckle A., Roberts C., Lynn P. Assessing the effect of data collection mode on measurement. Int. Stat. Rev. 2010;78(1):3–20. doi: 10.1111/j.1751-5823.2010.00102.x.
  • Norman R., King M.T., Clarke D., Viney R., Cronin P., Street D. Does mode of administration matter? Comparison of online and face-to-face administration of a time trade-off task. Qual. Life Res. 2010;19(4):499–508. doi: 10.1007/s11136-010-9609-5.
  • Sagar R., Chawla N., Sen M.S. Is it correct to estimate mental disorder through online surveys during COVID-19 pandemic? Psychiatry Res. 2020;291. doi: 10.1016/j.psychres.2020.113251.
  • Saloniki E.C., Malley J., Burge P., Lu H., Batchelder L., Linnosmaa I., Trukeschitz B., Forder J. Comparing internet and face-to-face surveys as methods for eliciting preferences for social care-related quality of life: evidence from England using the ASCOT service user measure. Qual. Life Res. 2019;28(8):2207–2220. doi: 10.1007/s11136-019-02172-2.
  • Sharma R., Tikka S.K., Bhute A.R., Bastia B.K. Adherence of online surveys on mental health during the early part of the COVID-19 outbreak to standard reporting guidelines: a systematic review. Asian J. Psychiatry. 2021;65. doi: 10.1016/j.ajp.2021.102799.
  • Teitcher J.E., Bockting W.O., Bauermeister J.A., Hoefer C.J., Miner M.H., Klitzman R.L. Detecting, preventing, and responding to "fraudsters" in internet research: ethics and tradeoffs. J. Law Med. Ethics. 2015;43(1):116–133. doi: 10.1111/jlme.12200.
  • Zhou T., Chen W., Liu X., Wu T., Wen L., Yang X., Hou Z., Chen B., Zhang T., Zhang C., Xie C., Zhou X., Wang L., Hua J., Tang Q., Zhao M., Hong X., Liu W., Du C., Li Y., Yu X. Children of parents with mental illness in the COVID-19 pandemic: a cross-sectional survey in China. Asian J. Psychiatry. 2021;64. doi: 10.1016/j.ajp.2021.102801.


COMMENTS

  1. Understanding and Evaluating Survey Research

    Survey research is defined as "the collection of information from a sample of individuals through their responses to questions" (Check & Schutt, 2012, p. 160). This type of research allows for a variety of methods to recruit participants, collect data, and utilize various methods of instrumentation. Survey research can use quantitative research ...

  2. Survey response rates: Trends and a validity assessment framework

    Survey methodology has been and continues to be a pervasively used data-collection method in social science research. To better understand the state of the science, we first analyze response-rate information reported in 1014 surveys described in 703 articles from 17 journals from 2010 to 2020. Results showed a steady increase in average ...

  3. The State of Survey Methodology: Challenges, Dilemmas, and New

    In this overview, we discuss the current state of survey methodology in a form that is useful and informative to a general social science audience. The article covers existing challenges, dilemmas, and opportunities for survey researchers and social scientists. We draw on the most current research to articulate our points; however, we also ...

  4. PDF Fundamentals of Survey Research Methodology

    Surveys can also be used to assess needs, evaluate demand, and examine impact (Salant & Dillman, 1994, p. 2). The term survey instrument is often used to distinguish the survey tool from the survey research that it is designed to support. 1.1 Survey Strengths. Surveys are capable of obtaining information from large samples of the population ...

  5. Journal of Survey Statistics and Methodology

    The Journal of Survey Statistics and Methodology invites submissions for a future special issue on Survey Research from Asia-Pacific, Africa, the Middle East, Latin America, and the Caribbean. Learn more about the topic and submit your paper through September 30, 2024. Find out more.

  6. (PDF) Understanding and Evaluating Survey Research


  7. Journal of Survey Statistics and Methodology

    An official journal of the American Association for Public Opinion Research. Publishes cutting edge scholarly articles on statistical and methodological issues for sample surveys, censuses, administrative record systems, and other related data.

  8. Survey Research

    Survey research means collecting information about a group of people by asking them questions and analyzing the results. To conduct an effective survey, follow these six steps: Determine who will participate in the survey. Decide the type of survey (mail, online, or in-person) Design the survey questions and layout.

  9. Survey Research Methods

    Survey Research Methods is the official peer-reviewed journal of the European Survey Research Association (ESRA). The journal publishes articles in English, which discuss methodological issues related to survey research. Three types of papers are in-scope: Topics of particular interest include survey design, sample design, question and ...

  10. A tutorial on methodological studies: the what, when, how and why

    "Systematic survey" may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using ... Investigators must ensure that the methods used to select articles does not make them differ systematically from the set of articles to which they would like to make inferences. For example ...

  11. A quick guide to survey research

    Despite a widespread perception that surveys are easy to conduct, in order to yield meaningful results, a survey needs extensive planning, time and effort. In this article, we aim to cover the main aspects of designing, implementing and analysing a survey as well as focusing on techniques that would improve response rates.

  12. High-Impact Articles

    High-Impact Articles. Journal of Survey Statistics and Methodology, sponsored by the American Association for Public Opinion Research and the American Statistical Association, began publishing in 2013.Its objective is to publish cutting edge scholarly articles on statistical and methodological issues for sample surveys, censuses, administrative record systems, and other related data.

  13. A Comprehensive Guide to Survey Research Methodologies

    In this article, we will discuss what survey research is, its brief history, types, common uses, benefits, and the step-by-step process of designing a survey. ‍ What is Survey Research ‍ A survey is a research method that is used to collect data from a group of respondents in order to gain insights and information regarding a particular ...

  14. Why Survey Research and Survey Methodology Matter

    People also conduct surveys out of a desire for "social comparison," which drives us to learn about others, and surveys are the best way to get this information. After all, context is king. There are at least four main reasons that people conduct surveys. We say "at least," because with more than 17 million customers, the SurveyMonkey ...

  15. Doing Survey Research

    Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps: Determine who will participate in the survey. Decide the type of survey (mail, online, or in-person) Design the survey questions and layout. Distribute the survey.

  16. Methodological Considerations for Survey-Based Research During

    Alongside other methods—including observational, ethnographic, and interview-based work, depending on the specific research questions formulated—surveys can help to gather reliable data on: • Knowledge: What people currently believe to be true about the disease (e.g., origin of the coronavirus, how could they catch it, or how they could ...

  17. Survey Research Methodology

    Survey Research Methodology. Surveys—from in-person to web-based, and discrete choice to stated preference—are an important means of collecting sociological, statistical, and demographic data. RAND has pioneered the use of surveys in several fields, including the development of the Delphi method of opinion gathering, and examined the ...

  18. Designing, Conducting, and Reporting Survey Studies: A Primer for

    A guide for the design and conduct of self-administered surveys of clinicians. This guide includes statements on designing, conducting, and reporting web- and non-web-based surveys of clinicians' knowledge, attitude, and practice. The statements are based on a literature review, but not the Delphi method.

  19. Survey Research: Definition, Examples and Methods

    Survey Research Definition. Survey Research is defined as the process of conducting research using surveys that researchers send to survey respondents. The data collected from surveys is then statistically analyzed to draw meaningful research conclusions. In the 21st century, every organization's eager to understand what their customers think ...

  20. Research Methods

    You can also take a mixed methods approach, where you use both qualitative and quantitative research methods. Primary vs. secondary research. Primary research is any original data that you collect yourself for the purposes of answering your research question (e.g. through surveys, observations and experiments). Secondary research is data that has already been collected by other researchers (e ...

  21. Assessing the fragility index of randomized controlled trials

    Methods and analysis A methodological survey will be conducted using the targeted population of RCT referenced in the recommendations of the CPG of the North American and European societies from 2012 to 2022. FI will be assessed for statistically significant and non-significant trial results. A Poisson regression analysis will be used to ...

  22. Prevalence, predictors and outcomes of self-reported feedback for EMS

    Study design. This observational mixed-methods study consisted of a baseline survey followed by diary entries. Collecting diary entries in real time is known to reduce recall bias by collecting data at the level of feedback events and therefore not relying on generalised reflections of feedback provision over a period of time, whilst enabling analysis of within- and between-person variability [].

  23. Advance articles

    Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide

  24. Reporting Survey Based Studies

    INTRODUCTION. Surveys are the principal method used to address topics that require individual self-report about beliefs, knowledge, attitudes, opinions or satisfaction, which cannot be assessed using other approaches.1 This research method allows information to be collected by asking a set of questions on a specific topic to a subset of people and generalizing the results to a larger population.

  25. Can social media encourage diabetes self-screenings? A ...

    A follow-up survey shows that many high-risk respondents have scheduled a professional screening. A cost-effectiveness analysis suggests that our campaign can diagnose an additional person with ...

  26. An intervention study of poly-victimization among rural left-behind

    The difference-in-differences method was employed to analyze the impact of intervention measures, based on the theory of planned behavior, on PV among left-behind children in rural areas. Methods. The study subjects were left-behind children from six middle schools in two cities in southern China, who completed the baseline survey from 2020 to ...

  27. A critical look at online survey or questionnaire-based research

    Online survey or questionnaire-based studies collect information from participants responding to the study link using internet-based communication technology (e.g. E-mail, online survey platform). There has been a growing interest among researchers for using internet-based data collection methods during the COVID-19 pandemic, also reflected in ...

  28. Understanding the needs for support and coping strategies in grief

    Follow-up meetings have previously been suggested as an effective method. 44,45. In this study, most of the participants stated that they felt the support they did receive was helpful. ... Harrop E, Goss S, Farnell D, et al. Support needs and barriers to accessing support: baseline results of a mixed-methods national survey of people bereaved ...