Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, generate accurate citations for free.

  • Knowledge Base

Methodology

  • Survey Research | Definition, Examples & Methods

Survey Research | Definition, Examples & Methods

Published on August 20, 2019 by Shona McCombes . Revised on June 22, 2023.

Survey research means collecting information about a group of people by asking them questions and analyzing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyze the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research .

Table of contents

What are surveys used for, step 1: define the population and sample, step 2: decide on the type of survey, step 3: design the survey questions, step 4: distribute the survey and collect responses, step 5: analyze the survey results, step 6: write up the survey results, other interesting articles, frequently asked questions about surveys.

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research : investigating the experiences and characteristics of different social groups
  • Market research : finding out what customers think about products, services, and companies
  • Health research : collecting data from patients about symptoms and treatments
  • Politics : measuring public opinion about parties and policies
  • Psychology : researching personality traits, preferences and behaviours

Surveys can be used in both cross-sectional studies , where you collect data just once, and in longitudinal studies , where you survey the same sample several times over an extended period.

Receive feedback on language, structure, and formatting

Professional editors proofread and edit your paper by focusing on:

  • Academic style
  • Vague sentences
  • Style consistency

See an example

research and survey article

Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • US college students
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18-24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalized to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

Several common research biases can arise if your survey is not generalizable, particularly sampling bias and selection bias . The presence of these biases have serious repercussions for the validity of your results.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every college student in the US. Instead, you will usually survey a sample from the population.

The sample size depends on how big the population is. You can use an online sample calculator to work out how many responses you need.

There are many sampling methods that allow you to generalize to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions. Again, beware of various types of sampling bias as you design your sample, particularly self-selection bias , nonresponse bias , undercoverage bias , and survivorship bias .

There are two main types of survey:

  • A questionnaire , where a list of questions is distributed by mail, online or in person, and respondents fill it out themselves.
  • An interview , where the researcher asks a set of questions by phone or in person and records the responses.

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by mail is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g. residents of a specific region).
  • The response rate is often low, and at risk for biases like self-selection bias .

Online surveys are a popular choice for students doing dissertation research , due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms .

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyze.
  • The anonymity and accessibility of online surveys mean you have less control over who responds, which can lead to biases like self-selection bias .

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping mall or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g. the opinions of a store’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations and is at risk for sampling bias .

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data: the researcher records each response as a category or rating and statistically analyzes the results. But they are more commonly used to collect qualitative data : the interviewees’ full responses are transcribed and analyzed individually to gain a richer understanding of their opinions and feelings.

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g. yes/no or agree/disagree )
  • A scale (e.g. a Likert scale with five points ranging from strongly agree to strongly disagree )
  • A list of options with a single answer possible (e.g. age categories)
  • A list of options with multiple answers possible (e.g. leisure interests)

Closed-ended questions are best for quantitative research . They provide you with numerical data that can be statistically analyzed to find patterns, trends, and correlations .

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an “other” field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic. Avoid jargon or industry-specific terminology.

Survey questions are at risk for biases like social desirability bias , the Hawthorne effect , or demand characteristics . It’s critical to use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no indication that you’d prefer a particular answer or emotion.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.

Prevent plagiarism. Run a free check.

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by mail, online, or in person.

There are many methods of analyzing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also clean the data by removing incomplete or incorrectly completed responses.

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organizing them into categories or themes. You can also use more qualitative methods, such as thematic analysis , which is especially suitable for analyzing interviews.

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.

Finally, when you have collected and analyzed all the necessary data, you will write it up as part of your thesis, dissertation , or research paper .

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyze it. In the results section, you summarize the key results from your analysis.

In the discussion and conclusion , you give your explanations and interpretations of these results, answer your research question, and reflect on the implications and limitations of the research.

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Student’s  t -distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with 5 or 7 possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data , because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyze your data.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative )
  • The type of design you’re using (e.g., a survey , experiment , or case study )
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods (e.g., questionnaires , observations)
  • Your data collection procedures (e.g., operationalization , timing and data management)
  • Your data analysis methods (e.g., statistical tests  or thematic analysis )

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

McCombes, S. (2023, June 22). Survey Research | Definition, Examples & Methods. Scribbr. Retrieved July 9, 2024, from https://www.scribbr.com/methodology/survey-research/

Is this article helpful?

Shona McCombes

Shona McCombes

Other students also liked, qualitative vs. quantitative research | differences, examples & methods, questionnaire design | methods, question types & examples, what is a likert scale | guide & examples, what is your plagiarism score.

Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, automatically generate references for free.

  • Knowledge Base
  • Methodology
  • Doing Survey Research | A Step-by-Step Guide & Examples

Doing Survey Research | A Step-by-Step Guide & Examples

Published on 6 May 2022 by Shona McCombes . Revised on 10 October 2022.

Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyse the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research .

Table of contents

What are surveys used for, step 1: define the population and sample, step 2: decide on the type of survey, step 3: design the survey questions, step 4: distribute the survey and collect responses, step 5: analyse the survey results, step 6: write up the survey results, frequently asked questions about surveys.

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research: Investigating the experiences and characteristics of different social groups
  • Market research: Finding out what customers think about products, services, and companies
  • Health research: Collecting data from patients about symptoms and treatments
  • Politics: Measuring public opinion about parties and policies
  • Psychology: Researching personality traits, preferences, and behaviours

Surveys can be used in both cross-sectional studies , where you collect data just once, and longitudinal studies , where you survey the same sample several times over an extended period.

Prevent plagiarism, run a free check.

Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • University students in the UK
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18 to 24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalised to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every university student in the UK. Instead, you will usually survey a sample from the population.

The sample size depends on how big the population is. You can use an online sample calculator to work out how many responses you need.

There are many sampling methods that allow you to generalise to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions.

There are two main types of survey:

  • A questionnaire , where a list of questions is distributed by post, online, or in person, and respondents fill it out themselves
  • An interview , where the researcher asks a set of questions by phone or in person and records the responses

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by post is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g., residents of a specific region).
  • The response rate is often low.

Online surveys are a popular choice for students doing dissertation research , due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms .

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyse.
  • The anonymity and accessibility of online surveys mean you have less control over who responds.

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping centre or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g., the opinions of a shop’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations.

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data : the researcher records each response as a category or rating and statistically analyses the results. But they are more commonly used to collect qualitative data : the interviewees’ full responses are transcribed and analysed individually to gain a richer understanding of their opinions and feelings.

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g., yes/no or agree/disagree )
  • A scale (e.g., a Likert scale with five points ranging from strongly agree to strongly disagree )
  • A list of options with a single answer possible (e.g., age categories)
  • A list of options with multiple answers possible (e.g., leisure interests)

Closed-ended questions are best for quantitative research . They provide you with numerical data that can be statistically analysed to find patterns, trends, and correlations .

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an ‘other’ field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic.

Use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no bias towards one answer or another.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by post, online, or in person.

There are many methods of analysing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also cleanse the data by removing incomplete or incorrectly completed responses.

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organising them into categories or themes. You can also use more qualitative methods, such as thematic analysis , which is especially suitable for analysing interviews.

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.

Finally, when you have collected and analysed all the necessary data, you will write it up as part of your thesis, dissertation , or research paper .

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyse it. In the results section, you summarise the key results from your analysis.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data , because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyse your data.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.

McCombes, S. (2022, October 10). Doing Survey Research | A Step-by-Step Guide & Examples. Scribbr. Retrieved 8 July 2024, from https://www.scribbr.co.uk/research-methods/surveys/

Is this article helpful?

Shona McCombes

Shona McCombes

Other students also liked, qualitative vs quantitative research | examples & methods, construct validity | definition, types, & examples, what is a likert scale | guide & examples.

  • Skip to main content
  • Skip to primary sidebar
  • Skip to footer
  • QuestionPro

survey software icon

  • Solutions Industries Gaming Automotive Sports and events Education Government Travel & Hospitality Financial Services Healthcare Cannabis Technology Use Case NPS+ Communities Audience Contactless surveys Mobile LivePolls Member Experience GDPR Positive People Science 360 Feedback Surveys
  • Resources Blog eBooks Survey Templates Case Studies Training Help center

research and survey article

Home Market Research

Survey Research: Definition, Examples and Methods

Survey Research

Survey Research is a quantitative research method used for collecting data from a set of respondents. It has been perhaps one of the most used methodologies in the industry for several years due to the multiple benefits and advantages that it has when collecting and analyzing data.

LEARN ABOUT: Behavioral Research

In this article, you will learn everything about survey research, such as types, methods, and examples.

Survey Research Definition

Survey Research is defined as the process of conducting research using surveys that researchers send to survey respondents. The data collected from surveys is then statistically analyzed to draw meaningful research conclusions. In the 21st century, every organization’s eager to understand what their customers think about their products or services and make better business decisions. Researchers can conduct research in multiple ways, but surveys are proven to be one of the most effective and trustworthy research methods. An online survey is a method for extracting information about a significant business matter from an individual or a group of individuals. It consists of structured survey questions that motivate the participants to respond. Creditable survey research can give these businesses access to a vast information bank. Organizations in media, other companies, and even governments rely on survey research to obtain accurate data.

The traditional definition of survey research is a quantitative method for collecting information from a pool of respondents by asking multiple survey questions. This research type includes the recruitment of individuals collection, and analysis of data. It’s useful for researchers who aim to communicate new features or trends to their respondents.

LEARN ABOUT: Level of Analysis Generally, it’s the primary step towards obtaining quick information about mainstream topics and conducting more rigorous and detailed quantitative research methods like surveys/polls or qualitative research methods like focus groups/on-call interviews can follow. There are many situations where researchers can conduct research using a blend of both qualitative and quantitative strategies.

LEARN ABOUT: Survey Sampling

Survey Research Methods

Survey research methods can be derived based on two critical factors: Survey research tool and time involved in conducting research. There are three main survey research methods, divided based on the medium of conducting survey research:

  • Online/ Email:   Online survey research is one of the most popular survey research methods today. The survey cost involved in online survey research is extremely minimal, and the responses gathered are highly accurate.
  • Phone:  Survey research conducted over the telephone ( CATI survey ) can be useful in collecting data from a more extensive section of the target population. There are chances that the money invested in phone surveys will be higher than other mediums, and the time required will be higher.
  • Face-to-face:  Researchers conduct face-to-face in-depth interviews in situations where there is a complicated problem to solve. The response rate for this method is the highest, but it can be costly.

Further, based on the time taken, survey research can be classified into two methods:

  • Longitudinal survey research:  Longitudinal survey research involves conducting survey research over a continuum of time and spread across years and decades. The data collected using this survey research method from one time period to another is qualitative or quantitative. Respondent behavior, preferences, and attitudes are continuously observed over time to analyze reasons for a change in behavior or preferences. For example, suppose a researcher intends to learn about the eating habits of teenagers. In that case, he/she will follow a sample of teenagers over a considerable period to ensure that the collected information is reliable. Often, cross-sectional survey research follows a longitudinal study .
  • Cross-sectional survey research:  Researchers conduct a cross-sectional survey to collect insights from a target audience at a particular time interval. This survey research method is implemented in various sectors such as retail, education, healthcare, SME businesses, etc. Cross-sectional studies can either be descriptive or analytical. It is quick and helps researchers collect information in a brief period. Researchers rely on the cross-sectional survey research method in situations where descriptive analysis of a subject is required.

Survey research also is bifurcated according to the sampling methods used to form samples for research: Probability and Non-probability sampling. Every individual in a population should be considered equally to be a part of the survey research sample. Probability sampling is a sampling method in which the researcher chooses the elements based on probability theory. The are various probability research methods, such as simple random sampling , systematic sampling, cluster sampling, stratified random sampling, etc. Non-probability sampling is a sampling method where the researcher uses his/her knowledge and experience to form samples.

LEARN ABOUT: Survey Sample Sizes

The various non-probability sampling techniques are :

  • Convenience sampling
  • Snowball sampling
  • Consecutive sampling
  • Judgemental sampling
  • Quota sampling

Process of implementing survey research methods:

  • Decide survey questions:  Brainstorm and put together valid survey questions that are grammatically and logically appropriate. Understanding the objective and expected outcomes of the survey helps a lot. There are many surveys where details of responses are not as important as gaining insights about what customers prefer from the provided options. In such situations, a researcher can include multiple-choice questions or closed-ended questions . Whereas, if researchers need to obtain details about specific issues, they can consist of open-ended questions in the questionnaire. Ideally, the surveys should include a smart balance of open-ended and closed-ended questions. Use survey questions like Likert Scale , Semantic Scale, Net Promoter Score question, etc., to avoid fence-sitting.

LEARN ABOUT: System Usability Scale

  • Finalize a target audience:  Send out relevant surveys as per the target audience and filter out irrelevant questions as per the requirement. The survey research will be instrumental in case the target population decides on a sample. This way, results can be according to the desired market and be generalized to the entire population.

LEARN ABOUT:  Testimonial Questions

  • Send out surveys via decided mediums:  Distribute the surveys to the target audience and patiently wait for the feedback and comments- this is the most crucial step of the survey research. The survey needs to be scheduled, keeping in mind the nature of the target audience and its regions. Surveys can be conducted via email, embedded in a website, shared via social media, etc., to gain maximum responses.
  • Analyze survey results:  Analyze the feedback in real-time and identify patterns in the responses which might lead to a much-needed breakthrough for your organization. GAP, TURF Analysis , Conjoint analysis, Cross tabulation, and many such survey feedback analysis methods can be used to spot and shed light on respondent behavior. Researchers can use the results to implement corrective measures to improve customer/employee satisfaction.

Reasons to conduct survey research

The most crucial and integral reason for conducting market research using surveys is that you can collect answers regarding specific, essential questions. You can ask these questions in multiple survey formats as per the target audience and the intent of the survey. Before designing a study, every organization must figure out the objective of carrying this out so that the study can be structured, planned, and executed to perfection.

LEARN ABOUT: Research Process Steps

Questions that need to be on your mind while designing a survey are:

  • What is the primary aim of conducting the survey?
  • How do you plan to utilize the collected survey data?
  • What type of decisions do you plan to take based on the points mentioned above?

There are three critical reasons why an organization must conduct survey research.

  • Understand respondent behavior to get solutions to your queries:  If you’ve carefully curated a survey, the respondents will provide insights about what they like about your organization as well as suggestions for improvement. To motivate them to respond, you must be very vocal about how secure their responses will be and how you will utilize the answers. This will push them to be 100% honest about their feedback, opinions, and comments. Online surveys or mobile surveys have proved their privacy, and due to this, more and more respondents feel free to put forth their feedback through these mediums.
  • Present a medium for discussion:  A survey can be the perfect platform for respondents to provide criticism or applause for an organization. Important topics like product quality or quality of customer service etc., can be put on the table for discussion. A way you can do it is by including open-ended questions where the respondents can write their thoughts. This will make it easy for you to correlate your survey to what you intend to do with your product or service.
  • Strategy for never-ending improvements:  An organization can establish the target audience’s attributes from the pilot phase of survey research . Researchers can use the criticism and feedback received from this survey to improve the product/services. Once the company successfully makes the improvements, it can send out another survey to measure the change in feedback keeping the pilot phase the benchmark. By doing this activity, the organization can track what was effectively improved and what still needs improvement.

Survey Research Scales

There are four main scales for the measurement of variables:

  • Nominal Scale:  A nominal scale associates numbers with variables for mere naming or labeling, and the numbers usually have no other relevance. It is the most basic of the four levels of measurement.
  • Ordinal Scale:  The ordinal scale has an innate order within the variables along with labels. It establishes the rank between the variables of a scale but not the difference value between the variables.
  • Interval Scale:  The interval scale is a step ahead in comparison to the other two scales. Along with establishing a rank and name of variables, the scale also makes known the difference between the two variables. The only drawback is that there is no fixed start point of the scale, i.e., the actual zero value is absent.
  • Ratio Scale:  The ratio scale is the most advanced measurement scale, which has variables that are labeled in order and have a calculated difference between variables. In addition to what interval scale orders, this scale has a fixed starting point, i.e., the actual zero value is present.

Benefits of survey research

In case survey research is used for all the right purposes and is implemented properly, marketers can benefit by gaining useful, trustworthy data that they can use to better the ROI of the organization.

Other benefits of survey research are:

  • Minimum investment:  Mobile surveys and online surveys have minimal finance invested per respondent. Even with the gifts and other incentives provided to the people who participate in the study, online surveys are extremely economical compared to paper-based surveys.
  • Versatile sources for response collection:  You can conduct surveys via various mediums like online and mobile surveys. You can further classify them into qualitative mediums like focus groups , and interviews and quantitative mediums like customer-centric surveys. Due to the offline survey response collection option, researchers can conduct surveys in remote areas with limited internet connectivity. This can make data collection and analysis more convenient and extensive.
  • Reliable for respondents:  Surveys are extremely secure as the respondent details and responses are kept safeguarded. This anonymity makes respondents answer the survey questions candidly and with absolute honesty. An organization seeking to receive explicit responses for its survey research must mention that it will be confidential.

Survey research design

Researchers implement a survey research design in cases where there is a limited cost involved and there is a need to access details easily. This method is often used by small and large organizations to understand and analyze new trends, market demands, and opinions. Collecting information through tactfully designed survey research can be much more effective and productive than a casually conducted survey.

There are five stages of survey research design:

  • Decide an aim of the research:  There can be multiple reasons for a researcher to conduct a survey, but they need to decide a purpose for the research. This is the primary stage of survey research as it can mold the entire path of a survey, impacting its results.
  • Filter the sample from target population:  Who to target? is an essential question that a researcher should answer and keep in mind while conducting research. The precision of the results is driven by who the members of a sample are and how useful their opinions are. The quality of respondents in a sample is essential for the results received for research and not the quantity. If a researcher seeks to understand whether a product feature will work well with their target market, he/she can conduct survey research with a group of market experts for that product or technology.
  • Zero-in on a survey method:  Many qualitative and quantitative research methods can be discussed and decided. Focus groups, online interviews, surveys, polls, questionnaires, etc. can be carried out with a pre-decided sample of individuals.
  • Design the questionnaire:  What will the content of the survey be? A researcher is required to answer this question to be able to design it effectively. What will the content of the cover letter be? Or what are the survey questions of this questionnaire? Understand the target market thoroughly to create a questionnaire that targets a sample to gain insights about a survey research topic.
  • Send out surveys and analyze results:  Once the researcher decides on which questions to include in a study, they can send it across to the selected sample . Answers obtained from this survey can be analyzed to make product-related or marketing-related decisions.

Survey examples: 10 tips to design the perfect research survey

Picking the right survey design can be the key to gaining the information you need to make crucial decisions for all your research. It is essential to choose the right topic, choose the right question types, and pick a corresponding design. If this is your first time creating a survey, it can seem like an intimidating task. But with QuestionPro, each step of the process is made simple and easy.

Below are 10 Tips To Design The Perfect Research Survey:

  • Set your SMART goals:  Before conducting any market research or creating a particular plan, set your SMART Goals . What is that you want to achieve with the survey? How will you measure it promptly, and what are the results you are expecting?
  • Choose the right questions:  Designing a survey can be a tricky task. Asking the right questions may help you get the answers you are looking for and ease the task of analyzing. So, always choose those specific questions – relevant to your research.
  • Begin your survey with a generalized question:  Preferably, start your survey with a general question to understand whether the respondent uses the product or not. That also provides an excellent base and intro for your survey.
  • Enhance your survey:  Choose the best, most relevant, 15-20 questions. Frame each question as a different question type based on the kind of answer you would like to gather from each. Create a survey using different types of questions such as multiple-choice, rating scale, open-ended, etc. Look at more survey examples and four measurement scales every researcher should remember.
  • Prepare yes/no questions:  You may also want to use yes/no questions to separate people or branch them into groups of those who “have purchased” and those who “have not yet purchased” your products or services. Once you separate them, you can ask them different questions.
  • Test all electronic devices:  It becomes effortless to distribute your surveys if respondents can answer them on different electronic devices like mobiles, tablets, etc. Once you have created your survey, it’s time to TEST. You can also make any corrections if needed at this stage.
  • Distribute your survey:  Once your survey is ready, it is time to share and distribute it to the right audience. You can share handouts and share them via email, social media, and other industry-related offline/online communities.
  • Collect and analyze responses:  After distributing your survey, it is time to gather all responses. Make sure you store your results in a particular document or an Excel sheet with all the necessary categories mentioned so that you don’t lose your data. Remember, this is the most crucial stage. Segregate your responses based on demographics, psychographics, and behavior. This is because, as a researcher, you must know where your responses are coming from. It will help you to analyze, predict decisions, and help write the summary report.
  • Prepare your summary report:  Now is the time to share your analysis. At this stage, you should mention all the responses gathered from a survey in a fixed format. Also, the reader/customer must get clarity about your goal, which you were trying to gain from the study. Questions such as – whether the product or service has been used/preferred or not. Do respondents prefer some other product to another? Any recommendations?

Having a tool that helps you carry out all the necessary steps to carry out this type of study is a vital part of any project. At QuestionPro, we have helped more than 10,000 clients around the world to carry out data collection in a simple and effective way, in addition to offering a wide range of solutions to take advantage of this data in the best possible way.

From dashboards, advanced analysis tools, automation, and dedicated functions, in QuestionPro, you will find everything you need to execute your research projects effectively. Uncover insights that matter the most!

MORE LIKE THIS

research and survey article

CX Shenanigans: Booth Duty and Beyond — Tuesday CX Thoughts

Jul 9, 2024

Negative correlation

Negative Correlation: Definition, Examples + How to Find It?

customer marketing

Customer Marketing: The Best Kept Secret of Big Brands

Jul 8, 2024

positive correlation

Positive Correlation: What It Is, Importance & How It Works

Jul 5, 2024

Other categories

  • Academic Research
  • Artificial Intelligence
  • Assessments
  • Brand Awareness
  • Case Studies
  • Communities
  • Consumer Insights
  • Customer effort score
  • Customer Engagement
  • Customer Experience
  • Customer Loyalty
  • Customer Research
  • Customer Satisfaction
  • Employee Benefits
  • Employee Engagement
  • Employee Retention
  • Friday Five
  • General Data Protection Regulation
  • Insights Hub
  • Life@QuestionPro
  • Market Research
  • Mobile diaries
  • Mobile Surveys
  • New Features
  • Online Communities
  • Question Types
  • Questionnaire
  • QuestionPro Products
  • Release Notes
  • Research Tools and Apps
  • Revenue at Risk
  • Survey Templates
  • Training Tips
  • Tuesday CX Thoughts (TCXT)
  • Uncategorized
  • What’s Coming Up
  • Workforce Intelligence
  • A-Z Publications

Annual Review of Psychology

Volume 50, 1999, review article, survey research.

  • Jon A. Krosnick 1
  • View Affiliations Hide Affiliations Affiliations: Department of Psychology, Ohio State University, Columbus, Ohio 43210; e-mail: [email protected]
  • Vol. 50:537-567 (Volume publication date February 1999) https://doi.org/10.1146/annurev.psych.50.1.537
  • © Annual Reviews

For the first time in decades, conventional wisdom about survey methodology is being challenged on many fronts. The insights gained can not only help psychologists do their research better but also provide useful insights into the basics of social interaction and cognition. This chapter reviews some of the many recent advances in the literature, including the following: New findings challenge a long-standing prejudice against studies with low response rates; innovative techniques for pretesting questionnaires offer opportunities for improving measurement validity; surprising effects of the verbal labels put on rating scale points have been identified, suggesting optimal approaches to scale labeling; respondents interpret questions on the basis of the norms of everyday conversation, so violations of those conventions introduce error; some measurement error thought to have been attributable to social desirability response bias now appears to be due to other factors instead, thus encouraging different approaches to fixing such problems; and a new theory of satisficing in questionnaire responding offers parsimonious explanations for a range of response patterns long recognized by psycholo-gists and survey researchers but previously not well understood.

Article metrics loading...

Full text loading...

  • Article Type: Review Article

Most Read This Month

Most cited most cited rss feed, job burnout, executive functions, social cognitive theory: an agentic perspective, on happiness and human potentials: a review of research on hedonic and eudaimonic well-being, sources of method bias in social science research and recommendations on how to control it, mediation analysis, missing data analysis: making it work in the real world, grounded cognition, personality structure: emergence of the five-factor model, motivational beliefs, values, and goals.

Publication Date: 01 Feb 1999

Online Option

Sign in to access your institutional or personal subscription or get immediate access to your online copy - available in PDF and ePub formats

Loading metrics

Open Access

Peer-reviewed

Research Article

Reporting Guidelines for Survey Research: An Analysis of Published Guidance and Reporting Practices

* E-mail: [email protected]

Affiliation Ottawa Hospital Research Institute, Clinical Epidemiology Program, Ottawa, Canada

Affiliations Ottawa Hospital Research Institute, Clinical Epidemiology Program, Ottawa, Canada, Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, Canada

Affiliation Canadian Institutes of Health Research, Ottawa, Canada

Affiliation Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, Canada

Affiliations Ottawa Hospital Research Institute, Clinical Epidemiology Program, Ottawa, Canada, Department of Medicine, University of Ottawa, Ottawa, Canada

  • Carol Bennett, 
  • Sara Khangura, 
  • Jamie C. Brehaut, 
  • Ian D. Graham, 
  • David Moher, 
  • Beth K. Potter, 
  • Jeremy M. Grimshaw

PLOS

  • Published: August 2, 2011
  • https://doi.org/10.1371/journal.pmed.1001069
  • Reader Comments

Table 1

Research needs to be reported transparently so readers can critically assess the strengths and weaknesses of the design, conduct, and analysis of studies. Reporting guidelines have been developed to inform reporting for a variety of study designs. The objective of this study was to identify whether there is a need to develop a reporting guideline for survey research.

Methods and Findings

We conducted a three-part project: (1) a systematic review of the literature (including “Instructions to Authors” from the top five journals of 33 medical specialties and top 15 general and internal medicine journals) to identify guidance for reporting survey research; (2) a systematic review of evidence on the quality of reporting of surveys; and (3) a review of reporting of key quality criteria for survey research in 117 recently published reports of self-administered surveys. Fewer than 7% of medical journals (n = 165) provided guidance to authors on survey research despite a majority having published survey-based studies in recent years. We identified four published checklists for conducting or reporting survey research, none of which were validated. We identified eight previous reviews of survey reporting quality, which focused on issues of non-response and accessibility of questionnaires. Our own review of 117 published survey studies revealed that many items were poorly reported: few studies provided the survey or core questions (35%), reported the validity or reliability of the instrument (19%), defined the response rate (25%), discussed the representativeness of the sample (11%), or identified how missing data were handled (11%).

Conclusions

There is limited guidance and no consensus regarding the optimal reporting of survey research. The majority of key reporting criteria are poorly reported in peer-reviewed survey research articles. Our findings highlight the need for clear and consistent reporting guidelines specific to survey research.

Please see later in the article for the Editors' Summary

Citation: Bennett C, Khangura S, Brehaut JC, Graham ID, Moher D, Potter BK, et al. (2011) Reporting Guidelines for Survey Research: An Analysis of Published Guidance and Reporting Practices. PLoS Med 8(8): e1001069. https://doi.org/10.1371/journal.pmed.1001069

Academic Editor: Rachel Jewkes, Medical Research Council, South Africa

Received: December 23, 2010; Accepted: June 17, 2011; Published: August 2, 2011

Copyright: © 2011 Bennett et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: Funding, in the form of salary support, was provided by the Canadian Institutes of Health Research [MGC – 42668]. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Editors' Summary

Surveys, or questionnaires, are an essential component of many types of research, including health, and usually gather information by asking a sample of people questions on a specific topic and then generalizing the results to a larger population. Surveys are especially important when addressing topics that are difficult to assess using other approaches and usually rely on self reporting, for example self-reported behaviors, such as eating habits, satisfaction, beliefs, knowledge, attitudes, opinions. However, the methods used in conducting survey research can significantly affect the reliability, validity, and generalizability of study results, and without clear reporting of the methods used in surveys, it is difficult or impossible to assess these characteristics and therefore to have confidence in the findings.

Why Was This Study Done?

This uncertainty in other forms of research has given rise to Reporting Guidelines—evidence-based, validated tools that aim to improve the reporting quality of health research. The STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) Statement includes cross-sectional studies, which often involve surveys. But not all surveys are epidemiological, and STROBE does not include methods' and results' reporting characteristics that are unique to surveys. Therefore, the researchers conducted this study to help determine whether there is a need for a reporting guideline for health survey research.

What Did the Researchers Do and Find?

The researchers identified any previous relevant guidance for survey research, and any evidence on the quality of reporting of survey research, by: reviewing current guidance for reporting survey research in the “Instructions to Authors” of leading medical journals and in published literature; conducting a systematic review of evidence on the quality of reporting of surveys; identifying key quality criteria for the conduct of survey research; and finally, reviewing how these criteria are currently reported by conducting a review of recently published reports of self-administered surveys.

The researchers found that 154 of the 165 journals searched (93.3%) did not provide any guidance on survey reporting, even though the majority (81.8%) have published survey research. Only three of the 11 journals that provided some guidance gave more than one directive or statement. Five papers and one Internet site provided guidance on the reporting of survey research, but none used validated measures or explicit methods for development. The researchers identified eight papers that addressed the quality of reporting of some aspect of survey research: the reporting of response rates; the reporting of non-response analyses in survey research; and the degree to which authors make their survey instrument available to readers. In their review of 117 published survey studies, the researchers found that many items were poorly reported: few studies provided the survey or core questions (35%), reported the validity or reliability of the instrument (19%), discussed the representativeness of the sample (11%), or identified how missing data were handled (11%). Furthermore, (88 [75%]) did not include any information on consent procedures for research participants, and one-third (40 [34%]) of papers did not report whether the study had received research ethics board review.

What Do These Findings Mean?

Overall, these results show that guidance is limited and consensus lacking about the optimal reporting of survey research, and they highlight the need for a well-developed reporting guideline specifically for survey research—possibly an extension of the guideline for observational studies in epidemiology (STROBE)—that will provide the structure to ensure more complete reporting and allow clearer review and interpretation of the results from surveys.

Additional Information

Please access these web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001069 .

  • More than 100 reporting guidelines covering a broad spectrum of research types are indexed on the EQUATOR Networks web site
  • More information about STROBE is available on the STROBE Statement web site

Introduction

Surveys are a research method by which information is typically gathered by asking a subset of people questions on a specific topic and generalising the results to a larger population [1] , [2] . They are an essential component of many types of research including public opinion, politics, health, and others. Surveys are especially important when addressing topics that are difficult to assess using other approaches (e.g., in studies assessing constructs that require individual self-report about beliefs, knowledge, attitudes, opinions, or satisfaction). However, there is substantial literature to show that the methods used in conducting survey research can significantly affect the reliability, validity, and generalisability of study results [3] , [4] . Without clear reporting of the methods used in surveys, it is difficult or impossible to assess these characteristics.

Reporting guidelines are evidence-based, validated tools that employ expert consensus to specify minimum criteria for authors to report their research such that readers can critically appraise and interpret study findings [5] – [7] . More than 100 reporting guidelines covering a broad spectrum of research types are indexed on the EQUATOR Network's website ( www.equator-network.org ). There is increasing evidence that reporting guidelines are achieving their aim of improving the quality of reporting of health research [8] – [11] .

Given the growth in the number and range of reporting guidelines, the need for guidance on how to develop a guideline has been addressed [7] . A well-structured development process for reporting guidelines includes a review of the literature to determine whether a reporting guideline already exists (i.e., a needs assessment) [7] . The needs assessment should also include a search for evidence on the quality of reporting of published research in the domain of interest [7] .

The series of studies reported here was conducted to help determine whether there is a need for survey research reporting guidelines. We sought to identify any previous relevant guidance for survey research, and any evidence on the quality of reporting of survey research. The objectives of our study were:

  • to identify current guidance for reporting survey research in the “Instructions to Authors” of leading medical journals and in published literature;
  • to conduct a systematic review of evidence on the quality of reporting of surveys; and
  • to identify key quality criteria for the conduct of survey research and to review how they are being reported through a review of recently published reports of self-administered surveys.

Part 1: Identification of Current Guidance for Survey Research

Identifying guidance in “instructions to authors” sections in peer reviewed journals..

Using a strategy originally developed by Altman [12] to assess endorsement of CONSORT by top medical journals, we identified the top five journals from each of 33 medical specialties, and the top 15 journals from the general and internal medicine category, using Web of Science citation impact factors (list of journals available on request). The final sample consisted of 165 unique journals (15 appeared in more than one specialty).

We reviewed each journal's “Instructions to Authors” web pages as well as related PDF documents between January 12 and February 9, 2009. We used the “find” features of the Firefox web browser and Adobe Reader software to identify the following search terms: survey, questionnaire, response, response rate, respond, and non-responder. Web pages were hand searched for statements relevant to survey research. We also conducted an electronic search (MEDLINE 1950 – February Week 1, 2009; terms: survey, questionnaire) to identify whether the journals have published survey research.

Any relevant text was summarized by journal into categories: “No guidance” (survey related term found; however, no reporting guidance provided); “One directive” (survey related term(s) found that included one brief statement, directive or reference(s) relevant to reporting survey research); and “Guidance” (survey related term(s) including more than one statement, instruction and/or directive relevant to reporting survey research). Coding was carried out by one coder (SK) and verified by a second coder (CB).

Identifying published survey reporting guidelines.

MEDLINE (1950 – April Week 1, 2011) and PsycINFO (1806 – April Week 1, 2011) electronic databases were searched via Ovid to identify relevant citations. The MEDLINE electronic search strategy ( Text S1 ), developed by an information specialist, was modified as required for the PsycINFO database. For all papers meeting eligibility criteria, we hand-searched the reference lists and used the “Related Articles” feature in PubMed. Additionally, we reviewed relevant textbooks and web sites. Two reviewers (SK, CB) independently screened titles and abstracts of all unique citations to identify English language papers and resources that provided explicit guidance on the reporting of survey research. Full-text reports of all records passing the title/abstract screen were retrieved and independently reviewed by two members of the research team; there were no disagreements regarding study inclusion and all eligible records passing this stage of screening were included in this review. One researcher (CB) undertook a thematic analysis of identified guidance (e.g., sample selection, response rate, background, etc.), which was subsequently reviewed by all members of the research team. Data were summarized as frequencies.

Part 2: Systematic Review of Published Studies on the Quality of Survey Reporting

The results of the above search strategy ( Text S1 ) were also screened by the two reviewers to identify publications providing evidence on the quality of reporting of survey research in the health science literature. We identified the aspects of reporting survey research that were addressed in these evaluative studies and summarized their results descriptively.

Part 3: Assessment of Quality of Survey Reporting

The results from Part 1 and Part 2 identified items critical to reporting survey research and were used to inform the development of a data abstraction tool. Thirty-two items were deemed most critical to the reporting of survey research on that basis. These were compiled and categorized into a draft data abstraction tool that was reviewed and modified by all the authors, who have expertise in research methodology and survey research. The resulting draft data abstraction instrument was piloted by two researchers (CB, SK) on a convenience sample of survey articles identified by the authors. Items were added and removed and the wording was refined and edited through discussion and consensus among the coauthors. The revised final data abstraction tool ( Table S1 ) comprised 33 items.

Aiming for a minimum sample size of 100 studies, we searched the top 15 journals (by impact factor) from each of four broad areas of health research: health science, public health, general/internal medicine, and medical informatics. These categories, identified through Web of Science, were known to publish survey research and covered a broad range of the biomedical literature. An Ovid MEDLINE search of these 57 journals (three were included in more than one topic area) included Medical Subject Heading (MeSH) terms (“Questionnaires,” “Data Collection,” and “Health Surveys”) and keyword terms (“survey” and “questionnaire”). The search was limited to studies published between January 2008 and February 2009.

We defined a survey as a research method by which information is gathered by asking people questions on a specific topic and the data collection procedure is standardized and well defined. The information is gathered from a subset of the population of interest with the intent of generating summary statistics that are generalisable to the larger population [1] , [2] .

Two reviewers (CB, SK) independently screened all citations (title and abstract) to determine whether the study used a survey instrument consistent with our definition. The same reviewers screened all full-text articles of citations meeting our inclusion criteria, and those whose eligibility remained unclear. We included all primary reports of self-administered surveys, excluding secondary analyses, longitudinal studies, or surveys that were administered openly through the web (i.e., studies that lacked a clearly defined sampling frame). Duplicate data extraction was completed by the two reviewers. Inconsistencies were resolved by discussion and consensus.

Part 1: Identification of Current Guidance for Survey Research – “Instructions to Authors”

Of the 165 journals searched, 154 (93.3%) did not provide any guidance on survey reporting. Of these 154, 126 (81.8%) have published survey research, while 28 have not. Of the 11 journals providing some guidance, eight provided a brief phrase, statement of guidance, or reference; and three provided more substantive guidance, including more than one directive or statement. Examples are provided in Table 1 . Although no reporting guidelines for survey research were identified, several journals referenced the EQUATOR Network's web site. The EQUATOR Network includes two papers relevant to reporting survey research [13] , [14] .

thumbnail

  • PPT PowerPoint slide
  • PNG larger image
  • TIFF original image

https://doi.org/10.1371/journal.pmed.1001069.t001

The EQUATOR Network also links to the STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) Statement ( www.strobe-statement.org ). Although the STROBE Statement includes cross-sectional studies, a class of studies that subsumes surveys, not all surveys are epidemiological. Additionally, STROBE does not include Methods ' and Results ' reporting characteristics that are unique to surveys ( Table S1 ).

Part 1: Identification of Current Guidance for Survey Research - Published Survey Reporting Guidelines

Our search identified 2,353 unique records ( Figure 1 ), which were title-screened. One-hundred sixty-four records were included in the abstract screen, from which 130 were excluded. The remaining 34 records were retrieved for full-text screening to determine eligibility. There was substantial agreement between reviewers across all the screening phases (kappa  =  0.73; 95% CI 0.69–0.77).

thumbnail

https://doi.org/10.1371/journal.pmed.1001069.g001

We identified five papers [13] – [17] and one internet site [18] that provided guidance on the reporting of survey research. None of these sources reported using valid measures or explicit methods for development. In all cases, in addition to more descriptive details, the guidance was presented in the form of a numbered or bulleted checklist. One checklist was excluded from our descriptive analysis as it was very specific to the reporting of internet surveys [16] . Two checklists were combined for analysis because one [14] was a slightly modified version of the other [17] .

Amongst the four checklists, 38 distinct reporting items were identified and grouped in eight broad themes: background, methods, sample selection, research tool, results, response rates, interpretation and discussion, and ethics and disclosure ( Table 2 ). Only two items appeared in all four checklists: providing a description of the questionnaire instrument and describing the representativeness of the sample to the population of interest. Nine items appear in three checklists, 17 items appear in two checklists, and 10 items appear in only one checklist.

thumbnail

https://doi.org/10.1371/journal.pmed.1001069.t002

Screening results are presented in Figure 1 . Eight papers were identified that addressed the quality of reporting of some aspect of survey research. Five studies [19] – [23] addressed the reporting of response rates; three evaluated the reporting of non-response analyses in survey research [20] , [21] , [24] ; and two assessed the degree to which authors make their survey instrument available to readers ( Table 3 ) [25] , [26] .

thumbnail

https://doi.org/10.1371/journal.pmed.1001069.t003

Part 3: Assessment of Quality of Survey Reporting from the Biomedical Literature

Our search identified 1,719 citations: 1,343 citations were excluded during title/abstract screening because these studies did not use a survey instrument as their primary research tool. Three hundred seventy-six citations were retrieved for full-text review. Of those, 259 did not meet our eligibility criteria; reasons for their exclusion are reported in Figure 2 . The remaining 117 articles, reporting results from self-administered surveys, were retained for data abstraction.

thumbnail

https://doi.org/10.1371/journal.pmed.1001069.g002

The 117 articles were published in 34 different journals: 12 journals from health science, seven from medical informatics, 10 from general/internal medicine, and eight from public health ( Table S2 ). The median number of pages per study was 8 (range 3–26). Of the 33 items that were assessed using our data abstraction form, the median number of items reported was 18 (range 11–25).

Reporting Characteristics: Title, Abstract, and Introduction

The majority (113 [97%]) of articles used the term “survey” or “questionnaire” in the title or abstract; four articles did not use a term to indicate that the study was a survey. While all of the articles presented a background to their research, 17 (15%) did not identify a specific purpose, aim, goal, or objective of the study. ( Table 4 )

thumbnail

https://doi.org/10.1371/journal.pmed.1001069.t004

Reporting Characteristics: Methods

Approximately one-third (40 [34%]) of survey research reports did not provide access to the questionnaire items used in the study in either the article, appendices, or an online supplement. Of those studies that reported the use of existing survey questionnaires, the majority (40/52 [77%]) did not report the psychometric properties of the tool (although all but two did reference their sources). The majority of studies that developed a novel questionnaire (91/111 [82%]) failed to clearly describe the development process and/or did not describe the methods used to pre-test the tool; the majority (89/111 [80%]) also failed to report the reliability or validity of a newly developed survey instrument. For those papers which used survey instruments that required scoring (n = 95), 63 (66%) did not provide a description of the scoring procedures.

With respect to a description of sample selection methods, 104 (89%) studies did not describe the sample's representativeness of the population of interest. The majority (110 [94%]) of studies also did not present a sample size calculation or other justification of the sample size.

There were 23 (20%) papers for which we could not determine the mode of survey administration (i.e., in-person, mail, internet, or a combination of these). Forty-one (35%) articles did not provide information on either the type (i.e. phone, e-mail, postal mail) or the number of contact attempts. For 102 (87%) papers, there was no description of who was identified as the organization/group soliciting potential research subjects for their participation in the survey.

Twelve (10%) papers failed to provide a description of the methods used to analyse the data (i.e., a description of the variables that were analysed, how they were manipulated, and the statistical methods used). However, for a further 55 (47%) studies, the data analysis would be a challenge to replicate based on the description provided in the research report. Very few studies provided methods for analysis of non-response error, calculating response rates, or handling missing item data (15 [13%], 5 [4%], and 13 [11%] respectively). The majority (112 [96%]) of the articles did not provide a definition or cut-off limit for partial completion of questionnaires.

Reporting Characteristics: Results

While the majority (89 [76%]) of papers provided a defined response rate, 28 studies (24%) failed to define the reported response rate (i.e., no information was provided on the definition of the rate or how it was calculated), provided only partial information (e.g., response rates were reported for only part of the data, or some information was reported but not a response rate), or provided no quantitative information regarding a response rate. The majority (104 [87%]) of studies did not report the sample disposition (i.e., describing the number of complete and partial returned questionnaires according to the number of potential participants known to be eligible, of unknown eligibility, or known to be ineligible). More than two-thirds (80 [68%]) of the reports provided no information on how non-respondents differed from respondents.

Reporting Characteristics: Discussion and Ethical Quality Indicators

While all of the articles summarized their results with regard to the objectives, and the majority (110 [94%]) described the limitations of their study, most (90 [77%]) did not outline the strengths of their study and 70 (60%) did not include any discussion of the generalisability of their results.

When considering the ethical quality indicators, reporting was varied. While three-quarters (86 [74%]) of the papers reported their source of funding, approximately the same proportion (88 [75%]) did not include any information on consent procedures for research participants. One-third (40 [34%]) of papers did not report whether the study had received research ethics board review.

Our comprehensive review, to identify relevant guidance for survey research and evidence on the quality of reporting of surveys, substantiates the need for a reporting guideline for survey research. Overall, our results show that few medical journals provide guidance to authors regarding survey research. Furthermore, no validated guidelines for reporting surveys currently exist. Previous reviews of survey reporting quality and our own review of 117 published studies revealed that many criteria are poorly reported.

Surveys are common in health care research; we identified more than 117 primary reports of self-administered surveys in 34 high-impact factor journals over a one-year period. Despite this, the majority of these journals provided no guidance to authors for reporting survey research. This may stem, at least partly, from the fact that validated guidelines for survey research do not exist and that recommended quality criteria vary considerably. The recommended reporting criteria that we identified in the published literature are not mutually exclusive, and there is perhaps more overlap if one takes into account implicit and explicit considerations. Regardless of these limitations, the lack of clear guidance has contributed to inconsistency in the literature; both this work and that of others [19] – [26] shows that key survey quality characteristics are often under-reported.

Self-administered sample surveys are a type of observational study and for that reason they can fall within the scope of STROBE. However, there are methodological features relevant to sample surveys that need to be highlighted in greater detail. For example, surveys that use a probability sampling design do so in order to be able to generalise to a specific target population (many other types of observational research may have a more “infinite” target population); this emphasizes the importance of coverage error and non-response error – topics that have received attention in the survey literature. Thus, in our data abstraction tool, we placed emphasis on specific methodological details excluded from STROBE – such as non-response analysis, details of strategies used to increase response rates (e.g., multiple contacts, mode of contact of potential participants), and details of measurement methods (e.g., making the instrument available so that readers can consider questionnaire formatting, question framing, choice of response categories, etc.).

Consistent with previous work [25] , [26] , fully one-third of our sample failed to provide access to any survey questions used in the study. This poses challenges both for critical analysis of the studies and for future use of the tools, including replication in new settings. These challenges will be particularly apparent as the articles age and study authors become more difficult to contact [25] .

Assessing descriptions of the study population and sampling frame posed particular challenges in this study. It was often unclear whom the authors considered to be the population of interest. To standardise our assessment of this item, we used a clearly delineated definition of “survey population” and “sampling frame” [3] , [27] . A survey reporting guideline could help this issue by clearly defining the difference between the terms and descriptions of “population” and “sampling frame.”

Our results regarding reporting of response rates and non-response analysis were similar to previously published studies [19] – [24] . In our sample, 24% of papers assessed did not provide a defined response rate and 68% did not provide results from non-response analysis. The wide variation in how response rates are reported in the literature is perhaps a historical reflection of the limited consensus or explicit journal policy for response rate reporting [22] , [28] , [29] . However, despite lack of explicit policies regarding acceptable standards for response rates or the reporting of response rates, journal editors are known to have implicit policies for acceptable response rates when considering the publication of surveys [17] , [22] , [29] , [30] . Given the concern regarding declining response rates to surveys [31] , there is a need to ensure that aspects of the survey's design and conduct are well reported so that reviewers can adequately assess the degree of bias that may be present and allay concerns over the representativeness of the survey population.

With regard to the ethical quality indicators, sources of study funding were often reported (74%) in this sample of articles. However, the reporting of research ethics board approval and subject consent procedures were reported far less often. In particular, the reporting of informed consent procedures was often absent in studies where physicians, residents, other clinicians or health administrators were the subjects. This finding may suggest that researchers do not perceive doctors and other health-care professionals and administrators to be research subjects in the same way they perceive patients and members of the public to be. It could also reflect a lack of current guidelines that specifically address the ethical use of health services professionals and staff as research subjects.

Our research is not without limitations. With respect to the review of journals' “Instructions to Authors,” the study was cross-sectional in contrast with the dynamic nature of web pages. Since our searches in early 2009, several journals have updated their web pages. It has been noted that at least one has added a brief reference to the reporting of survey research.

A second limitation is that our sample included only the contents of “Instructions to Authors” web pages for higher-impact factor journals. It is possible that journals with lower impact factors contain guidance for reporting survey research. We chose this approach, which replicates previous similar work [12] , to provide a defensible sample of journals.

</?twb=.3w?>Third, the problem of identifying non-randomised studies in electronic searches is well known and often related to the inconsistent use of terminology in the original papers. It is possible that our search strategy failed to identify relevant articles. However, it is unlikely that there is an existing guideline for survey research that is in widespread use, given our review of actual surveys, instructions to authors, and reviews of reporting quality.

Fourth, although we restricted our systematic review search strategy to two health science databases, our hand search did identify one checklist that was not specific to the health science literature [18] . The variation in recommended reporting criteria amongst the checklists may, in part, be due to variation in the different domains (i.e., health science research versus public opinion research).

Additionally, we did not critically appraise the quality of evidence for items included in the checklists nor the quality of the studies that addressed the quality of reporting of some aspect of survey research. For our review of current reporting practices for surveys, we were unable to identify validated tools for evaluation of these studies. While we did use a comprehensive and iterative approach to develop our data abstraction tool, we may not have captured information on characteristics deemed important by other researchers. Lastly, our sample was limited to self-administered surveys, and the results may not be generalisable to interviewer-administered surveys.

Recently, Moher and colleagues outlined the importance of a structured approach to the development of reporting guidelines [7] . Given the positive impact that reporting guidelines have had on the quality of reporting of health research [8] – [11] , and the potential for a positive upstream effect on the design and conduct of research [32] , there is a fundamental need for well-developed reporting guidelines. This paper provides results from the initial steps in a structured approach to the development of a survey reporting guideline and forms the foundation for our further work in this area.

In conclusion, there is limited guidance and no consensus regarding the optimal reporting of survey research. While some key criteria are consistently reported by authors publishing their survey research in peer-reviewed journals, the majority are under-reported. As in other areas of research, poor reporting compromises both transparency and reproducibility, which are fundamental tenets of research. Our findings highlight the need for a well developed reporting guideline for survey research – possibly an extension of the guideline for observational studies in epidemiology (STROBE) – that will provide the structure to ensure more complete reporting and allow clearer review and interpretation of the results from surveys.

Supporting Information

Data abstraction tool items and overlap with STROBE.

https://doi.org/10.1371/journal.pmed.1001069.s001

Journals represented by 117 included articles.

https://doi.org/10.1371/journal.pmed.1001069.s002

Ovid MEDLINE search strategy.

https://doi.org/10.1371/journal.pmed.1001069.s003

Acknowledgments

We thank Risa Shorr (Librarian, The Ottawa Hospital) for her assistance with designing the electronic search strategy used for this study.

Author Contributions

Conceived and designed the experiments: JG DM CB SK JB IG BP. Analyzed the data: CB SK JB DM JG. Contributed to the writing of the manuscript. CB SK JB IG DM BP JG. ICMJE criteria for authorship read and met. CB SK JB IG DM BP JG. Acqusition of data: CB SK.

  • 1. Groves RM, Fowler FJ, Couper MP, Lepkowski JM, Singer E, et al. (2004) Survey Methodology. Hoboken (New Jersey): John Wiley & Sons, Inc.
  • 2. Aday LA, Cornelius LJ (2006) Designing and Conducting Health Surveys. Hoboken (New Jersey): John Wiley & Sons, Inc.
  • View Article
  • Google Scholar
  • 6. EQUATOR NetworkIntroduction to Reporting Guidelines. Available: http://www.equator-network.org/resource-centre/library-of-health-research-reporting/reporting-guidelines/#what . Accessed 23 November 2009.
  • 18. AAPORHome page of the American Association of Public Opinion Research (AAPOR). Available: http://www.aapor.org . Accessed 20 January 2009.
  • 22. Johnson T, Owens L (2003) Survey Response Rate Reporting in the Professional Literature. Available: http://www.amstat.org/sections/srms/proceedings/y2003/Files/JSM2003-000638.pdf . Accessed 11 July 2011.
  • 27. Dillman DA (2007) Mail and Internet Surveys: The Tailored Design Method. Hoboken (New Jersey): John Wiley & Sons, Inc.
  • Search Menu
  • Sign in through your institution
  • Advance articles
  • Editor's Choice
  • Supplements
  • French Abstracts
  • Portuguese Abstracts
  • Spanish Abstracts
  • Author Guidelines
  • Submission Site
  • Open Access
  • About International Journal for Quality in Health Care
  • About the International Society for Quality in Health Care
  • Editorial Board
  • Advertising and Corporate Services
  • Journals Career Network
  • Self-Archiving Policy
  • Dispatch Dates
  • Contact ISQua
  • Journals on Oxford Academic
  • Books on Oxford Academic

Issue Cover

Article Contents

What is survey research, advantages and disadvantages of survey research, essential steps in survey research, research methods, designing the research tool, sample and sampling, data collection, data analysis.

  • < Previous

Good practice in the conduct and reporting of survey research

  • Article contents
  • Figures & tables
  • Supplementary Data

KATE KELLEY, BELINDA CLARK, VIVIENNE BROWN, JOHN SITZIA, Good practice in the conduct and reporting of survey research, International Journal for Quality in Health Care , Volume 15, Issue 3, May 2003, Pages 261–266, https://doi.org/10.1093/intqhc/mzg031

  • Permissions Icon Permissions

Survey research is sometimes regarded as an easy research approach. However, as with any other research approach and method, it is easy to conduct a survey of poor quality rather than one of high quality and real value. This paper provides a checklist of good practice in the conduct and reporting of survey research. Its purpose is to assist the novice researcher to produce survey work to a high standard, meaning a standard at which the results will be regarded as credible. The paper first provides an overview of the approach and then guides the reader step-by-step through the processes of data collection, data analysis, and reporting. It is not intended to provide a manual of how to conduct a survey, but rather to identify common pitfalls and oversights to be avoided by researchers if their work is to be valid and credible.

Survey research is common in studies of health and health services, although its roots lie in the social surveys conducted in Victorian Britain by social reformers to collect information on poverty and working class life (e.g. Charles Booth [ 1 ] and Joseph Rowntree [ 2 ]), and indeed survey research remains most used in applied social research. The term ‘survey’ is used in a variety of ways, but generally refers to the selection of a relatively large sample of people from a pre-determined population (the ‘population of interest’; this is the wider group of people in whom the researcher is interested in a particular study), followed by the collection of a relatively small amount of data from those individuals. The researcher therefore uses information from a sample of individuals to make some inference about the wider population.

Data are collected in a standardized form. This is usually, but not necessarily, done by means of a questionnaire or interview. Surveys are designed to provide a ‘snapshot of how things are at a specific time’ [ 3 ]. There is no attempt to control conditions or manipulate variables; surveys do not allocate participants into groups or vary the treatment they receive. Surveys are well suited to descriptive studies, but can also be used to explore aspects of a situation, or to seek explanation and provide data for testing hypotheses. It is important to recognize that ‘the survey approach is a research strategy, not a research method’ [ 3 ]. As with any research approach, a choice of methods is available and the one most appropriate to the individual project should be used. This paper will discuss the most popular methods employed in survey research, with an emphasis upon difficulties commonly encountered when using these methods.

Descriptive research

Descriptive research is a most basic type of enquiry that aims to observe (gather information on) certain phenomena, typically at a single point in time: the ‘cross-sectional’ survey. The aim is to examine a situation by describing important factors associated with that situation, such as demographic, socio-economic, and health characteristics, events, behaviours, attitudes, experiences, and knowledge. Descriptive studies are used to estimate specific parameters in a population (e.g. the prevalence of infant breast feeding) and to describe associations (e.g. the association between infant breast feeding and maternal age).

Analytical studies

Analytical studies go beyond simple description; their intention is to illuminate a specific problem through focused data analysis, typically by looking at the effect of one set of variables upon another set. These are longitudinal studies, in which data are collected at more than one point in time with the aim of illuminating the direction of observed associations. Data may be collected from the same sample on each occasion (cohort or panel studies) or from a different sample at each point in time (trend studies).

Evaluation research

This form of research collects data to ascertain the effects of a planned change.

Advantages:

The research produces data based on real-world observations (empirical data).

The breadth of coverage of many people or events means that it is more likely than some other approaches to obtain data based on a representative sample, and can therefore be generalizable to a population.

Surveys can produce a large amount of data in a short time for a fairly low cost. Researchers can therefore set a finite time-span for a project, which can assist in planning and delivering end results.

Disadvantages:

The significance of the data can become neglected if the researcher focuses too much on the range of coverage to the exclusion of an adequate account of the implications of those data for relevant issues, problems, or theories.

The data that are produced are likely to lack details or depth on the topic being investigated.

Securing a high response rate to a survey can be hard to control, particularly when it is carried out by post, but is also difficult when the survey is carried out face-to-face or over the telephone.

Research question

Good research has the characteristic that its purpose is to address a single clear and explicit research question; conversely, the end product of a study that aims to answer a number of diverse questions is often weak. Weakest of all, however, are those studies that have no research question at all and whose design simply is to collect a wide range of data and then to ‘trawl’ the data looking for ‘interesting’ or ‘significant’ associations. This is a trap novice researchers in particular fall into. Therefore, in developing a research question, the following aspects should be considered [ 4 ]:

Be knowledgeable about the area you wish to research.

Widen the base of your experience, explore related areas, and talk to other researchers and practitioners in the field you are surveying.

Consider using techniques for enhancing creativity, for example brainstorming ideas.

Avoid the pitfalls of: allowing a decision regarding methods to decide the questions to be asked; posing research questions that cannot be answered; asking questions that have already been answered satisfactorily.

The survey approach can employ a range of methods to answer the research question. Common survey methods include postal questionnaires, face-to-face interviews, and telephone interviews.

Postal questionnaires

This method involves sending questionnaires to a large sample of people covering a wide geographical area. Postal questionnaires are usually received ‘cold’, without any previous contact between researcher and respondent. The response rate for this type of method is usually low, ∼20%, depending on the content and length of the questionnaire. As response rates are low, a large sample is required when using postal questionnaires, for two main reasons: first, to ensure that the demographic profile of survey respondents reflects that of the survey population; and secondly, to provide a sufficiently large data set for analysis.

Face-to-face interviews

Face-to-face interviews involve the researcher approaching respondents personally, either in the street or by calling at people’s homes. The researcher then asks the respondent a series of questions and notes their responses. The response rate is often higher than that of postal questionnaires as the researcher has the opportunity to sell the research to a potential respondent. Face-to-face interviewing is a more costly and time-consuming method than the postal survey, however the researcher can select the sample of respondents in order to balance the demographic profile of the sample.

Telephone interviews

Telephone surveys, like face-to-face interviews, allow a two-way interaction between researcher and respondent. Telephone surveys are quicker and cheaper than face-to-face interviewing. Whilst resulting in a higher response rate than postal surveys, telephone surveys often attract a higher level of refusals than face-to-face interviews as people feel less inhibited about refusing to take part when approached over the telephone.

Whether using a postal questionnaire or interview method, the questions asked have to be carefully planned and piloted. The design, wording, form, and order of questions can affect the type of responses obtained, and careful design is needed to minimize bias in results. When designing a questionnaire or question route for interviewing, the following issues should be considered: (1) planning the content of a research tool; (2) questionnaire layout; (3) interview questions; (4) piloting; and (5) covering letter.

Planning the content of a research tool

The topics of interest should be carefully planned and relate clearly to the research question. It is often useful to involve experts in the field, colleagues, and members of the target population in question design in order to ensure the validity of the coverage of questions included in the tool (content validity).

Researchers should conduct a literature search to identify existing, psychometrically tested questionnaires. A well designed research tool is simple, appropriate for the intended use, acceptable to respondents, and should include a clear and interpretable scoring system. A research tool must also demonstrate the psychometric properties of reliability (consistency from one measurement to the next), validity (accurate measurement of the concept), and, if a longitudinal study, responsiveness to change [ 5 ]. The development of research tools, such as attitude scales, is a lengthy and costly process. It is important that researchers recognize that the development of the research tool is equal in importance—and deserves equal attention—to data collection. If a research instrument has not undergone a robust process of development and testing, the credibility of the research findings themselves may legitimately be called into question and may even be completely disregarded. Surveys of patient satisfaction and similar are commonly weak in this respect; one review found that only 6% of patient satisfaction studies used an instrument that had undergone even rudimentary testing [ 6 ]. Researchers who are unable or unwilling to undertake this process are strongly advised to consider adopting an existing, robust research tool.

Questionnaire layout

Questionnaires used in survey research should be clear and well presented. The use of capital (upper case) letters only should be avoided, as this format is hard to read. Questions should be numbered and clearly grouped by subject. Clear instructions should be given and headings included to make the questionnaire easier to follow.

The researcher must think about the form of the questions, avoiding ‘double-barrelled’ questions (two or more questions in one, e.g. ‘How satisfied were you with your personal nurse and the nurses in general?’), questions containing double negatives, and leading or ambiguous questions. Questions may be open (where the respondent composes the reply) or closed (where pre-coded response options are available, e.g. multiple-choice questions). Closed questions with pre-coded response options are most suitable for topics where the possible responses are known. Closed questions are quick to administer and can be easily coded and analysed. Open questions should be used where possible replies are unknown or too numerous to pre-code. Open questions are more demanding for respondents but if well answered can provide useful insight into a topic. Open questions, however, can be time consuming to administer and difficult to analyse. Whether using open or closed questions, researchers should plan clearly how answers will be analysed.

Interview questions

Open questions are used more frequently in unstructured interviews, whereas closed questions typically appear in structured interview schedules. A structured interview is like a questionnaire that is administered face to face with the respondent. When designing the questions for a structured interview, the researcher should consider the points highlighted above regarding questionnaires. The interviewer should have a standardized list of questions, each respondent being asked the same questions in the same order. If closed questions are used the interviewer should also have a range of pre-coded responses available.

If carrying out a semi-structured interview, the researcher should have a clear, well thought out set of questions; however, the questions may take an open form and the researcher may vary the order in which topics are considered.

A research tool should be tested on a pilot sample of members of the target population. This process will allow the researcher to identify whether respondents understand the questions and instructions, and whether the meaning of questions is the same for all respondents. Where closed questions are used, piloting will highlight whether sufficient response categories are available, and whether any questions are systematically missed by respondents.

When conducting a pilot, the same procedure as as that to be used in the main survey should be followed; this will highlight potential problems such as poor response.

Covering letter

All participants should be given a covering letter including information such as the organization behind the study, including the contact name and address of the researcher, details of how and why the respondent was selected, the aims of the study, any potential benefits or harm resulting from the study, and what will happen to the information provided. The covering letter should both encourage the respondent to participate in the study and also meet the requirements of informed consent (see below).

The concept of sample is intrinsic to survey research. Usually, it is impractical and uneconomical to collect data from every single person in a given population; a sample of the population has to be selected [ 7 ]. This is illustrated in the following hypothetical example. A hospital wants to conduct a satisfaction survey of the 1000 patients discharged in the previous month; however, as it is too costly to survey each patient, a sample has to be selected. In this example, the researcher will have a list of the population members to be surveyed (sampling frame). It is important to ensure that this list is both up-to date and has been obtained from a reliable source.

The method by which the sample is selected from a sampling frame is integral to the external validity of a survey: the sample has to be representative of the larger population to obtain a composite profile of that population [ 8 ].

There are methodological factors to consider when deciding who will be in a sample: How will the sample be selected? What is the optimal sample size to minimize sampling error? How can response rates be maximized?

The survey methods discussed below influence how a sample is selected and the size of the sample. There are two categories of sampling: random and non-random sampling, with a number of sampling selection techniques contained within the two categories. The principal techniques are described here [ 9 ].

Random sampling

Generally, random sampling is employed when quantitative methods are used to collect data (e.g. questionnaires). Random sampling allows the results to be generalized to the larger population and statistical analysis performed if appropriate. The most stringent technique is simple random sampling. Using this technique, each individual within the chosen population is selected by chance and is equally as likely to be picked as anyone else. Referring back to the hypothetical example, each patient is given a serial identifier and then an appropriate number of the 1000 population members are randomly selected. This is best done using a random number table, which can be generated using computer software (a free on-line randomizer can be found at http://www.randomizer.org/index.htm ).

Alternative random sampling techniques are briefly described. In systematic sampling, individuals to be included in the sample are chosen at equal intervals from the population; using the earlier example, every fifth patient discharged from hospital would be included in the survey. Stratified sampling selects a specific group and then a random sample is selected. Using our example, the hospital may decide only to survey older surgical patients. Bigger surveys may employ cluster sampling, which randomly assigns groups from a large population and then surveys everyone within the groups, a technique often used in national-scale studies.

Non-random sampling

Non-random sampling is commonly applied when qualitative methods (e.g. focus groups and interviews) are used to collect data, and is typically used for exploratory work. Non-random sampling deliberately targets individuals within a population. There are three main techniques. (1) purposive sampling: a specific population is identified and only its members are included in the survey; using our example above, the hospital may decide to survey only patients who had an appendectomy. (2) Convenience sampling: the sample is made up of the individuals who are the easiest to recruit. Finally, (3) snowballing: the sample is identified as the survey progresses; as one individual is surveyed he or she is invited to recommend others to be surveyed.

It is important to use the right method of sampling and to be aware of the limitations and statistical implications of each. The need to ensure that the sample is representative of the larger population was highlighted earlier and, alongside the sampling method, the degree of sampling error should be considered. Sampling error is the probability that any one sample is not completely representative of the population from which it has been drawn [ 9 ]. Although sampling error cannot be eliminated entirely, the sampling technique chosen will influence the extent of the error. Simple random sampling will give a closer estimate of the population than a convenience sample of individuals who just happened to be in the right place at the right time.

Sample size

What sample size is required for a survey? There is no definitive answer to this question: large samples with rigorous selection are more powerful as they will yield more accurate results, but data collection and analysis will be proportionately more time consuming and expensive. Essentially, the target sample size for a survey depends on three main factors: the resources available, the aim of the study, and the statistical quality needed for the survey. For ‘qualitative’ surveys using focus groups or interviews, the sample size needed will be smaller than if quantitative data is collected by questionnaire. If statistical analysis is to be performed on the data then sample size calculations should be conducted. This can be done using computer packages such as G * Power [ 10 ]; however, those with little statistical knowledge should consult a statistician. For practical recommendations on sample size, the set of survey guidelines developed by the UK Department of Health [ 11 ] should be consulted.

Larger samples give a better estimate of the population but it can be difficult to obtain an adequate number of responses. It is rare that everyone asked to participate in the survey will reply. To ensure a sufficient number of responses, include an estimated non-response rate in the sample size calculations.

Response rates are a potential source of bias. The results from a survey with a large non-response rate could be misleading and only representative of those who replied. French [ 12 ] reported that non-responders to patient satisfaction surveys are less likely to be satisfied than people who reply. It is unwise to define a level above which a response rate is acceptable, as this depends on many local factors; however, an achievable and acceptable rate is ∼75% for interviews and 65% for self-completion postal questionnaires [ 9 , 13 ]. In any study, the final response rate should be reported with the results; potential differences between the respondents and non-respondents should be explicitly explored and their implications discussed.

There are techniques to increase response rates. A questionnaire must be concise and easy to understand, reminders should be sent out, and method of recruitment should be carefully considered. Sitzia and Wood [ 13 ] found that participants recruited by mail or who had to respond by mail had a lower mean response rate (67%) than participants who were recruited personally (mean response 76.7%). A most useful review of methods to maximize response rates in postal surveys has recently been published [ 14 ].

Researchers should approach data collection in a rigorous and ethical manner. The following information must be clearly recorded:

How, where, how many times, and by whom potential respondents were contacted.

How many people were approached and how many of those agreed to participate.

How did those who agreed to participate differ from those who refused with regard to characteristics of interest in the study, for example how were they identified, where were they approached, and what was their gender, age, and features of their illness or health care.

How was the survey administered (e.g. telephone interview).

What was the response rate (i.e. the number of usable data sets as a proportion of the number of people approached).

The purpose of all analyses is to summarize data so that it is easily understood and provides the answers to our original questions: ‘In order to do this researchers must carefully examine their data; they should become friends with their data’ [ 15 ]. Researchers must prepare to spend substantial time on the data analysis phase of a survey (and this should be built into the project plan). When analysis is rushed, often important aspects of the data are missed and sometimes the wrong analyses are conducted, leading to both inaccurate results and misleading conclusions [ 16 ]. However, and this point cannot be stressed strongly enough, researchers must not engage in data dredging, a practice that can arise especially in studies in which large numbers of dependent variables can be related to large numbers of independent variables (outcomes). When large numbers of possible associations in a dataset are reviewed at P < 0.05, one in 20 of the associations by chance will appear ‘statistically significant’; in datasets where only a few real associations exist, testing at this significance level will result in the large majority of findings still being false positives [ 17 ].

The method of data analysis will depend on the design of the survey and should have been carefully considered in the planning stages of the survey. Data collected by qualitative methods should be analysed using established methods such as content analysis [ 18 ], and where quantitative methods have been used appropriate statistical tests can be applied. Describing methods of analysis here would be unproductive as a multitude of introductory textbooks and on-line resources are available to help with simple analyses of data (e.g. [ 19 , 20 ]). For advanced analysis a statistician should be consulted.

When reporting survey research, it is essential that a number of key points are covered (though the length and depth of reporting will be dependent upon journal style). These key points are presented as a ‘checklist’ below:

Explain the purpose or aim of the research, with the explicit identification of the research question.

Explain why the research was necessary and place the study in context, drawing upon previous work in relevant fields (the literature review).

State the chosen research method or methods, and justify why this method was chosen.

Describe the research tool. If an existing tool is used, briefly state its psychometric properties and provide references to the original development work. If a new tool is used, you should include an entire section describing the steps undertaken to develop and test the tool, including results of psychometric testing.

Describe how the sample was selected and how data were collected, including:

How were potential subjects identified?

How many and what type of attempts were made to contact subjects?

Who approached potential subjects?

Where were potential subjects approached?

How was informed consent obtained?

How many agreed to participate?

How did those who agreed differ from those who did not agree?

What was the response rate?

Describe and justify the methods and tests used for data analysis.

Present the results of the research. The results section should be clear, factual, and concise.

Interpret and discuss the findings. This ‘discussion’ section should not simply reiterate results; it should provide the author’s critical reflection upon both the results and the processes of data collection. The discussion should assess how well the study met the research question, should describe the problems encountered in the research, and should honestly judge the limitations of the work.

Present conclusions and recommendations.

The researcher needs to tailor the research report to meet:

The expectations of the specific audience for whom the work is being written.

The conventions that operate at a general level with respect to the production of reports on research in the social sciences.

Anyone involved in collecting data from patients has an ethical duty to respect each individual participant’s autonomy. Any survey should be conducted in an ethical manner and one that accords with best research practice. Two important ethical issues to adhere to when conducting a survey are confidentiality and informed consent.

The respondent’s right to confidentiality should always be respected and any legal requirements on data protection adhered to. In the majority of surveys, the patient should be fully informed about the aims of the survey, and the patient’s consent to participate in the survey must be obtained and recorded.

The professional bodies listed below, among many others, provide guidance on the ethical conduct of research and surveys.

American Psychological Association: http://www.apa.org

British Psychological Society: http://www.bps.org.uk

British Medical Association: http://www.bma.org.uk .

UK General Medical Council: http://www.gmc-uk.org

American Medical Association: http://www.ama-assn.org

UK Royal College of Nursing: http://www.rcn.org.uk

UK Department of Health: http://www.doh.gov

Survey research demands the same standards in research practice as any other research approach, and journal editors and the broader research community will judge a report of survey research with the same level of rigour as any other research report. This is not to say that survey research need be particularly difficult or complex; the point to emphasize is that researchers should be aware of the steps required in survey research, and should be systematic and thoughtful in the planning, execution, and reporting of the project. Above all, survey research should not be seen as an easy, ‘quick and dirty’ option; such work may adequately fulfil local needs (e.g. a quick survey of hospital staff satisfaction), but will not stand up to academic scrutiny and will not be regarded as having much value as a contribution to knowledge.

Address reprint requests to John Sitzia, Research Department, Worthing Hospital, Lyndhurst Road, Worthing BN11 2DH, West Sussex, UK. E-mail: [email protected]

London School of Economics, UK. Http://booth.lse.ac.uk/ (accessed 15 January 2003 ).

Vernon A. A Quaker Businessman: Biography of Joseph Rowntree (1836–1925) . London: Allen & Unwin, 1958 .

Denscombe M. The Good Research Guide: For Small-scale Social Research Projects . Buckingham: Open University Press, 1998 .

Robson C. Real World Research: A Resource for Social Scientists and Practitioner-researchers . Oxford: Blackwell Publishers, 1993 .

Streiner DL, Norman GR. Health Measurement Scales: A Practical Guide to their Development and Use . Oxford: Oxford University Press, 1995 .

Sitzia J. How valid and reliable are patient satisfaction data? An analysis of 195 studies. Int J Qual Health Care 1999 ; 11: 319 –328.

Bowling A. Research Methods in Health. Investigating Health and Health Services . Buckingham: Open University Press, 2002 .

American Statistical Association, USA. Http://www.amstat.org (accessed 9 December 2002 ).

Arber S. Designing samples. In: Gilbert N, ed. Researching Social Life . London: SAGE Publications, 2001 .

Heinrich Heine University, Dusseldorf, Germany. Http://www.psycho.uni-duesseldorf.de/aap/projects/gpower/index.html (accessed 12 December 2002 ).

Department of Health, England. Http://www.doh.gov.uk/acutesurvey/index.htm (accessed 12 December 2002 ).

French K. Methodological considerations in hospital patient opinion surveys. Int J Nurs Stud 1981 ; 18: 7 –32.

Sitzia J, Wood N. Response rate in patient satisfaction research: an analysis of 210 published studies. Int J Qual Health Care 1998 ; 10: 311 –317.

Edwards P, Roberts I, Clarke M et al. Increasing response rates to postal questionnaires: systematic review. Br Med J 2002 ; 324: 1183 .

Wright DB. Making friends with our data: improving how statistical results are reported. Br J Educ Psychol 2003 ; in press.

Wright DB, Kelley K. Analysing and reporting data. In: Michie S, Abraham C, eds. Health Psychology in Practice . London: SAGE Publications, 2003 ; in press.

Davey Smith G, Ebrahim S. Data dredging, bias, or confounding. Br Med J 2002 ; 325: 1437 –1438.

Morse JM, Field PA. Nursing Research: The Application of Qualitative Approaches . London: Chapman and Hall, 1996 .

Wright DB. Understanding Statistics: An Introduction for the Social Sciences . London: SAGE Publications, 1997 .

Sportscience, New Zealand. Http://www.sportsci.org/resource/stats/index.html (accessed 12 December 2002 ).

Month: Total Views:
January 2017 546
February 2017 1,306
March 2017 1,894
April 2017 654
May 2017 558
June 2017 672
July 2017 1,008
August 2017 1,402
September 2017 1,804
October 2017 1,956
November 2017 2,383
December 2017 12,150
January 2018 14,746
February 2018 14,347
March 2018 19,522
April 2018 22,008
May 2018 21,049
June 2018 16,327
July 2018 15,714
August 2018 16,971
September 2018 13,090
October 2018 13,950
November 2018 18,006
December 2018 14,269
January 2019 13,561
February 2019 14,618
March 2019 18,228
April 2019 21,143
May 2019 19,192
June 2019 14,610
July 2019 13,374
August 2019 14,002
September 2019 17,582
October 2019 18,119
November 2019 15,591
December 2019 10,684
January 2020 10,063
February 2020 10,358
March 2020 10,819
April 2020 15,267
May 2020 8,603
June 2020 11,066
July 2020 10,600
August 2020 10,331
September 2020 12,058
October 2020 11,934
November 2020 11,938
December 2020 9,337
January 2021 9,580
February 2021 12,962
March 2021 12,776
April 2021 12,545
May 2021 10,295
June 2021 6,443
July 2021 6,310
August 2021 6,980
September 2021 6,884
October 2021 7,713
November 2021 9,433
December 2021 6,886
January 2022 7,206
February 2022 7,517
March 2022 8,644
April 2022 8,995
May 2022 8,402
June 2022 5,556
July 2022 3,849
August 2022 3,901
September 2022 4,495
October 2022 5,624
November 2022 5,794
December 2022 4,603
January 2023 5,501
February 2023 5,148
March 2023 6,984
April 2023 7,545
May 2023 6,876
June 2023 4,578
July 2023 4,286
August 2023 4,431
September 2023 4,664
October 2023 5,736
November 2023 6,022
December 2023 4,633
January 2024 4,965
February 2024 4,645
March 2024 6,754
April 2024 6,545
May 2024 5,452
June 2024 2,788
July 2024 730

Email alerts

Citing articles via.

  • Recommend to your Library

Affiliations

  • Online ISSN 1464-3677
  • Print ISSN 1353-4505
  • Copyright © 2024 International Society for Quality in Health Care and Oxford University Press
  • About Oxford Academic
  • Publish journals with us
  • University press partners
  • What we publish
  • New features  
  • Open access
  • Institutional account management
  • Rights and permissions
  • Get help with access
  • Accessibility
  • Advertising
  • Media enquiries
  • Oxford University Press
  • Oxford Languages
  • University of Oxford

Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide

  • Copyright © 2024 Oxford University Press
  • Cookie settings
  • Cookie policy
  • Privacy policy
  • Legal notice

This Feature Is Available To Subscribers Only

Sign In or Create an Account

This PDF is available to Subscribers Only

For full access to this pdf, sign in to an existing account, or purchase an annual subscription.

A Comprehensive Guide to Survey Research Methodologies

For decades, researchers and businesses have used survey research to produce statistical data and explore ideas. The survey process is simple, ask questions and analyze the responses to make decisions. Data is what makes the difference between a valid and invalid statement and as the American statistician, W. Edwards Deming said:

“Without data, you’re just another person with an opinion.” - W. Edwards Deming

In this article, we will discuss what survey research is, its brief history, types, common uses, benefits, and the step-by-step process of designing a survey.

What is Survey Research

A survey is a research method that is used to collect data from a group of respondents in order to gain insights and information regarding a particular subject. It’s an excellent method to gather opinions and understand how and why people feel a certain way about different situations and contexts.

Brief History of Survey Research

Survey research may have its roots in the American and English “social surveys” conducted around the turn of the 20th century. The surveys were mainly conducted by researchers and reformers to document the extent of social issues such as poverty. ( 1 ) Despite being a relatively young field to many scientific domains, survey research has experienced three stages of development ( 2 ):

-       First Era (1930-1960)

-       Second Era (1960-1990)

-       Third Era (1990 onwards)

Over the years, survey research adapted to the changing times and technologies. By exploiting the latest technologies, researchers can gain access to the right population from anywhere in the world, analyze the data like never before, and extract useful information.

Survey Research Methods & Types

Survey research can be classified into seven categories based on objective, data sources, methodology, deployment method, and frequency of deployment.

Types of survey research based on objective, data source, methodology, deployment method, and frequency of deployment.

Surveys based on Objective

Exploratory survey research.

Exploratory survey research is aimed at diving deeper into research subjects and finding out more about their context. It’s important for marketing or business strategy and the focus is to discover ideas and insights instead of gathering statistical data.

Generally, exploratory survey research is composed of open-ended questions that allow respondents to express their thoughts and perspectives. The final responses present information from various sources that can lead to fresh initiatives.

Predictive Survey Research

Predictive survey research is also called causal survey research. It’s preplanned, structured, and quantitative in nature. It’s often referred to as conclusive research as it tries to explain the cause-and-effect relationship between different variables. The objective is to understand which variables are causes and which are effects and the nature of the relationship between both variables.

Descriptive Survey Research

Descriptive survey research is largely observational and is ideal for gathering numeric data. Due to its quantitative nature, it’s often compared to exploratory survey research. The difference between the two is that descriptive research is structured and pre-planned.

 The idea behind descriptive research is to describe the mindset and opinion of a particular group of people on a given subject. The questions are every day multiple choices and users must choose from predefined categories. With predefined choices, you don’t get unique insights, rather, statistically inferable data.

Survey Research Types based on Concept Testing

Monadic concept testing.

Monadic testing is a survey research methodology in which the respondents are split into multiple groups and ask each group questions about a separate concept in isolation. Generally, monadic surveys are hyper-focused on a particular concept and shorter in duration. The important thing in monadic surveys is to avoid getting off-topic or exhausting the respondents with too many questions.

Sequential Monadic Concept Testing

Another approach to monadic testing is sequential monadic testing. In sequential monadic surveys, groups of respondents are surveyed in isolation. However, instead of surveying three groups on three different concepts, the researchers survey the same groups of people on three distinct concepts one after another. In a sequential monadic survey, at least two topics are included (in random order), and the same questions are asked for each concept to eliminate bias.

Based on Data Source

Primary data.

Data obtained directly from the source or target population is referred to as primary survey data. When it comes to primary data collection, researchers usually devise a set of questions and invite people with knowledge of the subject to respond. The main sources of primary data are interviews, questionnaires, surveys, and observation methods.

 Compared to secondary data, primary data is gathered from first-hand sources and is more reliable. However, the process of primary data collection is both costly and time-consuming.

Secondary Data

Survey research is generally used to collect first-hand information from a respondent. However, surveys can also be designed to collect and process secondary data. It’s collected from third-party sources or primary sources in the past.

 This type of data is usually generic, readily available, and cheaper than primary data collection. Some common sources of secondary data are books, data collected from older surveys, online data, and data from government archives. Beware that you might compromise the validity of your findings if you end up with irrelevant or inflated data.

Based on Research Method

Quantitative research.

Quantitative research is a popular research methodology that is used to collect numeric data in a systematic investigation. It’s frequently used in research contexts where statistical data is required, such as sciences or social sciences. Quantitative research methods include polls, systematic observations, and face-to-face interviews.

Qualitative Research

Qualitative research is a research methodology where you collect non-numeric data from research participants. In this context, the participants are not restricted to a specific system and provide open-ended information. Some common qualitative research methods include focus groups, one-on-one interviews, observations, and case studies.

Based on Deployment Method

Online surveys.

With technology advancing rapidly, the most popular method of survey research is an online survey. With the internet, you can not only reach a broader audience but also design and customize a survey and deploy it from anywhere. Online surveys have outperformed offline survey methods as they are less expensive and allow researchers to easily collect and analyze data from a large sample.

Paper or Print Surveys

As the name suggests, paper or print surveys use the traditional paper and pencil approach to collect data. Before the invention of computers, paper surveys were the survey method of choice.

Though many would assume that surveys are no longer conducted on paper, it's still a reliable method of collecting information during field research and data collection. However, unlike online surveys, paper surveys are expensive and require extra human resources.

Telephonic Surveys

Telephonic surveys are conducted over telephones where a researcher asks a series of questions to the respondent on the other end. Contacting respondents over a telephone requires less effort, human resources, and is less expensive.

What makes telephonic surveys debatable is that people are often reluctant in giving information over a phone call. Additionally, the success of such surveys depends largely on whether people are willing to invest their time on a phone call answering questions.

One-on-one Surveys

One-on-one surveys also known as face-to-face surveys are interviews where the researcher and respondent. Interacting directly with the respondent introduces the human factor into the survey.

Face-to-face interviews are useful when the researcher wants to discuss something personal with the respondent. The response rates in such surveys are always higher as the interview is being conducted in person. However, these surveys are quite expensive and the success of these depends on the knowledge and experience of the researcher.

Based on Distribution

The easiest and most common way of conducting online surveys is sending out an email. Sending out surveys via emails has a higher response rate as your target audience already knows about your brand and is likely to engage.

Buy Survey Responses

Purchasing survey responses also yields higher responses as the responders signed up for the survey. Businesses often purchase survey samples to conduct extensive research. Here, the target audience is often pre-screened to check if they're qualified to take part in the research.

Embedding Survey on a Website

Embedding surveys on a website is another excellent way to collect information. It allows your website visitors to take part in a survey without ever leaving the website and can be done while a person is entering or exiting the website.

Post the Survey on Social Media

Social media is an excellent medium to reach abroad range of audiences. You can publish your survey as a link on social media and people who are following the brand can take part and answer questions.

Based on Frequency of Deployment

Cross-sectional studies.

Cross-sectional studies are administered to a small sample from a large population within a short period of time. This provides researchers a peek into what the respondents are thinking at a given time. The surveys are usually short, precise, and specific to a particular situation.

Longitudinal Surveys

Longitudinal surveys are an extension of cross-sectional studies where researchers make an observation and collect data over extended periods of time. This type of survey can be further divided into three types:

-       Trend surveys are employed to allow researchers to understand the change in the thought process of the respondents over some time.

-       Panel surveys are administered to the same group of people over multiple years. These are usually expensive and researchers must stick to their panel to gather unbiased opinions.

-       In cohort surveys, researchers identify a specific category of people and regularly survey them. Unlike panel surveys, the same people do not need to take part over the years, but each individual must fall into the researcher’s primary interest category.

Retrospective Survey

Retrospective surveys allow researchers to ask questions to gather data about past events and beliefs of the respondents. Since retrospective surveys also require years of data, they are similar to the longitudinal survey, except retrospective surveys are shorter and less expensive.

Why Should You Conduct Research Surveys?

“In God we trust. All others must bring data” - W. Edwards Deming

 In the information age, survey research is of utmost importance and essential for understanding the opinion of your target population. Whether you’re launching a new product or conducting a social survey, the tool can be used to collect specific information from a defined set of respondents. The data collected via surveys can be further used by organizations to make informed decisions.

Furthermore, compared to other research methods, surveys are relatively inexpensive even if you’re giving out incentives. Compared to the older methods such as telephonic or paper surveys, online surveys have a smaller cost and the number of responses is higher.

 What makes surveys useful is that they describe the characteristics of a large population. With a larger sample size , you can rely on getting more accurate results. However, you also need honest and open answers for accurate results. Since surveys are also anonymous and the responses remain confidential, respondents provide candid and accurate answers.

Common Uses of a Survey

Surveys are widely used in many sectors, but the most common uses of the survey research include:

-       Market research : surveying a potential market to understand customer needs, preferences, and market demand.

-       Customer Satisfaction: finding out your customer’s opinions about your services, products, or companies .

-       Social research: investigating the characteristics and experiences of various social groups.

-       Health research: collecting data about patients’ symptoms and treatments.

-       Politics: evaluating public opinion regarding policies and political parties.

-       Psychology: exploring personality traits, behaviors, and preferences.

6 Steps to Conduct Survey Research

An organization, person, or company conducts a survey when they need the information to make a decision but have insufficient data on hand. Following are six simple steps that can help you design a great survey.

Step 1: Objective of the Survey

The first step in survey research is defining an objective. The objective helps you define your target population and samples. The target population is the specific group of people you want to collect data from and since it’s rarely possible to survey the entire population, we target a specific sample from it. Defining a survey objective also benefits your respondents by helping them understand the reason behind the survey.

Step 2: Number of Questions

The number of questions or the size of the survey depends on the survey objective. However, it’s important to ensure that there are no redundant queries and the questions are in a logical order. Rephrased and repeated questions in a survey are almost as frustrating as in real life. For a higher completion rate, keep the questionnaire small so that the respondents stay engaged to the very end. The ideal length of an interview is less than 15 minutes. ( 2 )

Step 3: Language and Voice of Questions

While designing a survey, you may feel compelled to use fancy language. However, remember that difficult language is associated with higher survey dropout rates. You need to speak to the respondent in a clear, concise, and neutral manner, and ask simple questions. If your survey respondents are bilingual, then adding an option to translate your questions into another language can also prove beneficial.

Step 4: Type of Questions

In a survey, you can include any type of questions and even both closed-ended or open-ended questions. However, opt for the question types that are the easiest to understand for the respondents, and offer the most value. For example, compared to open-ended questions, people prefer to answer close-ended questions such as MCQs (multiple choice questions)and NPS (net promoter score) questions.

Step 5: User Experience

Designing a great survey is about more than just questions. A lot of researchers underestimate the importance of user experience and how it affects their response and completion rates. An inconsistent, difficult-to-navigate survey with technical errors and poor color choice is unappealing for the respondents. Make sure that your survey is easy to navigate for everyone and if you’re using rating scales, they remain consistent throughout the research study.

Additionally, don’t forget to design a good survey experience for both mobile and desktop users. According to Pew Research Center, nearly half of the smartphone users access the internet mainly from their mobile phones and 14 percent of American adults are smartphone-only internet users. ( 3 )

Step 6: Survey Logic

Last but not least, logic is another critical aspect of the survey design. If the survey logic is flawed, respondents may not continue in the right direction. Make sure to test the logic to ensure that selecting one answer leads to the next logical question instead of a series of unrelated queries.

How to Effectively Use Survey Research with Starlight Analytics

Designing and conducting a survey is almost as much science as it is an art. To craft great survey research, you need technical skills, consider the psychological elements, and have a broad understanding of marketing.

The ultimate goal of the survey is to ask the right questions in the right manner to acquire the right results.

Bringing a new product to the market is a long process and requires a lot of research and analysis. In your journey to gather information or ideas for your business, Starlight Analytics can be an excellent guide. Starlight Analytics' product concept testing helps you measure your product's market demand and refine product features and benefits so you can launch with confidence. The process starts with custom research to design the survey according to your needs, execute the survey, and deliver the key insights on time.

  • Survey research in the United States: roots and emergence, 1890-1960 https://searchworks.stanford.edu/view/10733873    
  • How to create a survey questionnaire that gets great responses https://luc.id/knowledgehub/how-to-create-a-survey-questionnaire-that-gets-great-responses/    
  • Internet/broadband fact sheet https://www.pewresearch.org/internet/fact-sheet/internet-broadband/    

Related Articles

Market growth: tap into your full market potential.

Market growth rate is the change in a market’s size over a given period, typically expressed as a positive or negative percentage.

Test Marketing | How to Test Market a New Product

Test marketing is a tool used to help companies test their product and gather customer feedback before its launch. Click here to learn more about it.

Four Critical Pieces of Market Research Your Investors Expect to See

Investors, banks, and venture capitalists understand that a sure thing doesn’t exist but it does not stop them asking for a tangible guarantee. For all the charisma anyone can offer, effective market research carries more weight as you can detail what value you are bringing to the market, who the obtainable market is, how you reach them and what weaknesses exist within your potential competition.

Product Life Cycle | What is it and What are the Stages?

The product life cycle outlines a product's journey from introduction to decline. Learn all about the product life cycle, its stages, and examples here.

The Introduction Stage of Product Life Cycle | What to Know

The introduction stage in the product life cycle is meant to build product awareness. Click here to learn more about the introduction stage and how it works.

Real-Life Voice of Customer Examples & Takeaways

Voice of Customer (VoC) is a market research term that describes customer experiences, expectations, and needs.

U.S. flag

An official website of the United States government

The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings

Preview improvements coming to the PMC website in October 2024. Learn More or Try it out now .

  • Advanced Search
  • Journal List
  • Elsevier - PMC COVID-19 Collection

Logo of pheelsevier

A critical look at online survey or questionnaire-based research studies during COVID-19

In view of restrictions imposed to control COVID-19 pandemic, there has been a surge in online survey-based studies because of its ability to collect data with greater ease and faster speed compared to traditional methods. However, there are important concerns about the validity and generalizability of findings obtained using the online survey methodology. Further, there are data privacy concerns and ethical issues unique to these studies due to the electronic and online nature of survey data. Here, we describe some of the important issues associated with poor scientific quality of online survey findings, and provide suggestions to address them in future studies going ahead.

1. Introduction

Online survey or questionnaire-based studies collect information from participants responding to the study link using internet-based communication technology (e.g. E-mail, online survey platform). There has been a growing interest among researchers for using internet-based data collection methods during the COVID-19 pandemic, also reflected in the rising number studies employing online survey to collect data since the beginning of COVID-19 pandemic ( Akintunde et al., 2021 ). This could be due to the relative ease of online data collection over traditional face-to-face interviews while following the travel restrictions and distancing guidelines for controlling the spread of COVID-19 pandemic. Further, it offers a cost-effective and faster way of data collection (with no interviewer requirement and automatic data entry) as compared to other means of remote data collection (e.g. telephonic interview) ( Hlatshwako et al., 2021 ), both of which are important for getting rapid results to guide development and implementation public-health interventions for preventing and/or mitigating the harms related to COVID-19 pandemic (e.g. mental health effects of COVID-19, misconceptions related to spread of COVID-19, factors affecting vaccine hesitancy etc.). However, there have been several concerns raised about the validity and generalizability of findings obtained from online survey studies ( Andrade et al., 2020 ; Sagar et al., 2020 ). Here, we describe some of the important issues associated with scientific quality of online survey findings, and provide suggestions to address them in future studies going ahead. The data privacy concerns and ethical issues unique to these studies due to the electronic and online nature survey data have also briefly discussed.

2. Limited generalizability of online survey sample to the target general population

The findings obtained from online surveys need to be generalized to the target population in the real world. For this, the online survey population needs to be clearly defined and should be representative of the target population as much as possible. This would be possible when there is reliable sampling frame for online surveys, and participants could be selected using randomized or probability sampling method. However, online surveys are often conducted via email or online survey platform, with survey link shared on social media platforms or websites or directory of email ids accessed by researchers. Also, participants might be asked to share the survey link further with their eligible contacts. In turn, the population from which the study sample is selected often not clearly defined, and information about response rates (i.e. out of the total number people who viewed the survey link, how many of them did actually respond) are seldom available with the researcher. This makes generalization of study findings unreliable.

This problem may be addressed by sending survey link individually to all the people comprising the study population via email and/ or telephonic message (e.g. all the members of a professional society through membership directory, people residing in a society through official records etc.), with a request not to share the survey link with anyone else. Alternatively, required number of people could be randomly selected from the entire list of potential subjects and approached telephonically for taking consent. Basic socio-demographic details could be obtained from those who refused to participate and share the survey link with those agreeing to participate. Although, if the response rates are low or the socio-demographic details of non-responders significantly differ from that of responders, then the online survey sample is unlikely to be representative of the target study population. Further, this is a more resource intensive strategy and might not be always feasible (as it requires a list of contact details for the entire study population prior to beginning of data collection). In certain situations, when the area of research is relatively new and/or needs urgent exploration for hypothesis generation or guiding immediate response; the online survey study should list all possible attempts made to achieve a representative sample and clearly acknowledge it as a limitation while discussing their study findings ( Zhou et al., 2021 ).

A more recent innovative solution to this problem involves partnership between academic institutions (Maryland University and Carnegie Mellon University) and the Facebook company for conducting online COVID-19 related research ( Barkay et al., 2020 ). The COVID-19 Symptom Survey (CSS) conducted (in more than 200 countries since April 2020) using this approach involves exchange of information between the researchers and the Facebook without compromising the data privacy of information collected from survey participants. The survey link is shared on the Facebook, and user voluntary choose to participate in the study. The Facebook’s active user base is leveraged to provide a reliable sampling frame for the CSS survey. The researchers select random ID numbers for the users who completed the survey, and calculate survey weights for each them on a given day. Survey weights adjust for both non-response errors (helps in making them sample more representative of the Facebook users) and coverage related errors (helps in making generalizing findings obtained using FAUB to the general population) ( Barkay et al., 2020 ). A respondent belonging to a demographic group with a high likelihood of responding to the survey might get a weight of 10, whereas another respondent belonging to a demographic group with less likelihood of responding to survey might get a weight of 50. It also accounts for the proportion or density of Facebook or internet users in a given geographical area. Thus, findings obtained using this approach could be used for drawing inferences about the target general population. The survey weights to be used for weighted analysis of global CSS survey findings for different geographical regions are available to researchers upon request from either of the two above-mentioned academic institutions. For example, spatio-temporal trends in COVID-19 vaccine related hesitancy across different states of India was estimated by a group of Indian researchers using this approach ( Chowdhury et al., 2021 ).

3. Survey fraud and participant disinterest

Survey fraud is when a person takes the online survey more than once with or without any malicious intent (e.g. monetary compensation, helping researchers collect the requisite number of responses). Another related problem is when the participant responds to some or all the survey questions in a casual manner without actually making any attempt at reading and/or understanding them due to reasons like participant disinterest or survey fatigue. This affects the representativeness and validity of online survey findings, and is increasingly being recognized as an important challenge for researchers ( Chandler et al., 2020 ). While providing monetary incentives improves low response rates, it also increases the risk of survey fraud. Similarly, having a shorter survey length with few simple questions decreases the chances of survey fatigue, but limits the ability of researchers to obtain meaningful information about relatively complex issues. A researcher can take different approaches to address these concerns, ranging from relatively simpler ones such as requesting people to not participate more than once, providing different kind of monetary incentives (e.g. donation to a charity instead of the participant), or manually checking survey responses for inconsistent (e.g. age and date of birth responses not consistent) or implausible response patterns (e.g. average daily smartphone use of greater than 24 h, “all or none” response pattern) to more complex ones involving use of computer software or online survey platform features to block multiple entries by same person using IP address and/or internet cookies check, analysis of response time, latency or total time taken to complete survey for detecting fraudulent responses. There have been several different ways described in the available literature to detect fraudulent or inattentive survey responses, with a discussion about merits and demerits of each of them ( Teitcher et al., 2015 ). However, no single method is completely fool proof, and it is recommended to use a combination of different methods to ensure adequate data quality in online surveys.

4. Possible bias introduced in results by the online survey administration mode

One of the contributory reasons for surge in online survey studies assessing mental health related aspects during the COVID-19 pandemic stems from the general thought that psychiatry research could be easily accomplished through scales or questionnaires administered through online survey methods, especially with the reliance on physical examination and other investigation findings being much less or non-existent. However, the reliability and validity of the scales or instruments used in online surveys have been traditionally established in studies administering them in face-to-face settings (often in pen/pencil-paper format) rather than online mode. There could be variation introduced in the results with different survey administration modes, which is often described as the measurement effect ( Jäckle et al., 2010 ). This could be due to differences in the participants’ level of engagement, understanding of questions, social desirability bias experienced across different survey administration methods. Few studies using the same study sample or sample sampling frame have compared the results obtained with difference in survey administration mode (ie. traditional face-to-face [paper format] vs. online survey), with mixed findings suggesting large significant differences to small significant difference or no significant differences ( Determann et al., 2017 , Norman et al., 2010 , Saloniki et al., 2019 ). This suggests the need for conducting further studies before arriving at a final conclusion. Hence, we need to be careful while interpreting the results of online survey studies. Ideally, online survey findings should be compared with those obtained using traditional survey administration mode, and validation studies should be conducted to establish the psychometric properties of these scales for online survey mode.

5. Inadequately described online survey methodology

A recent systematic review assessing the quality of 80 online survey based published studies assessing the mental health impact of COVID-19 pandemic, reported that a large majority of them did not adhere to the CHERRIES (Checklist for Reporting Results of Internet E-Surveys) guideline aimed at improving the quality of online surveys ( Eysenbach, 2004 , Sharma et al., 2021 ). Information related to parameters such as view rate (Ratio of unique survey visitors/unique site visitors), participation rate (Ratio of unique visitors who agreed to participate/unique first survey page visitors), and completion rate (Ratio of users who finished the survey/users who agreed to participate); which gives an idea about the representativeness of the online study sample as described previously were not mentioned in about two-third studies. Similarly, information about steps taken to prevent multiple entries by same participant or analysis of atypical timestamps to check for fraudulent and inattentive survey responses was provided by less than 5% studies. Thus, it is imperative to popularize and emphasize upon the use of these reporting guidelines for online survey studies to improve the scientific value of findings obtained from internet-based studies.

6. Data privacy and ethics of online survey studies

Lastly, most of the online survey studies either did not mention at all or mentioned in passing about maintain the anonymity and confidentiality of information obtained from online survey. However, details about the various steps or precautions taken by the researchers to ensure data safety and privacy were seldom mentioned (e.g. de-identified data, encryption process or password protected data storage, use of HIPAA-compliant online survey form/platform etc.). The details and limitations of safety steps taken, and the possibility of data leak should be clearly mentioned/ communicated to participants at the time of taking informed consent (rather than simply mentioning anonymity and confidentiality of information obtained will be ensured, as is the case with offline studies). Moreover, obtaining ethical approval prior to conducting online survey studies is a must. The various ethical concerns unique to online survey methodology (e.g. issues with data protection, informed consent process, survey fraud, online survey administration etc.) should be adequately described in the protocol and deliberated upon by the review boards ( Buchanan and Hvizdak, 2009 , Gupta, 2017 ).

In conclusion, there is an urgent need to consider the above described issues while planning and conducting an online survey, and also reviewing the findings obtained from these studies to improve the overall quality and utility of internet-based research during COVID-19 and post-COVID era.

Financial disclosure

The authors did not receive any funding for this work.

Acknowledgments

Conflict of interest.

The authors have no conflict of interest to declare.

  • Akintunde T.Y., Musa T.H., Musa H.H., Musa I.H., Chen S., Ibrahim E., Tassang A.E., Helmy M. Bibliometric analysis of global scientific literature on effects of COVID-19 pandemic on mental health. Asian J. Psychiatry. 2021; 63 doi: 10.1016/j.ajp.2021.102753. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Andrade C. The limitations of online surveys. Indian J. Psychol. Med. 2020; 42 (6):575–576. doi: 10.1177/0253717620957496. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Barkay, N., Cobb, C., Eilat, R., Galili, T., Haimovich, D., LaRocca, S., ..., Sarig, T., 2020. Weights and methodology brief for the COVID-19 symptom survey by University of Maryland and Carnegie Mellon University, in partnership with Facebook, arXiv preprint arXiv:2009.14675.
  • Buchanan E.A., Hvizdak E.E. Online survey tools: ethical and methodological concerns of human research ethics committees. J. Empir. Res. Hum. Res. Ethics. 2009; 4 (2):37–48. doi: 10.1525/jer.2009.4.2.37. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Chandler J., Sisso I., Shapiro D. Participant carelessness and fraud: consequences for clinical research and potential solutions. J. Abnorm. Psychol. 2020; 129 (1):49–55. doi: 10.1037/abn0000479. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Chowdhury, S.R., Motheram, A., Pramanik, S., 2021. Covid-19 vaccine hesitancy: trends across states, over time. Ideas For India, 14 April, Available from: 〈https://www.ideasforindia.in/topics/governance/covid-19-vaccine-hesitancy-trends-across-states-over-time.html%20%20〉 (Accessed 4 August 2021).
  • Determann D., Lambooij M.S., Steyerberg E.W., de Bekker-Grob E.W., de Wit G.A. Impact of survey administration mode on the results of a health-related discrete choice experiment: online and paper comparison. Value Health.: J. Int. Soc. Pharm. Outcomes Res. 2017; 20 (7):953–960. doi: 10.1016/j.jval.2017.02.007. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Eysenbach G. Improving the quality of web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES) J. Med. Internet Res. 2004; 6 (3):34. doi: 10.2196/jmir.6.3.e34. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Gupta S. Ethical issues in designing internet-based research: recommendations for good practice. J. Res. Pract. 2017; 13 (2) Article D1. [ Google Scholar ]
  • Hlatshwako T.G., Shah S.J., Kosana P., Adebayo E., Hendriks J., Larsson E.C., Hensel D.J., Erausquin J.T., Marks M., Michielsen K., Saltis H., Francis J.M., Wouters E., Tucker J.D. Online health survey research during COVID-19. Lancet Digit. Health. 2021; 3 (2):e76–e77. doi: 10.1016/S2589-7500(21)00002-9. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Jäckle A., Roberts C., Lynn P. Assessing the effect of data collection mode on measurement. Int. Stat. Rev. 2010; 78 (1):3–20. doi: 10.1111/j.1751-5823.2010.00102.x. [ CrossRef ] [ Google Scholar ]
  • Norman R., King M.T., Clarke D., Viney R., Cronin P., Street D. Does mode of administration matter? Comparison of online and face-to-face administration of a time trade-off task. Qual. Life Res.: Int. J. Qual. Life Asp. Treat., Care Rehabil. 2010; 19 (4):499–508. doi: 10.1007/s11136-010-9609-5. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Sagar R., Chawla N., Sen M.S. Is it correct to estimate mental disorder through online surveys during COVID-19 pandemic? Psychiatry Res. 2020; 291 doi: 10.1016/j.psychres.2020.113251. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Saloniki E.C., Malley J., Burge P., Lu H., Batchelder L., Linnosmaa I., Trukeschitz B., Forder J. Comparing internet and face-to-face surveys as methods for eliciting preferences for social care-related quality of life: evidence from England using the ASCOT service user measure. Qual. Life Res.: Int. J. Qual. Life Asp. Treat., Care Rehabil. 2019; 28 (8):2207–2220. doi: 10.1007/s11136-019-02172-2. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Sharma R., Tikka S.K., Bhute A.R., Bastia B.K. Adherence of online surveys on mental health during the early part of the COVID-19 outbreak to standard reporting guidelines: a systematic review. Asian J. Psychiatry. 2021; 65 doi: 10.1016/j.ajp.2021.102799. (Advance online publication) [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Teitcher J.E., Bockting W.O., Bauermeister J.A., Hoefer C.J., Miner M.H., Klitzman R.L. Detecting, preventing, and responding to “fraudsters” in internet research: ethics and tradeoffs. J. Law, Med. Ethics: J. Am. Soc. Law, Med. Ethics. 2015; 43 (1):116–133. doi: 10.1111/jlme.12200. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Zhou T., Chen W., Liu X., Wu T., Wen L., Yang X., Hou Z., Chen B., Zhang T., Zhang C., Xie C., Zhou X., Wang L., Hua J., Tang Q., Zhao M., Hong X., Liu W., Du C., Li Y., Yu X. Children of parents with mental illness in the COVID-19pandemic: a cross-sectional survey in China. Asian J. Psychiatry. 2021; 64 doi: 10.1016/j.ajp.2021.102801. (Advance online publication) [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Utility Menu

University Logo

Harvard University Program on Survey Research

The survey statistician.

This newsletter, produced by the International Association for Survey Statisticians, covers essential topics in sampling approaches.

Survey Research Newsletter

Survey Research is a subscription based newsletter published three times a year and serves as a clearinghouse for information about academic and not-for-profit survey research organizations around the world.   Past issues are available on-line.

http://www....

Survey Research Methods

This peer-reviewed journal aims to be a high quality scientific publication that will be of interest to researchers in all disciplines involved in the design, implementation and analysis of surveys. The journal is published electronically with free and open access via the internet.

www.princeton.edu/~psrc/

http://srcweb.berkeley.edu/

International Journal of Public Opinion Research

The International Journal of Public Opinion Research is a source of informed analysis and comment for both professionals and academics. Edited by a board drawn from over a dozen countries and several disciplines, and operated on a professional referee system, the journal is the first...

Survey Methodology

Survey Methodology, published by Statistics Canada, publishes articles dealing with various aspects of statistical development such as design issues in the context of practical constraints, use of different data sources and collection techniques, total survey error, survey evaluation, research...

Journal of Official Statistics

The Journal of Official Statistics is published by Statistics Sweden, the national statistical office of Sweden. The journal publishes articles on statistical methodology and theory, with an emphasis on applications. It is an open access journal, which gives the right to users to read,...

Public Opinion Quarterly

Public Opinion Quarterly (POQ), published by the American Association for Public Opinion Research (AAPOR), is one of the major journals which covers both survey methods and public opinion research.  Available through the Harvard library and most other university library systems.  POQ...

Survey Practice

Survey Practice is an on-line journal about the practice of survey research published by the American Association of Public Opinion Research (AAPOR).

PublicOpinionPros.com

An Online Magazine for the Public Opinion Professional (and Anybody Else)

U.S. flag

An official website of the United States government

The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings
  • My Bibliography
  • Collections
  • Citation manager

Save citation to file

Email citation, add to collections.

  • Create a new collection
  • Add to an existing collection

Add to My Bibliography

Your saved search, create a file for external citation management software, your rss feed.

  • Search in PubMed
  • Search in NLM Catalog
  • Add to Search

Survey research

Affiliation.

  • 1 Ann Arbor, Mich. From the Section of Plastic Surgery, Department of Surgery, The University of Michigan Medical Center; and Division of General Medicine, Department of Internal Medicine, University of Michigan.
  • PMID: 20885261
  • DOI: 10.1097/PRS.0b013e3181ea44f9

Survey research is a unique methodology that can provide insight into individuals' perspectives and experiences and can be collected on a large population-based sample. Specifically, in plastic surgery, survey research can provide patients and providers with accurate and reproducible information to assist with medical decision-making. When using survey methods in research, researchers should develop a conceptual model that explains the relationships of the independent and dependent variables. The items of the survey are of primary importance. Collected data are only useful if they accurately measure the concepts of interest. In addition, administration of the survey must follow basic principles to ensure an adequate response rate and representation of the intended target sample. In this article, the authors review some general concepts important for successful survey research and discuss the many advantages this methodology has for obtaining limitless amounts of valuable information.

PubMed Disclaimer

Similar articles

  • Survey Research. Story DA, Tait AR. Story DA, et al. Anesthesiology. 2019 Feb;130(2):192-202. doi: 10.1097/ALN.0000000000002436. Anesthesiology. 2019. PMID: 30688782 Review.
  • Health-related quality of life in early breast cancer. Groenvold M. Groenvold M. Dan Med Bull. 2010 Sep;57(9):B4184. Dan Med Bull. 2010. PMID: 20816024
  • Survey research techniques. Leaver D. Leaver D. Radiol Technol. 2000 Mar-Apr;71(4):364-75; quiz 376-8. Radiol Technol. 2000. PMID: 10743667 Review.
  • Special issues in assessing care of Medicaid recipients. Brown JA, Nederend SE, Hays RD, Short PF, Farley DO. Brown JA, et al. Med Care. 1999 Mar;37(3 Suppl):MS79-88. doi: 10.1097/00005650-199903001-00009. Med Care. 1999. PMID: 10098562 Clinical Trial.
  • The use of cognitive testing to develop and evaluate CAHPS 1.0 core survey items. Consumer Assessment of Health Plans Study. Harris-Kojetin LD, Fowler FJ Jr, Brown JA, Schnaier JA, Sweeny SF. Harris-Kojetin LD, et al. Med Care. 1999 Mar;37(3 Suppl):MS10-21. doi: 10.1097/00005650-199903001-00002. Med Care. 1999. PMID: 10098555
  • Factors affecting the support for physical activity in children and adolescents with type 1 diabetes mellitus: a national survey of health care professionals' perceptions. Cockcroft EJ, Wooding EL, Narendran P, Dias RP, Barker AR, Moudiotis C, Clarke R, Andrews RC. Cockcroft EJ, et al. BMC Pediatr. 2023 Mar 22;23(1):131. doi: 10.1186/s12887-023-03940-3. BMC Pediatr. 2023. PMID: 36949473 Free PMC article.
  • Surveys in urology. Kumar R. Kumar R. Indian J Urol. 2022 Jan-Mar;38(1):1-2. doi: 10.4103/iju.iju_455_21. Indian J Urol. 2022. PMID: 35136287 Free PMC article. No abstract available.
  • Redesigning a Healthcare Demand Questionnaire for National Population Survey: Experience of a Developing Country. Chong DWQ, Jawahir S, Tan EH, Sararaks S. Chong DWQ, et al. Int J Environ Res Public Health. 2021 Apr 22;18(9):4435. doi: 10.3390/ijerph18094435. Int J Environ Res Public Health. 2021. PMID: 33921985 Free PMC article.
  • COVID-19 Lockdown and the Behavior Change on Physical Exercise, Pain and Psychological Well-Being: An International Multicentric Study. Sonza A, da Cunha de Sá-Caputo D, Sartorio A, Tamini S, Seixas A, Sanudo B, Süßenbach J, Provenza MM, Xavier VL, Taiar R, Bernardo-Filho M. Sonza A, et al. Int J Environ Res Public Health. 2021 Apr 6;18(7):3810. doi: 10.3390/ijerph18073810. Int J Environ Res Public Health. 2021. PMID: 33917363 Free PMC article.
  • A Consensus-Based Checklist for Reporting of Survey Studies (CROSS). Sharma A, Minh Duc NT, Luu Lam Thang T, Nam NH, Ng SJ, Abbas KS, Huy NT, Marušić A, Paul CL, Kwok J, Karbwang J, de Waure C, Drummond FJ, Kizawa Y, Taal E, Vermeulen J, Lee GHM, Gyedu A, To KG, Verra ML, Jacqz-Aigrain ÉM, Leclercq WKG, Salminen ST, Sherbourne CD, Mintzes B, Lozano S, Tran US, Matsui M, Karamouzian M. Sharma A, et al. J Gen Intern Med. 2021 Oct;36(10):3179-3187. doi: 10.1007/s11606-021-06737-1. Epub 2021 Apr 22. J Gen Intern Med. 2021. PMID: 33886027 Free PMC article. No abstract available.
  • Alderman AK, Ubel P, Kim H, Fox D, Chung K. Surgical management of the rheumatoid hand: Consensus and controversy among rheumatologists and hand surgeons. J Rheumatol. 2003;30:1464–1472.
  • Alderman AK, Chung K, Kim H, Fox D, Ubel P. Effectiveness of rheumatoid hand surgery: Contrasting perceptions of hand surgeons and rheumatologists. J Hand Surg (Am.) 2003;28:3–11.
  • Babbie E. Survey Research Methods. 2nd ed. Belmont, Calif.: Wadsworth; 1998.
  • Alderman AK, Hawley ST, Waljee J, et al. Correlates of referral practices of general surgeons to plastic surgeons for mastectomy reconstruction. Cancer 2007;109:1715–1720.
  • Riegelman R. Studying a Study and Testing a Test. Philadelphia: Lippincott Williams & Wilkins; 2000.

Publication types

  • Search in MeSH

LinkOut - more resources

Full text sources.

  • Ovid Technologies, Inc.
  • Wolters Kluwer

full text provider logo

  • Citation Manager

NCBI Literature Resources

MeSH PMC Bookshelf Disclaimer

The PubMed wordmark and PubMed logo are registered trademarks of the U.S. Department of Health and Human Services (HHS). Unauthorized use of these marks is strictly prohibited.

Numbers, Facts and Trends Shaping Your World

Read our research on:

Full Topic List

Regions & Countries

  • Publications
  • Our Methods
  • Short Reads
  • Tools & Resources

Read Our Research On:

How Pew Research Center Uses Its National Public Opinion Reference Survey (NPORS)

In 2020, Pew Research Center launched a new project called the  National Public Opinion Reference Survey (NPORS) . NPORS is an annual, cross-sectional survey of U.S. adults. Respondents can answer by paper, online or over the phone, and they are selected using address-based sampling from the United States Postal Service’s Computerized Delivery Sequence File. The response rate to the latest NPORS was 32%, and previous years’ surveys were designed with a similarly rigorous approach. 

NPORS estimates are separate from the  American Trends Panel  (ATP) – the Center’s national online survey platform. Pew Research Center launched NPORS to address a limitation that researchers observed in the ATP. While the ATP was well-suited for the vast majority of the Center’s U.S. survey work, estimates for a few outcomes were not in line with other high-quality surveys, even after weighting to demographics like age, education, race and ethnicity, and gender.

For example, in 2018, roughly one-quarter of U.S. adults were religiously unaffiliated (i.e., atheist, agnostic or “nothing in particular”), according to the General Social Survey (GSS) and the Center’s own telephone-based polling . The ATP, however,  estimated the religiously unaffiliated rate at about 32%. The Center did not feel comfortable publishing that ATP estimate because there was too much evidence that the rate was too high, likely because the types of people willing to participate in an online panel skew less religious than the population as a whole. Similarly, the ATP estimate for the share of U.S. adults identifying as a Democrat or leaning to the Democratic Party was somewhat higher than the rate indicated by the GSS and our own telephone surveys .

From 2014 to late 2020, the Center approached these outcomes slightly differently. We addressed the political partisanship issue by weighting every ATP survey to an external benchmark for the share of Americans identifying as a Republican, Democrat or independent. For the benchmark, we used the average of the results from our three most recent national cellphone and landline random-digit-dial (RDD) surveys. 

During this time period, ATP surveys were not weighted to an external benchmark for Americans’ religious affiliation. The ATP was used for some research on religious beliefs and behaviors, but it was not used to estimate the overall share of Americans identifying as religiously affiliated or unaffiliated, nor was it used to estimate the size of particular faith groups, such as Catholics, Protestants or the Church of Jesus Christ of Latter-day Saints. NPORS allows us to improve and harmonize our approach to both these outcomes (Americans’ political and religious affiliations). 

Design and estimates

Read our fact sheet to find the latest NPORS estimates as well as methodological details. Data collection for NPORS was performed by Ipsos from 2020 through 2023 and is now performed by SSRS. 

Why is the NPORS response rate higher than most opinion polls?

Several features of NPORS set it apart from a typical public opinion poll. 

  • People can respond offline or online.  NPORS offers three different ways to respond: by paper (through the mail), online, or by telephone (by calling a provided phone number and speaking to a live interviewer). The paper and telephone options bring in more conservative, more religious adults who are less inclined to take surveys online.
  • Monetary incentives.  When sampled adults are first asked to respond to NPORS online, the mailing contains a $2 incentive payment (cash visible from the outside of the envelope) and offers a $10 incentive payment contingent on the participant completing the survey. When nonrespondents to that first stage are sent the paper version of the survey, the mailing contains a visible $5 bill. These incentives give people a reason to respond, even if they might not be interested in the questions or inclined to take surveys in general. 
  • Priority mailing.  The paper version of the survey is mailed in a USPS Priority Mail envelope, which is more expensive than a normal envelope, signaling that the contents are important and that the mailing is not haphazard. It helps people distinguish the survey from junk mail, increasing the likelihood that they open and read what is inside. 
  • Low burden.  The NPORS questionnaire is intentionally kept short. It’s about 40 questions long, including demographics such as age, gender and education. This means that NPORS takes about seven minutes to finish, while many polls take 10 minutes or longer. 
  • Bilingual materials.  In parts of the country with sizable shares of Hispanic Americans, the materials are sent in both English and Spanish. 
  • No requirement to join a panel.  NPORS respondents are not required to join a survey panel, which for some people would be a reason to decline the request. 

These features are not possible in most public polls for a host of reasons. But NPORS is designed to produce estimates of high enough quality that they can be used as weighting benchmarks for other polls, and so these features are critical.

Why a ‘reference’ survey for public opinion?

The “R” in NPORS stands for “reference.” In this context, the term comes from  studies  in which researchers calibrate a small sample survey to a large, high-quality survey with greater precision and accuracy. Examples of reference surveys used by researchers include the Census Bureau’s American Community Survey (ACS) and the Current Population Survey (CPS). NPORS is not on the scale of the ACS or CPS, nor does it feature face-to-face data collection. But it does have something that those studies lack: timely estimates of key public opinion outcomes. Other studies like the American National Election Survey (ANES) and the General Social Survey collect key public opinion measures, but their data is released months, if not years, after data collection. The ANES, while invaluable to academic researchers, also excludes noncitizens who constitute about 7% of adults living in the U.S. and are included in the Center’s surveys.

NPORS is truly a reference survey for Pew Research Center because researchers weight each American Trends Panel wave to several NPORS estimates. In other words, ATP surveys refer to NPORS in order to represent groups like Republicans, Democrats, religiously affiliated adults and religiously unaffiliated adults proportional to their share of the U.S. population. The ATP weighting protocol also calibrates to other benchmarks, such as ACS demographic figures and CPS benchmarks for voter registration status and volunteerism.

Pew Research Center is weighting on political party affiliation, but isn’t that an attitude?

It’s correct that whether someone considers themselves a Republican or a Democrat is an attitude, not a fixed characteristic, such as year of birth. But there is a way to weight on political party affiliation even though it is an attitude and without forcing the poll’s partisan distribution to align with a benchmark. 

Pew Research Center started implementing this approach in 2021. It begins with measuring the survey panelists’ political party affiliation at a certain point in time (typically, each summer). Ideally, the reference survey will measure the same construct at the same point in time. We launched NPORS because we control its timing as well as the American Trends Panel’s timing, allowing us to achieve this syncing.

NPORS and ATP measurements of political party are collected at approximately the same time each summer. We may then conduct roughly 25 surveys on the ATP over the next year. For each of those 25 surveys, we append the panelists’ party affiliation answers from the summer  to the current survey. To illustrate, let’s say that a survey was conducted in December. When researchers weight the December ATP survey, they take the measurement of party taken in the summer and weight that to the NPORS estimates for the partisan distribution of U.S. adults during the summer time frame. If, for example, Democrats were more likely than Republicans to respond to the December survey, the weighting to the NPORS target would help reduce the differential partisan nonresponse bias. 

Critically, if the hypothetical December poll featured a fresh measurement of political party affiliation (typically asked about three times a year on the ATP), the new December answers do  not  get forced to any target. The new partisan distribution is allowed to vary. In this way, we can both address the threat from differential partisan nonresponse and measure an attitude that changes over time (without dictating the outcome). Each summer, the process starts anew by measuring political party on the ATP at basically the same time as the NPORS data collection. 

Is the NPORS design connected to the American Trends Panel?

A key feature of NPORS is that respondents are not members of a survey panel. It is a fresh, random sample of U.S. adults. This matters because some people are willing to take a onetime survey like NPORS but are not interested in taking surveys on an ongoing basis as part of a panel. That said, in certain years, NPORS serves as a recruitment survey for the ATP. After the NPORS questions, we ask respondents if they would be willing to take future surveys. People who accept and those who decline are both part of the NPORS survey. But only those who consent to future surveys are eventually invited to join the ATP.

Can other survey researchers use NPORS?

Yes. As a nonprofit organization, we seek to make our research as useful to policymakers, survey practitioners and scholars as possible. As with the Center’s other survey work, the estimates and data are freely available. 

Sign up for our weekly newsletter

Fresh data delivered Saturday mornings

1615 L St. NW, Suite 800 Washington, DC 20036 USA (+1) 202-419-4300 | Main (+1) 202-857-8562 | Fax (+1) 202-419-4372 |  Media Inquiries

Research Topics

  • Email Newsletters

ABOUT PEW RESEARCH CENTER  Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research. Pew Research Center does not take policy positions. It is a subsidiary of  The Pew Charitable Trusts .

© 2024 Pew Research Center

Cart

  • SUGGESTED TOPICS
  • The Magazine
  • Newsletters
  • Managing Yourself
  • Managing Teams
  • Work-life Balance
  • The Big Idea
  • Data & Visuals
  • Reading Lists
  • Case Selections
  • HBR Learning
  • Topic Feeds
  • Account Settings
  • Email Preferences

Research: Using AI at Work Makes Us Lonelier and Less Healthy

  • David De Cremer
  • Joel Koopman

research and survey article

Employees who use AI as a core part of their jobs report feeling more isolated, drinking more, and sleeping less than employees who don’t.

The promise of AI is alluring — optimized productivity, lightning-fast data analysis, and freedom from mundane tasks — and both companies and workers alike are fascinated (and more than a little dumbfounded) by how these tools allow them to do more and better work faster than ever before. Yet in fervor to keep pace with competitors and reap the efficiency gains associated with deploying AI, many organizations have lost sight of their most important asset: the humans whose jobs are being fragmented into tasks that are increasingly becoming automated. Across four studies, employees who use it as a core part of their jobs reported feeling lonelier, drinking more, and suffering from insomnia more than employees who don’t.

Imagine this: Jia, a marketing analyst, arrives at work, logs into her computer, and is greeted by an AI assistant that has already sorted through her emails, prioritized her tasks for the day, and generated first drafts of reports that used to take hours to write. Jia (like everyone who has spent time working with these tools) marvels at how much time she can save by using AI. Inspired by the efficiency-enhancing effects of AI, Jia feels that she can be so much more productive than before. As a result, she gets focused on completing as many tasks as possible in conjunction with her AI assistant.

  • David De Cremer is a professor of management and technology at Northeastern University and the Dunton Family Dean of its D’Amore-McKim School of Business. His website is daviddecremer.com .
  • JK Joel Koopman is the TJ Barlow Professor of Business Administration at the Mays Business School of Texas A&M University. His research interests include prosocial behavior, organizational justice, motivational processes, and research methodology. He has won multiple awards from Academy of Management’s HR Division (Early Career Achievement Award and David P. Lepak Service Award) along with the 2022 SIOP Distinguished Early Career Contributions award, and currently serves on the Leadership Committee for the HR Division of the Academy of Management .

Partner Center

China leads the world in adoption of generative AI, survey shows

  • Medium Text

An AI (Artificial Intelligence) sign is seen at the World Artificial Intelligence Conference (WAIC) in Shanghai

Sign up here.

Reporting by Eduardo Baptista in Beijing; Additional reporting by Liam Mo; Editing by Clarence Fernandez

Our Standards: The Thomson Reuters Trust Principles. New Tab , opens new tab

A Microsoft logo is seen in Issy-les-Moulineaux near Paris

Technology Chevron

Microsoft ditches openai board observer seat amid regulatory scrutiny.

Microsoft has ditched its board observer seat at OpenAI that has drawn regulatory scrutiny on both sides of the Atlantic, saying it was not necessary after the AI start-up's governance had improved significantly in the past eight months.

Samsung Electronics Union workers begin a three-day strike, in Hwaseong

IMAGES

  1. (PDF) Principles of survey research part 6: Data analysis

    research and survey article

  2. (PDF) Understanding and Evaluating Survey Research

    research and survey article

  3. (PDF) Principles of survey research: part 3: constructing a survey

    research and survey article

  4. (PDF) Survey Research

    research and survey article

  5. A Comprehensive Guide to Survey Research Methodologies

    research and survey article

  6. (PDF) The Importance of Survey Research Standards

    research and survey article

VIDEO

  1. AI researchers on the future of AI #artificialintelligence #future #milestone #sciencenews

  2. Survey Research

  3. Survey Research/types and advantages of survey research

  4. Research in 3 Minutes: Peer Review

  5. Differences between SURVEY article and REVIEW article

  6. What is a Survey and How to Design It? Research Beast

COMMENTS

  1. Understanding and Evaluating Survey Research

    Survey research is defined as "the collection of information from a sample of individuals through their responses to questions" ( Check & Schutt, 2012, p. 160 ). This type of research allows for a variety of methods to recruit participants, collect data, and utilize various methods of instrumentation. Survey research can use quantitative ...

  2. (PDF) Understanding and Evaluating Survey Research

    Survey research is defined as. "the collection of information from. a sample of individuals through their. responses to questions" (Check &. Schutt, 2012, p. 160). This type of r e -. search ...

  3. Survey response rates: Trends and a validity assessment framework

    Survey methodology has been and continues to be a pervasively used data-collection method in social science research. To better understand the state of the science, we first analyze response-rate information reported in 1014 surveys described in 703 articles from 17 journals from 2010 to 2020.

  4. A quick guide to survey research

    Despite a widespread perception that surveys are easy to conduct, in order to yield meaningful results, a survey needs extensive planning, time and effort. In this article, we aim to cover the main aspects of designing, implementing and analysing a survey as well as focusing on techniques that would improve response rates. Go to:

  5. Designing, Conducting, and Reporting Survey Studies: A Primer for

    Burns et al., 2008 12. A guide for the design and conduct of self-administered surveys of clinicians. This guide includes statements on designing, conducting, and reporting web- and non-web-based surveys of clinicians' knowledge, attitude, and practice. The statements are based on a literature review, but not the Delphi method.

  6. High-Impact Articles

    High-Impact Articles. Journal of Survey Statistics and Methodology, sponsored by the American Association for Public Opinion Research and the American Statistical Association, began publishing in 2013.Its objective is to publish cutting edge scholarly articles on statistical and methodological issues for sample surveys, censuses, administrative record systems, and other related data.

  7. Conducting Online Surveys

    Abstract. There is an established methodology for conducting survey research that aims to ensure rigorous research and robust outputs. With the advent of easy-to-use online survey platforms, however, the quality of survey studies has declined. This article summarizes the pros and cons of online surveys and emphasizes the key principles of ...

  8. Survey Research

    Survey research means collecting information about a group of people by asking them questions and analyzing the results. To conduct an effective survey, follow these six steps: Determine who will participate in the survey. Decide the type of survey (mail, online, or in-person) Design the survey questions and layout.

  9. Doing Survey Research

    Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps: Determine who will participate in the survey. Decide the type of survey (mail, online, or in-person) Design the survey questions and layout. Distribute the survey.

  10. Advance articles

    Research Article 27 June 2023. Survey Consent to Administrative Data Linkage: Five Experiments on Wording and Format. Annette Jäckle and others. ... The Use of QR Codes to Encourage Participation in Mail Push-To-Web Surveys: An Evaluation of Experiments from 2015 and 2022

  11. Survey Research: Definition, Examples and Methods

    Survey Research Definition. Survey Research is defined as the process of conducting research using surveys that researchers send to survey respondents. The data collected from surveys is then statistically analyzed to draw meaningful research conclusions. In the 21st century, every organization's eager to understand what their customers think ...

  12. Survey Research

    Survey research. Kerry Tanner, in Research Methods for Students, Academics and Professionals (Second Edition), 2002. Introduction to survey research. Survey research involves the collection of primary data from all or part of a population, in order to determine the incidence, distribution, and interrelationships of certain variables within the population. . It encompasses a variety of data ...

  13. SURVEY RESEARCH

    Abstract For the first time in decades, conventional wisdom about survey methodology is being challenged on many fronts. The insights gained can not only help psychologists do their research better but also provide useful insights into the basics of social interaction and cognition. This chapter reviews some of the many recent advances in the literature, including the following: New findings ...

  14. Reporting Guidelines for Survey Research: An Analysis of ...

    Methods and Findings. We conducted a three-part project: (1) a systematic review of the literature (including "Instructions to Authors" from the top five journals of 33 medical specialties and top 15 general and internal medicine journals) to identify guidance for reporting survey research; (2) a systematic review of evidence on the quality of reporting of surveys; and (3) a review of ...

  15. Good practice in the conduct and reporting of survey research

    What is survey research? Survey research is common in studies of health and health services, although its roots lie in the social surveys conducted in Victorian Britain by social reformers to collect information on poverty and working class life (e.g. Charles Booth and Joseph Rowntree ), and indeed survey research remains most used in applied ...

  16. Full article: Analyzing survey data in marketing research: A guide for

    Introduction. Surveys are at the forefront of the broader marketing discipline - mostly because they are relatively cheap, can quickly reach a large number of people, and are likely to generate findings that advance theory and practice (Hulland et al., Citation 2018).Despite these benefits, numerous researchers have overlooked certain analytical techniques that must be undertaken when using ...

  17. A Comprehensive Guide to Survey Research Methodologies

    In this article, we will discuss what survey research is, its brief history, types, common uses, benefits, and the step-by-step process of designing a survey. ‍ What is Survey Research ‍ A survey is a research method that is used to collect data from a group of respondents in order to gain insights and information regarding a particular ...

  18. A critical look at online survey or questionnaire-based research

    Online survey or questionnaire-based studies collect information from participants responding to the study link using internet-based communication technology (e.g. E-mail, online survey platform). There has been a growing interest among researchers for using internet-based data collection methods during the COVID-19 pandemic, also reflected in ...

  19. PDF Fundamentals of Survey Research Methodology

    The survey is then constructed to test this model against observations of the phenomena. In contrast to survey research, a . survey. is simply a data collection tool for carrying out survey research. Pinsonneault and Kraemer (1993) defined a survey as a "means for gathering information about the characteristics, actions, or opinions of a ...

  20. PDF Survey Research

    The National Science Foundation turned to survey research for the 2000 National Survey because it is an efficient method for systematically collecting data from a broad spectrum of individuals and educational settings. As you probably have observed, a great many researchers choose this method of data collection.

  21. Journals

    Read more about Survey Research Newsletter. Survey Research Methods. This peer-reviewed journal aims to be a high quality scientific publication that will be of interest to researchers in all disciplines involved in the design, implementation and analysis of surveys. The journal is published electronically with free and open access via the ...

  22. What is the difference between a survey paper and a ...

    Survey articles are not that common in some fields of study. While there is a slight overlap between a survey article and a review article, a systematic review is quite different. ... As a primer on the major types of secondary research articles, you may refer to this piece: Secondary research - the basics of narrative reviews, systematic ...

  23. Survey research

    Abstract. Survey research is a unique methodology that can provide insight into individuals' perspectives and experiences and can be collected on a large population-based sample. Specifically, in plastic surgery, survey research can provide patients and providers with accurate and reproducible information to assist with medical decision-making.

  24. How Pew Research Center Uses Its National Public Opinion Reference

    In 2020, Pew Research Center launched a new project called the National Public Opinion Reference Survey (NPORS). NPORS is an annual, cross-sectional survey of U.S. adults. Respondents can answer either by paper or online, and they are selected using address-based sampling from the United States Postal Service's computerized delivery sequence file.

  25. Research: Using AI at Work Makes Us Lonelier and Less Healthy

    Joel Koopman is the TJ Barlow Professor of Business Administration at the Mays Business School of Texas A&M University. His research interests include prosocial behavior, organizational justice ...

  26. Why was I eliminated from a survey?

    It's all about something called the 'screening process'. In the world of market research, the screening process is an essential step in ensuring the validity of survey results. This article provides a simple, in-depth explanation of how screening works and why you might sometimes be screened out of a survey.

  27. China leads the world in adoption of generative AI, survey shows

    In a survey of 1,600 decision-makers in industries worldwide by U.S. AI and analytics software company SAS and Coleman Parkes Research, 83% of Chinese respondents said they used generative AI, the ...

  28. More than half of Gen Zers think they 'can easily make a career in

    Three in 10 teenager and early adults would pay to be an influencer.