J Adv Pract Oncol, v.6(2); Mar-Apr 2015

Understanding and Evaluating Survey Research

A variety of methodologic approaches exist for individuals interested in conducting research. Selection of a research approach depends on a number of factors, including the purpose of the research, the type of research questions to be answered, and the availability of resources. The purpose of this article is to describe survey research as one approach to the conduct of research so that the reader can critically evaluate the appropriateness of the conclusions from studies employing survey research.


Survey research is defined as "the collection of information from a sample of individuals through their responses to questions" (Check & Schutt, 2012, p. 160). This type of research allows for a variety of methods to recruit participants, collect data, and employ various instruments. Survey research can use quantitative research strategies (e.g., questionnaires with numerically rated items), qualitative research strategies (e.g., open-ended questions), or both (i.e., mixed methods). Because it is well suited to describing and exploring human behavior, survey research is frequently used in social and psychological research (Singleton & Straits, 2009).

Information has been obtained from individuals and groups through survey research for decades. Surveys can range from asking a few targeted questions of individuals on a street corner to obtain information about behaviors and preferences, to rigorous studies using multiple valid and reliable instruments. Common examples of less rigorous surveys include marketing or political surveys of consumer patterns and public opinion polls.

Survey research has historically included large population-based data collection. The primary purpose of this type of survey research was to obtain information describing characteristics of a large sample of individuals of interest relatively quickly. Large census surveys obtaining information reflecting demographic and personal characteristics and consumer feedback surveys are prime examples. These surveys were often provided through the mail and were intended to describe demographic characteristics of individuals or obtain opinions on which to base programs or products for a population or group.

More recently, survey research has developed into a rigorous approach to research, with scientifically tested strategies detailing whom to include (representative sample), what and how to distribute (survey method), and when to initiate the survey and follow up with nonresponders (reducing nonresponse error), in order to ensure a high-quality research process and outcome. Currently, the term "survey" can reflect a range of research aims, sampling and recruitment strategies, data collection instruments, and methods of survey administration.

Given this range of options in the conduct of survey research, it is imperative for the consumer/reader of survey research to understand the potential for bias in survey research as well as the tested techniques for reducing bias, in order to draw appropriate conclusions about the information reported in this manner. Common types of error in research, along with the sources of error and strategies for reducing error as described throughout this article, are summarized in the Table.


Sources of Error in Survey Research and Strategies to Reduce Error

The goal of sampling strategies in survey research is to obtain a sufficient sample that is representative of the population of interest. It is often not feasible to collect data from an entire population of interest (e.g., all individuals with lung cancer); therefore, a subset of the population or sample is used to estimate the population responses (e.g., individuals with lung cancer currently receiving treatment). A large random sample increases the likelihood that the responses from the sample will accurately reflect the entire population. In order to accurately draw conclusions about the population, the sample must include individuals with characteristics similar to the population.
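The logic of drawing a simple random sample from a defined population frame can be sketched in a few lines of Python. The frame, sample size, and seed below are all hypothetical, chosen only to illustrate the idea of equal-probability selection without replacement:

```python
import random

# Hypothetical sampling frame: IDs for every member of the population
# of interest (e.g., patients with lung cancer currently in treatment).
population_frame = [f"patient_{i:04d}" for i in range(2000)]

random.seed(42)  # fixed seed so the draw is reproducible

# Simple random sample: each member of the frame has an equal chance
# of selection, which is what supports generalizing to the population.
sample = random.sample(population_frame, k=200)

print(len(sample))       # 200
print(len(set(sample)))  # 200 -- no duplicates; sampled without replacement
```

The key design point is that the sample is drawn from an explicit frame; if the frame itself omits part of the population (e.g., patients not in treatment), no amount of randomization fixes that coverage gap.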

It is therefore necessary to correctly identify the population of interest (e.g., individuals with lung cancer currently receiving treatment vs. all individuals with lung cancer). The sample will ideally include individuals who reflect the intended population in terms of all characteristics of the population (e.g., sex, socioeconomic characteristics, symptom experience) and contain a similar distribution of individuals with those characteristics. As discussed by Mady Stovall beginning on page 162, Fujimori et al. ( 2014 ), for example, were interested in the population of oncologists. The authors obtained a sample of oncologists from two hospitals in Japan. These participants may or may not have similar characteristics to all oncologists in Japan.

Participant recruitment strategies can affect the adequacy and representativeness of the sample obtained. Using diverse recruitment strategies can help improve the size of the sample and help ensure adequate coverage of the intended population. For example, if a survey researcher intends to obtain a sample of individuals with breast cancer representative of all individuals with breast cancer in the United States, the researcher would want to use recruitment strategies that would recruit both women and men, individuals from rural and urban settings, individuals receiving and not receiving active treatment, and so on. Because of the difficulty in obtaining samples representative of a large population, researchers may focus the population of interest to a subset of individuals (e.g., women with stage III or IV breast cancer). Large census surveys require extremely large samples to adequately represent the characteristics of the population because they are intended to represent the entire population.


Survey research may use a variety of data collection methods with the most common being questionnaires and interviews. Questionnaires may be self-administered or administered by a professional, may be administered individually or in a group, and typically include a series of items reflecting the research aims. Questionnaires may include demographic questions in addition to valid and reliable research instruments ( Costanzo, Stawski, Ryff, Coe, & Almeida, 2012 ; DuBenske et al., 2014 ; Ponto, Ellington, Mellon, & Beck, 2010 ). It is helpful to the reader when authors describe the contents of the survey questionnaire so that the reader can interpret and evaluate the potential for errors of validity (e.g., items or instruments that do not measure what they are intended to measure) and reliability (e.g., items or instruments that do not measure a construct consistently). Helpful examples of articles that describe the survey instruments exist in the literature ( Buerhaus et al., 2012 ).

Questionnaires may be in paper form and mailed to participants, delivered in an electronic format via email or an Internet-based program such as SurveyMonkey, or a combination of both, giving the participant the option to choose which method is preferred (Ponto et al., 2010). Using a combination of methods of survey administration can help to ensure better sample coverage (i.e., all individuals in the population having a chance of inclusion in the sample), thereby reducing coverage error (Dillman, Smyth, & Christian, 2014; Singleton & Straits, 2009). For example, if a researcher were to use only an Internet-delivered questionnaire, individuals without access to a computer would be excluded from participation. Self-administered mailed, group, or Internet-based questionnaires are relatively low cost and practical for a large sample (Check & Schutt, 2012).

Dillman et al. ( 2014 ) have described and tested a tailored design method for survey research. Improving the visual appeal and graphics of surveys by using a font size appropriate for the respondents, ordering items logically without creating unintended response bias, and arranging items clearly on each page can increase the response rate to electronic questionnaires. Attending to these and other issues in electronic questionnaires can help reduce measurement error (i.e., lack of validity or reliability) and help ensure a better response rate.

Conducting interviews is another approach to data collection used in survey research. Interviews may be conducted by phone, computer, or in person and have the benefit of visually identifying the nonverbal response(s) of the interviewee and subsequently being able to clarify the intended question. An interviewer can use probing comments to obtain more information about a question or topic and can request clarification of an unclear response ( Singleton & Straits, 2009 ). Interviews can be costly and time intensive, and therefore are relatively impractical for large samples.

Some authors advocate for using mixed methods for survey research when no one method is adequate to address the planned research aims, to reduce the potential for measurement and non-response error, and to better tailor the study methods to the intended sample ( Dillman et al., 2014 ; Singleton & Straits, 2009 ). For example, a mixed methods survey research approach may begin with distributing a questionnaire and following up with telephone interviews to clarify unclear survey responses ( Singleton & Straits, 2009 ). Mixed methods might also be used when visual or auditory deficits preclude an individual from completing a questionnaire or participating in an interview.


Fujimori et al. (2014) described the use of survey research in a study of the effect of communication skills training for oncologists on oncologist and patient outcomes (e.g., oncologists' performance and confidence and patients' distress, satisfaction, and trust). A sample of 30 oncologists from two hospitals was obtained, and although the authors provided a power analysis concluding that the number of oncologist participants was adequate to detect differences between baseline and follow-up scores, the conclusions of the study may not be generalizable to a broader population of oncologists. Oncologists were randomized to either an intervention group (i.e., communication skills training) or a control group (i.e., no training).

Fujimori et al. ( 2014 ) chose a quantitative approach to collect data from oncologist and patient participants regarding the study outcome variables. Self-report numeric ratings were used to measure oncologist confidence and patient distress, satisfaction, and trust. Oncologist confidence was measured using two instruments each using 10-point Likert rating scales. The Hospital Anxiety and Depression Scale (HADS) was used to measure patient distress and has demonstrated validity and reliability in a number of populations including individuals with cancer ( Bjelland, Dahl, Haug, & Neckelmann, 2002 ). Patient satisfaction and trust were measured using 0 to 10 numeric rating scales. Numeric observer ratings were used to measure oncologist performance of communication skills based on a videotaped interaction with a standardized patient. Participants completed the same questionnaires at baseline and follow-up.

The authors clearly describe what data were collected from all participants. Providing additional information about the manner in which questionnaires were distributed (i.e., electronic, mail), the setting in which data were collected (e.g., home, clinic), and the design of the survey instruments (e.g., visual appeal, format, content, arrangement of items) would assist the reader in drawing conclusions about the potential for measurement and nonresponse error. The authors describe conducting a follow-up phone call or mail inquiry for nonresponders; using the Dillman et al. (2014) tailored design for survey research follow-up may have reduced nonresponse error.


Survey research is a useful and legitimate approach to research that has clear benefits in helping to describe and explore variables and constructs of interest. Survey research, like all research, has the potential for a variety of sources of error, but several strategies exist to reduce the potential for error. Advanced practitioners aware of the potential sources of error and strategies to improve survey research can better determine how and whether the conclusions from a survey research study apply to practice.

The author has no potential conflicts of interest to disclose.


Research Article

Reporting Guidelines for Survey Research: An Analysis of Published Guidance and Reporting Practices


Affiliation Ottawa Hospital Research Institute, Clinical Epidemiology Program, Ottawa, Canada

Affiliations Ottawa Hospital Research Institute, Clinical Epidemiology Program, Ottawa, Canada, Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, Canada

Affiliation Canadian Institutes of Health Research, Ottawa, Canada

Affiliation Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, Canada

Affiliations Ottawa Hospital Research Institute, Clinical Epidemiology Program, Ottawa, Canada, Department of Medicine, University of Ottawa, Ottawa, Canada

  • Carol Bennett, 
  • Sara Khangura, 
  • Jamie C. Brehaut, 
  • Ian D. Graham, 
  • David Moher, 
  • Beth K. Potter, 
  • Jeremy M. Grimshaw


  • Published: August 2, 2011
  • https://doi.org/10.1371/journal.pmed.1001069


Research needs to be reported transparently so readers can critically assess the strengths and weaknesses of the design, conduct, and analysis of studies. Reporting guidelines have been developed to inform reporting for a variety of study designs. The objective of this study was to identify whether there is a need to develop a reporting guideline for survey research.

Methods and Findings

We conducted a three-part project: (1) a systematic review of the literature (including “Instructions to Authors” from the top five journals of 33 medical specialties and top 15 general and internal medicine journals) to identify guidance for reporting survey research; (2) a systematic review of evidence on the quality of reporting of surveys; and (3) a review of reporting of key quality criteria for survey research in 117 recently published reports of self-administered surveys. Fewer than 7% of medical journals (n = 165) provided guidance to authors on survey research despite a majority having published survey-based studies in recent years. We identified four published checklists for conducting or reporting survey research, none of which were validated. We identified eight previous reviews of survey reporting quality, which focused on issues of non-response and accessibility of questionnaires. Our own review of 117 published survey studies revealed that many items were poorly reported: few studies provided the survey or core questions (35%), reported the validity or reliability of the instrument (19%), defined the response rate (25%), discussed the representativeness of the sample (11%), or identified how missing data were handled (11%).


There is limited guidance and no consensus regarding the optimal reporting of survey research. The majority of key reporting criteria are poorly reported in peer-reviewed survey research articles. Our findings highlight the need for clear and consistent reporting guidelines specific to survey research.

Please see later in the article for the Editors' Summary

Citation: Bennett C, Khangura S, Brehaut JC, Graham ID, Moher D, Potter BK, et al. (2011) Reporting Guidelines for Survey Research: An Analysis of Published Guidance and Reporting Practices. PLoS Med 8(8): e1001069. https://doi.org/10.1371/journal.pmed.1001069

Academic Editor: Rachel Jewkes, Medical Research Council, South Africa

Received: December 23, 2010; Accepted: June 17, 2011; Published: August 2, 2011

Copyright: © 2011 Bennett et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: Funding, in the form of salary support, was provided by the Canadian Institutes of Health Research [MGC – 42668]. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Editors' Summary

Surveys, or questionnaires, are an essential component of many types of research, including health research, and usually gather information by asking a sample of people questions on a specific topic and then generalizing the results to a larger population. Surveys are especially important when addressing topics that are difficult to assess using other approaches and that usually rely on self-report, for example self-reported behaviors (such as eating habits), satisfaction, beliefs, knowledge, attitudes, and opinions. However, the methods used in conducting survey research can significantly affect the reliability, validity, and generalizability of study results, and without clear reporting of the methods used in surveys, it is difficult or impossible to assess these characteristics and therefore to have confidence in the findings.

Why Was This Study Done?

This uncertainty in other forms of research has given rise to reporting guidelines—evidence-based, validated tools that aim to improve the reporting quality of health research. The STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) Statement includes cross-sectional studies, which often involve surveys. But not all surveys are epidemiological, and STROBE does not include Methods and Results reporting characteristics that are unique to surveys. Therefore, the researchers conducted this study to help determine whether there is a need for a reporting guideline for health survey research.

What Did the Researchers Do and Find?

The researchers identified any previous relevant guidance for survey research, and any evidence on the quality of reporting of survey research, by: reviewing current guidance for reporting survey research in the “Instructions to Authors” of leading medical journals and in published literature; conducting a systematic review of evidence on the quality of reporting of surveys; identifying key quality criteria for the conduct of survey research; and finally, reviewing how these criteria are currently reported by conducting a review of recently published reports of self-administered surveys.

The researchers found that 154 of the 165 journals searched (93.3%) did not provide any guidance on survey reporting, even though the majority (81.8%) had published survey research. Only three of the 11 journals that provided some guidance gave more than one directive or statement. Five papers and one Internet site provided guidance on the reporting of survey research, but none used validated measures or explicit methods for development. The researchers identified eight papers that addressed the quality of reporting of some aspect of survey research: the reporting of response rates; the reporting of non-response analyses in survey research; and the degree to which authors make their survey instrument available to readers. In their review of 117 published survey studies, the researchers found that many items were poorly reported: few studies provided the survey or core questions (35%), reported the validity or reliability of the instrument (19%), discussed the representativeness of the sample (11%), or identified how missing data were handled (11%). Furthermore, three-quarters of papers (88 [75%]) did not include any information on consent procedures for research participants, and one-third (40 [34%]) did not report whether the study had received research ethics board review.

What Do These Findings Mean?

Overall, these results show that guidance is limited and consensus lacking about the optimal reporting of survey research, and they highlight the need for a well-developed reporting guideline specifically for survey research—possibly an extension of the guideline for observational studies in epidemiology (STROBE)—that will provide the structure to ensure more complete reporting and allow clearer review and interpretation of the results from surveys.

Additional Information

Please access these web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001069 .

  • More than 100 reporting guidelines covering a broad spectrum of research types are indexed on the EQUATOR Networks web site
  • More information about STROBE is available on the STROBE Statement web site


Surveys are a research method by which information is typically gathered by asking a subset of people questions on a specific topic and generalising the results to a larger population [1] , [2] . They are an essential component of many types of research including public opinion, politics, health, and others. Surveys are especially important when addressing topics that are difficult to assess using other approaches (e.g., in studies assessing constructs that require individual self-report about beliefs, knowledge, attitudes, opinions, or satisfaction). However, there is substantial literature to show that the methods used in conducting survey research can significantly affect the reliability, validity, and generalisability of study results [3] , [4] . Without clear reporting of the methods used in surveys, it is difficult or impossible to assess these characteristics.

Reporting guidelines are evidence-based, validated tools that employ expert consensus to specify minimum criteria for authors to report their research such that readers can critically appraise and interpret study findings [5] – [7] . More than 100 reporting guidelines covering a broad spectrum of research types are indexed on the EQUATOR Network's website ( www.equator-network.org ). There is increasing evidence that reporting guidelines are achieving their aim of improving the quality of reporting of health research [8] – [11] .

Given the growth in the number and range of reporting guidelines, the need for guidance on how to develop a guideline has been addressed [7] . A well-structured development process for reporting guidelines includes a review of the literature to determine whether a reporting guideline already exists (i.e., a needs assessment) [7] . The needs assessment should also include a search for evidence on the quality of reporting of published research in the domain of interest [7] .

The series of studies reported here was conducted to help determine whether there is a need for survey research reporting guidelines. We sought to identify any previous relevant guidance for survey research, and any evidence on the quality of reporting of survey research. The objectives of our study were:

  • to identify current guidance for reporting survey research in the “Instructions to Authors” of leading medical journals and in published literature;
  • to conduct a systematic review of evidence on the quality of reporting of surveys; and
  • to identify key quality criteria for the conduct of survey research and to review how they are being reported through a review of recently published reports of self-administered surveys.

Part 1: Identification of Current Guidance for Survey Research

Identifying guidance in “Instructions to Authors” sections in peer-reviewed journals.

Using a strategy originally developed by Altman [12] to assess endorsement of CONSORT by top medical journals, we identified the top five journals from each of 33 medical specialties, and the top 15 journals from the general and internal medicine category, using Web of Science citation impact factors (list of journals available on request). The final sample consisted of 165 unique journals (15 appeared in more than one specialty).

We reviewed each journal's “Instructions to Authors” web pages as well as related PDF documents between January 12 and February 9, 2009. We used the “find” features of the Firefox web browser and Adobe Reader software to identify the following search terms: survey, questionnaire, response, response rate, respond, and non-responder. Web pages were hand searched for statements relevant to survey research. We also conducted an electronic search (MEDLINE 1950 – February Week 1, 2009; terms: survey, questionnaire) to identify whether the journals have published survey research.

Any relevant text was summarized by journal into categories: “No guidance” (survey related term found; however, no reporting guidance provided); “One directive” (survey related term(s) found that included one brief statement, directive or reference(s) relevant to reporting survey research); and “Guidance” (survey related term(s) including more than one statement, instruction and/or directive relevant to reporting survey research). Coding was carried out by one coder (SK) and verified by a second coder (CB).

Identifying published survey reporting guidelines.

MEDLINE (1950 – April Week 1, 2011) and PsycINFO (1806 – April Week 1, 2011) electronic databases were searched via Ovid to identify relevant citations. The MEDLINE electronic search strategy ( Text S1 ), developed by an information specialist, was modified as required for the PsycINFO database. For all papers meeting eligibility criteria, we hand-searched the reference lists and used the “Related Articles” feature in PubMed. Additionally, we reviewed relevant textbooks and web sites. Two reviewers (SK, CB) independently screened titles and abstracts of all unique citations to identify English language papers and resources that provided explicit guidance on the reporting of survey research. Full-text reports of all records passing the title/abstract screen were retrieved and independently reviewed by two members of the research team; there were no disagreements regarding study inclusion and all eligible records passing this stage of screening were included in this review. One researcher (CB) undertook a thematic analysis of identified guidance (e.g., sample selection, response rate, background, etc.), which was subsequently reviewed by all members of the research team. Data were summarized as frequencies.

Part 2: Systematic Review of Published Studies on the Quality of Survey Reporting

The results of the above search strategy ( Text S1 ) were also screened by the two reviewers to identify publications providing evidence on the quality of reporting of survey research in the health science literature. We identified the aspects of reporting survey research that were addressed in these evaluative studies and summarized their results descriptively.

Part 3: Assessment of Quality of Survey Reporting

The results from Part 1 and Part 2 identified items critical to reporting survey research and were used to inform the development of a data abstraction tool. Thirty-two items were deemed most critical to the reporting of survey research on that basis. These were compiled and categorized into a draft data abstraction tool that was reviewed and modified by all the authors, who have expertise in research methodology and survey research. The resulting draft data abstraction instrument was piloted by two researchers (CB, SK) on a convenience sample of survey articles identified by the authors. Items were added and removed and the wording was refined and edited through discussion and consensus among the coauthors. The revised final data abstraction tool ( Table S1 ) comprised 33 items.

Aiming for a minimum sample size of 100 studies, we searched the top 15 journals (by impact factor) from each of four broad areas of health research: health science, public health, general/internal medicine, and medical informatics. These categories, identified through Web of Science, were known to publish survey research and covered a broad range of the biomedical literature. An Ovid MEDLINE search of these 57 journals (three were included in more than one topic area) included Medical Subject Heading (MeSH) terms (“Questionnaires,” “Data Collection,” and “Health Surveys”) and keyword terms (“survey” and “questionnaire”). The search was limited to studies published between January 2008 and February 2009.

We defined a survey as a research method by which information is gathered by asking people questions on a specific topic and the data collection procedure is standardized and well defined. The information is gathered from a subset of the population of interest with the intent of generating summary statistics that are generalisable to the larger population [1] , [2] .

Two reviewers (CB, SK) independently screened all citations (title and abstract) to determine whether the study used a survey instrument consistent with our definition. The same reviewers screened all full-text articles of citations meeting our inclusion criteria, and those whose eligibility remained unclear. We included all primary reports of self-administered surveys, excluding secondary analyses, longitudinal studies, or surveys that were administered openly through the web (i.e., studies that lacked a clearly defined sampling frame). Duplicate data extraction was completed by the two reviewers. Inconsistencies were resolved by discussion and consensus.

Part 1: Identification of Current Guidance for Survey Research – “Instructions to Authors”

Of the 165 journals searched, 154 (93.3%) did not provide any guidance on survey reporting. Of these 154, 126 (81.8%) have published survey research, while 28 have not. Of the 11 journals providing some guidance, eight provided a brief phrase, statement of guidance, or reference; and three provided more substantive guidance, including more than one directive or statement. Examples are provided in Table 1 . Although no reporting guidelines for survey research were identified, several journals referenced the EQUATOR Network's web site. The EQUATOR Network includes two papers relevant to reporting survey research [13] , [14] .
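The two percentages above use different denominators (93.3% is of all 165 journals; 81.8% is of the 154 journals providing no guidance), which is easy to misread. A quick arithmetic check using only the counts reported in the text confirms both figures:

```python
journals_total = 165
no_guidance = 154        # journals providing no survey-reporting guidance
published_surveys = 126  # of those 154, journals that have published survey research

# Percentage of ALL searched journals with no guidance.
print(round(100 * no_guidance / journals_total, 1))    # 93.3

# Percentage of the 154 no-guidance journals that have published surveys.
print(round(100 * published_surveys / no_guidance, 1)) # 81.8
```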




The EQUATOR Network also links to the STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) Statement (www.strobe-statement.org). Although the STROBE Statement includes cross-sectional studies, a class of studies that subsumes surveys, not all surveys are epidemiological. Additionally, STROBE does not include Methods and Results reporting characteristics that are unique to surveys (Table S1).

Part 1: Identification of Current Guidance for Survey Research - Published Survey Reporting Guidelines

Our search identified 2,353 unique records (Figure 1), which were title-screened. Of these, 164 records were included in the abstract screen, from which 130 were excluded. The remaining 34 records were retrieved for full-text screening to determine eligibility. There was substantial agreement between reviewers across all the screening phases (kappa = 0.73; 95% CI 0.69–0.77).
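Inter-rater agreement of the kind reported here is conventionally quantified with Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A minimal sketch follows; the 2x2 counts are hypothetical, not the study's screening data:

```python
def cohens_kappa(table):
    """Cohen's kappa for a 2x2 agreement table:
    rows = reviewer 1 (include/exclude), columns = reviewer 2."""
    n = sum(sum(row) for row in table)
    # Observed agreement: proportion of items on the diagonal.
    p_observed = sum(table[i][i] for i in range(2)) / n
    # Chance agreement: product of the marginal proportions, summed.
    p_expected = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(2)
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical counts: both include 20, both exclude 15, disagree on 15.
print(round(cohens_kappa([[20, 5], [10, 15]]), 2))  # 0.4
```

With these counts, observed agreement is 0.7 and chance agreement is 0.5, giving kappa = 0.4; values above roughly 0.6 are usually read as substantial agreement, which is why the study's 0.73 is described that way.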



We identified five papers [13] – [17] and one internet site [18] that provided guidance on the reporting of survey research. None of these sources reported using valid measures or explicit methods for development. In all cases, in addition to more descriptive details, the guidance was presented in the form of a numbered or bulleted checklist. One checklist was excluded from our descriptive analysis as it was very specific to the reporting of internet surveys [16] . Two checklists were combined for analysis because one [14] was a slightly modified version of the other [17] .

Amongst the four checklists, 38 distinct reporting items were identified and grouped in eight broad themes: background, methods, sample selection, research tool, results, response rates, interpretation and discussion, and ethics and disclosure ( Table 2 ). Only two items appeared in all four checklists: providing a description of the questionnaire instrument and describing the representativeness of the sample to the population of interest. Nine items appear in three checklists, 17 items appear in two checklists, and 10 items appear in only one checklist.



Screening results are presented in Figure 1 . Eight papers were identified that addressed the quality of reporting of some aspect of survey research. Five studies [19] – [23] addressed the reporting of response rates; three evaluated the reporting of non-response analyses in survey research [20] , [21] , [24] ; and two assessed the degree to which authors make their survey instrument available to readers ( Table 3 ) [25] , [26] .



Part 3: Assessment of Quality of Survey Reporting from the Biomedical Literature

Our search identified 1,719 citations: 1,343 citations were excluded during title/abstract screening because these studies did not use a survey instrument as their primary research tool. Three hundred seventy-six citations were retrieved for full-text review. Of those, 259 did not meet our eligibility criteria; reasons for their exclusion are reported in Figure 2 . The remaining 117 articles, reporting results from self-administered surveys, were retained for data abstraction.



The 117 articles were published in 34 different journals: 12 journals from health science, seven from medical informatics, 10 from general/internal medicine, and eight from public health ( Table S2 ). The median number of pages per study was 8 (range 3–26). Of the 33 items that were assessed using our data abstraction form, the median number of items reported was 18 (range 11–25).

Reporting Characteristics: Title, Abstract, and Introduction

The majority (113 [97%]) of articles used the term “survey” or “questionnaire” in the title or abstract; four articles did not use a term to indicate that the study was a survey. While all of the articles presented a background to their research, 17 (15%) did not identify a specific purpose, aim, goal, or objective of the study ( Table 4 ).



Reporting Characteristics: Methods

Approximately one-third (40 [34%]) of survey research reports did not provide access to the questionnaire items used in the study in either the article, appendices, or an online supplement. Of those studies that reported the use of existing survey questionnaires, the majority (40/52 [77%]) did not report the psychometric properties of the tool (although all but two did reference their sources). The majority of studies that developed a novel questionnaire (91/111 [82%]) failed to clearly describe the development process and/or did not describe the methods used to pre-test the tool; the majority (89/111 [80%]) also failed to report the reliability or validity of a newly developed survey instrument. For those papers which used survey instruments that required scoring (n = 95), 63 (66%) did not provide a description of the scoring procedures.

With respect to a description of sample selection methods, 104 (89%) studies did not describe the sample's representativeness of the population of interest. The majority (110 [94%]) of studies also did not present a sample size calculation or other justification of the sample size.
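For context, a common sample size justification for estimating a proportion uses the normal-approximation formula n = z²·p(1−p)/e², optionally followed by a finite population correction. This is a generic textbook sketch, not the calculation of any study reviewed here; the margin of error and population size below are illustrative:

```python
import math

def sample_size_for_proportion(margin_of_error, z=1.96, p=0.5):
    """Minimum n to estimate a proportion within +/- margin_of_error.
    z = 1.96 corresponds to 95% confidence; p = 0.5 is the conservative
    (largest-variance) assumption when the true proportion is unknown."""
    return math.ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

def finite_population_correction(n, population_size):
    """Adjust n downward when sampling from a small, finite population."""
    return math.ceil(n / (1 + (n - 1) / population_size))

n0 = sample_size_for_proportion(0.05)          # +/-5% at 95% confidence
n = finite_population_correction(n0, 1000)     # frame of 1,000 people
print(n0, n)  # 385 279
```

Reporting even this simple justification lets readers judge whether a survey was adequately powered for its descriptive aims.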

There were 23 (20%) papers for which we could not determine the mode of survey administration (i.e., in-person, mail, internet, or a combination of these). Forty-one (35%) articles did not provide information on either the type (i.e. phone, e-mail, postal mail) or the number of contact attempts. For 102 (87%) papers, there was no description of who was identified as the organization/group soliciting potential research subjects for their participation in the survey.

Twelve (10%) papers failed to provide a description of the methods used to analyse the data (i.e., a description of the variables that were analysed, how they were manipulated, and the statistical methods used). However, for a further 55 (47%) studies, the data analysis would be a challenge to replicate based on the description provided in the research report. Very few studies provided methods for analysis of non-response error, calculating response rates, or handling missing item data (15 [13%], 5 [4%], and 13 [11%] respectively). The majority (112 [96%]) of the articles did not provide a definition or cut-off limit for partial completion of questionnaires.

Reporting Characteristics: Results

While the majority (89 [76%]) of papers provided a defined response rate, 28 studies (24%) failed to define the reported response rate (i.e., no information was provided on the definition of the rate or how it was calculated), provided only partial information (e.g., response rates were reported for only part of the data, or some information was reported but not a response rate), or provided no quantitative information regarding a response rate. The majority (104 [87%]) of studies did not report the sample disposition (i.e., describing the number of complete and partial returned questionnaires according to the number of potential participants known to be eligible, of unknown eligibility, or known to be ineligible). More than two-thirds (80 [68%]) of the reports provided no information on how non-respondents differed from respondents.
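A defined response rate requires stating exactly what goes into the numerator and the denominator, which is why the sample disposition matters. A minimal sketch of one common minimum response rate definition (in the spirit of AAPOR's RR1, named here as an assumption; the disposition counts are hypothetical):

```python
def response_rate(complete, partial, eligible_nonresponse, unknown_eligibility):
    """Minimum ('RR1'-style) response rate: complete returns divided by all
    units known to be eligible or of unknown eligibility. Units known to be
    ineligible are excluded from the denominator entirely."""
    denominator = complete + partial + eligible_nonresponse + unknown_eligibility
    return complete / denominator

# Illustrative sample disposition
rr = response_rate(complete=240, partial=30,
                   eligible_nonresponse=100, unknown_eligibility=30)
print(f"{rr:.1%}")  # 60.0%
```

Counting partial returns as responses, or dropping the unknown-eligibility cases, would yield a different (higher) rate from the same data, which is precisely why an undefined response rate is hard to interpret.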

Reporting Characteristics: Discussion and Ethical Quality Indicators

While all of the articles summarized their results with regard to the objectives, and the majority (110 [94%]) described the limitations of their study, most (90 [77%]) did not outline the strengths of their study and 70 (60%) did not include any discussion of the generalisability of their results.

When considering the ethical quality indicators, reporting was varied. While three-quarters (86 [74%]) of the papers reported their source of funding, approximately the same proportion (88 [75%]) did not include any information on consent procedures for research participants. One-third (40 [34%]) of papers did not report whether the study had received research ethics board review.

Our comprehensive review, to identify relevant guidance for survey research and evidence on the quality of reporting of surveys, substantiates the need for a reporting guideline for survey research. Overall, our results show that few medical journals provide guidance to authors regarding survey research. Furthermore, no validated guidelines for reporting surveys currently exist. Previous reviews of survey reporting quality and our own review of 117 published studies revealed that many criteria are poorly reported.

Surveys are common in health care research; we identified more than 117 primary reports of self-administered surveys in 34 high-impact factor journals over a one-year period. Despite this, the majority of these journals provided no guidance to authors for reporting survey research. This may stem, at least partly, from the fact that validated guidelines for survey research do not exist and that recommended quality criteria vary considerably. The recommended reporting criteria that we identified in the published literature are not mutually exclusive, and there is perhaps more overlap if one takes into account implicit and explicit considerations. Regardless of these limitations, the lack of clear guidance has contributed to inconsistency in the literature; both this work and that of others [19] – [26] shows that key survey quality characteristics are often under-reported.

Self-administered sample surveys are a type of observational study and for that reason they can fall within the scope of STROBE. However, there are methodological features relevant to sample surveys that need to be highlighted in greater detail. For example, surveys that use a probability sampling design do so in order to be able to generalise to a specific target population (many other types of observational research may have a more “infinite” target population); this emphasizes the importance of coverage error and non-response error – topics that have received attention in the survey literature. Thus, in our data abstraction tool, we placed emphasis on specific methodological details excluded from STROBE – such as non-response analysis, details of strategies used to increase response rates (e.g., multiple contacts, mode of contact of potential participants), and details of measurement methods (e.g., making the instrument available so that readers can consider questionnaire formatting, question framing, choice of response categories, etc.).

Consistent with previous work [25] , [26] , fully one-third of our sample failed to provide access to any survey questions used in the study. This poses challenges both for critical analysis of the studies and for future use of the tools, including replication in new settings. These challenges will be particularly apparent as the articles age and study authors become more difficult to contact [25] .

Assessing descriptions of the study population and sampling frame posed particular challenges in this study. It was often unclear whom the authors considered to be the population of interest. To standardise our assessment of this item, we used a clearly delineated definition of “survey population” and “sampling frame” [3] , [27] . A survey reporting guideline could help this issue by clearly defining the difference between the terms and descriptions of “population” and “sampling frame.”

Our results regarding reporting of response rates and non-response analysis were similar to previously published studies [19] – [24] . In our sample, 24% of papers assessed did not provide a defined response rate and 68% did not provide results from non-response analysis. The wide variation in how response rates are reported in the literature is perhaps a historical reflection of the limited consensus or explicit journal policy for response rate reporting [22] , [28] , [29] . However, despite lack of explicit policies regarding acceptable standards for response rates or the reporting of response rates, journal editors are known to have implicit policies for acceptable response rates when considering the publication of surveys [17] , [22] , [29] , [30] . Given the concern regarding declining response rates to surveys [31] , there is a need to ensure that aspects of the survey's design and conduct are well reported so that reviewers can adequately assess the degree of bias that may be present and allay concerns over the representativeness of the survey population.

With regard to the ethical quality indicators, sources of study funding were often reported (74%) in this sample of articles. However, the reporting of research ethics board approval and subject consent procedures were reported far less often. In particular, the reporting of informed consent procedures was often absent in studies where physicians, residents, other clinicians or health administrators were the subjects. This finding may suggest that researchers do not perceive doctors and other health-care professionals and administrators to be research subjects in the same way they perceive patients and members of the public to be. It could also reflect a lack of current guidelines that specifically address the ethical use of health services professionals and staff as research subjects.

Our research is not without limitations. With respect to the review of journals' “Instructions to Authors,” the study was cross-sectional in contrast with the dynamic nature of web pages. Since our searches in early 2009, several journals have updated their web pages. It has been noted that at least one has added a brief reference to the reporting of survey research.

A second limitation is that our sample included only the contents of “Instructions to Authors” web pages for higher-impact factor journals. It is possible that journals with lower impact factors contain guidance for reporting survey research. We chose this approach, which replicates previous similar work [12] , to provide a defensible sample of journals.

Third, the problem of identifying non-randomised studies in electronic searches is well known and often related to the inconsistent use of terminology in the original papers. It is possible that our search strategy failed to identify relevant articles. However, it is unlikely that there is an existing guideline for survey research that is in widespread use, given our review of actual surveys, instructions to authors, and reviews of reporting quality.

Fourth, although we restricted our systematic review search strategy to two health science databases, our hand search did identify one checklist that was not specific to the health science literature [18] . The variation in recommended reporting criteria amongst the checklists may, in part, be due to variation in the different domains (i.e., health science research versus public opinion research).

Additionally, we did not critically appraise the quality of evidence for items included in the checklists nor the quality of the studies that addressed the quality of reporting of some aspect of survey research. For our review of current reporting practices for surveys, we were unable to identify validated tools for evaluation of these studies. While we did use a comprehensive and iterative approach to develop our data abstraction tool, we may not have captured information on characteristics deemed important by other researchers. Lastly, our sample was limited to self-administered surveys, and the results may not be generalisable to interviewer-administered surveys.

Recently, Moher and colleagues outlined the importance of a structured approach to the development of reporting guidelines [7] . Given the positive impact that reporting guidelines have had on the quality of reporting of health research [8] – [11] , and the potential for a positive upstream effect on the design and conduct of research [32] , there is a fundamental need for well-developed reporting guidelines. This paper provides results from the initial steps in a structured approach to the development of a survey reporting guideline and forms the foundation for our further work in this area.

In conclusion, there is limited guidance and no consensus regarding the optimal reporting of survey research. While some key criteria are consistently reported by authors publishing their survey research in peer-reviewed journals, the majority are under-reported. As in other areas of research, poor reporting compromises both transparency and reproducibility, which are fundamental tenets of research. Our findings highlight the need for a well-developed reporting guideline for survey research – possibly an extension of the guideline for observational studies in epidemiology (STROBE) – that will provide the structure to ensure more complete reporting and allow clearer review and interpretation of the results from surveys.

Supporting Information

Table S1. Data abstraction tool items and overlap with STROBE.


Table S2. Journals represented by 117 included articles.


Ovid MEDLINE search strategy.



We thank Risa Shorr (Librarian, The Ottawa Hospital) for her assistance with designing the electronic search strategy used for this study.

Author Contributions

Conceived and designed the experiments: JG DM CB SK JB IG BP. Analyzed the data: CB SK JB DM JG. Contributed to the writing of the manuscript: CB SK JB IG DM BP JG. ICMJE criteria for authorship read and met: CB SK JB IG DM BP JG. Acquisition of data: CB SK.

  • 1. Groves RM, Fowler FJ, Couper MP, Lepkowski JM, Singer E, et al. (2004) Survey Methodology. Hoboken (New Jersey): John Wiley & Sons, Inc.
  • 2. Aday LA, Cornelius LJ (2006) Designing and Conducting Health Surveys. Hoboken (New Jersey): John Wiley & Sons, Inc.
  • 6. EQUATOR Network. Introduction to Reporting Guidelines. Available: http://www.equator-network.org/resource-centre/library-of-health-research-reporting/reporting-guidelines/#what . Accessed 23 November 2009.
  • 18. AAPOR. Home page of the American Association of Public Opinion Research (AAPOR). Available: http://www.aapor.org . Accessed 20 January 2009.
  • 22. Johnson T, Owens L (2003) Survey Response Rate Reporting in the Professional Literature. Available: http://www.amstat.org/sections/srms/proceedings/y2003/Files/JSM2003-000638.pdf . Accessed 11 July 2011.
  • 27. Dillman DA (2007) Mail and Internet Surveys: The Tailored Design Method. Hoboken (New Jersey): John Wiley & Sons, Inc.

Survey Research: Definition, Examples and Methods


Survey Research is a quantitative research method used for collecting data from a set of respondents. It has been one of the most widely used methodologies in the industry for years because of the benefits and advantages it offers when collecting and analyzing data.


In this article, you will learn everything about survey research, such as types, methods, and examples.

Survey Research Definition

Survey Research is defined as the process of conducting research using surveys that researchers send to survey respondents. The data collected from surveys is then statistically analyzed to draw meaningful research conclusions. In the 21st century, every organization is eager to understand what its customers think about its products or services so it can make better business decisions. Researchers can conduct research in multiple ways, but surveys have proven to be one of the most effective and trustworthy research methods. An online survey is a method for extracting information about a significant business matter from an individual or a group of individuals. It consists of structured survey questions that motivate the participants to respond. Credible survey research can give these businesses access to a vast bank of information. Organizations in media, other companies, and even governments rely on survey research to obtain accurate data.

The traditional definition of survey research is a quantitative method for collecting information from a pool of respondents by asking multiple survey questions. This research type includes the recruitment of individuals, the collection of data, and the analysis of the data. It’s useful for researchers who aim to communicate new features or trends to their respondents.

Generally, survey research is the primary step toward obtaining quick information about mainstream topics; more rigorous and detailed quantitative research methods such as surveys/polls, or qualitative research methods such as focus groups and on-call interviews, can then follow. There are many situations where researchers can conduct research using a blend of both qualitative and quantitative strategies.


Survey Research Methods

Survey research methods can be classified according to two critical factors: the survey research tool and the time involved in conducting the research. There are three main survey research methods, divided based on the medium of conducting survey research:

  • Online/ Email:   Online survey research is one of the most popular survey research methods today. The survey cost involved in online survey research is extremely minimal, and the responses gathered are highly accurate.
  • Phone:  Survey research conducted over the telephone ( CATI survey ) can be useful in collecting data from a more extensive section of the target population. However, both the money and the time invested in phone surveys tend to be higher than in other mediums.
  • Face-to-face:  Researchers conduct face-to-face in-depth interviews in situations where there is a complicated problem to solve. The response rate for this method is the highest, but it can be costly.

Further, based on the time taken, survey research can be classified into two methods:

  • Longitudinal survey research:  Longitudinal survey research involves conducting survey research over a continuum of time and spread across years and decades. The data collected using this survey research method from one time period to another is qualitative or quantitative. Respondent behavior, preferences, and attitudes are continuously observed over time to analyze reasons for a change in behavior or preferences. For example, suppose a researcher intends to learn about the eating habits of teenagers. In that case, he/she will follow a sample of teenagers over a considerable period to ensure that the collected information is reliable. Often, cross-sectional survey research follows a longitudinal study .
  • Cross-sectional survey research:  Researchers conduct a cross-sectional survey to collect insights from a target audience at a particular time interval. This survey research method is implemented in various sectors such as retail, education, healthcare, SME businesses, etc. Cross-sectional studies can either be descriptive or analytical. It is quick and helps researchers collect information in a brief period. Researchers rely on the cross-sectional survey research method in situations where descriptive analysis of a subject is required.

Survey research is also bifurcated according to the sampling method used to form samples for research: probability and non-probability sampling. In probability sampling, the researcher chooses elements based on probability theory, so every individual in the population has a known chance of being part of the survey research sample. There are various probability sampling methods, such as simple random sampling , systematic sampling, cluster sampling, stratified random sampling, etc. Non-probability sampling is a sampling method where the researcher uses his/her knowledge and experience to form samples.


The various non-probability sampling techniques are:

  • Convenience sampling
  • Snowball sampling
  • Consecutive sampling
  • Judgemental sampling
  • Quota sampling

Process of implementing survey research methods:

  • Decide survey questions:  Brainstorm and put together valid survey questions that are grammatically and logically appropriate. Understanding the objective and expected outcomes of the survey helps a lot. There are many surveys where details of responses are not as important as gaining insights about what customers prefer from the provided options. In such situations, a researcher can include multiple-choice questions or closed-ended questions . Whereas, if researchers need to obtain details about specific issues, they can consist of open-ended questions in the questionnaire. Ideally, the surveys should include a smart balance of open-ended and closed-ended questions. Use survey questions like Likert Scale , Semantic Scale, Net Promoter Score question, etc., to avoid fence-sitting.


  • Finalize a target audience:  Send out relevant surveys as per the target audience and filter out irrelevant questions as per the requirement. Survey research is most instrumental when the sample is drawn from a well-defined target population; this way, results reflect the desired market and can be generalized to the entire population.


  • Send out surveys via decided mediums:  Distribute the surveys to the target audience and patiently wait for the feedback and comments- this is the most crucial step of the survey research. The survey needs to be scheduled, keeping in mind the nature of the target audience and its regions. Surveys can be conducted via email, embedded in a website, shared via social media, etc., to gain maximum responses.
  • Analyze survey results:  Analyze the feedback in real-time and identify patterns in the responses which might lead to a much-needed breakthrough for your organization. GAP, TURF Analysis , Conjoint analysis, Cross tabulation, and many such survey feedback analysis methods can be used to spot and shed light on respondent behavior. Researchers can use the results to implement corrective measures to improve customer/employee satisfaction.
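Of the analysis methods named above, cross tabulation is the simplest to illustrate: it counts responses jointly across two categorical questions. A minimal sketch using only the standard library (the response data are hypothetical):

```python
from collections import Counter

def crosstab(rows, cols):
    """Joint frequency table: counts of (row_value, column_value) pairs."""
    counts = Counter(zip(rows, cols))
    row_labels = sorted(set(rows))
    col_labels = sorted(set(cols))
    return {r: {c: counts[(r, c)] for c in col_labels} for r in row_labels}

# Hypothetical paired responses: age group vs. reported satisfaction
age = ["18-34", "35-54", "18-34", "55+", "35-54", "18-34"]
sat = ["high", "low", "high", "low", "high", "low"]

table = crosstab(age, sat)
print(table["18-34"])  # {'high': 2, 'low': 1}
```

Reading across a row shows how one group's answers distribute over the second question, which is the pattern-spotting step described above.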

Reasons to conduct survey research

The most crucial and integral reason for conducting market research using surveys is that you can collect answers regarding specific, essential questions. You can ask these questions in multiple survey formats as per the target audience and the intent of the survey. Before designing a study, every organization must figure out the objective of carrying this out so that the study can be structured, planned, and executed to perfection.


Questions that need to be on your mind while designing a survey are:

  • What is the primary aim of conducting the survey?
  • How do you plan to utilize the collected survey data?
  • What type of decisions do you plan to take based on the points mentioned above?

There are three critical reasons why an organization must conduct survey research.

  • Understand respondent behavior to get solutions to your queries:  If you’ve carefully curated a survey, the respondents will provide insights about what they like about your organization as well as suggestions for improvement. To motivate them to respond, you must be very vocal about how secure their responses will be and how you will utilize the answers. This will push them to be 100% honest about their feedback, opinions, and comments. Online and mobile surveys have proven their privacy, and because of this, more and more respondents feel free to put forth their feedback through these mediums.
  • Present a medium for discussion:  A survey can be the perfect platform for respondents to provide criticism or applause for an organization. Important topics like product quality or quality of customer service etc., can be put on the table for discussion. A way you can do it is by including open-ended questions where the respondents can write their thoughts. This will make it easy for you to correlate your survey to what you intend to do with your product or service.
  • Strategy for never-ending improvements:  An organization can establish the target audience’s attributes from the pilot phase of survey research . Researchers can use the criticism and feedback received from this survey to improve the product/services. Once the company successfully makes the improvements, it can send out another survey to measure the change in feedback, keeping the pilot phase as the benchmark. By doing this activity, the organization can track what was effectively improved and what still needs improvement.

Survey Research Scales

There are four main scales for the measurement of variables:

  • Nominal Scale:  A nominal scale associates numbers with variables for mere naming or labeling, and the numbers usually have no other relevance. It is the most basic of the four levels of measurement.
  • Ordinal Scale:  The ordinal scale has an innate order within the variables along with labels. It establishes the rank between the variables of a scale but not the difference value between the variables.
  • Interval Scale:  The interval scale is a step ahead in comparison to the other two scales. Along with establishing a rank and name of variables, the scale also makes known the difference between the two variables. The only drawback is that there is no fixed start point of the scale, i.e., the actual zero value is absent.
  • Ratio Scale:  The ratio scale is the most advanced measurement scale, which has variables that are labeled in order and have a calculated difference between variables. In addition to what interval scale orders, this scale has a fixed starting point, i.e., the actual zero value is present.
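One practical consequence of these four levels is which summary statistics are meaningful for a given survey item. A small lookup following the common textbook treatment, where each level inherits the statistics of the levels below it (the groupings are a summary, not an exhaustive rule):

```python
# Meaningful descriptive statistics at each level of measurement.
SCALE_STATISTICS = {
    "nominal":  ["mode", "frequency counts"],
    "ordinal":  ["mode", "frequency counts", "median", "percentiles"],
    "interval": ["mode", "frequency counts", "median", "percentiles",
                 "mean", "standard deviation"],
    "ratio":    ["mode", "frequency counts", "median", "percentiles",
                 "mean", "standard deviation", "meaningful ratios"],
}

def permissible(scale):
    """Statistics that can be meaningfully computed for a scale type."""
    return SCALE_STATISTICS[scale.lower()]

print(permissible("ordinal"))  # a median is meaningful; a mean is not
```

This is why, for example, averaging raw Likert-item codes is contested: the item is ordinal, while a mean assumes at least interval-level measurement.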

Benefits of survey research

When survey research is used for the right purposes and implemented properly, marketers can benefit by gaining useful, trustworthy data that they can use to improve the organization’s ROI.

Other benefits of survey research are:

  • Minimum investment:  Mobile surveys and online surveys have minimal finance invested per respondent. Even with the gifts and other incentives provided to the people who participate in the study, online surveys are extremely economical compared to paper-based surveys.
  • Versatile sources for response collection:  You can conduct surveys via various mediums like online and mobile surveys. You can further classify them into qualitative mediums like focus groups and interviews, and quantitative mediums like customer-centric surveys. Due to the offline survey response collection option, researchers can conduct surveys in remote areas with limited internet connectivity. This can make data collection and analysis more convenient and extensive.
  • Reliable for respondents:  Surveys are extremely secure, as respondent details and responses are kept safeguarded. This anonymity makes respondents answer the survey questions candidly and with absolute honesty. An organization seeking candid responses for its survey research should state that responses will be kept confidential.

Survey research design

Researchers implement a survey research design in cases where there is a limited cost involved and there is a need to access details easily. This method is often used by small and large organizations to understand and analyze new trends, market demands, and opinions. Collecting information through tactfully designed survey research can be much more effective and productive than a casually conducted survey.

There are five stages of survey research design:

  • Decide an aim of the research:  There can be multiple reasons for a researcher to conduct a survey, but they need to decide a purpose for the research. This is the primary stage of survey research as it can mold the entire path of a survey, impacting its results.
  • Filter the sample from the target population:  “Who to target?” is an essential question that a researcher should answer and keep in mind while conducting research. The precision of the results depends on who the members of the sample are and how relevant their opinions are; the quality of respondents in a sample matters more for the research results than the quantity. If a researcher seeks to understand whether a product feature will work well with their target market, he/she can conduct survey research with a group of market experts for that product or technology.
  • Zero-in on a survey method:  Many qualitative and quantitative research methods can be discussed and decided. Focus groups, online interviews, surveys, polls, questionnaires, etc. can be carried out with a pre-decided sample of individuals.
  • Design the questionnaire:  What will the content of the survey be? A researcher is required to answer this question to be able to design it effectively. What will the content of the cover letter be? Or what are the survey questions of this questionnaire? Understand the target market thoroughly to create a questionnaire that targets a sample to gain insights about a survey research topic.
  • Send out surveys and analyze results:  Once the researcher decides on which questions to include in a study, they can send it across to the selected sample . Answers obtained from this survey can be analyzed to make product-related or marketing-related decisions.

Survey examples: 10 tips to design the perfect research survey

Picking the right survey design can be the key to gaining the information you need to make crucial decisions for all your research. It is essential to choose the right topic, choose the right question types, and pick a corresponding design. If this is your first time creating a survey, it can seem like an intimidating task. But with QuestionPro, each step of the process is made simple and easy.

Below are 10 tips to design the perfect research survey:

  • Set your SMART goals:  Before conducting any market research or creating a particular plan, set your SMART (specific, measurable, achievable, relevant, time-bound) goals. What is it that you want to achieve with the survey? How will you measure it, and what results are you expecting?
  • Choose the right questions:  Designing a survey can be a tricky task. Asking the right questions helps you get the answers you are looking for and eases the task of analyzing them, so include only questions that are relevant to your research.
  • Begin your survey with a generalized question:  Preferably, start your survey with a general question, such as whether the respondent uses the product at all. That provides an excellent base and introduction for the rest of the survey.
  • Enhance your survey:  Choose the best and most relevant 15-20 questions. Frame each as the question type best suited to the kind of answer you would like to gather, mixing formats such as multiple-choice, rating scale, and open-ended questions.
  • Prepare yes/no questions:  You may also want to use yes/no questions to branch respondents into groups, such as those who “have purchased” and those who “have not yet purchased” your products or services. Once you separate them, you can ask each group different questions.
  • Test on all electronic devices:  Distribution becomes much easier when respondents can answer on different electronic devices such as mobiles and tablets. Once you have created your survey, test it on each device and make any corrections needed at this stage.
  • Distribute your survey:  Once your survey is ready, share it with the right audience. You can distribute handouts or share the survey via email, social media, and other industry-related offline/online communities.
  • Collect and analyze responses:  After distributing your survey, gather all the responses. Store your results in a dedicated document or spreadsheet with all the necessary categories so that you don’t lose your data. This is the most crucial stage: segregate your responses by demographics, psychographics, and behavior, because as a researcher you must know where your responses are coming from. Doing so will help you analyze results, predict decisions, and write the summary report.
  • Prepare your summary report:  Now it is time to share your analysis. At this stage, present all the responses gathered from the survey in a fixed format, and make sure the reader comes away clear about the goal of the study: Was the product or service used or preferred? Do respondents prefer one product over another? Are there any recommendations?
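The collect-and-analyze step above, segmenting responses by a demographic variable and summarizing each segment, can be sketched with the Python standard library alone. All response data, group labels, and scores below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical responses: (age_group, has_purchased, satisfaction on a 1-5 scale).
responses = [
    ("18-29", True, 4), ("30-49", False, 3), ("18-29", True, 5),
    ("50-64", True, 2), ("30-49", False, 4), ("18-29", True, 5),
]

# Segregate responses by the demographic variable.
segments = defaultdict(list)
for age_group, purchased, score in responses:
    segments[age_group].append((purchased, score))

# Summarize each segment: size, purchase rate, and mean satisfaction.
summary = {}
for group, rows in segments.items():
    n = len(rows)
    summary[group] = {
        "n": n,
        "purchase_rate": sum(p for p, _ in rows) / n,
        "avg_satisfaction": sum(s for _, s in rows) / n,
    }

for group, stats in sorted(summary.items()):
    print(group, stats)
```

The same segment-then-summarize pattern extends to psychographic or behavioral variables by changing the grouping key.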

Having a tool that supports all the necessary steps of this type of study is a vital part of any project. At QuestionPro, we have helped more than 10,000 clients around the world carry out data collection in a simple and effective way, in addition to offering a wide range of solutions for taking advantage of that data in the best possible way.

From dashboards and advanced analysis tools to automation and dedicated functions, QuestionPro has everything you need to execute your research projects effectively. Uncover the insights that matter most!



12 May 2024

Is the Internet bad for you? Huge study reveals surprise effect on well-being

  • Carissa Wong



People who had access to the Internet scored higher on measures of life satisfaction in a global survey. Credit: Ute Grabowsky/Photothek via Getty

A global, 16-year study1 of 2.4 million people has found that Internet use might boost measures of well-being, such as life satisfaction and sense of purpose — challenging the commonly held idea that Internet use has negative effects on people’s welfare.


“It’s an important piece of the puzzle on digital-media use and mental health,” says psychologist Markus Appel at the University of Würzburg in Germany. “If social media and Internet and mobile-phone use is really such a devastating force in our society, we should see it on this bird’s-eye view [study] — but we don’t.” Such concerns are typically related to behaviours linked to social-media use, such as cyberbullying, social-media addiction and body-image issues. But the best studies have so far shown small negative effects, if any2,3, of Internet use on well-being, says Appel.

The authors of the latest study, published on 13 May in Technology, Mind and Behaviour, sought to capture a more global picture of the Internet’s effects than did previous research. “While the Internet is global, the study of it is not,” said Andrew Przybylski, a researcher at the University of Oxford, UK, who studies how technology affects well-being, in a press briefing on 9 May. “More than 90% of data sets come from a handful of English-speaking countries” that are mostly in the global north, he said. Previous studies have also focused on young people, he added.

To address this research gap, Przybylski and his colleagues analysed data on how Internet access was related to eight measures of well-being from the Gallup World Poll, conducted by analytics company Gallup, based in Washington DC. The data were collected annually from 2006 to 2021 from roughly 1,000 people, aged 15 and above, in each of 168 countries, through phone or in-person interviews. The researchers controlled for factors that might affect Internet use and welfare, including income level, employment status, education level and health problems.
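The idea of "controlling for" a confounder can be illustrated with a stratified comparison: compare Internet users with non-users within each income bracket, so that income differences cannot masquerade as an Internet effect. This toy sketch is not the study's actual model (the paper uses a more elaborate multivariate analysis), and every record below is invented:

```python
from collections import defaultdict

# Hypothetical poll records: (has_internet, income_bracket, life_satisfaction 0-10).
records = [
    (True,  "low",  6.0), (False, "low",  5.5), (True,  "low",  6.4),
    (False, "low",  5.9), (True,  "high", 7.8), (False, "high", 7.2),
    (True,  "high", 8.0), (False, "high", 7.4),
]

# Group scores by income stratum, then by Internet access within each stratum.
strata = defaultdict(lambda: {True: [], False: []})
for has_net, income, score in records:
    strata[income][has_net].append(score)

# Within-stratum gap: mean score of users minus mean score of non-users.
gaps = {}
for income, groups in sorted(strata.items()):
    gap = (sum(groups[True]) / len(groups[True])
           - sum(groups[False]) / len(groups[False]))
    gaps[income] = gap
    print(f"{income}: Internet-access gap = {gap:+.2f} points")
```

Because each comparison stays inside one income bracket, a gap that persists across strata is harder to attribute to income alone.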

Like a walk in nature

The team found that, on average, people who had access to the Internet scored 8% higher on measures of life satisfaction, positive experiences and contentment with their social life, compared with people who lacked web access. Online activities can help people to learn new things and make friends, and this could contribute to the beneficial effects, suggests Appel.

The positive effect is similar to the well-being benefit associated with taking a walk in nature, says Przybylski.

However, women aged 15–24 who reported having used the Internet in the past week were, on average, less happy with the place they live, compared with people who didn’t use the web. This could be because people who do not feel welcome in their community spend more time online, said Przybylski. Further studies are needed to determine whether links between Internet use and well-being are causal or merely associations, he added.

The study comes at a time of discussion around the regulation of Internet and social-media use , especially among young people. “The study cannot contribute to the recent debate on whether or not social-media use is harmful, or whether or not smartphones should be banned at schools,” because the study was not designed to answer these questions, says Tobias Dienlin, who studies how social media affects well-being at the University of Vienna. “Different channels and uses of the Internet have vastly different effects on well-being outcomes,” he says.

doi: https://doi.org/10.1038/d41586-024-01410-z

1. Vuorre, M. & Przybylski, A. K. Technol. Mind Behav. https://doi.org/10.1037/tmb0000127 (2024).

2. Heffer, T. et al. Clin. Psychol. Sci. 7, 462–470 (2018).

3. Coyne, S. M., Rogers, A. A., Zurcher, J. D., Stockdale, L. & Booth, M. Comput. Hum. Behav. 104, 106160 (2020).



Americans’ Changing Relationship With Local News

As news consumption habits become more digital, U.S. adults continue to see value in local outlets


Reporters question a defense attorney at Harris County Criminal Courts at Law in Houston on March 26, 2024. (Yi-Chin Lee/Houston Chronicle via Getty Images)

The Pew-Knight Initiative supports new research on how Americans absorb civic information, form beliefs and identities, and engage in their communities.

Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. Knight Foundation is a social investor committed to supporting informed and engaged communities.

Pew Research Center conducted this study to better understand the local news habits and attitudes of U.S. adults. It is a follow-up to a similar study conducted in 2018 .

The survey of 5,146 U.S. adults was conducted from Jan. 22 to 28, 2024. Everyone who completed the survey is a member of the Center’s American Trends Panel (ATP), an online survey panel that is recruited through national, random sampling of residential addresses. In this way, nearly all U.S. adults have a chance of selection. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other categories. Read more about the ATP’s methodology .

Refer to the topline for the questions used for this survey, along with responses, and to the methodology for more details.

This is a Pew Research Center report from the Pew-Knight Initiative, a research program funded jointly by The Pew Charitable Trusts and the John S. and James L. Knight Foundation. Find related reports online at https://www.pewresearch.org/pew-knight/ .

The local news landscape in America is going through profound changes as both news consumers and producers continue to adapt to a more digital news environment. We recently asked U.S. adults about the ways they access local news, as well as their attitudes toward local journalism, finding that:

A bar chart showing Americans increasingly prefer digital pathways to local news

  • A growing share of Americans prefer to get local news online, while fewer are getting news on TV or in print. And newspapers are no longer primarily consumed as a print product – the majority of readers of local daily newspapers now access them digitally.
  • The share of U.S. adults who say they are paying close attention to local news has dropped since our last major survey of attitudes toward local news in 2018, mirroring declining attention to national news.
  • Americans still see value in local news and local journalists. A large majority say local news outlets are at least somewhat important to the well-being of their local community. Most people also say local journalists are in touch with their communities and that their local news media perform well at several aspects of their jobs, such as reporting the news accurately.
  • At the same time, a relatively small share of Americans (15%) say they have paid for local news in the last year. And many seem unaware of the major financial challenges facing local news: A 63% majority (albeit a smaller majority than in 2018) say they think their local news outlets are doing very or somewhat well financially.
  • Majorities of both major parties say local media in their area are doing their jobs well. While Republicans and GOP-leaning independents are slightly less positive than Democrats and Democratic leaners in their opinions of local media, views of local news don’t have the same stark political divides that exist within Americans’ opinions about national media .
  • Most Americans say local journalists should remain neutral on issues in their community, but a substantial minority say local journalists should take a more active role. About three-in-ten say local journalists should advocate for change in their communities, a view that’s especially common among Democrats and younger adults.

These are some of the key findings from a new Pew Research Center survey of about 5,000 U.S. adults conducted in January 2024. This is the first in a series of Pew Research Center reports on local news from the Pew-Knight Initiative, a research program funded jointly by The Pew Charitable Trusts and the John S. and James L. Knight Foundation.

Americans largely hold positive views of local news organizations

At a time when many local news outlets are struggling and Americans’ trust in the news media has waned, the vast majority of U.S. adults (85%) say local news outlets are at least somewhat important to the well-being of their local community. This includes 44% who say local journalism is extremely or very important to their community.

About seven-in-ten U.S. adults (69%) say that local journalists in their area are mostly in touch with their community, up from 63% who said this in 2018. And most Americans also say their local news organizations are doing well at four key roles:

A bar chart showing most Americans say local media are doing well at different aspects of reporting

  • Reporting news accurately (71%)
  • Covering the most important stories (68%)
  • Being transparent (63%)
  • Keeping an eye on local political leaders (61%).

These are relatively positive views compared with how Americans see news organizations more broadly. For instance, a 2022 Pew Research Center survey found that fewer than half of U.S. adults say that news organizations in general do a very or somewhat good job of covering the most important stories, reporting the news accurately and serving as a watchdog over elected leaders.

A bar chart showing majorities of both political parties believe their local news media do various aspects of their jobs well

What’s more, views toward local news are not as politically polarized as Americans’ opinions about the news media overall. While Republicans and GOP-leaning independents are not quite as positive as Democrats and Democratic leaners in some of their assessments of local journalists, most Republicans still say the local media in their area are doing their jobs well.

For example, roughly three-quarters of Democrats (78%) say their local media do well at reporting news accurately, compared with about two-thirds of Republicans (66%).

By comparison, the 2022 survey found that 51% of Democrats and just 17% of Republicans say that news organizations in general do a very or somewhat good job of reporting the news accurately.

Jump to more information on views toward local news organizations.

A bar chart showing declines in attention to both local and national news

Fewer Americans are closely following local news – and other types of news

Despite these positive views toward local news organizations, there are signs that Americans are engaging less with local journalism than they used to.

The share of Americans who say they follow local news very closely has fallen by 15 percentage points since 2016 (from 37% to 22%). Most U.S. adults still say they follow local news at least somewhat closely (66%), but this figure also has dropped in recent years.

A line chart showing Americans’ preferred path to local news is moving online

This trend is not unique to local news – Americans’ attention to national and international news also has declined.

The local news landscape is becoming more digital

The ways in which Americans access local news are changing, reflecting an increasingly digital landscape – and matching patterns in overall news consumption habits .

Preferred pathways to local news

  • Fewer people now say they prefer to get local news through a television set (32%, down from 41% who said the same in 2018).
  • Americans are now more likely to say they prefer to get local news online, either through news websites (26%) or social media (23%). Both of these numbers have increased in recent years.
  • Smaller shares prefer getting their local news from a print newspaper or on the radio (9% each).

Specific sources for local news

The types of sources (e.g., outlets or organizations) Americans are turning to are changing as well:

A bar chart showing more Americans get local news from online forums than daily newspapers

  • While local television stations are still the most common source of local news beyond friends, family and neighbors, the share who often or sometimes get news there has declined from 70% to 64% in recent years.
  • Online forums, such as Facebook groups or the Nextdoor app, have become a more common destination for local news: 52% of U.S. adults say they at least sometimes get local news from these types of forums, up 14 percentage points from 2018. This is on par with the percentage who get local news at least sometimes from local radio stations.
  • Meanwhile, a third of Americans say they at least sometimes get local news from a daily newspaper, regardless of whether it is accessed via print, online or through a social media website – down 10 points from 2018. The share of Americans who get local news from newspapers is now roughly on par with the share who get local news from local government agencies (35%) or local newsletters or Listservs (31%).

Not only are fewer Americans getting local news from newspapers, but local daily newspapers are now more likely to be accessed online than in print.

A bar chart showing local newspapers are no longer accessed primarily through print

  • 31% of those who get news from daily newspapers do so via print, while far more (66%) do so digitally, whether through websites, apps, emails or social media posts that include content from the paper.
  • In 2018, just over half of those who got news from local daily newspapers (54%) did so from print, and 43% did so via a website, app, email or social media site.

There is a similar move toward digital access for local TV stations, though local TV news is still mostly consumed through a TV set.

  • In 2024, 62% of those getting news from local TV stations do so through a television, compared with 37% who do so through one of the digital pathways.
  • An even bigger majority of local TV news consumers (76%) got that news through a TV set in 2018.

Jump to more information on how people access local news.

The financial state of local news

The turmoil for the local news industry in recent years has come with major financial challenges. Circulation and advertising revenue for newspapers have seen sharp declines in the last decade, according to our analysis of industry data , and other researchers have documented that thousands of newspapers have stopped publishing in the last two decades. There also is evidence of audience decline for local TV news stations, although advertising revenue on local TV has been more stable.

A bar chart showing the share who think their local news is doing well financially has fallen since 2018 but is still a majority

When asked about the financial state of the news outlets in their community, a majority of Americans (63%) say they think their local news outlets are doing very or somewhat well, with a third saying that they’re not doing too well or not doing well at all. This is a slightly more pessimistic view than in 2018, when 71% said their local outlets were doing well, though it is still a relatively positive assessment of the financial state of the industry.

Just 15% of Americans say they have paid or given money to any local news source in the past year – a number that has not changed much since 2018. The survey also asked Americans who did not pay for news in the past year the main reason why not. The most common explanation is that people don’t pay because they can find plenty of free local news, although young adults are more inclined to say they just aren’t interested enough in local news to pay for it.

Jump to more information on how people view the financial state of local news.

Other key findings in this report

A bar chart showing weather, crime, traffic and government are all commonly followed local news topics

Americans get local news about a wide variety of topics. Two-thirds or more of U.S. adults at least sometimes get news about local weather, crime, government and politics, and traffic and transportation, while smaller shares (but still at least half) say they get local news about arts and culture, the economy, schools, and sports.

Relatively few Americans are highly satisfied with the coverage they see of many topics. The survey also asked respondents who at least sometimes get each type of local news how satisfied they are with the news they get. With the exception of weather, fewer than half say they are extremely or very satisfied with the quality of the news they get about each topic. For example, about a quarter of those who consume news about their local economy (26%) say they are extremely or very satisfied with this news. Read more about different local news topics in Chapter 2.

A bar chart showing younger adults are more likely to say that local journalists should advocate for change in the community

When asked whether local journalists should remain neutral on community issues or advocate for change in the community, a majority of Americans (69%) say journalists should remain neutral, reflecting more traditional journalistic norms. However, 29% say that local journalists should be advocating for change in their communities. Younger adults are the most likely to favor advocacy by journalists: 39% of those ages 18 to 29 say that local journalists should push for change, as do 34% of those 30 to 49. Read more about Americans’ views of the role of local journalists in Chapter 4.

Americans who feel a strong sense of connection to their community are more likely to engage with local news, say that local news outlets are important to the community, and rate local media more highly overall. For example, 66% of those who say they are very attached to their community say local news outlets are extremely or very important to the well-being of their local community, compared with 46% of those who are somewhat attached and 31% of those who are not very or not at all attached to their community.



Copyright 2024 Pew Research Center


Science is making anti-aging progress. But do we want to live forever?

Nobel laureate details new book, which surveys research, touches on larger philosophical questions

Anne J. Manning, Harvard Staff Writer

Mayflies live for only a day. Galapagos tortoises can reach up to age 170. The Greenland shark holds the world record at over 400 years of life. 

Venki Ramakrishnan, Nobel laureate and author of the newly released “ Why We Die: The New Science of Aging and the Quest for Immortality ,” opened his packed Harvard Science Book Talk last week by noting the vast variabilities of lifespans across the natural world. Death is certain, so far as we know. But there’s no physical or chemical law that says it must happen at a fixed time, which raises other, more philosophical issues.

The “why” behind these enormous swings, and the quest to harness longevity for humans, have driven fevered attempts (and billions of dollars in research spending) to slow or stop aging. Ramakrishnan’s book is a dispassionate journey through current scientific understanding of aging and death, which basically comes down to an accumulation of chemical damage to molecules and cells.

“The question is whether we can tackle aging processes, while still keeping us who we are as humans,” said Ramakrishnan during his conversation with Antonio Regalado, a writer for the MIT Technology Review. “And whether we can do that in a safe and effective way.”

Even if immortality — or just living for a very, very long time — were theoretically possible through science, should we pursue it? Ramakrishnan likened the question to other moral ponderings.

“There’s no physical or chemical law that says we can’t colonize other galaxies, or outer space, or even Mars,” he said. “I would put it in that same category. And it would require huge breakthroughs, which we haven’t made yet.”

In fact, we’re a lot closer to big breakthroughs when it comes to chasing immortality. Ramakrishnan noted the field is moving so fast that a book like his can capture but a snippet. He then took the audience on a brief tour of some of the major directions of aging research. And much of it, he said, started in unexpected places.

Take rapamycin, a drug first isolated in the 1960s from a bacterium on Easter Island found to have antifungal, immunosuppressant, and anticancer properties. Rapamycin targets the TOR pathway, a large molecular signaling cascade within cells that regulates many functions fundamental to life. Rapamycin has garnered renewed attention for its potential to reverse the aging process by targeting cellular signaling associated with physiological changes and diseases in older adults.

Other directions include mimicking the anti-aging effects of caloric restriction shown in mice, as well as one particularly exciting area called cellular reprogramming. That means taking fully developed cells and essentially turning back the clock on their development.

The most famous foundational experiment in this area was by Kyoto University scientist and Nobel laureate Shinya Yamanaka, who showed that just four transcription factors could revert an adult cell all the way back to a pluripotent stem cell, creating what are now known as induced pluripotent stem cells.

Ramakrishnan, a scientist at England’s MRC Laboratory of Molecular Biology, won the 2009 Nobel Prize in chemistry for uncovering the structure of the ribosome. He said he felt qualified to write the book because he has “no skin in the game” of aging research. As a molecular biologist who has studied fundamental processes of how cells make proteins, he had connections in the field but wasn’t too close to any of it.

While researching the book, he took pains to avoid interviewing scientists with commercial ventures tied to aging.

The potential for conflicts of interest abounds.

The world has seen an explosion in aging research in recent decades, with billions of dollars spent by government agencies and private companies . And the consumer market for products is forecast to hit $93 billion by 2027 .

As a result, false or exaggerated claims by companies promising longer life are currently on the rise, Ramakrishnan noted. He shared one example: Supplements designed to lengthen a person’s telomeres, or genetic segments that shrink with age, are available on Amazon.

“Of course, these are not FDA approved. There are no clinical trials, and it’s not clear what their basis is,” he said.

But still there appears to be some demand.


Patients love telehealth—physicians are not so sure

IRL or URL? Many physicians and patients used to see medical care as something best done in-person (in real life, or IRL). But the pandemic has spurred a massive transition to virtual (or URL) care. According to our recent surveys of consumers and physicians, opinions are split on what happens next (see sidebar, “Our methodology”). As the pandemic evolves, consumers still prefer the convenience of digital engagement and virtual-care options, according to our recent McKinsey Consumer Health Insights Survey. This preference could help more patients access care, while also helping providers to grow.

Our methodology

To help our clients understand responses to COVID-19, McKinsey launched a research effort to gather insights from physicians into how the pandemic is affecting their ability to provide care, their financial situation, and their level of stress, as well as what kind of support would interest them. Nationwide surveys were conducted online in 2020 from April 27–May 5 (538 respondents), July 22–27 (150 respondents), and September 22–27 (303 respondents), as well as from March 25–April 5, 2021 (379 respondents).
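When evaluating findings drawn from survey waves this small, it helps to keep sampling precision in mind. The following is a minimal sketch (not part of the McKinsey methodology) that computes an approximate 95% margin of error for each physician-survey wave using the standard worst-case formula for a simple random sample; real surveys apply weighting and design effects that widen these intervals.

```python
import math

# Approximate 95% margin of error for a simple random sample,
# using the worst case p = 0.5. Illustrative only: weighting and
# design effects in real surveys widen these intervals.
def margin_of_error(n: int, z: float = 1.96) -> float:
    return z * math.sqrt(0.25 / n)

# Physician survey waves and sample sizes reported above.
waves = {
    "Apr 27-May 5, 2020": 538,
    "Jul 22-27, 2020": 150,
    "Sep 22-27, 2020": 303,
    "Mar 25-Apr 5, 2021": 379,
}

for label, n in waves.items():
    print(f"{label}: n = {n}, margin of error = ±{margin_of_error(n):.1%}")
```

The smallest wave (n = 150) carries a margin of error of roughly ±8 percentage points, so small percentage shifts between waves should be read cautiously.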

The participants were US physicians in a variety of practice types and sizes, and a range of employment types. The specialties included general practice and family practice; cardiology; orthopedics, sports medicine and musculoskeletal; dermatology; general surgery; obstetrics and gynecology; oncology; ophthalmology; otorhinolaryngology and ENT; pediatrics; plastic surgery; physical medicine and rehabilitation; psychiatry and behavioral health; emergency medicine; and urology. These surveys built on a prior one of 1,008 primary-care, cardiology, and orthopedic-surgery physicians in April 2019.

To provide timely insights on the reported behaviors, concerns, and desired support of adult consumers (18 years and older) in response to COVID-19, McKinsey launched consumer surveys in 2020 (March 16–17, March 27–29, April 11–13, April 25–27, May 15–18, June 4–8, July 11–14, September 5–7, October 22–26, and November 20–December 6) and 2021 (January 4–11, February 8–12, March 15–22, April 24–May 2, June 4–13, and August 13–23). These surveys represent the stated perspectives of consumers and are not meant to indicate or predict their actual future behavior. (In these surveys, we asked consumers about “Coronavirus/COVID-19,” given the general public’s colloquial use of coronavirus to refer to COVID-19.)

Many digital start-ups and tech and retail giants are rising to the occasion, but our most recent (2021) McKinsey Physician Survey indicates that physicians may prefer a return to pre-COVID-19 norms. In this article, we explore the trends creating disconnects between consumers and physicians and share ideas on how providers could offer digital services that work not only for them but also for patients. Bottom line: a seamless IRL/URL offering could retain patients while delivering high-quality care. Everybody benefits.

The rise of telehealth


At the onset of the COVID-19 pandemic, both physicians and patients embraced telehealth: in April 2020, the number of virtual visits was a stunning 78 times higher than it had been two months earlier, accounting for nearly one-third of outpatient visits. In May 2021, 88 percent of consumers said that they had used telehealth services at some point since the COVID-19 pandemic began. Physicians also felt dramatically more comfortable with virtual care. Eighty-three percent of those surveyed in the 2021 McKinsey Physician Survey offered virtual services, compared with only 13 percent in 2019 (see sidebar on methodology; McKinsey Physician Surveys conducted nationally in five waves between May 2019 and April 2021: May 1, 2019, n = 1,008; May 5, 2020, n = 500; July 2, 2020, n = 150; September 27, 2020, n = 500; April 5, 2021, n = 379).

However, as of mid-2021, consumers’ embrace of telehealth appeared to have dimmed a bit  from its early COVID-19 peak: utilization was down to 38 times pre-COVID-19 levels. Also, more physicians were offering telehealth but recommending in-person care when possible in 2021, which could suggest that physicians are gravitating away from URL and would prefer a return to IRL care delivery (Exhibit 1).

Three trends from the late-stage pandemic

As COVID-19 continues, three emerging trends could set the stage for the next few years.

The number of virtual-first players keeps growing, and physicians struggle to keep up

The growth (and valuations) of virtual-first care providers suggest that patient demand is persistent and growing. Teladoc increased the number of its visits by 156 percent in 2020, and its revenues jumped by 107 percent year over year. Amwell increased its supply of providers by 950 percent in 2020 (“Teladoc Health reports fourth-quarter and full-year 2020 results,” Teladoc Health, February 24, 2021; “Amwell announces results for the fourth quarter and full year 2020,” Amwell, March 24, 2021). By contrast, only 45 percent of physicians have been able to invest in telehealth during the pandemic, and only 16 percent have invested in other digital tools. Just 41 percent believe that they have the technology to deliver telehealth seamlessly (McKinsey Physician Survey, April 5, 2021).

Some workflows, for example, require physicians to log into disparate systems that do not integrate seamlessly with an electronic health record (EHR). Audiovisual failures during virtual appointments continue to occur. To make these models work, providers may need to determine how to design operational workflows to make IRL/URL care as seamless as possible for both providers and patients. The workflows and care team models may need to vary, depending on the physician’s specialty and the amount of time they plan to devote to URL versus IRL care.

Patient–physician relationships are shifting

In McKinsey’s April 2021 Physician Survey, 58 percent of the respondents reported that they had lost patients to other physicians or to other health systems since the start of the COVID-19 pandemic. Corroborating those findings, our August 2021 survey of consumers showed that of those who had a primary-care physician (PCP), 15 percent had switched in the past year. Thirty-five percent of all consumers reported seeing a new healthcare provider who was not their regular PCP or specialist in the past year. Among consumers who had switched PCPs, 35 percent cited one or more reasons related to the patient experience: the desire for a PCP who better understood their needs (15 percent of respondents), a better experience (10 percent), or more convenient appointments (6 percent). Just half (50 percent) of consumers with a PCP say they are very satisfied. What’s more, Medicare regulations now give patients more ownership over their health data, and that could make it easier for them to switch physicians (“Policies and technology for interoperability and burden reduction,” Centers for Medicare & Medicaid Services, December 9, 2021).

Physicians and patients see telehealth differently

Our surveys show that doctors and patients have starkly different opinions about telehealth and broader digital engagement (Exhibit 2). Take convenience: while two-thirds of physicians and 60 percent of patients said they agreed that virtual health is more convenient than in-person care for patients, only 36 percent of physicians find it more convenient for themselves.
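One way to check whether a gap like this (60 percent of patients versus 36 percent of physicians) exceeds sampling noise is a two-proportion z-test. The sketch below is illustrative, not part of the McKinsey analysis: the physician sample size uses the April 2021 wave (n = 379), while the consumer sample size of 2,000 is a hypothetical placeholder, since it is not reported in this passage.

```python
import math

# Two-proportion z-test: is the convenience gap between patients
# (60%) and physicians (36%) larger than sampling noise?
def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> float:
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# n_patients = 2,000 is a hypothetical placeholder;
# n_physicians = 379 is the April 2021 wave reported above.
z = two_proportion_z(0.60, 2000, 0.36, 379)
print(f"z = {z:.1f}")  # well beyond the 1.96 threshold for p < 0.05
```

Under these assumed sample sizes the gap is far larger than sampling error alone would produce, which supports reading it as a genuine difference in perception rather than noise.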

This perception may be leading physicians to rethink telehealth. Most said they expect to return to a primarily in-person delivery model over the next year. Sixty-two percent said they recommend in-person over virtual care to patients. Physicians also expect telehealth to account for one-third less of their visits a year from now than it does today.

These physicians may be underestimating patient demand. Forty percent of patients in May 2021 said they believe they will continue to use telehealth in the pandemic’s aftermath (McKinsey Consumer Health Insights Survey, May 7, 2021).

In November 2021, 55 percent of patients said they were more satisfied with telehealth/virtual care visits than with in-person appointments (McKinsey Consumer Health Insights Survey, November 19, 2021). Thirty-five percent of consumers are currently using other digital services, such as ordering prescriptions online and home delivery. Of these, 42 percent started using these services during the pandemic and plan to keep using them, and an additional 15 percent are interested in starting digital services (McKinsey Consumer Health Insights Survey, June 24, 2021).

Convenience is not the only concern. Physicians also worry about reimbursement. At the height of the COVID-19 pandemic in the United States, the Centers for Medicare & Medicaid Services (CMS) and several other payers switched to at-parity (equal) reimbursement for virtual and in-person visits. More than half of physician respondents said that if virtual rates were 15 percent lower than in-person rates, they would be less likely to offer telehealth. Telehealth takes investment: traditional providers may need time to transition their capital and operating expenses to deliver virtual care at a cost lower than that of IRL.

Four critical actions for providers to consider

Providers may want to define their IRL/URL care strategy to identify the appropriate places for various types of care—balancing clinical appropriateness with the preferences of physicians and patients.

Determine the most clinically appropriate setting

Clinical appropriateness may be the most crucial variable for deciding how and where to increase the utilization of telehealth. Almost half of physicians said they regard telehealth as appropriate for treatment of ongoing chronic conditions, and 38 percent said they believe it is appropriate when patients have an acute change in health—increases of 26 and 17 percentage points, respectively, since May 2019.

However, physicians remain conservative in their view of telehealth’s effectiveness compared with in-person care. Their opinions vary by visit type (Exhibit 3). Health systems may consider asking their frontline clinical-care delivery teams to determine the clinically appropriate setting for each type of care, taking into account whether physicians are confident that they can deliver equally high-quality care for both IRL and URL appointments.

Assess patient wants and needs in relevant markets and segments

Patient demand for telehealth remains high, but expectations appear to vary by age and income group, payer status, and type of care. Our survey shows that younger people (under the age of 55), people in higher income brackets (annual household income of $100,000 or more), and people with individual or employer-sponsored group insurance are more likely to use telehealth (Exhibit 4). Patient demand also is higher for virtual mental and behavioral health. Sixty-two percent of mental-health patients completed their most recent appointments virtually, but only 20 percent of patients logged in to see their primary-care provider, gynecologist, or pediatrician.

To meet market demand effectively, it may be crucial to base care delivery models on a deep understanding of the market, with a range of both IRL and URL options to meet the needs of multiple patient segments.

Partner with physicians to define a new operating model

Many physicians are turning away from the virtual operating model: 62 percent recommended in-person care in April 2021, up five percentage points since September 2020. As physicians evaluate their processes for 2022, 46 percent said they prefer to offer, at most, a couple of hours of virtual care each day. Twenty-nine percent would like to offer none at all—up ten percentage points from September 2020. Just 11 percent would dedicate one full day a week to telehealth, and almost none would want to offer virtual care full time (Exhibit 5).

To adapt to these views, care providers can try to meet the needs and the expectations of physicians. They could offer highly virtualized schedules to physicians who prefer telehealth, while allowing other physicians to remain in-person only. Matching the preferences of physicians may create the best experience both for them and for patients. Greater flexibility and greater control over decisions about when and how much virtual care to offer may also help address chronic physician burnout issues (Exhibit 6). Digital-first solutions (for example, online scheduling, digital registration, and virtual communications with providers) could also increase the reach of in-person-only care providers to the 60 percent of consumers interested in using these digital solutions after the pandemic abates.

Communicate clearly to patients and others

Physicians consistently emerge as patients’ most trusted source of clinical information: 90 percent of patients consider providers trustworthy for healthcare-related issues (McKinsey Consumer Survey, May 2020). Providers could play a pivotal role in counseling patients on the importance of continuity of care, as well as on what can be done safely and effectively IRL and URL, respectively. The goal is to help patients receive the care that they need in a timely manner and in the most clinically appropriate setting.

Potential benefits to providers

The strategic, purposeful design of a hybrid IRL/URL healthcare delivery model that respects the preferences of patients and physicians and offers virtual care when it is appropriate clinically may allow healthcare providers to participate in the near term, retain clinical talent, offer better value-based care, and differentiate themselves strategically for the future.

Telehealth and broader digital engagement tools have enjoyed persistent patient demand throughout the pandemic, and that demand may persist well after it. Investment in digital health companies has grown rapidly, reaching $21.6 billion in 2020, a 103 percent year-over-year increase, which also suggests that this approach to medicine has staying power (Q4 and annual 2020 digital health (healthcare IT) funding and M&A report, Executive Summary, Digital Health Funding and M&A, Mercom Capital Group).

That level of demand offers the potential for growth when physicians can meet it. If only new entrants fully meet consumer demand, traditional providers who do not offer URL options may risk losing market share over time as a result of patients’ initial visit and downstream care decisions. What’s more, as healthcare reimbursement continues to move toward value, virtual-delivery options could become a strategic differentiator that helps providers better manage costs (Brian W. Powers, MD, et al., “Association between primary care payment model and telemedicine use for Medicare Advantage enrollees during the COVID-19 pandemic,” JAMA Network, July 16, 2021).

In all likelihood, one of the critical steps in the process will be engaging physicians in the design of new virtual-care models—for example, determining clinical appropriateness, how and where physicians prefer to deliver care, and the workflows that will maximize their productivity. This has the added benefit of potentially also addressing the problem of physician burnout by offering a range of options for how and where clinicians practice.

Most important, virtual care can offer an opportunity to meaningfully improve outcomes for patients by delivering timely care to those who might otherwise delay it or who live in areas with provider shortages. In addition, patients’ most trusted advisers on care decisions are physicians, so virtual care gives physicians a meaningful opportunity to help patients access the care they need in a way that both parties may find convenient and appropriate (“Public & physician trust in the U.S. healthcare system,” ABIM Foundation, surveys conducted on December 29, 2020 and February 5, 2021).

Physicians are evaluating a variety of factors for delivering care to patients during and, eventually, after the COVID-19 pandemic. The strategic, purposeful design of a hybrid IRL/URL healthcare delivery model offers a triple unlock: improving the value of healthcare while better meeting consumer demand and improving physicians’ engagement. The full unlock is not easy—it requires deep engagement and cooperation between administrators, clinicians, and frontline staff, as well as focused investment. But it will yield dividends for patients and providers alike in the long run.

Jenny Cordina is a partner in McKinsey’s Detroit office,  Jennifer Fowkes is a partner in the Washington, DC, office,  Rupal Malani, MD , is a partner in the Cleveland office, and  Laura Medford-Davis, MD , is an associate partner in the Houston office.

The article was edited by Elizabeth Newman, an executive editor in the Chicago office.




  24. Planning Qualitative Research: Design and Decision Making for New

    While many books and articles guide various qualitative research methods and analyses, there is currently no concise resource that explains and differentiates among the most common qualitative approaches. We believe novice qualitative researchers, students planning the design of a qualitative study or taking an introductory qualitative research course, and faculty teaching such courses can ...

  25. Strategies of Public University Building Maintenance—A Literature Survey

    In this article, comprehensive insights into the field of building maintenance, emphasizing the importance of keywords, collaborative efforts among authors, and the evolving research landscape, are provided. The use stage, as the longest phase in a building's life cycle, involves economic, technical, and social activities. Numerous authors have contributed to the broader topic of building ...