Qualitative vs Quantitative Research Methods & Data Analysis

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

The main difference between quantitative and qualitative research is the type of data they collect and analyze.

Quantitative data is information about quantities, and therefore numbers; qualitative data is descriptive and concerns phenomena that can be observed but not easily measured, such as language.
  • Quantitative research collects numerical data and analyzes it using statistical methods. The aim is to produce objective, empirical data that can be measured and expressed numerically. Quantitative research is often used to test hypotheses, identify patterns, and make predictions.
  • Qualitative research gathers non-numerical data (words, images, sounds) to explore subjective experiences and attitudes, often via observation and interviews. It aims to produce detailed descriptions and uncover new insights about the studied phenomenon.


What Is Qualitative Research?

Qualitative research is the process of collecting, analyzing, and interpreting non-numerical data, such as language. Qualitative research can be used to understand how an individual subjectively perceives and gives meaning to their social reality.

Qualitative data is non-numerical data, such as text, video, photographs, or audio recordings. This type of data can be collected using diary accounts or in-depth interviews and analyzed using grounded theory or thematic analysis.

"Qualitative research is multimethod in focus, involving an interpretive, naturalistic approach to its subject matter. This means that qualitative researchers study things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them." (Denzin & Lincoln, 1994, p. 2)

Interest in qualitative data came about as a result of the dissatisfaction of some psychologists (e.g., Carl Rogers) with the scientific approach of psychologists such as the behaviorists (e.g., Skinner).

Since psychologists study people, the traditional scientific approach is not always seen as an appropriate way of carrying out research, since it fails to capture the totality of human experience and the essence of being human. Exploring participants’ experiences is known as a phenomenological approach (see humanism).

Qualitative research is primarily concerned with meaning, subjectivity, and lived experience. The goal is to understand the quality and texture of people’s experiences, how they make sense of them, and the implications for their lives.

Qualitative research aims to understand the social reality of individuals, groups, and cultures as nearly as possible as participants feel or live it. Thus, people and groups are studied in their natural setting.

Examples of qualitative research questions include what an experience feels like, how people talk about something, how they make sense of an experience, and how events unfold for people.

Research following a qualitative approach is exploratory and seeks to explain ‘how’ and ‘why’ a particular phenomenon, or behavior, operates as it does in a particular context. It can be used to generate hypotheses and theories from the data.

Qualitative Methods

There are different types of qualitative research methods, including diary accounts, in-depth interviews, documents, focus groups, case study research, and ethnography.

The results of qualitative methods provide a deep understanding of how people perceive their social realities and, in consequence, how they act within the social world.

"The researcher has several methods for collecting empirical materials, ranging from the interview to direct observation, to the analysis of artifacts, documents, and cultural records, to the use of visual materials or personal experience." (Denzin & Lincoln, 1994, p. 14)

Here are some examples of qualitative data:

Interview transcripts: Verbatim records of what participants said during an interview or focus group. They allow researchers to identify common themes and patterns, and draw conclusions based on the data. Interview transcripts can also be useful in providing direct quotes and examples to support research findings.

Observations: The researcher typically takes detailed notes on what they observe, including any contextual information, nonverbal cues, or other relevant details. The resulting observational data can be analyzed to gain insights into social phenomena, such as human behavior, social interactions, and cultural practices.

Unstructured interviews: These generate qualitative data through the use of open questions, allowing the respondent to talk in some depth and choose their own words. This helps the researcher develop a real sense of a person’s understanding of a situation.

Diaries or journals: Written accounts of personal experiences or reflections.

Notice that qualitative data could be much more than just words or text. Photographs, videos, sound recordings, and so on, can be considered qualitative data. Visual data can be used to understand behaviors, environments, and social interactions.

Qualitative Data Analysis

Qualitative research is endlessly creative and interpretive. The researcher does not just leave the field with mountains of empirical data and then easily write up his or her findings.

Qualitative interpretations are constructed, and various techniques can be used to make sense of the data, such as content analysis, grounded theory (Glaser & Strauss, 1967), thematic analysis (Braun & Clarke, 2006), or discourse analysis.

For example, thematic analysis is a qualitative approach that involves identifying implicit or explicit ideas within the data. Themes will often emerge once the data has been coded.
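As a toy illustration of what coded data might look like before themes are developed (this is not Braun and Clarke's full procedure), the sketch below tallies hypothetical codes that a researcher has already assigned to interview excerpts; frequently recurring codes are candidates for grouping into themes.

```python
from collections import Counter

# Hypothetical coded excerpts: each interview excerpt has been assigned
# one or more researcher-defined codes during an initial coding pass.
coded_excerpts = [
    {"participant": "P1", "codes": ["work stress", "lack of support"]},
    {"participant": "P2", "codes": ["work stress", "coping strategies"]},
    {"participant": "P3", "codes": ["lack of support"]},
    {"participant": "P4", "codes": ["coping strategies", "work stress"]},
]

# Tally how often each code appears across the data set.
code_counts = Counter(
    code for excerpt in coded_excerpts for code in excerpt["codes"]
)

# Frequently recurring codes are candidates to be grouped into themes;
# the interpretive work of naming and refining themes remains manual.
for code, count in code_counts.most_common():
    print(f"{code}: {count}")
```

The counting only supports the analysis; deciding which codes belong together and what a theme means is still an interpretive judgment made by the researcher.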


Key Features

  • Events can be understood adequately only if they are seen in context. Therefore, a qualitative researcher immerses her/himself in the field, in natural surroundings. The contexts of inquiry are not contrived; they are natural. Nothing is predefined or taken for granted.
  • Qualitative researchers want those who are studied to speak for themselves, to provide their perspectives in words and other actions. Therefore, qualitative research is an interactive process in which the persons studied teach the researcher about their lives.
  • The qualitative researcher is an integral part of the data; without the active participation of the researcher, no data exists.
  • The study’s design evolves during the research and can be adjusted or changed as it progresses. For the qualitative researcher, there is no single reality. It is subjective and exists only in reference to the observer.
  • The theory is data-driven and emerges as part of the research process, evolving from the data as they are collected.

Limitations of Qualitative Research

  • Because of the time and costs involved, qualitative designs do not generally draw samples from large-scale data sets.
  • The problem of adequate validity or reliability is a major criticism. Because of the subjective nature of qualitative data and its origin in single contexts, it is difficult to apply conventional standards of reliability and validity. For example, because of the central role played by the researcher in the generation of data, it is not possible to replicate qualitative studies.
  • Also, contexts, situations, events, conditions, and interactions cannot be replicated to any extent, nor can generalizations be made to a wider context than the one studied with confidence.
  • The time required for data collection, analysis, and interpretation is lengthy. Analysis of qualitative data is difficult, and expert knowledge of the area is necessary to interpret it. Great care must be taken when doing so, for example, when looking for symptoms of mental illness.

Advantages of Qualitative Research

  • Because of close researcher involvement, the researcher gains an insider’s view of the field. This allows the researcher to find issues that are often missed (such as subtleties and complexities) by the scientific, more positivistic inquiries.
  • Qualitative descriptions can be important in suggesting possible relationships, causes, effects, and dynamic processes.
  • Qualitative analysis allows for ambiguities/contradictions in the data, which reflect social reality (Denscombe, 2010).
  • Qualitative research uses a descriptive, narrative style; this research might be of particular benefit to the practitioner as she or he could turn to qualitative reports to examine forms of knowledge that might otherwise be unavailable, thereby gaining new insight.

What Is Quantitative Research?

Quantitative research involves the process of objectively collecting and analyzing numerical data to describe, predict, or control variables of interest.

The goals of quantitative research are to test causal relationships between variables, make predictions, and generalize results to wider populations.

Quantitative researchers aim to establish general laws of behavior and phenomena across different settings and contexts. Research is used to test a theory and ultimately support or reject it.

Quantitative Methods

Experiments typically yield quantitative data, as they are concerned with measuring things. However, other research methods, such as controlled observations and questionnaires, can produce both quantitative and qualitative information.

For example, a rating scale or closed questions on a questionnaire would generate quantitative data as these produce either numerical data or data that can be put into categories (e.g., “yes,” “no” answers).

Experimental methods limit the ways in which research participants can react to and express natural social behavior.

Findings are, therefore, likely to be context-bound and simply a reflection of the assumptions that the researcher brings to the investigation.

There are numerous examples of quantitative data in psychological research, including mental health research. Here are a few:

Beck Depression Inventory (BDI): a self-report questionnaire widely used to assess the severity of depressive symptoms in individuals. The BDI consists of 21 questions, each scored on a scale of 0 to 3, with higher total scores indicating more severe depressive symptoms.

Experience in Close Relationships Scale (ECR): a self-report questionnaire widely used to assess adult attachment styles. The ECR provides quantitative data that can be used to classify attachment styles and predict relationship outcomes.

Neuroimaging data: neuroimaging techniques, such as MRI and fMRI, provide quantitative data on brain structure and function. This data can be analyzed to identify brain regions involved in specific mental processes or disorders.
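To make the scoring concrete, below is a minimal sketch of how a BDI-style total might be computed from item responses. The responses and the severity cut-offs are illustrative assumptions for the example, not the published scoring manual, and the exact bands differ between BDI versions.

```python
# Hypothetical responses to the 21 BDI items, each scored 0-3.
item_scores = [1, 0, 2, 1, 0, 1, 2, 0, 0, 1, 1, 0, 2, 1, 0, 1, 0, 1, 0, 2, 1]

assert len(item_scores) == 21 and all(0 <= s <= 3 for s in item_scores)

# The total score is the sum of the item scores (possible range 0-63).
total = sum(item_scores)

# Illustrative severity bands; actual cut-offs depend on the BDI version used.
if total <= 13:
    severity = "minimal"
elif total <= 19:
    severity = "mild"
elif total <= 28:
    severity = "moderate"
else:
    severity = "severe"

print(f"BDI total: {total} ({severity})")
```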

Quantitative Data Analysis

Statistics help us turn quantitative data into useful information to help with decision-making. We can use statistics to summarize our data, describing patterns, relationships, and connections. Statistics can be descriptive or inferential.

Descriptive statistics help us to summarize our data. In contrast, inferential statistics are used to identify statistically significant differences between groups of data (such as intervention and control groups in a randomized controlled trial).
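As a small illustration of the two kinds of statistics, the sketch below summarizes two hypothetical groups (descriptive statistics) and then runs an independent-samples t-test to ask whether the difference between them is statistically significant (inferential statistics). The scores are invented for the example.

```python
import numpy as np
from scipy import stats

# Hypothetical outcome scores for an intervention and a control group.
intervention = np.array([24, 27, 31, 29, 25, 30, 28, 26])
control = np.array([22, 21, 25, 24, 20, 23, 26, 22])

# Descriptive statistics: summarize each group.
for name, group in [("intervention", intervention), ("control", control)]:
    print(f"{name}: mean={group.mean():.1f}, sd={group.std(ddof=1):.1f}, n={len(group)}")

# Inferential statistics: independent-samples t-test comparing the two groups.
t_stat, p_value = stats.ttest_ind(intervention, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```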

  • Quantitative researchers try to control extraneous variables by conducting their studies in the lab.
  • The research aims for objectivity (i.e., without bias) and is separated from the data.
  • The design of the study is determined before it begins.
  • For the quantitative researcher, the reality is objective, exists separately from the researcher, and can be seen by anyone.
  • Research is used to test a theory and ultimately support or reject it.

Limitations of Quantitative Research

  • Context: Quantitative experiments do not take place in natural settings. In addition, they do not allow participants to explain their choices or the meaning the questions may have for them (Carr, 1994).
  • Researcher expertise: Poor knowledge of the application of statistical analysis may negatively affect analysis and subsequent interpretation (Black, 1999).
  • Variability of data quantity: Large sample sizes are needed for more accurate analysis. Small-scale quantitative studies may be less reliable because of the low quantity of data (Denscombe, 2010). This also affects the ability to generalize study findings to wider populations.
  • Confirmation bias: The researcher might miss observing phenomena because of a focus on testing theories or hypotheses rather than on generating them.

Advantages of Quantitative Research

  • Scientific objectivity: Quantitative data can be interpreted with statistical analysis, and since statistics are based on the principles of mathematics, the quantitative approach is viewed as scientifically objective and rational (Carr, 1994; Denscombe, 2010).
  • Useful for testing and validating already constructed theories.
  • Rapid analysis: Sophisticated software removes much of the need for prolonged data analysis, especially with large volumes of data involved (Antonius, 2003).
  • Replication: Quantitative data is based on measured values and can be checked by others because numerical data is less open to ambiguities of interpretation.
  • Hypotheses can also be tested because of statistical analysis (Antonius, 2003).

Antonius, R. (2003). Interpreting quantitative data with SPSS. Sage.

Black, T. R. (1999). Doing quantitative research in the social sciences: An integrated approach to research design, measurement and statistics. Sage.

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3, 77–101.

Carr, L. T. (1994). The strengths and weaknesses of quantitative and qualitative research: What method for nursing? Journal of Advanced Nursing, 20(4), 716–721.

Denscombe, M. (2010). The good research guide: For small-scale social research. McGraw Hill.

Denzin, N., & Lincoln, Y. (1994). Handbook of qualitative research. Thousand Oaks, CA: Sage.

Glaser, B. G., Strauss, A. L., & Strutzel, E. (1968). The discovery of grounded theory: Strategies for qualitative research. Nursing Research, 17(4), 364.

Minichiello, V. (1990). In-depth interviewing: Researching people. Longman Cheshire.

Punch, K. (1998). Introduction to social research: Quantitative and qualitative approaches. London: Sage.

Further Information

  • Mixed methods research
  • Designing qualitative research
  • Methods of data collection and analysis
  • Introduction to quantitative and qualitative research
  • Checklists for improving rigour in qualitative research: a case of the tail wagging the dog?
  • Qualitative research in health care: Analysing qualitative data
  • Qualitative data analysis: the framework approach
  • Using the framework method for the analysis of qualitative data in multi-disciplinary health research
  • Content Analysis
  • Grounded Theory
  • Thematic Analysis


10 Advantages & Disadvantages of Quantitative Research


Quantitative Research

When researchers look at gathering data, there are two types of testing methods they can use: quantitative research, or qualitative research. Quantitative research looks to capture real, measurable data in the form of numbers and figures; whereas qualitative research is concerned with recording opinion data, customer characteristics, and other non-numerical information.

Quantitative research is a powerful tool for those looking to gather empirical data about their topic of study. Using statistical models and math, researchers evaluate their hypothesis. An integral component of quantitative research - and truly, all research - is the careful and considered analysis of the resulting data points.

There are several key advantages and disadvantages to conducting quantitative research that should be considered when deciding which type of testing best fits the occasion.

5 Advantages of Quantitative Research

  • Quantitative research is concerned with facts & verifiable information.

Quantitative research is primarily designed to capture numerical data - often for the purpose of studying a fact or phenomenon in their population. This kind of research activity is very helpful for producing data points when looking at a particular group - like a customer demographic. All of this helps us to better identify the key roots of certain customer behaviors. 

Businesses that research their customers intimately often outperform their competitors. Knowing the reasons why a customer makes a particular purchasing decision makes it easier for companies to address issues in their audiences. Data analysis of this kind can be used for a wide range of applications, even outside the world of commerce. 

  • Quantitative research can be done anonymously. 

Unlike qualitative research questions - which often ask participants to divulge personal and sometimes sensitive information - quantitative research does not require participants to be named or identified. As long as those conducting the testing are able to independently verify that the participants fit the necessary profile for the test, then more identifying information is unnecessary. 

  • Quantitative research processes don't need to be directly observed.

Whereas qualitative research demands close attention be paid to the process of data collection, quantitative research data can be collected passively. Surveys, polls, and other forms of asynchronous data collection generate data points over a defined period of time, freeing up researchers to focus on more important activities. 

  • Quantitative research is faster than other methods.

Quantitative research can capture vast amounts of data far quicker than other research activities. The ability to work in real time allows analysts to immediately begin incorporating new insights and changes into their work - dramatically reducing the turnaround time of their projects. Fewer delays and a larger sample size ensure you will have a far easier time managing your data collection process.

  • Quantitative research is verifiable and can be used to duplicate results.

The careful and exact way in which quantitative tests must be designed enables other researchers to duplicate the methodology. In order to verify the integrity of any experimental conclusion, others must be able to replicate the study on their own. Independently verifying data is how the scientific community creates precedent and establishes trust in their findings.

5 Disadvantages of Quantitative Research

  • Limited to numbers and figures.

Quantitative research is an incredibly precise tool in that it gathers only cold, hard figures. This double-edged sword leaves the quantitative method unable to deal with questions that require specific feedback, and it often lacks a human element. For questions like, “What sorts of emotions does our advertisement evoke in our test audiences?” or “Why do customers prefer our product over the competing brand?”, the quantitative research method will not produce a meaningful answer.

  • Testing models are more difficult to create.

Creating a quantitative research model requires careful attention to be paid to your design. From the hypothesis to the testing methods and the analysis that comes after, there are several moving parts that must be brought into alignment in order for your test to succeed. Even one unintentional error can invalidate your results, and send your team back to the drawing board to start all over again.

  • Tests can be intentionally manipulative.  

Bad actors looking to push an agenda can sometimes create quantitative tests that are faulty and designed to support a particular end result. Apolitical facts and figures can be turned political when given a limited context. You can imagine an example in which a politician devises a poll with answers that are designed to give him a favorable outcome - no matter what respondents pick.

  • Results are open to subjective interpretation.

Whether due to researcher bias or simple accident, research data can be manipulated in order to give a subjective result. When numbers are not given their full context, or were gathered in an incorrect or misleading way, the results that follow cannot be correctly interpreted. Bias, opinion, and simple mistakes all work to inhibit the experimental process - and must be taken into account when designing your tests. 

  • More expensive than other forms of testing. 

Quantitative research often seeks to gather large quantities of data points. While this is beneficial for the purposes of testing, the research does not come free. The grander the scope of your test and the more thorough you are in its methodology, the more likely it is that you will be spending a sizable portion of your marketing expenses on research alone. Polling and surveying, while affordable means of gathering quantitative data, cannot always generate the kind of quality results a research project necessitates. 

Key Takeaways 


Numerical data is a vital component of almost any research project. Quantitative data can provide meaningful insight into qualitative concerns. Focusing on the facts and figures enables researchers to duplicate tests later on, and create their own data sets.

To streamline your quantitative research process:

Have a plan. Tackling your research project with a clear and focused strategy will allow you to better address any errors or hiccups that might otherwise inhibit your testing. 

Define your audience. Create a clear picture of your target audience before you design your test. Understanding who you want to test beforehand gives you the ability to choose which methodology is going to be the right fit for them. 

Test, test, and test again. Verifying your results through repeated and thorough testing builds confidence in your decision making. It’s not only smart research practice - it’s good business.


13 Pros and Cons of Quantitative Research Methods

Quantitative research utilizes mathematical, statistical, and computational tools to derive results. This structure creates a conclusiveness to the purposes being studied as it quantifies problems to understand how prevalent they are.

It is through this process that the research creates a projectable result which applies to the larger general population.

Instead of providing a subjective overview like qualitative research offers, quantitative research identifies structured cause-and-effect relationships. Once the problem is identified by those involved in the study, the factors associated with the issue become possible to identify as well. Experiments and surveys are the primary tools of this research method to create specific results, even when independent or interdependent factors are present.

These are the quantitative research pros and cons to consider.

List of the Pros of Quantitative Research

1. Data collection occurs rapidly with quantitative research. Because the data points of quantitative research involve surveys, experiments, and real-time gathering, there are few delays in the collection of materials to examine. That means the information under study can be analyzed very quickly when compared to other research methods. The need to separate systems or identify variables is not as prevalent with this option either.

2. The samples of quantitative research are randomized. Quantitative research uses a randomized process to collect information, preventing bias from entering into the data. This randomness creates an additional advantage: the information supplied through this research can be statistically applied to the rest of the population group under study. Although some demographics could still be left out despite randomization, creating errors when the findings are applied to everyone, this research type makes it possible to glean relevant data in a fraction of the time that other methods require.

3. It offers reliable and repeatable information. Quantitative research validates itself by offering consistent results when the same data points are examined under randomized conditions. Although you may receive different percentages or slight variances in other results, repetitive information creates the foundation for certainty in future planning processes. Businesses can tailor their messages or programs based on these results to meet specific needs in their community. The statistics become a reliable resource which offer confidence to the decision-making process.

4. You can generalize your findings with quantitative research. The issue with other research types is that there is no generalization effect possible with the data points they gather. Quantitative information may offer an overview instead of specificity when looking at target groups, but that also makes it possible to identify core subjects, needs, or wants. Every finding developed through this method can go beyond the participant group to the overall demographic being looked at with this work. That makes it possible to identify trouble areas before difficulties have a chance to start.

5. The research is anonymous. Researchers often use quantitative data when looking at sensitive topics because of the anonymity involved. People are not required to identify themselves with specificity in the data collected. Even if surveys or interviews are distributed to each individual, their personal information does not make it onto the form. This setup reduces the risk of false results from participants who feel ashamed or uncomfortable discussing the subject.

6. You can perform the research remotely. Quantitative research does not require the participants to report to a specific location to collect the data. You can speak with individuals on the phone, conduct surveys online, or use other remote methods that allow for information to move from one party to the other. Although the number of questions you ask or their difficulty can influence how many people choose to participate, the only real cost factor to the participants involves their time. That can make this option a lot cheaper than other methods.

7. Information from a larger sample is used with quantitative research. Qualitative research must use small sample sizes because it requires in-depth data points to be collected by the researchers. This makes data collection time-consuming and reduces the number of people involved. The structure of quantitative research allows for broader studies, which enables better accuracy when attempting to create generalizations about the subject matter. There are also fewer variables that can skew the results, because you’re dealing with closed-ended information instead of open-ended questions.

List of the Cons of Quantitative Research

1. You cannot follow up on answers in quantitative research. Quantitative research has an important limitation: you cannot go back to participants after they’ve filled out a survey if there are more questions to ask. There is a limited chance to probe the answers offered in the research, which creates fewer data points to examine when compared to other methods. There is still the advantage of anonymity, but if a survey offers inconclusive or questionable results, there is no way to verify the validity of the data. If enough participants turn in similar answers, it could skew the data in a way that does not apply to the general population.

2. The characteristics of the participants may not apply to the general population. There is always a risk that the research collected using the quantitative method may not apply to the general population. It is easy to draw false correlations because the information seems to come from random sources. Despite the efforts to prevent bias, the characteristics of any randomized sample are not guaranteed to apply to everyone. That means the only certainty offered using this method is that the data applies to those who choose to participate.

3. You cannot determine if answers are true or not. Researchers using the quantitative method must operate on the assumption that all the answers provided to them through surveys, testing, and experimentation are based on a foundation of truth. There are no face-to-face contacts with this method, which means interviewers or researchers are unable to gauge the truthfulness or authenticity of each result.

A 2011 study published by Psychology Today looked at how often people lie in their daily lives. Participants were asked to talk about the number of lies they told in the past 24 hours. 40% of the sample group reported telling a lie, with the median being 1.65 lies told per day. Over 22% of the lies were told by just 1% of the sample. What would happen if the random sampling came from this 1% group?

4. There is a cost factor to consider with quantitative research. All research involves cost; there’s no getting around this fact. When looking at the price of experiments and research within the quantitative method, a single result might cost more than $100,000. Even conducting a focus group is costly, with just four groups of government or business participants requiring up to $60,000 for the work to be done. Most of the cost depends on the target audience you want to survey, what the objectives happen to be, and whether you can do the work online or over the phone.

5. You do not gain access to specific feedback details. Let’s say that you wanted to conduct quantitative research on a new toothpaste that you want to take to the market. This method allows you to explore a specific hypothesis (i.e., this toothpaste does a better job of cleaning teeth than this other product). You can use the statistics to create generalizations (i.e., 70% of people say this toothpaste cleans better, which means that is your potential customer base). What you don’t receive are specific feedback details that can help you refine the product. If no one likes the toothpaste because it tastes like how a skunk smells, that 70% who say it cleans better still won’t purchase the product.

6. It creates the potential for an unnatural environment. When carrying out quantitative research, the work is sometimes carried out in environments that are unnatural for the group. When this disadvantage occurs, the results will often differ from what would be discovered with real-world examples. That means researchers can still manipulate the results, even with randomized participants, because the work takes place in an environment that is conducive to the answers they want to receive.

These quantitative research pros and cons take a look at the value of the information collected vs. its authenticity and cost to collect. It is cheaper than other research methods, but with its limitations, this option is not always the best choice to make when looking for specific data points before making a critical decision.

Strengths and limitations


Quantitative method

Quantitative data are pieces of information that can be counted and which are usually gathered by surveys from large numbers of respondents randomly selected for inclusion. Secondary data such as census data, government statistics, and health system metrics are often included in quantitative research. Quantitative data are analysed using statistical methods. Quantitative approaches are best used to answer what, when, and who questions and are not well suited to how and why questions.

Strengths:
  • Findings can be generalised if the selection process is well-designed and the sample is representative of the study population
  • Relatively easy to analyse
  • Data can be very consistent, precise, and reliable

Limitations:
  • Related secondary data is sometimes not available, or accessing available data is difficult or impossible
  • Difficult to understand the context of a phenomenon
  • Data may not be robust enough to explain complex issues

Qualitative method

Qualitative data are usually gathered by observation, interviews, or focus groups, but may also be gathered from written documents and through case studies. In qualitative research there is less emphasis on counting the number of people who think or behave in certain ways and more emphasis on explaining why people think and behave in certain ways. Qualitative studies usually involve smaller numbers of participants, and the tools used include open-ended questionnaires and interview guides. This type of research is best used to answer how and why questions and is not well suited to generalisable what, when, and who questions.

Strengths:
  • Complement and refine quantitative data
  • Provide more detailed information to explain complex issues
  • Multiple methods for gathering data on sensitive subjects
  • Data collection is usually cost efficient

Limitations:
  • Findings usually cannot be generalised to the study population or community
  • More difficult to analyse; data don’t fit neatly into standard categories
  • Data collection is usually time consuming

Learn more about using quantitative and qualitative approaches in various study types in the next lesson.



Strengths and Weaknesses of Quantitative and Qualitative Research

Research plays a crucial role when it comes to achieving success in the world of business. Both quantitative and qualitative research matter when building marketing strategies. Data gained through quantitative research, such as demographics, consumer growth, and market trends, helps businesses and marketers test existing strategies and theories, while qualitative data, drawn from open-ended sources, helps them generate new ones. Organizations need both methods to run their business smoothly. By combining quantitative and qualitative research, you can get more objective insights from data and achieve more impactful results.

Let’s discover more about quantitative and qualitative research, including their strengths and weaknesses. But first, let’s understand what these two types of research are. Here we go…

What is Quantitative Research?

Quantitative research is a systematic investigation that focuses on quantifying relationships, behaviors, phenomena, or other variables by collecting and analyzing numerical data. This type of research is done to test hypotheses, measure outcomes, and identify patterns and trends. It gathers quantifiable data and employs statistical, computational, and mathematical techniques to provide accurate and reliable outcomes.

For this type of study, the researcher often collects statistically valid and authentic information by conducting online surveys, questionnaires, and online polls, in addition to using sampling methods. More often than not, this method is employed in fields such as the social sciences, economics, health, and marketing to obtain unbiased results. It helps in drawing valid conclusions and making informed decisions that can introduce transformative changes to society. Let’s now take a look at the strengths of this type of research.

Strengths of Quantitative Research

Now that you have understood what exactly quantitative research is, it’s time to look at the strengths of this research type. Here we go…

  • Validity and Credibility: This type of research provides statistically valid and authentic results to help you make informed decisions.
  • Objectivity: Data collection is structured, so researchers’ biases and preconceptions do not impact the findings.
  • Broader Perspective: It allows for generalization and conclusions about a broader population, which makes the research findings more impactful and useful.
  • Clear and Accurate Results: The analysis of statistical data is clear and accurate, which makes it easy to explain to a wider audience.
  • Forecasting: This type of study helps to forecast future trends, so the researcher can make more informed decisions.
  • Diversity: This research method allows researchers to collect quantifiable data from diverse sources such as online surveys, questionnaires, and more.
  • Versatility: This is a versatile research method that can be used across organizations that want to benefit from data-driven decisions.

Let’s now take a look at the weaknesses of this type of research…

Weaknesses of Quantitative Data

Although quantitative research is versatile and its findings are very impactful, it has some weaknesses that you should know. Look at the following pointers to know the weaknesses of quantitative research:

  • Detached from Real-Life Situations: Data collection is structured but often limited in nature; because it captures only quantifiable data, it is sometimes disconnected from real-life situations.
  • Does Not Identify Causes: The objective of quantitative research is to find correlations between different variables. Researchers are concerned with how much and how many, but often do not look into why something happens.
  • No New Ideas: Quantitative research aims to test hypotheses about existing concepts; it does not emphasize generating new ideas or exploring uncovered areas.
  • No Subjectivity: It does not take into account human experience; there is little place for human opinions and feelings in this type of research.
  • Time-Consuming: It uses larger sample sizes and complex data sets for analysis, so it can be more time-consuming than other research methods.
  • Complex: The rigorous design of the study requires a high level of expertise to draw findings.

Common Types of Quantitative Research Methods

Go over the following types of quantitative research methods:

1. Surveys


Researchers often conduct surveys to gather a huge data set that can be analyzed to identify patterns, relationships, and trends. For example, you can conduct online surveys to analyze customers’ experience with products or services. This analysis helps you identify customer satisfaction levels. Furthermore, you can discover areas of improvement or change.

2. Correlation Research

This is a non-experimental method. As the name suggests, it is used to discover a correlation between two variables, without letting extraneous variables intervene in the study. A positive correlation indicates that both variables move in the same direction, whereas a negative correlation indicates that they move in opposite directions. Furthermore, this type of research often uses existing data sources to analyze the dataset, so it is considered cost-effective.
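As a brief illustration with invented data, Pearson's correlation coefficient is one common way to quantify such a relationship: a value near +1 indicates the variables move together, and a value near -1 indicates they move in opposite directions.

```python
from scipy import stats

# Hypothetical data: weekly hours of ad exposure and number of purchases.
ad_hours = [1, 2, 3, 4, 5, 6, 7, 8]
purchases = [0, 1, 1, 2, 2, 3, 3, 4]

# Pearson's r ranges from -1 (perfect negative) to +1 (perfect positive).
r, p_value = stats.pearsonr(ad_hours, purchases)
print(f"r = {r:.2f}, p = {p_value:.3f}")
```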

3. Causal-Comparative Research


The causal-comparative method identifies cause-and-effect relationships between variables: one variable is dependent and another is independent. Some researchers consider it similar to experimental research; however, it is not a fully experimental method.

4. Experimental Research

The experimental research method, or true experimentation, applies scientific techniques to test the hypothesis of a study. It aims to measure how independent variables impact dependent variables, and it controls extraneous variables to ensure the validity of the research design.

5. Result Analysis


Take a look at the following two methods for analyzing the results of quantitative research:

Descriptive Analysis

This summarizes your dataset using measures such as the mean, median, and mode.

Inferential Analysis

As the name suggests, this involves drawing inferences about what the data means. Researchers commonly employ methods such as cross-tabulation, factor analysis, and t-tests.
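As a rough sketch with made-up survey responses, the example below builds a cross-tabulation of two categorical variables and follows it with a chi-square test of independence, one of several inferential tests that could be applied to such a table.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical survey responses.
responses = pd.DataFrame({
    "age_group": ["18-34", "18-34", "35-54", "35-54", "55+", "55+", "18-34", "55+"],
    "prefers_app": ["yes", "yes", "no", "yes", "no", "no", "yes", "no"],
})

# Cross-tabulation: counts of app preference broken down by age group.
table = pd.crosstab(responses["age_group"], responses["prefers_app"])
print(table)

# Chi-square test of independence between the two categorical variables.
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
```

With a real survey, the sample would need to be much larger than this toy example for the chi-square test's assumptions to hold.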

Let’s now take a look at the important pointers you need to keep in mind when constructing surveys.

Pointers to Keep in Mind While Constructing Surveys

Check out the following pointers to know how to administer a perfect survey for quantitative research:

  • The questions should be short and simple
  • You should avoid asking for misleading information
  • Images should be clear and legible
  • Grammar and spelling mistakes can make data quality poor. So, avoid them.

Why Is Quantitative Research Important to Marketing?

Take a look at the following details to know how quantitative research is important to marketing:

Real-Time Insights


Quantitative research helps researchers gather real-time statistical data on market trends, consumer choice patterns, and the organization’s performance. Based on that, researchers compute or calculate complex datasets to gain insights into various aspects. Finally, the research insights help organizations understand the impact of their strategies. They can use this information to reform their business plans.

Improve Marketing Strategies

By gaining real-time data analytics reports on online marketing strategies , they can boost their brand visibility. This helps them determine new strategies for driving organic traffic to the website.

Competitor Analysis

Numeric data analysis helps organizations track competitors’ performance. Based on the in-depth analysis, they can compare their marketing strategies and performance with competitors. This helps them understand what they can do to increase their brand awareness.

Objectivity

The quantitative research method provides leaders with objective data. They can easily communicate this data to their team members. Furthermore, this objective data helps team members understand in which direction they should proceed to yield better results.

It’s now time to move on to another very crucial research type, qualitative research, and understand it in detail. Here we go…

What is Qualitative Research?

Qualitative research is an exploratory method. It primarily focuses on understanding human behavior, customers’ experiences, and social phenomena. It involves detailed and in-depth analysis. Unlike quantitative research, which emphasizes numerical data and statistical analysis, qualitative research strives to discover the causes of a problem by examining non-numerical data. Its main emphasis is on why rather than what. Essentially, it is subjective in nature because it typically relies on human experiences.

It employs open-ended techniques, including interviews, observations, focus groups, content analysis, and more, to collect rich data. This approach allows researchers to gain a deeper understanding of the context, motivations, and perspectives of participants. This method allows participants to express their issues and opinions in their own words. Based on the data, the researcher analyzes their attitudes, interests, behaviors, and motivations. It is often employed in the fields of education, sociology, psychology, and anthropology. The study focuses on the intricate and subtle aspects of human experiences. It often employs smaller sample sizes to facilitate an in-depth analysis of a problem. By capturing rich, detailed data, qualitative research offers a comprehensive view of the subject matter, highlighting themes, patterns, and relationships that cannot be gathered using quantitative research methods.

Strengths of Qualitative Research

Here are some of the noteworthy strengths of qualitative research that you must be aware of. Take a look…

  • Data Collection: The qualitative research method is not restricted to pre-defined questions. It uses open-ended methods of data collection, and interviews and observations help the researcher gain a complete understanding of respondents. All in all, this method focuses on collecting rich and detailed data.
  • Novel Theories: This method allows researchers to generate new ideas and theories, which may run counter to conventional social beliefs and norms.
  • Can Be Quantified: Qualitative findings can be converted into numeric data, for example through coding, for easier comparison and understanding.
  • Can Combine with Quantitative Methods: The researcher can combine qualitative research with quantitative research to gain richer insights into the matter.
  • Flexibility: This type of research is more flexible than other forms of research and provides room for adaptability.
  • Contextual Understanding: The researcher gains a deeper understanding of the social and cultural contexts of participants, resulting in more impactful findings.

Weaknesses of Qualitative Research


  • Misleading Information: The researcher must adhere to rigorous standards when collecting and analyzing data; if they fail to do so, low-quality resources and expertise can lead to misleading results.
  • Cannot Be Generalized: It is challenging to draw broad conclusions and generalize the findings to a larger population using this research design.
  • Time-Consuming: In contrast to other research methods, qualitative research is time-consuming because it involves collecting data through multiple interviews and observations.
  • Less Valid: Because of the role of human experience and interpretation, qualitative findings are sometimes seen as less valid and less authentic.

Common Types of Qualitative Research Methods

Here are some common types of qualitative research methods to know:

One-on-one Interview

The one-on-one interview has emerged as one of the most popular qualitative research methods. It involves face-to-face or online interviews with participants and aims to understand and analyze the opinions, ideas, and experiences of the interviewee.

Focus Groups

This research method involves the researcher organizing a small discussion or interview with a group of participants. All of the participants need to discuss a specific topic under this method. The objective of this study is to gain an understanding of the beliefs and considerations of the participants regarding a particular topic.

Discussion Boards

Online discussion boards have replaced traditional discussion boards. Under this research method, researchers provide participants with a set of questions to encourage them to take part in the discussion. This is a highly efficient way to understand their perspectives, beliefs, and ideas in different situations.

Case Studies

The case study is yet another method used for qualitative research. It is primarily employed to gain in-depth information about a subject, and the subject can encompass a wide range of entities, including organizations, countries, events, or individuals. Many researchers view the case study method as highly explanatory.

Pictures and Videos


Pictures and videos are also used as qualitative research methods to understand human experience through image or video analysis. They enhance the richness of data by allowing participants to express themselves in a non-verbal way. Based on visual elements analysis, a researcher reveals insights into social, cultural, or psychological phenomena.

Record-Keeping or Logging

Under this research method, the researcher collects authentic and valid documents from various sources. Further, the information is used as data. The findings of this research method are considered valid and impactful.

Ethnographic Study

In an ethnographic study, the researcher acts as an active participant or observer to study participants in their natural settings. This allows them to understand the participants’ social context, culture, and behavior much better.

Observation Method

The observation method involves the researcher’s subjective interpretation in observing and analyzing the attributes and characteristics of a phenomenon. Data collection relies heavily on the researcher’s senses of sight, hearing, smell, and taste, and the researcher carefully documents and analyzes the entire event.

Result Analysis

Here are the two methods that researchers often employ for the result analysis of the qualitative research:

Deductive Analysis

Deductive analysis is often used to test existing theories, ideas, or beliefs. In qualitative research, deductive analysis is the process of applying predetermined codes to the data; the codes are often generated from literature, theory, or propositions that the researcher has developed. It is a structured method because it applies an already-decided coding framework.

Inductive Analysis

Inductive analysis builds up new theories based on specific observations or patterns. The basis of these theories is what has been seen and how it has been seen. Furthermore, it is a flexible analysis that is open to new information.

Some people claim that surveys can only be used in quantitative research. But this is not true. You can conduct surveys in qualitative research as well to make informed decisions.

Check out the following pointers to learn what you should keep in mind while constructing surveys:

  • Use appropriate language
  • Avoid unnecessary capitalization in words or phrases
  • Use the correct format of the questions
  • Make sure that multiple-answer questions do not have conflicting answer choices.

Why Is Qualitative Research Important to Marketing?

Qualitative research is ideal for marketing because it helps organizations acquire trustworthy information regarding their consumers’ changing demands, preferences, or tastes. Go over the following pointers to understand why qualitative research is important in today’s marketing scenario. Take a look…

Build Strategies


In this era of cut-throat competition, knowing your customers is crucial, because only with that information can you make the right marketing decisions. Qualitative research helps organizations understand customers’ preferences and needs. Information gathered using qualitative research methods helps businesses build new strategies to enhance the customer experience. Strategies designed using this research data help businesses attract their target customers and improve their bottom lines.

Rebrand Products and Services

Often, researchers find this method very helpful. The information gathered using qualitative research helps businesses rebrand their products and services. Based on the results of the research, they come to know what their products and services lack. Also, they can determine what they can do to improve their products and services to attract their target customers.

Prevents the Risk of Customer Churn

Customer churn happens when a customer stops using a company’s products or services. Qualitative research findings help companies understand their customers’ experience with their products and learn what consumers want from their products or services, which reduces the risk of churn to a great extent.

Get Feedback from Customers

This method helps organizations get feedback on their products or services from customers. Analyzing this feedback goes a long way toward accelerating the organization’s growth.

The Bottom Line

So, this is all about the strengths and weaknesses of quantitative and qualitative research methodologies. Both quantitative and qualitative research methods showcase unique strengths, making them ideal for collecting data for different sectors. However, both methods do exhibit some weaknesses as well. Quantitative research excels in providing precise, measurable, and generalizable data through statistical analysis, while qualitative research offers rich, detailed insights into participants’ experiences, emotions, and social interactions. Quantitative research is considered best for testing hypotheses and identifying patterns across large populations.

At the same time, qualitative research is considered ideal for gaining a deeper understanding of underlying motivations and meanings. Quantitative research methodologies have a structured approach; however, they often miss the complexities of human behavior and context, which is not the case with qualitative research methods. You can choose either research method based on the industry you are serving, or you can combine both approaches to enjoy the benefits of both.

Thanks for reading!

Stay tuned for more such insightful posts!



Strengths and Weaknesses of Quantitative and Qualitative Research

Insights from research, walking in your customers’ shoes.

Both qualitative and quantitative methods of user research play important roles in product development. Data from quantitative research—such as market size, demographics, and user preferences—provides important information for business decisions. Qualitative research provides valuable data for use in the design of a product—including data about user needs, behavior patterns, and use cases. Each of these approaches has strengths and weaknesses, and each can benefit from being combined with the other. This month, we’ll take a look at these two approaches to user research and discuss how and when to apply them.

Quantitative Studies

Quantitative studies provide data that can be expressed in numbers—thus, their name. Because the data is in a numeric form, we can apply statistical tests in making statements about the data. These include descriptive statistics like the mean, median, and standard deviation, but can also include inferential statistics like t-tests, ANOVAs, or multiple regression correlations (MRC). Statistical analysis lets us derive important facts from research data, including preference trends, differences between groups, and demographics.
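To make the distinction concrete, here is a minimal sketch, with invented task-time numbers, of descriptive statistics (mean, median, standard deviation) and one inferential test (an independent-samples t-test) applied to two hypothetical design variants. The data and variable names are illustrative only, not from any study described here.

```python
# Minimal sketch: descriptive and inferential statistics on hypothetical data.
# The task-time numbers below are invented for illustration only.
import statistics
from scipy import stats

design_a = [34.1, 29.8, 41.2, 38.5, 30.9, 36.7, 33.4, 39.0]  # seconds on task
design_b = [44.3, 40.1, 47.8, 39.6, 45.2, 42.9, 48.5, 41.7]

# Descriptive statistics: summarise each group.
for name, data in [("A", design_a), ("B", design_b)]:
    print(name,
          round(statistics.mean(data), 1),
          round(statistics.median(data), 1),
          round(statistics.stdev(data), 1))

# Inferential statistics: an independent-samples t-test asks whether the
# difference between the two group means is unlikely to be due to chance.
t_stat, p_value = stats.ttest_ind(design_a, design_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```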

Multivariate statistics like the MRC or stepwise correlation regression break the data down even further and determine what factors—such as variances in preferences—we can attribute to differences between specific groups such as age groups. Quantitative studies often employ automated means of collecting data such as surveys, but we can also use other static methods—for example, examining preferences through two-alternative, forced-choice studies or examining error rates and time on task using competitive benchmarks.

Quantitative studies’ great strength is providing data that is descriptive—for example, allowing us to capture a snapshot of a user population—but we encounter difficulties when it comes to their interpretation. For example, Gallup polls commonly provide data about approval rates for the President of the United States, as shown in Figure 1, but don’t provide the crucial information that we would need to interpret that data.

Figure 1: Quantitative data for Gallup’s presidential approval poll

In the absence of the data that would be necessary to interpret these presidential job-approval numbers, it’s difficult to say why people approve or disapprove of the job that President Obama is doing. Some respondents may feel that President Obama is too liberal, while others may feel that he is too conservative in his actions, but without the necessary data, there is no way to tell.

In a product-development environment, this data deficiency can lead to critical errors in the design of a product. For example, a survey might report that the majority of users like 3D displays, which may lead to a product team’s choosing to integrate a 3D display into their product. However, if most users like only autostereoscopic 3D displays—that is, 3D displays that don’t require their wearing glasses—or like 3D displays only for watching sports or action movies on a television, using a 3D display that requires glasses for data visualization on a mobile device might not be a sound design direction.

Basically, statistical significance tells you how likely it is that your findings are real rather than due to chance, while effect size tells you how much they matter. For example, if you were investigating whether adding a feature would increase a product’s value, you could have a statistically significant finding, but the magnitude of the increase in value might be very small—say, a few cents. In contrast, a meaningful effect size might be an increase in value of $10 per unit. Typically, if you are able to achieve statistical significance with a smaller sample size, the effect size is fairly substantial. It is important to take both statistical significance and effect size into account when interpreting your data.
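As a rough illustration (not taken from the article), the following sketch contrasts a p-value with Cohen’s d, a common effect-size measure, using made-up per-unit value ratings for a product with and without a proposed feature. All numbers and names are hypothetical.

```python
# Sketch: statistical significance vs effect size, on invented ratings data.
import math
import statistics
from scipy import stats

without_feature = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1]
with_feature = [10.0, 10.3, 10.1, 10.2, 10.4, 10.1, 9.9, 10.3]

# Significance: probability of a difference at least this large if there
# were no real effect.
t_stat, p_value = stats.ttest_ind(with_feature, without_feature)

# Effect size (Cohen's d): the mean difference in units of pooled standard deviation.
n1, n2 = len(with_feature), len(without_feature)
s1, s2 = statistics.stdev(with_feature), statistics.stdev(without_feature)
pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
cohens_d = (statistics.mean(with_feature) - statistics.mean(without_feature)) / pooled_sd

print(f"p = {p_value:.3f}  (is the difference unlikely to be chance?)")
print(f"d = {cohens_d:.2f}  (how large is the difference in practical terms?)")
```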

Qualitative Studies

Data from qualitative studies describes the qualities or characteristics of something. You cannot easily reduce these descriptions to numbers—as you can the findings from quantitative research—though you can achieve this through a coding process. Qualitative research studies can provide you with details about human behavior, emotion, and personality characteristics that quantitative studies cannot match. Qualitative data includes information about user behaviors, needs, desires, routines, use cases, and a variety of other information that is essential in designing a product that will actually fit into a user’s life.

While quantitative research requires the standardization of data collection to allow statistical comparison, qualitative research requires flexibility, allowing you to respond to user data as it emerges during a session. Thus, qualitative research usually takes the form of either some form of naturalistic observation such as ethnography or structured interviews. In this case, a researcher must observe and document behaviors, opinions, patterns, needs, pain points, and other types of information without yet fully understanding what data will be meaningful.

Following data collection, rather than performing a statistical analysis, researchers look for trends in the data. When it comes to identifying trends, researchers look for statements that are identical across different research participants. The rule of thumb is that hearing a statement from just one participant is an anecdote; from two, a coincidence; and hearing it from three makes it a trend. The trends that you identify can then guide product development, business decisions, and marketing strategies.
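As a loose illustration of that rule of thumb, the sketch below counts hypothetical codes assigned to interview transcripts and flags any code mentioned by three or more participants. The participant IDs and code labels are invented for the example.

```python
# Sketch of the rule of thumb above: a code mentioned by one participant is an
# anecdote, by two a coincidence, by three or more a candidate trend.
from collections import Counter

session_codes = {
    "P1": {"wants offline mode", "confused by settings", "likes dark theme"},
    "P2": {"wants offline mode", "likes dark theme"},
    "P3": {"wants offline mode", "confused by settings"},
    "P4": {"likes dark theme"},
    "P5": {"wants offline mode"},
}

# Count how many participants mentioned each code.
counts = Counter(code for codes in session_codes.values() for code in codes)
trends = [code for code, n in counts.items() if n >= 3]
print("Candidate trends:", trends)
```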

Because you cannot subject these trends to statistical analysis, you cannot validate trends by calculating a p-value or an effect size—as you could validate quantitative data—so you must employ them with care. Plus, you should continually verify such data through an ongoing qualitative research program.

Additionally, because it is not possible to automate qualitative-data collection as effectively as you can automate quantitative-data collection, it is usually extremely time consuming and expensive to gather large amounts of data, as would be typical for quantitative research studies. Therefore, it is usual to perform qualitative research with only 6 to 12 participants, while for quantitative research, it’s common for there to be hundreds or even thousands of participants. As a result, qualitative research tends to have less statistical power than quantitative research when it comes to discovering and verifying trends.

Using Quantitative and Qualitative Research Together

While quantitative and qualitative research approaches each have their strengths and weaknesses, they can be extremely effective in combination with one another. You can use qualitative research to identify the factors that affect the areas under investigation, then use that information to devise quantitative research that assesses how these factors would affect user preferences. To continue our earlier example regarding display preferences: if qualitative research had identified display type—such as TV, computer monitor, or mobile phone display—as a relevant factor, the researchers could have used that information to construct quantitative research that would let them determine how these variables might affect user preferences. At the same time, you can build trends that you’ve identified through quantitative research into qualitative data-collection methods and thus verify the trends.

While this might sound contrary to what we’ve described above, the approach is actually quite straightforward. An example of a qualitative trend might be that younger users prefer autostereoscopic displays only on mobile devices, while older users prefer traditional displays on all devices. You may have discovered this by asking an open-ended, qualitative question along these lines: “What do you think of 3D displays?” This question would have opened up a discussion about 3D displays that uncovered a difference between stereoscopic displays, autostereoscopic displays, and traditional displays. In a subsequent quantitative study, you could address these factors through a series of questions such as: “Rate your level of preference for a traditional 3D display—which requires your using 3D glasses—on a mobile device,” with options ranging from strongly prefer to strongly dislike. An automated system assigns a numeric value to whatever option a participant chooses, allowing a researcher to quickly gather and analyze large amounts of data.
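A minimal sketch of that final step, assuming a simple five-point mapping from response labels to numbers (the labels and responses below are invented):

```python
# Sketch: mapping Likert-style options to numbers so that large volumes of
# quantitative survey responses can be summarised automatically.
LIKERT = {
    "strongly prefer": 5, "prefer": 4, "neutral": 3,
    "dislike": 2, "strongly dislike": 1,
}

responses = ["prefer", "strongly dislike", "neutral", "prefer",
             "strongly prefer", "dislike", "neutral", "prefer"]

scores = [LIKERT[r] for r in responses]
mean_score = sum(scores) / len(scores)
print(f"Mean preference score: {mean_score:.2f} on a 1-5 scale")
```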

37 Comments

The quantitative approach is so vital, even in our daily lives, because in most, if not all things we do in life, we measure to see how much there is of something.
Quantitative method is part of our daily life, even from birth, data are constantly being collected, assessed, and re-assessed as we grow.
I also support the quantitative data because it is much used and almost whatever we do involves it.
Yes. Both quantitative and qualitative research are important on their own. It depends on the situation where a researcher conducts a particular research, or he can go for the mixed method, too. For now, I am in need of sampling and non-sampling errors. Please help me understand its applications and the ways that can be checked? Types of sampling and all related information on this chapter. Expecting someone will help me on this soon.
Quantitative data provides the facts, but facts about people are just another construct of our society. For example, is something luxurious because it’s expensive or is it expensive because it’s luxurious? Business understands that neither method should be relied upon exclusively, which is why they use both. Anyone who thinks this is a competition between the two methods to somehow win out needs to read the article again. If you want to find out what happens when you think the only tool you need to make decisions in the social world is statistics, just type ‘New Coke’ into Google.
I also think that the quantitative approach is more important than the qualitative approach because we use it more and more in our life time.
I would suggest using both quantitative and qualitative. Both are strong ways of getting information and hearing the views and suggestions of others. It would be wiser to go for a mixed research method.
This quantitative approach is the approach used to show the transparency that at the end shows the democracy in the Great lakes countries. Thanks
Both methods are useful in real life situations. Which to use depends on the situation, and it’s not bad to combine both methods as this gives better and more accurate results.
Quantitative research requires high levels of statistical understanding to enable the measurements of descriptive and inferential statistics to be computed and interpreted, whereas qualitative methods are critical to identifying gaps in underserved areas in the society. More significantly, the use of a combination of the two is perfect.
I am more confused when a particular method is considered superior over the other. I am more at ease looking at all three methods as situational—in that, some decision making requires the use of a quantitative, qualitative, or mixed method to accomplish my goals. For instance, it is suitable to use the quantitative method in studying birth and death rates in Europe and Africa, whereas the qualitative method suits a study on students’ behaviour relating to a particular course of study.
I think both qualitative and quantitative are good to go by, because the demerits of one are settled by the merits of the other.
The lapses that one has are covered by the other, so I think, for better findings and more accurate results, a mixed method answers it all.
Wonderfully great to me
Good article, provides a good general overview. As a marketing-research consultant I want to stress that qualitative research helps you much more to collect insights for user stories—if you do SCRUM—get the reasons why that make you differ and not differ from competitors and that would allow you to positively stand out in the market. Quantification is great. I love the stats, measurements. Yet my clients get great stuff out of qual that quant could never deliver because it is tool for specific purposes—as qual is. If you have both in your toolbox and know how to handle them, you get a better product. Use them and use them wisely, know the strengths and weaknesses of both—or get someone who does—because your competitor might just do it right now.
Both methods play an equal role, especially in research, and may also influence each other. This will depend on time and the necessity for each method.
Both methods are relevant because they drive individuals to the same conclusions.
“On the other hand, if you achieve statistical significance with a small sample size, you don’t need to increase your sample size; the finding is true regardless.” This is not true! A significance level set to 0.05 (5%), implies that it is acceptable to have a 5% probability of incorrectly rejecting the null hypothesis. That is, one might observe statistical significance, regardless of sample size, but this may be a false positive—that is, the effect occurs by chance or due to the co-occurrence of other factors. Low statistical power—because of small sample sizes, small effects, or both—negatively affects the likelihood that a nominally statistically significant finding—that is, finding of a p-value of ~.05—actually reflects a true effect. See this example . In general, one should be cautious about making inferences based on results drawn from a small sample.
It must be remembered that the two methods are not competing. They complement each other. Employing both techniques is the surest way to get your research budget well spent.
Minini, Faith Harrison—In my opinion, all three research approaches—quantitative, qualitative, and mixed methods—are very useful in informing UX practice. However, I prefer qualitative research for the reasons that studies are cheaper to embark on and the means of data collection and analysis are less stressful. However, employing both research approaches in any given study—especially studies involving large populations in countries’ health issues—provides the best results.
Thanks for the article. Both methods are useful, but it depends on the goal of the research.
I think qualitative research is best because it involves face-to-face conversation with the respondents. It gives true and reliable data as compared to quantitative research, because those researchers obtain data only from a given source and quantify it.
I need the advantages and disadvantages of using the T-test data collection method for the United States Parcel Service about their competition. I am not sure which is better for this, t-test or not, since t-test deals in small samples whereas UPS is global. I still have to know some disadvantages and advantages though.
I think qualitative research gives you detailed information and really goes into knowing much about a phenomenon, unlike quantitative research, which gives you statistics.
I think a qualitative approach is more imperative. It provides greater richness and more detailed information about a smaller number of people.
I think qualitative research is easier to make meaning from, as it simplifies the phenomena by giving details on the issues.
I beg to differ from most comments. I support qualitative research because of the quality of its results.
Good, indeed.
I now understand the concept of quantitative research. Thanks for your contribution.
This concept of quantitative research is good. Nice write-up. You can as well make a video of this and place it on Netflix for people to watch.
“While quantitative and qualitative research approaches each have their strengths and weaknesses, they can be extremely effective in combination with one another.” - very insightful and so true! Thanks for posting this post, it was, indeed, a very interesting read. However, I, personally, prefer the quantitative approach. It can provide a person with a higher quality of the result.
For the ultimate quality of both methods, a foolproof system has to be found to eliminate biases. It is almost impossible. This is the basic problem that has to be solved.
I think both qualitative and quantitative approaches are vital. The approach that the researcher will adopt should be informed by the research question that the researcher is trying to resolve.


Understanding quantitative research evidence

What is quantitative research?

There are two main types of research study: quantitative and qualitative (though some studies use a mixture of these methods).

Quantitative research deals with numbers and measurement and will usually use statistical analysis to draw conclusions. There are two main types of quantitative research: Randomised Controlled Trials (RCTs) and various kinds of trials using observational data (sometimes called population or cohort studies). The section below explains the differences between these two types of study, and their strengths and weaknesses.

The results of quantitative research are often quoted as the Relative Risk, which tells you how much more common a problem is in one group than in another. In some types of research, particularly population studies (see below), you will see a similar measure called the Odds Ratio. However, for someone trying to decide about a treatment or medical intervention it is often more useful to look at the Absolute Risk. This is the chance of a problem occurring, usually expressed as one in 100, one in 1,000 and so on.
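As a worked illustration with invented numbers, the sketch below computes the absolute risk in two groups, the relative risk, and the odds ratio, showing how a relative risk of 2 can coexist with small absolute risks:

```python
# Worked example (invented numbers): 1,000 people in each of two groups,
# with 30 adverse outcomes in group A and 15 in group B.
events_a, total_a = 30, 1000
events_b, total_b = 15, 1000

absolute_risk_a = events_a / total_a      # 0.03, i.e. 30 in 1,000
absolute_risk_b = events_b / total_b      # 0.015, i.e. 15 in 1,000

relative_risk = absolute_risk_a / absolute_risk_b   # 2.0: twice as common in A

odds_a = events_a / (total_a - events_a)
odds_b = events_b / (total_b - events_b)
odds_ratio = odds_a / odds_b              # close to the RR when events are rare

print(relative_risk, round(odds_ratio, 2), absolute_risk_a, absolute_risk_b)
```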

Qualitative research involves exploring people’s opinions, experiences and preferences in depth, usually through interviews, group discussions or questionnaires.

This article is mainly about the use and limitations of different types of quantitative research study. For more about understanding the research relating to pregnancy and birth see our book AIMS Guide to Safety In Childbirth (principal author Gemma McKenzie)

What are the limitations of quantitative research?

Most of the evidence that is used to make recommendations about maternity care comes from quantitative research. This gives us evidence to compare the outcomes from different medical interventions or approaches to maternity care. Unfortunately, evidence is often not clear-cut or may be lacking altogether, and sometimes the research that has been done is of poor quality.

In thinking about what research evidence can tell us, it’s also important to keep the following limitations in mind.

Definition of the study group

In any kind of research, grouping people together according to one characteristic, such as age, BMI or the fact that they conceived through IVF, ignores the fact that there could be important differences between the individuals in a group. For example, pregnancy and birth outcomes might be very different for someone over 40 who has a healthy lifestyle, is completely well and has had a straightforward pregnancy, compared to one who has existing health problems, or who smokes or drinks heavily, yet recommendations may be made based purely on their age.

Publication and reporting bias

Selective reporting, in other words the failure to publish the results of some studies, may occur because journals only accept articles about research with interesting findings, or findings that support the status quo.

Another issue is what authors choose to report from their research. They may decide not to publish at all, or only report the findings they wanted to prove, or only report some of the outcomes that they measured. Sometimes, if the study did not show the results the researchers were hoping it would, they will pick on some other finding to report, even if that wasn’t an original aim of the study, or claim that there was “a trend towards” a finding even if it was not statistically significant (see below). It is often necessary to dig into the detail of a study to see whether the headline results are supported by the evidence or not.

Short term outcomes

Quantitative research usually only looks at outcomes that can be measured in the short term. Occasionally there will be follow-up studies that seek to understand the long-term consequences of a medical intervention, but these are the exception. This means that we often don’t know about all the risks, as those that arise after the study ends will not have been recorded.

Focus on selected benefits and risks

Studies are only able to focus on a small number of outcomes, so may not be able to provide information about other risks and benefits which you would want to know about when making a decision. This is partly because of practical considerations, but also because including a very large number of outcomes in a study makes the conclusions less reliable, as it is much more likely that something will be found that is a chance finding rather than a real effect.

Lack of views of study participants

Only rarely do the researchers carrying out a clinical study ask how the people receiving the care felt about it. Even when they do, the information is usually limited to something that can be quantified, such as asking ‘how satisfied’ they were with the care.

What is a randomised controlled trial?

The kind of research that is usually considered to be the ’gold standard’ is a Randomised Controlled Trial (RCT). This is where a group of people is randomly divided into two or more groups, each of which receives a different treatment or type of care. The fact that the allocation into groups is random helps to ensure that the groups contain a similar mix of people. That way any difference in outcomes is likely to be due to the treatment or care received, rather than to the groups including more or fewer people with certain characteristics.
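A minimal sketch of the random-allocation idea, with placeholder participant IDs (a real trial would follow a pre-specified randomisation procedure, usually with allocation concealment):

```python
# Sketch: random allocation of participants to two arms of a trial.
import random

participants = [f"participant_{i:03d}" for i in range(1, 101)]
random.shuffle(participants)                 # put participants in a random order
midpoint = len(participants) // 2
intervention_group = participants[:midpoint]
control_group = participants[midpoint:]

print(len(intervention_group), len(control_group))  # two similar-sized groups
```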

RCTs can work very well if it is a case of comparing something like the effectiveness of two drugs but are more problematic when researching something as complex as pregnancy and labour. There are also limitations on how reliable the findings of any RCT can be, as discussed below.

As a result of these problems it is often the case that too few large and well-conducted RCTs have been done to allow any meaningful conclusions to be drawn about what care is best. In some cases, no good studies have been done at all.

The findings of even the best RCT will be limited to answering a specific question about a particular treatment for a particular group of people (and often in a particular healthcare setting such as a hospital). It won’t be able to tell us everything that we might want to know about that treatment for other people or in other settings, or where additional factors are involved.

Blinding of participants and researchers

Each of the groups in an RCT receives a different treatment, so that the outcomes can be compared. Ideally, such trials would be ’blinded’ which means that neither the person nor those conducting the trial know which group an individual is in.

The problem with RCTs looking at care in pregnancy and labour is that blinding isn’t possible. Knowing which group a pregnant woman or person is in may affect both their level of anxiety (which itself can affect the outcomes) and the behaviour of their doctors and midwives, resulting in unconscious bias. For example, a doctor may believe that waiting for labour to start is riskier than inducing it after a set number of weeks of pregnancy. If they are caring for a mother in the group that waited for labour to begin, the doctor may feel they need to intervene if they notice signs that would not normally cause them concern. This could affect the frequency of unnecessary caesareans and assisted births. This and other issues are discussed in the article “Routine induction of labour at 41 weeks gestation: nonsensus consensus” 1

Other issues of Bias

The results of an RCT can also be misleading if the study was carried out in a way that made it biased. For example, there might be important differences between the groups if

  • the way in which people were allocated was not truly random
  • a lot more people from one group dropped out during the study for some reason
  • if there was a lot of cross-over.

Cross-over is where high numbers of people end up having the opposite treatment or care to the one they were intended to have. Some cross-over in an RCT is expected. For example, in an RCT on planned caesareans it's very likely that some women allocated to a planned caesarean will go into labour before it can be done, and some who are in the planned vaginal birth group will decide to have a caesarean because a concern has arisen before their labour started. For the results to remain valid this cross-over needs to remain low.

In research on induction of labour it is quite common for a high proportion of those allocated to the expectant management (waiting for labour) group to have their labours induced because they have reached a pre-set deadline for the birth to take place or there is a concern over their or their baby's well-being. Similarly, some of those allocated to the induction group may go into labour before the induction is started. This can reduce the reliability of the findings.

Low recruitment rates

It is often not possible to recruit a large enough sample to be able to measure a difference in outcomes. For example, to detect a difference in very rare occurrences such as stillbirth it has been estimated that it would be necessary to include between 16,000 and 30,000 pregnancies in the trial, and this is not usually possible in practice. 2

What is a meta-analysis?

One way around the problem of recruiting a large enough sample is a ’meta-analysis', a type of review which combines the data from multiple studies and uses statistical methods to analyse it. Effectively, a group of studies are analysed as if they were all part of one big study, but there are problems with this approach.
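As a simplified illustration only, the sketch below pools relative risks from three invented studies using fixed-effect inverse-variance weighting. Real meta-analyses also assess heterogeneity, risk of bias, and study quality, and often use random-effects models instead.

```python
# Sketch: fixed-effect, inverse-variance pooling of relative risks from
# three hypothetical trials (all numbers invented).
import math

# (relative risk, standard error of log RR) for each invented study
studies = [(1.8, 0.40), (1.3, 0.25), (1.6, 0.30)]

weights = [1 / se**2 for _, se in studies]
pooled_log_rr = sum(w * math.log(rr) for (rr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

pooled_rr = math.exp(pooled_log_rr)
ci_low = math.exp(pooled_log_rr - 1.96 * pooled_se)
ci_high = math.exp(pooled_log_rr + 1.96 * pooled_se)
print(f"Pooled RR {pooled_rr:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```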

A meta-analysis can only be as good as the trials that go into it, and the results can vary according to which trials the authors choose to include. It is also difficult for a meta-analysis to compare the results of RCTs if they were carried out in different ways or using different methods, or if some important outcomes were not reported in all the studies.

The authors of such reviews will usually comment on the quality of the studies that they have selected for inclusion, and on the overall quality of the evidence available. The typical rating is High/Moderate/Low/Very Low where ‘High’ means that the authors are very confident of having detected a real effect and ‘Very Low’ means they are not at all confident. 3

One of the best-known sources of meta-analyses for all kinds of medical questions is the global Cochrane network. Their approach is explained here www.cochrane.org/about-us .

What are observational or population studies?

These are studies which observe how the outcomes in real life situations differ between groups defined by one or more characteristics or by a difference in the treatment that they receive. They are also sometimes called cohort studies. There is no random allocation of people to the different groups in this type of study, so they are usually considered to provide poorer quality evidence than an RCT. Nevertheless, they can provide useful information, especially on subjects where large, good-quality RCTs are lacking.

Mostly these are retrospective studies which look back at the records of a population, often over a period of years, and try to identify whether there were certain groups who were more likely than others to experience a given outcome. Alternatively, the investigators may look at how outcomes differed before and after a change in standard care procedures or else compare outcomes according to differences in characteristics such as age or the presence of a health condition.

There are also prospective studies which define the groups to be studied at the start of the research and then follow up what happens to those who fall within these groups, for example, those that do and don’t have their labours induced.

The advantages of this type of study are that they can often involve larger samples than RCTs are able to recruit, and they are looking at what happened in real life. In some cases, especially where it would be impractical or unethical to do an RCT they may provide the only evidence we have.

The disadvantages are that the records that are used are often incomplete, and because there is no randomisation there could be important differences between the groups or in the ways in which their labours were managed which are often not identified but could have a major impact on the results.

What is “Statistical significance” and why does it matter?

Studies can report their findings in different ways, but the authors should always use some sort of statistical analysis to check the probability that their results reflect a real difference. If it is highly unlikely that the finding occurred by chance, this is usually described as being a “statistically significant” result. Note that this is a technical use of the word “significant”. It is not saying anything about the relevance or importance of the finding.

Most medical studies use a significance level of 5%. Strictly, this means that if there were no real effect, there would still be up to a one in 20 (5%) chance of the study finding a difference of that size purely by chance. It does not mean that there is a 95% chance the finding is real. Because such chance findings do occur, a recommendation that is based on just one result in one study may not be reliable.

It is usual to report the 95% confidence intervals for a research finding. This tells you the range of values within which there is a 95% chance that the true value lies. In other words, if an RCT finds that something is 2.5 times more common in one situation than in another, it might report this Relative Risk (RR) as RR 2.5 (95% CI 1.6 to 3.2). That means that the most likely value for the RR is 2.5, and there is a 95% chance that the true value lies somewhere between 1.6 and 3.2. This means we can be fairly confident that the outcome being measured really is more common in one situation than in the other. We can also be fairly confident that it is between 1.6 and 3.2 times more common. However, there is a 5% chance that the true value is outside these limits (meaning that there is a 2.5% chance (one in 40) that the RR is less than 1.6, and a 2.5% chance that it is greater than 3.2).

If both the upper and lower confidence intervals are less than one, then this also indicates that the result is statistically significant, but that there is a reduction rather than an increase in the risk.

For a result to be statistically significant both figures must be greater than one or both less than one. This indicates that it is likely that the study has identified a real effect. If the lower number is less than one (which implies that the effect is to reduce the risk) and the upper figure is greater than one (which implies that it increases the risk) the result is not statistically significant. The study does not show whether the risk is reduced or increased in one situation compared to the other.

When the numbers in the confidence interval are very different (but either both greater than one or both less than one) this is referred to as having a wide confidence interval. This means we can be confident that there is an effect, but not very sure how big the effect is. We can still say what the most likely value is, but the true value could be very different.
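To tie these ideas together, here is a small sketch with invented counts that computes a relative risk, its 95% confidence interval, and then checks significance by asking whether the interval crosses 1:

```python
# Sketch (invented counts): relative risk and 95% confidence interval from a
# 2x2 table, with significance read off from whether the interval crosses 1.
import math

events_a, total_a = 40, 500    # outcome occurred / group size, group A
events_b, total_b = 16, 500    # group B

risk_a, risk_b = events_a / total_a, events_b / total_b
rr = risk_a / risk_b

# Standard error of log(RR) for a 2x2 table
se_log_rr = math.sqrt(1/events_a - 1/total_a + 1/events_b - 1/total_b)
ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)

significant = (ci_low > 1 and ci_high > 1) or (ci_low < 1 and ci_high < 1)
print(f"RR {rr:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f}), significant: {significant}")
```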

Sometimes when a study did not produce the result that was expected (or hoped for) authors may make more of non-significant results than they should. The sort of phrases to watch out for are things like “there was a trend towards x” or “the findings were borderline significant” or “there was an increased/decreased chance of x, but this did not achieve significance”. What all these phrases mean is that we don’t know whether the effect is real or not, and there is a high chance that the effect was not real, because the finding was not statistically significant.

Critiquing research

Research papers, especially if they are likely to have a major impact on clinical practice, will often be ‘critiqued’ by other experts in the field. This means reviewing the strengths and limitations of the research and deciding whether the conclusions can be relied on. A critique of an RCT will usually include analysis of things like:

  • Whether the research question was clearly defined, and appropriate outcomes reported
  • How good the random allocation of people to the different groups was, and whether it resulted in groups with similar characteristics (e.g. age, education level, etc.)
  • Whether there were any differences in the care given to each group, apart from the treatment being investigated
  • How much cross-over there was between groups
  • Whether a large proportion of those enrolled in the study dropped out along the way
  • The size of effect, statistical significance and confidence intervals for each outcome reported

If you want to get a feel for how reliable a piece of research is, it is worth looking to see whether a critique has been published. Examples of this include “Routine induction of labour at 41 weeks: nonsensus consensus” 1 and “Parsing the ARRIVE Trial: Should First-Time Parents Be Routinely Induced at 39 Weeks?” 4

An example of tools to use if you want to try critiquing different types of research yourself can be found here casp-uk.net/casp-tools-checklists

  1. Menticoglou S.M. and Hall S.F. “Routine induction of labour at 41 weeks: nonsensus consensus” BJOG 109 (5) May 2002 pp485-491 obgyn.onlinelibrary.wiley.com/doi/abs/10.1111/j.1471-0528.2002.01004.x
  2. Mandruzzato G. et al “Guidelines for the management of postterm pregnancy.” J Perinat Med. 2010 Mar;38(2):111-9 www.ncbi.nlm.nih.gov/pubmed/20156009/
  3. Siemieniuk R. and Guyatt G. “What is GRADE?” BMJ Best Practice bestpractice.bmj.com/info/toolkit/learn-ebm/what-is-grade/
  4. Goer H. “Parsing the ARRIVE Trial: Should First-Time Parents Be Routinely Induced at 39 Weeks?” 2018 www.lamaze.org/Connecting-the-Dots/parsing-the-arrive-trial-should-first-time-parents-be-routinely-induced-at-39-weeks

Written by: Nadia Higson Reviewed by: Debbie Chippington Derrick Reviewed on: 13/12/2023 Next review needed: 13/12/2025



Quantitative research: Definition, characteristics, benefits, limitations, and best practices


Researchers use different research methods because research is carried out for various purposes. Two main forms of research, qualitative and quantitative, are widely used in different fields. While qualitative research involves non-numeric data, quantitative research is the opposite and uses numeric data. Although quantitative data may not offer deeper insights into an issue, it is the better choice in some instances, especially if you need to collect data from a large sample group. Quantitative research is used in various fields, including sociology, politics, psychology, healthcare, education, economics, and marketing.

Earl R. Babbie notes: "Quantitative methods emphasize objective measurements and the statistical, mathematical, or numerical analysis of data collected through polls, questionnaires, and surveys, or by manipulating pre-existing statistical data using computational techniques. Quantitative research focuses on gathering numerical data and generalizing it across groups of people or to explain a particular phenomenon."

Below are some of the characteristics of quantitative research.

Large sample size

The ability to use larger sample sizes is undoubtedly one of the biggest perks of quantitative research.

Measurability

Due to its quantitative nature, the data gathered through quantitative data collection methods is easily measurable.

Close-ended questions

Quantitative research utilizes close-ended questions, which can be both beneficial and disadvantageous.

Reusability

Since it doesn't involve open-ended questions, quantitative research results can be used in other similar research projects.

Reliability

Quantitative data is considered more reliable since it is usually free of researcher bias.

Generalization

Quantitative research uses larger sample sizes, so it is assumed that it can be generalized easily.

Since quantitative research relies on data that can be measured, there are a lot of benefits offered by quantitative methods.

Quantitative research benefits

  • Easier to analyze

Analyzing numeric data is easier, so quantitative research can yield large amounts of analyzed data in a short period. There are numerous quantitative data analysis software packages that let the researcher analyze the data quickly (see the short sketch after this list).

  • Allows using large sample sizes

Quantitative research involves close-ended or simple "yes and no" questions, which are easy to answer and to analyze, so a survey can be distributed to practically as many people as you can reach. A larger sample size usually means more accurate research results.

  • More engaging

As quantitative research questions don't feature open-ended questions, participants are more eager to respond to questions. With open-ended qualitative questions, participants sometimes need to write a wall of text, and that is undesirable for many of them. It is easier to choose "yes or no" as it doesn't require much effort. A more engaging research survey means more feedback.  

  • Less biased and more accurate

Qualitative research uses open-ended questions, and since the feedback is often open to interpretation, researchers might be biased when analyzing the data. That is not the case with quantitative research, as it involves answers to preset questions. Less biased data means more accurate data.

  • Needs less time and effort

In all stages of research, quantitative research requires much less time and effort when compared with qualitative research. With different software, it is possible to create, send and analyze a huge volume of quantitative data in just a few clicks. Unlike qualitative in-depth interviews that usually require participants to be in a specific office, quantitative research isn't geographically bound to any location and can be carried out online.
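As promised above, here is a short sketch of how quickly numeric survey data can be summarised. The responses are invented, and pandas is just one of many tools that could be used.

```python
# Minimal sketch: why numeric survey data is quick to analyse (invented data).
import pandas as pd

responses = pd.DataFrame({
    "age_group": ["18-24", "25-34", "25-34", "35-44", "18-24", "35-44"],
    "satisfaction": [4, 5, 3, 4, 2, 5],          # 1-5 scale
    "would_recommend": [1, 1, 0, 1, 0, 1],       # yes = 1, no = 0
})

print(responses["satisfaction"].describe())                    # one-line summary
print(responses.groupby("age_group")["satisfaction"].mean())   # group comparison
```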

Quantitative research limitations

  • Limited information on the subject

Using close-ended questions means there isn't much to interpret. It doesn't allow the researcher to get answers to "why" questions. If you want to get in-depth information on the subject, you need to carry out qualitative research.

  • Can be costly

Although it allows the researcher to reach a higher sample size, finding a large number of participants is expensive, considering you have to pay each participant.

  • Difficulty in confirming the feedback

Quantitative research doesn't usually involve observing participants or talking with them about their answers; therefore, it is difficult to guess if the data gathered from them is accurate all the time. With qualitative methods, you get a chance to observe participants and ask follow-up questions to confirm their answers.

What kind of research do you need?

It may sound too obvious, but you may want to think about the type of research you need to carry out before you start with one. Sometimes quantitative research is not the best practice for a given subject, and you may need to go with qualitative research.  

Clear research goals

Setting a research goal is the first thing every researcher does before setting out to carry out actual research. The success of the research hugely depends on the clearly defined research goals. In other words, it's a make or break point for most research projects. Having confusing research goals is what usually fails the entire project and results in a loss of time and money.

Use user-friendly structure

When creating your surveys and questionnaires, use a user-friendly layout and keep it simple, so it's more engaging for the users. A lot of software offers simple survey templates that you can use effectively.

Choose the right sample

Although quantitative research allows the researcher to use large sample sizes, it is essential to choose the right sample group. The sample group you're trying to get feedback from may not represent your target audience. Therefore, think twice before allocating resources to gathering data from them.

Pay attention to questions

Quantitative research uses closed-ended questions, which means you need to be very careful with the questions you choose. One of the benefits of quantitative research is that it gives you the ability to predetermine the questions, so you need to use this chance and think about the best possible questions you may use for a better result. With quantitative research questions, you usually don't get a chance to ask follow-up questions.

Let your bias out of the research

We already mentioned that quantitative research is less biased than qualitative research, but that doesn't mean it's completely free of bias. In this form of research, bias creeps in through the way the questions are designed: the researcher may frame questions so that the feedback reflects what the researcher wants to hear. It is therefore important to leave out any questions you feel could skew the end result of the research.


Appraising Quantitative Research in Health Education: Guidelines for Public Health Educators

Leonard Jack, Jr.

Associate Dean for Research and Endowed Chair of Minority Health Disparities, College of Pharmacy, Xavier University of Louisiana, 1 Drexel Drive, New Orleans, Louisiana 70125; Telephone: 504-520-5345; Fax: 504-520-7971

Sandra C. Hayes

Central Mississippi Area Health Education Center, 350 West Woodrow Wilson, Suite 3320, Jackson, MS 39213; Telephone: 601-987-0272; Fax: 601-815-5388

Jeanfreau G. Scharalda

Louisiana State University Health Sciences Center School of Nursing, 1900 Gravier Street, New Orleans, Louisiana 70112; Telephone: 504-568-4140; Fax: 504-568-5853

Barbara Stetson

Department of Psychological and Brain Sciences, 317 Life Sciences Building, University of Louisville, Louisville, KY 40292; Telephone: 502-852-2540; Fax: 502-852-8904

Nkenge H. Jones-Jack

Epidemiologist & Evaluation Consultant, Metairie, Louisiana 70002. Telephone: 678-524-1147; Fax: 504-267-4080

Matthew Valliere

Chronic Disease Prevention and Control, Bureau of Primary Care and Rural Health, Office of the Secretary, 628 North 4th Street, Baton Rouge, LA 70821-3118; Telephone: 225-342-2655; Fax: 225-342-2652

William R. Kirchain

Division of Clinical and Administrative Sciences, College of Pharmacy, Xavier University of Louisiana, 1 Drexel Drive, Room 121, New Orleans, Louisiana 70125; Telephone: 504-520-5395; Fax: 504-520-7971

Michael Fagen

Co-Associate Editor for the Evaluation and Practice section of Health Promotion Practice , Department of Community Health Sciences, School of Public Health, University of Illinois at Chicago, 1603 W. Taylor St., M/C 923, Chicago, IL 60608-1260, Telephone: 312-355-0647; Fax: 312-996-3551

Cris LeBlanc

Centers of Excellence Scholar, College of Pharmacy, Xavier University of Louisiana, 1 Drexel Drive, New Orleans, Louisiana 70125; Telephone: 504-520-5345; Fax: 504-520-7971

Many practicing health educators do not feel they possess the skills necessary to critically appraise quantitative research. This publication is designed to help provide practicing health educators with basic tools helpful to facilitate a better understanding of quantitative research. This article describes the major components—title, introduction, methods, analyses, results and discussion sections—of quantitative research. Readers will be introduced to information on the various types of study designs and seven key questions health educators can use to facilitate the appraisal process. Upon reading, health educators will be in a better position to determine whether research studies are well designed and executed.

Appraising the Quality of Quantitative Research in Health Education

Practicing health educators often find themselves with little time to read published research in great detail. Some health educators with limited time to read scientific papers may get frustrated as they get bogged down trying to understand research terminology, methods, and approaches. The purpose of appraising a scientific publication is to assess whether the study’s research questions (hypotheses), methods and results (findings) are sufficiently valid to produce useful information (Fowkes and Fulton, 1991; Donnelly, 2004; Greenhalgh and Taylor, 1997; Johnson and Onwuegbuze, 2004; Greenhalgh, 1997; Yin, 2003; Hennekens and Buring, 1987). Having the ability to deconstruct and reconstruct scientific publications is a critical skill in a results-oriented environment linked to increasing demands and expectations for improved program outcomes and strong justifications for program focus and direction. Health educators must not rely solely on the opinions of researchers but, rather, increase their confidence in their own abilities to discern the quality of published scientific research. Health educators with little experience reading and appraising scientific publications may find this task less difficult if they: 1) become more familiar with the key components of a research publication, and 2) utilize the questions presented in this article to critically appraise the strengths and weaknesses of published research.

Key Components of a Scientific Research Publication

The key components of a research publication should provide important information that is needed to assess the strengths and weaknesses of the research. Key components typically include the publication title, abstract, introduction, research methods used to address the research question(s) or hypothesis, statistical analysis used, results, and the researcher’s interpretation and conclusion or recommended use of results to inform future research or practice. A brief description of these components follows:

Publication Title

A general heading or description should provide immediate insight into the intent of the research. Titles may include information regarding the focus of the research, population or target audience being studied, and study design.

Abstract

An abstract provides the reader with a brief description of the overall research, how it was done, statistical techniques employed, key results, and relevant implications or recommendations.

Introduction

This section elaborates on the content mentioned in the abstract and provides a better idea of what to anticipate in the manuscript. The introduction provides a succinct presentation of previously published literature, thus offering a purpose (rationale) for the study.

Methods

This component of the publication provides critical information on the type of research methods used to conduct the study. Common examples of study designs used to conduct quantitative research include cross-sectional study, cohort study, case-control study, and controlled trial. The methods section should contain information on the inclusion and exclusion criteria used to identify participants in the study.

Statistical Analysis

Quantitative data contains information that is quantifiable, perhaps through surveys that are analyzed using statistical tests to determine whether the results happened by chance. Two types of statistical analyses are used: descriptive and inferential ( Johnson and Onwuegbuze, 2004 ). Descriptive statistics are used to describe the basic features of the study data and provide simple summaries about the sample and measures. With inferential statistics, researchers are trying to reach conclusions that extend beyond the immediate data alone. Thus, they use inferential statistics to make inferences from the data to more general conditions.

Results

This section presents the reader with the researcher’s data and the results of the statistical analyses described in the methods section. Thus, this section must align closely with the methods section.

Discussion (Conclusion)

This section should explain what the data mean, thereby summarizing the main results and findings for the reader. Important limitations (such as the use of a non-random sample, the absence of a control group, and a short duration of the intervention) should be discussed. Researchers should discuss how each limitation can affect the applicability and use of the study results. This section also presents recommendations on ways the study can help advance future health education research and practice.

Critically Appraising the Strengths and Weaknesses of Published Research

During careful reading of the analysis, results, and discussion (conclusion) sections, what key questions might you ask yourself in order to critically appraise the strengths and weaknesses of the research? Based on a careful review of the literature ( Greenhalgh and Taylor, 1997 ; Greenhalgh, 1997 ; and Hennekens and Buring, 1987 ) and our research experiences, we have identified seven key questions around which to guide your assessment of quantitative research.

1) Is a study design identified and appropriately applied?

Study designs refer to the methodology used to investigate a particular health phenomenon. Becoming familiar with the various study designs will help prepare you to critically assess whether the chosen design was applied adequately to answer the research questions (or hypotheses). As mentioned previously, common examples of study designs frequently used to conduct quantitative research include cross-sectional study, cohort study, case-control study, and controlled trial. A brief description of each can be found in Table 1.

Definitions of Study Designs

  • A cross-sectional study is a descriptive study in which disease, risk factors, or other characteristics are measured simultaneously (at one particular point in time) in a given population.
  • A cohort study is an analytical study in which individuals with differing exposures to a suspected factor are identified and then observed for the occurrence of certain health effects over a period of time. Comparison may be made with a control group, but interventions are not normally applied in cohort studies.
  • A case-control study is an analytical study which compares individuals who have a specific condition (“cases”) with a group of individuals without the condition (“controls”). A case-control study generally depends on the collection of retrospective data, thus introducing the possibility of recall bias. Recall bias is the tendency of subjects to report events in a manner that is different between the two groups studied.
  • A controlled trial is an experimental study comparing the intervention administered to one group of individuals (also referred to as the treatment, experimental, or study group) with the outcome in a similar group (control group) that did not receive the intervention. A controlled trial may or may not use randomization to assign individuals to groups, and it may or may not use blinding to prevent them from knowing which treatment they get. In the event study participants are randomly assigned (meaning everyone has an equal chance of being selected) to a treatment or control group, this study design would be referred to as a randomized controlled trial.

2) Is the study sample representative of the group from which it is drawn?

The study sample must be representative of the group from which it is drawn. The study sample must therefore be typical of the wider target audience to whom the research might apply. Addressing whether the study sample is representative of the group from which it is drawn will require the researcher to take into consideration the sampling method and sample size.

Sampling Method

Many sampling methods are used individually or in combination. Keep in mind that sampling methods are divided into two categories: probability sampling and non-probability sampling (Last, 2001). Probability sampling (also called random sampling) is any sampling scheme in which the probability of choosing each individual is the same (or at least known, so it can be readjusted mathematically to be equal). Non-probability sampling is any sampling scheme in which the probability of an individual being chosen is unknown. Researchers should offer a rationale for utilizing non-probability sampling and, when it is utilized, be aware of its limitations. For example, use of a convenience sample (choosing individuals in an unstructured manner) can be justified when collecting pilot data that will inform future studies employing more rigorous sampling methods.

Sample Size

Established statistical theories and formulas are used to generate sample size calculations—the recommended number of individuals necessary to have sufficient power to detect meaningful results at a given level of statistical significance. In the methods section, look for a statement or two confirming whether steps were taken to obtain the appropriate sample size.
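To give a sense of what such a calculation involves, here is a minimal sketch using Python's statsmodels library to estimate the per-group sample size for a two-group comparison; the effect size, significance level, and power values below are illustrative assumptions rather than recommendations.

```python
# Illustrative sample-size (power) calculation for a two-group comparison.
# The effect size, alpha, and power are assumed values for this example.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,   # assumed medium standardized effect (Cohen's d)
    alpha=0.05,        # significance level
    power=0.80,        # desired statistical power
)
print(f"Participants needed per group: {round(n_per_group)}")  # roughly 64
```

In a published paper you would expect to see the corresponding assumptions (expected effect size, significance level, and power) stated alongside the resulting sample size.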

3) In research studies using a control group, is this group adequate for the purpose of the study?

Source of controls.

In case-control and cohort studies, the source of controls should be such that the distribution of characteristics not under investigation is similar to that in the cases or study cohort.

In case-control studies, both cases and controls are often matched on certain characteristics such as age, sex, income, and race. The criteria used for including and excluding study participants must be adequately described and examined carefully. Inclusion and exclusion criteria may include ethnicity, age at diagnosis, length of time living with a health condition, geographic location, and presence or absence of complications. You should critically assess whether matching across these characteristics actually occurred.

4) What is the validity of measurements and outcomes identified in the study?

Validity is the extent to which a measurement captures what it claims to measure. This might take the form of questions contained on a survey, questionnaire, or instrument. Researchers should address one or more of the following types of validity: face, content, criterion-related, and construct (Last, 2001; Trochim and Donnelly, 2008).

Face validity

Face validity assures that, on inspection, the variable of interest appears to measure what it intends to measure. If the researcher has chosen to study a variable that has not been studied before, they will usually need to start with face validity.

Content validity

Content validity involves comparing the content of the measurement technique to the known literature on the topic and validating the fact that the tool (e.g., survey, questionnaire) does represent the literature accurately.

Criterion-related validity

Criterion-related validity involves demonstrating that the measures within a survey, when tested, are effective in predicting a criterion or indicators of a construct.

Construct validity

Construct validity deals with the validation of the construct that underlies the research. Here, researchers test the theory that underlies the hypothesis or research question.

5) To what extent is a common source of bias, lack of blinding, taken into account?

During data collection, a common source of bias is that subjects and/or those collecting the data are not blind to the purpose of the research. This can, for example, lead researchers to go the extra mile to make sure those in the experimental group benefit from the intervention (Fowkes and Fulton, 1991). Inadequate blinding can be a problem in studies utilizing all types of study designs. While total blinding is not always possible, it is essential to appraise whether steps were taken to ensure blinding wherever feasible.

6) To what extent is the study considered complete with regard to drop outs and missing data?

Regardless of the study design employed, one must assess not only the proportion of drop outs in each group, but also why they dropped out. This may point to possible bias, as well as determine what efforts were taken to retain participants in the study.

Missing data

Although missing data are a part of almost all research, they should still be appraised. There are several reasons why data may be missing, and the nature and extent of the missing data should be explained.

7) To what extent are study results influenced by factors that negatively impact their credibility?

Contamination.

In research studies comparing the effectiveness of a structured intervention, contamination occurs when the control group makes changes based on learning what those participating in the intervention are doing. Despite the fact that researchers typically do not report the extent to which contamination occurs, you should nevertheless try to assess whether contamination negatively impacted the credibility of study results.

Confounding factors

A confounding factor is a variable that is related to one or more of the measures or variables defined in a study. A confounding factor may mask an actual association or falsely demonstrate an apparent association between study variables where no real association exists. If confounding factors are not measured and considered, study results may be biased and compromised.

The guidelines and questions presented in this article are by no means exhaustive. However, when applied, they can help health education practitioners obtain a deeper understanding of the quality of published research. While no study is 100% perfect, we do encourage health education practitioners to pause before taking researchers at their word that study results are both accurate and impressive. If you find yourself answering ‘no’ to a majority of the key questions provided, then it is probably safe to say that, from your perspective, the quality of the research is questionable.

Over time, as you repeatedly apply the guidelines presented in this article, you will become more confident and interested in reading research publications from beginning to end. While this article is geared to health educators, it can help anyone interested in learning how to appraise published research. Table 2 lists additional reading resources that can help improve one’s understanding and knowledge of quantitative research. This article and the reading resources identified in Table 2 can serve as useful tools to frame informative conversations with your peers regarding the strengths and weaknesses of published quantitative research in health education.

Publications on How to Read, Write and Appraise Quantitative Research

Contributor Information

Leonard Jack, Jr., Associate Dean for Research and Endowed Chair of Minority Health Disparities, College of Pharmacy, Xavier University of Louisiana, 1 Drexel Drive, New Orleans, Louisiana 70125; Telephone: 504-520-5345; Fax: 504-520-7971.

Sandra C. Hayes, Central Mississippi Area Health Education Center, 350 West Woodrow Wilson, Suite 3320, Jackson, MS 39213; Telephone: 601-987-0272; Fax: 601-815-5388.

Jeanfreau G. Scharalda, Louisiana State University Health Sciences Center School of Nursing, 1900 Gravier Street, New Orleans, Louisiana 70112; Telephone: 504-568-4140; Fax: 504-568-5853.

Barbara Stetson, Department of Psychological and Brain Sciences, 317 Life Sciences Building, University of Louisville, Louisville, KY 40292; Telephone: 502-852-2540; Fax: 502-852-8904.

Nkenge H. Jones-Jack, Epidemiologist & Evaluation Consultant, Metairie, Louisiana 70002. Telephone: 678-524-1147; Fax: 504-267-4080.

Matthew Valliere, Chronic Disease Prevention and Control, Bureau of Primary Care and Rural Health, Office of the Secretary, 628 North 4th Street, Baton Rouge, LA 70821-3118; Telephone: 225-342-2655; Fax: 225-342-2652.

William R. Kirchain, Division of Clinical and Administrative Sciences, College of Pharmacy, Xavier University of Louisiana, 1 Drexel Drive, Room 121, New Orleans, Louisiana 70125; Telephone: 504-520-5395; Fax: 504-520-7971.

Michael Fagen, Co-Associate Editor for the Evaluation and Practice section of Health Promotion Practice , Department of Community Health Sciences, School of Public Health, University of Illinois at Chicago, 1603 W. Taylor St., M/C 923, Chicago, IL 60608-1260, Telephone: 312-355-0647; Fax: 312-996-3551.

Cris LeBlanc, Centers of Excellence Scholar, College of Pharmacy, Xavier University of Louisiana, 1 Drexel Drive, New Orleans, Louisiana 70125; Telephone: 504-520-5345; Fax: 504-520-7971.

  • Fowkes FG, Fulton PM. Critical appraisal of published research: introductory guidelines. British Medical Journal. 1991;302:1136–1140.
  • Donnelly RA. The Complete Idiot's Guide to Statistics. New York, NY: Alpha Books; 2004. pp. 6–7.
  • Greenhalgh T, Taylor R. How to read a paper: Papers that go beyond numbers (qualitative research). British Medical Journal. 1997;315:740–743.
  • Greenhalgh T. How to read a paper: Assessing the methodological quality of published papers. British Medical Journal. 1997;315:305–308.
  • Johnson RB, Onwuegbuzie AJ. Mixed methods research: A research paradigm whose time has come. Educational Researcher. 2004;33:14–26.
  • Hennekens CH, Buring JE. Epidemiology in Medicine. Boston, MA: Little, Brown and Company; 1987. pp. 106–108.
  • Last JM. A Dictionary of Epidemiology. 4th ed. New York, NY: Oxford University Press; 2001.
  • Trochim WM, Donnelly J. Research Methods Knowledge Base. 3rd ed. Mason, OH: Atomic Dog; 2008. pp. 6–8.

Strengths and Weaknesses of Quantitative and Qualitative Research

There are few things more useful in developing and implementing strategies than reliable data. The drawback is that this information can be difficult to interpret, which leaves many business owners knowing little about their own research.

When starting a company or building a product, most people ask themselves the question: qualitative or quantitative research? Given the importance of coming up with a good strategy, this is not an easy question to answer.

Here is a quick look at the strengths and weaknesses of quantitative research.

What Is Quantitative Research?

Quantitative research is a study of numerical data whose purpose is to measure the strength and direction of relationships between variables. Quantitative research uses statistics to make sense of numerical data.

Quantitative research is based on numerical data gathered from different types of research methods, such as questionnaires, structured interviews, and statistical analysis.

Quantitative research involves questions that can be answered by counting or measuring, such as: How many people purchased a product? How many people are satisfied with the customer service? What are the demographics of customers in different age groups?

For your study to be quantitative, you need to use numerical data to either support or refute your hypothesis.

For example, quantitative research about a new product launch could use data such as the average consumption of products in the category among the target population, the number of competitors and their individual market shares, pricing points, and the marketing budget required to launch a brand awareness campaign, to mention a few.

This type of research helps you to understand your market and target audience, so you can make informed decisions about your product or service.

The biggest advantage of quantitative research is the ability to analyze large volumes of data and make conclusions based on that data.

Difference Between Qualitative And Quantitative Research

The main difference is this: qualitative research methods collect data through open-ended questions, unstructured interviews, or observations, whereas quantitative research focuses on gathering numerical data and making generalizations about groups of people, situations, or phenomena.

Understanding human behavior and its governing reasons are the ultimate goals of Qualitative research. The discipline explores the “why” and “how” of decision-making.

Quantitative data collection methods are more structured than qualitative data collection ones.

When you need to gather a large amount of information from a group of people, there are many ways to do so. In quantitative research, data can be collected using a variety of methods, including surveys, interviews, observation, and online polls.

A good researcher knows when to use qualitative research (to understand opinions) vs quantitative research (to test objectively). 

For example, if you want to know what people think about a particular topic, then qualitative research would be best; but if you want to determine how many people are aware of a particular issue, then quantitative research would be better.

When you use both qualitative and quantitative research methods in your surveys, you will gain results that reach a lot of people as well as deeper insights from those people. With the right question types and analysis, you can use quantitative research to gain statistically significant insights into your target audience’s attitudes and behaviors.

Qualitative questions are useful for gathering detailed feedback on open-ended topics like:

Customer satisfaction. Qualitative questions let customers explain how they feel about your company’s products or services, and why they feel that way.

Employee engagement. Use open-ended questions to solicit employee feedback on company culture, management practices, benefits, and more.

Service performance. Learn why customers choose your brand over competitors’ by asking for the specific reasons for their decision.

Market research. Open-ended questions help you identify the most important factors that influence customers’ purchasing decisions in your market.

Quantitative research is ideal for:

  • Collecting data at scale (e.g., using survey software)
  • Reaching a large number of respondents in a short period of time
  • Analyzing trends that apply to large groups of people (e.g., gender differences)
  • Highlighting broad patterns or relationships between variables
  • Predicting likelihoods based on certain factors (e.g., age, income)
  • Driving the direction of future quantitative studies (i.e., hypothesis testing)

Importance Of Quantitative Research

The importance of quantitative research is that it provides an objective way to measure things, as well as a means of testing theories. Additionally, the results of quantitative research may be more easily replicated by other researchers.

Quantitative research is conducted in an effort to find numbers and statistical analysis to determine relationships between two or more variables. The process involves taking data from various sources and then organizing it into a format that can be used for statistical analysis.

One advantage of quantitative research is its ability to measure hard numbers and facts. This makes it much simpler to analyze data. 

For example, if you wanted to know the average income of people living in a certain area, all you would have to do is collect the income figures reported by the participants in your study and calculate the mean. You could also compare this figure with data from other areas to see which has the highest average income level.
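As a minimal illustration of that comparison (with invented income figures), the averages can be computed and compared directly:

```python
# Toy example: compare average income across two areas (figures are invented).
from statistics import mean

incomes_area_a = [42_000, 55_000, 61_000, 38_000, 47_500]
incomes_area_b = [51_000, 49_000, 66_500, 58_000, 72_000]

avg_a = mean(incomes_area_a)
avg_b = mean(incomes_area_b)
print(f"Area A average income: {avg_a:,.0f}")
print(f"Area B average income: {avg_b:,.0f}")
print("Higher average:", "Area A" if avg_a > avg_b else "Area B")
```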

Another advantage is that quantitative research allows researchers to replicate their findings using different samples or methods. The ability to replicate results ensures accuracy and consistency in results obtained from different studies conducted on similar topics over time. 

Furthermore, this type of research may reveal new insights into how something works because it focuses on measurable relationships rather than just observations about what happens in nature or human behavior itself.

Characteristics Of Quantitative Research

Quantitative research is the type of research that most people think about when they hear the word “research”. It involves creating statistical models, analyzing data, and using mathematical theories to understand how things work.

Quantitative research is used to identify factors that affect relationships between variables. Quantitative research is widely used in psychology, economics, demography, and marketing. It is often used in natural sciences, such as biology and chemistry, and in social sciences, such as sociology and psychology. Quantitative research involves the use of computational, mathematical, or statistical techniques.

For example, if a researcher believes that watching television makes people more violent, he or she may use quantitative methods to test this theory by counting the number of violent acts depicted in a week’s worth of programming and comparing it with the number of violent crimes committed for the same time period.

These are some essential characteristics of Quantitative research:

  • The focus is on measurement, analysis, and prediction of phenomena through the use of mathematical models and theories.
  • Quantitative research’s objective is to obtain information about the current status of a given phenomenon.
  • The focus is on variables and the relationships between them.
  • The researcher can manipulate variables, which is why experiments are often used in quantitative research.
  • Quantitative research includes formal data collection methods.
  • The results are based on large sample sizes, so the results have high statistical power and are more likely to be statistically significant (i.e., not due to chance).
  • Data is analyzed using statistical techniques.
  • Quantitative research typically uses deductive reasoning.
  • Variables must be identified and measured using reliable instruments and procedures; using multiple methods of measurement increases the reliability and validity of results (triangulation).

The design of a quantitative research question must be structured or ‘closed’ so that it can be answered using a predetermined response format (usually dichotomous or multiple choice) or scaled responses. 

The design of the quantitative research question should not allow respondents to answer in their own words, since free-text answers cannot be analyzed quantitatively in any meaningful way.

The quantitative design will measure whether a change has occurred from a specific point in time, but will not determine why a change has occurred.

Quantitative research questions are best for giving an overview or analysis of a particular business, industry, or topic. Therefore, they need to be researched in detail so that the researcher can be confident that enough information exists to answer the questions. If there is no literature available on the topic, then it is unlikely that you will have sufficient knowledge to investigate the topic effectively.

Conducting thorough industry research is crucial in ensuring that the quantitative research questions are well-informed and grounded in existing knowledge.

Strengths Of Quantitative Research

Quantitative research is often used to ask questions that can be answered with numerical data. It has a number of strengths:

  • Standardized data collection

This means that the same instruments are used with all the participants in a study, and the data is collected in a uniform way. This makes it easier to compare results across groups of participants or to test hypotheses on a larger scale.

  • Objectivity

The standardization of both data collection and analysis can make results from studies more objective than those with qualitative research methods. The use of statistics and hard numbers can also give your findings authority when you publish them online or in a print journal. This objectivity makes it easier for researchers to explain why their findings are reliable and true.

  • Capturing Hard-to-Measure Data

Quantitative studies can also provide researchers with data about phenomena that are difficult or impossible to measure directly, such as attitudes, beliefs, and values.

Quantitative research allows for larger sample sizes, which increases the reliability of your results. It also moves quickly and can produce results that are easy to share with others, because they’re often presented as percentages.

  • Generalizability

You might find that what you learn applies not only to your research participants but also to people who weren’t included in your study. For example, if you ask 1,000 people what’s important to them about their job, you might find out some things about how work affects happiness that could be true for other people as well.

  • Evidence Collection

The design of a quantitative study allows the researcher to collect numerical data that can be analyzed using statistical tests. This provides an opportunity for the researcher to support or refute theories by collecting evidence that is statistically significant.
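As a sketch of what such a statistical test might look like in practice, the example below runs an independent-samples t-test on two invented sets of outcome scores using SciPy; the data and the conventional 0.05 threshold are assumptions made purely for illustration.

```python
# Illustrative hypothesis test: do two groups differ on an outcome score?
# The scores below are invented for demonstration purposes.
from scipy import stats

control_scores = [70, 68, 75, 72, 66, 71, 69, 73]
treatment_scores = [78, 74, 80, 77, 72, 79, 75, 81]

t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:  # conventional significance threshold
    print("The difference is statistically significant at the 0.05 level.")
else:
    print("No statistically significant difference was detected.")
```

A p-value below the threshold would be reported as evidence consistent with the hypothesis, not as proof of it.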

Weaknesses Of Quantitative Research

Quantitative research is a useful tool for measuring and describing the world as it exists, but it has its weaknesses as well.

Quantitative data is often criticized for being too detached from real-life situations; this criticism typically stems from the fact that the data collected tends to be structured and limited in nature. 

Some have argued that quantitative analysis does not provide people with a full picture of complex issues or human behavior since it is concerned with measuring and counting specific variables.

Quantitative researchers are concerned with how much and how many, but their methods don’t allow them to understand why something happens. They can find correlations between factors, but not necessarily causes. 

For example, they might discover that people who drink more coffee have higher rates of cardiovascular disease than people who drink less coffee, but they can’t conclude that drinking coffee causes heart problems.
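To make that distinction concrete, here is a small sketch that computes a correlation coefficient for invented coffee-consumption and disease-rate figures; even a strong correlation in this kind of analysis says nothing about causation.

```python
# Correlation example with invented data: coffee intake vs. disease rate.
# A high correlation coefficient does not establish a causal relationship.
from scipy import stats

cups_per_day = [1, 2, 2, 3, 4, 4, 5, 6]
disease_rate = [2.1, 2.4, 2.3, 3.0, 3.4, 3.3, 3.9, 4.2]  # per 1,000 people

r, p_value = stats.pearsonr(cups_per_day, disease_rate)
print(f"Pearson r = {r:.2f} (p = {p_value:.4f})")
# Even a strong r could be driven by confounders such as smoking or stress.
```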

Quantitative research doesn’t always take into account a human element. People make decisions based on more than just mathematical calculations, and that’s an important part of the human experience. It’s also difficult to account for the subjective nature of human experience in quantitative methods such as surveys and questionnaires.

Quantitative research tends to minimize the role of the researcher in the research process, thereby reducing the amount of information that can be obtained on contextual factors.

Quantitative research tends not to generate new ideas or shed light on unexplored areas because it focuses on testing hypotheses derived from existing theories and concepts.

Types Of Quantitative Research

There are five main types of Quantitative research:

  • Descriptive Research

Descriptive research produces a description of what already exists in a group or population. It usually involves taking a sample from the population in order to describe a certain characteristic of the entire group. 

It does not seek to explain why things are a certain way or how they came about but rather describes what is and what is not.

  • Correlational Research

Correlational research investigates relationships between variables as well as how these variables interact with one another. 

Unlike descriptive research, correlational research goes beyond description by seeking to identify the strength, direction, and nature of relationships between two or more variables. 

While it cannot be used to determine causality due to its correlational nature, it can be used to predict outcomes based on the relationship that exists between variables.
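One minimal way to sketch this kind of prediction, using invented data and a simple linear regression (an assumed choice of model for illustration only):

```python
# Correlational prediction sketch: predict satisfaction from hours of use.
# Data are invented; the fitted line only describes association, not cause.
from scipy import stats

hours_of_use = [1, 2, 3, 4, 5, 6, 7, 8]
satisfaction = [3.1, 3.4, 3.9, 4.2, 4.8, 5.1, 5.6, 5.8]  # 1-7 rating scale

result = stats.linregress(hours_of_use, satisfaction)
predicted = result.intercept + result.slope * 10  # predict for 10 hours of use
print(f"slope = {result.slope:.2f}, r = {result.rvalue:.2f}")
print(f"Predicted satisfaction at 10 hours: {predicted:.1f}")
```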

  • Experimental Research

Experimental research involves testing a hypothesis by conducting experiments using various methods such as controlled laboratory-based scenarios, field experiments, and randomized trials. 

Experimental design involves the manipulation and measurement of variables to observe their effect on each other. This enables researchers to determine cause-and-effect relationships between variables.

  • Survey Research

Survey research is a quantitative method that involves the usage of different research instruments such as questionnaires or schedules to gather data. 

Surveys are usually done in cases where it is difficult to conduct an experiment such as in the case of social sciences. 

The most common forms of survey research include mail surveys, telephone interviews, and face-to-face interviews.

  • Causal-Comparative Research

Causal-comparative research is a type of research that is used when the researcher has limited control over variables, such as in a field experiment. This type of research does not involve randomization of participants or experimental manipulation, as in true experimental studies.

The name causal-comparative research comes from two terms, causal and comparative. Causal implies that the study attempts to determine whether one variable causes another. Comparative indicates that groups are compared but not randomly assigned to groups by the researcher.

When To Use Quantitative Research

Quantitative research is a great way to collect data on a large scale when you have many respondents. 

This can be useful when you need a lot of data points and/or want to record responses for future analysis. It's also good for surveys that are complex and/or have many questions.

If your audience is large (across multiple locations, or across countries) or if you have a smaller audience but want them to complete your survey in their own language, quantitative research is the way to go.

If your business is just getting started with market research, quantitative methods will give you an excellent baseline of information upon which to build later qualitative research projects.

Qualitative research gets to the heart of your problem, giving you much more detailed data than quantitative methods would. 

Qualitative research is more appropriate for projects that:

  • require more in-depth answers than “yes” or “no”
  • have small sample sizes
  • require detailed interviews or observations
  • are exploratory in nature

Is Qualitative Or Quantitative Research Better?

A good thing to keep in mind is that there isn’t really a “right” answer – it all depends on what you are trying to find out!

Qualitative and quantitative research are often seen as opposing approaches, but they both have their advantages and disadvantages. While there is a lot of debate between these two types of studies, they are not mutually exclusive and can work together to generate meaningful results.

Qualitative research gathers information that seeks to describe a topic more than measure it. Qualitative research is often used to conduct market analysis and identify consumer trends, motivations and behaviors.

Quantitative research is the best way to test for and provide evidence of a cause-and-effect relationship. If you want to make an argument about why something is happening, quantitative research can help you do that.

For example, if you wanted to say that more guns in the hands of private citizens lowered crime rates, you could run a study with data on crime rates and gun ownership across states and find statistical correlations between them.

Qualitative research describes and interprets what people say and do. Instead of using numbers to describe some phenomenon, it uses words and pictures instead. It’s best for exploring questions that don’t have clear answers yet, like how people feel about a new product or how they respond to a new marketing campaign.

For example, if you wanted to know how people reacted when they saw your new TV commercial, the best way would probably be to show it to people in a focus group and tape their reactions. The group moderator might ask some follow-up questions and people might comment on each other’s reactions, but the goal is less about making an argument than understanding what’s happening.

Is Survey Qualitative Or Quantitative Research?

A survey can be considered qualitative or quantitative depending on the type of questions asked. 

Quantitative surveys ask closed-ended questions – those requiring a “yes” or “no”, a number rating, or a selection from a predetermined list of answers (e.g., choose from “Excellent”, “Good”, etc.). These kinds of questions allow for analysis that can be statistically inferred across the entire population being surveyed.
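As a small illustration (with invented responses), closed-ended answers can be tallied and expressed as percentages across the sample:

```python
# Tally closed-ended survey responses and report percentages (invented data).
from collections import Counter

responses = ["Excellent", "Good", "Good", "Fair", "Excellent",
             "Good", "Poor", "Excellent", "Good", "Fair"]

counts = Counter(responses)
total = len(responses)
for option in ["Excellent", "Good", "Fair", "Poor"]:
    share = 100 * counts[option] / total
    print(f"{option:>9}: {counts[option]:2d} responses ({share:.0f}%)")
```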

Qualitative surveys (also known as unstructured interviews) ask open-ended questions that require respondents to provide free-form answers, which cannot be statistically inferred across the entire population being surveyed and therefore may not scale well if the sample size is very large.

Is Questionnaire A Quantitative Research?

A questionnaire is a series of questions or other prompts for gathering information about a subject. Although many researchers use questionnaires for statistical analysis, this is not always the case. So, yes, a questionnaire can be either qualitative or quantitative, depending on the type of questions it contains.

The questionnaire is an integral part of survey research. It is a written or verbal series of questions pertaining to a specific topic, to which the respondent provides answers. 

Questionnaires are usually designed to obtain information from a large number of respondents on one or more occasions. 

The structured interview is normally used where it is necessary to keep close control over the questioning and to ensure that all respondents are asked exactly the same questions in precisely the same way.

The design process can be complex and time-consuming and many aspects need to be decided by the researcher before starting to write up the questionnaire:

  • How will you distribute it? By hand? By mail? Online?
  • What type of language will you use? Formal? Informal? Will it be general, or will specific jargon be included?
  • How long will your questionnaire be?

Is Statistics Quantitative Research?

Quantitative research involves statistical analysis, such as calculating averages or percentages in surveys. In its most basic form, you count things, and then you make conclusions based on the numbers — usually about how common something is.

Statistics is a quantitative research method. It is used to quantify opinions, attitudes, and behaviors. This method involves the statistical analysis of data collected through polls, questionnaires, or surveys. The survey could be administered through personal interviews, telephone conversations, or the use of online survey forms.

This method is the most widely used in business research, and most businesses make decisions based on quantitative methods. It is easy to administer to a large population, with computers handling the calculations and report preparation. It is also easy to understand and implement because it uses statistical terms that are straightforward to interpret. Both small and large businesses use this method to make decisions based on quantified data.

Is Quasi-Experimental Quantitative Research?

Quasi-experimental research is another type of experimental research design. Therefore, it is quantitative research. The difference between them is that the quasi-experimental design does not include random assignment. With this type of design, a researcher will create an experimental group and a control group, but not through random assignment. Instead, the researcher will assign participants to each group based on criteria such as specific characteristics or behavior.

One advantage of Quasi-Experimental research is that it is easier to carry out than randomized experiments. It can also be less expensive because it does not require random assignment to groups. 

However, the researcher may have trouble determining whether the results from these groups are credible, because there could be confounding factors impacting the results that were not controlled for in the study's design.

Does Quantitative Research Have Hypothesis?

Yes, quantitative research methods do have hypotheses. In fact, the whole idea of quantitative research is to test a hypothesis.

The hypothesis of quantitative research must always be stated in a clear manner. This is because the hypothesis helps to explain the relationship that exists between the different variables that have been used for the study.

However, quantitative research does not necessarily have a single hypothesis; it often has more than one. The number and nature of these hypotheses will depend on the scope and coverage of the study. The researcher will use these hypotheses to conclude whether there was any correlation between the variables used, or whether one variable had an effect on another.

Does Quantitative Research Use Interviews?

Interviews in quantitative research are often structured. This means that the interviewer asks the same questions, in the same order, of every respondent.

This is so that researchers are able to make comparisons between groups of people and draw conclusions about them.

For example, if a survey was looking at how many hours a week people spend on homework, it would be useful to know the subject they are studying and their level of education. These questions would be asked before asking about study time specifically so that any differences between groups can be explored further.

Respondents are also given a limited number of response options to choose from, for example: 1–5 hours, 6–10 hours, 11–15 hours, 16–20 hours, 20+ hours.

Structured interviews also make it easier for data to be analyzed by computer programs or entered into databases.

Does Quantitative Research Focus On Human Experiences?

Quantitative research can focus on human experiences by measuring them; for example, it can look into why some people do certain things while others do not carry out the same actions at all.

Quantitative research is also known as positivist research. 

It is a systematic process of collecting, organizing, analyzing, and interpreting numerical data. 

Quantitative researchers are involved in the entire research process from defining the problem to shaping the findings for presentation. 

They use probability sampling techniques, which refer to selecting samples from a population in such a way that each individual has an equal chance of being selected.

How To Determine Sample Size For Quantitative Research?

There are several methods you can use to determine the sample size. Some methods include using statistical tables and online calculators. Other methods involve using formulas to estimate sample size.

1. Using Statistical Tables

The first method you can use to calculate sample size involves using statistical tables. You need two parameters to do this: a confidence level and a margin of error.

2. Online Calculators

The second method is to use online calculators like Survey Monkey or the Raosoft Sample Size Calculator. To use these calculators, you fill out information such as the population size, confidence level, and margin of error, among others, and click the calculate button.

3. Using Formulas

A sample size formula can be used to calculate the appropriate sample size based on factors such as population size, the margin of error, and confidence level. There are various formulas you can choose from.

A common one is Cochran's Sample Size Formula. This formula can be used when one needs to determine the appropriate sample size for estimating a proportion or a percentage.

The formula is: n = (Z² × p × q) / e²

where n = sample size; p = estimated proportion; q = 1 − p; e = margin of error (for example, 0.05); and Z = the z-score for the selected confidence level (for example, 1.96 for a 95% confidence level).
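A minimal sketch of this calculation in Python, using p = 0.5 (the conventional worst-case estimate), a 5% margin of error, and Z = 1.96 for a 95% confidence level; all inputs are illustrative assumptions:

```python
# Cochran's sample size formula: n = (Z^2 * p * q) / e^2 (illustrative values).
import math

z = 1.96   # z-score for a 95% confidence level
p = 0.5    # estimated proportion (0.5 maximizes the required sample)
q = 1 - p
e = 0.05   # margin of error

n = (z ** 2 * p * q) / (e ** 2)
print(f"Required sample size: {math.ceil(n)}")  # about 385 respondents
```

With these inputs the formula returns roughly 385 respondents, the figure commonly quoted for a 95% confidence level with a ±5% margin of error.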

Is Quantitative Research Objective?

Quantitative research focuses on measurable concepts and uses precise measurements and analysis to answer a specific question. It is designed to be objective in nature.

This type of research aims at testing theories by examining the relationship among variables with the help of different research tools. The relationship between variables can be causal or correlational.

In other words, quantitative researchers are more interested in determining whether the data gathered shows a true representation of the population under study.

Is Quantitative Research Scientific And Measurable?

The scientific and measurable characteristic of quantitative research is one of its greatest strengths. In fact, it’s the reason why so many scientists prefer quantitative research over qualitative research. Quantitative research can be reproduced and validated by other researchers, which makes the results generalizable and very reliable.

Because quantitative research is so reliable, it can be used to create a theory or model that accurately describes a phenomenon. 

For example, because Newton’s laws of motion have been verified by countless experiments, we can use them to develop complex models for predicting how objects will behave in different situations.

The data can be obtained using various instruments such as questionnaires and surveys. Quantitative research gathers information that is measurable, such as age, number of hours worked, and so on.

The main objective of quantitative research is to measure phenomena. It allows for the collection of numerical data that can be analyzed in order to explain what is being measured. This type of research aims at verifying theories and hypotheses by means of observation and measurements of variables.

Quantitative research does not deal with subjective ideas or opinions, but with measurable facts. It uses a deductive approach to gather information from a large sample, which then can be used to infer conclusions about the population from which it was drawn.

It can be quite useful to understand what quantitative research is, particularly when you are doing some research of your own. By understanding more about the process, you will be better prepared to conduct quantitative research and turn it into useful information.

Quantitative research is one of the more scientific and technical forms of market research. It's a good way to get specific and detailed data (hence, quantitative). Not only will you get statistics and numbers, but you'll genuinely learn something about your audience. It's a great way to find out exactly what your audience wants.

Ultimately, both types of research complement one another. If you don’t have enough data yet, qualitative research can help you identify potential problems in your quantitative study. Even if you have an abundance of data from a previous research project, conducting a qualitative study prior to analyzing your quantitative data and drawing conclusions can lead to better results.


The strengths and weaknesses of quantitative and qualitative research: what method for nursing?

Affiliation.

  • 1 Department of Professional Development, Wealden College of Health and Social Studies, East Surrey Hospital, Redhill, Surrey, England.
  • PMID: 7822608
  • DOI: 10.1046/j.1365-2648.1994.20040716.x

The overall purpose of research for any profession is to discover the truth of the discipline. This paper examines the controversy over the methods by which truth is obtained, by examining the differences and similarities between quantitative and qualitative research. The historically negative bias against qualitative research is discussed, as well as the strengths and weaknesses of both approaches, with issues highlighted by reference to nursing research. Consideration is given to issues of sampling; the relationship between the researcher and subject; methodologies and collated data; validity; reliability, and ethical dilemmas. The author identifies that neither approach is superior to the other; qualitative research appears invaluable for the exploration of subjective experiences of patients and nurses, and quantitative methods facilitate the discovery of quantifiable information. Combining the strengths of both approaches in triangulation, if time and money permit, is also proposed as a valuable means of discovering the truth about nursing. It is argued that if nursing scholars limit themselves to one method of enquiry, restrictions will be placed on the development of nursing knowledge.


HSCI 703 Quantitative Research Methods

  • Course Description

For information regarding prerequisites for this course, please refer to the  Academic Course Catalog .

Course Guide

View this course’s outcomes, policies, schedule, and more.*

*The information contained in our Course Guides is provided as a sample. Specific course curriculum and requirements for each course are provided by individual instructors each semester. Students should not use Course Guides to find and complete assignments, class prerequisites, or order books.

Research design, research methods and statistical analysis are critical elements of scientific discovery. Understanding the characteristics of quantitative study design, quantitative data collection and quantitative data analysis is a cornerstone of doctoral studies. Here students will explore research design, research methods and data analysis using the foundational skills of quantitative analysis. Doctoral candidates need fundamental skills using statistical software to analyze data. In this course students will use SPSS to analyze raw data. Finally, critical analysis is a cornerstone of doctoral studies. In this course students will critically analyze data presented in peer reviewed sources.

Course Assignment

Textbook readings and lecture presentations.

No details available.

Course Requirements Checklist

After reading the Course Syllabus and  Student Expectations , the student will complete the related checklist found in the Course Overview.

Discussions (4)

Discussions are collaborative learning experiences. Therefore, the student will complete four discussion assignments of at least 500 words and two replies of at least 300 words in this course. In general, these discussions are for the student’s interaction with peers rather than a focus on form (CLO: A, D). 

Study Design Analysis Project

Student will analyze three study designs highlighting their strengths and weaknesses (CLO: A). 

Data Collection and Transformation Project

Students will create and transform a small data set using a survey tool and transform the data for analysis (CLO: E). 

Data Analysis Projects (3)

Students will conduct analysis using SPSS or R software on data sets provided. Data analysis projects will include Student’s t-test, ANOVA and Chi-Square (CLO: B). 

Critical Analysis of Peer-Reviewed Article

Student will critically analyze the methods section of a peer reviewed article of their choosing that closely aligns with their proposed research question (CLO: C). 

Quizzes (8)

Each quiz will cover the Learn material for the assigned module. Each quiz will contain 5 multiple-choice questions and 1 upload question, and have a 25-minute time limit. (CLO: B, D). 

Validation of a quantitative instrument measuring critical success factors and acceptance of Casemix system implementation in the total hospital information system in Malaysia (BMJ Open, Volume 14, Issue 8)

  • Noor Khairiyah Mustafa 1 , 2 ,
  • http://orcid.org/0000-0002-4741-5970 Roszita Ibrahim 1 ,
  • Zainudin Awang 3 ,
  • Azimatun Noor Aizuddin 1 , 4 ,
  • Syed Mohamed Aljunid Syed Junid 5
  • 1 Department of Public Health Medicine , Universiti Kebangsaan Malaysia Fakulti Perubatan , Cheras , Federal Territory of Kuala Lumpur , Malaysia
  • 2 Ministry of Health Malaysia , Putrajaya , Malaysia
  • 3 Faculty of Business Management , Universiti Sultan Zainal Abidin , Kuala Terengganu , Malaysia
  • 4 International Casemix Centre (ITCC) , Hospital Universiti Kebangsaan Malaysia , Cheras , Kuala Lumpur , Malaysia
  • 5 Department of Public Health and Community Medicine , International Medical University , Kuala Lumpur , Federal Territory of Kuala Lumpur , Malaysia
  • Correspondence to Dr Roszita Ibrahim; roszita{at}ppukm.ukm.edu.my

Objectives This study aims to address the significant knowledge gap in the literature on the implementation of Casemix system in total hospital information systems (THIS). The research focuses on validating a quantitative instrument to assess medical doctors’ acceptance of the Casemix system in Ministry of Health (MOH) Malaysia facilities using THIS.

Design A sequential explanatory mixed-methods study was conducted, starting with a cross-sectional quantitative phase using a self-administered online questionnaire that adapted previous instruments to the current setting based on the Human, Organisation, Technology-Fit and Technology Acceptance Model frameworks, followed by a qualitative phase using in-depth interviews. However, this article explicitly emphasises the quantitative phase.

Setting The study was conducted in five MOH hospitals with THIS technology from five zones.

Participants Prior to the quantitative field study, rigorous procedures including content, criterion and face validation, translation, pilot testing and exploratory factor analysis (EFA) were undertaken, resulting in a refined questionnaire consisting of 41 items. Confirmatory factor analysis (CFA) was then performed on data collected from 343 respondents selected via stratified random sampling to validate the measurement model.

Results The study found satisfactory Kaiser-Meyer-Olkin values, a significant Bartlett's test of sphericity, satisfactory factor loadings (>0.6) and high internal reliability for each item. One item was eliminated during EFA, and the organisational characteristics construct was refined into two components. The study confirms unidimensionality, construct validity, convergent validity, discriminant validity and composite reliability through CFA. With the instrument's validity, reliability and normality established, the questionnaire is considered validated and operational.

Conclusion By elucidating critical success factor and acceptance of Casemix, this research informs strategies for enhancing its implementation within the THIS environment. Moving forward, the validated instrument will serve as a valuable tool in future research endeavours aimed at evaluating the adoption of the Casemix system within THIS, addressing a notable gap in current literature.

  • quality in health care
  • public health
  • health economics

Data availability statement

No data are available.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See:  http://creativecommons.org/licenses/by-nc/4.0/ .

https://doi.org/10.1136/bmjopen-2023-082547


STRENGTHS AND LIMITATIONS OF THIS STUDY

The rigorous validation process of the questionnaire, including initial validation, translation, pre-testing and exploratory factor analysis using pilot test data, followed by confirmatory factor analysis using field data, enhances the reliability and validity of the instrument used for data collection.

The use of statistical techniques such as the Kaiser-Meyer-Olkin (KMO) measure, Bartlett’s test of sphericity, factor loadings, Cronbach’s alpha and various validity tests (unidimensionality, construct validity, convergent validity, discriminant validity) ensures the robustness of the analysis.

While the large sample size enhances generalisability to some extent, the study was conducted in only five selected hospitals in Malaysia; thus, the findings may not be representative of all hospitals in the country or other healthcare systems.

This study does not include other professional roles, such as paramedics, medical record officers, information technology officers and finance officers because the knowledge and involvement of these roles in the Casemix system are not comparable to that of medical doctors.

The findings of the study may be specific to the healthcare context in Malaysia and may not be directly applicable to other countries or healthcare systems with different sociocultural, organisational or technological characteristics.
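As a small illustration of one of the reliability checks mentioned above, the sketch below computes Cronbach's alpha for a toy item-response matrix with NumPy; the data and the 0.7 rule of thumb are illustrative assumptions, not values from the study.

```python
# Cronbach's alpha for a small, invented set of questionnaire responses.
# Rows = respondents, columns = items (e.g., 5-point Likert ratings).
import numpy as np

scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])

k = scores.shape[1]                          # number of items
item_vars = scores.var(axis=0, ddof=1)       # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' totals
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")     # >= 0.7 is a common rule of thumb
```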

Introduction

The global healthcare landscape is witnessing profound evolution driven by an array of challenges, including the rise of non-communicable diseases, the resurgence of communicable diseases, demographic shifts and escalating healthcare costs. 1 Governments and healthcare authorities worldwide are under mounting pressure to navigate these complexities while optimising operational efficiency and ensuring equitable access to quality healthcare services. 1 Within this context, Malaysia has emerged as a proactive player, spearheading innovative strategies to streamline healthcare delivery and bolster system performance. The Ministry of Health (MOH) Malaysia’s proactive stance is exemplified by its robust efforts to standardise and enhance the quality of healthcare services through the implementation of clinical standards and pathways based on international best practices. 2 Notably, initiatives such as the hospital information system (HIS) and the Casemix system have been instrumental in revolutionising healthcare management practices and fostering a culture of continuous improvement. 3–6

Background of Hospital Information System (HIS)

The HIS stands as a cornerstone of technological innovation in healthcare management, offering a comprehensive platform for efficient data collection, storage and processing. 7 HIS responsibilities include managing shared information, enhancing medical record quality, overseeing healthcare quality and error reduction, promoting institutional transparency, analysing healthcare economics and reducing examination and treatment durations. 8–13 In Malaysia, the adoption of HIS, categorised into total hospital information system (THIS), intermediate hospital information system (IHIS) and basic hospital information system, has paved the way for seamless integration of patient data, administrative tasks, financial transactions and appointment management into a single system within a hospital. 14–19 The pioneering implementation of a fully integrated paperless system as a THIS facility at Hospital Selayang underscores Malaysia’s commitment to embracing cutting-edge technology to enhance healthcare delivery. 20–22 Today, 19 out of 149 Malaysian hospitals have IT facilities. 23 24 Despite challenges during implementation, the overall advantages of using a comprehensive system are considerable. 22 25–29

Background of Casemix system

The Casemix system is a global system that categorises patient information and treatments based on their types and associated costs, aiming to identify patients with similar resource needs and treatment expenses. 30 31 It is widely used around the world, including in the USA, Western Europe, Australia, Eastern Europe and Asia, and plays a crucial role in hospital financing. 32 33 Originating from Australia, it optimises resource utilisation, improves cost transparency and enhances healthcare service efficiency. 34 35 However, its adoption in developing nations like Malaysia faces challenges due to technological constraints and resource limitations. 23 36 37 The Malaysian diagnosis-related group (MalaysianDRG) Casemix system categorises patients based on healthcare costs, improving efficiency and resource allocation. 38–40 This system enhances provider payment measurement, healthcare service quality, equity and efficiency, and assists policymakers in allocating funds to hospitals. 24 41 The information from the MalaysianDRG is integrated into the executive information system, providing access to system outputs such as DRG, severity of illness, average cost per disease and Casemix Index. 38–40

Integration of Casemix within HIS

The integration of Casemix within HIS frameworks represents a paradigm shift in healthcare management, offering a unified platform for data-driven decision-making, performance monitoring and quality improvement initiatives. 42 In the USA, there is a need to evaluate existing HIS against advanced hardware and software. 42 As hospitals face public opposition due to rising medical expenses, governments are under pressure to manage healthcare costs more effectively. 42 Casemix-based reimbursement policies aim to compensate medical expenses based on Casemix rather than the number of services provided. 42 By consolidating clinical, administrative and financial data within a single system, Casemix-based systems become multifaceted and therefore require organisational restructuring and educational initiatives for successful implementation. 33 Strategies such as providing feedback to clinicians and integrating decentralised databases into HIS are crucial for ensuring data credibility and accuracy. 33 Transitioning from traditional medical record management to health information management requires careful planning and adjustments due to the lack of automation in the current HIS. 33

Theoretical and conceptual framework

Multiple frameworks are commonly used to evaluate technology systems’ acceptance and success attributes. Noteworthy frameworks include the technology acceptance model (TAM), the DeLone and McLean Information Systems Success Model (ISSM), the HOT-Fit Evaluation Framework and the Unified Theory of Acceptance and Use of Technology (UTAUT). The TAM is a widely used framework for assessing the acceptability and success of technology systems, particularly in HIS. 43–47 It suggests that user perceptions of ease of use, usefulness and intention to use significantly impact system usage. 43–47 The DeLone and McLean ISSM evaluates the effectiveness of information systems by examining relationships between system quality, information quality, user satisfaction, individual impact, organisational impact and overall system success. 48 49 The HOT-Fit Evaluation Framework, which evolved from the ISSM, evaluates the congruence of persons, organisations and technology within an information system, considering technological, organisational and human factors. 12 50 The UTAUT enhances the TAM by incorporating additional elements such as social influence, facilitating conditions and behavioural intention. 43 51 52

By integrating these frameworks within the context of Casemix implementation within THIS, the investigators aim to assess critical success factors and address barriers to adoption and acceptance, facilitating seamless integration and maximising the potential of healthcare modernisation efforts. Hence, the investigators opted to integrate the HOT-Fit and TAM frameworks as this study’s conceptual framework to achieve the research’s specific objectives, scope and contextual considerations (see figure 1). HOT-Fit offers a comprehensive framework for examining the alignment between human, organisational and technological factors, while TAM provides a focused lens on individual-level technology acceptance dynamics. 12 44–47 Based on the current study’s conceptual framework, the HOT-Fit framework covers technological constructs such as system, information and service quality, while the TAM framework covers human dimensions such as perceived ease of use, usefulness, intention to use and acceptance. The integration of these frameworks is crucial for achieving the study’s specific and general objectives, and the two frameworks were therefore deemed appropriate for this study. In contrast, the UTAUT was not considered suitable for the current investigation because it extends the already complex TAM with a broad scope and additional external variables, and the ISSM was not selected because of its simplicity. 43 51 52

Figure 1 Conceptual framework.

This current study aims to evaluate the critical success factors (CSFs) and doctors’ acceptance of Casemix implementation within the THIS environment, in order to better understand the issues experienced by MOH Malaysia facilities, fill a research gap on Casemix implementation and help shape plans for modernising healthcare. A comprehensive, multidimensional questionnaire was created to meet the study objectives, and this paper examines that instrument. The exploratory factor analysis (EFA) is instrumental in uncovering underlying factors within observed variables to ensure precision and robustness, while confirmatory factor analysis (CFA) was needed to verify the measurement model’s linkages and confirm that the theoretical model was valid, reliable and suitable for data collection, thereby yielding valuable insights. 53–56 Given its merits, the current study used CFA to evaluate the measurement model’s validity. After the validation processes, structural equation modelling (SEM) was employed to analyse how exogenous, mediating and endogenous constructs interrelate, and to estimate the parameters of a structural model for analysing direct, mediating and moderating effects in line with the study’s goals and hypotheses. While the technology evaluation frameworks offer crucial insights, it is essential to note that Casemix is designed to organise patient data and treatment costs rather than to analyse the acceptability and success of technology systems. Moreover, the study objectives for evaluating Casemix adoption in THIS can be met without a separate instrument for each system. Such an instrument can assist healthcare organisations and policymakers in understanding the CSFs facilitating the implementation and acceptance of the Casemix system, and guide the development of targeted strategies for seamless implementation, enhancing patient care, work efficiency and resource allocation. Therefore, a reliable and valid quantitative instrument is required to achieve these goals.

Methodology

Study design and ethical approval

Study design

This study employed a sequential explanatory mixed-methods design. Nevertheless, the present article focuses solely on the exploration and development of items and on the reliability and validation of the quantitative instrument. The data collection for the quantitative pilot study took place from 1 to 14 February 2023, the quantitative field study from 1 April to 30 June 2023, the qualitative pilot study on 15 September 2023 and the qualitative field study from 17 October 2023 to 4 January 2024. The quantitative phase used a cross-sectional study design to gather data over a specified duration. 53 57 58

Ethical approval

This study has obtained ethical approval from:

The Medical Research Ethics Committee of the Faculty of Medicine, Universiti Kebangsaan Malaysia (JEP-2022–777), see ( online supplemental file 1 ), and

The Medical Research Ethics Committee of the Ministry of Health Malaysia (NMRR ID-22–02621-DKX), see ( online supplemental file 2 ).


Study instrument

This study used a self-administered questionnaire to collect data on the CSFs and acceptance of Casemix in the THIS environment. The instrument was developed in Malay and English for better understanding by the respondents, given the geographical areas of the study and the fact that Malay is the national language of Malaysia. The questionnaire comprised 60 items divided into three sections, each with a limited number of constructs. Section 1a consists of 8 questions collecting demographic information such as age, gender, educational background and work experience in the MOH Malaysia and the current hospital. Section 1b assessed the comprehension/knowledge level of the Casemix system using 10 items. Section 2 represented the perceived critical success factors of Casemix implementation in the THIS context, consisting of 37 items within seven constructs: system quality (SY, 4 items), information quality (IQ, 5 items), service quality (SQ, 5 items), organisational factors (O, 9 items), perceived ease of use (PEOU, 5 items), perceived usefulness (PU, 4 items) and intention to use (ITU, 5 items). Section 3 encompasses the outcome of the study, the user acceptance (UA) construct, which contains 5 items.

The study incorporates and modifies existing scholarly works rooted in the Human Organisation Technology (HOT-Fit) and TAM frameworks for sections 2 and 3. These two sections were each evaluated using a 10-point interval Likert scale. The 10-point interval scale offers respondents a greater range of response possibilities that align with their precise evaluation of a question. 55 56 59 60 A score of 1 represents ‘strongly disagree’, while a score of 10 represents ‘strongly agree’. The constructs and components of the instrument were derived from previous research. 12 43 44 48 50 61–64 These items represented eight constructs: SY, IQ, SQ, ORG, PEOU, PU, ITU and UA (user acceptance).

The constructs described in sections 2 and 3 underwent initial validation, reliability testing and EFA using pilot data. CFA was then performed using field data. Details regarding the development, validation and reliability procedures of the instrument are provided in subsequent sections. To facilitate transparency and reproducibility, a blank copy of the measurement instrument developed and validated in this study has been included as a supplementary file (see online supplemental file 3 : Blank Copy of Quantitative Instrument).

Independent variables

Several constructs were examined in this study, as outlined in the conceptual framework in Subsection 1.6, comprising the technology, organisation and human dimensions.

Technological factors

Constructs such as system quality (SY), information quality (IQ) and service quality (SQ) constitute the technological factors. Addressing system quality issues is imperative for fostering user acceptance and realising system benefits. 43 Reliable and accurate systems with dependable functionality enhance user acceptance, while a user-friendly interface and seamless performance enhance user experience. Integration with existing systems promotes acceptability and interoperability. 43 44 Conversely, information quality, encompassing data security and privacy, is crucial in safeguarding patient data, bolstering user confidence and fostering system adoption. 65 Service quality encompasses the support and assistance provided during and after system implementation, with practical training, responsive helpdesk support and ongoing maintenance contributing to user satisfaction and system success. 51 66 67 Hence, these three constructs encompassing the technological dimensions were adapted from the HOT-Fit framework. 12 50

Organizational characteristics

Organisational dimensions, such as organisational structure and environment, can limit or facilitate the acceptance or implementation of technical advancements. 68 The elements of the organisational dimension are among the most commonly surveyed attributes in studies of IT adoption in organisations. 69 Previous research has identified relative benefit, centralisation, formalisation, top management support and perceived cost as essential organisational elements influencing any organisation’s decision to embrace current information systems technologies. According to Abdulrahman and Subramanian, management barriers are defined as a lack of efficient planning, a lack of trained people and limits linked to training courses. 70 The management, technological, ethical-legal and financial barriers were all integrated into the organisational factor category in this study. Previous research has found that technology adoption rates are related to preparedness and impediments to readiness. 71 Consistent with several other studies, senior leaders play a critical role in the use of information systems at the organisational level. 72 Direct involvement of senior executives in IS operations demonstrates the importance of IS and ensures their support and involvement in the overall performance of IS efforts in the organisation. 73 Organisational environment and structure can influence user acceptance of information technology, underscoring the importance of organisational improvement initiatives to enhance user acceptance. 74–77 Hence, this primary construct encompassing the organisational dimensions was adapted from the HOT-Fit framework. 12 50

Human factors

The TAM is a framework that consists of five fundamental elements: PEOU, PU, ITU, actual system use and external variables. 78–81 PEOU is a subjective evaluation of a technology’s ease of use, influenced by usability, training and user assistance. 78–81 PU quantifies the level of usefulness attributed to a technology, influenced by factors such as usefulness and compatibility with user needs and responsibilities. 78–81 Intention to use (ITU) reflects a user’s behavioural intention to adopt the technology, while external factors, such as organisational regulations, access and availability, can also influence the interactions within the model. 78–81 External variables, such as individual variances, cultural influences and supportive environments, can either amplify or reduce the impact of perceived ease of use and usefulness on behavioural intention and actual use. 78–81 The TAM has been a crucial paradigm for understanding technology acceptance and has significantly impacted research in information systems and technology adoption. The HOT-Fit evaluation framework, which focuses on system use and user satisfaction, is also suitable for this study. 12 50 These two constructs are interconnected with PEOU and PU, as delineated by the TAM framework. 78–81 For successful implementation of an information system, medical doctors must perceive it as easy to use (PEOU), supported by adequate training, user-friendly interfaces and intuitive system design. 78–81 Healthcare providers should also perceive the system as useful (PU) to ensure successful implementation, highlighting its benefits such as improved efficiency, quality of care and cost control. 78–81

Dependent variable

The only dependent variable in this study is acceptance, which is adapted from the TAM. 44 45 82 Proctor et al present a pragmatic taxonomy of eight implementation outcomes, including acceptability/acceptance, adoption, appropriateness, feasibility, fidelity, implementation cost, penetration and sustainability. 64 Acceptability is a crucial aspect of implementation, referring to the acceptance of a specific intervention, practice, technology or service within a specific care setting. 64 It can be measured from the perspective of various stakeholders, such as administrators, payers, providers and consumers. 64 Ratings of acceptability are assumed to be dynamic and may differ during pre-implementation and throughout various stages of implementation. In similar literature, Proctor et al delineated examples of measuring provider and patient acceptability/acceptance, including case managers’ acceptance of evidence-based procedures in a child welfare system and patients’ acceptance of alcohol screening in an emergency department. 64 The terms acceptability and acceptance are used interchangeably to describe implementation outcomes. Therefore, in this study, the researchers explore the acceptance of the Casemix system in the MOH’s THIS facilities.

Patients and public involvement

Participants in this study were medical doctors and this study did not involve any patients or the public. Hence, there was no patient or public involvement in this study.

Initial validation processes

The initial validation procedures were conducted to establish the content, criteria and face validity/pre-test of the instrument for the field study.

Content validity

Content validity is significant when developing new measurement tools because it links abstract ideas with tangible and measurable indicators. 83 This involves two main steps: identifying the all-inclusive domain of the relevant content and developing items that correspond to this domain. 83 The Content Validity Index (CVI) is often used to measure this validity. 84–86 Recent studies have demonstrated the content validity of assessment tools using the CVI. 87–90 Recommendations on the best method for calculating the CVI suggest that the number of experts reviewing an instrument should range from 2 to 20. 84–86 91–93 For the current study, two experts from the Hospital Financing (Casemix Subunit) at MOH Malaysia were selected. This is consistent with the number of experts recommended in the literature, as shown in online supplemental file 4A. 84 There are two types of CVI: I-CVI for individual items and S-CVI for overall scales. 84–86 91 92 S-CVI can be calculated by averaging the I-CVI scores (S-CVI/Ave) or by the proportion of items rated as relevant by all experts (S-CVI/UA). 84–86 91 92 Before calculating the CVI, relevance ratings are converted to binary scores: a rating was re-coded as 1 (scale of 3 or 4) or 0 (scale of 1 or 2), as indicated in online supplemental file 4B. Online supplemental file 4C shows the two experts’ item-scale relevance evaluations to illustrate the CVI calculation. In this study, the experts validated the questionnaire contents, awarding scores of 3 or 4 for all items, resulting in S-CVI/Ave and S-CVI/UA scores of 1.00. In conclusion, a thorough methodological approach to content validation, based on current data and best practices, is essential to confirm the overall validity of an evaluation.
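To make the CVI arithmetic above concrete, the following is a minimal illustrative sketch rather than the authors' actual worksheet; the three items, the two experts' ratings and the use of Python (instead of the spreadsheet shown in online supplemental file 4) are assumptions for illustration only.

```python
# Illustrative CVI calculation (hypothetical ratings, not the study's data).
import numpy as np

# Expert relevance ratings on the 4-point scale (rows = items, columns = experts).
ratings = np.array([
    [4, 3],
    [3, 4],
    [4, 4],
])

# Re-code: a rating of 3 or 4 -> relevant (1); 1 or 2 -> not relevant (0).
relevant = (ratings >= 3).astype(int)

i_cvi = relevant.mean(axis=1)                      # I-CVI per item
s_cvi_ave = i_cvi.mean()                           # S-CVI/Ave: mean of the I-CVIs
s_cvi_ua = (relevant.min(axis=1) == 1).mean()      # S-CVI/UA: universal agreement

print(i_cvi, round(s_cvi_ave, 2), round(s_cvi_ua, 2))
```

With every rating at 3 or 4, as reported in the study, both S-CVI/Ave and S-CVI/UA evaluate to 1.00.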

Criterion validity

Criterion validity refers to the degree of correlation between a measure and other established measures of the same construct. 62 88 89 94 95 An academic statistics expert and an expert in questionnaire development and validation procedures reviewed the criterion validity; this can be reviewed in online supplemental file 5. Subsequently, a certified translator precisely performed a back-to-back translation of the instrument from English to Malay.

Face validity

A face validity assessment was undertaken to evaluate the questionnaire’s consistency of responses, clarity, comprehensibility, ambiguity and overall design, and the researchers acknowledged and resolved the concerns raised before commencing the pilot study and fieldwork. 62 90 96 97 Following the validation process, 11 respondents were purposefully selected for face validity, also known as pre-testing, to fulfil the prerequisite for face validation. They had to meet criteria similar to those stipulated for participants in the field study, and they were subsequently excluded from participation in the quantitative field study. The study population is described further in Subsection 2.6.2. The assessment was conducted through the online Google Form version of the questionnaire. 97 The face validity result has been uploaded as online supplemental file 6.

Quantitative pilot test and EFA

The pilot study was conducted at a Federal Territory hospital in Malaysia, Hospital W. The pilot study population possessed characteristics similar to the participants involved in the subsequent quantitative field study, and these respondents were excluded from participation in the quantitative field study. This study required a minimum of 100 samples to ensure valid results for the EFA. 97 98 Since the current pilot study uses EFA, the minimum sample size of 100 is supported by several studies and texts on research and validation procedures. 54–56 97 99 Therefore, to account for a projected drop-out rate of 20%, the minimum sample size for this preliminary pilot study was determined to be 125 medical doctors. 100 The research was conducted without participant or public involvement in the design, conduct, reporting or dissemination strategies. The data collection method was similar to that of the field study, employing an online Google Form questionnaire. Participants were asked to scan a Google Form link or QR code to access the information sheet, consent form and online questionnaire. Each participant was notified that their information would be kept private, that their anonymity would be maintained and their data used solely for the study, and that they could withdraw at any time.

The pilot study used EFA to measure data from a collection of hidden (latent) concepts. EFA is a method that generates more accurate results when each shared component is represented by many measured variables, whether exogenous or endogenous constructs. 54–56 97 98 101–103 The collected data were used to identify and quantify the dimensionality of the items that assess each construct. 53–56 59 60 104 EFA is essential to determine whether items in a construct produce distinct dimensions from those found in previous studies. 53–56 59 60 104 Factors’ dimensionality may change as they are transported from other domains to a new research topic, and fluctuations in the population’s cultural heritage, socioeconomic status and the passage of time might affect dimensionality. The EFA methodology uses principal component analysis (PCA) to reduce the amount of data, although PCA does not efficiently discern between common and unique variance. 97 98 PCA is indicated when there is no known theoretical framework or model, and it is used to create the initial solutions in EFA. Four requirements of PCA included: (1) components with eigenvalues greater than one, (2) factor loadings greater than 0.60 for practical relevance, (3) no item cross-loadings greater than 0.50 and (4) each factor having at least three items to be retained. 97 98 The data’s eligibility for factor analysis was determined using the Kaiser-Meyer-Olkin measure of sampling adequacy (KMO) of >0.6 and Bartlett’s test of sphericity. 55 56 105–107 Bartlett’s test supports factor analysis when the result is significant (p<0.05; in this study, p<0.001). 53–56 107 The scree plot also determined the best number of constructs to keep. 53–56
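For readers who want to reproduce this kind of EFA workflow, the sketch below illustrates the same steps (KMO, Bartlett's test, PCA extraction with varimax rotation and a 0.60 loading screen) using the open-source factor_analyzer package in Python rather than the SPSS software used by the authors; the file name pilot.csv and the one-column-per-item layout are assumptions.

```python
# Sketch of the pilot-study EFA steps using the factor_analyzer package.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = pd.read_csv("pilot.csv")   # one column per questionnaire item (assumed layout)

# Sampling adequacy and sphericity checks: KMO > 0.6 and Bartlett p < 0.05 are required.
chi_square, p_value = calculate_bartlett_sphericity(items)
_, kmo_total = calculate_kmo(items)
print(f"KMO = {kmo_total:.3f}, Bartlett chi2 = {chi_square:.1f}, p = {p_value:.4f}")

# Principal component extraction with varimax rotation (nine components were
# retained in the pilot study, each with an eigenvalue above 1).
fa = FactorAnalyzer(n_factors=9, method="principal", rotation="varimax")
fa.fit(items)

eigenvalues, _ = fa.get_eigenvalues()
print("Components with eigenvalue > 1:", int((eigenvalues > 1).sum()))

# Flag items whose highest loading falls below the 0.60 retention threshold.
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
weak_items = loadings.abs().max(axis=1) < 0.60
print("Items below 0.60:", list(loadings.index[weak_items]))
```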

Quantitative field study

Study location.

The present study gathered data from five hospitals situated in various Malaysian zones—South, North, West, East and East Malaysia—that are outfitted with the Total Hospital Information System (THIS) and Casemix system. The study used cluster sampling to select study sites in Malaysia, dividing the country into five distinct clusters. Five hospitals that had successfully implemented Casemix for at least 3 years were chosen to represent different regions of Malaysia. Hospital N was selected for the northern region, Hospital E for the eastern region, Hospital S for the southern region and Hospital W for the central/western region. Hospital EM was chosen for East Malaysia. Cluster sampling is suitable when the research encompasses a vast geographical expanse.

Target population for the study

The target population for this study was medical doctors by profession working in hospitals under MOH in 2023. The study collectively obtained a sampling frame of 3580 medical doctors by profession, encompassing hospital directors, deputy directors (medical division), consultants/specialists, medical officers and house officers from the five selected hospitals. These doctors should fulfil the inclusion and exclusion criteria of this study as follows:

Inclusion criteria

Permanent or contract-of-service medical doctors posted to the current participating hospital.

Have working experience of at least 3 months in the current participating hospital.

Agree to participate in the study.

Exclusion criteria

Medical doctors on attachment.

Refuse to participate in the study.

The study populations for the face validation/pre-test and pilot test had characteristics similar to those of the study population in the field study. The pre-test and pilot-test samples were also excluded from the field study sample. Participants were given surveys to complete at their own pace, without fear or pressure.

Sample size and sampling method

The target population was selected using proportionate stratified random sampling, dividing the total population into homogeneous groups. 16 108–110 Proportionate stratified random sampling is a probability sampling method that includes separating the entire population into similar groups (strata) to conduct the sampling process.

The authors were concerned about the sample size needed for CFA validation of the measurement model; however, current studies offer no consensus on the appropriate sample size. For models with few indicators, a minimum sample size of 100–150 respondents is often needed, 111–113 whereas precise analysis for CFA may require 250–500 respondents. 114 115 Some authors offer the following suggestions for the sample size requirement: (a) a sample-size-to-parameter ratio of 5 or 10, (b) ten cases per observation/indicator and (c) 100 cases/observations per group for multigroup modelling. 116–118 In conclusion, because the number of indicators for the latent variables is large, the researchers opted to employ five times the number of indicators in the questionnaire. 116 119 The final questionnaire contains 59 items, requiring a minimum sample size of 295. An additional 20% drop-out rate was anticipated, and the sample size was adjusted using the formula n' = n/(1 - d), where n' is the adjusted sample size, n is the minimum required sample size and d is the anticipated drop-out rate, yielding a minimum sample size of 369. 100 This is also corroborated by other research: because the conceptual framework in this study consists of eight constructs, each with at least four items, the required sample size is 300; with an additional 20% expected drop-out rate, the calculated sample size was 375. 56 97 100 102 As a result, the researchers opted to distribute questionnaires to 375 participants using proportionate stratified random sampling according to their professional roles, as suggested. 56 97 102 116
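The drop-out adjustment above can be verified with a few lines of arithmetic; this is a worked illustration of the formula n' = n/(1 - d) using the figures reported in the text, not part of the authors' procedure.

```python
# Worked illustration of the drop-out adjustment n' = n / (1 - d) described above.
n_items = 59                 # indicators in the final questionnaire
n_min = 5 * n_items          # five respondents per indicator -> 295
dropout = 0.20               # anticipated drop-out rate

n_adjusted = n_min / (1 - dropout)
print(n_min, round(n_adjusted))   # 295 and approximately 369
```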

Data collection methods

The data collection method for the quantitative field study was similar to the techniques used in the quantitative pilot study, as elaborated in Subsection 2.4.4. However, the link to the participant information sheet (PIS) and informed consent form was included on the first page of the questionnaire ( https://bit.ly/3F8IF2e ). The participant information sheet and informed consent form are attached as online supplemental files 7 and 8, respectively. As in the quantitative pilot study, respondents were free to withdraw from the survey midway at any point. Participants were assured that their information would be kept confidential and that their anonymity would be strictly protected during the field study. Participants who wished to participate had to first consent and complete all survey questions, and they were instructed to contact the lead investigator with any questions. Participants had up to 2 weeks to complete and submit the online questionnaire. All survey information was linked to a research identification number; for example, study identifications 001 to 375 were used on the subject data sheets instead of the subject’s name. The appropriate senior management and Casemix System Coordinators (CSCs), the department’s Casemix Coordinator and Heads of Department were contacted 3 days before the data-gathering session concluded. All measures were taken to safeguard participants’ privacy and anonymity.

Data analysis using Confirmatory Factor Analysis (CFA)

Once the EFA technique was completed, the resulting constructs and emerging components of the revised conceptual framework were used in the field study. Hair et al and Awang et al describe two distinct models for the field study: the measurement model used in the CFA technique and the structural model used to estimate paths using SEM. 54–56 97 99 This study paradigm has the features of a confirmatory form of research, with a focus on behavioural components. This type of SEM is known as covariance-based SEM (CB-SEM) and reflects theory testing or theory-driven research that integrates existing theories to replicate an established theory in a new domain, confirming a pre-specified relationship. 54–56 97 99

The SPSS Analysis of Moment Structures (AMOS) V.24.0 software was used in the CFA to evaluate the unidimensionality, validity and reliability of the measurement model. 53 54 56 The normality of the instrument’s data was also assessed within the CFA procedure. 53 54 56 There are two ways to validate measurement models: pooled and individual CFA. 54–56 120 121 Pooled confirmatory factor analysis’ (pooled-CFA) higher degree of freedom enables model identification even when some constructs have fewer than four components. 54–56 120 121 Missing data were omitted from the analysis. To ensure unidimensionality, the permissible factor loading for each latent construct is calculated, and items that cannot fit into the measurement model due to low factor loading are excluded. 53 55 56 97 122–125 The cut-off value for acceptable factor loading varies depending on the research goal; this study used a threshold value of 0.5 to minimise item deletion. 53 55 56 97 121 122 126 Convergent validity is assessed by calculating the average variance extracted (AVE) for each construct. 53 55 56 97 111 122 Meanwhile, composite reliability (CR) assesses the internal consistency of a construct’s underlying indicators in structural equation modelling. 53 55 56 97 122 A latent construct’s CR must be at least 0.6 to achieve composite reliability. 53 55 56 97 122
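For readers unfamiliar with how AVE and CR are derived from standardised loadings, the following is a small illustrative sketch; the loading values are made up and the calculation is shown in Python rather than in the AMOS software used by the authors.

```python
# Illustrative AVE and composite reliability (CR) from standardised factor loadings.
def ave_and_cr(loadings):
    squared = [l ** 2 for l in loadings]
    ave = sum(squared) / len(loadings)                        # average variance extracted
    error = [1 - s for s in squared]                          # indicator error variances
    cr = sum(loadings) ** 2 / (sum(loadings) ** 2 + sum(error))
    return ave, cr

ave, cr = ave_and_cr([0.78, 0.82, 0.85, 0.80])                # made-up loadings
print(f"AVE = {ave:.3f} (>= 0.5 required), CR = {cr:.3f} (>= 0.6 required)")
```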

Several fitness indicators have been reported among scholars. Some recommendations are to report fit indices covering absolute fit (chi-squared goodness-of-fit (χ2) and the standardised root mean square residual (SRMR)), parsimony-corrected fit (the root mean square error of approximation (RMSEA)) and comparative fit (the Comparative Fit Index (CFI) and the Tucker-Lewis Index (TLI)). 54–56 99 123 124 126–129 They advised using at least one index from each of the three fitness categories: absolute fit, incremental fit and parsimonious fit. 54–56 123 124 126–129 A model fit was indicated using a set of cut-off values: RMSEA values of 0.05 to 0.10, CFI >0.90 and Chisq/df <5.00, which would imply a reasonable fit. 53–56 126 129–131
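A pooled CFA of the kind described in this section could be specified along the following lines. This is a hedged sketch using the open-source semopy package rather than the SPSS AMOS software used in the study; the item names (SY1, IQ1, etc), the abbreviated model string and the file name field.csv are assumptions for illustration.

```python
# Sketch of a CFA measurement model using semopy (the study used SPSS AMOS).
import pandas as pd
from semopy import Model, calc_stats

field = pd.read_csv("field.csv")

model_desc = (
    "SY =~ SY1 + SY2 + SY3 + SY4\n"
    "IQ =~ IQ1 + IQ2 + IQ3 + IQ4 + IQ5\n"
    "SQ =~ SQ1 + SQ2 + SQ3 + SQ4 + SQ5\n"
    # the remaining constructs (PEOU, PU, ITU, UA and the second-order ORG
    # construct measured by STR and ENV) would be listed in the same way
)

cfa = Model(model_desc)
cfa.fit(field)

# Inspect fit indices; the text requires RMSEA < 0.08, CFI > 0.90 and chi2/df < 5.
print(calc_stats(cfa).T)
```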

Findings for the pilot test through exploratory factor analysis

Out of the required minimum sample size of 125, a total of 106 participants took part in the quantitative pilot study, resulting in an 84.8% response rate. According to Hair et al and Awang et al, at least 100 samples are needed to conduct an EFA. 54–56 97 However, considering a potential drop-out rate of 20%, the minimum required sample size for this pilot study was 125. The researchers performed an EFA to find the primary dimensions from a wide set of latent constructs represented by 42 items before conducting the CFA. EFA uses PCA as the extraction method to reduce data and create a hypothesis or model without pre-existing preconceptions about the variables’ quantity or nature. 54–56 97 132 The EFA deemed indicators above 0.60 significant, and indicators loading on the same component were combined to match the measurement model. 97 The measurement model (for CFA) and structural model (for path estimation) of the SEM used the EFA results. 54–56 97 99 EFA was used to evaluate and appraise the items measuring each construct, while CFA was used to validate the measurement. 12 43 44 50 61 EFA and CFA used the pilot and field study data, respectively. EFA is a method used to select factors for retention or removal, using PCA and varimax rotation, a popular orthogonal factor rotation approach that clarifies factor analysis. 53 55 56 97 122 The extraction technique reduced the organisational factors (O) from nine to eight items: one item, ‘Organisational competency to provide the resources for the implementation of the Casemix system in THIS setting,’ did not reach the factor loading of 0.6 and was therefore removed 55 97 (see table 1).

Table 1 Factor loading of EFA with PCA and varimax rotation

To prepare for the next stage, the researchers reorganised the items into their respective components and began data collection for the field study. The EFA results also reveal that the two components of the organisational characteristics (O) construct were later named organisational structure (STR) and organisational environment (ENV). 53 55 56 97 122 The instrument, comprising 41 items, was then used in the field study and analysed with Cronbach’s alpha to ensure its internal reliability, 53–56 97 133 see table 2 below.

Table 2 The number of items for each construct before and after EFA and Cronbach’s alpha

Consolidating correlated variables was EFA’s primary goal. EFA established eight constructs from the pilot study data, in line with the researchers’ conceptual framework (see figure 1). 53–55 The overall results of the KMO and Bartlett’s sphericity tests for all constructs are presented in table 3. The KMO value was 0.859, which is larger than 0.6. Bartlett’s test of sphericity yielded a statistically significant result (p<0.001, ie, p<0.05). 53 55 56 97 122 Therefore, it was appropriate to proceed with further analysis.

Table 3 Results of the KMO and Bartlett’s test of sphericity

The amount of variance accounted for, referred to as total variance explained (TVE), is shown in table 1 of online supplemental file 9. 53–56 97 Each component had an eigenvalue larger than 1, and the TVE was 84.07%, exceeding 60%. 53–56 97 A TVE below 60% would indicate that the existing items are inadequate for accurately assessing the constructs and that the researcher should contemplate incorporating more items; however, this did not occur in the present study.

The EFA approach also includes the scree plot, from which the researcher can ascertain the number of components by observing the distinct slopes. 53–56 97 The scree plot exhibits nine distinct slopes, as shown in figure 1 of online supplemental file 9. Hence, the EFA identified a total of nine components.

Cronbach’s alpha was calculated to measure each construct’s internal reliability. Internal reliability assesses how well the selected items measure the same construct. 53–56 97 133 All constructs exceeded a Cronbach’s alpha of 0.7; hence, the instrument was reliable for use in the field study.
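As a generic illustration of the internal-reliability check above (not the authors' SPSS output), Cronbach's alpha for one construct can be computed as follows; the item names and response values are made up.

```python
# Cronbach's alpha for a set of items measuring one construct.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Illustrative responses for a four-item construct (made-up data).
demo = pd.DataFrame({
    "ITU1": [8, 9, 7, 10, 6],
    "ITU2": [7, 9, 8, 10, 5],
    "ITU3": [8, 8, 7, 9, 6],
    "ITU4": [9, 9, 8, 10, 7],
})
print(round(cronbach_alpha(demo), 3))   # >= 0.70 is treated as acceptable here
```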

Findings for the field study through the confirmatory factor analysis

The ultimate measurement tool for the field study comprised 41 items from the EFA procedure. To adequately address the intricacy of the quantitative instrument for the field study, the researchers determined that a minimum of 300 samples was necessary to implement CFA. 97 An additional 20% drop-out rate resulted in a minimum sample size of 375 individuals for the field study. Out of this sample, 343 participants responded, indicating a response rate of 91.5%. 100 No missing data were reported.

CFA was used to validate the factor loadings and the measurement model in this study; the researcher tests a theory or model using CFA. Unlike EFA, CFA is a form of structural equation modelling that makes assumptions and expectations about the number of factors and about which factor theories or models best fit prior theory. 53–56 97 Whereas EFA relied mainly on the factor (outer) loadings, CFA considers both factor loadings and fitness indices, and researchers must confirm that both meet the required standards. A properly specified measurement model thus helps researchers interpret their data.

Validity, unidimensionality and reliability were necessary for all latent construct assessment models. 53 55 56 97 122 The latent construct measurement model needed convergent, construct and discriminant validity. 53 55 56 97 122 AVE assesses convergent validity, while measurement model fitness indicators determine construct validity. 54–56 On the other hand, composite reliability (CR) was used to calculate instrument reliability since it was better than Cronbach’s alpha. 54–56 133

Figure 2 shows that Pooled-CFA validated all latent constructs in the measurement model simultaneously. These constructs were aggregated using double-headed arrows to execute a Pooled-CFA. Pooled-CFA’s increased degree of freedom allows model identification even when some constructs have fewer than four components. 54–56 Pooled-CFA was employed in this investigation since only one construct has two components.

Figure 2 Result from the pooled-CFA procedure.

Uni-dimensionality

Unidimensionality means that a set of variables can be explained by a single construct. 7–9 It is achieved when all construct-specific measuring items have acceptable factor loadings. 54–56 Items with low factor loadings were removed from the measurement model until the fit indices were met. 53–56 97 134 Table 4 summarises the retained items, all with factor loadings >0.6. 54–56

Table 4 Factor loading of all items, composite reliability (CR), average variance extracted (AVE) and normality testing

Convergent validity

Convergent validity refers to a group of indicators measuring the same construct. 54–56 97 135 It assesses the strength of correlations between items that are hypothesised to measure the same latent construct. 56 97 The average variance extracted (AVE) statistic can be used to verify the convergent validity of a construct; if the construct’s AVE is more than 0.5, it possesses convergent validity. 53 56 97 136 Table 4 shows that the AVE for all constructs was more than 0.5. The organisational characteristics/factors (ORG) construct showed the highest AVE (0.857), while the environment component showed the lowest (0.699). The model is, therefore, convergently valid.

Construct validity

When all model fitness indices met the criteria, construct validity was attained. 55 56 97 Construct validity was established using absolute, incremental and parsimonious fit indices. 55 56 97 Some researchers recommend using one fitness index from each model fit category. 55 56 97 This study employed RMSEA, CFI and the normed chi-square (χ2/df) as its main indicators. According to table 5, this instrument met all three fitness indices: (1) the RMSEA value was below the threshold of 0.08 (0.054), confirming the absolute fit index; (2) the instrument achieved the incremental fit index category by obtaining a CFI value above 0.90; and (3) the parsimonious fit index, measured using Chisq/df, yielded a value of 2.014, which is below the accepted value of 3.0. 55 56 97 This study thus proved the instrument’s construct validity.

Table 5 Fitness index summary

Discriminant validity

The survey’s discriminant validity was tested to ensure that no redundant constructs were present in the model. Discriminant validity is achieved when the square root of the average variance extracted (AVE) for each construct is greater than its correlation with every other construct. 55 56 136 Table 6 summarises the discriminant validity index, which showed that all constructs met the threshold. 55 56 136 The diagonal values (bold font) in this table were greater than all other values in their row and column, indicating discriminant validity for all constructs. 55 56 136

Table 6 Discriminant validity index
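The check summarised in table 6 follows the Fornell-Larcker logic described above; the sketch below illustrates it with made-up AVE and correlation values for three constructs, not the study's estimates.

```python
# Illustrative Fornell-Larcker discriminant validity check (made-up numbers).
import numpy as np
import pandas as pd

constructs = ["SY", "IQ", "SQ"]
ave = pd.Series([0.72, 0.81, 0.75], index=constructs)        # hypothetical AVEs
corr = pd.DataFrame(                                          # hypothetical inter-construct correlations
    [[1.00, 0.55, 0.48],
     [0.55, 1.00, 0.52],
     [0.48, 0.52, 1.00]],
    index=constructs, columns=constructs,
)

table = corr.copy()
for c in constructs:
    table.loc[c, c] = np.sqrt(ave[c])     # diagonal holds the square root of AVE

# Discriminant validity holds when each diagonal value exceeds every other value
# in its row and column (the bold diagonal described in the text).
valid = all(table.loc[c, c] >= table.loc[c].drop(c).max() for c in constructs)
print(table.round(3))
print("Discriminant validity:", valid)
```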

Composite reliability

Composite reliability (CR) was used to estimate model reliability. 55 56 97 A CR of 0.6 or above is considered acceptable. 55 56 97 Table 4 above shows that the instrument’s composite reliability exceeded 0.6 for all constructs. The environment component had the lowest CR (0.903), while the information quality construct had the highest (0.954). Therefore, this instrument’s composite reliability was accomplished.

Normality assessment

The distributional normality of each item evaluating the constructs was assessed. All skewness values must fall within the acceptable range; skewness between −1.5 and 1.5 is considered acceptable. 56 97 All items’ skewness values fell between −1.5 and 1.5, indicating a normal distribution. 56 97 The instrument’s data distribution therefore met the normality condition, as shown in table 4.
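The item-level skewness screen above can be reproduced with a short script; this is an illustrative sketch in which the file name field.csv and the column layout are assumed rather than taken from the study.

```python
# Item-level skewness screen against the +/-1.5 band described in the text.
import pandas as pd

field = pd.read_csv("field.csv")          # item-level responses (assumed file name)
skewness = field.skew(numeric_only=True)

outside = skewness[(skewness < -1.5) | (skewness > 1.5)]
print(skewness.round(2))
print("Items outside the +/-1.5 band:", list(outside.index))
```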

Discussion

This study focused on redeveloping and validating an instrument to gauge medical doctors’ intention to use and acceptance of the Casemix system within the Total Hospital Information System (THIS) context. The EFA and CFA indicated that the instrument was well designed and validated for assessing medical practitioners’ acceptance of the Casemix system in the THIS setting. 55 56 97 The acceptance of the Casemix system among medical doctors in hospital information systems was found to be influenced by various factors, including system and service quality, perceived ease of use, usefulness, relevance to clinical practice, training and good organisational support, impact on efficiency and productivity, and confidence in information quality involving data accuracy and security. Healthcare organisations must address these components to gain physician acceptance. 43 44 137 They can then optimise Casemix system use, improving patient care and results. 137

Principal findings

Findings of exploratory factor analysis (EFA)

The pilot test data was analysed using EFA, which helps researchers understand complex datasets and discover observed variable correlations. 55 56 97 EFA reduces variable dimensions by identifying common patterns, shaping fundamental factors that influence observable variables and grouping related variables. 122 126 138 It simplifies model design by computing factor loadings, which indicate the intensity and direction of factor-observable variable interactions. EFA also finds underlying components in a dataset, while CFA analyses and confirms an EFA-proposed factor structure. 55 56 97

All constructs underwent KMO and Bartlett’s sphericity tests, with all constructs having KMO values over 0.6. 55 56 105–107 The scree plot, part of EFA, was used to count components and identified nine components across the 42 items. 55 56 105–107 The study found that one construct now has two parts, mainly due to demographic changes, particularly socioeconomic status and education. Component 1 explained 14.115% of the construct variance, while component 9 explained 6.610%. All constructs together had a total variance explained (TVE) of 84.07%, exceeding the minimum threshold of 60%. 55 56 60 112 129

The EFA discovered nine components, including O1–O9 for organisational factors. 43 45 50 139 41 of the 42 items had factor loadings above 0.6, requiring item O1 to be eliminated. 53 55 56 97 122 Only the organisational factors (O) construct had its items reduced, from nine to eight, following extraction. The remaining seven constructs each yielded a single component with no additional components, resembling the organisational constructs of the HOT-Fit and TAM frameworks.

The study stresses tool dependability and internal consistency, using markers such as Cronbach’s alpha (α), person reliability, person measure and valid responses. 133 140 A Cronbach’s alpha coefficient of 0.7 or above is considered acceptable in social science and other studies. 53 138 141 142 Internal reliability is measured by how well the selected items measure the same idea. 53–56 97 98 133 143 The researcher reordered the questionnaire items for the field investigation, and CFA authenticated and confirmed all eight constructs on the field data, as elaborated further in Subsection 4.1.2.

Findings of Confirmatory Factor Analysis (CFA)

Once the pilot data had been assessed and the EFA completed, the final questionnaire was used in the quantitative field study. Another procedure, CFA, was then conducted to validate the questionnaire based on the field study data. The CFA validated the instrument’s convergent, construct and discriminant validity; unidimensionality, composite reliability and normality evaluations are also needed to establish whether the instrument’s items are valid. 53–56 97 Therefore, the findings of this study demonstrate that the quantitative instrument has been validated and proven reliable for assessing medical practitioners’ intention to use and accept the Casemix system within the context of THIS. Using EFA and CFA is imperative for ensuring the instrument’s validity, reliability and trustworthiness. 53–56 97

Through EFA, the organisational factors (O) construct emerged as two components. The construct was renamed organisational characteristics (ORG) in the measurement model, and the newly emerged components were named organisational structure (STR) and organisational environment (ENV). Measurement models refer to the implicit or explicit models that relate the latent variable to its indicators. 55 56 97 The organisational characteristics (ORG) construct is assessed as a second-order construct because of these emerged components. When dealing with a complex framework, researchers can choose to conduct CFA individually for each second-order construct and then follow with a pooled-CFA, use item parcelling, or employ pooled-CFA directly. 55 56 The use of pooled-CFA is beneficial because of its improved efficiency, effectiveness and ability to address identification difficulties. 55 56 However, although there are many constructs in this study, the measurement model includes only one second-order construct, the ORG construct with its two emerged components. The other seven constructs are first-order constructs, each consisting of a maximum of five items. Therefore, a direct pooled-CFA was employed. 55 56

This study uses CFA to validate the factor loadings and the measurement model within a theory. 53–56 97 CFA is a form of structural equation modelling that makes assumptions and expectations about the number of factors and about which factor theories or models best fit prior theory. 53–56 97 For example, Baharum et al used CFA in several studies to measure success factors in newly graduated nurses’ adaptation and in validation procedures. 129 144 145 Likewise, CFA has been used to test financial literacy indicators and measurement models, ensuring that a proper measurement model helps researchers interpret their data, as elaborated in a few studies. 146–148

Validity, unidimensionality and reliability were necessary for all latent construct assessment models. 53 55 56 97 122 The latent construct measurement model needed convergent, construct and discriminant validity. 53 55 56 97 122 Convergent validity is assessed using the average variance extracted (AVE) statistic, while construct validity is determined by the measurement model fitness indicators. 54–56 Composite reliability (CR) was used to calculate instrument reliability since it is considered superior to Cronbach’s alpha. 54–56 133

Unidimensionality means that a set of variables can be explained by a single construct. 7–9 It is achieved when all construct-specific measuring items have acceptable factor loadings. 54–56 Convergent validity refers to a group of indicators that are considered to measure a construct. 54–56 97 135 Convergent validity is achieved when the construct’s AVE is more than 0.5; the highest AVE across all constructs was 0.857. 53 56 97 136 Normality assessment was conducted on each item evaluating the constructs, with skewness values within the acceptable range (−1.5 to 1.5). 56 97 The instrument’s data distribution therefore met the normality condition.

Construct validity is attained when all model fitness indices meet the criteria, using absolute, incremental and parsimonious fit indices. 55 56 97 The instrument met all three fitness indices, confirming the absolute fit index with RMSEA=0.054 (aim<0.1), achieving the incremental fit index category by obtaining a CFI value above 0.90 and yielding a parsimonious fit index of 2.014 (aim<5.0). 55 56 97

Discriminant validity was tested to ensure no redundant constructs were found in the model. 55 56 136 The model obtained discriminant validity since each construct’s square root of average variance extracted (AVE) is bigger than its correlation value with other constructs. 55 56 136 The summary discriminant validity index showed all constructs met discriminant validity.

Composite reliability (CR) was used to calculate model reliability, and a CR of 0.6 or above is acceptable. 55 56 97 As shown in table 4, the instrument’s composite reliability exceeded 0.6 for all constructs: the environment component (ENV) had the lowest CR (0.903), while information quality had the highest (0.954). 55 56 136 Thus, this instrument’s composite reliability was achieved.

Therefore, all necessary procedures to determine validity, reliability and normality were conducted, and no items were excluded. As a result, the total number of items remained at 41. Construct, convergent and discriminant validity and composite reliability have all been attained, and all items satisfied the normality criteria.

Strengths and weaknesses of the study

There are various ways in which this study could benefit the medical community and policymakers. 149 150 The research assesses the important success elements that affect physicians’ adoption of the Casemix system in hospitals that have a THIS. Policymakers and hospital administrators may find it easier to pinpoint the critical elements influencing the Casemix system’s effective deployment with the aid of the study’s findings. 151 The study may also help policymakers understand the significance of ongoing clinician support and acceptance, top management leadership and support, and a committed team of case managers, nurses and paramedical professionals for successfully implementing clinical pathway/case management programmes. 151 152 Policymakers can potentially use the findings to inform admissions decisions, thereby increasing openness in clinical practice. 152–154

Strengths and limitations exist in this research. One of the strengths of the study was that it employed a sequential explanatory mixed-method approach to investigate the CSFs and acceptance of the Casemix system among medical practitioners in THIS. 58 155 156 The findings revealed that there might be unnoticed CSFs in the quantitative phase, suggesting the need for a qualitative method to identify more CSFs, perceptions and challenges/barriers. Quantitative data support hypothesised associations, but qualitative data provide in-depth data to supplement quantitative conclusions. 157 The mixed-method approach is expected to improve research design and yield more valid results.

Additionally, another strength of this study is that it uses a strict methodological approach to instrument development and validation. It uses both EFA with pilot test data and CFA using field data, which makes the instrument used for data collection more reliable and valid. Many statistical tests were used to make sure the instrument worked well and the analysis was accurate. These included the KMO measure, Bartlett’s test of sphericity, systematic deletion of items based on factor loadings, Cronbach’s alpha and different validity tests such as unidimensionality, construct validity, convergent validity and discriminant validity. 55 56 105–107

Although the study had a large sample size, it was only conducted in five selected hospitals in Malaysia; therefore, the findings may not accurately represent all THIS hospitals in the country or other healthcare systems. Other professional positions, including paramedics, medical record officers, information technology officers and finance officers, are not included in this study since their involvement in and level of understanding of the Casemix system are not comparable to those of medical practitioners, despite these roles being relatively involved in the Casemix system. Hence, the limited generalisability of the findings is a potential weakness of the study. The study’s findings are likely to be specific to the healthcare setting in Malaysia, where the Casemix system and THIS are prevalent, and their applicability to other countries or healthcare systems with different sociocultural, organisational or technological characteristics should be carefully considered. Despite this, there are potential avenues through which the insights gained from this research could benefit other nations or healthcare systems. For example, the principles of efficiency and effectiveness in healthcare management highlighted in this study could be adapted and implemented in various settings. Additionally, the lessons learnt from the challenges faced in Malaysia’s healthcare system could serve as valuable guidance for other countries looking to improve their systems.

Strengths and weaknesses concerning other studies

Compared with previous studies, this research contributes to the field by providing a validated instrument tailored to assess the acceptance of the Casemix system within the THIS environment. Prior literature has examined various aspects of Casemix implementation in Malaysia as well as in other countries. However, no one has investigated Casemix in THIS or even in HIS. Thus, this study offers a comprehensive evaluation tool that addresses critical success factors influencing medical doctors’ acceptance, filling a significant research gap. Given the absence of prior research in this area, the newly created quantitative tool would be advantageous in achieving the study objectives and serve as a point of reference for future investigations.

However, previous literature by Beth Reid describes the importance of developing Casemix-based hospital information system management. 33 The Casemix-based hospital information system is a comprehensive approach to healthcare management that involves estimating costs per diagnosis-related group (DRG), building a Casemix-based system and addressing organisational design and education issues for successful implementation. 33 It is crucial to provide Casemix reports to hospital staff and clinicians to identify errors in data, and improving the quality of data is essential for both hospitals and universities. To ensure the credibility of the HIS, it must tap into decentralised databases to ensure common input data for each patient’s diseases and procedures. 33 Sharing data is beneficial for clinicians as it allows them to avoid investing time and effort in ensuring database accuracy only to discover that the data used for Casemix activities, such as funding, are obtained from the medical record. 40 This approach is essential for ensuring the accuracy and efficiency of healthcare management. 33

Additionally, a study by Saizan found that a THIS hospital in Malaysia showed the lowest Casemix performance in terms of the accuracy of the main diagnosis, the completeness of other diagnoses, and the accuracy of coding for the main and other diagnoses. 16 Based on qualitative, in-depth interview findings, that article identified two overarching themes, each with three subthemes, explaining why performance was the lowest: the poor commitment of clinicians and obstacles in the work process.

Meaning of the study: possible explanations and implications

The validated and reliable instrument developed in this study holds implications for clinicians, policymakers and healthcare organisations aiming to optimise Casemix system implementation within HIS. By identifying the critical factors that influence acceptance, including system, information and service quality, organisational characteristics such as environment and structure, and human factors such as perceived ease of use and perceived usefulness, the findings offer actionable insights for enhancing system adoption, utilisation and success. Policymakers and hospital administrators can use these findings to streamline Casemix deployment strategies, improving patient care outcomes and operational efficiency within THIS.

First, while the specific details of the findings may not directly translate to other contexts, the underlying principles and methodologies employed in this study can serve as a valuable template for researchers in different settings. By adapting and contextualising the research methods and instruments used in this study, researchers in other countries can conduct similar investigations tailored to their healthcare environments. 158 159

Second, the identification and evaluation of critical success factors for implementing healthcare information systems, such as the Casemix system, are universal challenges faced by healthcare organisations worldwide. 33 158 160 Accordingly, the conceptual framework and analytical methods developed in this study can help explain what drives the acceptance and use of such systems in different contexts. Researchers and policymakers in other countries can leverage these insights to inform their strategies for implementing and optimising healthcare information systems.

Additionally, while the contexts and details of the Casemix system and THIS may vary across different countries, the broader goals of improving resource allocation, clinical decision-making and quality of care are shared objectives across healthcare systems globally. Therefore, the findings of this study, particularly regarding the factors influencing system acceptance and success, have the potential to resonate with stakeholders in other countries who are working towards similar goals. 151 161 162

Overall, while the contextual specificity of the study’s findings must be recognised, the insights generated have the potential to contribute to the broader body of knowledge on healthcare information systems and to inform practice in other countries or healthcare settings with distinct characteristics. Through collaboration and adaptation, the lessons learnt from this research can be applied to diverse healthcare contexts, ultimately contributing to the advancement of healthcare delivery worldwide. 33 158 160 By sharing best practices and lessons learnt, healthcare systems around the world can benefit from these findings and improve their own information systems, leading to more efficient and effective healthcare delivery on a global scale.

Unanswered questions and future research

The current study proposes employing this instrument in future research, broadening the target population to include more professional occupations and increasing the sample size for more robust results. The novelty of this research lies in its comprehensive analysis of the direct and indirect effects of these parameters on user acceptance of implementing Casemix within the THIS environment; SEM was employed to investigate the proposed model. In addition, mediating effects involving several critical constructs, such as PEOU, PU and ITU, were examined using the same analysis methods. Moderating characteristics, including age, gender, professional position, level of education, years of experience in MOH Malaysia and in the current THIS hospital, and knowledge of the Casemix system, could further improve the instrument; these moderating effects were also examined using SEM.
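To make the idea of a moderating effect concrete, the sketch below shows the simplest way such an effect is usually probed outside SEM: regressing intention on a predictor, a candidate moderator and their interaction. This is not the authors' AMOS-based analysis; the construct scores (PU, ITU), the gender column and the file name are hypothetical stand-ins for composite scores derived from the survey items.

    # Minimal sketch (not the authors' SEM analysis): probing a moderating effect
    # with an interaction term in ordinary least squares. Column names and the
    # data file are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("casemix_survey_scores.csv")  # hypothetical file of construct scores

    # ITU regressed on PU, the moderator, and their interaction;
    # a significant PU:gender coefficient suggests the PU -> ITU relationship
    # differs across gender groups, i.e. moderation.
    model = smf.ols("ITU ~ PU * C(gender)", data=df).fit()
    print(model.summary())

In the study itself these moderating effects were tested within the SEM framework; the regression above is only a simplified illustration of what a moderation test estimates.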

The innovation of this study is that it examines the CSFs that influence acceptance of the Casemix system in the THIS environment, specifically in MOH hospitals in Malaysia. The immediate findings have clear significance for healthcare organisations and policymakers in Malaysia, but they also carry wider implications for readers in other countries. First and foremost, identifying the CSFs for implementing the Casemix system provides information that can be applied to healthcare systems universally, especially those equipped with a THIS facility. Insight into these factors can guide strategic decision-making in other nations seeking to implement or improve similar systems within their healthcare infrastructure.

Furthermore, the study adopts a mixed-methods approach. The quantitative phase, elaborated on in this article, employs a quantitative instrument whose validity and reliability were established through exploratory and confirmatory factor analyses and reliability testing. Moreover, semi-structured, in-depth interviews were conducted with the Deputy Directors, representing top management, and the CSCs of the five participating hospitals. Together, these mixed methods provide a strong foundation for evaluating the adoption of the Casemix system within healthcare information systems. Readers from different countries might use and modify these approaches to conduct comparable investigations in their specific circumstances, enhancing the comprehension of healthcare informatics worldwide.

Moreover, the study highlights the significance of interdisciplinary collaboration among healthcare practitioners, technology specialists and policymakers in facilitating the practical application of the Casemix system as one of the clinical and costing modules essential in healthcare settings, especially in facilities equipped with HIS. This interdisciplinary approach to tackling issues in healthcare informatics is generally applicable and can be implemented in various countries and healthcare systems.

To summarise, this study’s immediate findings address the CSFs of Casemix system implementation within THIS in Malaysia’s healthcare system. Its broader significance, however, lies in providing insights, methodological frameworks and interdisciplinary approaches that can be applied globally to the adoption of the Casemix system within HIS in other countries; it is not only applicable to the Malaysian setting.

In summary, this research has comprehensively evaluated the fundamental principles outlined in the conceptual framework. Various methodological steps, including content validity, criterion validity, translation, pre-testing for face validity, pilot testing using EFA and a field study employing CFA, were employed to assess the validity of the items. 12 43 44 50 61 The EFA computed Kaiser-Meyer-Olkin (KMO), Bartlett’s test of sphericity and Cronbach’s alpha values, all of which met the criteria for sampling adequacy, sphericity and internal reliability. 53–56 97 Additionally, the CFA tested unidimensionality, construct validity, convergent validity, discriminant validity, composite reliability and normality, further confirming the validity and reliability of the instrument used to evaluate the critical success factors and the acceptance of the Casemix system within the THIS context. 53–56 97
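For readers who want to reproduce the EFA adequacy checks named above on their own data, the following Python sketch computes Bartlett's test of sphericity, the KMO measure and Cronbach's alpha. It is a minimal illustration rather than the authors' SPSS workflow; the item-response file name and the number of extracted factors are assumptions.

    # Minimal sketch (not the authors' SPSS workflow): EFA adequacy checks on a
    # DataFrame of Likert-scale item responses. File name and n_factors are hypothetical.
    import pandas as pd
    from factor_analyzer import FactorAnalyzer
    from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

    items = pd.read_csv("pilot_item_responses.csv")  # hypothetical pilot-study item data

    chi_square, p_value = calculate_bartlett_sphericity(items)  # sphericity: want p < 0.05
    kmo_per_item, kmo_total = calculate_kmo(items)              # sampling adequacy: want >= 0.6

    def cronbach_alpha(scale: pd.DataFrame) -> float:
        """Internal consistency of one scale (set of items); >= 0.7 is the usual cut-off."""
        k = scale.shape[1]
        item_var = scale.var(axis=0, ddof=1).sum()
        total_var = scale.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_var / total_var)

    print(f"Bartlett chi2={chi_square:.1f}, p={p_value:.4f}; KMO={kmo_total:.2f}")
    print(f"Cronbach's alpha (all items): {cronbach_alpha(items):.2f}")

    # Exploratory factor extraction with varimax rotation, as a rough illustration.
    fa = FactorAnalyzer(n_factors=5, rotation="varimax")  # factor count is illustrative only
    fa.fit(items)
    print(fa.loadings_)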

Consequently, this validated instrument holds promise for future quantitative analyses, including covariance-based structural equation modelling (CB-SEM) or variance-based structural equation modelling (VB-SEM). In this study, CB-SEM, conducted in SPSS-AMOS V.24.0, was used to explore the direct, indirect, mediating and moderating effects among the constructs outlined in the conceptual framework. The findings of these quantitative analyses will be presented in forthcoming articles, providing further insight into the Casemix system’s applicability within the current healthcare landscape. Moreover, the instrument’s demonstrated statistical reliability and validity position it as a valuable tool for future research on the Casemix system in the THIS context, addressing an existing research gap. With its normality, validity and reliability established, the instrument can now be considered operational and validated for use in subsequent studies. This research therefore has the potential to enhance understanding of the critical success factors and acceptance of the Casemix system, thereby facilitating its improved implementation within the THIS setting. Moving forward, the instrument will be instrumental in further research assessing the adoption and effectiveness of the Casemix system in the THIS environment, addressing a current scarcity of literature.
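As an illustration of how a CB-SEM analysis of this kind could be specified with open-source tooling, the sketch below uses the Python package semopy (a stand-in for SPSS-AMOS, which the study actually used) to define a measurement model and a mediated structural path PEOU -> PU -> ITU. The indicator names and the data file are hypothetical; only the construct names come from the study.

    # Minimal sketch (not the authors' AMOS model): a CB-SEM specification in semopy
    # illustrating a mediated path PEOU -> PU -> ITU. Indicator names (peou1..itu3)
    # and the data file are hypothetical.
    import pandas as pd
    import semopy

    model_desc = """
    # measurement model: latent constructs defined by their survey items
    PEOU =~ peou1 + peou2 + peou3
    PU   =~ pu1 + pu2 + pu3
    ITU  =~ itu1 + itu2 + itu3

    # structural model: PU mediates the effect of PEOU on ITU
    PU  ~ PEOU
    ITU ~ PU + PEOU
    """

    data = pd.read_csv("field_study_item_responses.csv")  # hypothetical field-study data
    model = semopy.Model(model_desc)
    model.fit(data)                   # maximum-likelihood estimation by default
    print(model.inspect())            # path coefficients, standard errors, p values
    print(semopy.calc_stats(model))   # fit indices such as CFI, TLI and RMSEA

The indirect (mediated) effect of PEOU on ITU would then be estimated as the product of the PEOU -> PU and PU -> ITU path coefficients reported by inspect().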

Ethics statements

Patient consent for publication

Consent obtained directly from patient(s).

Ethics approval

This study was approved by the Medical Research Ethics Committee of the Ministry of Health and the Medical Research Ethics Committee of the Faculty of Medicine, Universiti Kebangsaan Malaysia (reference numbers NMRR ID-22-02621-DKX and JEP-2022-777, respectively). Informed consent was obtained from all participants via the Google Form, with a statement that all data would remain confidential. All methods were carried out in accordance with the ethical standards of the institutional research committee, the Declaration of Helsinki and the relevant guidelines and regulations. This study was not funded by any grants, and the authors declare that there were no conflicts of interest concerning this article.

Acknowledgments

In recognition of their involvement and contributions to this study, the authors would like to express their gratitude to the respondents. The authors also thank all content and criterion validators of this study: Dr. Fawzi Zaidan and Dr. Nuratfina from the Hospital Financing (Casemix) Unit of the Ministry of Health Malaysia, and Prof. Dr. Zainudin Awang from Universiti Sultan Zainal Abidin; their remarks and recommendations contributed significantly to the development of this instrument. We also express our appreciation to the Casemix System Coordinators, as well as the Hospital Directors and Deputy Directors of Hospitals W, E, S, N and EM, for their collaboration in distributing the questionnaire link and for actively engaging in this study.

Additionally, the authors thank the reviewers for their suggestions on improving this paper. Finally, we express our appreciation to Associate Professor Ts. Dr. Mohd Sharizal for proofreading this article.

Contributors All authors, NKM, RI, ZA, ANA and SMASJ, made substantial contributions to the conception or design of the work; the acquisition, analysis or interpretation of data; and the drafting of the work or its critical revision for important intellectual content. NKM carried out the pilot test and fieldwork, prepared the literature review, conducted the extensive search and critical review of articles, performed the statistical analysis, interpretation and technical work, and designed the organisation of this paper and the original draft. RI advised on and supervised the overall write-up and conducted the final revisions of the article. ZA checked and validated the statistical analysis and the interpretation of the results. ANA and SMASJ co-supervised the study, the manuscript preparation and the article revision. All authors have read and agreed to the final draft of the manuscript, thereby giving final approval of the version to be published. Additionally, all authors agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. RI, as corresponding author, is responsible for the overall content as guarantor. The guarantor accepts full responsibility for the finished work and/or the conduct of the study, has access to the data and controls the decision to publish.

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests None declared.

Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

Provenance and peer review Not commissioned; externally peer reviewed.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
