What Is Hypothesis Testing? Types and Python Code Example

MENE-EJEGI OGBEMI

Curiosity has always been part of human nature. Since the beginning of time, it has been one of the most important tools for birthing civilizations, and it still grows, testing and expanding our limits. Humanity has explored the plains of land, water, and air. We've built underwater habitats where we could live for weeks, and our spacecraft have visited other planets.

These things were possible because humans asked questions and searched until they found answers. However, for us to get these answers, a proven method must be used and followed through to validate our results. Historically, philosophers assumed the earth was flat and you would fall off when you reached the edge. While philosophers like Aristotle argued that the earth was spherical based on the formation of the stars, they could not prove it at the time.

This is because they didn't have adequate resources to explore space or mathematically prove Earth's shape. It was a Greek mathematician named Eratosthenes who calculated the earth's circumference with incredible precision. He used scientific methods to show that the Earth was not flat. Since then, other methods have been used to prove the Earth's spherical shape.

When there are questions or statements that are yet to be tested and confirmed based on some scientific method, they are called hypotheses. Basically, we have two types of hypotheses: null and alternate.

A null hypothesis is one's default belief or argument about a subject matter. In the case of the earth's shape, the null hypothesis was that the earth was flat.

An alternate hypothesis is a belief or argument a person might try to establish. Aristotle and Eratosthenes argued that the earth was spherical.

Other examples of a random alternate hypothesis include:

  • The weather may have an impact on a person's mood.
  • More people wear suits on Mondays compared to other days of the week.
  • Children are more likely to be brilliant if both parents are in academia, and so on.

What is Hypothesis Testing?

Hypothesis testing is the act of testing whether a hypothesis or inference is true. When an alternate hypothesis is introduced, we test it against the null hypothesis to know which is correct. Let's use a plant experiment by a 12-year-old student to see how this works.

The hypothesis is that a plant will grow taller when given a certain type of fertilizer. The student takes two samples of the same plant, fertilizes one, and leaves the other unfertilized. He measures the plants' height every few days and records the results in a table.

After a week or two, he compares the final height of both plants to see which grew taller. If the fertilized plant grew taller, the hypothesis is supported; if not, it is not. This simple experiment shows how to form a hypothesis, test it experimentally, and analyze the results.

In hypothesis testing, there are two types of error: Type I and Type II.

When we reject the null hypothesis in a case where it is correct, we've committed a Type I error. Type II errors occur when we fail to reject the null hypothesis when it is incorrect.

In our plant experiment above, if the student finds out that both plants' heights are the same at the end of the test period yet opines that fertilizer helps with plant growth, he has committed a Type I error.

However, if the fertilized plant comes out taller and the student records that both plants are the same or that the one without fertilizer grew taller, he has committed a Type II error because he has failed to reject the null hypothesis.

What are the Steps in Hypothesis Testing?

The following steps explain how we can test a hypothesis:

Step #1 - Define the Null and Alternative Hypotheses

Before running any test, we must first define what we are testing and what the default assumption about the subject is. In this article, we'll test whether the average weight of 10-year-old children is more than 32 kg.

Our null hypothesis is that 10-year-old children weigh 32 kg on average. Our alternate hypothesis is that the average weight is more than 32 kg. A null hypothesis is denoted H0, while an alternate hypothesis is denoted H1.

Step #2 - Choose a Significance Level

The significance level is the threshold we use to decide whether a result counts as statistically significant. It gives credibility to our hypothesis test by ensuring that we are not just relying on luck but have enough evidence to support our claims. We set the significance level before conducting the test; a value of 0.05 is widely accepted in most fields of science.

To make the decision, we compare the p-value produced by the test against this threshold. A lower p-value means stronger evidence against the null hypothesis, and therefore a greater degree of significance. The p-value is not the probability that our hypothesis is true; it is the probability of seeing a result at least as extreme as ours if the null hypothesis were true, and it serves as a benchmark for judging whether our test result is due to chance. For our test, the significance level will be 0.05.

Step #3 - Collect Data and Calculate a Test Statistic

You can obtain data from online data stores, scrape it from the web, or collect it yourself through direct research. The right methodology depends on the research you are trying to conduct.

We can calculate our test statistic using any of the appropriate hypothesis tests: a t-test, a z-test, a chi-squared test, and so on. There are several hypothesis tests, each suited to different purposes and research questions. In this article, we'll use the t-test to test our hypothesis, but I'll explain the z-test and the chi-squared test too.

The t-test is used to compare two sets of data when we don't know the population standard deviation. It's a parametric test, meaning it makes assumptions about the distribution of the data: that the data is normally distributed and that the variances of the two groups are equal. In simpler, more practical terms, imagine we have test scores for males and females in a class, but we don't know how different or similar these scores are. We can use a t-test to see if there's a real difference.
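
As an illustration, here is a minimal sketch of that comparison in SciPy (the scores are made-up numbers):

```python
import numpy as np
from scipy import stats

# Hypothetical test scores for two groups in a class (made-up data)
males = np.array([72, 85, 78, 90, 66, 81, 74, 88])
females = np.array([80, 77, 92, 85, 70, 83, 79, 91])

# Two-sample t-test (assumes normality and equal variances)
t_stat, p_value = stats.ttest_ind(males, females)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

A small p-value here would suggest a real difference in average scores between the two groups.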

The z-test is used to compare two sets of data when the population standard deviation is known. It is also a parametric test, but it makes fewer assumptions about the distribution of the data: the z-test assumes the data is normally distributed, but it does not require the variances of the two groups to be equal. Returning to the class-test example, if we already know how spread out the scores are in both groups, we can use the z-test to see if there's a difference in the average scores.
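
SciPy does not ship a two-sample z-test under that name, but with a known population standard deviation it is easy to compute directly (sigma = 8 is an assumed value, and the scores are the same made-up numbers as before):

```python
import numpy as np
from scipy import stats

# Suppose the population standard deviation of scores is known
sigma = 8.0
males = np.array([72, 85, 78, 90, 66, 81, 74, 88])
females = np.array([80, 77, 92, 85, 70, 83, 79, 91])

# z statistic for the difference in means with known sigma
se = sigma * np.sqrt(1 / len(males) + 1 / len(females))
z = (males.mean() - females.mean()) / se
p_value = 2 * stats.norm.sf(abs(z))  # two-tailed p-value
print(f"z = {z:.3f}, p = {p_value:.3f}")
```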

The Chi-squared test is used to compare two or more categorical variables. The chi-squared test is a non-parametric test, meaning it does not make any assumptions about the distribution of data. It can be used to test a variety of hypotheses, including whether two or more groups have equal proportions.
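For example, to check whether suit-wearing is associated with the day of the week (echoing the earlier hypothesis), we could run a chi-squared test on a contingency table of made-up counts:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table of counts (made-up data):
#              wears suit   no suit
observed = np.array([[30, 70],    # Monday
                     [18, 82]])   # other weekdays (daily average)

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}, dof = {dof}")
```

A small p-value would indicate that the proportion of suit-wearers differs between Mondays and other days.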

Step #4 - Decide on the Null Hypothesis Based on the Test Statistic and Significance Level

After conducting our test and calculating the test statistic, we compare it (or its p-value) against the predetermined significance level. If the test statistic falls in the rejection region, we reject the null hypothesis, indicating that there is sufficient evidence to support our alternative hypothesis.

Conversely, if the test statistic does not reach that threshold, we fail to reject the null hypothesis, signifying that we do not have enough statistical evidence to conclude in favor of the alternative hypothesis.

Step #5 - Interpret the Results

Depending on the decision made in the previous step, we can interpret the result in the context of our study and its practical implications. For our case study, we can state whether we have significant evidence to support the claim that the average weight of 10-year-old children is more than 32 kg.

For our test, we are generating random dummy data for the weight of the children. We'll use a t-test to evaluate whether our hypothesis is correct or not.
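
Putting the steps together, the test can be sketched as the short script below (the data is random, and variable names like data, t_stat, and p_value follow the walkthrough that comes after):

```python
import numpy as np
from scipy import stats

# Generate random dummy weights (kg) for 100 children
data = np.random.randint(20, 40, 100)

# H0: the average weight of 10-year-old children is 32 kg
# H1: the average weight of 10-year-old children is more than 32 kg

# One-sample t-test of the sample mean against 32
t_stat, p_value = stats.ttest_1samp(data, 32)

print("t-statistic:", t_stat)
print("p-value:", p_value)

# Compare the p-value with the 0.05 significance level
if p_value < 0.05:
    print("Reject the null hypothesis")
else:
    print("Fail to reject the null hypothesis")
```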

For a better understanding, let's look at what each block of code does.

The first block is the import statement, where we import numpy and scipy.stats. NumPy is a Python library for scientific computing with a large collection of functions for working with arrays. SciPy is a library of mathematical functions; its stats module provides statistical functions, and that's what we'll use for our t-test.

The weights of the children were generated at random since we aren't working with an actual dataset. The random module within NumPy provides a function for generating random integers: randint.

The randint function takes three arguments. The first (20) is the lower bound of the random numbers to be generated, the second (40) is the upper bound (exclusive), and the third (100) specifies how many random integers to generate. That is, we are generating random weight values for 100 children. In a real study, these samples would come from weighing the required number of children.

Using the code above, we declared our null and alternate hypotheses stating the average weight of a 10-year-old in both cases.

t_stat and p_value are the variables in which we store the results. stats.ttest_1samp is the function that runs the test. It takes two arguments: the first is data, the array of weights, and the second (32) is the population mean against which we test the mean of our array of weights (or of a real-world dataset, if we were using one).

The code above prints both values, t_stat and p_value.

Lastly, we evaluate our p_value against our significance level of 0.05. If the p_value is less than 0.05, we reject the null hypothesis; otherwise, we fail to reject it. In this run, the p-value fell below 0.05, so our null hypothesis was rejected.

In this article, we discussed the importance of hypothesis testing. We highlighted how science has advanced human knowledge and civilization through formulating and testing hypotheses.

We discussed Type I and Type II errors in hypothesis testing and how they underscore the importance of careful consideration and analysis in scientific inquiry. It reinforces the idea that conclusions should be drawn based on thorough statistical analysis rather than assumptions or biases.

We also generated a sample dataset using the relevant Python libraries and used the needed functions to calculate and test our alternate hypothesis.

Thank you for reading! Please follow me on LinkedIn where I also post more data related content.



Chapter 18. Data Analysis and Coding

Introduction.

Piled before you lie hundreds of pages of fieldnotes you have taken, observations you’ve made while volunteering at city hall. You also have transcripts of interviews you have conducted with the mayor and city council members. What do you do with all this data? How can you use it to answer your original research question (e.g., “How do political polarization and party membership affect local politics?”)? Before you can make sense of your data, you will have to organize and simplify it in a way that allows you to access it more deeply and thoroughly. We call this process coding. [1] Coding is the iterative process of assigning meaning to the data you have collected in order to both simplify and identify patterns. This chapter introduces you to the process of qualitative data analysis and the basic concept of coding, while the following chapter (chapter 19) will take you further into the various kinds of codes and how to use them effectively.

To those who have not yet conducted a qualitative study, the sheer amount of collected data will be a surprise. Qualitative data can be absolutely overwhelming—it may mean hundreds if not thousands of pages of interview transcripts, or fieldnotes, or retrieved documents. How do you make sense of it? Students often want very clear guidelines here, and although I try to accommodate them as much as possible, in the end, analyzing qualitative data is a bit more of an art than a science: “The process of bringing order, structure, and interpretation to a mass of collected data is messy, ambiguous, time-consuming, creative, and fascinating. It does not proceed in a linear fashion: it is not neat. At times, the researcher may feel like an eccentric and tormented artist; not to worry, this is normal” ( Marshall and Rossman 2016:214 ).

To complicate matters further, each approach (e.g., Grounded Theory, deep ethnography, phenomenology) has its own language and bag of tricks (techniques) when it comes to analysis. Grounded Theory, for example, uses in vivo coding to generate new theoretical insights that emerge from a rigorous but open approach to data analysis. Ethnographers, in contrast, are more focused on creating a rich description of the practices, behaviors, and beliefs that operate in a particular field. They are less interested in generating theory and more interested in getting the picture right, valuing verisimilitude in the presentation. And then there are some researchers who seek to account for the qualitative data using almost quantitative methods of analysis, perhaps counting and comparing the uses of certain narrative frames in media accounts of a phenomenon. Qualitative content analysis (QCA) often includes elements of counting (see chapter 17). For these researchers, having very clear hypotheses and clearly defined “variables” before beginning analysis is standard practice, whereas the same would be expressly forbidden by those researchers, like grounded theorists, taking a more emergent approach.

All that said, there are some helpful techniques to get you started, and these will be presented in this and the following chapter. As you become more of an expert yourself, you may want to read more deeply about the tradition that speaks to your research. But know that there are many excellent qualitative researchers that use what works for any given study, who take what they can from each tradition. Most of us find this permissible (but watch out for the methodological purists that exist among us).


Qualitative Data Analysis as a Long Process!

Although most of this and the following chapter will focus on coding, it is important to understand that coding is just one (very important) aspect of the long data-analysis process. We can consider seven phases of data analysis, each of which is important for moving your voluminous data into “findings” that can be reported to others. The first phase involves data organization. This might mean creating a special password-protected Dropbox folder for storing your digital files. It might mean acquiring computer-assisted qualitative data-analysis software ( CAQDAS ) and uploading all transcripts, fieldnotes, and digital files to its storage repository for eventual coding and analysis. Finding a helpful way to store your material can take a lot of time, and you need to be smart about this from the very beginning. Losing data because of poor filing systems or mislabeling is something you want to avoid. You will also want to ensure that you have procedures in place to protect the confidentiality of your interviewees and informants. Filing signed consent forms (with names) separately from transcripts and linking them through an ID number or other code that only you have access to (and store safely) are important.

Once you have all of your material safely and conveniently stored, you will need to immerse yourself in the data. The second phase consists of reading and rereading or viewing and reviewing all of your data. As you do this, you can begin to identify themes or patterns in the data, perhaps writing short memos to yourself about what you are seeing. You are not committing to anything in this third phase but rather keeping your eyes and mind open to what you see. In an actual study, you may very well still be “in the field” or collecting interviews as you do this, and what you see might push you toward either concluding your data collection or expanding so that you can follow a particular group or factor that is emerging as important. For example, you may have interviewed twelve international college students about how they are adjusting to life in the US but realized as you read your transcripts that important gender differences may exist and you have only interviewed two women (and ten men). So you go back out and make sure you have enough female respondents to check your impression that gender matters here. The seven phases do not proceed entirely linearly! It is best to think of them as recursive; conceptually, there is a path to follow, but it meanders and flows.

Coding is the activity of the fourth phase . The second part of this chapter and all of chapter 19 will focus on coding in greater detail. For now, know that coding is the primary tool for analyzing qualitative data and that its purpose is to both simplify and highlight the important elements buried in mounds of data. Coding is a rigorous and systematic process of identifying meaning, patterns, and relationships. It is a more formal extension of what you, as a conscious human being, are trained to do every day when confronting new material and experiences. The “trick” or skill is to learn how to take what you do naturally and semiconsciously in your mind and put it down on paper so it can be documented and verified and tested and refined.

At the conclusion of the coding phase, your material will be searchable, intelligible, and ready for deeper analysis. You can begin to offer interpretations based on all the work you have done so far. This fifth phase might require you to write analytic memos, beginning with short (perhaps a paragraph or two) interpretations of various aspects of the data. You might then attempt stitching together both reflective and analytical memos into longer (up to five pages) general interpretations or theories about the relationships, activities, patterns you have noted as salient.

As you do this, you may be rereading the data, or parts of the data, and reviewing your codes. It’s possible you get to this phase and decide you need to go back to the beginning. Maybe your entire research question or focus has shifted based on what you are now thinking is important. Again, the process is recursive , not linear. The sixth phase requires you to check the interpretations you have generated. Are you really seeing this relationship, or are you ignoring something important you forgot to code? As we don’t have statistical tests to check the validity of our findings as quantitative researchers do, we need to incorporate self-checks on our interpretations. Ask yourself what evidence would exist to counter your interpretation and then actively look for that evidence. Later on, if someone asks you how you know you are correct in believing your interpretation, you will be able to explain what you did to verify this. Guard yourself against accusations of “ cherry-picking ,” selecting only the data that supports your preexisting notion or expectation about what you will find. [2]

The seventh and final phase involves writing up the results of the study. Qualitative results can be written in a variety of ways for various audiences (see chapter 20). Due to the particularities of qualitative research, findings do not exist independently of their being written down. This is different for quantitative research or experimental research, where completed analyses can somewhat speak for themselves. A box of collected qualitative data remains a box of collected qualitative data without its written interpretation. Qualitative research is often evaluated on the strength of its presentation. Some traditions of qualitative inquiry, such as deep ethnography, depend on written thick descriptions, without which the research is wholly incomplete, even nonexistent. All of that practice journaling and writing memos (reflective and analytical) help develop writing skills integral to the presentation of the findings.

Remember that these are seven conceptual phases that operate in roughly this order but with a lot of meandering and recursivity throughout the process. This is very different from quantitative data analysis, which is conducted fairly linearly and processually (first you state a falsifiable research question with hypotheses, then you collect your data or acquire your data set, then you analyze the data, etc.). Things are a bit messier when conducting qualitative research. Embrace the chaos and confusion, and sort your way through the maze. Budget a lot of time for this process. Your research question might change in the middle of data collection. Don’t worry about that. The key to being nimble and flexible in qualitative research is to start thinking and continue thinking about your data, even as it is being collected. All seven phases can be started before all the data has been gathered. Data collection does not always precede data analysis. In some ways, “qualitative data collection is qualitative data analysis.… By integrating data collection and data analysis, instead of breaking them up into two distinct steps, we both enrich our insights and stave off anxiety. We all know the anxiety that builds when we put something off—the longer we put it off, the more anxious we get. If we treat data collection as this mass of work we must do before we can get started on the even bigger mass of work that is analysis, we set ourselves up for massive anxiety” ( Rubin 2021:182–183 ; emphasis added).

The Coding Stage

A code is “a word or short phrase that symbolically assigns a summative, salient, essence-capturing, and/or evocative attribute for a portion of language-based or visual data” ( Saldaña 2014:5 ). Codes can be applied to particular sections of or entire transcripts, documents, or even videos. For example, one might code a video taken of a preschooler trying to solve a puzzle as “puzzle,” or one could take the transcript of that video and highlight particular sections or portions as “arranging puzzle pieces” (a descriptive code) or “frustration” (a summative emotion-based code). If the preschooler happily shouts out, “I see it!” you can denote the code “I see it!” (this is an example of an in vivo, participant-created code). As one can see from even this short example, there are many different kinds of codes and many different strategies and techniques for coding, more of which will be discussed in detail in chapter 19. The point to remember is that coding is a rigorous systematic process—to some extent, you are always coding whenever you look at a person or try to make sense of a situation or event, but you rarely do this consciously. Coding is the process of naming what you are seeing and how you are simplifying the data so that you can make sense of it in a way that is consistent with your study and in a way that others can understand and follow and replicate. Another way of saying this is that a code is “a researcher-generated interpretation that symbolizes or translates data” ( Vogt et al. 2014:13 ).

As with qualitative data analysis generally, coding is often done recursively, meaning that you do not merely take one pass through the data to create your codes. Saldaña ( 2014 ) differentiates first-cycle coding from second-cycle coding. The goal of first-cycle coding is to “tag” or identify what emerges as important codes. Note that I said emerges—you don’t always know from the beginning what will be an important aspect of the study or not, so the coding process is really the place for you to begin making the kinds of notes necessary for future analyses. In second-cycle coding, you will want to be much more focused—no longer gathering wholly new codes but synthesizing what you have into metacodes.

You might also conceive of the coding process in four parts (figure 18.1). First, identify a representative or diverse sample set of interview transcripts (or fieldnotes or other documents). This is the group you are going to use to get a sense of what might be emerging. In my own study of career obstacles to success among first-generation and working-class persons in sociology, I might select one interview from each career stage: a graduate student, a junior faculty member, a senior faculty member.

[Figure 18.1: The four parts of the coding process]

Second, code everything (“ open coding ”). See what emerges, and don’t limit yourself in any way. You will end up with a ton of codes, many more than you will eventually use, but this is an excellent way to avoid foreclosing an interesting finding too early in the analysis. Note the importance of starting with a sample of your collected data, because otherwise, open coding all your data is, frankly, impossible and counterproductive. You will just get stuck in the weeds.

Third, pare down your coding list. Where you may have begun with fifty (or more!) codes, you probably want no more than twenty remaining. Go back through the weeds and pull out everything that does not have the potential to bloom into a nicely shaped garden. Note that you should do this before tackling all of your data . Sometimes, however, you might need to rethink the sample you chose. Let’s say that the graduate student interview brought up some interesting gender issues that were pertinent to female-identifying sociologists, but both the junior and the senior faculty members identified as male. In that case, I might read through and open code at least one other interview transcript, perhaps a female-identifying senior faculty member, before paring down my list of codes.

This is also the time to create a codebook if you are using one, a master guide to the codes you are using, including examples (see Sample Codebooks 1 and 2 ). A codebook is simply a document that lists and describes the codes you are using. It is easy to forget what you meant the first time you penciled a coded notation next to a passage, so the codebook allows you to be clear and consistent with the use of your codes. There is not one correct way to create a codebook, but generally speaking, the codebook should include (1) the code (either name or identification number or both), (2) a description of what the code signifies and when and where it should be applied, and (3) an example of the code to help clarify (2). Listing all the codes down somewhere also allows you to organize and reorganize them, which can be part of the analytical process. It is possible that your twenty remaining codes can be neatly organized into five to seven master “themes.” Codebooks can and should develop as you recursively read through and code your collected material. [3]

Fourth, using the pared-down list of codes (or codebook), read through and code all the data. I know many qualitative researchers who work without a codebook, but it is still a good practice, especially for beginners. At the very least, read through your list of codes before you begin this “ closed coding ” step so that you can minimize the chance of missing a passage or section that needs to be coded. The final step is…to do it all again. Or, at least, do closed coding (step four) again. All of this takes a great deal of time, and you should plan accordingly.

Researcher Note

People often say that qualitative research takes a lot of time. Some say this because qualitative researchers often collect their own data. This part can be time consuming, but to me, it’s the analytical process that takes the most time. I usually read every transcript twice before starting to code, then it usually takes me six rounds of coding until I’m satisfied I’ve thoroughly coded everything. Even after the coding, it usually takes me a year to figure out how to put the analysis together into a coherent argument and to figure out what language to use. Just deciding what name to use for a particular group or idea can take months. Understanding this going in can be helpful so that you know to be patient with yourself.

—Jessi Streib, author of The Power of the Past and Privilege Lost 

Note that there is no magic in any of this, nor is there any single “right” way to code or any “correct” codes. What you see in the data will be prompted by your position as a researcher and your scholarly interests. Where the above codes on a preschooler solving a puzzle emerged from my own interest in puzzle solving, another researcher might focus on something wholly different. A scholar of linguistics, for example, may focus instead on the verbalizations made by the child during the discovery process, perhaps even noting particular vocalizations (incidence of grrrs and gritting of the teeth, for example). Your recording of the codes you used is the important part, as it allows other researchers to assess the reliability and validity of your analyses based on those codes. Chapter 19 will provide more details about the kinds of codes you might develop.

Saldaña ( 2014 ) lists seven “necessary personal attributes” for successful coding. To paraphrase, they are the following:

  • Having (or practicing) good organizational skills
  • Perseverance
  • The ability and willingness to deal with ambiguity
  • Flexibility
  • Creativity, broadly understood, which includes “the ability to think visually, to think symbolically, to think in metaphors, and to think of as many ways as possible to approach a problem” (20)
  • Commitment to being rigorously ethical
  • Having an extensive vocabulary [4]

Writing Analytic Memos during/after Coding

Coding the data you have collected is only one aspect of analyzing it. Too many beginners have coded their data and then wondered what to do next. Coding is meant to help organize your data so that you can see it more clearly, but it is not itself an analysis. Thinking about the data, reviewing the coded data, and bringing in the previous literature (here is where you use your literature review and theory) to help make sense of what you have collected are all important aspects of data analysis. Analytic memos are notes you write to yourself about the data. They can be short (a single page or even a paragraph) or long (several pages). These memos can themselves be the subject of subsequent analytic memoing as part of the recursive process that is qualitative data analysis.

Short analytic memos are written about impressions you have about the data, what is emerging, and what might be of interest later on. You can write a short memo about a particular code, for example, and why this code seems important and where it might connect to previous literature. For example, I might write a paragraph about a “cultural capital” code that I use whenever a working-class sociologist says anything about “not fitting in” with their peers (e.g., not having the right accent or hairstyle or private school background). I could then write a little bit about Bourdieu, who originated the notion of cultural capital, and try to make some connections between his definition and how I am applying it here. I can also use the memo to raise questions or doubts I have about what I am seeing (e.g., Maybe the type of school belongs somewhere else? Is this really the right code?). Later on, I can incorporate some of this writing into the theory section of my final paper or article. Here are some types of things that might form the basis of a short memo: something you want to remember, something you noticed that was new or different, a reaction you had, a suspicion or hunch that you are developing, a pattern you are noticing, any inferences you are starting to draw. Rubin ( 2021 ) advises, “Always include some quotation or excerpt from your dataset…that set you off on this idea. It’s happened to me so many times—I’ll have a really strong reaction to a piece of data, write down some insight without the original quotation or context, and then [later] have no idea what I was talking about and have no way of recreating my insight because I can’t remember what piece of data made me think this way” ( 203 ).

All CAQDAS programs include spaces for writing, generating, and storing memos. You can link a memo to a particular transcript, for example. But you can just as easily keep a notebook at hand in which you write notes to yourself, if you prefer the more tactile approach. Drawing pictures that illustrate themes and patterns you are beginning to see also works. The point is to write early and write often, as these memos are the building blocks of your eventual final product (chapter 20).

In the next chapter (chapter 19), we will go a little deeper into codes and how to use them to identify patterns and themes in your data. This chapter has given you an idea of the process of data analysis, but there is much yet to learn about the elements of that process!

Qualitative Data-Analysis Samples

The following three passages are examples of how qualitative researchers describe their data-analysis practices. The first, by Harvey, is a useful example of how data analysis can shift the original research questions. The second example, by Thai, shows multiple stages of coding and how these stages build upward to conceptual themes and theorization. The third example, by Lamont, shows a masterful use of a variety of techniques to generate theory.

Example 1: “Look Someone in the Eye” by Peter Francis Harvey ( 2022 )

I entered the field intending to study gender socialization. However, through the iterative process of writing fieldnotes, rereading them, conducting further research, and writing extensive analytic memos, my focus shifted. Abductive analysis encourages the search for unexpected findings in light of existing literature. In my early data collection, fieldnotes, and memoing, classed comportment was unmistakably prominent in both schools. I was surprised by how pervasive this bodily socialization proved to be and further surprised by the discrepancies between the two schools.…I returned to the literature to compare my empirical findings.…To further clarify patterns within my data and to aid the search for disconfirming evidence, I constructed data matrices (Miles, Huberman, and Saldaña 2013). While rereading my fieldnotes, I used ATLAS.ti to code and recode key sections (Miles et al. 2013), punctuating this process with additional analytic memos. ( 2022:1420 )

Example 2: “Policing and Symbolic Control” by Mai Thai ( 2022 )

Conventional to qualitative research, my analyses iterated between theory development and testing. Analytical memos were written throughout the data collection, and my analyses using MAXQDA software helped me develop, confirm, and challenge specific themes.…My early coding scheme which included descriptive codes (e.g., uniform inspection, college trips) and verbatim codes of the common terms used by field site participants (e.g., “never quit,” “ghetto”) led me to conceptualize valorization. Later analyses developed into thematic codes (e.g., good citizens, criminality) and process codes (e.g., valorization, criminalization), which helped refine my arguments. ( 2022:1191–1192 )

Example 3: The Dignity of Working Men by Michèle Lamont ( 2000 )

To analyze the interviews, I summarized them in a 13-page document including socio-demographic information as well as information on the boundary work of the interviewees. To facilitate comparisons, I noted some of the respondents’ answers on grids and summarized these on matrix displays using techniques suggested by Miles and Huberman for standardizing and processing qualitative data. Interviews were also analyzed one by one, with a focus on the criteria that each respondent mobilized for the evaluation of status. Moreover, I located each interviewee on several five-point scales pertaining to the most significant dimensions they used to evaluate status. I also compared individual interviewees with respondents who were similar to and different from them, both within and across samples. Finally, I classified all the transcripts thematically to perform a systematic analysis of all the important themes that appear in the interviews, approaching the latter as data against which theoretical questions can be explored. ( 2000:256–257 )

Sample Codebook 1

This is an abridged version of the codebook used to analyze qualitative responses to a question about how class affects careers in sociology. Note the use of numbers to organize the flow, supplemented by highlighting techniques (e.g., bolding) and subcoding numbers.

01. CAPS: Any reference to “capitals” in the response, even if the specific words are not used

01.1: cultural capital
01.2: social capital
01.3: economic capital

(can be mixed: “01.12” = both cultural and social capital; “01.23” = both social and economic)

01. CAPS (in bold): a reference to “capitals” in which the specific words are used; thus, a bolded 01.23 means that both social capital and economic capital were mentioned specifically

02. DEBT: discussion of debt

02.1: mentions personal issues around debt
02.2: discusses debt but in the abstract only (e.g., “people with debt have to worry”)

03. FirstP: how the response is positioned

03.1: neutral or abstract response
03.2: discusses self (“I”)
03.3: discusses others (“they”)
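A handy property of this numbering scheme is that a combined tag such as 01.23 can be unpacked mechanically back into its subcodes. The helper below is a hypothetical sketch of that idea (it is not part of the codebook itself, and the `subcode_names` mapping simply restates the 01.x subcodes listed above):

```python
# Hypothetical helper for the numbering scheme above: a tag such as "01.23"
# packs main code 01 together with subcodes 2 and 3.
subcode_names = {
    "1": "cultural capital",
    "2": "social capital",
    "3": "economic capital",
}

def unpack(tag):
    """Split a combined tag like '01.23' into its main code and named subcodes."""
    main, _, subs = tag.partition(".")
    return main, [subcode_names[d] for d in subs]

print(unpack("01.23"))  # ('01', ['social capital', 'economic capital'])
```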

Sample Coded Passage:

“I was really hurt when I didn’t get that scholarship.  It was going to cost me thousands of dollars to stay in the program, and I was going to have to borrow all of it.  My faculty advisor wasn’t helpful at all.  They told 03.2
me not to worry about it, because it wasn’t really that much money!  I almost fell over when they said that!  Like, do they not understand what it’s like to be poor?  I just felt so isolated then.  I was on my own. 02.1, 01.3
I couldn’t talk to anyone about it, because no one else seemed to worry about it. Talk about economic capital!”

* Question: What other codes jump out to you here? Shouldn’t there be a code for feelings of loneliness or alienation? What about an emotions code?

Sample Codebook 2

CODE | DEFINITION | WHEN TO APPLY | IN VIVO EXAMPLE
ALIENATION | Feeling out of place in academia | Any time a respondent uses the word “alienation” or “impostor syndrome” or describes feeling out of place | “I was so lonely in graduate school. It was an alienating experience.”
CULTURAL CAPITAL | Knowledge or other cultural resources that affect success in academia | When “cultural capital” is used, but also when knowledge (or lack of knowledge) about cultural things is discussed | “We went to a fancy restaurant after my job interview and I was paralyzed with fear because I did not know which fork I was supposed to be using. Yikes!”
SOCIAL CAPITAL | Social networks that advance success in academia | When “social capital” is used, but also when social networks or knowing the right people are discussed | “I didn’t know who to turn to. It seemed like everyone else had parents who could help them and I didn’t know anyone else who had ever even gone to college!”

This is an example that uses "word" categories only, with descriptions and examples for each code.

Further Readings

Elliott, Victoria. 2018. “Thinking about the Coding Process in Qualitative Analysis.” Qualitative Report 23(11):2850–2861. Addresses common questions those new to coding ask, including the use of “counting” and how to shore up reliability.

Friese, Susanne. 2019. Qualitative Data Analysis with ATLAS.ti. 3rd ed. A good guide to ATLAS.ti, arguably the most used CAQDAS program. Organized around a series of “skills training” to get you up to speed.

Jackson, Kristi, and Pat Bazeley. 2019. Qualitative Data Analysis with NVivo. 3rd ed. Thousand Oaks, CA: SAGE. If you want to use the CAQDAS program NVivo, this is a good affordable guide to doing so. Includes copious examples, figures, and graphic displays.

LeCompte, Margaret D. 2000. “Analyzing Qualitative Data.” Theory into Practice 39(3):146–154. A very practical and readable guide to the entire coding process, with particular applicability to educational program evaluation/policy analysis.

Miles, Matthew B., and A. Michael Huberman. 1994. Qualitative Data Analysis: An Expanded Sourcebook . 2nd ed. Thousand Oaks, CA: SAGE. A classic reference on coding. May now be superseded by Miles, Huberman, and Saldaña (2019).

Miles, Matthew B., A. Michael Huberman, and Johnny Saldaña. 2019. Qualitative Data Analysis: A Methods Sourcebook. 4th ed. Thousand Oaks, CA: SAGE. A practical methods sourcebook for all qualitative researchers at all levels using visual displays and examples. Highly recommended.

Saldaña, Johnny. 2014. The Coding Manual for Qualitative Researchers . 2nd ed. Thousand Oaks, CA: SAGE. The most complete and comprehensive compendium of coding techniques out there. Essential reference.

Silver, Christina. 2014. Using Software in Qualitative Research: A Step-by-Step Guide. 2nd ed. Thousand Oaks, CA: SAGE. If you are unsure which CAQDAS program you are interested in using or want to compare the features and usages of each, this guidebook is quite helpful.

Vogt, W. Paul, Elaine R. Vogt, Diane C. Gardner, and Lynne M. Haeffele. 2014. Selecting the Right Analyses for Your Data: Quantitative, Qualitative, and Mixed Methods. New York: The Guilford Press. User-friendly reference guide to all forms of analysis; may be particularly helpful for those engaged in mixed-methods research.

  • When you have collected content (historical, media, archival) that interests you because of its communicative aspect, content analysis (chapter 17) is appropriate. Whereas content analysis is both a research method and a tool of analysis, coding is a tool of analysis that can be used for all kinds of data to address any number of questions. Content analysis itself includes coding.
  • Scientific research, whether quantitative or qualitative, demands we keep an open mind as we conduct our research, that we are “neutral” regarding what is actually there to find. Students who are trained in non-research-based disciplines such as the arts or philosophy or who are (admirably) focused on pursuing social justice can too easily fall into the trap of thinking their job is to “demonstrate” something through the data. That is not the job of a researcher. The job of a researcher is to present (and interpret) findings—things “out there” (even if inside other people’s hearts and minds). One helpful suggestion: when formulating your research question, if you already know the answer (or think you do), scrap that research. Ask a question to which you do not yet know the answer.
  • Codebooks are particularly useful for collaborative research so that codes are applied and interpreted similarly. If you are working with a team of researchers, you will want to take extra care that your codebooks remain in sync and that any refinements or developments are shared with fellow coders. You will also want to conduct an “intercoder reliability” check, testing whether the codes you have developed are clearly identifiable so that multiple coders are using them similarly. Messy, unclear codes that can be interpreted differently by different coders will make it much more difficult to identify patterns across the data.
  • Note that this is important for creating/denoting new codes. The vocabulary does not need to be in English or any particular language. You can use whatever words or phrases capture what it is you are seeing in the data.

Process coding: A first-cycle coding process in which gerunds are used to identify conceptual actions, often for the purpose of tracing change and development over time. Widely used in the Grounded Theory approach.

In vivo coding: A first-cycle coding process in which terms or phrases used by the participants become the code applied to a particular passage. It is also known as “verbatim coding,” “indigenous coding,” “natural coding,” “emic coding,” and “inductive coding,” depending on the tradition of inquiry of the researcher. It is common in Grounded Theory approaches and has even given its name to one of the primary CAQDAS programs (“NVivo”).

CAQDAS: Computer-assisted qualitative data-analysis software. These are software packages that can serve as a repository for qualitative data and that enable coding, memoing, and other tools of data analysis. See chapter 17 for particular recommendations.

Cherry picking: The purposeful selection of some data to prove a preexisting expectation or desired point of the researcher where other data exist that would contradict the interpretation offered. Note that it is not cherry picking to select a quote that typifies the main finding of a study, although it would be cherry picking to select a quote that is atypical of a body of interviews and then present it as if it is typical.

Open coding: A preliminary stage of coding in which the researcher notes particular aspects of interest in the data set and begins creating codes. Later stages of coding refine these preliminary codes. Note: in Grounded Theory, open coding has a more specific meaning and is often called initial coding: data are broken down into substantive codes in a line-by-line manner, and incidents are compared with one another for similarities and differences until the core category is found. See also closed coding.

Codebook: A set of codes, definitions, and examples used as a guide to help analyze interview data. Codebooks are particularly helpful and necessary when research analysis is shared among members of a research team, as codebooks allow for standardization of shared meanings and code attributions.

Closed coding: The final stage of coding, after the refinement of codes has created a complete list or codebook, in which all the data are coded using this refined list. Compare to open coding.

Emotion coding: A first-cycle coding process in which emotions and emotionally salient passages are tagged.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.


Qualitative Data Coding 101

How to code qualitative data, the smart way (with examples).

By: Jenna Crosley (PhD) | Reviewed by: Dr Eunice Rautenbach | December 2020

As we’ve discussed previously, qualitative research makes use of non-numerical data – for example, words, phrases or even images and video. To analyse this kind of data, the first dragon you’ll need to slay is qualitative data coding (or just “coding” if you want to sound cool). But what exactly is coding and how do you do it?

Overview: Qualitative Data Coding

In this post, we’ll explain qualitative data coding in simple terms. Specifically, we’ll dig into:

  • What exactly qualitative data coding is
  • What different types of coding exist
  • How to code qualitative data (the process)
  • Moving from coding to qualitative analysis
  • Tips and tricks for quality data coding

Qualitative Data Coding: The Basics

What is qualitative data coding?

Let’s start by understanding what a code is. At the simplest level,  a code is a label that describes the content  of a piece of text. For example, in the sentence:

“Pigeons attacked me and stole my sandwich.”

You could use “pigeons” as a code. This code simply describes that the sentence involves pigeons.

So, building onto this,  qualitative data coding is the process of creating and assigning codes to categorise data extracts.   You’ll then use these codes later down the road to derive themes and patterns for your qualitative analysis (for example, thematic analysis ). Coding and analysis can take place simultaneously, but it’s important to note that coding does not necessarily involve identifying themes (depending on which textbook you’re reading, of course). Instead, it generally refers to the process of  labelling and grouping similar types of data  to make generating themes and analysing the data more manageable. 
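To make the idea concrete, here is a minimal sketch of coded extracts in Python. This is purely illustrative (in practice you’d use CAQDAS software or even Word comments, as discussed below), and the second extract is invented for the example:

```python
from collections import defaultdict

# A coded extract is simply a piece of text plus the code labels attached to it.
coded_extracts = [
    ("Pigeons attacked me and stole my sandwich.", ["pigeons"]),
    ("A seagull grabbed my chips at the beach.", ["birds"]),  # invented example
]

# Grouping extracts by code is what makes deriving themes later manageable.
by_code = defaultdict(list)
for extract, codes in coded_extracts:
    for code in codes:
        by_code[code].append(extract)

print(sorted(by_code))  # ['birds', 'pigeons']
```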

Makes sense? Great. But why should you bother with coding at all? Why not just look for themes from the outset? Well, coding is a way of making sure your  data is valid . In other words, it helps ensure that your  analysis is undertaken systematically  and that other researchers can review it (in the world of research, we call this transparency). Simply put, good coding is the foundation of high-quality analysis.


What are the different types of coding?

Now that we’ve got a plain-language definition of coding on the table, the next step is to understand what overarching types of coding exist – in other words, coding approaches . Let’s start with the two main approaches, inductive and deductive .

With deductive coding, you, as the researcher, begin with a set of  pre-established codes  and apply them to your data set (for example, a set of interview transcripts). Inductive coding, on the other hand, works in reverse: you create the set of codes based on the data itself – in other words, the codes emerge from the data. Let’s take a closer look at both.

Deductive coding 101

With deductive coding, we make use of pre-established codes, which are developed before you interact with the present data. This usually involves drawing up a set of  codes based on a research question or previous research . You could also use a code set from the codebook of a previous study.

For example, if you were studying the eating habits of college students, you might have a research question along the lines of 

“What foods do college students eat the most?”

As a result of this research question, you might develop a code set that includes codes such as “sushi”, “pizza”, and “burgers”.  

Deductive coding allows you to approach your analysis with a very tightly focused lens and quickly identify relevant data . Of course, the downside is that you could miss out on some very valuable insights as a result of this tight, predetermined focus. 


Inductive coding 101 

But what about inductive coding? As we touched on earlier, this type of coding involves jumping right into the data and then developing the codes  based on what you find  within the data. 

For example, if you were to analyse a set of open-ended interviews , you wouldn’t necessarily know which direction the conversation would flow. If a conversation begins with a discussion of cats, it may go on to include other animals too, and so you’d add these codes as you progress with your analysis. Simply put, with inductive coding, you “go with the flow” of the data.

Inductive coding is great when you’re researching something that isn’t yet well understood because the coding derived from the data helps you explore the subject. Therefore, this type of coding is usually used when researchers want to investigate new ideas or concepts , or when they want to create new theories. 


A little bit of both… hybrid coding approaches

If you’ve got a set of codes you’ve derived from a research topic, literature review or a previous study (i.e. a deductive approach), but you still don’t have a rich enough set to capture the depth of your qualitative data, you can  combine deductive and inductive  methods – this is called a  hybrid  coding approach. 

To adopt a hybrid approach, you’ll begin your analysis with a set of a priori codes (deductive) and then add new codes (inductive) as you work your way through the data. Essentially, the hybrid coding approach provides the best of both worlds, which is why it’s pretty common to see this in research.
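In sketch form, the hybrid approach amounts to seeding your codebook with the a priori codes and growing it as new ideas surface. (The codes and tags below are invented for illustration.)

```python
# Hybrid coding sketch: start from a priori (deductive) codes, then add
# new (inductive) codes as they emerge while working through the data.
a_priori_codes = {"pizza", "burgers"}

# Tags the researcher assigns while reading, in order of appearance:
observed_tags = ["pizza", "ramen", "burgers", "bubble tea"]

codebook = set(a_priori_codes)
inductive_codes = []
for tag in observed_tags:
    if tag not in codebook:      # a new idea the a priori set didn't anticipate
        codebook.add(tag)
        inductive_codes.append(tag)

print(inductive_codes)  # ['ramen', 'bubble tea']
```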



How to code qualitative data

Now that we’ve looked at the main approaches to coding, the next question you’re probably asking is “how do I actually do it?”. Let’s take a look at the  coding process , step by step.

Both inductive and deductive methods of coding typically occur in two stages:  initial coding  and  line-by-line coding . 

In the initial coding stage, the objective is to get a general overview of the data by reading through and understanding it. If you’re using an inductive approach, this is also where you’ll develop an initial set of codes. Then, in the second stage (line-by-line coding), you’ll delve deeper into the data and (re)organise it according to (potentially new) codes. 

Step 1 – Initial coding

The first step of the coding process is to identify  the essence  of the text and code it accordingly. While there are various qualitative analysis software packages available, you can just as easily code textual data using Microsoft Word’s “comments” feature. 

Let’s take a look at a practical example of coding. Assume you had the following interview data from two interviewees:

What pets do you have?

I have an alpaca and three dogs.

Only one alpaca? They can die of loneliness if they don’t have a friend.

I didn’t know that! I’ll just have to get five more. 

I have twenty-three bunnies. I initially only had two, I’m not sure what happened. 

In the initial stage of coding, you could assign the code of “pets” or “animals”. These are just initial,  fairly broad codes  that you can (and will) develop and refine later. In the initial stage, broad, rough codes are fine – they’re just a starting point which you will build onto in the second stage. 


How to decide which codes to use

But how exactly do you decide what codes to use when there are many ways to read and interpret any given sentence? Well, there are a few different approaches you can adopt. The  main approaches  to initial coding include:

  • In vivo coding
  • Process coding
  • Open coding
  • Descriptive coding
  • Structural coding
  • Values coding

Let’s take a look at each of these:

In vivo coding

When you use in vivo coding , you make use of a  participant’s own words , rather than your interpretation of the data. In other words, you use direct quotes from participants as your codes. By doing this, you’ll avoid trying to infer meaning, staying as close to the original phrases and words as possible. 

In vivo coding is particularly useful when your data are derived from participants who speak different languages or come from different cultures. In these cases, it’s often difficult to accurately infer meaning due to linguistic or cultural differences. 

For example, English speakers typically view the future as in front of them and the past as behind them. However, this isn’t the same in all cultures. Speakers of Aymara view the past as in front of them and the future as behind them. Why? Because the future is unknown, so it must be out of sight (or behind us). They know what happened in the past, so their perspective is that it’s positioned in front of them, where they can “see” it. 

In a scenario like this one, it’s not possible to derive the reason for viewing the past as in front and the future as behind without knowing the Aymara culture’s perception of time. Therefore, in vivo coding is particularly useful, as it avoids interpretation errors.

Process coding

Next up, there’s process coding , which makes use of  action-based codes . Action-based codes indicate a movement or procedure, often signalled by gerunds (words ending in “-ing”) – for example, running, jumping or singing.

Process coding is useful as it allows you to code parts of data that aren’t necessarily spoken, but that are still imperative to understanding the meaning of the texts. 

An example here would be if a participant were to say something like, “I have no idea where she is”. A sentence like this can be interpreted in many different ways depending on the context and movements of the participant. The participant could shrug their shoulders, which would indicate that they genuinely don’t know where the girl is; however, they could also wink, showing that they do actually know where the girl is. 

Simply put, process coding is useful as it allows you to, in a concise manner, identify the main occurrences in a set of data and provide a dynamic account of events. For example, you may have action codes such as, “describing a panda”, “singing a song about bananas”, or “arguing with a relative”.


Descriptive coding

Descriptive coding aims to summarise extracts by using a  single word or noun  that encapsulates the general idea of the data. These words will typically describe the data in a highly condensed manner, which allows the researcher to quickly refer to the content. 

Descriptive coding is very useful when dealing with data that appear in forms other than traditional text – i.e. video clips, sound recordings or images. For example, a descriptive code could be “food” when coding a video clip that involves a group of people discussing what they ate throughout the day, or “cooking” when coding an image showing the steps of a recipe. 

Structural coding

Structural coding involves labelling and describing  specific structural attributes  of the data. Generally, it includes coding according to answers to the questions of “ who ”, “ what ”, “ where ”, and “ how ”, rather than the actual topics expressed in the data. This type of coding is useful when you want to access segments of data quickly, and it can help tremendously when you’re dealing with large data sets. 

For example, if you were coding a collection of theses or dissertations (which would be quite a large data set), structural coding could be useful as you could code according to different sections within each of these documents – i.e. according to the standard  dissertation structure . What-centric labels such as “hypothesis”, “literature review”, and “methodology” would help you to efficiently refer to sections and navigate without having to work through sections of data all over again. 

Structural coding is also useful for data from open-ended surveys. This data may initially be difficult to code, as it lacks the set structure of other forms of data (such as an interview with a strict set of questions to be answered). In this case, it would be useful to code sections of data that answer certain questions such as “who?”, “what?”, “where?” and “how?”.

Let’s take a look at a practical example. If we were to send out a survey asking people about their dogs, we may end up with a (highly condensed) response such as the following: 

Bella is my best friend. When I’m at home I like to sit on the floor with her and roll her ball across the carpet for her to fetch and bring back to me. I love my dog.

In this set, we could code  Bella  as “who”,  dog  as “what”,  home  and  floor  as “where”, and  roll her ball  as “how”. 
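The structural codes from that response could be jotted down as a simple mapping (a hypothetical sketch; the point is that you index by structural question, not by topic):

```python
# Structural coding sketch: tag segments of the survey response by the
# structural question they answer, rather than by topic.
structural_codes = {
    "who": "Bella",
    "what": "dog",
    "where": ["home", "floor"],
    "how": "roll her ball",
}

# Structural codes make it easy to pull out, say, every "where"
# across a large set of responses without rereading them all.
print(structural_codes["where"])  # ['home', 'floor']
```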

Values coding

Finally, values coding involves coding that relates to the  participant’s worldviews . Typically, this type of coding focuses on excerpts that reflect the values, attitudes, and beliefs of the participants. Values coding is therefore very useful for research exploring cultural values as well as intrapersonal and interpersonal experiences and actions.

To recap, the aim of initial coding is to understand and  familiarise yourself with your data , to  develop an initial code set  (if you’re taking an inductive approach) and to take the first shot at  coding your data . The coding approaches above allow you to arrange your data so that it’s easier to navigate during the next stage, line by line coding (we’ll get to this soon). 

While these approaches can all be used individually, it’s important to remember that it’s possible, and potentially beneficial, to combine them. For example, when conducting initial coding with interviews, you could begin by using structural coding to indicate who speaks when. Then, as a next step, you could apply descriptive coding so that you can navigate to, and between, conversation topics easily. You can check out some examples of various techniques here.

Step 2 – Line-by-line coding

Once you’ve got an overall idea of your data, are comfortable navigating it and have applied some initial codes, you can move on to line-by-line coding. Line-by-line coding is pretty much exactly what it sounds like – reviewing your data, line by line,  digging deeper  and assigning additional codes to each line. 

With line-by-line coding, the objective is to pay close attention to your data to  add detail  to your codes. For example, if you have a discussion of beverages and you previously just coded this as “beverages”, you could now go deeper and code more specifically, such as “coffee”, “tea”, and “orange juice”. The aim here is to scratch below the surface. This is the time to get detailed and specific so as to capture as much richness from the data as possible. 

In the line-by-line coding process, it’s useful to  code everything  in your data, even if you don’t think you’re going to use it (you may just end up needing it!). As you go through this process, your coding will become more thorough and detailed, and you’ll have a much better understanding of your data as a result of this, which will be incredibly valuable in the analysis phase.


Moving from coding to analysis

Once you’ve completed your initial coding and line by line coding, the next step is to  start your analysis . Of course, the coding process itself will get you in “analysis mode” and you’ll probably already have some insights and ideas as a result of it, so you should always keep notes of your thoughts as you work through the coding.  

When it comes to qualitative data analysis, there are  many different types of analyses  (we discuss some of the  most popular ones here ) and the type of analysis you adopt will depend heavily on your research aims, objectives and questions . Therefore, we’re not going to go down that rabbit hole here, but we’ll cover the important first steps that build the bridge from qualitative data coding to qualitative analysis.

When starting to think about your analysis, it’s useful to  ask yourself  the following questions to get the wheels turning:

  • What actions are shown in the data? 
  • What are the aims of these interactions and excerpts? What are the participants potentially trying to achieve?
  • How do participants interpret what is happening, and how do they speak about it? What does their language reveal?
  • What are the assumptions made by the participants? 
  • What are the participants doing? What is going on? 
  • Why do I want to learn about this? What am I trying to find out? 
  • Why did I include this particular excerpt? What does it represent and how?

The type of qualitative analysis you adopt will depend heavily on your research aims, objectives and research questions.

Code categorisation

Categorisation is simply the process of reviewing everything you’ve coded and then  creating code categories  that can be used to guide your future analysis. In other words, it’s about creating categories for your code set. Let’s take a look at a practical example.

If you were discussing different types of animals, your initial codes may be “dogs”, “llamas”, and “lions”. In the process of categorisation, you could label (categorise) these three animals as “mammals”, whereas you could categorise “flies”, “crickets”, and “beetles” as “insects”. By creating these code categories, you will be making your data more organised, as well as enriching it so that you can see new connections between different groups of codes. 
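Using those same animal codes, categorisation can be pictured as a mapping from each code to its category, which you can then invert to see the groups (a minimal sketch, not a required tool):

```python
from collections import defaultdict

# Code categorisation sketch: assign each code to a higher-level category.
code_categories = {
    "dogs": "mammals",
    "llamas": "mammals",
    "lions": "mammals",
    "flies": "insects",
    "crickets": "insects",
    "beetles": "insects",
}

# Invert the mapping to see which codes sit under each category -
# this grouped view is what guides the later search for themes.
categories = defaultdict(list)
for code, category in code_categories.items():
    categories[category].append(code)

print(dict(categories))
# {'mammals': ['dogs', 'llamas', 'lions'], 'insects': ['flies', 'crickets', 'beetles']}
```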

Theme identification

From the coding and categorisation processes, you’ll naturally start noticing themes. Therefore, the logical next step is to  identify and clearly articulate the themes  in your data set. When you determine themes, you’ll take what you’ve learned from the coding and categorisation and group it all together to develop themes. This is the part of the coding process where you’ll try to draw meaning from your data, and start to  produce a narrative . The nature of this narrative depends on your research aims and objectives, as well as your research questions (sounds familiar?) and the  qualitative data analysis method  you’ve chosen, so keep these factors front of mind as you scan for themes. 


Tips & tricks for quality coding

Before we wrap up, let’s quickly look at some general advice, tips and suggestions to ensure your qualitative data coding is top-notch.

  • Before you begin coding, plan out the steps you will take and the coding approach and technique(s) you will follow to avoid inconsistencies.
  • When adopting deductive coding, it’s useful to use a codebook from the start of the coding process. This will keep your work organised and will ensure that you don’t forget any of your codes.
  • Whether you’re adopting an inductive or deductive approach, keep track of the meanings of your codes and remember to revisit these as you go along.
  • Avoid using synonyms for codes that are similar, if not the same. This will allow you to have a more uniform and accurate coded dataset and will also help you to not get overwhelmed by your data.
  • While coding, make sure that you remind yourself of your aims and coding method. This will help you to avoid directional drift, which happens when coding is not kept consistent.
  • If you are working in a team, make sure that everyone has been trained and understands how codes need to be assigned.



Table of Contents

  • Testing Your Python Code With Hypothesis
  • Installing & Using Hypothesis
  • A Quick Example
  • Understanding Hypothesis
  • Using Hypothesis Strategies
  • Filtering and Mapping Strategies
  • Composing Strategies
  • Constraints & Satisfiability
  • Writing Reusable Strategies With Functions

  • @composite: Declarative Strategies
  • @example: Explicitly Testing Certain Values

Hypothesis Example: Roman Numeral Converter

I can think of several Python packages that greatly improved the quality of the software I write. Two of them are pytest and hypothesis . The former adds an ergonomic framework for writing tests and fixtures and a feature-rich test runner. The latter adds property-based testing that can ferret out all but the most stubborn bugs using clever algorithms, and that’s the package we’ll explore in this course.

In an ordinary test you interface with the code you want to test by generating one or more inputs to test against, and then you validate that it returns the right answer. But that, then, raises a tantalizing question: what about all the inputs you didn’t test? Your code coverage tool may well report 100% test coverage, but that does not, ipso facto , mean the code is bug-free.

One of the defining features of Hypothesis is its ability to generate test cases automatically in a manner that is:

  • Reproducible: repeated invocations of your tests result in reproducible outcomes, even though Hypothesis does use randomness to generate the data.
  • Explainable: you are given a detailed answer that explains how your test failed and why it failed. Hypothesis makes it clear how you, the human, can reproduce the invariant that caused your test to fail.
  • Customizable: you can refine its strategies and tell it where or what it should or should not search for. At no point are you compelled to modify your code to suit the whims of Hypothesis if it generates nonsensical data.

So let’s look at how Hypothesis can help you discover errors in your code.

You can install Hypothesis by typing pip install hypothesis . It has few dependencies of its own, and should install and run everywhere.

Hypothesis plugs into pytest and unittest by default, so you don’t have to do anything to make it work with them. In addition, Hypothesis comes with a CLI tool you can invoke with hypothesis . But more on that in a bit.

I will use pytest throughout to demonstrate Hypothesis, but it works equally well with the builtin unittest module.

Before I delve into the details of Hypothesis, let’s start with a simple example: a naive CSV writer and reader. A topic that seems simple enough: how hard is it to separate fields of data with a comma and then read it back in later?

But of course CSV is frighteningly hard to get right. The US and UK use '.' as a decimal separator, but in large parts of the world they use ',' which of course results in immediate failure. So then you start quoting things, and now you need a state machine that can distinguish quoted from unquoted; and what about nested quotes, etc.

The naive CSV reader and writer is an excellent stand-in for any number of complex projects where the requirements outwardly seem simple but there lurks a large number of edge cases that you must take into account.

Here the writer simply string-quotes each field before joining them together with ',' . The reader does the opposite: it splits on the comma and assumes each field is quoted.

A naive roundtrip pytest proves the code “works”:
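The original listing is not reproduced here, so here is a minimal sketch of what such a naive pair and its roundtrip test could look like; the function names write_csv and read_csv are my assumption, not necessarily the author's:

```python
def write_csv(fields):
    # Naively quote every field, then join with a comma.
    return ",".join('"{}"'.format(field) for field in fields)

def read_csv(line):
    # Naively split on commas, then strip the surrounding quotes.
    return [field.strip('"') for field in line.split(",")]

def test_write_read_csv():
    fields = ["Hello", "World"]
    assert read_csv(write_csv(fields)) == fields
```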

And evidently so:

And for a lot of code that’s where the testing would begin and end. A couple of lines of code to test a couple of functions that outwardly behave in a manner that anybody can read and understand. Now let’s look at what a Hypothesis test would look like, and what happens when we run it:

At first blush there’s nothing here that you couldn’t divine the intent of, even if you don’t know Hypothesis. I’m asking for the argument fields to be a list of between one and ten generated text elements. Aside from that, the test operates in exactly the same manner as before.

Now watch what happens when I run the test:

Hypothesis quickly found an example that broke our code. As it turns out, a list of [','] breaks it: we get two fields back after round-tripping the data through our CSV writer and reader, uncovering our first bug.

In a nutshell, this is what Hypothesis does. But let’s look at it in detail.

Simply put, Hypothesis generates data using a number of configurable strategies . Strategies range from simple to complex. A simple strategy may generate bools; another integers. You can combine strategies to make larger ones, such as lists or dicts that match certain patterns or structures you want to test. You can clamp their outputs based on certain constraints, like only positive integers or strings of a certain length. You can also write your own strategies if you have particularly complex requirements.

Strategies are the gateway to property-based testing and are a fundamental part of how Hypothesis works. You can find a detailed list of all the strategies in the Strategies reference of their documentation or in the hypothesis.strategies module.

The best way to get a feel for what each strategy does in practice is to import them from the hypothesis.strategies module and call the example() method on an instance:
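For instance, something along these lines (the exact values will differ from run to run):

```python
from hypothesis import strategies as st

# example() draws a single illustrative value from a strategy.
# Outside an interactive session Hypothesis emits a warning, but it still works.
print(st.integers().example())
print(st.text().example())
print(st.floats().example())
print(st.lists(st.booleans()).example())
```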

You may have noticed that the floats example included inf in the list. By default, all strategies will – where feasible – attempt to test all legal (but possibly obscure) forms of values you can generate of that type. That is particularly important as corner cases like inf or NaN are legal floating-point values but, I imagine, not something you’d ordinarily test against yourself.

And that’s one pillar of how Hypothesis tries to find bugs in your code: by testing edge cases that you would likely miss yourself. If you ask it for a text() strategy you’re as likely to be given Western characters as you are a mishmash of unicode and escape-encoded garbage. Understanding why Hypothesis generates the examples it does is a useful way to think about how your code may interact with data it has no control over.

Now if it were simply generating text or numbers from an inexhaustible source of numbers or strings, it wouldn’t catch as many errors as it actually does. The reason is that each test you write is subjected to a battery of examples drawn from the strategies you’ve designed. If a test case fails, it’s put aside and tested again, but with a reduced subset of inputs, if possible. In Hypothesis this is known as shrinking the search space: trying to find the smallest possible result that will cause your code to fail. So instead of a 10,000-character string, if it can find one that’s only 3 or 4 characters, it will try to show you that instead.

You can tell Hypothesis to filter or map the examples it draws to further reduce them if the strategy does not meet your requirements:

Here I ask for integers where the number is greater than 0 and is evenly divisible by 8. Hypothesis will then attempt to generate examples that meet the constraints you have imposed on it.
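A sketch of that filter (the strategy name is my own):

```python
from hypothesis import strategies as st

# Only positive integers that are evenly divisible by 8 survive the filter.
divisible_by_8 = st.integers().filter(lambda n: n > 0 and n % 8 == 0)

print(divisible_by_8.example())
```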

You can also map , which works in much the same way as filter. Here I’m asking for lowercase ASCII strings and then uppercasing them:
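A sketch of that mapping (the strategy name is my own):

```python
import string
from hypothesis import strategies as st

# Draw lowercase ASCII text, then uppercase it with map().
shouting = st.text(alphabet=string.ascii_lowercase, min_size=1).map(str.upper)

print(shouting.example())
```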

Having said that, using either when you don’t have to (I could have asked for uppercase ASCII characters to begin with) is likely to result in slower strategies.

A third option, flatmap , lets you build strategies from strategies; but that deserves closer scrutiny, so I’ll talk about it later.

You can tell Hypothesis to pick one of a number of strategies by composing strategies with | or st.one_of() :
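Both spellings are equivalent; a sketch:

```python
from hypothesis import strategies as st

# Either integers or text may be drawn from the combined strategy.
ints_or_text = st.integers() | st.text()
also_ints_or_text = st.one_of(st.integers(), st.text())
```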

An essential feature when you have to draw from multiple sources of examples for a single data point.

When you ask Hypothesis to draw an example it takes into account the constraints you may have imposed on it: only positive integers; only lists of numbers that add up to exactly 100; any filter() calls you may have applied; and so on. Those are constraints. You’re taking something that was once unbounded (with respect to the strategy you’re drawing an example from, that is) and introducing additional limitations that constrain the possible range of values it can give you.

But consider what happens if I pass filters that will yield nothing at all:
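For example, a strategy whose two filters can never both hold:

```python
from hypothesis import strategies as st

# No integer is simultaneously below zero and above zero,
# so this strategy can never produce a value.
impossible = st.integers().filter(lambda n: n < 0).filter(lambda n: n > 0)
```

Asking this strategy for an example makes Hypothesis search for a while and then raise an error saying it could not find a satisfying value.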

At some point Hypothesis will give up and declare it cannot find anything that satisfies that strategy and its constraints.

Hypothesis gives up after a while if it’s not able to draw an example. Usually that indicates an invariant in the constraints you’ve placed that makes it hard or impossible to draw examples from. In the example above, I asked for numbers that are simultaneously below zero and greater than zero, which is an impossible request.

As you can see, the strategies are simple functions, and they behave as such. You can therefore refactor each strategy into reusable patterns:
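A sketch of the idea, with hypothetical helper names:

```python
from hypothesis import strategies as st

def names():
    # A reusable pattern: nonempty lowercase ASCII "names".
    return st.text(alphabet="abcdefghijklmnopqrstuvwxyz", min_size=1, max_size=20)

def full_names():
    # Strategies compose: a tuple of (first, last) names.
    return st.tuples(names(), names())
```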

The benefit of this approach is that if you discover edge cases that Hypothesis does not account for, you can update the pattern in one place and observe its effects on your code. It’s functional and composable.

One caveat of this approach is that you cannot draw examples and expect Hypothesis to behave correctly. So I don’t recommend you call example() on a strategy only to pass it into another strategy.

For that, you want the @composite decorator.

@composite: Declarative Strategies

If the previous approach is unabashedly functional in nature, this approach is imperative.

The @composite decorator lets you write imperative Python code instead. If you cannot easily structure your strategy with the built-in ones, or if you require more granular control over the values it emits, you should consider the @composite strategy.

Instead of returning a compound strategy object like you would above, you instead draw examples using a special function you’re given access to in the decorated function.

This example draws two randomized names and returns them as a tuple:
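A sketch of such a composite strategy; the name generate_full_name follows the text, the rest is assumed:

```python
from hypothesis import strategies as st

@st.composite
def generate_full_name(draw):
    # draw() turns a strategy into a concrete example inside the function.
    first_name = draw(st.text(min_size=1))
    last_name = draw(st.text(min_size=1))
    return (first_name, last_name)
```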

Note that the @composite decorator passes in a special draw callable that you must use to draw samples from. You cannot – well, you can, but you shouldn’t – use the example() method on the strategy object you get back. Doing so will break Hypothesis’s ability to synthesize test cases properly.

Because the code is imperative you’re free to modify the drawn examples to your liking. But what if you’re given an example you don’t like or one that breaks a known invariant you don’t wish to test for? For that you can use the assume() function to state the assumptions that Hypothesis must meet if you try to draw an example from generate_full_name .

Let’s say that first_name and last_name must not be equal:
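A sketch of that constraint, added to the composite strategy with assume():

```python
from hypothesis import assume, strategies as st

@st.composite
def generate_full_name(draw):
    first_name = draw(st.text(min_size=1))
    last_name = draw(st.text(min_size=1))
    # Reject any draw where the two names collide; Hypothesis redraws.
    assume(first_name != last_name)
    return (first_name, last_name)
```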

Like the assert statement in Python, the assume() function teaches Hypothesis what is, and is not, a valid example. You can use this to great effect to generate complex compound strategies.

I recommend you observe the following rules of thumb if you write imperative strategies with @composite :

If you want to draw a succession of examples to initialize, say, a list or a custom object with values that meet certain criteria you should use filter , where possible, and assume to teach Hypothesis why the value(s) you drew and subsequently discarded weren’t any good.

The example above uses assume() to teach Hypothesis that first_name and last_name must not be equal.

If you can put your functional strategies in separate functions, you should. It encourages code re-use and if your strategies are failing (or not generating the sort of examples you’d expect) you can inspect each strategy in turn. Large nested strategies are harder to untangle and harder still to reason about.

If you can express your requirements with filter and map or the builtin constraints (like min_size or max_size ), you should. Imperative strategies that use assume may take more time to converge on a valid example.

@example: Explicitly Testing Certain Values

Occasionally you’ll come across a handful of cases that either fail or used to fail, and you want to ensure that Hypothesis does not forget to test them, or to indicate to yourself or your fellow developers that certain values are known to cause issues and should be tested explicitly.

The @example decorator does just that:
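A sketch; the values 0 and -1 here are hypothetical "known troublemakers", not from the original:

```python
from hypothesis import example, given, strategies as st

@given(st.integers())
@example(0)     # hypothetical values known to have caused issues;
@example(-1)    # they are now exercised on every run, without fail
def test_handles_integers(number):
    assert isinstance(number, int)
```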

You can add as many as you like.

Let’s say I wanted to write a simple converter to and from Roman numerals.

Here I’m collecting Roman numerals into numerals , one at a time, by looping over SYMBOLS of valid numerals, subtracting the value of the symbol from number , until the while loop’s condition ( number >= 1 ) is False .

The test is also simple and serves as a smoke test. I generate a random integer and convert it to Roman numerals with to_roman . When it’s all said and done I turn the string of numerals into a set and check that all members of the set are legal Roman numerals.
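A sketch of that first version, along with the smoke test; the exact SYMBOLS table and the non-empty check are my assumptions. As we are about to see, this version harbors problems:

```python
from hypothesis import given, strategies as st

SYMBOLS = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def to_roman(number: int) -> str:
    numerals = []
    while number >= 1:
        # Take the first symbol whose value fits, subtract, and repeat.
        for symbol, value in SYMBOLS.items():
            if number >= value:
                numerals.append(symbol)
                number -= value
                break
    return "".join(numerals)

@given(st.integers())
def test_to_roman(number):
    numerals = to_roman(number)
    # Every character must be a legal numeral -- and there must be at least one.
    assert numerals and set(numerals) <= set(SYMBOLS)
```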

Now if I run pytest on it, the test seems to hang . But thanks to Hypothesis’s debug mode I can inspect why:

Ah. Instead of testing with tiny numbers like a human would ordinarily do, it used a fantastically large one… which is altogether slow.

OK, so there’s at least one gotcha; it’s not really a bug , but it’s something to think about: limiting the maximum value. I’m only going to limit the test, but it would be reasonable to limit it in the code also.

Changing the max_value to something sensible, like st.integers(max_value=5000) , the test now fails with another error:

It seems our code’s not able to handle the number 0! Which… is correct. The Romans didn’t really use the number zero as we would today; that invention came later, so they had a bunch of workarounds to deal with the absence of something. But that’s neither here nor there in our example. Let’s instead set min_value=1 also, as there is no support for negative numbers either:

OK… not bad. We’ve proven that, given a random assortment of numbers in our defined range, we do indeed get something resembling Roman numerals.

One of the hardest things about Hypothesis is framing questions to your testable code in a way that tests its properties but without you, the developer, knowing the answer (necessarily) beforehand. So one simple way to test that there’s at least something semi-coherent coming out of our to_roman function is to check that it can generate the very numerals we defined in SYMBOLS from before:

Here I’m sampling from a tuple of the SYMBOLS from earlier. The sampling algorithm will decide what values it wants to give us; all we care about is that we are given examples like ("I", 1) or ("V", 5) to compare against.

So let’s run pytest again:

Oops. The Roman numeral V is equal to 5, and yet we get IIIII ? A closer examination reveals that, indeed, the code only ever yields sequences of I equal to the number we pass it. There’s a logic error in our code.

In the example above I loop over the elements in the SYMBOLS dictionary, but as it’s ordered, the first element is always I . And as the smallest representable value is 1, we end up with just that answer. It’s technically correct – you can count with just I – but it’s not very useful.

Fixing it is easy though:
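One possible fix, as a sketch: walk the symbols from the largest value down instead of in dictionary order.

```python
SYMBOLS = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def to_roman(number: int) -> str:
    numerals = []
    # Greedily consume the largest symbol that still fits.
    for symbol, value in sorted(SYMBOLS.items(), key=lambda kv: kv[1], reverse=True):
        while number >= value:
            numerals.append(symbol)
            number -= value
    return "".join(numerals)
```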

Rerunning the test yields a pass. Now we know that, at the very least, our to_roman function is capable of mapping numbers that are equal to any symbol in SYMBOLS .

Now the litmus test is taking the numeral we’re given and making sense of it. So let’s write a function that converts a Roman numeral back into decimal:

Like to_roman , we walk through each character, get the numeral’s numeric value, and add it to the running total. The test is a simple roundtrip test, as to_roman has an inverse function from_roman (and vice versa), such that from_roman(to_roman(number)) == number :
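A sketch of from_roman and the roundtrip test, paired with the corrected greedy converter:

```python
from hypothesis import given, strategies as st

SYMBOLS = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def to_roman(number: int) -> str:
    numerals = []
    for symbol, value in sorted(SYMBOLS.items(), key=lambda kv: kv[1], reverse=True):
        while number >= value:
            numerals.append(symbol)
            number -= value
    return "".join(numerals)

def from_roman(numerals: str) -> int:
    # Sum the value of every numeral character (no subtraction rule yet).
    return sum(SYMBOLS[numeral] for numeral in numerals)

@given(st.integers(min_value=1, max_value=5000))
def test_roman_roundtrip(number):
    assert from_roman(to_roman(number)) == number
```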

Invertible functions are easier to test because you can compare the output of one against the input of the other and check whether it yields the original value. Not every function has an inverse, though.

Running the test yields a pass:

So now we’re in a pretty good place. But there’s a slight oversight in our Roman numeral converters: they don’t respect the subtraction rule for some of the numerals. For instance VI is 6, but IV is 4. The value XI is 11, but IX is 9. Only some (sigh) numerals exhibit this property.

So let’s write another test. This time it’ll fail as we’ve yet to write the modified code. Luckily we know the subtractive numerals we must accommodate:

Pretty simple test. Check that certain numerals yield the value, and that the values yield the right numeral.

With an extensive test suite we should feel fairly confident making changes to the code. If we break something, one of our preexisting tests will fail.

The rules around which numerals are subtractive are rather subjective; the SUBTRACTIVE_SYMBOLS dictionary holds the most common ones. So all we need to do is read ahead in the numerals string to see if there is a two-character numeral with a prescribed value, and use that instead of the usual value.

The to_roman change is simple: a union of the two numeral symbol dictionaries is all it takes. The code already understands how to turn numbers into numerals; we just added a few more.

Note that merging dictionaries with the | union operator requires Python 3.9 or later.
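As a sketch of the final converters: to_roman folds the subtractive pairs in via dict union, and from_roman reads ahead for two-character numerals.

```python
SYMBOLS = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
SUBTRACTIVE_SYMBOLS = {"IV": 4, "IX": 9, "XL": 40, "XC": 90, "CD": 400, "CM": 900}

def to_roman(number: int) -> str:
    numerals = []
    # Python 3.9+ dict union adds the subtractive pairs to the symbol table.
    all_symbols = SYMBOLS | SUBTRACTIVE_SYMBOLS
    for symbol, value in sorted(all_symbols.items(), key=lambda kv: kv[1], reverse=True):
        while number >= value:
            numerals.append(symbol)
            number -= value
    return "".join(numerals)

def from_roman(numerals: str) -> int:
    total = 0
    position = 0
    while position < len(numerals):
        # Read ahead: a two-character subtractive numeral takes precedence.
        pair = numerals[position:position + 2]
        if pair in SUBTRACTIVE_SYMBOLS:
            total += SUBTRACTIVE_SYMBOLS[pair]
            position += 2
        else:
            total += SYMBOLS[numerals[position]]
            position += 1
    return total
```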

If done right, running the tests should yield a pass:

And that’s it. We now have useful tests and a functional Roman numeral converter that converts to and from with ease. But one thing we didn’t do is create a strategy that generates Roman numerals using st.text() . A custom composite strategy to generate both valid and invalid Roman numerals to test the ruggedness of our converter is left as an exercise to you.

In the next part of this course we’ll look at more advanced testing strategies.

Unlike a tool like faker that generates realistic-looking test data for fixtures or demos, Hypothesis is a property-based tester . It uses heuristics and clever algorithms to find inputs that break your code.

When testing a function that does not have an inverse to compare the result against – unlike our Roman numeral converter, which works both ways – you often have to approach your code as though it were a black box where you relinquish control of the inputs and outputs. That is harder, but it makes for less brittle code.

It’s perfectly fine to mix and match tests. Hypothesis is useful for flushing out invariants you would never think of. Combine it with known inputs and outputs to jump start your testing for the first 80%, and augment it with Hypothesis to catch the remaining 20%.




Inductive Coding: A Step-by-Step Guide for Researchers


Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.

Learn about our Editorial Process


Inductive coding is a qualitative data analysis method used to develop themes from a text such as an interview transcript.

It’s the most common data analysis method used by my research students.

This blog post will give you everything you need to write your own research methods section on inductive coding.

In your data analysis section in your methodology chapter, you will want to:

  • Define inductive coding,
  • Highlight its benefits and limitations (explaining how the benefits outweigh the limitations), then
  • Give a clear step-by-step overview of exactly how you will code your data.

Let’s get started.


Inductive Coding Definition and Key Features

Inductive coding is a research method for generating themes from textual datasets, such as transcribed semi-structured interviews or transcriptions of speeches and audio.

It is also known as “open coding” or “data-driven coding”.

Below are some key features of inductive coding, each of which you should note in your methods section.

1. It is a Qualitative Approach

It is a qualitative method , meaning its focus is on generating themes that are contextualized, interpretive, and engaged with socially-mediated meanings (i.e. pragmatics and semantics , both of which are worth exploring to familiarize yourself with how to read texts-in-context).

Note that as a qualitative method of content analysis , its strength is in its ability to develop deep and nuanced understandings of texts. It doesn’t focus on counting occurrences of words or phrases. Rather, it attempts to generate overarching themes from texts.

2. Themes Emerge During the Coding Process

There are two ways you could code the text. You could either:

  • Apply a pre-existing theoretical framework and pre-existing themes to the text before coding the data. This is good for testing hypotheses, or
  • Approach the data with an open mind, with the goal of having themes organically emerge during the reading.

Inductive coding uses the second approach: our goal is to let themes emerge during our close reading of the text. If you want to test pre-set themes against the dataset, you might want to read my article on deductive coding.

This open-minded approach, where themes emerge while we read the data rather than before, means inductive coding is called a grounded approach (Charmaz, 2014).

There are several advantages to letting the themes emerge during the data coding process. These include:

  • Researchers can uncover themes that they may not have previously considered.
  • Researchers’ initial biases are (supposedly) not baked into the themes that are generated (Thomas, 2006).
  • It can lead to a more nuanced and open-minded understanding of the studied phenomena.

3. It is an Iterative Process

Inductive coding is an iterative process, which means that it involves repeatedly reading the data and refining the codes over each re-reading of the text.

This means that, as we become more familiar and immersed in the text, we should be able to refine our insights and find deeper and deeper layers to the text each time we read it (Elo & Kyngäs, 2008).

Inductive vs Deductive Coding

Inductive and deductive coding are two primary methods used in qualitative research to analyze and interpret data.

In your methods section, it’s worth explaining why you chose an inductive approach over a deductive approach.

Here’s a definition of each:

  • Inductive Coding: Codes and themes are derived directly from the data (i.e. while you read the transcripts), not from pre-conceived theories or hypotheses.
  • Deductive Coding: Codes and themes are pre-determined based on existing theories, hypotheses, or research questions. Reading the data is about testing the theories (e.g. seeing how strong your pre-conceived themes and hypotheses are) rather than generating themes and theories from the text.

Inductive and deductive coding are in many ways opposites.

Inductive coding starts with specific observations made while reading the text, then develops broader patterns or themes from the dataset. Eventually, you’ll be able to develop a theory or overarching argument that answers the research question.

The advantage of inductive coding is that it allows for new and unexpected themes to emerge from the data, potentially leading to novel insights. However, it lacks the clear structure of a deductive coding method. Researchers (especially new researchers) can feel lost and confused during the theme generation process.

Conversely, deductive coding begins with a theory or hypothesis, then seeks to test this against the observed data. For example, if you already have four themes that you really want to test in the dataset, you might use deductive coding to see if the themes match your hypothesis that you’ve already generated prior to reading the data.

Deductive coding allows researchers to directly address specific research questions or hypotheses. I will often use it if I’ve done a literature review and concluded it with a focused interest in addressing a ‘gap in the literature’ or testing some themes that emerged from the academic literature on the topic.

The deductive approach to coding can be more straightforward and quicker than inductive coding as it involves coding data to pre-determined categories. However, a significant downside (and, to me, the reason inductive coding is superior) is that deductive coding may not capture all the nuances in the data and often fails to reveal unexpected or novel themes not accounted for in the initial hypothesis, nor previous literature on the topic.

Below is a comparison table that outlines the differences:

Inductive Coding | Deductive Coding
--- | ---
Codes and themes are derived directly from the data | Codes and themes are pre-determined based on existing theories, hypotheses, or research questions
Broadening: begins with specific observations, leading to broader patterns or themes | Narrowing: begins with a theory or hypothesis, and tests this against observed data
Allows for new and unexpected themes to emerge from the data | More straightforward and quicker than inductive coding, as it involves coding data to pre-determined categories
Time-consuming and requires significant expertise | May not capture all the nuances in the data; unexpected or novel themes may be overlooked

Inductive Coding Step by Step Guide

At this point, you’re probably wondering exactly how to go about reading your data and developing themes (aka ‘coding’).

Inductive coding aims to be flexible and let themes emerge as we work. Nevertheless, you will still need some guidelines for approaching the dataset so you know what to do first, second, third, etc.

For this, I lean heavily on Braun and Clarke’s (2006) Using thematic analysis in psychology and Attride-Stirling’s (2001) Thematic Networks. The following step-by-step guide is based on and adapted from their important work.

Step 1: Read the Data

The first step in inductive coding is immersing yourself in the data by reading and rereading your transcripts multiple times (Elo & Kyngäs, 2008). Literally read the transcripts.

At this point, you may wish to take notes in a notebook or on post-its about your initial thoughts or impressions. These general notes will likely be messy; they are your initial thoughts. Don’t overthink it or try to impose too much order at this point. Simply take rough notes. Your goal is to start to engage with the transcripts and ‘get to know them’.

Example: Imagine you are working with interview transcripts from first-year teachers whom you interviewed about their transition from college to teaching. During the initial reading of the data, you might take some rough notes on the experiences, emotions, and events detailed by the teachers.

Step 2: Create Codes

By the second read-through, you should have a general sense of the ideas that recur throughout the text.

You will read a comment and be able to remember that this comment comes up again later in the transcript. In other words, you can read the transcripts with greater contextual knowledge.

At this point, directly label the transcripts with codes: brief notes on ideas that you have noticed come up more than once in the dataset (see example below).

These observations might not be combined into coherent themes yet, but they represent potential areas for more in-depth analysis, which will be brought together into a coherent theme in the next step.

Example: When re-reading your transcripts of the new teachers’ interviews, start to take notes in the margins on topics that seem to jump out because they occur several times. For example, beside one paragraph you might write ‘talks about anxiety’ or ‘feels like she lacks support’. These are your basic codes.

Step 3: Develop Basic Themes

In your third reading, your goal is to combine your codes into what Attride-Stirling (2001) calls ‘basic themes’. Identify codes that seem to point to a similar theme, or recurring idea, that you see throughout your dataset.

These basic themes are the building blocks of your emerging hypothesis or argument in your dissertation. They might be used to create paragraphs in your ‘discussion’ section.

Note that this is the longest step as it is an iterative process that involves moving back and forth between the data, codes, and emerging themes, continuously refining and defining them until you have a set of themes that accurately represent the data (Elo & Kyngäs, 2008).

Example: You might find that you wrote ‘talks about anxiety’ in the margin of one transcript, ‘feels worried’ on another, and ‘is losing sleep’ on a third. Here, it’s your job as the researcher to recognize that these codes all cohere around a basic theme: “Anxiety”. Beside each of these comments, tag them as being part of the theme. I use # to write themes, so beside each of those three margin comments, I would write #Anxiety.

Step 4: Develop Organizational Themes

Now that you have basic themes, you’ll want to see if you can combine themes that point to a consistent higher-level idea emerging from the data. Attride-Stirling calls these ‘organizational themes’. I use organizational themes as the basis for chapters in my discussion and analysis.

Example: Returning to our transcripts of new teachers, I might have found the following themes: #Anxiety, #Stress, #Joy, #Exhaustion, #Learning-From-Mistakes, #Building-Resources, #Behavior-Management-Improvement, #New-Skills. Here, I feel I can develop two organizational themes:

  • “New Teachers are Feeling Overwhelmed” (combining #Anxiety, #Stress, and #Exhaustion).
  • “New Teachers are Experiencing a Rapid Learning Curve” (combining #Learning-From-Mistakes, #Building-Resources, #Behavior-Management-Improvement, and #New-Skills).

Step 5: Develop a Global Theme

Lastly, you will want to develop a global theme: the core argument of your dissertation. It is the one-sentence elevator pitch for your dissertation, drawing together all of your findings into one final theme, stated as an argument. See the example below.

Example: I need to find an overarching argument that combines my two organizational themes “New teachers are feeling overwhelmed” and “New teachers are experiencing a rapid learning curve.” I decide upon the global theme: “New teachers need extra support to manage overwhelm during their rapid learning curve.”

From Inductive Coding to Dissertation Discussion

Allow the inductive coding process to shape the organizational structure of the discussion sections of your dissertation.

Generally, I encourage students to shape their dissertation in the following way:

  • The global theme becomes the thesis statement, phrased as an assertion about the dataset.
  • The organizational themes, which underpin the global theme, become the discussion chapters, phrased again as assertions about the dataset.
  • The basic themes become sub-sections in the discussion chapters. Each basic theme could be 500-800 words long and include key quotes directly from the dataset, as well as detailed analysis of the quotes.

For the above sample study on the experiences of new teachers, the discussion section would end up looking like this:

Title: “The Experiences and Challenges of Transitioning to Teaching”

Thesis Statement: “New teachers can experience overwhelm from a rapid learning curve and require support to manage their professional transition.” (Global Theme)

  • Chapter 1: New Teachers are Experiencing a Rapid Learning Curve (Organizational Theme)
  • Sub-Section 1.1: How new teachers learn from their mistakes (Basic Theme)
  • Sub-Section 1.2: New teachers expend time and energy building a resource library from scratch (Basic Theme)
  • Sub-Section 1.3: The new teachers are struggling with behavior management in the classroom (Basic Theme)
  • Sub-Section 1.4: The new teachers are developing a vast array of new skills in the first year (Basic Theme)
  • Chapter 2: New Teachers are Feeling Overwhelmed (Organizational Theme)
  • Sub-Section 2.1: The new teachers report high levels of anxiety (Basic Theme)
  • Sub-Section 2.2: The new teachers report high levels of stress (Basic Theme)
  • Sub-Section 2.3: The new teachers report high levels of exhaustion (Basic Theme)

Strengths and Limitations of an Inductive Coding Approach

  • Discovery of New Themes: One of the key strengths of an inductive coding approach is the ability to uncover novel themes or patterns that may not have been previously considered (Thomas, 2006). Because inductive coding does not require a pre-existing theoretical framework, researchers are often able to identify unexpected trends and insights in their data.
  • Grounded in Data: Inductive coding is inherently grounded in the data itself, offering an authentic, detailed, and nuanced understanding of the studied phenomena. This ensures that findings are directly linked to the perspectives and experiences of the participants (Braun & Clarke, 2006).
  • Flexible and Adaptable: This approach is flexible and can adapt to the needs of various research designs. It does not require hypotheses or pre-determined categories, making it suitable for exploratory studies or topics about which less is known (Charmaz, 2014).

Limitations

  • Time-consuming: The inductive coding process can be quite time-intensive, as it involves close, detailed analysis of the data and requires multiple iterations to refine codes and themes (Elo & Kyngäs, 2008).
  • Requires Skill: To perform inductive coding effectively, researchers must have a solid understanding of the process and a certain level of qualitative analysis skill. Without it, there’s a risk of missing significant nuances in the data or misinterpreting participant meanings (Saldana, 2015). New researchers should study pragmatics, semiotics, or semantics to develop skills in coding data.
  • Potential for Researcher Bias: While the approach is grounded in the data, the interpretative nature of inductive coding means that researcher bias can potentially influence the coding and theme development process. Researchers must be conscious of this and take steps to limit the influence of their preconceptions on the analysis (Charmaz, 2014). To minimize this bias, you could ask peers to code the data separately from you, then compare and contrast your themes.

Inductive coding is my preferred approach to coding data for a qualitative dissertation based on a textual dataset, such as transcriptions of semi-structured interviews or content analyses of purely textual sources. With more complex datasets, such as multimodal texts, you might need an approach such as discourse analysis, multimodal analysis, or a media analysis method. If your goal is to directly test a set of pre-defined themes, you’ll want to go with a deductive coding approach.

Be sure to clearly explain why you chose inductive coding in your methods section, including its strengths and weaknesses, before stating why, on balance, you think the strengths outweigh the weaknesses. Furthermore, explain your step-by-step process of generating codes, basic themes, organizational themes, and a global theme, so your readers have a clear understanding of how you analyzed the data.

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3 (2), 77-101.

Charmaz, K. (2014). Constructing grounded theory . New York: Sage.

Elo, S., & Kyngäs, H. (2008). The qualitative content analysis process. Journal of Advanced Nursing, 62 (1), 107-115.

Saldana, J. (2015). The coding manual for qualitative researchers . New York: Sage.

Thomas, D. R. (2006). A general inductive approach for analyzing qualitative evaluation data. American Journal of Evaluation, 27 (2), 237-246.

Author: Chris Drew (PhD)


Introduction to Hypothesis Testing

Learn how to run t-tests and binomial tests in this introduction to Hypothesis Testing.


About this course

Get started with hypothesis testing by examining a one-sample t-test and binomial tests — both used for drawing inference about a population based on a smaller sample from that population.
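As a sketch of the second kind of test mentioned above (the counts here are invented, not course material), a binomial test checks whether an observed proportion is consistent with a hypothesized one:

```python
# Binomial test: are 57 heads in 100 flips consistent with a fair coin (p = 0.5)?
from scipy.stats import binomtest  # requires SciPy >= 1.7

result = binomtest(k=57, n=100, p=0.5)
reject_null = result.pvalue < 0.05  # here p is well above 0.05
```

With these counts the p-value is roughly 0.19, so we fail to reject the null hypothesis that the coin is fair.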

Skills you’ll gain:

  • Set up a hypothesis test
  • Explain what statistical significance means in the context of hypothesis tests
  • Conduct t-tests and binomial tests


Hypothesis testing basics with t-tests

Learn about the foundations of hypothesis testing and one-sample t-tests.

Hypothesis Testing in R: A Comprehensive Guide with Code and Examples


Hypothesis testing is a fundamental statistical technique used to make inferences about population parameters based on sample data. It helps us determine whether an observed effect or difference is statistically significant or if it could have occurred by random chance. In this guide, we’ll explore hypothesis testing in R, a powerful statistical programming language, through practical examples and code snippets.

Understanding the Hypothesis Testing Process

Before diving into code examples, let’s grasp the key concepts of hypothesis testing:

  • Null Hypothesis (H0): This is the default assumption that there is no significant effect, difference, or relationship in the population. It’s often denoted as H0.
  • Alternative Hypothesis (Ha): This is the statement we want to test; it asserts that there is a significant effect, difference, or relationship in the population. It’s often denoted as Ha.
  • Significance Level (α): This is the predetermined threshold that defines when we reject the null hypothesis. Common values are 0.05 or 0.01, representing a 5% or 1% chance of making a Type I error (false positive), respectively.
  • Test Statistic: A statistic calculated from the sample data that measures the strength of evidence against the null hypothesis.
  • P-value: The probability of observing a test statistic as extreme as, or more extreme than, the one calculated from the sample data under the null hypothesis. A smaller p-value suggests stronger evidence against the null hypothesis.
  • Decision Rule: Based on the p-value, we decide whether to reject the null hypothesis. If the p-value is less than α, we reject H0; otherwise, we fail to reject it.
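The decision rule above can be sketched as a tiny helper function (alpha = 0.05 is just the conventional default):

```python
# Minimal sketch of the decision rule: reject H0 when the p-value
# falls below the chosen significance level alpha.
def decide(p_value: float, alpha: float = 0.05) -> str:
    return "reject H0" if p_value < alpha else "fail to reject H0"

print(decide(0.003))  # reject H0
print(decide(0.27))   # fail to reject H0
```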

Now, let’s explore some practical examples of hypothesis testing in R.

Example 1: One-Sample T-Test

Suppose we have a dataset of exam scores, and we want to test if the average score is significantly different from 75.

In this example, we perform a one-sample t-test to determine if the sample mean is significantly different from 75. The resulting p-value will help us make the decision.
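The article’s R snippet is not reproduced above; it uses R’s built-in t.test() (here, t.test(scores, mu = 75)). As a sketch of the same test in Python, with invented sample data, SciPy’s ttest_1samp is the equivalent:

```python
# One-sample t-test: is the mean exam score significantly different from 75?
# The scores below are invented illustration data, not the article's dataset.
import numpy as np
from scipy.stats import ttest_1samp

scores = np.array([72, 78, 81, 69, 75, 74, 80, 77, 73, 76])
t_stat, p_value = ttest_1samp(scores, popmean=75)

# Decision at the 0.05 significance level
reject_null = p_value < 0.05  # the sample mean (75.5) is close to 75
```

With this data the p-value is large, so we fail to reject the null hypothesis that the population mean is 75.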

Example 2: Two-Sample T-Test

Let’s say we want to compare the exam scores of two different classes (Class A and Class B) to see if there is a significant difference between their average scores.

Here, we perform a two-sample t-test to compare the means of two independent samples (Class A and Class B).
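Again the R code itself (a t.test() call on the two vectors) is not shown above; as a hedged Python sketch with invented scores, ttest_ind compares the two independent samples:

```python
# Two-sample t-test: do Class A and Class B have different average scores?
# Both score lists are invented for illustration.
import numpy as np
from scipy.stats import ttest_ind

class_a = np.array([78, 82, 75, 88, 80, 77, 85, 79])
class_b = np.array([70, 68, 74, 72, 69, 75, 71, 73])

t_stat, p_value = ttest_ind(class_a, class_b)  # assumes equal variances by default
reject_null = p_value < 0.05
```

Here the group means (80.5 vs 71.5) are far apart relative to the spread, so the p-value is small and we reject the null hypothesis of equal means.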

Example 3: Chi-Square Test of Independence

Suppose we have data on the preferred mode of transportation for two groups of people (Group X and Group Y), and we want to test if there is an association between the groups and their transportation preferences.

In this example, we use a chi-square test to determine if there is an association between the groups and their transportation preferences.
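The original R snippet (a chisq.test() call on a contingency table) is not shown above; a Python sketch with invented counts uses chi2_contingency:

```python
# Chi-square test of independence: is transportation preference
# associated with group membership? Counts are invented for illustration.
import numpy as np
from scipy.stats import chi2_contingency

#                 car  bike  bus
table = np.array([[30,  10,  20],   # Group X
                  [20,  25,  15]])  # Group Y

chi2, p_value, dof, expected = chi2_contingency(table)
reject_null = p_value < 0.05  # a small p-value suggests an association
```

For a 2x3 table the test has (2-1)(3-1) = 2 degrees of freedom; with these counts the p-value is small, suggesting group and preference are associated.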

Hypothesis testing is a powerful tool for making data-driven decisions in fields from medicine to business. In R, you can conduct a wide range of hypothesis tests using built-in functions such as t.test() and chisq.test(). Remember to set your significance level appropriately and interpret the results cautiously based on the p-value.

By mastering hypothesis testing in R, you’ll be better equipped to draw meaningful conclusions from your data and make informed decisions in your research and analysis.

A hypothesis is a testable statement that explains what is happening or being observed. It proposes a relationship between the participating variables. A hypothesis is sometimes loosely called a theory, thesis, educated guess, assumption, or suggestion, though strictly it is an unproven proposal. A hypothesis creates a structure that guides the search for knowledge.

In this article, we will learn what a hypothesis is, along with its characteristics, types, and examples. We will also learn how hypotheses help in scientific research.

What is a Hypothesis?

A hypothesis is a proposed idea or explanation with limited initial evidence, meant to lead to further study. It is essentially an educated guess or suggested answer to a problem that can be checked through research and experiment. In scientific work, hypotheses are formulated to predict what will happen in tests or observations. They are not certainties, but ideas that can be supported or refuted by real-world evidence. A good hypothesis is clear, testable, and falsifiable if the evidence does not support it.

A hypothesis is a proposed, testable statement offered to explain something that happens or is observed.
  • It is made using what we already know and have observed, and it is the basis for scientific research.
  • A clear hypothesis tells us what we expect to happen in an experiment or study.
  • It is a testable claim that can be supported or refuted with real-world evidence and careful checking.
  • It often takes the form of an “if-then” statement, expressing the expected cause-and-effect relationship between the variables being studied.

Here are some key characteristics of a hypothesis:

  • Testable: A hypothesis should be framed so it can be tested through experiment or observation. It should state a clear connection between variables.
  • Specific: It needs to be focused, addressing a particular aspect of, or relationship between, the variables in a study.
  • Falsifiable: A good hypothesis must be capable of being shown wrong; there must be possible evidence or observations that would contradict it.
  • Logical and Rational: It should be based on current knowledge or observation, giving a reasonable explanation that fits with what is already known.
  • Predictive: A hypothesis often states what to expect from an experiment or observation, giving a guide for what would be seen if it is correct.
  • Concise: It should be short and clear, stating the proposed link or explanation simply, without unnecessary complication.
  • Grounded in Research: A hypothesis is usually drawn from prior studies, theories, or observations, and comes from a solid understanding of what is already known in the area.
  • Flexible: A hypothesis guides the research, but it may need to be revised when new information emerges.
  • Relevant: It should relate to the question or problem being studied, helping to direct the focus of the research.
  • Empirical: Hypotheses come from observations and can be tested using methods based on real-world evidence.

Hypotheses can come from different places depending on what you are studying and the kind of research. Here are some common sources from which hypotheses may originate:

  • Existing Theories: Hypotheses often come from established scientific theories, which may suggest connections between variables or phenomena that researchers can investigate further.
  • Observation and Experience: Watching something happen, or personal experience, can prompt a hypothesis. Noticing unusual or recurring events in everyday life and in experiments can suggest explanations to test.
  • Previous Research: Building on earlier studies or discoveries can generate new hypotheses. Researchers may try to extend or challenge current findings.
  • Literature Review: Reviewing the books and research in a field can reveal gaps or inconsistencies in previous studies, prompting hypotheses that address them.
  • Problem Statement or Research Question: Hypotheses often arise from the questions or problems a study sets out to address. Clarifying what needs to be investigated helps create hypotheses that tackle specific parts of the issue.
  • Analogies or Comparisons: Drawing comparisons between similar phenomena, or borrowing insights from related fields, can suggest hypotheses in a new context.
  • Hunches and Speculation: Sometimes researchers have intuitions or make speculative guesses that lead to testable ideas. Though unsupported at first, they can be a starting point for deeper investigation.
  • Technology and Innovations: New technology or tools can suggest hypotheses by making it possible to examine things that were previously hard to study.
  • Personal Interest and Curiosity: A researcher’s curiosity and personal interest in a topic can generate hypotheses worth testing.

Here are some common types of hypotheses:

  • Simple Hypothesis: proposes a relationship between one independent variable and one dependent variable.
  • Complex Hypothesis: predicts what will happen when more than two variables are involved, looking at how several variables interact and may be linked together.
  • Directional Hypothesis: specifies how one variable is expected to relate to another; for example, that one variable will increase or decrease another.
  • Non-Directional Hypothesis: states that a relationship or difference exists between variables without specifying its direction.
  • Null Hypothesis (H0): states that there is no relationship or difference between the variables; any observed effects are due to chance or random variation in the data.
  • Alternative Hypothesis (H1 or Ha): the opposite of the null hypothesis; states that there is a significant relationship or difference between the variables. Researchers aim to reject the null hypothesis in favor of the alternative.
  • Statistical Hypothesis: a statement about a population or its parameters that can be evaluated with statistical tests on sample data.
  • Research Hypothesis: derived from the research question; states the expected relationship between variables and guides where the study looks more closely.
  • Associative Hypothesis: proposes that variables are linked or change together, without claiming that one causes the other.
  • Causal Hypothesis: proposes a cause-and-effect relationship: a change in one variable directly produces a change in another.
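In standard statistical notation, the null and alternative pair for a one-sample test of a mean can be written as:

```latex
H_0:\ \mu = \mu_0 \qquad \text{vs.} \qquad H_a:\ \mu \neq \mu_0
```

Here $\mu$ is the population mean and $\mu_0$ the hypothesized value; a directional (one-sided) alternative would instead use $\mu > \mu_0$ or $\mu < \mu_0$.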

Following are examples of hypotheses of various types:

  • Studying more can help you do better on tests.
  • Getting more sun exposure leads to higher vitamin D levels.
  • Income level and access to education and healthcare together affect life expectancy.
  • A new medicine’s success depends on the dose, the patient’s age, and their genetics.
  • Drinking more sugary drinks is linked to a higher body mass index.
  • Too much stress makes people less productive at work.
  • Drinking caffeine can affect how well you sleep.
  • Music preferences differ by gender.
  • The average test scores of Group A and Group B are not significantly different.
  • There is no connection between using a certain fertilizer and crop growth.
  • Patients on Diet A have significantly different cholesterol levels than those on Diet B.
  • Exposure to a certain type of light changes how plants grow compared to normal sunlight.
  • The average IQ score of children in a certain school district is 100.
  • The average time to finish a job using Method A is the same as with Method B.
  • Attending early-childhood education programs helps children do better in school later on.
  • Using specific messaging styles affects how much customers engage with marketing activities.
  • Regular exercise lowers the risk of heart disease.
  • More schooling helps people earn more money.
  • Playing violent video games makes teens more likely to act aggressively.
  • Poorer air quality directly affects respiratory health in urban populations.

Hypotheses serve many important functions in the process of scientific research. Here are the key ones:

  • Guiding Research: Hypotheses give research a clear and exact direction. They act like guides, stating the predicted connections or results that scientists want to study.
  • Formulating Research Questions: Hypotheses help turn broad questions into specific, testable statements, focusing what the study should examine.
  • Setting Clear Objectives: Hypotheses set the goals of a study by stating which relationships between variables should be examined. They set the targets that scientists try to reach with their studies.
  • Testing Predictions: Hypotheses predict what will happen in experiments or observations. By testing systematically, scientists can check whether what they observe matches those predictions.
  • Providing Structure: Hypotheses give structure to the study process by organizing thoughts and ideas, helping scientists reason about connections between variables and design experiments to match.
  • Focusing Investigations: By clearly stating the expected links or results, hypotheses help scientists concentrate on specific parts of their research question, making the work more efficient.
  • Facilitating Communication: Clearly stated hypotheses help scientists communicate their plans, methods, and expected results effectively to colleagues and wider audiences.
  • Generating Testable Statements: A good hypothesis can be scrutinized or tested through experiments, ensuring that it contributes to the empirical basis of scientific knowledge.
  • Promoting Objectivity: Hypotheses give research a clear rationale that guides the process while reducing personal bias, motivating scientists to use facts and data to support or refute their proposed answers.
  • Driving Scientific Progress: Forming, testing, and refining hypotheses is a cycle. Whether a hypothesis is supported or refuted, the knowledge gained helps grow understanding of the area.

Researchers use hypotheses to set down the ideas that direct how an investigation will proceed. The following steps are involved in the scientific method:

  • Initiating Investigations: Hypotheses are the starting point of scientific research. They arise from observation, existing knowledge, or open questions, prompting scientists to propose explanations that need to be checked with tests.
  • Formulating Research Questions: Hypotheses usually stem from broader research questions and help make those questions precise and testable, guiding the study’s focus.
  • Setting Clear Objectives: Hypotheses state what relationships between variables are expected, setting the goals researchers aim to reach through their studies.
  • Designing Experiments and Studies: Hypotheses help plan experiments and observational studies, telling scientists which factors to measure, which techniques to use, and what data to gather.
  • Testing Predictions: Hypotheses predict what will happen in experiments or observations. By checking these predictions carefully, scientists can see whether the observed results match each hypothesis.
  • Analysis and Interpretation of Data: Hypotheses give a framework for analyzing and making sense of data. Researchers examine their findings and decide whether the evidence supports or contradicts the proposed explanations.
  • Encouraging Objectivity: Hypotheses promote fairness by requiring scientists to use facts and evidence to support or refute their proposed explanations, limiting personal preference.
  • Iterative Process: Whether hypotheses are supported or refuted, they feed the ongoing cycle of science: findings from testing raise new questions, refine the ideas, and prompt further tests, so learning continues.


Summary – Hypothesis

A hypothesis is a testable statement serving as an initial explanation for phenomena, based on observations, theories, or existing knowledge. It acts as a guiding light for scientific research, proposing potential relationships between variables that can be empirically tested through experiments and observations.

The hypothesis must be specific, testable, falsifiable, and grounded in prior research or observation, laying out a predictive, if-then scenario that details a cause-and-effect relationship. It originates from various sources including existing theories, observations, previous research, and even personal curiosity, leading to different types, such as simple, complex, directional, non-directional, null, and alternative hypotheses, each serving distinct roles in research methodology .

The hypothesis not only guides the research process by shaping objectives and designing experiments but also facilitates objective analysis and interpretation of data, ultimately driving scientific progress through a cycle of testing, validation, and refinement.

Hypothesis – FAQs

What is a Hypothesis?

A hypothesis is a possible explanation or prediction that can be checked through research and experiments.

What are Components of a Hypothesis?

The components of a hypothesis include the independent variable, the dependent variable, the relationship between the variables, and directionality.

What makes a Good Hypothesis?

Testability, falsifiability, clarity and precision, and relevance are some parameters that make a good hypothesis.

Can a Hypothesis be Proven True?

You cannot prove conclusively that most hypotheses are true because it’s generally impossible to examine all possible cases for exceptions that would disprove them.

How are Hypotheses Tested?

Hypothesis testing is used to assess the plausibility of a hypothesis by using sample data.
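As a concrete illustration, here is a minimal sketch of a two-sided one-proportion z-test on hypothetical coin-flip data, using only the Python standard library (the sample numbers are made up for the example):

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical sample: 100 coin flips with 60 heads.
# Null hypothesis H0: the coin is fair (p = 0.5).
# Alternate hypothesis H1: the coin is biased (p != 0.5).
n, heads, p0 = 100, 60, 0.5
p_hat = heads / n

# Two-sided one-proportion z-test.
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"z = {z:.2f}, p-value = {p_value:.4f}")  # z = 2.00, p-value = 0.0455
if p_value < 0.05:
    print("Reject the null hypothesis at the 5% significance level")
```

Here the p-value falls below the conventional 0.05 threshold, so the sample provides evidence against the null hypothesis of a fair coin.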

Can Hypotheses change during Research?

Yes, a hypothesis can be changed or refined based on new information discovered during the research process.

What is the Role of a Hypothesis in Scientific Research?

Hypotheses are used to support scientific research and bring about advancements in knowledge.


What you can generate and how ¶

Most things should be easy to generate and everything should be possible.

To support this principle Hypothesis provides strategies for most built-in types with arguments to constrain or adjust the output, as well as higher-order strategies that can be composed to generate more complex types.

This document is a guide to what strategies are available for generating data and how to build them. Strategies have a variety of other important internal features, such as how they simplify, but the data they can generate is the only public part of their API.

Core strategies ¶

Functions for building strategies are all available in the hypothesis.strategies module. The salient functions from it are as follows:

Generates bytes.

The generated bytes will have a length of at least min_size and at most max_size. If max_size is None there is no upper limit.

Examples from this strategy shrink towards smaller strings and lower byte values.

Returns a strategy which generates instances of bool.

Examples from this strategy will shrink towards False (i.e. shrinking will replace True with False where possible).

Generates values by drawing from args and kwargs and passing them to the callable (provided as the first positional argument) in the appropriate argument position.

e.g. builds(target, integers(), flag=booleans()) would draw an integer i and a boolean b and call target(i, flag=b) .

If the callable has type annotations, they will be used to infer a strategy for required arguments that were not passed to builds. You can also tell builds to infer a strategy for an optional argument by passing ... ( Ellipsis ) as a keyword argument to builds, instead of a strategy for that argument to the callable.

If the callable is a class defined with attrs , missing required arguments will be inferred from the attribute on a best-effort basis, e.g. by checking attrs standard validators . Dataclasses are handled natively by the inference from type hints.

Examples from this strategy shrink by shrinking the argument values to the callable.
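A short sketch of how builds() composes positional and keyword strategies with a type-annotated class (the User dataclass here is hypothetical):

```python
from dataclasses import dataclass

from hypothesis import given, strategies as st


@dataclass
class User:  # hypothetical example class
    id: int
    active: bool


# Positional strategies fill positional arguments; keyword strategies fill keywords.
@given(st.builds(User, st.integers(min_value=1), active=st.booleans()))
def test_user(user):
    assert user.id >= 1
    assert isinstance(user.active, bool)


test_user()  # runs the property over many generated User instances
```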

Generates characters (length-one str instances) following specified filtering rules.

When no filtering rules are specified, any character can be produced.

If min_codepoint or max_codepoint is specified, then only characters having a codepoint in that range will be produced.

If categories is specified, then only characters from those Unicode categories will be produced. This is a further restriction: characters must also satisfy min_codepoint and max_codepoint .

If exclude_categories is specified, then any character from those categories will not be produced. You must not pass both categories and exclude_categories ; these arguments are alternative ways to specify exactly the same thing.

If include_characters is specified, then any additional characters in that list will also be produced.

If exclude_characters is specified, then any characters in that list will not be produced. Any overlap between include_characters and exclude_characters will raise an exception.

If codec is specified, only characters in the specified codec encodings will be produced.

The _codepoint arguments must be integers between zero and sys.maxunicode . The _characters arguments must be collections of length-one unicode strings, such as a unicode string.

The _categories arguments must be used to specify either the one-letter Unicode major category or the two-letter Unicode general category . For example, ('Nd', 'Lu') signifies “Number, decimal digit” and “Letter, uppercase”. A single letter (‘major category’) can be given to match all corresponding categories, for example 'P' for characters in any punctuation category.

We allow codecs from the codecs module and their aliases, platform specific and user-registered codecs if they are available, and python-specific text encodings (but not text or binary transforms). include_characters which cannot be encoded using this codec will raise an exception. If non-encodable codepoints or categories are explicitly allowed, the codec argument will exclude them without raising an exception.

Examples from this strategy shrink towards the codepoint for '0' , or the first allowable codepoint after it if '0' is excluded.

Returns a strategy that generates complex numbers.

This strategy draws complex numbers with constrained magnitudes. The min_magnitude and max_magnitude parameters should be non-negative Real numbers; a value of None corresponds to an infinite upper bound.

If min_magnitude is nonzero or max_magnitude is finite, it is an error to enable allow_nan . If max_magnitude is finite, it is an error to enable allow_infinity .

allow_infinity , allow_nan , and allow_subnormal are applied to each part of the complex number separately, as for floats() .

The magnitude constraints are respected up to a relative error of (around) floating-point epsilon, due to implementation via the system sqrt function.

The width argument specifies the maximum number of bits of precision required to represent the entire generated complex number. Valid values are 32, 64 or 128, which correspond to the real and imaginary components each having width 16, 32 or 64, respectively. Passing width=64 will still use the builtin 128-bit complex class, but always for values which can be exactly represented as two 32-bit floats.

Examples from this strategy shrink by shrinking their real and imaginary parts, as floats() .

If you need to generate complex numbers with particular real and imaginary parts or relationships between parts, consider using builds(complex, ...) or @composite respectively.

Defines a strategy that is built out of potentially arbitrarily many other strategies.

This is intended to be used as a decorator. See the full documentation for more details about how to use this function.

Examples from this strategy shrink by shrinking the output of each draw call.
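For instance, @composite lets later draws depend on earlier ones, which plain combinators cannot express directly. A sketch generating strictly ordered pairs:

```python
from hypothesis import given, strategies as st


@st.composite
def ordered_pairs(draw):
    """Draw a pair (x, y) with x < y by chaining two draws."""
    x = draw(st.integers())
    y = draw(st.integers(min_value=x + 1))  # second draw depends on the first
    return (x, y)


@given(ordered_pairs())
def test_ordered(pair):
    x, y = pair
    assert x < y


test_ordered()
```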

This isn’t really a normal strategy, but instead gives you an object which can be used to draw data interactively from other strategies.

See the rest of the documentation for more complete information.

Examples from this strategy do not shrink (because there is only one), but the result of calls to each data.draw() call shrink as they normally would.

This type only exists so that you can write type hints for tests using the data() strategy. Do not use it directly!

A strategy for dates between min_value and max_value .

Examples from this strategy shrink towards January 1st 2000.

A strategy for generating datetimes, which may be timezone-aware.

This strategy works by drawing a naive datetime between min_value and max_value , which must both be naive (have no timezone).

timezones must be a strategy that generates either None , for naive datetimes, or tzinfo objects for ‘aware’ datetimes. You can construct your own, though we recommend using one of these built-in strategies:

with Python 3.9 or newer or backports.zoneinfo : hypothesis.strategies.timezones() ;

with dateutil : hypothesis.extra.dateutil.timezones() ; or

with pytz : hypothesis.extra.pytz.timezones() .

You may pass allow_imaginary=False to filter out “imaginary” datetimes which did not (or will not) occur due to daylight savings, leap seconds, timezone and calendar adjustments, etc. Imaginary datetimes are allowed by default, because malformed timestamps are a common source of bugs.

Examples from this strategy shrink towards midnight on January 1st 2000, local time.

Generates instances of decimal.Decimal , which may be:

A finite rational number, between min_value and max_value .

Not a Number, if allow_nan is True. None means “allow NaN, unless min_value and max_value are not None”.

Positive or negative infinity, if max_value and min_value respectively are None, and allow_infinity is not False. None means “allow infinity, unless excluded by the min and max values”.

Note that where floats have one NaN value, Decimals have four: signed, and either quiet or signalling . See the decimal module docs for more information on special values.

If places is not None, all finite values drawn from the strategy will have that number of digits after the decimal place.

Examples from this strategy do not have a well defined shrink order but try to maximize human readability when shrinking.

A deferred strategy allows you to write a strategy that references other strategies that have not yet been defined. This allows for the easy definition of recursive and mutually recursive strategies.

The definition argument should be a zero-argument function that returns a strategy. It will be evaluated the first time the strategy is used to produce an example.

Example usage:
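A sketch of the self-recursive case, where a value is either a boolean or a pair of such values:

```python
from hypothesis import given, strategies as st

# x refers to itself before it is fully defined - deferred makes this legal
# by delaying evaluation of the lambda until the strategy is first used.
x = st.deferred(lambda: st.booleans() | st.tuples(x, x))


@given(x)
def test_bool_tree(value):
    assert isinstance(value, (bool, tuple))


test_bool_tree()
```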

Mutual recursion also works fine:
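A sketch of the mutually recursive case, where two strategies each reference the other:

```python
from hypothesis import given, strategies as st

# a and b each reference the other; deferred defers evaluation until first use.
a = st.deferred(lambda: st.booleans() | b)
b = st.deferred(lambda: st.tuples(a, a))


@given(a)
def test_mutual(value):
    assert isinstance(value, (bool, tuple))


test_mutual()
```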

Examples from this strategy shrink as they normally would from the strategy returned by the definition.

Generates dictionaries of type dict_class with keys drawn from the keys argument and values drawn from the values argument.

The size parameters have the same interpretation as for lists() .

Examples from this strategy shrink by trying to remove keys from the generated dictionary, and by shrinking each generated key and value.

This type only exists so that you can write type hints for functions decorated with @composite .

A strategy for generating email addresses as unicode strings. The address format is specified in RFC 5322#section-3.4.1 . Values shrink towards shorter local-parts and host domains.

If domains is given then it must be a strategy that generates domain names for the emails, defaulting to domains() .

This strategy is useful for generating “user data” for tests, as mishandling of email addresses is a common source of bugs.

Generates a dictionary of the same type as mapping with a fixed set of keys mapping to strategies. mapping must be a dict subclass.

Generated values have all keys present in mapping, in iteration order, with the corresponding values drawn from mapping[key].

If optional is passed, the generated value may or may not contain each key from optional and a value drawn from the corresponding strategy. Generated values may contain optional keys in an arbitrary order.

Examples from this strategy shrink by shrinking each individual value in the generated dictionary, and omitting optional key-value pairs.

Returns a strategy which generates floats.

If min_value is not None, all values will be >= min_value (or > min_value if exclude_min ).

If max_value is not None, all values will be <= max_value (or < max_value if exclude_max ).

If min_value or max_value is not None, it is an error to enable allow_nan.

If both min_value and max_value are not None, it is an error to enable allow_infinity.

If inferred values range does not include subnormal values, it is an error to enable allow_subnormal.

Where not explicitly ruled out by the bounds, subnormals , infinities, and NaNs are possible values generated by this strategy.

The width argument specifies the maximum number of bits of precision required to represent the generated float. Valid values are 16, 32, or 64. Passing width=32 will still use the builtin 64-bit float class, but always for values which can be exactly represented as a 32-bit float.

The exclude_min and exclude_max argument can be used to generate numbers from open or half-open intervals, by excluding the respective endpoints. Excluding either signed zero will also exclude the other. Attempting to exclude an endpoint which is None will raise an error; use allow_infinity=False to generate finite floats. You can however use e.g. min_value=-math.inf, exclude_min=True to exclude only one infinite endpoint.

Examples from this strategy have a complicated and hard to explain shrinking behaviour, but it tries to improve “human readability”. Finite numbers will be preferred to infinity and infinity will be preferred to NaN.
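A small sketch of bounded float generation; with bounds set, NaN and infinity are excluded automatically, and exclude_max turns the closed interval into a half-open one:

```python
from hypothesis import given, strategies as st


# Floats in the half-open unit interval [0, 1).
@given(st.floats(min_value=0.0, max_value=1.0, exclude_max=True))
def test_unit_interval(x):
    assert 0.0 <= x < 1.0


test_unit_interval()
```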

Returns a strategy which generates Fractions.

If min_value is not None then all generated values are no less than min_value . If max_value is not None then all generated values are no greater than max_value . min_value and max_value may be anything accepted by the Fraction constructor.

If max_denominator is not None then the denominator of any generated values is no greater than max_denominator . Note that max_denominator must be None or a positive integer.

Examples from this strategy shrink towards smaller denominators, then closer to zero.

Generates strings that contain a match for the given regex (i.e. ones for which re.search() will return a non-None result).

regex may be a pattern or compiled regex . Both byte-strings and unicode strings are supported, and will generate examples of the same type.

You can use regex flags such as re.IGNORECASE or re.DOTALL to control generation. Flags can be passed either in compiled regex or inside the pattern with a (?iLmsux) group.

Some regular expressions are only partly supported - the underlying strategy checks local matching and relies on filtering to resolve context-dependent expressions. Using too many of these constructs may cause health-check errors as too many examples are filtered out. This mainly includes (positive or negative) lookahead and lookbehind groups.

If you want the generated string to match the whole regex you should use boundary markers. So e.g. r"\A.\Z" will return a single character string, while "." will return any string, and r"\A.$" will return a single character optionally followed by a "\n" . Alternatively, passing fullmatch=True will ensure that the whole string is a match, as if you had used the \A and \Z markers.

The alphabet= argument constrains the characters in the generated string, as for text() , and is only supported for unicode strings.

Examples from this strategy shrink towards shorter strings and lower character values, with exact behaviour that may depend on the pattern.
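A brief sketch using boundary markers so the whole generated string matches the pattern:

```python
from hypothesis import given, strategies as st


# \A and \Z anchor the match, so every generated string has the 3-4 digit shape.
@given(st.from_regex(r"\A[0-9]{3}-[0-9]{4}\Z"))
def test_number_format(s):
    assert len(s) == 8
    assert s[3] == "-"


test_number_format()
```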

Looks up the appropriate search strategy for the given type.

from_type is used internally to fill in missing arguments to builds() and can be used interactively to explore what strategies are available or to debug type resolution.

You can use register_type_strategy() to handle your custom types, or to globally redefine certain strategies - for example excluding NaN from floats, or use timezone-aware instead of naive time and datetime strategies.

The resolution logic may be changed in a future version, but currently tries these five options:

If thing is in the default lookup mapping or user-registered lookup, return the corresponding strategy. The default lookup covers all types with Hypothesis strategies, including extras where possible.

If thing is from the typing module, return the corresponding strategy (special logic).

If thing has one or more subtypes in the merged lookup, return the union of the strategies for those types that are not subtypes of other elements in the lookup.

Finally, if thing has type annotations for all required arguments, and is not an abstract class, it is resolved via builds() .

Because abstract types cannot be instantiated, we treat abstract types as the union of their concrete subclasses. Note that this lookup works via inheritance but not via register , so you may still need to use register_type_strategy() .

There is a valuable recipe for leveraging from_type() to generate “everything except” values from a specified type.

For example, everything_except(int) returns a strategy that can generate anything that from_type() can ever generate, except for instances of int , and excluding instances of types added via register_type_strategy() .

This is useful when writing tests which check that invalid input is rejected in a certain way.
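The recipe referred to above is roughly the following (a sketch; everything_except is a helper defined here, not part of the library):

```python
from hypothesis import strategies as st


def everything_except(excluded_types):
    # Draw a type, resolve it to a strategy for that type via from_type,
    # then filter out instances of the excluded types.
    return (
        st.from_type(type)
        .flatmap(st.from_type)
        .filter(lambda x: not isinstance(x, excluded_types))
    )
```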

This is identical to the sets function but instead returns frozensets.

A strategy for functions, which can be used in callbacks.

The generated functions will mimic the interface of like , which must be a callable (including a class, method, or function). The return value for the function is drawn from the returns argument, which must be a strategy. If returns is not passed, we attempt to infer a strategy from the return-type annotation if present, falling back to none() .

If pure=True , all arguments passed to the generated function must be hashable, and if passed identical arguments the original return value will be returned again - not regenerated, so beware mutable values.

If pure=False , generated functions do not validate their arguments, and may return a different value if called again with the same arguments.

Generated functions can only be called within the scope of the @given which created them. This strategy does not support .example() .

Returns a strategy which generates integers.

If min_value is not None then all values will be >= min_value. If max_value is not None then all values will be <= max_value.

Examples from this strategy will shrink towards zero, and negative values will also shrink towards positive (i.e. -n may be replaced by +n).

Generate IP addresses - v=4 for IPv4Address es, v=6 for IPv6Address es, or leave unspecified to allow both versions.

network may be an IPv4Network or IPv6Network , or a string representing a network such as "127.0.0.0/24" or "2001:db8::/32" . As well as generating addresses within a particular routable network, this can be used to generate addresses from a reserved range listed in the IANA registries .

If you pass both v and network , they must be for the same version.

This has the same behaviour as lists, but returns iterables instead.

Some iterables cannot be indexed (e.g. sets) and some do not have a fixed length (e.g. generators). This strategy produces iterators, which cannot be indexed and do not have a fixed length. This ensures that you do not accidentally depend on sequence behaviour.

Return a strategy which only generates value .

Note: value is not copied. Be wary of using mutable values.

If value is the result of a callable, you can use builds(callable) instead of just(callable()) to get a fresh value each time.

Examples from this strategy do not shrink (because there is only one).

Returns a list containing values drawn from elements with length in the interval [min_size, max_size] (no bounds in that direction if these are None). If max_size is 0, only the empty list will be drawn.

If unique is True (or something that evaluates to True), we compare direct object equality, as if unique_by was lambda x: x . This comparison only works for hashable types.

If unique_by is not None it must be a callable or tuple of callables returning a hashable type when given a value drawn from elements. The resulting list will satisfy the condition that for i != j , unique_by(result[i]) != unique_by(result[j]) .

If unique_by is a tuple of callables the uniqueness will be respective to each callable.

For example, the following will produce two columns of integers with both columns being unique respectively.
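A sketch of that per-column uniqueness, passing one callable per element position:

```python
from hypothesis import given, strategies as st

# One callable per "column": first elements unique, second elements unique.
rows = st.lists(
    st.tuples(st.integers(), st.integers()),
    unique_by=(lambda r: r[0], lambda r: r[1]),
)


@given(rows)
def test_unique_columns(pairs):
    firsts = [a for a, _ in pairs]
    seconds = [b for _, b in pairs]
    assert len(set(firsts)) == len(firsts)
    assert len(set(seconds)) == len(seconds)


test_unique_columns()
```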

Examples from this strategy shrink by trying to remove elements from the list, and by shrinking each individual element of the list.

Return a strategy which only generates None.

This strategy never successfully draws a value and will always reject on an attempt to draw.

Examples from this strategy do not shrink (because there are none).

Return a strategy which generates values from any of the argument strategies.

This may be called with one iterable argument instead of multiple strategy arguments, in which case one_of(x) and one_of(*x) are equivalent.

Examples from this strategy will generally shrink to ones that come from strategies earlier in the list, then shrink according to behaviour of the strategy that produced them. In order to get good shrinking behaviour, try to put simpler strategies first. e.g. one_of(none(), text()) is better than one_of(text(), none()) .

This is especially important when using recursive strategies. e.g. x = st.deferred(lambda: st.none() | st.tuples(x, x)) will shrink well, but x = st.deferred(lambda: st.tuples(x, x) | st.none()) will shrink very badly indeed.

Return a strategy which returns permutations of the ordered collection values .

Examples from this strategy shrink by trying to become closer to the original order of values.

Hypothesis always seeds global PRNGs before running a test, and restores the previous state afterwards.

If having a fixed seed would unacceptably weaken your tests, and you cannot use a random.Random instance provided by randoms() , this strategy calls random.seed() with an arbitrary integer and passes you an opaque object whose repr displays the seed value for debugging. If numpy.random is available, that state is also managed, as is anything managed by hypothesis.register_random() .

Examples from this strategy shrink to seeds closer to zero.

Generates instances of random.Random . The generated Random instances are of a special HypothesisRandom subclass.

If note_method_calls is set to True , Hypothesis will print the randomly drawn values in any falsifying test case. This can be helpful for debugging the behaviour of randomized algorithms.

If use_true_random is set to True then values will be drawn from their usual distribution, otherwise they will actually be Hypothesis generated values (and will be shrunk accordingly for any failing test case). Setting use_true_random=False will tend to expose bugs that would occur with very low probability when it is set to True, and this flag should only be set to True when your code relies on the distribution of values for correctness.

For managing global state, see the random_module() strategy and register_random() function.

base: A strategy to start from.

extend: A function which takes a strategy and returns a new strategy.

max_leaves: The maximum number of elements to be drawn from base on a given run.

This returns a strategy S such that S = extend(base | S) . That is, values may be drawn from base, or from any strategy reachable by mixing applications of | and extend.

An example may clarify: recursive(booleans(), lists) would return a strategy that may return arbitrarily nested and mixed lists of booleans. So e.g. False , [True] , [False, []] , and [[[[True]]]] are all valid values to be drawn from that strategy.

Examples from this strategy shrink by trying to reduce the amount of recursion and by shrinking according to the shrinking behaviour of base and the result of extend.
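A sketch of a common use: JSON-like data, where leaves are scalars and extend wraps children in lists or dictionaries (the size caps here are illustrative):

```python
from hypothesis import given, strategies as st

# Leaves are None/bool/int; extend wraps children in lists or string-keyed dicts.
json_like = st.recursive(
    st.none() | st.booleans() | st.integers(),
    lambda children: st.lists(children, max_size=3)
    | st.dictionaries(st.text(max_size=3), children, max_size=3),
    max_leaves=10,
)


@given(json_like)
def test_json_like(value):
    assert value is None or isinstance(value, (bool, int, list, dict))


test_json_like()
```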

Add an entry to the global type-to-strategy lookup.

This lookup is used in builds() and @given .

builds() will be used automatically for classes with type annotations on __init__ , so you only need to register a strategy if one or more arguments need to be more tightly defined than their type-based default, or if you want to supply a strategy for an argument with a default value.

strategy may be a search strategy, or a function that takes a type and returns a strategy (useful for generic types). The function may return NotImplemented to conditionally not provide a strategy for the type (the type will still be resolved by other methods, if possible, as if the function was not registered).

Note that you may not register a parametrised generic type (such as MyCollection[int] ) directly, because the resolution logic does not handle this case correctly. Instead, you may register a function for MyCollection and inspect the type parameters within that function .

A strategy for getting “the current test runner”, whatever that may be. The exact meaning depends on the entry point, but it will usually be the associated ‘self’ value for it.

If you are using this in a rule for stateful testing, this strategy will return the instance of the RuleBasedStateMachine that the rule is running for.

If there is no current test runner and a default is provided, return that default. If no default is provided, raises InvalidArgument.

Returns a strategy which generates any value present in elements .

Note that as with just() , values will not be copied and thus you should be careful of using mutable data.

sampled_from supports ordered collections, as well as Enum objects. Flag objects may also generate any combination of their members.

Examples from this strategy shrink by replacing them with values earlier in the list. So e.g. sampled_from([10, 1]) will shrink by trying to replace 1 values with 10, and sampled_from([1, 10]) will shrink by trying to replace 10 values with 1.

It is an error to sample from an empty sequence, because returning nothing() makes it too easy to silently drop parts of compound strategies. If you need that behaviour, use sampled_from(seq) if seq else nothing() .

This has the same behaviour as lists, but returns sets instead.

Note that Hypothesis cannot tell if values are drawn from elements are hashable until running the test, so you can define a strategy for sets of an unhashable type but it will fail at test time.

Examples from this strategy shrink by trying to remove elements from the set, and by shrinking each individual element of the set.

Returns a strategy that draws a single shared value per run, drawn from base. Any two shared instances with the same key will share the same value, otherwise the identity of this strategy will be used. That is:

In the above x and y may draw different (or potentially the same) values. In the following they will always draw the same:
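A sketch of the keyed case, where two shared strategies with the same key always produce the same value within a single example:

```python
from hypothesis import given, strategies as st


# Both arguments use the same key, so they draw identical values per example.
@given(
    st.shared(st.integers(), key="size"),
    st.shared(st.integers(), key="size"),
)
def test_shared_key(a, b):
    assert a == b


test_shared_key()
```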

Examples from this strategy shrink as per their base strategy.

Generates slices that will select indices up to the supplied size.

Generated slices will have start and stop indices that range from -size to size - 1 and will step in the appropriate direction. Slices should only produce an empty selection if the start and end are the same.

Examples from this strategy shrink toward 0 and smaller values.

Generates strings with characters drawn from alphabet , which should be a collection of length one strings or a strategy generating such strings.

The default alphabet strategy can generate the full unicode range but excludes surrogate characters because they are invalid in the UTF-8 encoding. You can use characters() without arguments to find surrogate-related bugs such as bpo-34454 .

min_size and max_size have the usual interpretations. Note that Python measures string length by counting codepoints: U+00C5 Å is a single character, while U+0041 U+030A Å is two - the A , and a combining ring above.

Examples from this strategy shrink towards shorter strings, and with the characters in the text shrinking as per the alphabet strategy. This strategy does not normalize() examples, so generated strings may be in any or none of the ‘normal forms’.

A strategy for timedeltas between min_value and max_value .

Examples from this strategy shrink towards zero.

A strategy for times between min_value and max_value .

The timezones argument is handled as for datetimes() .

Examples from this strategy shrink towards midnight, with the timezone component shrinking as for the strategy that provided it.

A strategy for IANA timezone names .

As well as timezone names like "UTC" , "Australia/Sydney" , or "America/New_York" , this strategy can generate:

Aliases such as "Antarctica/McMurdo" , which links to "Pacific/Auckland" .

Deprecated names such as "Antarctica/South_Pole" , which also links to "Pacific/Auckland" . Note that most but not all deprecated timezone names are also aliases.

Timezone names with the "posix/" or "right/" prefixes, unless allow_prefix=False .

These strings are provided separately from Tzinfo objects - such as ZoneInfo instances from the timezones() strategy - to facilitate testing of timezone logic without needing workarounds to access non-canonical names.

The zoneinfo module is new in Python 3.9, so you will need to install the backports.zoneinfo module on earlier versions.

On Windows, you will also need to install the tzdata package .

pip install hypothesis[zoneinfo] will install these conditional dependencies if and only if they are needed.

On Windows, you may need to access IANA timezone data via the tzdata package. For non-IANA timezones, such as Windows-native names or GNU TZ strings, we recommend using sampled_from() with the dateutil package, e.g. dateutil.tz.tzwin.list() .

A strategy for zoneinfo.ZoneInfo objects.

If no_cache=True , the generated instances are constructed using ZoneInfo.no_cache instead of the usual constructor. This may change the semantics of your datetimes in surprising ways, so only use it if you know that you need to!

Return a strategy which generates a tuple of the same length as args by generating the value at index i from args[i].

e.g. tuples(integers(), integers()) would generate a tuple of length two with both values an integer.

Examples from this strategy shrink by shrinking their component parts.

Returns a strategy that generates UUIDs .

If the optional version argument is given, value is passed through to UUID and only UUIDs of that version will be generated.

If allow_nil is True, generate the nil UUID much more often. Otherwise, all returned values from this will be unique, so e.g. if you do lists(uuids()) the resulting list will never contain duplicates.

Examples from this strategy don’t have any meaningful shrink order.

Provisional strategies ¶

This module contains various provisional APIs and strategies.

It is intended for internal use, to ease code reuse, and is not stable. Point releases may move or break the contents at any time!

Internet strategies should conform to RFC 3986 or the authoritative definitions it links to. If not, report the bug!

Generate RFC 1035 compliant fully qualified domain names.

A strategy for RFC 3986 , generating http/https URLs.

Shrinking ¶

When using strategies it is worth thinking about how the data shrinks . Shrinking is the process by which Hypothesis tries to produce human readable examples when it finds a failure - it takes a complex example and turns it into a simpler one.

Each strategy defines an order in which it shrinks - you won’t usually need to care about this much, but it can be worth being aware of as it can affect what the best way to write your own strategies is.

The exact shrinking behaviour is not a guaranteed part of the API, but it doesn’t change that often and when it does it’s usually because we think the new way produces nicer examples.

Possibly the most important one to be aware of is one_of() , which has a preference for values produced by strategies earlier in its argument list. Most of the others should largely “do the right thing” without you having to think about it.

Adapting strategies ¶

Often it is the case that a strategy doesn’t produce exactly what you want it to and you need to adapt it. Sometimes you can do this in the test, but this hurts reuse because you then have to repeat the adaptation in every test.

Hypothesis gives you ways to build strategies from other strategies given functions for transforming the data.

map is probably the easiest and most useful of these to use. If you have a strategy s and a function f, then s.map(f).example() is f(s.example()), i.e. we draw an example from s and then apply f to it.

Note that many things that you might use mapping for can also be done with builds() , and if you find yourself indexing into a tuple within .map() it’s probably time to use that instead.
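For instance, mapping over integers() gives a strategy for even numbers (a minimal sketch):

```python
from hypothesis import strategies as st

# Doubling every drawn integer means the mapped strategy can
# only ever produce even numbers.
even_integers = st.integers().map(lambda x: x * 2)

assert even_integers.example() % 2 == 0
```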

Filtering ¶

filter lets you reject some examples. s.filter(f).example() is some example of s such that f(example) is truthy.

It’s important to note that filter isn’t magic and if your condition is too hard to satisfy then this can fail:
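For example, stacking filters that no integer can satisfy will make example generation give up with an error rather than loop forever (a sketch; the exact exception class may vary between Hypothesis versions):

```python
from hypothesis import strategies as st

# No integer is both greater than 11 and less than 8, so Hypothesis
# eventually runs out of attempts and raises an error when asked
# for an example.
impossible = st.integers().filter(lambda x: x > 11).filter(lambda x: x < 8)
```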

In general you should try to use filter only to avoid corner cases that you don’t want rather than attempting to cut out a large chunk of the search space.

A technique that often works well here is to use map to first transform the data and then use filter to remove things that didn’t work out. So for example if you wanted pairs of integers (x,y) such that x < y you could do the following:
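A sketch of that map-then-filter pattern for ordered pairs:

```python
from hypothesis import strategies as st

# Sorting first makes almost every draw valid; the filter only has to
# reject the rare x == y case. Note that sorted() returns a list.
ordered_pairs = (
    st.tuples(st.integers(), st.integers())
    .map(sorted)
    .filter(lambda pair: pair[0] < pair[1])
)
```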

Chaining strategies together ¶

Finally there is flatmap. flatmap draws an example, then turns that example into a strategy, then draws an example from that strategy.

It may not be obvious why you want this at first, but it turns out to be quite useful because it lets you generate different types of data with relationships to each other.

For example suppose we wanted to generate a list of lists of the same length:
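One way to sketch this with flatmap:

```python
from hypothesis import strategies as st

# First draw a length n, then build lists whose inner lists all have
# exactly n elements.
rectangle_lists = st.integers(min_value=0, max_value=10).flatmap(
    lambda n: st.lists(st.lists(st.integers(), min_size=n, max_size=n))
)
```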

In this example we first choose a length for our inner lists, then we build a strategy which generates lists containing lists of precisely that length.

Most of the time you probably don’t want flatmap , but unlike filter and map which are just conveniences for things you could just do in your tests, flatmap allows genuinely new data generation that you wouldn’t otherwise be able to easily do.

(If you know Haskell: Yes, this is more or less a monadic bind. If you don’t know Haskell, ignore everything in these parentheses. You do not need to understand anything about monads to use this, or anything else in Hypothesis).

Recursive data ¶

Sometimes the data you want to generate has a recursive definition. e.g. if you wanted to generate JSON data, valid JSON is:

Any float, any boolean, any unicode string.

Any list of valid JSON data

Any dictionary mapping unicode strings to valid JSON data.

The problem is that you cannot call a strategy recursively and expect it to not just blow up and eat all your memory. The other problem here is that not all unicode strings display consistently on different machines, so we’ll restrict them in our doctest.

The way Hypothesis handles this is with the recursive() strategy which you pass in a base case and a function that, given a strategy for your data type, returns a new strategy for it. So for example:
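A sketch of a JSON strategy along those lines (restricted to the leaf types listed above; real JSON also allows null):

```python
from hypothesis import strategies as st

# Base case: the leaf values. Extend step: lists of anything we can
# already generate as JSON, or dictionaries mapping strings to it.
json_values = st.recursive(
    st.booleans() | st.floats() | st.text(),
    lambda children: st.lists(children)
    | st.dictionaries(st.text(), children),
)
```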

That is, we start with our leaf data and then we augment it by allowing lists and dictionaries of anything we can generate as JSON data.

The size control of this works by limiting the maximum number of values that can be drawn from the base strategy. So for example if we wanted to only generate really small JSON we could do this as:
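For example, via the max_leaves argument (a sketch):

```python
from hypothesis import strategies as st

# max_leaves caps how many values may be drawn from the base strategy,
# which keeps the generated documents small.
small_json = st.recursive(
    st.booleans() | st.floats() | st.text(),
    lambda children: st.lists(children)
    | st.dictionaries(st.text(), children),
    max_leaves=5,
)
```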

Composite strategies ¶

The @composite decorator lets you combine other strategies in more or less arbitrary ways. It’s probably the main thing you’ll want to use for complicated custom strategies.

The composite decorator works by converting a function that returns one example into a function that returns a strategy that produces such examples - which you can pass to @given, modify with .map or .filter, and generally use like any other strategy.

It does this by giving you a special function draw as the first argument, which can be used just like the corresponding method of the data() strategy within a test. In fact, the implementation is almost the same - but defining a strategy with @composite makes code reuse easier, and usually improves the display of failing examples.

For example, the following gives you a list and an index into it:
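A sketch of such a strategy (the names here are illustrative):

```python
from hypothesis import strategies as st

@st.composite
def list_and_index(draw, elements=st.integers()):
    # Draw a non-empty list first, then an index guaranteed to be valid.
    xs = draw(st.lists(elements, min_size=1))
    i = draw(st.integers(min_value=0, max_value=len(xs) - 1))
    return (xs, i)
```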

draw(s) is a function that should be thought of as returning s.example() , except that the result is reproducible and will minimize correctly. The decorated function has the initial argument removed from the list, but will accept all the others in the expected order. Defaults are preserved.

Note that the repr will work exactly like it does for all the built-in strategies: it will be a function that you can call to get the strategy in question, with values provided only if they do not match the defaults.

You can use assume inside composite functions:
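For instance (an illustrative sketch):

```python
from hypothesis import assume, strategies as st

@st.composite
def distinct_pairs(draw):
    x = draw(st.integers())
    y = draw(st.integers())
    # assume() rejects the whole draw whenever the two values coincide.
    assume(x != y)
    return (x, y)
```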

This works as assume normally would, filtering out any examples for which the passed in argument is falsey.

Take care that your function can cope with adversarial draws, or explicitly rejects them using the .filter() method or assume() - our mutation and shrinking logic can do some strange things, and a naive implementation might lead to serious performance problems. For example:
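A sketch of the pattern, building a fixed-size set of integers:

```python
from hypothesis import strategies as st

@st.composite
def unique_integers(draw, size=5):
    result = set()
    # Naive (problematic) version: silently looping until a fresh value
    # appears hides the rejections from Hypothesis, and adversarial
    # draws that keep repeating a value can make it crawl:
    #     while len(result) < size:
    #         result.add(draw(st.integers()))
    # Better: filter, so Hypothesis can see which draws were invalid.
    for _ in range(size):
        result.add(draw(st.integers().filter(lambda x: x not in result)))
    return result
```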

If @composite is used to decorate a method or classmethod, the draw argument must come before self or cls . While we therefore recommend writing strategies as standalone functions and using the register_type_strategy() function to associate them with a class, methods are supported and the @composite decorator may be applied either before or after @classmethod or @staticmethod . See issue #2578 and pull request #2634 for more details.

Drawing interactively in tests ¶

There is also the data() strategy, which gives you a means of using strategies interactively. Rather than having to specify everything up front in @given you can draw from strategies in the body of your test.

This is similar to @composite , but even more powerful as it allows you to mix test code with example generation. The downside of this power is that data() is incompatible with explicit @example(...) s - and the mixed code is often harder to debug when something goes wrong.

If you need values that are affected by previous draws but which don’t depend on the execution of your test, stick to the simpler @composite .

If the test fails, each draw is printed alongside the falsifying example, and data drawn this way is simplified (shrunk) as usual.

Optionally, you can provide a label to identify values generated by each call to data.draw(). These labels appear in the output of a falsifying example, making it easy to see which draw produced which value.


Title: Hypothesis testing for general network models

Abstract: Network data have attracted considerable attention in modern statistics. In research on complex network data, one key issue is finding the underlying connection structure given a network sample. The methods proposed in the literature usually assume that the underlying structure is a known model. In practice, however, the true model is usually unknown, and network learning procedures based on these methods may suffer from model misspecification. To handle this issue, based on random matrix theory, we first give a spectral property of the normalized adjacency matrix under a mild condition. Further, we establish a general goodness-of-fit test procedure for unweighted and undirected networks. We prove that the null distribution of the proposed statistic converges in distribution to the standard normal distribution. Theoretically, this testing procedure is suitable for nearly all popular network models, such as stochastic block models and latent space models. Further, we apply the proposed method to the degree-corrected mixed membership model and give a sequential estimator of the number of communities. Both simulation studies and real-world data examples indicate that the proposed method works well.
Subjects: Methodology (stat.ME)



COMMENTS

  1. What Is Hypothesis Testing? Types and Python Code Example

    # Define the null hypothesis
    H0 = "The average weight of 10 year old children is 32kg."

    # Define the alternative hypothesis
    H1 = "The average weight of 10 year old children is more than 32kg."

    Using the code above, we declared our null and alternate hypotheses, stating the average weight of a 10-year-old in both cases.

  2. PDF Mapping Saldaňa's Coding Methods onto the Literature Review Process

    exchange coding); Exploratory methods (i.e., holistic coding, provisional coding, hypothesis coding); and Procedural methods (i.e., protocol coding, outline of cultural materials coding, domain and taxonomic coding, causation coding). ... Coding Method Definition 1 Attribute Coding Provide essential information about data for future reference

  3. Chapter 18. Data Analysis and Coding

    Coding is the iterative process of assigning meaning to the data you have collected in order to both simplify and identify patterns. This chapter introduces you to the process of qualitative data analysis and the basic concept of coding, while the following chapter (chapter 19) will take you further into the various kinds of codes and how to ...

  4. Qualitative Data Coding 101 (With Examples)

    For example, in the sentence "Pigeons attacked me and stole my sandwich.", you could use "pigeons" as a code. This code simply describes that the sentence involves pigeons. So, building on this, qualitative data coding is the process of creating and assigning codes to categorise data extracts. You'll then use these codes later down ...

  5. Understanding Hypothesis Testing

    Hypothesis testing is an important procedure in statistics. Hypothesis testing evaluates two mutually exclusive population statements to determine which statement is most supported by sample data. When we say that findings are statistically significant, it is thanks to hypothesis testing.

  6. Hypothesis Testing with Python: Step by step hands-on tutorial with

    It tests the null hypothesis that the population variances are equal (called homogeneity of variance or homoscedasticity). Suppose the resulting p-value of Levene's test is less than the significance level (typically 0.05). In that case, the obtained differences in sample variances are unlikely to have occurred based on random sampling from a population with equal variances.

  7. Hypothesis in Machine Learning

    A hypothesis is a function that best describes the target in supervised machine learning. The hypothesis that an algorithm would come up with depends upon the data and also upon the restrictions and bias that we have imposed on the data. The hypothesis can be calculated as y = mx + b, where y = range and m = slope of the lines.

  8. Efficient coding hypothesis

    The efficient coding hypothesis was proposed by Horace Barlow in 1961 as a theoretical model of sensory coding in the brain. Within the brain, neurons communicate with one another by sending electrical impulses referred to as action potentials or spikes. One goal of sensory neuroscience is to decipher the meaning of these spikes in order to understand how the brain represents and processes ...

  9. Testing your Python Code with Hypothesis • Inspired Python

    Hypothesis is a capable test generator. Unlike a tool like faker that generates realistic-looking test data for fixtures or demos, Hypothesis is a property-based tester. It uses heuristics and clever algorithms to find inputs that break your code. Hypothesis assumes you understand the problem domain you want to model.

  10. Hypothesis Testing. What it is and how to do it in Python

    A hypothesis is a claim or a premise that we want to test. Hypothesis testing is a way of backing up your conclusions with data, in a more "scientific" way. It is useful not only to scientists, but actually important in so many areas, ranging from marketing to web design to pharmaceutical trials and much more.

  11. Hypothesis testing in Machine learning using Python

    Hypothesis testing is an essential procedure in statistics. A hypothesis test evaluates two mutually exclusive statements about a population to determine which statement is best supported by the sample data. When we say that a finding is statistically significant, it's thanks to a hypothesis test.

  12. (PDF) Coding: An Overview and Guide to Qualitative Data Analysis for

    In Vivo coding is particularly well suited for extracting and highlighting "folk" or "indigenous" terms (participant-generated words indicative of a group's, culture's, or subculture's categories of meaning). In Vivo coding is associated with grounded theory. Process Coding: codes for a word or phrase that ...

  13. How to Perform Hypothesis Testing in Python (With Examples)

    Example 1: One Sample t-test in Python. A one sample t-test is used to test whether or not the mean of a population is equal to some value. For example, suppose we want to know whether or not the mean weight of a certain species of some turtle is equal to 310 pounds. To test this, we go out and collect a simple random sample of turtles with the ...

  14. Hypothesis Testing in R Programming

    Alternative hypothesis: the true mean is not equal to five. 95 percent confidence interval: (-0.1910645, 0.2090349). ... By a robust code we mean a code which will not break easily upon changes, can be refactored simply, and can be extended without breaking the rest ...

  15. Hypothesis Testing in Python Course

    Hypothesis testing lets you answer questions about your datasets in a statistically rigorous way. In this course, you'll grow your Python analytical skills as you learn how and when to use common tests like t-tests, proportion tests, and chi-square tests. Working with real-world data, including Stack Overflow user feedback and supply-chain data ...

  16. Inductive Coding: A Step-by-Step Guide for Researchers

    Inductive Coding Definition and Key Features. Inductive coding is a research method for generating themes from textual datasets, such as transcribed semi-structured interviews or transcriptions of speeches and audio. It is also known as "open coding" or "data-driven coding". Below are some key features of inductive coding, each of which you should note in your methods section.

  17. 17 Statistical Hypothesis Tests in Python (Cheat Sheet)

    In this post, you will discover a cheat sheet for the most popular statistical hypothesis tests for a machine learning project with examples using the Python API. Each statistical test is presented in a consistent way, including: The name of the test. What the test is checking. The key assumptions of the test. How the test result is interpreted.

  18. Welcome to Hypothesis!

    Hypothesis is a Python library for creating unit tests which are simpler to write and more powerful when run, finding edge cases in your code you wouldn't have thought to look for. It is stable, powerful and easy to add to any existing test suite. It works by letting you write tests that assert that something should be true for every case ...

  19. Hypothesis Testing with Python

    Hypothesis testing is used to address questions about a population based on a subset from that population. For example, A/B testing is a framework for learning about consumer behavior based on a small sample of consumers. This course assumes some preexisting knowledge of Python, including the NumPy and pandas libraries.

  20. Introduction to Hypothesis Testing

    Learn how to run t-tests and binomial tests in this introduction to Hypothesis Testing. Start. This course includes. AI assistance for guided coding help. A certificate of completion. Skill level. Beginner. Time to complete. 2 hours.

  21. Hypothesis Testing in R: A Comprehensive Guide with Code and Examples

    Understanding the Hypothesis Testing Process. Before diving into code examples, let's grasp the key concepts of hypothesis testing: Null Hypothesis (H0): This is the default assumption that there is no significant effect, difference, or relationship in the population. It's often denoted as H0. Alternative Hypothesis (Ha): This is the ...

  22. What is Hypothesis

    A hypothesis is a testable statement that explains what is happening or observed. It proposes a relation between the various participating variables. A hypothesis is also called a theory, thesis, guess, assumption, or suggestion. A hypothesis creates a structure that guides the search for knowledge. In this article, we will learn what a hypothesis ...

  23. What you can generate and how

    hypothesis.strategies. deferred (definition) [source] ¶ A deferred strategy allows you to write a strategy that references other strategies that have not yet been defined. This allows for the easy definition of recursive and mutually recursive strategies. The definition argument should be a zero-argument function that returns a strategy.

  24. 2.1: Chapter 7- Introduction to Hypothesis Testing

    The Null Hypothesis. The hypothesis that an apparent effect is due to chance is called the null hypothesis, written H0 ("H-naught"). In the Physicians' Reactions example, the null hypothesis is that in the population of physicians, the mean time expected to be spent with obese patients is equal to the mean time expected to be spent with average-weight patients.

  25. [2408.04213] Hypothesis testing for general network models

    The network data has attracted considerable attention in modern statistics. In research on complex network data, one key issue is finding its underlying connection structure given a network sample. The methods that have been proposed in literature usually assume that the underlying structure is a known model. In practice, however, the true model is usually unknown, and network learning ...