2.1 Why Is Research Important?
Learning objectives.
By the end of this section, you will be able to:
- Explain how scientific research addresses questions about behavior
- Discuss how scientific research guides public policy
- Appreciate how scientific research can be important in making personal decisions
Scientific research is a critical tool for successfully navigating our complex world. Without it, we would be forced to rely solely on intuition, other people’s authority, and blind luck. While many of us feel confident in our abilities to decipher and interact with the world around us, history is filled with examples of how very wrong we can be when we fail to recognize the need for evidence in supporting claims. At various times in history, we would have been certain that the sun revolved around a flat earth, that the earth’s continents did not move, and that mental illness was caused by possession ( Figure 2.2 ). It is through systematic scientific research that we divest ourselves of our preconceived notions and superstitions and gain an objective understanding of ourselves and our world.
The goal of all scientists is to better understand the world around them. Psychologists focus their attention on understanding behavior, as well as the cognitive (mental) and physiological (body) processes that underlie behavior. In contrast to other methods that people use to understand the behavior of others, such as intuition and personal experience, the hallmark of scientific research is that there is evidence to support a claim. Scientific knowledge is empirical : It is grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing.
While behavior is observable, the mind is not. If someone is crying, we can see behavior. However, the reason for the behavior is more difficult to determine. Is the person crying due to being sad, in pain, or happy? Sometimes we can learn the reason for someone’s behavior by simply asking a question, like “Why are you crying?” However, there are situations in which an individual is either uncomfortable or unwilling to answer the question honestly, or is incapable of answering. For example, infants would not be able to explain why they are crying. In such circumstances, the psychologist must be creative in finding ways to better understand behavior. This chapter explores how scientific knowledge is generated, and how important that knowledge is in forming decisions in our personal lives and in the public domain.
Use of Research Information
Trying to determine which theories are and are not accepted by the scientific community can be difficult, especially in an area of research as broad as psychology. More than ever before, we have an incredible amount of information at our fingertips, and a simple internet search on any given research topic might result in a number of contradictory studies. In these cases, we are witnessing the scientific community going through the process of reaching a consensus, and it could be quite some time before a consensus emerges. For example, the explosion in our use of technology has led researchers to question whether this ultimately helps or hinders us. The use and implementation of technology in educational settings has become widespread over the last few decades. Researchers are coming to different conclusions regarding the use of technology. To illustrate this point, a study investigating a smartphone app targeting surgery residents (graduate students in surgery training) found that the use of this app can increase student engagement and raise test scores (Shaw & Tan, 2015). Conversely, another study found that the use of technology in undergraduate student populations had negative impacts on sleep, communication, and time management skills (Massimini & Peterson, 2009). Until sufficient amounts of research have been conducted, there will be no clear consensus on the effects that technology has on a student's acquisition of knowledge, study skills, and mental health.
In the meantime, we should strive to think critically about the information we encounter by exercising a degree of healthy skepticism. When someone makes a claim, we should examine the claim from a number of different perspectives: what is the expertise of the person making the claim, what might they gain if the claim is valid, does the claim seem justified given the evidence, and what do other researchers think of the claim? This is especially important when we consider how much information in advertising campaigns and on the internet claims to be based on “scientific evidence” when in actuality it is a belief or perspective of just a few individuals trying to sell a product or draw attention to their perspectives.
We should be informed consumers of the information made available to us because decisions based on this information have significant consequences. One such consequence can be seen in politics and public policy. Imagine that you have been elected as the governor of your state. One of your responsibilities is to manage the state budget and determine how to best spend your constituents’ tax dollars. As the new governor, you need to decide whether to continue funding early intervention programs. These programs are designed to help children who come from low-income backgrounds, have special needs, or face other disadvantages. These programs may involve providing a wide variety of services to maximize the children's development and position them for optimal levels of success in school and later in life (Blann, 2005). While such programs sound appealing, you would want to be sure that they also proved effective before investing additional money in these programs. Fortunately, psychologists and other scientists have conducted vast amounts of research on such programs and, in general, the programs are found to be effective (Neil & Christensen, 2009; Peters-Scheffer, Didden, Korzilius, & Sturmey, 2011). While not all programs are equally effective, and the short-term effects of many such programs are more pronounced, there is reason to believe that many of these programs produce long-term benefits for participants (Barnett, 2011). If you are committed to being a good steward of taxpayer money, you would want to look at research. Which programs are most effective? What characteristics of these programs make them effective? Which programs promote the best outcomes? After examining the research, you would be best equipped to make decisions about which programs to fund.
Link to Learning
Watch this video about early childhood program effectiveness to learn how scientists evaluate effectiveness and how best to invest money into programs that are most effective.
Ultimately, it is not just politicians who can benefit from using research in guiding their decisions. We all might look to research from time to time when making decisions in our lives. Imagine that your sister, Maria, expresses concern about her two-year-old child, Umberto. Umberto does not speak as much or as clearly as the other children in his daycare or others in the family. Umberto's pediatrician undertakes some screening and recommends an evaluation by a speech pathologist, but does not refer Maria to any other specialists. Maria is concerned that Umberto's speech delays are signs of a developmental disorder, but Umberto's pediatrician is not; she sees indications of differences in Umberto's jaw and facial muscles. Hearing this, you do some internet searches, but you are overwhelmed by the breadth of information and the wide array of sources. You see blog posts, top-ten lists, advertisements from healthcare providers, and recommendations from several advocacy organizations. Why are there so many sites? Which are based in research, and which are not?
In the end, research is what makes the difference between facts and opinions. Facts are observable realities, and opinions are personal judgments, conclusions, or attitudes that may or may not be accurate. In the scientific community, facts can be established only using evidence collected through empirical research.
NOTABLE RESEARCHERS
Psychological research has a long history involving important figures from diverse backgrounds. While the introductory chapter discussed several researchers who made significant contributions to the discipline, there are many more individuals who deserve attention in considering how psychology has advanced as a science through their work ( Figure 2.3 ). For instance, Margaret Floy Washburn (1871–1939) was the first woman to earn a PhD in psychology. Her research focused on animal behavior and cognition (Margaret Floy Washburn, PhD, n.d.). Mary Whiton Calkins (1863–1930) was a preeminent first-generation American psychologist who opposed the behaviorist movement, conducted significant research into memory, and established one of the earliest experimental psychology labs in the United States (Mary Whiton Calkins, n.d.).
Francis Sumner (1895–1954) was the first African American to receive a PhD in psychology in 1920. His dissertation focused on issues related to psychoanalysis. Sumner also had research interests in racial bias and educational justice. Sumner was one of the founders of Howard University’s department of psychology, and because of his accomplishments, he is sometimes referred to as the “Father of Black Psychology.” Thirteen years later, Inez Beverly Prosser (1895–1934) became the first African American woman to receive a PhD in psychology. Prosser’s research highlighted issues related to education in segregated versus integrated schools, and ultimately, her work was very influential in the hallmark Brown v. Board of Education Supreme Court ruling that segregation of public schools was unconstitutional (Ethnicity and Health in America Series: Featured Psychologists, n.d.).
Although the establishment of psychology’s scientific roots occurred first in Europe and the United States, it did not take much time until researchers from around the world began to establish their own laboratories and research programs. For example, some of the first experimental psychology laboratories in South America were founded by Horatio Piñero (1869–1919) at two institutions in Buenos Aires, Argentina (Godoy & Brussino, 2010). In India, Gunamudian David Boaz (1908–1965) and Narendra Nath Sen Gupta (1889–1944) established the first independent departments of psychology at the University of Madras and the University of Calcutta, respectively. These developments provided an opportunity for Indian researchers to make important contributions to the field (Gunamudian David Boaz, n.d.; Narendra Nath Sen Gupta, n.d.).
When the American Psychological Association (APA) was first founded in 1892, all of the members were White males (Women and Minorities in Psychology, n.d.). However, by 1905, Mary Whiton Calkins was elected as the first female president of the APA, and by 1946, nearly one-quarter of American psychologists were female. Psychology became a popular degree option for students enrolled in the nation’s historically Black higher education institutions, increasing the number of Black Americans who went on to become psychologists. Given demographic shifts occurring in the United States and increased access to higher educational opportunities among historically underrepresented populations, there is reason to hope that the diversity of the field will increasingly match the larger population, and that the research contributions made by the psychologists of the future will better serve people of all backgrounds (Women and Minorities in Psychology, n.d.).
The Process of Scientific Research
Scientific knowledge is advanced through a process known as the scientific method . Basically, ideas (in the form of theories and hypotheses) are tested against the real world (in the form of empirical observations), and those empirical observations lead to more ideas that are tested against the real world, and so on. In this sense, the scientific process is circular. The types of reasoning within the circle are called deductive and inductive. In deductive reasoning , ideas are tested in the real world; in inductive reasoning , real-world observations lead to new ideas ( Figure 2.4 ). These processes are inseparable, like inhaling and exhaling, but different research approaches place different emphasis on the deductive and inductive aspects.
In the scientific context, deductive reasoning begins with a generalization—one hypothesis—that is then used to reach logical conclusions about the real world. If the hypothesis is correct, then the logical conclusions reached through deductive reasoning should also be correct. A deductive reasoning argument might go something like this: All living things require energy to survive (this would be your hypothesis). Ducks are living things. Therefore, ducks require energy to survive (logical conclusion). In this example, the hypothesis is correct; therefore, the conclusion is correct as well. Sometimes, however, an incorrect hypothesis may lead to a logical but incorrect conclusion. Consider this argument: all ducks are born with the ability to see. Quackers is a duck. Therefore, Quackers was born with the ability to see. Scientists use deductive reasoning to empirically test their hypotheses. Returning to the example of the ducks, researchers might design a study to test the hypothesis that if all living things require energy to survive, then ducks will be found to require energy to survive.
Deductive reasoning starts with a generalization that is tested against real-world observations; however, inductive reasoning moves in the opposite direction. Inductive reasoning uses empirical observations to construct broad generalizations. Unlike deductive reasoning, conclusions drawn from inductive reasoning may or may not be correct, regardless of the observations on which they are based. For instance, you may notice that your favorite fruits—apples, bananas, and oranges—all grow on trees; therefore, you assume that all fruit must grow on trees. This would be an example of inductive reasoning, and, clearly, the existence of strawberries, blueberries, and kiwi demonstrate that this generalization is not correct despite it being based on a number of direct observations. Scientists use inductive reasoning to formulate theories, which in turn generate hypotheses that are tested with deductive reasoning. In the end, science involves both deductive and inductive processes.
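The contrast between the two directions of reasoning can be made concrete with a toy sketch in Python; the premises and observations simply restate the duck and fruit examples above, and the code is illustrative rather than anything a researcher would actually run.

```python
# Deduction: apply a general premise (the hypothesis) to a specific case.
# If the premise is true, the conclusion about the case follows necessarily.
def requires_energy(is_living: bool) -> bool:
    # Premise: "All living things require energy to survive."
    return is_living

duck_is_living = True
print(requires_energy(duck_is_living))  # True -> "Ducks require energy to survive."

# Induction: generalize from specific observations; the generalization
# may be overturned by new observations.
observed_fruits = {"apple": "tree", "banana": "tree", "orange": "tree"}
print(all(place == "tree" for place in observed_fruits.values()))  # True so far...

observed_fruits["strawberry"] = "low plant"  # a new observation
print(all(place == "tree" for place in observed_fruits.values()))  # ...now False: the generalization fails
```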
For example, case studies, which you will read about in the next section, are heavily weighted on the side of empirical observations. Thus, case studies are closely associated with inductive processes as researchers gather massive amounts of observations and seek interesting patterns (new ideas) in the data. Experimental research, on the other hand, puts great emphasis on deductive reasoning.
We’ve stated that theories and hypotheses are ideas, but what sort of ideas are they, exactly? A theory is a well-developed set of ideas that propose an explanation for observed phenomena. Theories are repeatedly checked against the world, but they tend to be too complex to be tested all at once; instead, researchers create hypotheses to test specific aspects of a theory.
A hypothesis is a testable prediction about how the world will behave if our idea is correct, and it is often worded as an if-then statement (e.g., if I study all night, I will get a passing grade on the test). The hypothesis is extremely important because it bridges the gap between the realm of ideas and the real world. As specific hypotheses are tested, theories are modified and refined to reflect and incorporate the result of these tests (Figure 2.5).
To see how this process works, let’s consider a specific theory and a hypothesis that might be generated from that theory. As you’ll learn in a later chapter, the James-Lange theory of emotion asserts that emotional experience relies on the physiological arousal associated with the emotional state. If you walked out of your home and discovered a very aggressive snake waiting on your doorstep, your heart would begin to race and your stomach churn. According to the James-Lange theory, these physiological changes would result in your feeling of fear. A hypothesis that could be derived from this theory might be that a person who is unaware of the physiological arousal that the sight of the snake elicits will not feel fear.
A scientific hypothesis is also falsifiable , or capable of being shown to be incorrect. Recall from the introductory chapter that Sigmund Freud had lots of interesting ideas to explain various human behaviors ( Figure 2.6 ). However, a major criticism of Freud’s theories is that many of his ideas are not falsifiable; for example, it is impossible to imagine empirical observations that would disprove the existence of the id, the ego, and the superego—the three elements of personality described in Freud’s theories. Despite this, Freud’s theories are widely taught in introductory psychology texts because of their historical significance for personality psychology and psychotherapy, and these remain the root of all modern forms of therapy.
In contrast, the James-Lange theory does generate falsifiable hypotheses, such as the one described above. Some individuals who suffer significant injuries to their spinal columns are unable to feel the bodily changes that often accompany emotional experiences. Therefore, we could test the hypothesis by determining how emotional experiences differ between individuals who have the ability to detect these changes in their physiological arousal and those who do not. In fact, this research has been conducted and while the emotional experiences of people deprived of an awareness of their physiological arousal may be less intense, they still experience emotion (Chwalisz, Diener, & Gallagher, 1988).
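As a rough sketch of what such a comparison could look like, consider the following Python snippet; the ratings, group sizes, and the use of a two-sample t-test are all assumptions made for illustration and are not the analysis reported by Chwalisz, Diener, and Gallagher (1988).

```python
# Compare self-reported emotion intensity between people who can detect their
# own physiological arousal and people (e.g., with spinal injuries) who cannot.
# All numbers below are invented for illustration.
from scipy import stats

aware_of_arousal = [8, 7, 9, 6, 8, 7, 9, 8]      # hypothetical intensity ratings (1-10)
unaware_of_arousal = [6, 5, 7, 6, 5, 6, 7, 5]

# If the hypothesis derived from the James-Lange theory were strictly correct,
# the "unaware" group should report little or no emotion at all.
t_stat, p_value = stats.ttest_ind(aware_of_arousal, unaware_of_arousal)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```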
Scientific research’s dependence on falsifiability allows for great confidence in the information that it produces. Typically, by the time information is accepted by the scientific community, it has been tested repeatedly.
2.1 Why Is Research Important?
Learning objectives.
By the end of this section, you will be able to:
- Explain how scientific research addresses questions about behavior
- Discuss how scientific research guides public policy
- Appreciate how scientific research can be important in making personal decisions
Scientific research is a critical tool for successfully navigating our complex world. Without it, we would be forced to rely solely on intuition, other people’s authority, and blind luck. While many of us feel confident in our abilities to decipher and interact with the world around us, history is filled with examples of how very wrong we can be when we fail to recognize the need for evidence in supporting claims. At various times in history, we would have been certain that the sun revolved around a flat earth, that the earth’s continents did not move, and that mental illness was caused by possession (figure below). It is through systematic scientific research that we divest ourselves of our preconceived notions and superstitions and gain an objective understanding of ourselves and our world.
Some of our ancestors, across the world and over the centuries, believed that trephination – the practice of making a hole in the skull, as shown here – allowed evil spirits to leave the body, thus curing mental illness and other diseases. (credit: “taiproject”/Flickr)
The goal of all scientists is to better understand the world around them. Psychologists focus their attention on understanding behavior, as well as the cognitive (mental) and physiological (body) processes that underlie behavior. In contrast to other methods that people use to understand the behavior of others, such as intuition and personal experience, the hallmark of scientific research is that there is evidence to support a claim. Scientific knowledge is empirical : It is grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing.
We can easily observe the behavior of others around us. For example, if someone is crying, we can observe that behavior. However, the reason for the behavior is more difficult to determine. Is the person crying due to being sad, in pain, or happy? Sometimes, asking about the underlying cognitions is as easy as asking the subject directly: “Why are you crying?” However, there are situations in which an individual is either uncomfortable or unwilling to answer the question honestly, or is incapable of answering. For example, infants would not be able to explain why they are crying. In other situations, it may be hard to identify exactly why you feel the way you do. Think about times when you suddenly feel annoyed after a long day. There may be a specific trigger for your annoyance (a loud noise), or you may be tired, hungry, stressed, or all of the above. Human behavior is often a complicated mix of a variety of factors. In such circumstances, the psychologist must be creative in finding ways to better understand behavior. This chapter explores how scientific knowledge is generated, and how important that knowledge is in forming decisions in our personal lives and in the public domain.
USE OF RESEARCH INFORMATION
Trying to determine which theories are and are not accepted by the scientific community can be difficult, especially in an area of research as broad as psychology. More than ever before, we have an incredible amount of information at our fingertips, and a simple internet search on any given research topic might result in a number of contradictory studies. In these cases, we are witnessing the scientific community going through the process of coming to an agreement, and it could be quite some time before a consensus emerges. In other cases, rapidly developing technology is improving our ability to measure things, and changing our earlier understanding of how the mind works.
In the meantime, we should strive to think critically about the information we encounter by exercising a degree of healthy skepticism. When someone makes a claim, we should examine the claim from a number of different perspectives: what is the expertise of the person making the claim, what might they gain if the claim is valid, does the claim seem justified given the evidence, and what do other researchers think of the claim? Science is always changing and new evidence is always coming to light; thus, this dash of skepticism should be applied to all research you interact with from now on. Yes, that includes the research presented in this textbook.
Evaluation of research findings can have widespread impact. Imagine that you have been elected as the governor of your state. One of your responsibilities is to manage the state budget and determine how to best spend your constituents’ tax dollars. As the new governor, you need to decide whether to continue funding the D.A.R.E. (Drug Abuse Resistance Education) program in public schools (figure below). This program typically involves police officers coming into the classroom to educate students about the dangers of becoming involved with alcohol and other drugs. According to the D.A.R.E. website (www.dare.org), this program has been very popular since its inception in 1983, and it is currently operating in 75% of school districts in the United States and in more than 40 countries worldwide. Sounds like an easy decision, right? However, on closer review, you discover that the vast majority of research into this program consistently suggests that participation has little, if any, effect on whether or not someone uses alcohol or other drugs (Clayton, Cattarello, & Johnstone, 1996; Ennett, Tobler, Ringwalt, & Flewelling, 1994; Lynam et al., 1999; Ringwalt, Ennett, & Holt, 1991). If you are committed to being a good steward of taxpayer money, will you fund this particular program, or will you try to find other programs that research has consistently demonstrated to be effective?
The D.A.R.E. program continues to be popular in schools around the world despite research suggesting that it is ineffective.
It is not just politicians who can benefit from using research in guiding their decisions. We all might look to research from time to time when making decisions in our lives. Imagine you just found out that a close friend has breast cancer or that one of your young relatives has recently been diagnosed with autism. In either case, you want to know which treatment options are most successful with the fewest side effects. How would you find that out? You would probably talk with a doctor or psychologist and personally review the research that has been done on various treatment options—always with a critical eye to ensure that you are as informed as possible.
In the end, research is what makes the difference between facts and opinions. Facts are observable realities, and opinions are personal judgments, conclusions, or attitudes that may or may not be accurate. In the scientific community, facts can be established only using evidence collected through empirical research.
THE PROCESS OF SCIENTIFIC RESEARCH
Scientific knowledge is advanced through a process known as the scientific method . Basically, ideas (in the form of theories and hypotheses) are tested against the real world (in the form of empirical observations), and those observations lead to more ideas that are tested against the real world, and so on. In this sense, the scientific process is circular. We continually test and revise theories based on new evidence.
Two types of reasoning are used to make decisions within this model: Deductive and inductive. In deductive reasoning, ideas are tested against the empirical world. Think about a detective looking for clues and evidence to test their “hunch” about whodunit. In contrast, in inductive reasoning, empirical observations lead to new ideas. In other words, inductive reasoning involves gathering facts to create or refine a theory, rather than testing the theory by gathering facts (figure below). These processes are inseparable, like inhaling and exhaling, but different research approaches place different emphasis on the deductive and inductive aspects.
Psychological research relies on both inductive and deductive reasoning.
In the scientific context, deductive reasoning begins with a generalization—one hypothesis—that is then used to reach logical conclusions about the real world. If the hypothesis is correct, then the logical conclusions reached through deductive reasoning should also be correct. A deductive reasoning argument might go something like this: All living things require energy to survive (this would be your hypothesis). Ducks are living things. Therefore, ducks require energy to survive (logical conclusion). In this example, the hypothesis is correct; therefore, the conclusion is correct as well. Sometimes, however, an incorrect hypothesis may lead to a logical but incorrect conclusion. Consider the famous example from Greek philosophy. A philosopher decided that human beings were “featherless bipeds”. Using deductive reasoning, all two-legged creatures without feathers must be human, right? Diogenes the Cynic (named because he was, well, a cynic) burst into the room with a freshly plucked chicken from the market and held it up exclaiming “Behold! I have brought you a man!”
Deductive reasoning starts with a generalization that is tested against real-world observations; however, inductive reasoning moves in the opposite direction. Inductive reasoning uses empirical observations to construct broad generalizations. Unlike deductive reasoning, conclusions drawn from inductive reasoning may or may not be correct, regardless of the observations on which they are based. For example, you might be a biologist attempting to classify animals into groups. You notice that quite a large portion of animals are furry and produce milk for their young (cats, dogs, squirrels, horses, hippos, etc.). Therefore, you might conclude that all mammals (the name you have chosen for this grouping) have hair and produce milk. This seems like a pretty great hypothesis that you could test with deductive reasoning. You go out and look at a whole bunch of things and stumble on an exception: the coconut. Coconuts have hair and produce milk, but they don’t “fit” your idea of what a mammal is. So, using inductive reasoning given the new evidence, you adjust your theory again for another round of data collection. Inductive and deductive reasoning work in tandem to help build and improve scientific theories over time.
We’ve stated that theories and hypotheses are ideas, but what sort of ideas are they, exactly? A theory is a well-developed set of ideas that propose an explanation for observed phenomena. Theories are repeatedly checked against the world, but they tend to be too complex to be tested all at once. Instead, researchers create hypotheses to test specific aspects of a theory.
A hypothesis is a testable prediction about how the world will behave if our theory is correct, and it is often worded as an if-then statement (e.g., if I study all night, I will get a passing grade on the test). The hypothesis is extremely important because it bridges the gap between the realm of ideas and the real world. As specific hypotheses are tested, theories are modified and refined to reflect and incorporate the result of these tests (figure below).
The scientific method of research includes proposing hypotheses, conducting research, and creating or modifying theories based on results.
To see how this process works, let’s consider a specific theory and a hypothesis that might be generated from that theory. As you’ll learn in a later chapter, the James-Lange theory of emotion asserts that emotional experience relies on the physiological arousal associated with the emotional state. If you walked out of your home and discovered a very aggressive snake waiting on your doorstep, your heart would begin to race and your stomach churn. According to the James-Lange theory, these physiological changes would result in your feeling of fear. A hypothesis that could be derived from this theory might be that a person who is unaware of the physiological arousal that the sight of the snake elicits will not feel fear.
A scientific hypothesis is also falsifiable, or capable of being shown to be incorrect. Recall from the introductory chapter that Sigmund Freud had lots of interesting ideas to explain various human behaviors (figure below). However, a major criticism of Freud’s theories is that many of his ideas are not falsifiable. The essential characteristic of Freud’s building blocks of personality, the id, ego, and superego, is that they are unconscious, and therefore people can’t observe them. Because they cannot be observed or tested in any way, it is impossible to say that they don’t exist, so they cannot be considered scientific theories. Despite this, Freud’s theories are widely taught in introductory psychology texts because of their historical significance for personality psychology and psychotherapy, and these remain the root of all modern forms of therapy.
Many of the specifics of (a) Freud’s theories, such as (b) his division of the mind into the id, ego, and superego, have fallen out of favor in recent decades because they are not falsifiable (i.e., cannot be verified through scientific investigation). In broader strokes, his views set the stage for much psychological thinking today, such as the idea that some psychological processes occur at the level of the unconscious.
In contrast, the James-Lange theory does generate falsifiable hypotheses, such as the one described above. Some individuals who suffer significant injuries to their spinal columns are unable to feel the bodily changes that often accompany emotional experiences. Therefore, we could test the hypothesis by determining how emotional experiences differ between individuals who have the ability to detect these changes in their physiological arousal and those who do not. In fact, this research has been conducted and while the emotional experiences of people deprived of an awareness of their physiological arousal may be less intense, they still experience emotion (Chwalisz, Diener, & Gallagher, 1988).
Scientific research’s dependence on falsifiability allows for great confidence in the information that it produces. Typically, by the time information is accepted by the scientific community, it has been tested repeatedly.
Scientists are engaged in explaining and understanding how the world around them works, and they are able to do so by coming up with theories that generate hypotheses that are testable and falsifiable. Theories that stand up to their tests are retained and refined, while those that do not are discarded or modified. Having good information generated from research aids in making wise decisions both in public policy and in our personal lives.
Review Questions:
1. Scientific hypotheses are ________ and falsifiable.
a. observable
b. original
c. provable
d. testable
2. ________ are defined as observable realities.
a. behaviors
b. facts
c. opinions
d. theories
3. Scientific knowledge is ________.
a. intuitive
b. empirical
c. permanent
d. subjective
4. A major criticism of Freud’s early theories involves the fact that his theories ________.
a. were too limited in scope
b. were too outrageous
c. were too broad
d. were not testable
Critical Thinking Questions:
1. In this section, the D.A.R.E. program was described as an incredibly popular program in schools across the United States despite the fact that research consistently suggests that this program is largely ineffective. How might one explain this discrepancy?
2. The scientific method is often described as self-correcting and cyclical. Briefly describe your understanding of the scientific method with regard to these concepts.
Personal Application Questions:
1. Healthcare professionals cite an enormous number of health problems related to obesity, and many people have an understandable desire to attain a healthy weight. There are many diet programs, services, and products on the market to aid those who wish to lose weight. If a close friend was considering purchasing or participating in one of these products, programs, or services, how would you make sure your friend was fully aware of the potential consequences of this decision? What sort of information would you want to review before making such an investment or lifestyle change yourself?
Answers to Exercises
Critical Thinking Questions:
1. There is probably tremendous political pressure to appear to be hard on drugs. Therefore, even though D.A.R.E. might be ineffective, it is a well-known program with which voters are familiar.
2. This cyclical, self-correcting process is primarily a function of the empirical nature of science. Theories are generated as explanations of real-world phenomena. From theories, specific hypotheses are developed and tested. As a function of this testing, theories will be revisited and modified or refined to generate new hypotheses that are again tested. This cyclical process ultimately allows for more and more precise (and presumably accurate) information to be collected.
Glossary
deductive reasoning: results are predicted based on a general premise
empirical: grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing
fact: objective and verifiable observation, established using evidence collected through empirical research
falsifiable: able to be disproven by experimental results
hypothesis: (plural: hypotheses) tentative and testable statement about the relationship between two or more variables
inductive reasoning: conclusions are drawn from observations
opinion: personal judgments, conclusions, or attitudes that may or may not be accurate
theory: well-developed set of ideas that propose an explanation for observed phenomena
The Use of Research Methods in Psychological Research: A Systematised Review
Salomé Elizabeth Scholtz
1 Community Psychosocial Research (COMPRES), School of Psychosocial Health, North-West University, Potchefstroom, South Africa
Werner de Klerk
Leon T. de Beer
2 WorkWell Research Institute, North-West University, Potchefstroom, South Africa
Research methods play an imperative role in research quality as well as in educating young researchers; however, how these methods are applied is unclear, which can be detrimental to the field of psychology. This systematised review therefore aimed to determine which research methods are being used, how they are being used, and for what topics in the field. Our review of 999 articles from five journals over a period of 5 years indicated that psychology research is conducted on 10 topics, predominantly via quantitative research methods. Of these 10 topics, social psychology was the most popular. The remainder of the methodology used is described. It was also found that articles lacked rigour and transparency in the methodology used, which has implications for replicability. In conclusion, this article provides an overview of all reported methodologies used in a sample of psychology journals. It highlights the popularity and application of methods and designs throughout the article sample, as well as an unexpected lack of rigour with regard to most aspects of methodology. Possible sample bias should be considered when interpreting the results of this study. It is recommended that future research utilise the results of this study to determine the possible impact on the field of psychology as a science and to prompt further investigation into the use of research methods. The results should prompt future research into the following: the lack of rigour and its implications for replication, the use of certain methods above others, publication bias, and the choice of sampling method.
Introduction
Psychology is an ever-growing and popular field (Gough and Lyons, 2016; Clay, 2017). Due to this growth and the need for science-based research on which to base health decisions (Perestelo-Pérez, 2013), the use of research methods in the broad field of psychology is an essential point of investigation (Stangor, 2011; Aanstoos, 2014). Research methods are therefore viewed as important tools used by researchers to collect data (Nieuwenhuis, 2016) and include the following: quantitative, qualitative, mixed method and multi method (Maree, 2016). Additionally, researchers also employ various types of literature reviews to address research questions (Grant and Booth, 2009). According to the literature, which research method is used, and why, is complex, as it depends on various factors that may include paradigm (O'Neil and Koekemoer, 2016), research question (Grix, 2002), or the skill and exposure of the researcher (Nind et al., 2015). How these research methods are employed is also difficult to discern, as research methods are often depicted as having fixed boundaries that are continuously crossed in research (Johnson et al., 2001; Sandelowski, 2011). Examples of this crossing include adding quantitative aspects to qualitative studies (Sandelowski et al., 2009), or stating that a study used a mixed-method design without the study having any characteristics of this design (Truscott et al., 2010).
The inappropriate use of research methods affects how students and researchers improve and utilise their research skills (Scott Jones and Goldring, 2015 ), how theories are developed (Ngulube, 2013 ), and the credibility of research results (Levitt et al., 2017 ). This, in turn, can be detrimental to the field (Nind et al., 2015 ), journal publication (Ketchen et al., 2008 ; Ezeh et al., 2010 ), and attempts to address public social issues through psychological research (Dweck, 2017 ). This is especially important given the now well-known replication crisis the field is facing (Earp and Trafimow, 2015 ; Hengartner, 2018 ).
Due to this lack of clarity on method use and the potential impact of inept use of research methods, the aim of this study was to explore the use of research methods in the field of psychology through a review of journal publications. Chaichanasakul et al. (2011) identify reviewing articles as an opportunity to examine the development, growth and progress of a research area and the overall quality of a journal. Reviews of qualitative methods, such as those by Lee et al. (1999) and Bluhm et al. (2011), have attempted to synthesise the use of research methods and have indicated the growth of qualitative research in American and European journals. Research has also focused on the use of research methods in specific sub-disciplines of psychology; for example, in the field of Industrial and Organisational psychology, Coetzee and Van Zyl (2014) found that South African publications tend to consist of cross-sectional quantitative research methods, with longitudinal studies underrepresented. Qualitative studies were found to make up 21% of the articles published from 1995 to 2015 in a similar study by O'Neil and Koekemoer (2016). Other methods, such as mixed methods research in health psychology, have also reportedly been growing in popularity (O'Cathain, 2009).
A broad overview of the use of research methods in the field of psychology as a whole is, however, not available in the literature. Therefore, our research focused on answering which research methods are being used, how these methods are being used, and for what topics in practice (i.e., journal publications), in order to provide a general perspective of method use in psychology publications. We synthesised the collected data into the following format: research topic [areas of scientific discourse in a field or the current needs of a population (Bittermann and Fischer, 2018)], method [data-gathering tools (Nieuwenhuis, 2016)], sampling [elements chosen from a population to partake in research (Ritchie et al., 2009)], data collection [techniques and research strategy (Maree, 2016)], and data analysis [discovering information by examining bodies of data (Ktepi, 2016)]. A systematised review of recent articles (2013 to 2017) collected from five different journals in the field of psychological research was conducted.
Grant and Booth ( 2009 ) describe systematised reviews as the review of choice for post-graduate studies, which is employed using some elements of a systematic review and seldom more than one or two databases to catalogue studies after a comprehensive literature search. The aspects used in this systematised review that are similar to that of a systematic review were a full search within the chosen database and data produced in tabular form (Grant and Booth, 2009 ).
Sample sizes and timelines vary in systematised reviews (see Lowe and Moore, 2014 ; Pericall and Taylor, 2014 ; Barr-Walker, 2017 ). With no clear parameters identified in the literature (see Grant and Booth, 2009 ), the sample size of this study was determined by the purpose of the sample (Strydom, 2011 ), and time and cost constraints (Maree and Pietersen, 2016 ). Thus, a non-probability purposive sample (Ritchie et al., 2009 ) of the top five psychology journals from 2013 to 2017 was included in this research study. Per Lee ( 2015 ) American Psychological Association (APA) recommends the use of the most up-to-date sources for data collection with consideration of the context of the research study. As this research study focused on the most recent trends in research methods used in the broad field of psychology, the identified time frame was deemed appropriate.
Psychology journals were only included if they formed part of the top five English journals in the miscellaneous psychology domain of the Scimago Journal and Country Rank (Scimago Journal & Country Rank, 2017). The Scimago Journal and Country Rank provides a yearly updated list of publicly accessible journal and country-specific indicators derived from the Scopus® database (Scopus, 2017b) by means of the Scimago Journal Rank (SJR) indicator developed by Scimago from the algorithm Google PageRank™ (Scimago Journal & Country Rank, 2017). Scopus is the largest global database of abstracts and citations from peer-reviewed journals (Scopus, 2017a). Reasons for the development of the Scimago Journal and Country Rank list were to allow researchers to assess scientific domains, compare country rankings, and compare and analyse journals (Scimago Journal & Country Rank, 2017), which supported the aim of this research study. Additionally, the goals of the journals had to focus on topics in psychology in general, with no preference for specific research methods, and the journals had to provide full-text access to articles.
The following list of top five journals in 2018 fell within the abovementioned inclusion criteria (1) Australian Journal of Psychology, (2) British Journal of Psychology, (3) Europe's Journal of Psychology, (4) International Journal of Psychology and lastly the (5) Journal of Psychology Applied and Interdisciplinary.
Journals were excluded from this systematised review if no full-text versions of their articles were available, if journals explicitly stated a publication preference for certain research methods, or if the journal only published articles in a specific discipline of psychological research (for example, industrial psychology, clinical psychology etc.).
The researchers followed a procedure (see Figure 1 ) adapted from that of Ferreira et al. ( 2016 ) for systematised reviews. Data collection and categorisation commenced on 4 December 2017 and continued until 30 June 2019. All the data was systematically collected and coded manually (Grant and Booth, 2009 ) with an independent person acting as co-coder. Codes of interest included the research topic, method used, the design used, sampling method, and methodology (the method used for data collection and data analysis). These codes were derived from the wording in each article. Themes were created based on the derived codes and checked by the co-coder. Lastly, these themes were catalogued into a table as per the systematised review design.
Systematised review procedure.
According to Johnston et al. ( 2019 ), “literature screening, selection, and data extraction/analyses” (p. 7) are specifically tailored to the aim of a review. Therefore, the steps followed in a systematic review must be reported in a comprehensive and transparent manner. The chosen systematised design adhered to the rigour expected from systematic reviews with regard to full search and data produced in tabular form (Grant and Booth, 2009 ). The rigorous application of the systematic review is, therefore discussed in relation to these two elements.
Firstly, to ensure a comprehensive search, this research study promoted review transparency by following a clear protocol outlined according to each review stage before collecting data (Johnston et al., 2019 ). This protocol was similar to that of Ferreira et al. ( 2016 ) and approved by three research committees/stakeholders and the researchers (Johnston et al., 2019 ). The eligibility criteria for article inclusion was based on the research question and clearly stated, and the process of inclusion was recorded on an electronic spreadsheet to create an evidence trail (Bandara et al., 2015 ; Johnston et al., 2019 ). Microsoft Excel spreadsheets are a popular tool for review studies and can increase the rigour of the review process (Bandara et al., 2015 ). Screening for appropriate articles for inclusion forms an integral part of a systematic review process (Johnston et al., 2019 ). This step was applied to two aspects of this research study: the choice of eligible journals and articles to be included. Suitable journals were selected by the first author and reviewed by the second and third authors. Initially, all articles from the chosen journals were included. Then, by process of elimination, those irrelevant to the research aim, i.e., interview articles or discussions etc., were excluded.
To ensure rigourous data extraction, data was first extracted by one reviewer, and an independent person verified the results for completeness and accuracy (Johnston et al., 2019 ). The research question served as a guide for efficient, organised data extraction (Johnston et al., 2019 ). Data was categorised according to the codes of interest, along with article identifiers for audit trails such as authors, title and aims of articles. The categorised data was based on the aim of the review (Johnston et al., 2019 ) and synthesised in tabular form under methods used, how these methods were used, and for what topics in the field of psychology.
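The authors recorded their extraction in a Microsoft Excel spreadsheet; the following minimal sketch, assuming pandas instead, illustrates the same row-per-article structure with audit-trail identifiers and one column per code of interest (the single entry shown is invented).

```python
import pandas as pd

# One row per included article: identifiers support the audit trail,
# the remaining columns hold the codes of interest.
extraction = pd.DataFrame([
    {"authors": "Example, A., & Example, B.", "title": "An illustrative article",
     "aim": "Illustration only", "topic": "Social Psychology",
     "method": "Quantitative", "design": "Cross-sectional",
     "sampling": "Convenience sampling", "data_collection": "Survey",
     "data_analysis": "Regression"},
    # ...in the review itself, 999 such rows were coded
])

# Codes can then be tallied into the frequencies reported in the tables below.
print(extraction["method"].value_counts())
print(extraction.groupby("topic")["method"].value_counts())
```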
The initial search produced a total of 1,145 articles from the 5 journals identified. Inclusion and exclusion criteria resulted in a final sample of 999 articles ( Figure 2 ). Articles were co-coded into 84 codes, from which 10 themes were derived ( Table 1 ).
Journal article frequency.
Codes used to form themes (research topics).
Theme (research topic) | Number of codes | Codes
Social Psychology | 31 | Aggression SP, Attitude SP, Belief SP, Child abuse SP, Conflict SP, Culture SP, Discrimination SP, Economic, Family illness, Family, Group, Help, Immigration, Intergeneration, Judgement, Law, Leadership, Marriage SP, Media, Optimism, Organisational and Social justice, Parenting SP, Politics, Prejudice, Relationships, Religion, Romantic Relationships SP, Sex and attraction, Stereotype, Violence, Work |
Experimental Psychology | 17 | Anxiety, stress and PTSD, Coping, Depression, Emotion, Empathy, Facial research, Fear and threat, Happiness, Humor, Mindfulness, Mortality, Motivation and Achievement, Perception, Rumination, Self, Self-efficacy |
Cognitive Psychology | 12 | Attention, Cognition, Decision making, Impulse, Intelligence, Language, Math, Memory, Mental, Number, Problem solving, Reading |
Health Psychology | 7 | Addiction, Body, Burnout, Health, Illness (Health Psychology), Sleep (Health Psychology), Suicide and Self-harm |
Physiological Psychology | 6 | Gender, Health (Physiological psychology), Illness (Physiological psychology), Mood disorders, Sleep (Physiological psychology), Visual research |
Developmental Psychology | 3 | Attachment, Development, Old age |
Personality | 3 | Machiavellian, Narcissism, Personality |
Psychological Psychology | 3 | Programme, Psychology practice, Theory |
Education and Learning | 1 | Education and Learning |
Psychometrics | 1 | Measure |
Code Total | 84 |
These 10 themes represent the topic section of our research question (Figure 3). All these topics, except for the final one, psychological practice, were found to concur with the research areas in psychology as identified by Weiten (2010). These research areas were chosen to represent the derived codes as they provided broad definitions that allowed for clear, concise categorisation of the vast amount of data. Article codes were categorised under particular themes/topics if they adhered to the research area definitions created by Weiten (2010). It is important to note that these areas of research do not refer to specific disciplines in psychology, such as industrial psychology, but to broader fields that may encompass sub-interests of these disciplines.
Topic frequency (international sample).
In the case of developmental psychology, researchers conduct research into human development from childhood to old age. Social psychology includes research on behaviour governed by social drivers. Researchers in the field of educational psychology study how people learn and the best way to teach them. Health psychology aims to determine the effect of psychological factors on physiological health. Physiological psychology, on the other hand, looks at the influence of physiological aspects on behaviour. Experimental psychology is not the only theme in which experimental research is used; rather, it focuses on the traditional core topics of psychology (for example, sensation). Cognitive psychology studies the higher mental processes. Psychometrics is concerned with measuring capacity or behaviour. Personality research aims to assess and describe consistency in human behaviour (Weiten, 2010). The final theme of psychological practice refers to the experiences, techniques, and interventions employed by practitioners, researchers, and academia in the field of psychology.
Articles under these themes were further subdivided into methodologies: method, sampling, design, data collection, and data analysis. The categorisation was based on information stated in the articles and not inferred by the researchers. Data were compiled into two sets of results presented in this article. The first set addresses the aim of this study from the perspective of the topics identified. The second set of results represents a broad overview of the results from the perspective of the methodology employed. The second set of results is discussed in this article, while the first set is presented in table format. The discussion thus provides a broad overview of method use in psychology (across all themes), while the table format provides readers with in-depth insight into the methods used in the individual themes identified. We believe that presenting the data from both perspectives allows readers a broad understanding of the results. Due to the large amount of information that made up our results, we followed Cichocka and Jost (2014) in simplifying our results. Please note that the numbers indicated in the tables in terms of methodology differ from the total number of articles. Some articles employed more than one method/sampling technique/design/data collection method/data analysis in their studies.
What follows are the results for which methods are used, how these methods are used, and to which topics in psychology they are applied. Percentages are reported to the second decimal in order to highlight small differences in the occurrence of methodology.
Firstly, with regard to the research methods used, our results show that researchers are more likely to use quantitative research methods (90.22%) compared to all other research methods. Qualitative research was the second most common research method but only made up about 4.79% of the general method usage. Reviews occurred almost as much as qualitative studies (3.91%), as the third most popular method. Mixed-methods research studies (0.98%) occurred across most themes, whereas multi-method research was indicated in only one study and amounted to 0.10% of the methods identified. The specific use of each method in the topics identified is shown in Table 2 and Figure 4 .
Research methods in psychology.
Method | Count per research topic (one column for each of the 10 identified topics)
Quantitative | 401 | 162 | 69 | 60 | 52 | 52 | 48 | 28 | 38 | 13 |
Qualitative | 28 | 4 | 1 | 0 | 5 | 2 | 3 | 5 | 0 | 1 |
Review | 11 | 5 | 2 | 0 | 3 | 4 | 1 | 13 | 0 | 1 |
Mixed Methods | 7 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 |
Multi-method | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
Total | 447 | 171 | 72 | 60 | 61 | 58 | 53 | 47 | 39 | 15 |
Research method frequency in topics.
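To make these figures easy to verify, the short sketch below (our illustration, not part of the original analysis) recomputes the method-usage percentages from the column totals in Table 2; it reproduces the 90.22%, 4.79%, 3.91%, 0.98%, and 0.10% values reported above.

```python
# Minimal sketch (not from the article): recomputing the method-usage
# percentages reported in the text from the counts in Table 2.
# The per-topic counts are taken from the table rows; variable names are ours.

method_counts = {
    "Quantitative": 401 + 162 + 69 + 60 + 52 + 52 + 48 + 28 + 38 + 13,
    "Qualitative": 28 + 4 + 1 + 0 + 5 + 2 + 3 + 5 + 0 + 1,
    "Review": 11 + 5 + 2 + 0 + 3 + 4 + 1 + 13 + 0 + 1,
    "Mixed methods": 7 + 0 + 0 + 0 + 1 + 0 + 1 + 1 + 0 + 0,
    "Multi-method": 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 1 + 0,
}

total = sum(method_counts.values())
for method, count in method_counts.items():
    # Percentages reported to the second decimal place, as in the article.
    print(f"{method}: {count} ({count / total * 100:.2f}%)")
```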
Secondly, in the case of how these research methods are employed, our study indicated the following.
Sampling: 78.34% of the studies in the collected articles did not specify a sampling method. From the remainder of the studies, 13 types of sampling methods were identified. These included broad categorisations of a sample as, for example, a probability or non-probability sample. Convenience samples were the most likely to be applied (10.34%), followed by random sampling (3.51%), snowball sampling (2.73%), purposive sampling (1.37%), and cluster sampling (1.27%). The remaining sampling methods occurred to a more limited extent (0–1.0%). See Table 3 and Figure 5 for the sampling methods employed in each topic; a brief illustration of the convenience versus random sampling distinction follows the table.
Sampling use in the field of psychology.
Not stated | 331 | 153 | 45 | 57 | 49 | 43 | 43 | 38 | 31 | 14 |
Convenience sampling | 55 | 8 | 10 | 1 | 6 | 8 | 9 | 2 | 6 | 1 |
Random sampling | 15 | 3 | 9 | 1 | 2 | 2 | 0 | 2 | 1 | 1 |
Snowball sampling | 14 | 4 | 4 | 1 | 2 | 0 | 0 | 3 | 0 | 0 |
Purposive sampling | 6 | 0 | 2 | 0 | 0 | 2 | 0 | 3 | 1 | 0 |
Cluster sampling | 8 | 1 | 2 | 0 | 0 | 2 | 0 | 0 | 0 | 0 |
Stratified sampling | 4 | 1 | 2 | 0 | 1 | 1 | 0 | 0 | 0 | 0 |
Non-probability sampling | 4 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
Probability sampling | 3 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Quota sampling | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Criterion sampling | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Self-selection sampling | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Unsystematic sampling | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Total | 443 | 172 | 76 | 60 | 60 | 58 | 52 | 48 | 40 | 16 |
Sampling method frequency in topics.
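To make the distinction concrete, the following is a minimal illustration (ours; the participant pool and sample size are hypothetical) of the difference between convenience sampling, which dominated the reviewed articles, and simple random sampling.

```python
# Illustrative sketch only: convenience sampling versus simple random sampling,
# the two most frequently stated approaches above. The pool and sample size
# below are hypothetical.
import random

population = [f"participant_{i}" for i in range(1, 501)]  # hypothetical pool
n = 30

# Convenience sampling: take whoever is most readily available
# (here, simply the first n people on the list).
convenience_sample = population[:n]

# Simple random sampling: every member of the pool has an equal
# chance of selection.
random.seed(42)  # for reproducibility of the illustration
random_sample = random.sample(population, n)

print(convenience_sample[:5])
print(random_sample[:5])
```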
Designs were categorised based on the articles' own statements. It is therefore important to note that, in the case of quantitative studies, a non-experimental design (25.55%) was often recorded because the study did not involve an experiment and gave no other indication of design, which, according to Laher (2016), is a reasonable categorisation. Non-experimental designs should thus be compared with experimental designs only when describing the data, as this category could include correlational or cross-sectional designs that were not overtly stated by the authors. For the remaining research methods, "not stated" (7.12%) was assigned to articles in which no design type was indicated.
Of the 36 identified designs, the most popular were experimental (25.64%) and cross-sectional (23.17%) designs, which is consistent with the high number of quantitative studies. The longitudinal design (3.80%), the third most popular, was used in both quantitative and qualitative studies. Qualitative designs consisted of ethnography (0.38%), interpretative phenomenological designs/phenomenology (0.28%), and narrative designs (0.28%). Studies that employed the review method were mostly categorised as "not stated," with the most often stated review design being the systematic review (0.57%). The few mixed-methods studies employed exploratory, explanatory (0.09%), and concurrent designs (0.19%), with some studies referring to separate designs for the qualitative and quantitative components. The one study that identified itself as a multi-method study used a longitudinal design. The use of these designs in each specific topic is shown in Table 4 and Figure 6.
Design use in the field of psychology.
Experimental design | 82 | 82 | 3 | 60 | 10 | 12 | 8 | 6 | 4 | 3 |
Non-experimental design | 115 | 30 | 51 | 0 | 13 | 17 | 13 | 13 | 14 | 3 |
Cross-sectional design | 123 | 31 | 12 | 1 | 19 | 17 | 21 | 5 | 13 | 2 |
Correlational design | 56 | 12 | 3 | 0 | 10 | 2 | 2 | 0 | 4 | 2 |
Not stated | 37 | 7 | 3 | 0 | 4 | 2 | 4 | 14 | 1 | 3 |
Longitudinal design | 21 | 6 | 2 | 1 | 1 | 2 | 2 | 0 | 2 | 3 |
Quasi-experimental design | 4 | 1 | 0 | 0 | 0 | 0 | 2 | 1 | 0 | 0 |
Systematic review | 3 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 |
Cross-cultural design | 3 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 |
Descriptive design | 2 | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 0 |
Ethnography | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Literature review | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 |
Interpretative Phenomenological Analysis (IPA) | 2 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
Narrative design | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 |
Case-control research design | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 |
Concurrent data collection design | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
Grounded Theory | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
Narrative review | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
Auto-ethnography | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Case series evaluation | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
Case study | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Comprehensive review | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Descriptive-inferential | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
Explanatory sequential design | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Exploratory mixed-method | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 |
Grounded ethnographic design | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Historical cohort design | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Historical research | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
interpretivist approach | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
Meta-review | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
Prospective design | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Qualitative review | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
Qualitative systematic review | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
Short-term prospective design | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Total | 461 | 175 | 74 | 63 | 63 | 58 | 56 | 48 | 39 | 16 |
Design frequency in topics.
Data collection and analysis: data collection included 30 methods, with the method most often employed being questionnaires (57.84%). The experimental task (16.56%) was the second most preferred collection method and included established tasks as well as unique tasks designed by the researchers. Cognitive ability tests (6.84%) were also regularly used, along with various forms of interviewing (7.66%). Table 5 and Figure 7 represent data collection use in the various topics. Data analysis consisted of 3,857 occurrences of data analysis, categorised into roughly 188 data analysis techniques, shown in Table 6 and Figures 1–7. Descriptive statistics were the most commonly used (23.49%), along with correlational analysis (17.19%); a brief illustrative sketch of these two analyses follows Table 6. When using a qualitative method, researchers generally employed thematic analysis (0.52%) or other forms of analysis that led to coding and the creation of themes. Review studies presented few data analysis methods, with most studies categorising their results. Mixed-methods and multi-method studies followed the analysis methods identified for the qualitative and quantitative studies included.
Data collection in the field of psychology.
Questionnaire | 364 | 113 | 65 | 42 | 40 | 51 | 39 | 24 | 37 | 11 |
Experimental task | 68 | 66 | 3 | 52 | 9 | 5 | 11 | 5 | 5 | 1 |
Cognitive ability test | 9 | 57 | 1 | 12 | 6 | 1 | 5 | 1 | 1 | 0 |
Physiological measure | 3 | 12 | 1 | 6 | 2 | 5 | 3 | 0 | 1 | 0 |
Interview | 19 | 3 | 0 | 1 | 3 | 0 | 2 | 2 | 0 | 1 |
Online scholarly literature | 10 | 4 | 0 | 0 | 3 | 4 | 0 | 10 | 0 | 0 |
Open-ended questions | 15 | 3 | 0 | 1 | 3 | 1 | 2 | 3 | 0 | 0 |
Semi-structured interviews | 10 | 3 | 0 | 0 | 3 | 2 | 1 | 2 | 0 | 1 |
Observation | 10 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 |
Documents | 5 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 2 | 0 |
Focus group | 6 | 1 | 2 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
Not stated | 2 | 1 | 1 | 0 | 0 | 0 | 1 | 4 | 0 | 1 |
Public data | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 1 |
Drawing task | 0 | 2 | 0 | 1 | 1 | 1 | 0 | 2 | 0 | 0 |
In-depth interview | 6 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
Structured interview | 0 | 2 | 0 | 0 | 1 | 2 | 0 | 0 | 1 | 0 |
Writing task | 1 | 0 | 0 | 0 | 4 | 0 | 0 | 1 | 0 | 0 |
Questionnaire interviews | 1 | 0 | 1 | 0 | 2 | 0 | 1 | 0 | 0 | 0 |
Non-experimental task | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Tests | 2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Group accounts | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
Open-ended prompts | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
Field notes | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Open-ended interview | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Qualitative questions | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
Social media | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
Assessment procedure | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
Closed-ended questions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
Open discussions | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Qualitative descriptions | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Total | 551 | 273 | 75 | 116 | 79 | 73 | 65 | 60 | 50 | 17 |
Data collection frequency in topics.
Data analysis in the field of psychology.
Not stated | 5 | 1 | 2 | 0 | 0 | 1 | 1 | 5 | 0 | 1 |
Actor-Partner Interdependence Model (APIM) | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Analysis of Covariance (ANCOVA) | 17 | 8 | 1 | 3 | 4 | 2 | 1 | 0 | 0 | 1 |
Analysis of Variance (ANOVA) | 112 | 60 | 16 | 29 | 15 | 17 | 15 | 6 | 5 | 3 |
Auto-regressive path coefficients | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Average variance extracted (AVE) | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
Bartholomew's classification system | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Bayesian analysis | 3 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
Bibliometric analysis | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
Binary logistic regression | 1 | 1 | 0 | 0 | 1 | 4 | 1 | 0 | 0 | 0 |
Binary multilevel regression | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
Binomial and Bernoulli regression models | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Binomial mixed effects model | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Bivariate Correlations | 32 | 10 | 3 | 0 | 4 | 3 | 5 | 1 | 1 | 1 |
Bivariate logistic correlations | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
Bootstrapping | 39 | 16 | 2 | 3 | 5 | 1 | 6 | 1 | 2 | 1 |
Canonical correlations | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 |
Cartesian diagram | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Case-wise diagnostics | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
Causal network analysis | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
Categorisation | 5 | 2 | 0 | 0 | 1 | 1 | 0 | 4 | 0 | 0 |
Categorisation of responses | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Category codes | 3 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
Cattell's scree-test | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Chi-square tests | 52 | 20 | 17 | 5 | 6 | 11 | 8 | 7 | 4 | 3 |
Classic Parallel Analysis (PA) | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 |
Cluster analysis | 7 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 |
Coded | 15 | 3 | 1 | 2 | 1 | 1 | 1 | 2 | 1 | 0 |
Cohen d effect size | 14 | 5 | 2 | 1 | 3 | 2 | 3 | 1 | 0 | 1 |
Common method variance (CMV) | 5 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Comprehensive Meta-Analysis (CMA) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
Confidence Interval (CI) | 2 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
Confirmatory Factor Analysis (CFA) | 57 | 13 | 40 | 0 | 2 | 4 | 7 | 1 | 3 | 1 |
Content analysis | 9 | 1 | 0 | 0 | 2 | 1 | 0 | 1 | 0 | 0 |
Convergent validity | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Cook's distance | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
Correlated-trait-correlated-method minus one model | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Correlational analysis | 259 | 85 | 44 | 18 | 27 | 31 | 34 | 8 | 33 | 8 |
Covariance matrix | 3 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Covariance modelling | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Covariance structure analyses | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Cronbach's alpha | 61 | 14 | 18 | 6 | 5 | 10 | 8 | 3 | 7 | 5 |
Cross-validation | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
Cross-lagged analyses | 1 | 2 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
Dependent t-test | 1 | 2 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 |
Descriptive statistics | 324 | 132 | 43 | 49 | 41 | 43 | 36 | 28 | 29 | 10 |
Differentiated analysis | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
Discriminate analysis | 1 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
Discursive psychology | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Dominance analysis | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Expectation maximisation | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
Exploratory data Analysis | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 |
Exploratory Factor Analysis (EFA) | 14 | 5 | 24 | 0 | 1 | 1 | 4 | 0 | 4 | 0 |
Exploratory structural equation modelling (ESEM) | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
Factor analysis | 12 | 4 | 16 | 0 | 2 | 1 | 5 | 0 | 2 | 0 |
Measurement invariance testing | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Four-way mixed ANOVA | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
Frequency rate | 20 | 1 | 4 | 2 | 1 | 2 | 2 | 2 | 0 | 0 |
Friedman test | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Games-Howell | 2 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
General linear model analysis | 1 | 2 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 |
Greenhouse-Geisser correction | 2 | 5 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 |
Grounded theory method | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
Grounded theory methodology using open and axial coding | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Guttman split-half | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Harman's one-factor test | 13 | 2 | 0 | 0 | 0 | 1 | 2 | 0 | 0 | 0 |
Herman's criteria of experience categorisation | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
Hierarchical CFA (HCFA) | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Hierarchical cluster analysis | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Hierarchical Linear Modelling (HLM) | 76 | 22 | 2 | 3 | 7 | 6 | 7 | 4 | 4 | 1 |
Huynh-Felt correction | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Identified themes | 3 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
Independent samples t-test | 38 | 9 | 4 | 4 | 4 | 8 | 3 | 3 | 1 | 1 |
Inductive open coding | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Inferential statistics | 2 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
Interclass correlation | 3 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Internal consistency | 3 | 1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Interpreted and defined | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
Interpretive Phenomenological Analysis (IPA) | 2 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
Item fit analysis | 1 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
K-means clustering | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
Kaiser-Meyer-Olkin measure of sampling adequacy | 2 | 0 | 8 | 0 | 0 | 0 | 2 | 0 | 2 | 0 |
Kendall's coefficients | 3 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Kolmogorov-Smirnov test | 1 | 2 | 1 | 1 | 2 | 2 | 0 | 0 | 1 | 0 |
Lagged-effects multilevel modelling | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Latent class differentiation (LCD) | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Latent cluster analysis | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
Latent growth curve modelling (LGCM) | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
Latent means | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Latent Profile Analysis (LPA) | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Linear regressions | 69 | 19 | 4 | 10 | 3 | 12 | 5 | 3 | 13 | 0 |
Linguistic Inquiry and Word Count | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
Listwise deletion method | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
Log-likelihood ratios | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
Logistic mixed-effects model | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Logistic regression analyses | 17 | 0 | 1 | 0 | 4 | 2 | 1 | 0 | 0 | 1 |
Loglinear Model | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Mahalanobis distances | 0 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
Mann-Whitney U tests | 6 | 4 | 2 | 1 | 2 | 0 | 2 | 4 | 0 | 0 |
Mauchly's test | 0 | 1 | 0 | 2 | 0 | 0 | 0 | 1 | 0 | 1 |
Maximum likelihood method | 11 | 3 | 9 | 0 | 1 | 3 | 2 | 3 | 1 | 0 |
Maximum-likelihood factor analysis with promax rotation | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Measurement invariance testing | 4 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
Mediation analysis | 29 | 7 | 1 | 2 | 4 | 3 | 5 | 0 | 3 | 0 |
Meta-analysis | 3 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
Microanalysis | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Minimum significant difference (MSD) comparison | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Mixed ANOVAs | 19 | 6 | 0 | 10 | 1 | 2 | 1 | 4 | 1 | 0 |
Mixed linear model | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 |
Mixed-design ANCOVA | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Mixed-effects multiple regression models | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Moderated hierarchical regression model | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Moderated regression analysis | 8 | 4 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 |
Monte Carlo Markov Chains | 2 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Multi-group analysis | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Multidimensional Random Coefficient Multinomial Logit (MRCML) | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Multidimensional Scaling | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Multiple-Group Confirmatory Factor Analysis (MGCFA) | 3 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 |
Multilevel latent class analysis | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
Multilevel modelling | 7 | 2 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 0 |
Multilevel Structural Equation Modelling (MSEM) | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Multinominal logistic regression (MLR) | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Multinominal regression analysis | 1 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 |
Multiple Indicators Multiple Causes (MIMIC) | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 |
Multiple mediation analysis | 2 | 6 | 0 | 0 | 2 | 2 | 1 | 0 | 0 | 0 |
Multiple regression | 34 | 15 | 3 | 0 | 3 | 4 | 5 | 0 | 7 | 2 |
Multivariate analysis of co-variance (MANCOVA) | 12 | 2 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 0 |
Multivariate Analysis of Variance (MANOVA) | 38 | 8 | 4 | 5 | 5 | 6 | 9 | 1 | 1 | 2 |
Multivariate hierarchical linear regression | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Multivariate linear regression | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
Multivariate logistic regression analyses | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Multivariate regressions | 2 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
Nagelkerke's R square | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
Narrative analysis | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
Negative binominal regression with log link | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
Newman-Keuls | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
Nomological Validity Analysis | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
One sample t-test | 8 | 10 | 1 | 7 | 4 | 6 | 4 | 0 | 1 | 0 |
Ordinary Least-Square regression (OLS) | 2 | 2 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
Pairwise deletion method | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
Pairwise parameter comparison | 4 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 |
Parametric Analysis | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
Partial Least Squares regression method (PLS) | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Path analysis | 21 | 9 | 0 | 1 | 2 | 4 | 5 | 1 | 2 | 0 |
Path-analytic model test | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Phenomenological analysis | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
Polynomial regression analyses | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Fisher LSD | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Principal axis factoring | 2 | 1 | 4 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
Principal component analysis (PCA) | 8 | 1 | 12 | 1 | 1 | 0 | 3 | 2 | 5 | 1 |
Pseudo-panel regression | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Quantitative content analysis | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
Receiver operating characteristic (ROC) curve analysis | 2 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
Relative weight analysis | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Repeated measures analyses of variances (rANOVA) | 18 | 22 | 1 | 7 | 5 | 2 | 1 | 1 | 1 | 1 |
Ryan-Einot-Gabriel-Welsch multiple F test | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Satorra-Bentler scaled chi-square statistic | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Scheffe's test | 3 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
Sequential multiple mediation analysis | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Shapiro-Wilk test | 2 | 3 | 0 | 2 | 1 | 0 | 0 | 0 | 0 | 0 |
Sobel Test | 13 | 5 | 0 | 1 | 0 | 2 | 4 | 0 | 0 | 0 |
Squared multiple correlations | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Squared semi-partial correlations (sr2) | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Stepwise regression analysis | 3 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | 2 | 0 |
Structural Equation Modelling (SEM) | 56 | 22 | 3 | 3 | 3 | 5 | 5 | 0 | 5 | 3 |
Structure analysis | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
Subsequent t-test | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
Systematic coding- Gemeinschaft-oriented | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
Task analysis | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Thematic analysis | 11 | 2 | 0 | 0 | 3 | 0 | 2 | 2 | 0 | 0 |
Three (condition)-way ANOVA | 0 | 4 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 |
Three-way hierarchical loglinear analysis | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Tukey-Kramer corrections | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
Two-paired sample t-test | 7 | 6 | 1 | 1 | 0 | 3 | 1 | 1 | 0 | 1 |
Two-tailed related t-test | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
Unadjusted Logistic regression analysis | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Univariate generalized linear models (GLM) | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Variance inflation factor (VIF) | 3 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
Variance-covariance matrix | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
Wald test | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Ward's hierarchical cluster method | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
Weighted least squares with corrections to means and variances (WLSMV) | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Welch and Brown-Forsythe F-ratios | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
Wilcoxon signed-rank test | 3 | 3 | 0 | 2 | 0 | 0 | 0 | 2 | 0 | 1 |
Wilks' Lambda | 6 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
Word analysis | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
Word Association Analysis | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
scores | 5 | 6 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 |
Total | 1738 | 635 | 329 | 192 | 198 | 237 | 225 | 117 | 152 | 55 |
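As a concrete illustration of the two analyses that dominate Table 6, descriptive statistics and correlational analysis, the sketch below (ours, using invented example scores rather than data from the review) computes a mean, a standard deviation, and Pearson's r.

```python
# Minimal sketch (ours, with made-up questionnaire scores) of the two most
# frequently reported analyses: descriptive statistics and a correlational
# analysis (Pearson's r).
import statistics
from math import sqrt

anxiety = [12, 15, 9, 20, 17, 11, 14, 18]     # hypothetical scale scores
wellbeing = [30, 25, 34, 18, 22, 31, 27, 20]  # hypothetical scale scores

# Descriptive statistics: mean and standard deviation
print("M =", round(statistics.mean(anxiety), 2),
      "SD =", round(statistics.stdev(anxiety), 2))

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

print("r =", round(pearson_r(anxiety, wellbeing), 2))
```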
Results for the topics researched in psychology can be seen in the tables, as previously stated in this article. It is noteworthy that, of the 10 topics, social psychology accounted for 43.54% of the studies, with cognitive psychology the second most popular research topic at 16.92%. The remaining topics each occurred in only 4.0–7.0% of the articles considered. A list of the 999 included articles is available under the section "View Articles" on the following website: https://methodgarden.xtrapolate.io/. This website was created by Scholtz et al. (2019) to visually present a research framework based on this article's results.
This systematised review categorised full-length articles from five international journals across a span of 5 years to provide insight into the use of research methods in the field of psychology. Results indicated what methods are used, how these methods are being used, and for what topics (why) in the included sample of articles. The results should be seen as providing insight into method use rather than as a comprehensive representation of the aforementioned aim, owing to the limited sample. To our knowledge, this is the first research study to address this topic in this manner. Our discussion attempts to promote a productive way forward in terms of the key results for method use in psychology, especially in the field of academia (Holloway, 2008).
With regard to the methods used, our data stayed true to the literature, finding only common research methods (Grant and Booth, 2009; Maree, 2016) that varied in the degree to which they were employed. Quantitative research was found to be the most popular method, as indicated by the literature (Breen and Darlaston-Jones, 2010; Counsell and Harlow, 2017) and by previous studies in specific areas of psychology (see Coetzee and Van Zyl, 2014). Its long history as the first research method (Leech et al., 2007) in the field of psychology, as well as researchers' current application of mathematical approaches in their studies (Toomela, 2010), might contribute to its popularity today. Whatever the case may be, our results show that, despite the growth in qualitative research (Demuth, 2015; Smith and McGannon, 2018), quantitative research remains the first choice for article publication in these journals, even though the included journals indicate openness to articles that apply any research method. This finding may be due to qualitative research still being seen as a new method (Burman and Whelan, 2011) or to reviewers' standards being higher for qualitative studies (Bluhm et al., 2011). Future research into possible bias in the publication of research methods is encouraged; further investigation of the proclaimed growth of qualitative research with a different sample may also provide different results.
Review studies were found to outnumber multi-method and mixed-methods studies. To this effect, Grant and Booth (2009) state that increased awareness, journal calls for contributions, and the efficiency of reviews in procuring research funds all promote their popularity. The low frequency of mixed-methods studies contradicts the view in the literature that it is the third most utilised research method (Tashakkori and Teddlie, 2003). Its low occurrence in this sample could be due to opposing views on mixing methods (Gunasekare, 2015), to authors preferring to publish in dedicated mixed-methods journals when using this method, or to its relative novelty (Ivankova et al., 2016). Despite its low occurrence, the application of the mixed-methods design was methodologically clear in all cases, which was not the case for the remainder of the research methods.
Additionally, a substantial number of studies used a combination of methodologies without being mixed-methods or multi-method studies. According to the literature, perceived fixed boundaries are often set aside, as confirmed by this result, in order to address the aim of a study, which could create a new and helpful way of understanding the world (Gunasekare, 2015). According to Toomela (2010), this is not unheard of and could be considered a form of "structural systemic science," as in the case of qualitative methodology (observation) applied in quantitative studies (experimental design), for example. Based on this result, further research into this phenomenon, as well as its implications for research methods such as multi-method and mixed-methods research, is recommended.
Discerning how these research methods were applied presented some difficulty. In the case of sampling, most studies, regardless of method, did mention some form of inclusion and exclusion criteria, but no definite sampling method. This result, along with the fact that samples often consisted of students from the researchers' own academic institutions, can contribute to the literature and to debates among academics (Peterson and Merunka, 2014; Laher, 2016). Samples of convenience and students as participants especially raise questions about the generalisability and applicability of results (Peterson and Merunka, 2014). Attention to sampling is important because inappropriate sampling can undermine the legitimacy of interpretations (Onwuegbuzie and Collins, 2017). Future investigation into the possible implications of this widely reported use of convenience samples for the field of psychology, as well as the reasons for this use, could provide interesting insight and is encouraged by this study.
Additionally, as indicated in Table 4, articles seldom report the research designs used, which highlights a pressing lack of rigour in the included sample. Rigour with regard to the applied empirical method is imperative in promoting psychology as a science (American Psychological Association, 2020). Omitting parts of the research process from publication, when they could have been used to inform others' research skills, should be questioned, and the influence on the process of replicating results should be considered. Publications are often rejected due to a lack of rigour in the applied method and designs (Fonseca, 2013; Laher, 2016), calling for increased clarity and knowledge of method application. Replication is a critical part of any field of scientific research and requires the "complete articulation" of the study methods used (Drotar, 2010, p. 804). The lack of thorough description could be explained by the requirements of certain journals to report only on certain aspects of the research process, especially with regard to the applied design (Laher, 2016). However, naming aspects such as sampling and design is a requirement according to the APA's Journal Article Reporting Standards (JARS-Quant) (Appelbaum et al., 2018). With very little information on how a study was conducted, authors lose a valuable opportunity to enhance research validity, enrich the knowledge of others, and contribute to the growth of psychology and methodology as a whole. In the case of this research study, it also restricted our results to the reported samples and designs, which indicated a preference for certain designs, such as cross-sectional designs for quantitative studies.
Data collection and analysis were, for the most part, clearly stated. A key result was the versatile use of questionnaires: researchers applied questionnaires in various ways, for example as questionnaire interviews, online surveys, and written questionnaires, across most research methods. This may highlight a trend for future research.
With regard to the topics these methods were employed for, our research study found a new theme, named "psychological practice." This result may reflect the growing consciousness of researchers as part of the research process (Denzin and Lincoln, 2003), of psychological practice, and of knowledge generation. The most popular of these topics was social psychology, which is generously covered in journals and by learned societies, a testament to the institutional support and richness social psychology enjoys in the field of psychology (Chryssochoou, 2015). The APA's overview of 2018 trends in psychology likewise identifies an increased focus on how social determinants influence people's health (Deangelis, 2017).
This study was not without limitations, and the following should be taken into account. Firstly, this study used a sample of five specific journals to address its aim; despite the general aims of these journals (as stated on their websites), this inclusion meant a bias towards the research methods published in these specific journals only and limited generalisability. A broader sample of journals over a different period of time, or a single journal over a longer period of time, might provide different results. A second limitation is the use of Excel spreadsheets and an electronic system to log articles, which was a manual process and therefore left room for error (Bandara et al., 2015). To address this potential issue, co-coding was performed to reduce error. Lastly, this article categorised data based on the information presented in the article sample; there was no interpretation of what methodology could have been applied or of whether the stated methods adhered to the criteria for those methods. Thus, the large number of articles that did not clearly indicate a research method or design could influence the results of this review; however, this in itself was also a noteworthy result. Future research could review the research methods of a broader sample of journals with an interpretive review tool that increases rigour. Additionally, the authors encourage the future use of systematised review designs as a way to promote a concise procedure in applying this design.
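The article states that co-coding was used to reduce logging error but does not say how agreement between coders was quantified. One common option, shown here purely as a hypothetical illustration with invented labels, is Cohen's kappa.

```python
# Hypothetical illustration: Cohen's kappa as one way to quantify agreement
# between two coders. The article mentions co-coding but does not state how
# (or whether) agreement was quantified; the labels below are invented.
from collections import Counter

coder_a = ["quant", "quant", "qual", "review", "quant", "qual", "quant", "mixed"]
coder_b = ["quant", "qual",  "qual", "review", "quant", "qual", "quant", "quant"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n  # raw agreement

# Chance agreement: product of each coder's marginal proportions, summed over labels
counts_a, counts_b = Counter(coder_a), Counter(coder_b)
labels = set(coder_a) | set(coder_b)
expected = sum((counts_a[label] / n) * (counts_b[label] / n) for label in labels)

kappa = (observed - expected) / (1 - expected)
print(round(kappa, 2))
```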
Our research study presented the use of research methods in published articles in the field of psychology, as well as recommendations for future research based on these results. Insight was gained into the complex questions identified in the literature regarding what methods are used, how these methods are being used, and for what topics (why). This sample preferred quantitative methods, used convenience sampling, and presented a lack of rigorous accounting for the remaining methodologies. All methodologies that were clearly indicated in the sample were tabulated to give researchers insight into the general use of methods, not only the most frequently used methods. The lack of rigorous accounts of research methods in articles was represented in depth for each step of the research process and can be of vital importance in addressing the current replication crisis within the field of psychology. Recommendations for future research aim to motivate research into the practical implications of these results for psychology, for example publication bias and the use of convenience samples.
Ethics Statement
This study was cleared by the North-West University Health Research Ethics Committee: NWU-00115-17-S1.
Author Contributions
All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
- Aanstoos C. M. (2014). Psychology. Available online at: http://eds.a.ebscohost.com.nwulib.nwu.ac.za/eds/detail/detail?sid=18de6c5c-2b03-4eac-94890145eb01bc70%40sessionmgr4006&vid=1&hid=4113&bdata=JnNpdGU9ZWRzL~WxpdmU%3d#AN=93871882&db=ers
- American Psychological Association (2020). Science of Psychology. Available online at: https://www.apa.org/action/science/
- Appelbaum M., Cooper H., Kline R. B., Mayo-Wilson E., Nezu A. M., Rao S. M. (2018). Journal article reporting standards for quantitative research in psychology: the APA Publications and Communications Board task force report. Am. Psychol. 73:3. 10.1037/amp0000191
- Bandara W., Furtmueller E., Gorbacheva E., Miskon S., Beekhuyzen J. (2015). Achieving rigor in literature reviews: insights from qualitative data analysis and tool-support. Commun. Ass. Inform. Syst. 37, 154–204. 10.17705/1CAIS.03708
- Barr-Walker J. (2017). Evidence-based information needs of public health workers: a systematized review. J. Med. Libr. Assoc. 105, 69–79. 10.5195/JMLA.2017.109
- Bittermann A., Fischer A. (2018). How to identify hot topics in psychology using topic modeling. Z. Psychol. 226, 3–13. 10.1027/2151-2604/a000318
- Bluhm D. J., Harman W., Lee T. W., Mitchell T. R. (2011). Qualitative research in management: a decade of progress. J. Manage. Stud. 48, 1866–1891. 10.1111/j.1467-6486.2010.00972.x
- Breen L. J., Darlaston-Jones D. (2010). Moving beyond the enduring dominance of positivism in psychological research: implications for psychology in Australia. Aust. Psychol. 45, 67–76. 10.1080/00050060903127481
- Burman E., Whelan P. (2011). Problems in/of Qualitative Research. Maidenhead: Open University Press/McGraw Hill.
- Chaichanasakul A., He Y., Chen H., Allen G. E. K., Khairallah T. S., Ramos K. (2011). Journal of Career Development: a 36-year content analysis (1972–2007). J. Career. Dev. 38, 440–455. 10.1177/0894845310380223
- Chryssochoou X. (2015). Social Psychology. Inter. Encycl. Soc. Behav. Sci. 22, 532–537. 10.1016/B978-0-08-097086-8.24095-6
- Cichocka A., Jost J. T. (2014). Stripped of illusions? Exploring system justification processes in capitalist and post-Communist societies. Inter. J. Psychol. 49, 6–29. 10.1002/ijop.12011
- Clay R. A. (2017). Psychology is More Popular Than Ever. Monitor on Psychology: Trends Report. Available online at: https://www.apa.org/monitor/2017/11/trends-popular
- Coetzee M., Van Zyl L. E. (2014). A review of a decade's scholarly publications (2004–2013) in the South African Journal of Industrial Psychology. SA. J. Psychol. 40, 1–16. 10.4102/sajip.v40i1.1227
- Counsell A., Harlow L. (2017). Reporting practices and use of quantitative methods in Canadian journal articles in psychology. Can. Psychol. 58, 140–147. 10.1037/cap0000074
- Deangelis T. (2017). Targeting Social Factors That Undermine Health. Monitor on Psychology: Trends Report. Available online at: https://www.apa.org/monitor/2017/11/trend-social-factors
- Demuth C. (2015). New directions in qualitative research in psychology. Integr. Psychol. Behav. Sci. 49, 125–133. 10.1007/s12124-015-9303-9
- Denzin N. K., Lincoln Y. (2003). The Landscape of Qualitative Research: Theories and Issues, 2nd Edn. London: Sage.
- Drotar D. (2010). A call for replications of research in pediatric psychology and guidance for authors. J. Pediatr. Psychol. 35, 801–805. 10.1093/jpepsy/jsq049
- Dweck C. S. (2017). Is psychology headed in the right direction? Yes, no, and maybe. Perspect. Psychol. Sci. 12, 656–659. 10.1177/1745691616687747
- Earp B. D., Trafimow D. (2015). Replication, falsification, and the crisis of confidence in social psychology. Front. Psychol. 6:621. 10.3389/fpsyg.2015.00621
- Ezeh A. C., Izugbara C. O., Kabiru C. W., Fonn S., Kahn K., Manderson L., et al. (2010). Building capacity for public and population health research in Africa: the consortium for advanced research training in Africa (CARTA) model. Glob. Health Action 3:5693. 10.3402/gha.v3i0.5693
- Ferreira A. L. L., Bessa M. M. M., Drezett J., De Abreu L. C. (2016). Quality of life of the woman carrier of endometriosis: systematized review. Reprod. Clim. 31, 48–54. 10.1016/j.recli.2015.12.002
- Fonseca M. (2013). Most Common Reasons for Journal Rejections. Available online at: http://www.editage.com/insights/most-common-reasons-for-journal-rejections
- Gough B., Lyons A. (2016). The future of qualitative research in psychology: accentuating the positive. Integr. Psychol. Behav. Sci. 50, 234–243. 10.1007/s12124-015-9320-8
- Grant M. J., Booth A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info. Libr. J. 26, 91–108. 10.1111/j.1471-1842.2009.00848.x
- Grix J. (2002). Introducing students to the generic terminology of social research. Politics 22, 175–186. 10.1111/1467-9256.00173
- Gunasekare U. L. T. P. (2015). Mixed research method as the third research paradigm: a literature review. Int. J. Sci. Res. 4, 361–368. Available online at: https://ssrn.com/abstract=2735996
- Hengartner M. P. (2018). Raising awareness for the replication crisis in clinical psychology by focusing on inconsistencies in psychotherapy research: how much can we rely on published findings from efficacy trials? Front. Psychol. 9:256. 10.3389/fpsyg.2018.00256
- Holloway W. (2008). Doing intellectual disagreement differently. Psychoanal. Cult. Soc. 13, 385–396. 10.1057/pcs.2008.29
- Ivankova N. V., Creswell J. W., Plano Clark V. L. (2016). Foundations and approaches to mixed methods research, in First Steps in Research, 2nd Edn, ed Maree K. (Pretoria: Van Schaik Publishers), 306–335.
- Johnson M., Long T., White A. (2001). Arguments for British pluralism in qualitative health research. J. Adv. Nurs. 33, 243–249. 10.1046/j.1365-2648.2001.01659.x
- Johnston A., Kelly S. E., Hsieh S. C., Skidmore B., Wells G. A. (2019). Systematic reviews of clinical practice guidelines: a methodological guide. J. Clin. Epidemiol. 108, 64–72. 10.1016/j.jclinepi.2018.11.030
- Ketchen D. J., Jr., Boyd B. K., Bergh D. D. (2008). Research methodology in strategic management: past accomplishments and future challenges. Organ. Res. Methods 11, 643–658. 10.1177/1094428108319843
- Ktepi B. (2016). Data Analytics (DA). Available online at: https://eds-b-ebscohost-com.nwulib.nwu.ac.za/eds/detail/detail?vid=2&sid=24c978f0-6685-4ed8-ad85-fa5bb04669b9%40sessionmgr101&bdata=JnNpdGU9ZWRzLWxpdmU%3d#AN=113931286&db=ers
- Laher S. (2016). Ostinato rigore: establishing methodological rigour in quantitative research. S. Afr. J. Psychol. 46, 316–327. 10.1177/0081246316649121
- Lee C. (2015). The Myth of the Off-Limits Source. Available online at: http://blog.apastyle.org/apastyle/research/
- Lee T. W., Mitchell T. R., Sablynski C. J. (1999). Qualitative research in organizational and vocational psychology, 1979–1999. J. Vocat. Behav. 55, 161–187. 10.1006/jvbe.1999.1707
- Leech N. L., Anthony J., Onwuegbuzie A. J. (2007). A typology of mixed methods research designs. Sci. Bus. Media B. V Qual. Quant 43, 265–275. 10.1007/s11135-007-9105-3
- Levitt H. M., Motulsky S. L., Wertz F. J., Morrow S. L., Ponterotto J. G. (2017). Recommendations for designing and reviewing qualitative research in psychology: promoting methodological integrity. Qual. Psychol. 4, 2–22. 10.1037/qup0000082
- Lowe S. M., Moore S. (2014). Social networks and female reproductive choices in the developing world: a systematized review. Rep. Health 11:85. 10.1186/1742-4755-11-85
- Maree K. (2016). Planning a research proposal, in First Steps in Research, 2nd Edn, ed Maree K. (Pretoria: Van Schaik Publishers), 49–70.
- Maree K., Pietersen J. (2016). Sampling, in First Steps in Research, 2nd Edn, ed Maree K. (Pretoria: Van Schaik Publishers), 191–202.
- Ngulube P. (2013). Blending qualitative and quantitative research methods in library and information science in sub-Saharan Africa. ESARBICA J. 32, 10–23. Available online at: http://hdl.handle.net/10500/22397
- Nieuwenhuis J. (2016). Qualitative research designs and data-gathering techniques, in First Steps in Research, 2nd Edn, ed Maree K. (Pretoria: Van Schaik Publishers), 71–102.
- Nind M., Kilburn D., Wiles R. (2015). Using video and dialogue to generate pedagogic knowledge: teachers, learners and researchers reflecting together on the pedagogy of social research methods. Int. J. Soc. Res. Methodol. 18, 561–576. 10.1080/13645579.2015.1062628
- O'Cathain A. (2009). Editorial: mixed methods research in the health sciences—a quiet revolution. J. Mix. Methods 3, 1–6. 10.1177/1558689808326272
- O'Neil S., Koekemoer E. (2016). Two decades of qualitative research in psychology, industrial and organisational psychology and human resource management within South Africa: a critical review. SA J. Indust. Psychol. 42, 1–16. 10.4102/sajip.v42i1.1350
- Onwuegbuzie A. J., Collins K. M. (2017). The role of sampling in mixed methods research enhancing inference quality. Köln Z Soziol. 2, 133–156. 10.1007/s11577-017-0455-0
- Perestelo-Pérez L. (2013). Standards on how to develop and report systematic reviews in psychology and health. Int. J. Clin. Health Psychol. 13, 49–57. 10.1016/S1697-2600(13)70007-3
- Pericall L. M. T., Taylor E. (2014). Family function and its relationship to injury severity and psychiatric outcome in children with acquired brain injury: a systematized review. Dev. Med. Child Neurol. 56, 19–30. 10.1111/dmcn.12237
- Peterson R. A., Merunka D. R. (2014). Convenience samples of college students and research reproducibility. J. Bus. Res. 67, 1035–1041. 10.1016/j.jbusres.2013.08.010
- Ritchie J., Lewis J., Elam G. (2009). Designing and selecting samples, in Qualitative Research Practice: A Guide for Social Science Students and Researchers, 2nd Edn, eds Ritchie J., Lewis J. (London: Sage), 1–23.
- Sandelowski M. (2011). When a cigar is not just a cigar: alternative perspectives on data and data analysis. Res. Nurs. Health 34, 342–352. 10.1002/nur.20437
- Sandelowski M., Voils C. I., Knafl G. (2009). On quantitizing. J. Mix. Methods Res. 3, 208–222. 10.1177/1558689809334210
- Scholtz S. E., De Klerk W., De Beer L. T. (2019). A data generated research framework for conducting research methods in psychological research.
- Scimago Journal & Country Rank (2017). Available online at: http://www.scimagojr.com/journalrank.php?category=3201&year=2015
- Scopus (2017a). About Scopus. Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).
- Scopus (2017b). Document Search. Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).
- Scott Jones J., Goldring J. E. (2015). 'I'm not a quants person': key strategies in building competence and confidence in staff who teach quantitative research methods. Int. J. Soc. Res. Methodol. 18, 479–494. 10.1080/13645579.2015.1062623
- Smith B., McGannon K. R. (2018). Developing rigor in qualitative research: problems and opportunities within sport and exercise psychology. Int. Rev. Sport Exerc. Psychol. 11, 101–121. 10.1080/1750984X.2017.1317357
- Stangor C. (2011). Introduction to Psychology. Available online at: http://www.saylor.org/books/
- Strydom H. (2011). Sampling in the quantitative paradigm, in Research at Grass Roots: For the Social Sciences and Human Service Professions, 4th Edn, eds de Vos A. S., Strydom H., Fouché C. B., Delport C. S. L. (Pretoria: Van Schaik Publishers), 221–234.
- Tashakkori A., Teddlie C. (2003). Handbook of Mixed Methods in Social & Behavioural Research. Thousand Oaks, CA: SAGE Publications.
- Toomela A. (2010). Quantitative methods in psychology: inevitable and useless. Front. Psychol. 1:29. 10.3389/fpsyg.2010.00029
- Truscott D. M., Swars S., Smith S., Thornton-Reid F., Zhao Y., Dooley C., et al. (2010). A cross-disciplinary examination of the prevalence of mixed methods in educational research: 1995–2005. Int. J. Soc. Res. Methodol. 13, 317–328. 10.1080/13645570903097950
- Weiten W. (2010). Psychology Themes and Variations. Belmont, CA: Wadsworth.
The role of research in the practice of psychology
‘I’ve done my research’ is a phrase that seems to be spoken more and more often. As information continues to become easier to produce and access, doing research is likely to become more relevant for everybody. In psychology, research plays an essential part in understanding human behaviour, and in the assessment, diagnosis and treatment of psychological disorders.
As a science, the body of knowledge under the heading ‘psychology’ concerns our knowledge of human behaviour that has been acquired through scientific research. Behaviour can be researched through an array of techniques and study designs, which gives individual studies unique qualities that affect the conclusions that can be drawn from them. This means a single piece of research will rarely provide a comprehensive understanding of a particular problem, and that research needs to be ongoing.
In Australia, registration as a psychologist requires university study involving both training in research and conducting research itself. This enables psychologists to do their own research, and also to understand, critique and apply others’ research to reach their own conclusions. Conducting high-quality research requires critical thinking, rigour, logic, and objectivity, skills that psychologists can also apply when assessing the quality of the studies they use to inform their practice.
A key area of application for research in psychology is in developing, administering and interpreting psychological assessments. To give the results real-world meaning, the development of psychological assessments requires research in actual populations. The research used to develop an assessment has a substantial effect on determining whether it is appropriate to use, how the test should be administered, and how the results should be interpreted. Because of this, understanding the research behind an assessment is important in psychological practice: it enables psychologists to better explain what results ‘mean’.
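As a purely hypothetical illustration of how research-derived norms shape interpretation, converting a raw assessment score into a standardised score requires the normative mean and standard deviation established during the test’s development; the values below are invented.

```python
# Hypothetical illustration of norm-referenced interpretation.
# The raw score, normative mean, and SD below are invented; real values come
# from the normative research conducted when an assessment is developed.

def to_z(raw, norm_mean, norm_sd):
    """Standardise a raw score against a normative sample."""
    return (raw - norm_mean) / norm_sd

def to_t(raw, norm_mean, norm_sd):
    """Convert to a T-score (mean 50, SD 10), a common reporting scale."""
    return 50 + 10 * to_z(raw, norm_mean, norm_sd)

raw_score = 27
z = to_z(raw_score, norm_mean=20.0, norm_sd=5.0)
print(f"z = {z:.2f}, T = {to_t(raw_score, 20.0, 5.0):.0f}")
# z = 1.40, T = 64 -> about 1.4 SD above the normative mean
```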
Research is also conducted in psychology to develop treatments for psychological disorders, determine whether they are effective, and use them in clinical practice. As psychologists are required to follow evidence-based practice, the treatments they use have been demonstrated under scientific conditions to produce results. Research into the efficacy of treatments enables psychologists to better understand the variables involved and ensures treatments are applied in the most effective way. As an example, a study conducted by PsychMed on rates of remission for methamphetamine addiction, which showed that people who gave up both tobacco and methamphetamine had higher rates of remission than those who quit methamphetamine alone, has helped us advise on issues around co-substance use.
Finally, research plays a role in measurement-based treatment and care. Measurement-based treatment is a systematic approach to mental health care that involves using standardised assessments to track a patient’s progress and adjust their treatment plan as needed. This approach is based on the idea that regular monitoring and assessment can identify changes in different aspects of a patient’s mental health, including their symptoms, functioning, and overall well-being, and allow for timely adjustments to their treatment plan.
One of the main advantages of measurement-based treatment is that it provides a systematic and objective way to monitor a patient’s progress over time. This can be particularly helpful for patients with chronic mental health conditions, as it can allow for more precise tracking of their symptoms and functioning and can help ensure that their treatment plan is appropriate and effective. In addition to tracking a patient’s progress, measurement-based treatment may also involve setting specific treatment goals and working with the patient to develop strategies to achieve those goals.
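A minimal sketch (ours; the scale, scores, and review threshold are hypothetical) of the kind of session-by-session tracking that measurement-based care relies on, flagging when scores change enough to prompt a review of the treatment plan:

```python
# Hypothetical illustration of measurement-based tracking: record a
# standardised assessment score each session and flag changes large enough
# to warrant reviewing the treatment plan. The scale, scores, and the
# 5-point review threshold are all invented for illustration.

sessions = [1, 2, 3, 4, 5, 6]
scores = [18, 17, 15, 16, 11, 9]  # e.g., a symptom scale where lower is better
REVIEW_THRESHOLD = 5              # hypothetical "meaningful change" cut-off

baseline = scores[0]
for session, score in zip(sessions, scores):
    change = score - baseline
    flag = "review plan" if abs(change) >= REVIEW_THRESHOLD else ""
    print(f"session {session}: score {score:>2} (change {change:+d}) {flag}")
```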
In summary, research is crucial in understanding human behaviour, as well as in the assessment, diagnosis, and treatment of psychological disorders. Psychological assessments and interventions are developed through rigorous research, and psychologists being trained in research enables them to understand, critique, and apply this research in their own practice. Measurement-based treatment, which involves using standardised assessments to track a patient’s progress and adjust their treatment plan as needed, can be seen as research on the smallest, but also most relevant scale. Understanding the research behind these tools and approaches is vital for psychologists to provide the most effective care for their patients.
Psychology is the scientific discipline that studies mental states and processes and behaviour in humans and other animals.
The discipline of psychology is broadly divisible into two parts: a large profession of practitioners and a smaller but growing science of mind, brain, and social behaviour. The two have distinctive goals, training, and practices, but some psychologists integrate the two.
In Western culture, contributors to the development of psychology came from many areas, beginning with philosophers such as Plato and Aristotle. Hippocrates philosophized about basic human temperaments (e.g., choleric, sanguine, melancholic) and their associated traits. Informed by the biology of his time, he speculated that physical qualities, such as yellow bile or too much blood, might underlie differences in temperament (see also humour). Aristotle postulated the brain to be the seat of the rational human mind, and in the 17th century René Descartes argued that the mind gives people the capacities for thought and consciousness: the mind “decides” and the body carries out the decision—a dualistic mind-body split that modern psychological science is still working to overcome. Two figures who helped to found psychology as a formal discipline and science in the 19th century were Wilhelm Wundt in Germany and William James in the United States. James’s The Principles of Psychology (1890) defined psychology as the science of mental life and provided insightful discussions of topics and challenges that anticipated much of the field’s research agenda a century later.
During the first half of the 20th century, however, behaviourism dominated most of American academic psychology. In 1913 John B. Watson, one of the influential founders of behaviourism, urged reliance on only objectively measurable actions and conditions, effectively removing the study of consciousness from psychology. He argued that psychology as a science must deal exclusively with directly observable behaviour in lower animals as well as humans, emphasized the importance of rewarding only desired behaviours in child rearing, and drew on principles of learning through classical conditioning (based on studies with dogs by the Russian physiologist Ivan Pavlov and thus known as Pavlovian conditioning). In the United States most university psychology departments became devoted to turning psychology away from philosophy and into a rigorous empirical science.
Beginning in the 1930s, behaviourism flourished in the United States, with B.F. Skinner leading the way in demonstrating the power of operant conditioning through reinforcement. Behaviourists in university settings conducted experiments on the conditions controlling learning and “shaping” behaviour through reinforcement, usually working with laboratory animals such as rats and pigeons. Skinner and his followers explicitly excluded mental life, viewing the human mind as an impenetrable “black box,” open only to conjecture and speculative fictions. Their work showed that social behaviour is readily influenced by manipulating specific contingencies and by changing the consequences or reinforcement (rewards) to which behaviour leads in different situations. Changes in those consequences can modify behaviour in predictable stimulus-response (S-R) patterns. Likewise, a wide range of emotions, both positive and negative, may be acquired through processes of conditioning and can be modified by applying the same principles.
Concurrently, in a curious juxtaposition, the psychoanalytic theories and therapeutic practices developed by the Vienna-trained physician Sigmund Freud and his many disciples—beginning early in the 20th century and enduring for many decades—were undermining the traditional view of human nature as essentially rational. Freudian theory made reason secondary: for Freud, the unconscious and its often socially unacceptable irrational motives and desires, particularly the sexual and aggressive, were the driving force underlying much of human behaviour and mental illness. Making the unconscious conscious became the therapeutic goal of clinicians working within this framework.
Freud proposed that much of what humans feel, think, and do is outside awareness, self-defensive in its motivations, and unconsciously determined. Much of it also reflects conflicts grounded in early childhood that play out in complex patterns of seemingly paradoxical behaviours and symptoms. His followers, the ego psychologists, emphasized the importance of the higher-order functions and cognitive processes (e.g., competence motivation, self-regulatory abilities) as well as the individual’s psychological defense mechanisms. They also shifted their focus to the roles of interpersonal relations and of secure attachment in mental health and adaptive functioning, and they pioneered the analysis of these processes in the clinical setting.
After World War II, American psychology, particularly clinical psychology, grew into a substantial field in its own right, partly in response to the needs of returning veterans. The growth of psychology as a science was stimulated further by the launching of Sputnik in 1957 and the opening of the Russian-American space race to the Moon. As part of this race, the U.S. government fueled the growth of science. For the first time, massive federal funding became available, both to support behavioral research and to enable graduate training. Psychology became both a thriving profession of practitioners and a scientific discipline that investigated all aspects of human social behaviour, child development, and individual differences, as well as the areas of animal psychology, sensation, perception, memory, and learning.
Training in clinical psychology was heavily influenced by Freudian psychology and its offshoots. But some clinical researchers, working with both normal and disturbed populations, began to develop and apply methods focusing on the learning conditions that influence and control social behaviour. This behaviour therapy movement analyzed problematic behaviours (e.g., aggressiveness, bizarre speech patterns, smoking, fear responses) in terms of the observable events and conditions that seemed to influence the person’s problematic behaviour. Behavioral approaches led to innovations for therapy by working to modify problematic behaviour not through insight, awareness, or the uncovering of unconscious motivations but by addressing the behaviour itself. Behaviourists attempted to modify the maladaptive behaviour directly, examining the conditions controlling the individual’s current problems, not their possible historical roots. They also intended to show that such efforts could be successful without the symptom substitution that Freudian theory predicted. Freudians believed that removing the troubling behaviour directly would be followed by new and worse problems. Behaviour therapists showed that this was not necessarily the case.
To begin exploring the role of genetics in personality and social development, psychologists compared the similarity in personality shown by people who share the same genes or the same environment. Twin studies compared monozygotic (identical) as opposed to dizygotic (fraternal) twins, raised either in the same or in different environments. Overall, these studies demonstrated the important role of heredity in a wide range of human characteristics and traits, such as those of the introvert and extravert, and indicated that the biological-genetic influence was far greater than early behaviourism had assumed. At the same time, it also became clear that how such dispositions are expressed in behaviour depends importantly on interactions with the environment in the course of development, beginning in utero.
Review article
The Use of Research Methods in Psychological Research: A Systematised Review
- 1 Community Psychosocial Research (COMPRES), School of Psychosocial Health, North-West University, Potchefstroom, South Africa
- 2 WorkWell Research Institute, North-West University, Potchefstroom, South Africa
Research methods play an imperative role in research quality as well as in educating young researchers; however, how they are applied is unclear, which can be detrimental to the field of psychology. This systematised review therefore aimed to determine what research methods are being used, how these methods are being used, and for what topics in the field. Our review of 999 articles from five journals over a period of 5 years indicated that psychology research is conducted on 10 topics, predominantly via quantitative research methods. Of these 10 topics, social psychology was the most popular. The remainder of the reported methodology is described. It was also found that articles lacked rigour and transparency in the methodology used, which has implications for replicability. In conclusion, this article provides an overview of all reported methodologies used in a sample of psychology journals. It highlights the popularity and application of methods and designs throughout the article sample, as well as an unexpected lack of rigour with regard to most aspects of methodology. Possible sample bias should be considered when interpreting the results of this study. It is recommended that future research utilise the results of this study to determine the possible impact on the field of psychology as a science and to further investigate the use of research methods. The results should prompt future research into the lack of rigour and its implications for replication, the use of certain methods above others, publication bias, and the choice of sampling method.
Introduction
Psychology is an ever-growing and popular field ( Gough and Lyons, 2016 ; Clay, 2017 ). Due to this growth and the need for science-based research to base health decisions on ( Perestelo-Pérez, 2013 ), the use of research methods in the broad field of psychology is an essential point of investigation ( Stangor, 2011 ; Aanstoos, 2014 ). Research methods are therefore viewed as important tools used by researchers to collect data ( Nieuwenhuis, 2016 ) and include the following: quantitative, qualitative, mixed method and multi method ( Maree, 2016 ). Additionally, researchers also employ various types of literature reviews to address research questions ( Grant and Booth, 2009 ). According to literature, what research method is used and why a certain research method is used is complex as it depends on various factors that may include paradigm ( O'Neil and Koekemoer, 2016 ), research question ( Grix, 2002 ), or the skill and exposure of the researcher ( Nind et al., 2015 ). How these research methods are employed is also difficult to discern as research methods are often depicted as having fixed boundaries that are continuously crossed in research ( Johnson et al., 2001 ; Sandelowski, 2011 ). Examples of this crossing include adding quantitative aspects to qualitative studies ( Sandelowski et al., 2009 ), or stating that a study used a mixed-method design without the study having any characteristics of this design ( Truscott et al., 2010 ).
The inappropriate use of research methods affects how students and researchers improve and utilise their research skills ( Scott Jones and Goldring, 2015 ), how theories are developed ( Ngulube, 2013 ), and the credibility of research results ( Levitt et al., 2017 ). This, in turn, can be detrimental to the field ( Nind et al., 2015 ), journal publication ( Ketchen et al., 2008 ; Ezeh et al., 2010 ), and attempts to address public social issues through psychological research ( Dweck, 2017 ). This is especially important given the now well-known replication crisis the field is facing ( Earp and Trafimow, 2015 ; Hengartner, 2018 ).
Due to this lack of clarity on method use and the potential impact of inept use of research methods, the aim of this study was to explore the use of research methods in the field of psychology through a review of journal publications. Chaichanasakul et al. (2011) identify reviewing articles as an opportunity to examine the development, growth, and progress of a research area and the overall quality of a journal. Studies such as Lee et al. (1999) and Bluhm et al. (2011), which reviewed qualitative methods, have attempted to synthesise the use of research methods and indicated the growth of qualitative research in American and European journals. Research has also focused on the use of research methods in specific sub-disciplines of psychology; for example, in the field of industrial and organisational psychology, Coetzee and Van Zyl (2014) found that South African publications tend to consist of cross-sectional quantitative research methods, with longitudinal studies underrepresented. Qualitative studies were found to make up 21% of the articles published from 1995 to 2015 in a similar study by O'Neil and Koekemoer (2016). Other methods, such as mixed methods research in health psychology, have also reportedly been growing in popularity (O'Cathain, 2009).
A broad overview of the use of research methods in the field of psychology as a whole is, however, not available in the literature. Therefore, our research focused on answering what research methods are being used, how these methods are being used, and for what topics in practice (i.e., journal publications) in order to provide a general perspective of method use in psychology publications. We synthesised the collected data into the following format: research topic [areas of scientific discourse in a field or the current needs of a population (Bittermann and Fischer, 2018)], method [data-gathering tools (Nieuwenhuis, 2016)], sampling [elements chosen from a population to partake in research (Ritchie et al., 2009)], data collection [techniques and research strategy (Maree, 2016)], and data analysis [discovering information by examining bodies of data (Ktepi, 2016)]. A systematised review of recent articles (2013 to 2017) collected from five different journals in the field of psychological research was conducted.
Grant and Booth (2009) describe systematised reviews as the review of choice for post-graduate studies, which is employed using some elements of a systematic review and seldom more than one or two databases to catalogue studies after a comprehensive literature search. The aspects used in this systematised review that are similar to that of a systematic review were a full search within the chosen database and data produced in tabular form ( Grant and Booth, 2009 ).
Sample sizes and timelines vary in systematised reviews (see Lowe and Moore, 2014; Pericall and Taylor, 2014; Barr-Walker, 2017). With no clear parameters identified in the literature (see Grant and Booth, 2009), the sample size of this study was determined by the purpose of the sample (Strydom, 2011) and by time and cost constraints (Maree and Pietersen, 2016). Thus, a non-probability purposive sample (Ritchie et al., 2009) of the top five psychology journals from 2013 to 2017 was included in this research study. Per Lee (2015), the American Psychological Association (APA) recommends the use of the most up-to-date sources for data collection, with consideration of the context of the research study. As this research study focused on the most recent trends in research methods used in the broad field of psychology, the identified time frame was deemed appropriate.
Psychology journals were only included if they formed part of the top five English journals in the miscellaneous psychology domain of the Scimago Journal and Country Rank (Scimago Journal & Country Rank, 2017). The Scimago Journal and Country Rank provides a yearly updated list of publicly accessible journal and country-specific indicators derived from the Scopus® database (Scopus, 2017b) by means of the Scimago Journal Rank (SJR) indicator, developed by Scimago from the Google PageRank™ algorithm (Scimago Journal & Country Rank, 2017). Scopus is the largest global database of abstracts and citations from peer-reviewed journals (Scopus, 2017a). The Scimago Journal and Country Rank list was developed to allow researchers to assess scientific domains, compare country rankings, and compare and analyse journals (Scimago Journal & Country Rank, 2017), which supported the aim of this research study. Additionally, the goals of the journals had to focus on topics in psychology in general, with no preference for specific research methods, and the journals had to provide full-text access to articles.
The following top five journals in 2018 fell within the abovementioned inclusion criteria: (1) Australian Journal of Psychology, (2) British Journal of Psychology, (3) Europe's Journal of Psychology, (4) International Journal of Psychology, and (5) Journal of Psychology Applied and Interdisciplinary.
Journals were excluded from this systematised review if no full-text versions of their articles were available, if journals explicitly stated a publication preference for certain research methods, or if the journal only published articles in a specific discipline of psychological research (for example, industrial psychology, clinical psychology etc.).
The researchers followed a procedure (see Figure 1 ) adapted from that of Ferreira et al. (2016) for systematised reviews. Data collection and categorisation commenced on 4 December 2017 and continued until 30 June 2019. All the data was systematically collected and coded manually ( Grant and Booth, 2009 ) with an independent person acting as co-coder. Codes of interest included the research topic, method used, the design used, sampling method, and methodology (the method used for data collection and data analysis). These codes were derived from the wording in each article. Themes were created based on the derived codes and checked by the co-coder. Lastly, these themes were catalogued into a table as per the systematised review design.
Figure 1 . Systematised review procedure.
According to Johnston et al. (2019), “literature screening, selection, and data extraction/analyses” (p. 7) are specifically tailored to the aim of a review. Therefore, the steps followed in a systematic review must be reported in a comprehensive and transparent manner. The chosen systematised design adhered to the rigour expected from systematic reviews with regard to a full search and data produced in tabular form (Grant and Booth, 2009). The rigorous application of the systematic review is therefore discussed in relation to these two elements.
Firstly, to ensure a comprehensive search, this research study promoted review transparency by following a clear protocol outlined according to each review stage before collecting data (Johnston et al., 2019). This protocol was similar to that of Ferreira et al. (2016) and was approved by three research committees/stakeholders and the researchers (Johnston et al., 2019). The eligibility criteria for article inclusion were based on the research question and clearly stated, and the process of inclusion was recorded on an electronic spreadsheet to create an evidence trail (Bandara et al., 2015; Johnston et al., 2019). Microsoft Excel spreadsheets are a popular tool for review studies and can increase the rigour of the review process (Bandara et al., 2015). Screening for appropriate articles for inclusion forms an integral part of a systematic review process (Johnston et al., 2019). This step was applied to two aspects of this research study: the choice of eligible journals and the choice of articles to be included. Suitable journals were selected by the first author and reviewed by the second and third authors. Initially, all articles from the chosen journals were included. Then, by process of elimination, those irrelevant to the research aim (i.e., interview articles, discussions, etc.) were excluded.
To ensure rigorous data extraction, data was first extracted by one reviewer, and an independent person verified the results for completeness and accuracy (Johnston et al., 2019). The research question served as a guide for efficient, organised data extraction (Johnston et al., 2019). Data was categorised according to the codes of interest, along with article identifiers for audit trails, such as authors, title, and aims of articles. The categorised data was based on the aim of the review (Johnston et al., 2019) and synthesised in tabular form under methods used, how these methods were used, and for what topics in the field of psychology.
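To make this extraction and categorisation step concrete, the sketch below shows one way coded article records could be logged to a spreadsheet-style evidence trail and tallied by theme. It is a minimal illustration only, with hypothetical article entries, file names, and code labels; it is not the authors' actual coding tool, which relied on manual coding in Excel with an independent co-coder.

```python
import csv
from collections import Counter

# Hypothetical coded records; in the review itself these were captured
# manually and verified by an independent co-coder.
coded_articles = [
    {"id": "A001", "journal": "Journal X", "topic": "social psychology",
     "method": "quantitative", "design": "cross-sectional",
     "sampling": "convenience", "analysis": "correlational analysis"},
    {"id": "A002", "journal": "Journal Y", "topic": "cognitive psychology",
     "method": "quantitative", "design": "experimental",
     "sampling": "not stated", "analysis": "descriptive statistics"},
    {"id": "A003", "journal": "Journal X", "topic": "health psychology",
     "method": "qualitative", "design": "phenomenology",
     "sampling": "purposive", "analysis": "thematic analysis"},
]

# Write the evidence trail in tabular form, mirroring the requirement that
# systematised-review data be produced as a table (Grant and Booth, 2009).
with open("evidence_trail.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=coded_articles[0].keys())
    writer.writeheader()
    writer.writerows(coded_articles)

# Tally how often each topic (theme) occurs across the coded sample.
topic_counts = Counter(article["topic"] for article in coded_articles)
for topic, count in topic_counts.most_common():
    print(f"{topic}: {count}")
```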
The initial search produced a total of 1,145 articles from the 5 journals identified. Inclusion and exclusion criteria resulted in a final sample of 999 articles ( Figure 2 ). Articles were co-coded into 84 codes, from which 10 themes were derived ( Table 1 ).
Figure 2 . Journal article frequency.
Table 1 . Codes used to form themes (research topics).
These 10 themes represent the topic section of our research question (Figure 3). All these topics, except for the final one, psychological practice, were found to concur with the research areas in psychology as identified by Weiten (2010). These research areas were chosen to represent the derived codes as they provided broad definitions that allowed for clear, concise categorisation of the vast amount of data. Article codes were categorised under particular themes/topics if they adhered to the research area definitions created by Weiten (2010). It is important to note that these areas of research do not refer to specific disciplines in psychology, such as industrial psychology, but to broader fields that may encompass sub-interests of these disciplines.
Figure 3 . Topic frequency (international sample).
In the case of developmental psychology, researchers conduct research into human development from childhood to old age. Social psychology includes research on behaviour governed by social drivers. Researchers in the field of educational psychology study how people learn and the best way to teach them. Health psychology aims to determine the effect of psychological factors on physiological health. Physiological psychology, on the other hand, looks at the influence of physiological aspects on behaviour. Experimental psychology is not the only theme that uses experimental research; rather, it focuses on the traditional core topics of psychology (for example, sensation). Cognitive psychology studies the higher mental processes. Psychometrics is concerned with measuring capacity or behaviour. Personality research aims to assess and describe consistency in human behaviour (Weiten, 2010). The final theme of psychological practice refers to the experiences, techniques, and interventions employed by practitioners, researchers, and academia in the field of psychology.
Articles under these themes were further subdivided into methodologies: method, sampling, design, data collection, and data analysis. The categorisation was based on information stated in the articles and not inferred by the researchers. Data were compiled into two sets of results presented in this article. The first set addresses the aim of this study from the perspective of the topics identified. The second set of results represents a broad overview of the results from the perspective of the methodology employed. The second set of results is discussed in this article, while the first set is presented in table format. The discussion thus provides a broad overview of method use in psychology (across all themes), while the table format provides readers with in-depth insight into the methods used in the individual themes identified. We believe that presenting the data from both perspectives allows readers a broad understanding of the results. Due to the large amount of information that made up our results, we followed Cichocka and Jost (2014) in simplifying our results. Please note that the numbers indicated in the tables in terms of methodology differ from the total number of articles, as some articles employed more than one method/sampling technique/design/data collection method/data analysis in their studies.
What follows are the results for what methods are used, how these methods are used, and which topics in psychology they are applied to. Percentages are reported to the second decimal in order to highlight small differences in the occurrence of methodology.
Firstly, with regard to the research methods used, our results show that researchers are more likely to use quantitative research methods (90.22%) compared to all other research methods. Qualitative research was the second most common research method but only made up about 4.79% of the general method usage. Reviews occurred almost as much as qualitative studies (3.91%), as the third most popular method. Mixed-methods research studies (0.98%) occurred across most themes, whereas multi-method research was indicated in only one study and amounted to 0.10% of the methods identified. The specific use of each method in the topics identified is shown in Table 2 and Figure 4 .
Table 2 . Research methods in psychology.
Figure 4 . Research method frequency in topics.
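As a rough illustration of how percentages such as those above can be derived when a single article may carry more than one method code (as noted earlier), the following sketch counts method occurrences rather than articles and reports each share to two decimals. The input records and resulting figures are hypothetical placeholders, not the study's data.

```python
from collections import Counter

# Hypothetical method codes per article; one article may list several methods,
# so percentages are computed over method occurrences, not over articles.
article_methods = {
    "A001": ["quantitative"],
    "A002": ["quantitative"],
    "A003": ["qualitative"],
    "A004": ["quantitative", "qualitative"],  # a combined-methodology article
    "A005": ["review"],
}

occurrences = Counter(
    method for methods in article_methods.values() for method in methods
)
total = sum(occurrences.values())

# Report each method's share of all occurrences to the second decimal.
for method, count in occurrences.most_common():
    print(f"{method}: {count / total * 100:.2f}%")
```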
Secondly, in the case of how these research methods are employed , our study indicated the following.
Sampling: 78.34% of the studies in the collected articles did not specify a sampling method. From the remainder of the studies, 13 types of sampling methods were identified. These sampling methods included broad categorisation of a sample as, for example, a probability or non-probability sample. General samples of convenience were the methods most likely to be applied (10.34%), followed by random sampling (3.51%), snowball sampling (2.73%), and purposive (1.37%) and cluster sampling (1.27%). The remainder of the sampling methods occurred to a more limited extent (0–1.0%). See Table 3 and Figure 5 for sampling methods employed in each topic.
Table 3 . Sampling use in the field of psychology.
Figure 5 . Sampling method frequency in topics.
Designs were categorised based on what the articles themselves stated. It is therefore important to note that, in the case of quantitative studies, the non-experimental category (25.55%) was often assigned because a study reported no experiment and no other indication of design, which, according to Laher (2016), is a reasonable categorisation. Non-experimental designs should thus be compared with experimental designs only in the description of the data, as the category could include correlational/cross-sectional designs that were not overtly stated by the authors. For the remainder of the research methods, “not stated” (7.12%) was assigned to articles without design types indicated.
Of the 36 identified designs, the most popular were cross-sectional (23.17%) and experimental (25.64%) designs, which concurred with the high number of quantitative studies. Longitudinal studies (3.80%), the third most popular design, were used in both quantitative and qualitative studies. Qualitative designs consisted of ethnography (0.38%), interpretative phenomenological designs/phenomenology (0.28%), and narrative designs (0.28%). Studies that employed the review method were mostly categorised as “not stated,” with the most often stated review design being systematic reviews (0.57%). The few mixed method studies employed exploratory, explanatory (0.09%), and concurrent designs (0.19%), with some studies referring to separate designs for the qualitative and quantitative methods. The one study that identified itself as a multi-method study used a longitudinal design. Please see how these designs were employed in each specific topic in Table 4 and Figure 6.
Table 4 . Design use in the field of psychology.
Figure 6 . Design frequency in topics.
Data collection and analysis: data collection included 30 methods, with the collection method most often employed being questionnaires (57.84%). The experimental task (16.56%) was the second most preferred collection method, which included established or unique tasks designed by the researchers. Cognitive ability tests (6.84%) were also regularly used, along with various forms of interviewing (7.66%). Table 5 and Figure 7 represent data collection use in the various topics. Data analysis consisted of 3,857 occurrences of data analysis categorised into approximately 188 data analysis techniques, shown in Table 6 and Figures 1–7. Descriptive statistics were the most commonly used (23.49%), along with correlational analysis (17.19%). When using a qualitative method, researchers generally employed thematic analysis (0.52%) or different forms of analysis that led to coding and the creation of themes. Review studies presented few data analysis methods, with most studies categorising their results. Mixed method and multi-method studies followed the analysis methods identified for the qualitative and quantitative studies included.
Table 5 . Data collection in the field of psychology.
Figure 7 . Data collection frequency in topics.
Table 6 . Data analysis in the field of psychology.
Results for the topics researched in psychology can be seen in the tables, as previously stated in this article. It is noteworthy that, of the 10 topics, social psychology accounted for 43.54% of the studies, with cognitive psychology the second most popular research topic at 16.92%. Each of the remaining topics occurred in only 4.0–7.0% of the articles considered. A list of the included 999 articles is available under the section “View Articles” on the following website: https://methodgarden.xtrapolate.io/. This website was created by Scholtz et al. (2019) to visually present a research framework based on this article's results.
This systematised review categorised full-length articles from five international journals across a span of 5 years to provide insight into the use of research methods in the field of psychology. The results indicated what methods are used, how these methods are being used, and for what topics (why) in the included sample of articles. The results should be seen as providing insight into method use, and by no means as a comprehensive representation of the aforementioned aim, due to the limited sample. To our knowledge, this is the first research study to address this topic in this manner. Our discussion attempts to promote a productive way forward in terms of the key results for method use in psychology, especially in the field of academia (Holloway, 2008).
With regard to the methods used, our data stayed true to the literature, finding only common research methods (Grant and Booth, 2009; Maree, 2016) that varied in the degree to which they were employed. Quantitative research was found to be the most popular method, as indicated by the literature (Breen and Darlaston-Jones, 2010; Counsell and Harlow, 2017) and by previous studies in specific areas of psychology (see Coetzee and Van Zyl, 2014). Its long history as the first research method (Leech et al., 2007) in the field of psychology, as well as researchers' current application of mathematical approaches in their studies (Toomela, 2010), might contribute to its popularity today. Whatever the case may be, our results show that, despite the growth in qualitative research (Demuth, 2015; Smith and McGannon, 2018), quantitative research remains the first choice for article publication in these journals, even though the included journals indicate openness to articles that apply any research method. This finding may be due to qualitative research still being seen as a new method (Burman and Whelan, 2011) or to reviewers' standards being higher for qualitative studies (Bluhm et al., 2011). Future research into possible bias in the publication of research methods is encouraged; additionally, further investigation with a different sample into the proclaimed growth of qualitative research may also provide different results.
Review studies were found to outnumber multi-method and mixed method studies. To this effect, Grant and Booth (2009) state that increased awareness, journal calls for contributions, and the efficiency of reviews in procuring research funds all promote their popularity. The low frequency of mixed method studies contradicts the view in the literature that it is the third most utilised research method (Tashakkori and Teddlie, 2003). Its low occurrence in this sample could be due to opposing views on mixing methods (Gunasekare, 2015), to authors preferring to publish in mixed method journals when using this method, or to its relative novelty (Ivankova et al., 2016). Despite its low occurrence, the application of the mixed methods design was methodologically clear in all cases, which was not the case for the remainder of the research methods.
Additionally, a substantial number of studies used a combination of methodologies that are not mixed or multi-method studies. According to the literature, perceived fixed boundaries are often set aside, as confirmed by this result, in order to investigate the aim of a study, which could create a new and helpful way of understanding the world (Gunasekare, 2015). According to Toomela (2010), this is not unheard of and could be considered a form of “structural systemic science,” as in the case of qualitative methodology (observation) applied in quantitative studies (experimental design), for example. Based on this result, further research into this phenomenon, as well as its implications for research methods such as multi and mixed methods, is recommended.
Discerning how these research methods were applied presented some difficulty. In the case of sampling, most studies, regardless of method, did mention some form of inclusion and exclusion criteria but no definite sampling method. This result, along with the fact that samples often consisted of students from the researchers' own academic institutions, can contribute to the literature and to debates among academics (Peterson and Merunka, 2014; Laher, 2016). Samples of convenience and students as participants especially raise questions about the generalisability and applicability of results (Peterson and Merunka, 2014). Attention to sampling is important, as inappropriate sampling can debilitate the legitimacy of interpretations (Onwuegbuzie and Collins, 2017). Future investigation into the possible implications of this reported popular use of convenience samples for the field of psychology, as well as the reasons for this use, could provide interesting insight and is encouraged by this study.
Additionally, as indicated in Table 6, articles seldom report the research designs used, which highlights the pressing issue of the lack of rigour in the included sample. Rigour with regard to the applied empirical method is imperative in promoting psychology as a science (American Psychological Association, 2020). Omitting parts of the research process in publication, when it could have been used to inform others' research skills, should be questioned, and the influence on the process of replicating results should be considered. Publications are often rejected due to a lack of rigour in the applied method and designs (Fonseca, 2013; Laher, 2016), calling for increased clarity and knowledge of method application. Replication is a critical part of any field of scientific research and requires the “complete articulation” of the study methods used (Drotar, 2010, p. 804). The lack of thorough description could be explained by the requirements of certain journals to report only on certain aspects of a research process, especially with regard to the applied design (Laher, 2016). However, naming aspects such as sampling and designs is a requirement according to the APA's Journal Article Reporting Standards (JARS-Quant) (Appelbaum et al., 2018). With very little information on how a study was conducted, authors lose a valuable opportunity to enhance research validity, enrich the knowledge of others, and contribute to the growth of psychology and methodology as a whole. In the case of this research study, it also restricted our results to only the reported samples and designs, which indicated a preference for certain designs, such as cross-sectional designs for quantitative studies.
Data collection and analysis were for the most part clearly stated. A key result was the versatile use of questionnaires. Researchers would apply a questionnaire in various ways, for example in questionnaire interviews, online surveys, and written questionnaires across most research methods. This may highlight a trend for future research.
With regard to the topics these methods were employed for, our research study found a new field named “psychological practice.” This result may reflect the growing consciousness of researchers as part of the research process (Denzin and Lincoln, 2003), psychological practice, and knowledge generation. The most popular of these topics was social psychology, which is generously covered in journals and by learned societies, a testament to the institutional support and richness social psychology enjoys in the field of psychology (Chryssochoou, 2015). The APA's perspective on 2018 trends in psychology also identifies an increased focus in psychology on how social determinants influence people's health (Deangelis, 2017).
This study was not without limitations, and the following should be taken into account. Firstly, this study used a sample of five specific journals to address the aim of the research study; despite the general aims of these journals (as stated on journal websites), this inclusion signified a bias towards the research methods published in these specific journals only and limited generalisability. A broader sample of journals over a different period of time, or a single journal over a longer period of time, might provide different results. A second limitation is the use of Excel spreadsheets and an electronic system to log articles, which was a manual process and therefore left room for error (Bandara et al., 2015). To address this potential issue, co-coding was performed to reduce error. Lastly, this article categorised data based on the information presented in the article sample; there was no interpretation of what methodology could have been applied or whether the stated methods adhered to the criteria for those methods. Thus, the large number of articles that did not clearly indicate a research method or design could influence the results of this review; however, this in itself was also a noteworthy result. Future research could review the research methods of a broader sample of journals with an interpretive review tool that increases rigour. Additionally, the authors encourage the future use of systematised review designs as a way to promote a concise procedure in applying this design.
Our research study presented the use of research methods in published articles in the field of psychology, as well as recommendations for future research based on these results. Insight was gained into the complex questions identified in the literature regarding what methods are used, how these methods are being used, and for what topics (why). This sample preferred quantitative methods, used convenience sampling, and presented a lack of rigorous accounts of the remaining methodologies. All methodologies that were clearly indicated in the sample were tabulated to allow researchers insight into the general use of methods, not only the most frequently used ones. The lack of rigorous accounts of research methods in articles was represented in depth for each step in the research process and can be of vital importance in addressing the current replication crisis within the field of psychology. Recommendations for future research aim to motivate inquiry into the practical implications of these results for psychology, for example, publication bias and the use of convenience samples.
Ethics Statement
This study was cleared by the North-West University Health Research Ethics Committee: NWU-00115-17-S1.
Author Contributions
All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Aanstoos, C. M. (2014). Psychology . Available online at: http://eds.a.ebscohost.com.nwulib.nwu.ac.za/eds/detail/detail?sid=18de6c5c-2b03-4eac-94890145eb01bc70%40sessionmgr4006&vid=1&hid=4113&bdata=JnNpdGU9ZWRzLWxpdmU%3d#AN=93871882&db=ers
American Psychological Association (2020). Science of Psychology . Available online at: https://www.apa.org/action/science/
Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., and Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: the APA Publications and Communications Board task force report. Am. Psychol. 73:3. doi: 10.1037/amp0000191
Bandara, W., Furtmueller, E., Gorbacheva, E., Miskon, S., and Beekhuyzen, J. (2015). Achieving rigor in literature reviews: insights from qualitative data analysis and tool-support. Commun. Ass. Inform. Syst. 37, 154–204. doi: 10.17705/1CAIS.03708
Barr-Walker, J. (2017). Evidence-based information needs of public health workers: a systematized review. J. Med. Libr. Assoc. 105, 69–79. doi: 10.5195/JMLA.2017.109
Bittermann, A., and Fischer, A. (2018). How to identify hot topics in psychology using topic modeling. Z. Psychol. 226, 3–13. doi: 10.1027/2151-2604/a000318
Bluhm, D. J., Harman, W., Lee, T. W., and Mitchell, T. R. (2011). Qualitative research in management: a decade of progress. J. Manage. Stud. 48, 1866–1891. doi: 10.1111/j.1467-6486.2010.00972.x
Breen, L. J., and Darlaston-Jones, D. (2010). Moving beyond the enduring dominance of positivism in psychological research: implications for psychology in Australia. Aust. Psychol. 45, 67–76. doi: 10.1080/00050060903127481
Burman, E., and Whelan, P. (2011). Problems in / of Qualitative Research . Maidenhead: Open University Press/McGraw Hill.
Chaichanasakul, A., He, Y., Chen, H., Allen, G. E. K., Khairallah, T. S., and Ramos, K. (2011). Journal of Career Development: a 36-year content analysis (1972–2007). J. Career. Dev. 38, 440–455. doi: 10.1177/0894845310380223
Chryssochoou, X. (2015). Social Psychology. Inter. Encycl. Soc. Behav. Sci. 22, 532–537. doi: 10.1016/B978-0-08-097086-8.24095-6
Cichocka, A., and Jost, J. T. (2014). Stripped of illusions? Exploring system justification processes in capitalist and post-Communist societies. Inter. J. Psychol. 49, 6–29. doi: 10.1002/ijop.12011
Clay, R. A. (2017). Psychology is More Popular Than Ever. Monitor on Psychology: Trends Report . Available online at: https://www.apa.org/monitor/2017/11/trends-popular
Coetzee, M., and Van Zyl, L. E. (2014). A review of a decade's scholarly publications (2004–2013) in the South African Journal of Industrial Psychology. SA. J. Psychol . 40, 1–16. doi: 10.4102/sajip.v40i1.1227
Counsell, A., and Harlow, L. (2017). Reporting practices and use of quantitative methods in Canadian journal articles in psychology. Can. Psychol. 58, 140–147. doi: 10.1037/cap0000074
Deangelis, T. (2017). Targeting Social Factors That Undermine Health. Monitor on Psychology: Trends Report . Available online at: https://www.apa.org/monitor/2017/11/trend-social-factors
Demuth, C. (2015). New directions in qualitative research in psychology. Integr. Psychol. Behav. Sci. 49, 125–133. doi: 10.1007/s12124-015-9303-9
Denzin, N. K., and Lincoln, Y. (2003). The Landscape of Qualitative Research: Theories and Issues , 2nd Edn. London: Sage.
Drotar, D. (2010). A call for replications of research in pediatric psychology and guidance for authors. J. Pediatr. Psychol. 35, 801–805. doi: 10.1093/jpepsy/jsq049
Dweck, C. S. (2017). Is psychology headed in the right direction? Yes, no, and maybe. Perspect. Psychol. Sci. 12, 656–659. doi: 10.1177/1745691616687747
Earp, B. D., and Trafimow, D. (2015). Replication, falsification, and the crisis of confidence in social psychology. Front. Psychol. 6:621. doi: 10.3389/fpsyg.2015.00621
Ezeh, A. C., Izugbara, C. O., Kabiru, C. W., Fonn, S., Kahn, K., Manderson, L., et al. (2010). Building capacity for public and population health research in Africa: the consortium for advanced research training in Africa (CARTA) model. Glob. Health Action 3:5693. doi: 10.3402/gha.v3i0.5693
Ferreira, A. L. L., Bessa, M. M. M., Drezett, J., and De Abreu, L. C. (2016). Quality of life of the woman carrier of endometriosis: systematized review. Reprod. Clim. 31, 48–54. doi: 10.1016/j.recli.2015.12.002
Fonseca, M. (2013). Most Common Reasons for Journal Rejections . Available online at: http://www.editage.com/insights/most-common-reasons-for-journal-rejections
Gough, B., and Lyons, A. (2016). The future of qualitative research in psychology: accentuating the positive. Integr. Psychol. Behav. Sci. 50, 234–243. doi: 10.1007/s12124-015-9320-8
Grant, M. J., and Booth, A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info. Libr. J. 26, 91–108. doi: 10.1111/j.1471-1842.2009.00848.x
Grix, J. (2002). Introducing students to the generic terminology of social research. Politics 22, 175–186. doi: 10.1111/1467-9256.00173
Gunasekare, U. L. T. P. (2015). Mixed research method as the third research paradigm: a literature review. Int. J. Sci. Res. 4, 361–368. Available online at: https://ssrn.com/abstract=2735996
Hengartner, M. P. (2018). Raising awareness for the replication crisis in clinical psychology by focusing on inconsistencies in psychotherapy Research: how much can we rely on published findings from efficacy trials? Front. Psychol. 9:256. doi: 10.3389/fpsyg.2018.00256
Holloway, W. (2008). Doing intellectual disagreement differently. Psychoanal. Cult. Soc. 13, 385–396. doi: 10.1057/pcs.2008.29
Ivankova, N. V., Creswell, J. W., and Plano Clark, V. L. (2016). “Foundations and Approaches to mixed methods research,” in First Steps in Research , 2nd Edn. K. Maree (Pretoria: Van Schaick Publishers), 306–335.
Johnson, M., Long, T., and White, A. (2001). Arguments for British pluralism in qualitative health research. J. Adv. Nurs. 33, 243–249. doi: 10.1046/j.1365-2648.2001.01659.x
Johnston, A., Kelly, S. E., Hsieh, S. C., Skidmore, B., and Wells, G. A. (2019). Systematic reviews of clinical practice guidelines: a methodological guide. J. Clin. Epidemiol. 108, 64–72. doi: 10.1016/j.jclinepi.2018.11.030
Ketchen, D. J. Jr., Boyd, B. K., and Bergh, D. D. (2008). Research methodology in strategic management: past accomplishments and future challenges. Organ. Res. Methods 11, 643–658. doi: 10.1177/1094428108319843
Ktepi, B. (2016). Data Analytics (DA) . Available online at: https://eds-b-ebscohost-com.nwulib.nwu.ac.za/eds/detail/detail?vid=2&sid=24c978f0-6685-4ed8-ad85-fa5bb04669b9%40sessionmgr101&bdata=JnNpdGU9ZWRzLWxpdmU%3d#AN=113931286&db=ers
Laher, S. (2016). Ostinato rigore: establishing methodological rigour in quantitative research. S. Afr. J. Psychol. 46, 316–327. doi: 10.1177/0081246316649121
Lee, C. (2015). The Myth of the Off-Limits Source . Available online at: http://blog.apastyle.org/apastyle/research/
Lee, T. W., Mitchell, T. R., and Sablynski, C. J. (1999). Qualitative research in organizational and vocational psychology, 1979–1999. J. Vocat. Behav. 55, 161–187. doi: 10.1006/jvbe.1999.1707
Leech, N. L., Anthony, J., and Onwuegbuzie, A. J. (2007). A typology of mixed methods research designs. Sci. Bus. Media B. V Qual. Quant 43, 265–275. doi: 10.1007/s11135-007-9105-3
Levitt, H. M., Motulsky, S. L., Wertz, F. J., Morrow, S. L., and Ponterotto, J. G. (2017). Recommendations for designing and reviewing qualitative research in psychology: promoting methodological integrity. Qual. Psychol. 4, 2–22. doi: 10.1037/qup0000082
Lowe, S. M., and Moore, S. (2014). Social networks and female reproductive choices in the developing world: a systematized review. Rep. Health 11:85. doi: 10.1186/1742-4755-11-85
Maree, K. (2016). “Planning a research proposal,” in First Steps in Research , 2nd Edn, ed K. Maree (Pretoria: Van Schaik Publishers), 49–70.
Maree, K., and Pietersen, J. (2016). “Sampling,” in First Steps in Research, 2nd Edn , ed K. Maree (Pretoria: Van Schaik Publishers), 191–202.
Ngulube, P. (2013). Blending qualitative and quantitative research methods in library and information science in sub-Saharan Africa. ESARBICA J. 32, 10–23. Available online at: http://hdl.handle.net/10500/22397 .
Nieuwenhuis, J. (2016). “Qualitative research designs and data-gathering techniques,” in First Steps in Research , 2nd Edn, ed K. Maree (Pretoria: Van Schaik Publishers), 71–102.
Nind, M., Kilburn, D., and Wiles, R. (2015). Using video and dialogue to generate pedagogic knowledge: teachers, learners and researchers reflecting together on the pedagogy of social research methods. Int. J. Soc. Res. Methodol. 18, 561–576. doi: 10.1080/13645579.2015.1062628
O'Cathain, A. (2009). Editorial: mixed methods research in the health sciences—a quiet revolution. J. Mix. Methods 3, 1–6. doi: 10.1177/1558689808326272
O'Neil, S., and Koekemoer, E. (2016). Two decades of qualitative research in psychology, industrial and organisational psychology and human resource management within South Africa: a critical review. SA J. Indust. Psychol. 42, 1–16. doi: 10.4102/sajip.v42i1.1350
Onwuegbuzie, A. J., and Collins, K. M. (2017). The role of sampling in mixed methods research enhancing inference quality. Köln Z Soziol. 2, 133–156. doi: 10.1007/s11577-017-0455-0
Perestelo-Pérez, L. (2013). Standards on how to develop and report systematic reviews in psychology and health. Int. J. Clin. Health Psychol. 13, 49–57. doi: 10.1016/S1697-2600(13)70007-3
Pericall, L. M. T., and Taylor, E. (2014). Family function and its relationship to injury severity and psychiatric outcome in children with acquired brain injury: a systematized review. Dev. Med. Child Neurol. 56, 19–30. doi: 10.1111/dmcn.12237
Peterson, R. A., and Merunka, D. R. (2014). Convenience samples of college students and research reproducibility. J. Bus. Res. 67, 1035–1041. doi: 10.1016/j.jbusres.2013.08.010
Ritchie, J., Lewis, J., and Elam, G. (2009). “Designing and selecting samples,” in Qualitative Research Practice: A Guide for Social Science Students and Researchers , 2nd Edn, ed J. Ritchie and J. Lewis (London: Sage), 1–23.
Sandelowski, M. (2011). When a cigar is not just a cigar: alternative perspectives on data and data analysis. Res. Nurs. Health 34, 342–352. doi: 10.1002/nur.20437
Sandelowski, M., Voils, C. I., and Knafl, G. (2009). On quantitizing. J. Mix. Methods Res. 3, 208–222. doi: 10.1177/1558689809334210
Scholtz, S. E., De Klerk, W., and De Beer, L. T. (2019). A data generated research framework for conducting research methods in psychological research.
Scimago Journal & Country Rank (2017). Available online at: http://www.scimagojr.com/journalrank.php?category=3201&year=2015
Scopus (2017a). About Scopus . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).
Scopus (2017b). Document Search . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).
Scott Jones, J., and Goldring, J. E. (2015). ‘I'm not a quants person’; key strategies in building competence and confidence in staff who teach quantitative research methods. Int. J. Soc. Res. Methodol. 18, 479–494. doi: 10.1080/13645579.2015.1062623
Smith, B., and McGannon, K. R. (2018). Developing rigor in quantitative research: problems and opportunities within sport and exercise psychology. Int. Rev. Sport Exerc. Psychol. 11, 101–121. doi: 10.1080/1750984X.2017.1317357
Stangor, C. (2011). Introduction to Psychology . Available online at: http://www.saylor.org/books/
Strydom, H. (2011). “Sampling in the quantitative paradigm,” in Research at Grass Roots; For the Social Sciences and Human Service Professions , 4th Edn, eds A. S. de Vos, H. Strydom, C. B. Fouché, and C. S. L. Delport (Pretoria: Van Schaik Publishers), 221–234.
Tashakkori, A., and Teddlie, C. (2003). Handbook of Mixed Methods in Social & Behavioural Research . Thousand Oaks, CA: SAGE publications.
Toomela, A. (2010). Quantitative methods in psychology: inevitable and useless. Front. Psychol. 1:29. doi: 10.3389/fpsyg.2010.00029
Truscott, D. M., Swars, S., Smith, S., Thornton-Reid, F., Zhao, Y., Dooley, C., et al. (2010). A cross-disciplinary examination of the prevalence of mixed methods in educational research: 1995–2005. Int. J. Soc. Res. Methodol. 13, 317–328. doi: 10.1080/13645570903097950
Weiten, W. (2010). Psychology Themes and Variations . Belmont, CA: Wadsworth.
Keywords: research methods, research approach, research trends, psychological research, systematised review, research designs, research topic
Citation: Scholtz SE, de Klerk W and de Beer LT (2020) The Use of Research Methods in Psychological Research: A Systematised Review. Front. Res. Metr. Anal. 5:1. doi: 10.3389/frma.2020.00001
Received: 30 December 2019; Accepted: 28 February 2020; Published: 20 March 2020.
Copyright © 2020 Scholtz, de Klerk and de Beer. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Salomé Elizabeth Scholtz, 22308563@nwu.ac.za
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
APA is the leading scientific and professional organization representing psychology in the United States, with more than 157,000 researchers, educators, clinicians, consultants, and students as its members.
Our mission is to promote the advancement, communication, and application of psychological science and knowledge to benefit society and improve lives. We do this by:
- Utilizing psychology to make a positive impact on critical societal issues.
- Elevating the public’s understanding of, regard for, and use of psychology.
- Preparing the discipline and profession of psychology for the future.
- Strengthening APA’s standing as an authoritative voice for psychology.
APA Strategic Plan
APA is positioning our field to play a leading role in addressing the grand challenges of today and the future. The strategic plan, originally adopted in February 2019 by APA's Council of Representatives, enables us to focus the association's efforts toward maximizing the impact we can have on complex issues facing the field of psychology and broader society. The plan was revised and adopted by Council in February 2024.
View the Strategic Plan
Equity, Diversity, and Inclusion
APA is committed to infusing the principles of equity, diversity, and inclusion (EDI) into all aspects of the work we do. Our EDI framework is a foundational tool to create substantive, transformative, and sustainable change within APA, across the psychology profession, and throughout broader society. The framework was collaboratively shaped by our volunteer leaders, governance, and staff.
Learn more about APA's EDI efforts
Leadership Development Institute at APA
The Leadership Development Institute at APA is the new central hub for APA’s leadership development, mentoring, and training programs. The Institute is more than a collection of basic and bespoke training and mentorship programs - it's a powerful force for change, dedicated to uniting APA's existing leadership initiatives, pioneering groundbreaking new programming, and fostering meaningful mentorship and training opportunities under one dynamic banner.
Discover what we do
Governance and senior staff
Volunteer governance members play a key role in the direction and completion of APA's advocacy, publishing, member service, and more. These include:
- Council of Representatives, which has the sole authority to approve policy and appropriate the association's revenue.
- Board of Directors, elected by the membership to be the administrative agent of the Council of Representatives.
- APA President, elected annually by the membership to serve as the face of the association.
- Committees, boards, and task forces, which focus on particular issues in the field.
APA's daily operations are overseen by its senior staff at APA headquarters in Washington, D.C.
Definition of “Psychology”
Psychology is a diverse discipline, grounded in science, but with nearly boundless applications in everyday life. Some psychologists do basic research, developing theories and testing them through carefully honed research methods involving observation, experimentation, and analysis. Other psychologists apply the discipline’s scientific knowledge to help people, organizations, and communities function better.
As psychological research yields new information, whether it’s improved interventions to treat depression or how humans interact with machines, these findings become part of the discipline’s body of knowledge and are applied in work with patients and clients, in schools, in corporate settings, within the judicial system, even in professional sports.
Psychology is a doctoral-level profession. Psychologists study both normal and abnormal functioning and treat patients with mental and emotional problems. They also study and encourage behaviors that build wellness and emotional resilience. Today, as the link between mind and body is well-recognized, more and more psychologists are teaming with other health care providers to provide whole-person health care for patients.
Hello, you are using a very old browser that's not supported. Please, consider updating your browser to a newer version.
Templeton.org is in English. Only a few pages are translated into other languages.
Usted está viendo Templeton.org en español. Tenga en cuenta que solamente hemos traducido algunas páginas a su idioma. El resto permanecen en inglés.
Você está vendo Templeton.org em Português. Apenas algumas páginas do site são traduzidas para o seu idioma. As páginas restantes são apenas em Inglês.
أنت تشاهد Templeton.org باللغة العربية. تتم ترجمة بعض صفحات الموقع فقط إلى لغتك. الصفحات المتبقية هي باللغة الإنجليزية فقط.
Reviewing six decades of research into the meaning, development, and benefits of purpose in life
Modern scientific research on human purpose has its origins in, of all places, a Holocaust survivor's experiences in a series of Nazi concentration camps. While a prisoner at Theresienstadt, Auschwitz, and two satellite camps of Dachau, Viennese psychologist Viktor Frankl noticed that fellow prisoners who had a sense of purpose showed greater resilience to the torture, slave labor, and starvation rations to which they were subjected. Writing of his experience later, he found a partial explanation in a quote from Friedrich Nietzsche: "Those who have a 'why' to live can bear almost any 'how.'" Frankl's 1959 book Man's Search for Meaning, which proved seminal in the field, crystallized his convictions about the crucial role of meaning and purpose. A decade later, Frankl would assist in the development of the first and most widely used standardized survey of purpose, the 21-item "Purpose in Life" test.
As part of its ongoing interest in increasing understanding of character and virtue, the John Templeton Foundation commissioned a review of more than six decades of the literature surrounding the nature of human purpose. Covering more than 120 publications tracing back to Frankl’s work, the review examines six core questions relating to the definition, measurement, benefits, and development of purpose.
Psychology of Purpose: What Is Purpose and How Do You Measure It?
In psychological terms, a consensus definition of purpose has emerged in the literature: purpose is a stable and generalized intention to accomplish something that is both personally meaningful and leads to productive engagement with some aspect of the world beyond the self. Not all goals or personally meaningful experiences contribute to purpose, but at the intersection of goal orientation, personal meaningfulness, and a focus beyond the self, a distinct conception of purpose emerges.
Studies and surveys investigating individual sources of purpose in life cite examples ranging from personal experience (being inspired by a caring teacher) to concern for affairs far removed from our immediate circumstances (becoming an activist after learning about sweatshop conditions in another country). Most world religions, as well as many secular systems of thought, also offer their adherents well-developed guidelines for developing purpose in life. Love of friends and family and the desire for meaningful work are also common sources of purpose.
Over the past few decades, psychologists and sociologists have developed a host of assessments that touch on people’s senses of purpose including the Life Regard Index, the Purpose in Life subscale of the Psychological Scales of Well-being, the Meaning in Life questionnaire, the Existence Subscale of the Purpose in Life Test, the Revised Youth Purpose Survey, the Claremont Purpose Scale and the Life Purpose Questionnaire, among others.
The conclusion that emerges from work with these tests and surveys, interviews, definitions, and meta-analyses is, roughly, that Frankl's observation was correct: having a purpose in life is associated with a tremendous number of benefits, ranging from a subjective sense of happiness to lower levels of stress hormones. A 2004 study found that highly purposeful older women had lower cholesterol, were less likely to be overweight, and had lower levels of inflammatory response, while another from 2010 found that individuals who reported higher purpose scores were less likely to be diagnosed with mild cognitive impairment or Alzheimer's disease. The vast majority of these noted benefits, however, are currently only correlations; in many cases it is not clear whether having a strong sense of purpose in life causes the benefits or whether people experiencing the benefits are simply better positioned to develop a sense of purpose.
Psychology of Purpose: Interventions
Potential interventions to increase purpose and its benefits have focused on the formative years of late youth, where studies have looked at the benefits of supportive mentors and of practices such as gratitude journaling on purpose in life. A 2009 study followed 89 children and adolescents who were assigned to write and deliver gratitude letters to people they felt had blessed them. Compared with a control group, participants who started with lower levels of positive affect and gratitude showed significantly higher gratitude and positive affect for as long as two months after delivering their letters.
This result becomes even more promising in light of a series of four studies in 2014 which concluded that even inducing a temporary purpose-mindset improved academic outcomes, including self-regulation, persistence, grade point average, and the amount of time students were willing to spend studying for tests and completing homework.
Psychology of Purpose: The Arc of Purpose
About one in five high school students and one in three college students report having a clear purpose in life. Those rates drop slightly into midlife and more precipitously into later adulthood. Some of these changes make sense in light of the future-oriented nature of purpose. For young people from late childhood onward, a sense of searching for a purpose is associated with a sense of life satisfaction — but only until middle age, when unending purpose-seeking may carry connotations of immaturity. One study, however, explored an interesting exception to the general decline of purpose-seeking: compared to other adults, “9-enders” (individuals ending a decade of life, at ages 29, 39, 49, etc.) tend to focus more on aging and meaning, and consequently, they are more likely to report searching for purpose or experiencing a crisis of meaning.
In mid-life, parenting and other forms of caregiving become a clear source of purpose and meaning for many. Interestingly, studies in 2006 and 1989 showed that, although parents had a stronger sense of meaning in their lives than non-parents, they reported feeling less happy — a reflection of the ways that pursuing one’s purpose, especially in highly demanding seasons, can still be difficult, discouraging, and stressful.
A sense of purpose in one’s career is correlated with both greater satisfaction at work as well as better work-related outputs. In a 2001 study of service workers, researchers indicated that some hospital cleaning staff considered themselves “mere janitors” while others thought of themselves as part of the overall team that brought healing to patients. These groups of individuals performed the same basic tasks, but they thought very differently about their sense of purpose in the organizations where they worked. Not surprisingly, the workers who viewed their role as having a healing function were more satisfied with their jobs, spent more time with patients, worked more closely with doctors and nurses, and found more meaning in their jobs.
In the later stages of life, common adult sources of purpose like fulfillment in one’s career or caregiving for others are less accessible — but maintaining a strong sense of purpose is associated with a host of positive attributes at these ages. Compared to others, older adults with purpose are more likely to be employed, have better health, have a higher level of education, and be married.
Psychology of Purpose Around the Globe
Although the majority of sociological work on purpose in life has focused on people in Western, affluent societies, the literature contains a few interesting cross-cultural results that hint at how approaches to and benefits from purpose in life might differ around the world. In Korea, for instance, youth were shown to view purpose less as an individual pursuit and more as a collective matter, while explorations of Chinese concepts of purpose indicate that one's sense of purpose is divided into professional, moral, and social purpose.
Research on the psychology of purpose among people of different socioeconomic backgrounds suggests that those in challenging circumstances are likely to have a difficult time discovering and pursuing personally meaningful aims. This finding fits well with psychologist A. H. Maslow's famous hierarchy of needs, which suggests that people must meet basic needs for things like food, shelter, and safety before they are easily motivated to pursue aims like self-actualization.
However, several studies also suggest that purpose can emerge in difficult circumstances and may serve as an important form of protection, as in a study showing that a sense of life purpose buffered African-American youth against the negative experiences associated with growing up in more challenging environments.
Indeed, as Viktor Frankl argued — based in part on what he had observed first-hand — experiencing adversity might actually contribute to the development of a purpose in life.
STILL CURIOUS?
Download the full research review on the psychology of purpose.
Discover our other research papers on discoveries. Explore topics such as:
- intellectual humility
- positive neuroscience
- benefits of forgiveness
- science of free will
- science of generosity
Research Hypothesis in Psychology: Types & Examples
Saul McLeod, PhD
Editor-in-Chief for Simply Psychology
BSc (Hons) Psychology, MRes, PhD, University of Manchester
Saul McLeod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.
Olivia Guy-Evans, MSc
Associate Editor for Simply Psychology
BSc (Hons) Psychology, MSc Psychology of Education
Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.
A research hypothesis (plural: hypotheses) is a specific, testable prediction about the anticipated results of a study, established at its outset. It is a key component of the scientific method .
Hypotheses connect theory to data and guide the research process towards expanding scientific understanding.
Some key points about hypotheses:
- A hypothesis expresses an expected pattern or relationship. It connects the variables under investigation.
- It is stated in clear, precise terms before any data collection or analysis occurs. This makes the hypothesis testable.
- A hypothesis must be falsifiable. It should be possible, even if unlikely in practice, to collect data that disconfirms rather than supports the hypothesis.
- Hypotheses guide research. Scientists design studies to explicitly evaluate hypotheses about how nature works.
- For a hypothesis to be valid, it must be testable against empirical evidence. The evidence can then confirm or disprove the testable predictions.
- Hypotheses are informed by background knowledge and observation, but go beyond what is already known to propose an explanation of how or why something occurs.
Predictions typically arise from a thorough knowledge of the research literature, curiosity about real-world problems or implications, and integrating this to advance theory. They build on existing literature while providing new insight.
Types of Research Hypotheses
Alternative Hypothesis
The research hypothesis is often called the alternative or experimental hypothesis in experimental research.
It typically suggests a potential relationship between two key variables: the independent variable, which the researcher manipulates, and the dependent variable, which is measured based on those changes.
The alternative hypothesis states a relationship exists between the two variables being studied (one variable affects the other).
An experimental hypothesis predicts what change(s) will occur in the dependent variable when the independent variable is manipulated.
It states that the results are not due to chance and are significant in supporting the theory being investigated.
The alternative hypothesis can be directional, indicating a specific direction of the effect, or non-directional, suggesting a difference without specifying its nature. It’s what researchers aim to support or demonstrate through their study.
Null Hypothesis
The null hypothesis states no relationship exists between the two variables being studied (one variable does not affect the other). There will be no changes in the dependent variable due to manipulating the independent variable.
It states results are due to chance and are not significant in supporting the idea being investigated.
The null hypothesis, positing no effect or relationship, is a foundational contrast to the research hypothesis in scientific inquiry. It establishes a baseline for statistical testing, promoting objectivity by initiating research from a neutral stance.
Many statistical methods are tailored to test the null hypothesis, determining the likelihood of observed results if no true effect exists.
This dual-hypothesis approach provides clarity, ensuring that research intentions are explicit, and fosters consistency across scientific studies, enhancing the standardization and interpretability of research outcomes.
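To make the null/alternative contrast concrete, here is a minimal sketch in Python of a null hypothesis significance test. It is not drawn from any study discussed here: the group scores and the 0.05 significance level are invented for illustration, and the independent-samples t-test from SciPy is just one of the many statistical methods built around the null hypothesis.

```python
# Minimal sketch: null hypothesis significance testing with SciPy.
# All scores below are fabricated purely for illustration.
from scipy import stats

group_a = [78, 82, 88, 75, 90, 85, 79, 84]   # hypothetical scores, condition A
group_b = [72, 75, 80, 70, 77, 74, 73, 78]   # hypothetical scores, condition B

# H0 (null): the population means of the two groups do not differ.
# H1 (alternative): the population means differ.
t_stat, p_value = stats.ttest_ind(group_a, group_b)

alpha = 0.05  # conventional significance level
if p_value < alpha:
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}: reject H0")
else:
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}: fail to reject H0")
```

The p-value here is the probability of observing data at least this extreme if the null hypothesis were true; a small p-value licenses rejecting H0, not "proving" H1.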
Nondirectional Hypothesis
A non-directional hypothesis, also known as a two-tailed hypothesis, predicts that there is a difference or relationship between two variables but does not specify the direction of this relationship.
It merely indicates that a change or effect will occur without predicting which group will have higher or lower values.
For example, “There is a difference in performance between Group A and Group B” is a non-directional hypothesis.
Directional Hypothesis
A directional (one-tailed) hypothesis predicts the nature of the effect of the independent variable on the dependent variable. It predicts in which direction the change will take place (e.g., greater, smaller, more, or less).
It specifies whether one variable is greater, lesser, or different from another, rather than just indicating that there’s a difference without specifying its nature.
For example, “Exercise increases weight loss” is a directional hypothesis.
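The directional/non-directional distinction maps directly onto one-tailed versus two-tailed statistical tests. The sketch below is illustrative only: the weight-loss numbers are invented, and it assumes SciPy 1.6 or later, where ttest_ind accepts an alternative argument.

```python
# Sketch: the same invented data tested with a non-directional (two-tailed)
# alternative and with a directional (one-tailed) alternative.
from scipy import stats

exercise_group = [2.1, 3.4, 2.8, 4.0, 3.1, 2.5, 3.8, 2.9]   # weight loss (kg), fabricated
control_group  = [1.0, 1.8, 2.2, 1.5, 2.0, 1.2, 1.9, 1.6]

# Non-directional: "the groups differ" (direction unspecified)
two_tailed = stats.ttest_ind(exercise_group, control_group, alternative="two-sided")

# Directional: "the exercise group loses more weight than the control group"
one_tailed = stats.ttest_ind(exercise_group, control_group, alternative="greater")

print(f"two-tailed p = {two_tailed.pvalue:.4f}")
print(f"one-tailed  p = {one_tailed.pvalue:.4f}")
```

When the observed effect lies in the predicted direction, the one-tailed p-value is half the two-tailed value, which is why a directional hypothesis should only be used when prior evidence justifies the predicted direction.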
Falsifiability
The Falsification Principle, proposed by Karl Popper, is a way of demarcating science from non-science. It suggests that for a theory or hypothesis to be considered scientific, it must be testable and refutable.
Falsifiability emphasizes that scientific claims shouldn’t just be confirmable but should also have the potential to be proven wrong.
It means that there should exist some potential evidence or experiment that could prove the proposition false.
No matter how many confirming instances exist for a theory, it takes only one counter-observation to falsify it. For example, the hypothesis that "all swans are white" can be falsified by observing a single black swan.
For Popper, science should attempt to disprove a theory rather than attempt to continually provide evidence to support a research hypothesis.
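The logical asymmetry Popper describes, where confirmations accumulate but a single counter-observation falsifies, can be expressed in a few lines of code. The swan observations below are obviously made up; the point is only the structure of the check.

```python
# A universal claim ("all swans are white") is falsified by one counterexample,
# no matter how many confirming observations precede it.
observed_swans = ["white"] * 10_000 + ["black"]   # fabricated observations

falsified = any(colour != "white" for colour in observed_swans)
print("Claim 'all swans are white' falsified:", falsified)   # True
```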
Can a Hypothesis be Proven?
Hypotheses make probabilistic predictions. They state the expected outcome if a particular relationship exists. However, a study result supporting a hypothesis does not definitively prove it is true.
All studies have limitations. There may be unknown confounding factors or issues that limit the certainty of conclusions. Additional studies may yield different results.
In science, hypotheses can realistically only be supported with some degree of confidence, not proven. The process of science is to incrementally accumulate evidence for and against hypothesized relationships in an ongoing pursuit of better models and explanations that best fit the empirical data. But hypotheses remain open to revision and rejection if that is where the evidence leads.
- Disproving a hypothesis is definitive. Solid disconfirmatory evidence will falsify a hypothesis and require altering or discarding it based on the evidence.
- However, confirming evidence is always open to revision. Other explanations may account for the same results, and additional or contradictory evidence may emerge over time.
We can never prove the alternative hypothesis with 100% certainty. Instead, we test whether we can disprove, or reject, the null hypothesis.
If we reject the null hypothesis, this does not prove that the alternative hypothesis is correct, but it does lend support to the alternative/experimental hypothesis.
Upon analysis of the results, an alternative hypothesis can be rejected or supported, but it can never be proven to be correct. We must avoid any reference to results proving a theory as this implies 100% certainty, and there is always a chance that evidence may exist which could refute a theory.
How to Write a Hypothesis
- Identify the variables. The researcher manipulates the independent variable; the dependent variable is the measured outcome.
- Operationalize the variables being investigated. Operationalization means making the variables physically measurable or testable, e.g., if you are studying aggression, you might count the number of punches given by participants.
- Decide on a direction for your prediction. If there is evidence in the literature to support a specific effect of the independent variable on the dependent variable, write a directional (one-tailed) hypothesis. If the findings in the literature are limited or ambiguous, write a non-directional (two-tailed) hypothesis.
- Make it testable. Ensure your hypothesis can be tested through experimentation or observation. It should be possible to prove it false (principle of falsifiability).
- Use clear and concise language. A strong hypothesis is concise (typically one to two sentences long) and formulated in clear, straightforward language, ensuring it is easily understood and testable.
Consider a hypothesis many teachers might subscribe to: students work better on Monday morning than on Friday afternoon (IV = day of the week, DV = standard of work).
Now, if we decide to study this by giving the same group of students a lesson on a Monday morning and a Friday afternoon and then measuring their immediate recall of the material covered in each session, we would end up with the following:
- The alternative hypothesis states that students will recall significantly more information on a Monday morning than on a Friday afternoon.
- The null hypothesis states that there will be no significant difference in the amount recalled on a Monday morning compared to a Friday afternoon. Any difference will be due to chance or confounding factors.
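As a sketch of how this worked example might be analysed, the code below treats the Monday/Friday design as a repeated-measures (paired) comparison with a directional alternative. The recall scores are fabricated, and the choice of a paired t-test and SciPy 1.6+'s alternative parameter are assumptions made for illustration, not the only defensible analysis.

```python
# Sketch: the Monday-morning vs. Friday-afternoon example as a paired design.
# The same ten (hypothetical) students are measured in both sessions.
from scipy import stats

monday_recall = [14, 17, 15, 18, 16, 13, 19, 15, 16, 17]   # items recalled, fabricated
friday_recall = [12, 15, 14, 15, 13, 12, 16, 14, 13, 15]

# Directional alternative (one-tailed): recall is greater on Monday morning.
result = stats.ttest_rel(monday_recall, friday_recall, alternative="greater")

alpha = 0.05
decision = "reject H0" if result.pvalue < alpha else "fail to reject H0"
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f} -> {decision}")
```

Rejecting the null here would support, but not prove, the teachers' hypothesis, for exactly the reasons discussed above.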
More Examples
- Memory : Participants exposed to classical music during study sessions will recall more items from a list than those who studied in silence.
- Social Psychology : Individuals who frequently engage in social media use will report higher levels of perceived social isolation compared to those who use it infrequently.
- Developmental Psychology : Children who engage in regular imaginative play have better problem-solving skills than those who don’t.
- Clinical Psychology : Cognitive-behavioral therapy will be more effective in reducing symptoms of anxiety over a 6-month period compared to traditional talk therapy.
- Cognitive Psychology : Individuals who multitask between various electronic devices will have shorter attention spans on focused tasks than those who single-task.
- Health Psychology : Patients who practice mindfulness meditation will experience lower levels of chronic pain compared to those who don’t meditate.
- Organizational Psychology : Employees in open-plan offices will report higher levels of stress than those in private offices.
- Behavioral Psychology : Rats rewarded with food after pressing a lever will press it more frequently than rats who receive no reward.
10 Things We Can All Learn From Psychology
Why should you study psychology? There are plenty of great reasons to learn about psychology, even if you are not a psychology major or do not plan to work in a psychology-related profession. Psychology is all around you and touches on every aspect of your life: who you are now, how you will be in the future, and how you interact with family, friends, and strangers are all things that psychology can help you better understand. Here are 10 reasons we think everyone should learn at least a little bit about psychology.
Understand Yourself Better
As you learn about how development occurs, personality forms, and factors like society and culture impact behavior, you may find yourself gaining a deeper understanding of many influences that have impacted your own life.
Learn About Research Methods
Having a basic understanding of psychological research methods can help you better understand some of the many claims you’ll encounter in books, magazines, television shows, and movies. Becoming a better-informed consumer of psychology means that you will be equipped to sort out the truth from the fiction surrounding many pop psychology myths.
Improve Your Understanding of Others
The next time someone behaves in a certain way, you may be better able to understand the influences and motivations behind their actions.
Become a Better Communicator
Studying subjects such as emotion, language, and body language can help you fine-tune your interpersonal communication skills. By learning more about these things, you can gain a greater understanding of other people and what they are trying to say.
Develop Critical Thinking Skills
As you study psychology, you will learn more about topics such as the scientific method, decision-making, and problem-solving, all of which can strengthen your critical thinking skills across a variety of issues.
Help You in Your Future Career
Sure, there are plenty of exciting careers in psychology that you might want to explore, but studying the subject can help you in many other professions as well. For example, if you want to become a business manager, understanding human behavior can improve your ability to manage and interact with your employees.
Learn About Human Development
Understanding how people change and grow throughout the lifespan can make it easier to understand the children in your life, as well as your aging parents. It can also shine a light on your own experiences as you encounter different challenges and opportunities as you age.
Complement Other Areas of Study
Because different areas of psychology encompass a range of topics including philosophy, biology, and physiology, studying the subject can help you gain a richer understanding of these related areas.
Develop Insight Into Mental Illness
While you might not be interested in becoming a psychotherapist, studying psychology can help you better understand how psychological conditions are diagnosed and treated.
You can also discover how mental wellness can be enhanced, how to reduce stress, how to boost memory, and how to live a happier, healthier life.
Can Be Fun and Fascinating
From intriguing optical illusions that reveal the inner workings of the brain to shocking experiments that expose how far people will go to obey an authority figure, there is always something amazing and even downright astonishing to learn about the human mind and behavior.
By Kendra Cherry, MSEd. Kendra Cherry is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."