Pediaa.Com


Difference Between Conceptual and Empirical Research

The main difference between conceptual and empirical research is that conceptual research involves abstract ideas and concepts, whereas empirical research involves research based on observation, experiments and verifiable evidence.

Conceptual research and empirical research are two ways of doing scientific research. These are two opposing types of research frameworks since conceptual research doesn’t involve any experiments and empirical research does.

Key Areas Covered

1. What is Conceptual Research – Definition, Characteristics, Uses
2. What is Empirical Research – Definition, Characteristics, Uses
3. What is the Difference Between Conceptual and Empirical Research – Comparison of Key Differences



What is Conceptual Research?

Conceptual research is a type of research that deals with abstract ideas or concepts. It does not involve practical experimentation; instead, it typically involves observing and analyzing information already available on a given topic. Philosophical research is a good example of conceptual research.

Conceptual research can be used to solve real-world problems. Conceptual frameworks, which are analytical tools researchers use in their studies, are based on conceptual research. Furthermore, these frameworks help to make conceptual distinctions and organize ideas researchers need for research purposes.


Figure 1: Conceptual Framework

In simple words, a conceptual framework is the researcher’s synthesis of the literature (previous research studies) on how to explain a particular phenomenon. It explains the actions required in the course of the study based on the researcher’s observations on the subject of research as well as the knowledge gathered from previous studies.

What is Empirical Research?

Empirical research is research that uses empirical evidence, that is, evidence verifiable by observation or experience rather than theory or pure logic. Thus, empirical research refers to studies whose conclusions are based on empirical evidence. Moreover, the phenomena that empirical research studies examine are observable and measurable.

Empirical evidence can be gathered through qualitative or quantitative research studies. Qualitative research methods gather non-numerical or non-statistical data; these studies help to understand the underlying reasons, opinions, and motivations behind something, as well as to uncover trends in thought and opinion. Quantitative research studies, on the other hand, gather statistical data and can quantify behaviours, opinions, or other defined variables. Moreover, a researcher can use a combination of quantitative and qualitative methods to answer the research questions.

Difference Between Conceptual and Empirical Research

Figure 2: Empirical Research Cycle

A.D. de Groot, a famous psychologist, came up with a cycle (figure 2) to explain the empirical research process. This cycle has five steps, each as important as the others: observation, induction, deduction, testing, and evaluation.

Conceptual research is a type of research that is generally related to abstract ideas or concepts, whereas empirical research is any research study whose conclusions are drawn from evidence verifiable by observation or experience rather than theory or pure logic.

Conceptual research involves abstract ideas and concepts; however, it doesn’t involve any practical experiments. Empirical research, on the other hand, involves phenomena that are observable and measurable.

Type of Studies

Philosophical research studies are examples of conceptual research studies, whereas empirical research includes both quantitative and qualitative studies.

The main difference between conceptual and empirical research is that conceptual research involves abstract ideas and concepts whereas empirical research involves research based on observation, experiments and verifiable evidence.


Image Courtesy:

1. “APM Conceptual Framework” by LarryDragich (CC BY-SA 3.0) via Commons Wikimedia
2. “Empirical Cycle” by TesseUndDaan, derivative work by Beao (CC BY 3.0) via Commons Wikimedia


About the Author: Hasa

Hasanthi is a seasoned content writer and editor with over 8 years of experience. Armed with a BA degree in English and a knack for digital marketing, she explores her passions for literature, history, culture, and food through her engaging and informative writing.



Difference Wiki

Conceptual Research vs. Empirical Research: What's the Difference?


ConductScience

Conceptual Research vs. Empirical Research

Conceptual Research

Conceptual research is a technique in which the investigation is carried out by observing and analyzing already available information on a given topic. It does not involve any practical experiments; rather, it is concerned with abstract concepts or ideas. Philosophers have long used conceptual research to develop new theories or to interpret existing theories in a different light.

It does not involve practical experimentation but instead depends on analyzing available information on a given theme. Conceptual research has been widely used in philosophy to develop new theories, counter existing theories, or interpret existing theories differently.

Today, conceptual research is used to answer business questions and solve real-world problems. Researchers use analytical tools called conceptual frameworks to make conceptual distinctions and organize the ideas required for research purposes.

Conceptual Research Framework

A conceptual research framework is built using existing literature and studies from which inferences can be drawn. It represents a researcher’s synthesis of past research and related work and explains the phenomenon under study. The study is conducted to reduce the existing knowledge gap on a specific topic and to make relevant and reliable information available.

The following steps can be taken to make a conceptual research framework:

Define a topic for research

The first step is to define the subject of your research. Most researchers choose a topic related to their field of expertise.

Collect and organize relevant research

Because conceptual research depends on pre-existing studies and literature, researchers must collect all relevant information on their topic. It is important to use reliable sources, such as data from scientific journals and well-regarded papers. Since conceptual research does not use experimentation and tests, the importance of analyzing reliable, fact-based information is heightened.

Identify variables for the research

The next step is to choose the relevant variables for the research. These variables are the yardsticks by which inferences will be drawn. They give new scope to the inquiry and help identify how different variables may influence the subject of research.

Create the framework

The last step is to create the research framework using the relevant literature, variables, and other significant material.

Advantages of Conceptual Research

It requires fewer resources than other types of market research in which practical experimentation is required, saving time and money.

It is convenient, as this form of investigation requires only the assessment of existing literature.

Disadvantages of Conceptual Research

Conclusions drawn from existing literature rather than from experimentation and observation are less fact-based and may not necessarily be considered reliable.

Philosophical theories are often countered or revised precisely because their conclusions are drawn from existing writings rather than from practical experimentation.

Empirical Research

Empirical research is based on observed and measured phenomena; it derives knowledge from actual experience rather than from theory or belief. How do you know a study is empirical? Examine the subheadings inside the article, book, or report and look for a description of the research methodology. Ask yourself: could I recreate this study and test these results?

Key characteristics to look for:

  • Specific research questions to be answered
  • Definition of the population, behavior, or phenomena being studied
  • Description of the methods used to study the population or phenomena, including aspects such as selection criteria, controls, and testing instruments

Empirical Research Framework

Since empirical research is based on observation and captured experience, it is critical to plan the steps of the experiment and how it will be analyzed. This planning enables the researcher to resolve issues or obstacles that arise during the experiment.

  • Define your purpose for this research:

This is the step where the researcher must answer questions such as: What exactly do I want to find out? What is the problem statement? Are there any issues with the availability of knowledge, data, time, or resources? Will this research be more useful than it costs? Before going ahead, the researcher should define the purpose of the investigation and plan how to carry out the subsequent tasks.

  • Supporting theories and relevant literature:

The researcher should find out whether any existing theories can be applied to the research problem and whether any of them can help support the findings. Reviewing the relevant literature will help the researcher discover whether others have studied the problem before. The researcher will also need to establish assumptions and find out whether there is any history concerning the research problem.

  • Creation of Hypothesis and measurement:

Before starting the actual research, the researcher must form a working hypothesis or predict the probable result. The researcher has to establish the variables, choose the environment for the research, and work out how the variables relate to one another. The researcher will also need to define the units of measurement and an acceptable margin of error, and determine whether the chosen measurement will be accepted by others.

  • Methodology and data collection:

In this step, the researcher defines a strategy for conducting the investigation and sets up experiments to collect the data that can test the hypothesis. The researcher chooses whether to use an experimental or non-experimental method; the research design will vary depending on the field in which the research is conducted. The researcher must also identify parameters that affect the validity of the research design. Data collection is done by choosing samples appropriate to the research question, using one of the many available sampling strategies. Once data collection is complete, the researcher has empirical data that must be analyzed.

  • Data Analysis and result:

Data analysis can be done in two ways, qualitatively and quantitatively. The researcher needs to decide whether a qualitative method, a quantitative method, or a combination of both is required. The analysis of the data reveals whether the hypothesis is supported or rejected; analyzing this data is the most important part of supporting the hypothesis.
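To make the quantitative route described above concrete, here is a minimal sketch with entirely hypothetical data: a researcher compares a treatment group with a control group by computing a two-sample test statistic to judge whether the working hypothesis ("the treatment raises scores") is supported. The `welch_t` helper and the score lists are illustrative assumptions, not part of any study cited here:

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples (unequal variances allowed)."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    standard_error = math.sqrt(var_a / len(sample_a) + var_b / len(sample_b))
    return (mean_a - mean_b) / standard_error

# Hypothetical scores from a control group and a treatment group.
control = [72, 75, 70, 74, 73, 71]
treatment = [78, 80, 77, 82, 79, 81]

t = welch_t(treatment, control)
print(round(t, 2))  # prints 6.48: a large positive t favors the working hypothesis
```

In practice the statistic would be compared against a critical value (or converted to a p-value) before declaring the hypothesis supported or rejected.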

A report is then written up with the findings of the research. The researcher can present the theories and literature that support the investigation, and can make recommendations or suggestions for further research on the subject.

Advantages of empirical research

  • Empirical research aims to discover the meaning behind a particular phenomenon; in other words, it looks for answers to how and why something works the way it does. 
  • By recognizing why something happens, it is possible to replicate or prevent similar events. 
  • The flexibility of the research allows the researchers to adjust certain aspects of the research and adapt them to new objectives. 
  • It is more reliable because it represents real-life experience rather than mere theory. 
  • Data collected through empirical research may be less biased because the researcher is present during the collection process. In contrast, it is impossible to verify the accuracy of the data in non-empirical research.

Disadvantages of empirical research

  • It can be time-consuming, depending on the research subject chosen. 
  • It is not a cost-effective way of collecting data in most cases because of the potentially expensive methods of data gathering; it may also require traveling between numerous locations. 
  • A lack of evidence or of research subjects may mean the required result cannot be obtained. A small sample size prevents generalization because it may not be enough to represent the target population.
  • It is not easy to obtain data on sensitive topics, and researchers may require participants’ consent to use the data.

Difference Between Conceptual and Empirical Research

Conceptual research and empirical research are two ways of doing scientific research. They are two opposing research frameworks, since conceptual research does not involve any experiments and empirical research does.

Conceptual research involves abstract thoughts and ideas; however, it does not involve any experiments or tests. Empirical research, on the other hand, involves phenomena that are observable and measurable.

  • Type of Studies:

Philosophical research studies are examples of conceptual research, while empirical research includes both quantitative and qualitative studies.

The major difference between conceptual and empirical research is that conceptual research involves abstract thoughts and ideas, whereas empirical research involves research based on observation, experiments, and verifiable evidence.




Conceptual vs. Empirical

What's the Difference?

Conceptual and empirical are two different approaches used in research and analysis. Conceptual refers to ideas, theories, and concepts that are based on abstract thinking and reasoning. It involves developing a theoretical framework and understanding the relationships between different variables or concepts. On the other hand, empirical refers to the collection and analysis of data through observation or experimentation. It involves gathering real-world evidence and using statistical methods to draw conclusions. While conceptual research focuses on developing theories and understanding concepts, empirical research focuses on testing and validating those theories through data analysis. Both approaches are important in their own ways and often complement each other in research studies.

| Attribute        | Conceptual                          | Empirical                              |
|------------------|-------------------------------------|----------------------------------------|
| Definition       | Abstract or theoretical             | Based on observation or experience     |
| Origin           | Ideas, concepts, or theories        | Real-world data or evidence            |
| Subjectivity     | Can be subjective or interpretive   | Objective or measurable                |
| Validity         | Depends on logical reasoning        | Depends on empirical evidence          |
| Generalizability | May not be applicable to all cases  | Can be generalized to similar cases    |
| Scope            | Broader and abstract                | Narrower and specific                  |
| Examples         | Justice, love, democracy            | Temperature, population, income        |

Further Detail

Introduction

Conceptual and empirical are two fundamental approaches used in various fields of study, including philosophy, science, and research. While both approaches aim to gain knowledge and understanding, they differ in their methods and sources of information. In this article, we will explore the attributes of conceptual and empirical approaches, highlighting their strengths and limitations.

Conceptual Approach

The conceptual approach primarily relies on abstract ideas, theories, and concepts to understand and explain phenomena. It focuses on the theoretical framework and uses deductive reasoning to draw conclusions. Conceptual analysis involves breaking down complex ideas into simpler components and examining their relationships.

One of the key attributes of the conceptual approach is its flexibility. It allows researchers to explore ideas and concepts that may not be directly observable or measurable. This flexibility enables the development of new theories and frameworks, expanding our understanding of various subjects.

Furthermore, the conceptual approach encourages critical thinking and creativity. Researchers can propose new ideas and challenge existing theories, leading to innovation and advancement in their respective fields. It also allows for the exploration of hypothetical scenarios and thought experiments, which can provide valuable insights.

However, the conceptual approach has its limitations. Since it relies heavily on abstract ideas, it may lack empirical evidence to support its claims. This can lead to subjective interpretations and potential biases. Additionally, the conceptual approach may struggle to provide concrete predictions or practical applications without empirical validation.

Empirical Approach

The empirical approach, on the other hand, emphasizes the collection and analysis of observable data to draw conclusions. It relies on direct observation, experimentation, and measurement to test hypotheses and theories. Empirical research aims to provide objective and verifiable evidence to support or refute claims.

One of the key attributes of the empirical approach is its emphasis on objectivity. By relying on observable data, it aims to minimize biases and subjective interpretations. This allows for the replication of experiments and studies, enhancing the reliability and validity of the findings.

Moreover, the empirical approach provides a solid foundation for evidence-based decision making. It enables researchers to gather data from real-world scenarios and draw conclusions based on actual observations. This practical application makes the empirical approach highly valuable in fields such as medicine, psychology, and social sciences.

However, the empirical approach also has its limitations. It may not capture the full complexity of certain phenomena, as some aspects may be difficult to measure or observe directly. Additionally, empirical research often requires significant resources, time, and effort to collect and analyze data, which can limit the scope and feasibility of certain studies.

Comparing Conceptual and Empirical

While the conceptual and empirical approaches have distinct attributes, they are not mutually exclusive. In fact, they often complement each other in the pursuit of knowledge and understanding.

Conceptual and empirical approaches can be seen as two sides of the same coin. The conceptual approach provides the theoretical framework and ideas, while the empirical approach tests and validates these concepts through observation and measurement.

By combining the strengths of both approaches, researchers can develop comprehensive and robust theories. The conceptual approach allows for the exploration of new ideas and the development of theoretical frameworks, while the empirical approach provides the necessary evidence to support or refute these concepts.

Furthermore, the integration of conceptual and empirical approaches can lead to a more holistic understanding of complex phenomena. The conceptual approach helps researchers identify relevant variables and relationships, guiding the design of empirical studies. The empirical approach, in turn, provides data that can refine and improve conceptual frameworks.

It is important to note that the choice between the conceptual and empirical approaches depends on the research question, the nature of the subject under investigation, and the available resources. Some research questions may require a more theoretical and conceptual analysis, while others may necessitate empirical data collection and experimentation.

Conceptual and empirical approaches are two distinct but interconnected methods used in various fields of study. While the conceptual approach relies on abstract ideas and theories, the empirical approach emphasizes the collection and analysis of observable data. Both approaches have their strengths and limitations, and their integration can lead to a more comprehensive understanding of complex phenomena.

Researchers should carefully consider the attributes of both approaches and choose the most appropriate method based on their research question and objectives. By utilizing the strengths of both conceptual and empirical approaches, researchers can contribute to the advancement of knowledge and make meaningful contributions to their respective fields.



ScholarsEdge - Academic Workshops And Research Training

Understanding Conceptual vs Empirical Research: Definitions, Differences, and Frameworks


Research methodologies are fundamental to the advancement of scientific knowledge. Conceptual and empirical research are distinct approaches, each with its own focus and methodology. This section explores these two types of research, providing definitions and examples to elucidate their differences and applications.


Definition of Conceptual Research

Conceptual research primarily focuses on abstract ideas and theories rather than empirical observation. This type of research involves synthesizing and analyzing existing knowledge to formulate new theoretical insights or reinterpret existing ones without direct observation or experimentation.

Focus on Abstract Ideas and Theories: Conceptual research delves into abstract concepts to propose new theories or provide a novel interpretation of existing theories. It leverages deductive reasoning and logical analysis to explore and expand the theoretical framework of a subject.

Examples of Conceptual Research in Various Fields:

Philosophy: Immanuel Kant’s “ Critique of Pure Reason ” is an exemplary piece of conceptual research, offering a foundational analysis of metaphysics and epistemology without empirical investigation.

Mathematics: The development of non-Euclidean geometry by mathematicians like Lobachevsky and Bolyai, which redefined traditional notions of space using theoretical models, not empirical evidence.

Economics: John Maynard Keynes’ formulation of Keynesian economics was initially a conceptual framework that adjusted the existing economic theories to explain periods of economic downturn better.

Definition of Empirical Research

In contrast, empirical research is based on direct observation or experience, relying on experimental and observational methods to collect data that can inform scientific theories. This approach is fundamental in fields where practical experiments and observations are used to test hypotheses and develop new knowledge.

Based on Observation and Experimentation: Empirical research requires data collection through direct observation or experimentation. This approach ensures that theories are tested against observable reality, grounding theoretical insights in empirical evidence.

Examples of Empirical Research in Action:

Biology: Charles Darwin’s use of empirical evidence from his voyage on the HMS Beagle led to the development of the theory of natural selection.

Physics: The use of particle accelerators to observe subatomic particles directly, testing theoretical predictions about particle physics.

Psychology: Using controlled experiments, like the Stanford prison experiment, to observe human behavior in simulated conditions.

Differences Between Conceptual and Empirical Research

To clarify the distinctions between conceptual and empirical research, the following table summarizes the main differences based on several key attributes:

| Attribute       | Conceptual Research | Empirical Research |
|-----------------|---------------------|--------------------|
| Definition      | Research based on theorizing and synthesizing ideas and concepts without direct observation or experimentation | Research based on direct observation, experimentation, and the collection of measurable evidence |
| Focus           | Abstract ideas, theories, and conceptual understanding | Observable phenomena, data collection, and empirical evidence |
| Data source     | Uses existing theoretical data and literature; relies on secondary data to form hypotheses and frameworks | Generates new, primary data through direct observations and experiments; relies on measurable and observable data |
| Methods         | Analytical and interpretative methods, including literature review, theoretical synthesis, and logical reasoning | Experimental methods, including controlled experiments, surveys, observations, and statistical analysis |
| Purpose         | To clarify, reinterpret, or propose new theoretical frameworks; to organize and synthesize existing knowledge | To test hypotheses, validate theories, and discover new patterns or phenomena through empirical evidence |
| Data collection | No direct data collection; the study uses existing data and information from various sources | Active data collection, including using instruments and measurements to gather new data |
| Analysis        | Primarily qualitative, focusing on identifying patterns and constructing theoretical propositions | Both qualitative and quantitative; focuses on statistical analysis and testing to derive conclusions from the data |
| Outcome         | Produces theoretical insights and expands understanding through new models or frameworks | Produces empirical evidence that supports or refutes hypotheses, contributing to the body of factual knowledge |
| Examples        | Developing theories of social behavior; formulating new economic models based on existing data | Observing behavioral changes in response to a new educational method; conducting clinical trials to test a new drug |

Understanding Conceptual Frameworks in Research


A conceptual framework plays a crucial role in both types of research, particularly in guiding the research design and interpretation of data in complex studies.

What is a Conceptual Framework?

A conceptual framework is a coherent system of concepts, assumptions, expectations, beliefs, and theories supporting research. It serves as a map that guides the research by clarifying the key variables and their presumed relationships.

Definition and Role in Research:

A conceptual framework helps to organize the research questions, design, and methodology by providing a precise model of the relationships among the variables involved. It ensures the research has a solid theoretical base and a clear investigatory pathway.

How It Guides Research Questions and Design:

The conceptual framework helps shape the research questions and influence the study’s design by delineating the key variables and hypothesized relationships. It ensures that the research objectives are clearly mapped and aligned with the broader theoretical issues.

Components of a Conceptual Framework

Key Variables and Hypotheses: Identifies the specific variables being studied and their hypothesized relationships. For example, in a study on the impact of social media on youth, key variables might include hours spent on social media, academic performance, and anxiety levels.

The Role of Background Literature: Background literature utilizes existing research and theories to justify the selection of variables and the proposed relationships. It helps to ground the conceptual framework in existing knowledge, providing a rationale for the expected relationships and guiding the development of research hypotheses.
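As a loose, purely illustrative sketch, the key variables and hypothesized relationships of a conceptual framework can be written down as a simple data structure. The variable names follow the hypothetical social-media example above, and the dictionary layout is an assumption for illustration, not an established format:

```python
# A toy representation of a conceptual framework: each hypothesis links an
# independent variable to a dependent variable with an expected direction.
# Variable names follow the hypothetical social-media example in the text.
framework = {
    "variables": ["hours_on_social_media", "academic_performance", "anxiety_level"],
    "hypotheses": [
        {"independent": "hours_on_social_media",
         "dependent": "academic_performance",
         "expected_direction": "negative"},
        {"independent": "hours_on_social_media",
         "dependent": "anxiety_level",
         "expected_direction": "positive"},
    ],
}

# Sanity check: every variable referenced in a hypothesis must appear in the
# variable list, mirroring how a framework grounds hypotheses in its variables.
for h in framework["hypotheses"]:
    assert h["independent"] in framework["variables"]
    assert h["dependent"] in framework["variables"]

print(len(framework["hypotheses"]))  # prints 2
```

Writing the framework down this explicitly makes the presumed relationships easy to inspect before any data are collected.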

Conceptual vs. Theoretical Frameworks

While often used interchangeably, conceptual and theoretical frameworks serve different roles in research.

Definition of Theoretical Framework


A theoretical framework is based on tested and validated theories. It serves as the foundation for identifying which theories will guide the research, the relationships between the variables, and the formulation of hypotheses.

How It Underpins the Study with Existing Theories:

The theoretical framework involves applying specific theories, often from different fields, to frame the research question and interpret the findings. For example, a theoretical framework might use Piaget’s stages of cognitive development to analyze how children learn in educational settings.

Differences and Similarities

Conceptual frameworks are typically developed as initial models to guide the early stages of research, helping to identify key variables and hypotheses. In contrast, theoretical frameworks are built on existing, validated theories and are often used to interpret data after empirical research. Despite these differences, both aim to clarify relationships among key variables and to ensure that research is grounded in, and contributes to, theoretical knowledge.

When to Use Each Framework:

Use a conceptual framework when existing theories are insufficient to explain a new phenomenon, or when a study aims to reinterpret or extend existing theoretical approaches; it guides the initial exploration of variables and hypotheses. Use a theoretical framework when a well-established theory can be applied directly to the research problem, typically to interpret data after empirical work.

Examples Illustrating Their Use in Research:

A conceptual framework is used in studies where established theories may not sufficiently explain new phenomena. For instance, a survey of the role of innovation in small businesses might develop a conceptual framework incorporating variables like organizational culture, market dynamics, and technology adoption without initially applying a specific established theory. This helps explore and identify key variables and formulate initial hypotheses, setting the stage for empirical investigation.

Conversely, a theoretical framework is applied when a well-established theory can be directly used to address the research problem. An example is research on voter behavior, where the Theory of Planned Behavior is used to model how attitudes, subjective norms, and perceived control influence voting intentions. Here, an existing theory interprets the data after the empirical research is conducted.

5. How to Develop a Conceptual Framework

Developing a conceptual framework is a critical step in the research process, particularly in qualitative research, where integrating theoretical and empirical observations is foundational. This section outlines a step-by-step guide to creating a robust conceptual framework, identifying key variables, establishing relationships, and using appropriate tools and techniques.

Step-by-Step Guide to Creating Your Framework

1. Identify the main elements influencing your research question.
2. Gather information about your research topic.
3. Use theory to guide which variables are essential.
4. Define how variables might affect each other.
5. Formulate propositions about the relationships to be tested.
6. Visually organize the relationships between variables.

6. Examples of Conceptual Frameworks in Qualitative Research

Conceptual frameworks play a vital role in qualitative research by providing a structured lens through which the study is conducted. This section presents a case study as an example of a conceptual framework in action and discusses how these frameworks inform qualitative research.

Case Study: Example of a Conceptual Framework in a Qualitative Study

Study Overview: The study examined how social media influences teenagers’ self-esteem. The key variables identified were social media usage, peer influence, and self-esteem levels.

Framework Development:

This conceptual framework examines how social media usage and peer influence affect teenagers’ self-esteem. It hypothesizes that social media usage, measured by time spent and engagement, negatively impacts self-esteem because teenagers are exposed to more peer comparisons online, leading to feelings of inadequacy. However, the framework also suggests that positive peer influence can moderate this relationship: supportive online and offline interactions with peers can buffer the negative effects of social media on self-esteem.

Conceptual frameworks are particularly valuable in qualitative research. By defining key elements and their hypothesized connections, they ensure that research methods align with the study’s goals. They also provide a foundation for interpreting data, allowing researchers to explore the “why” behind the “what” and draw deeper insights into the phenomenon under investigation.

7. Practical Tips and Common Mistakes

Developing an effective conceptual framework requires careful thought and a structured approach. Below are practical tips and common pitfalls to avoid during this process.

Tips for Developing Effective Conceptual Frameworks

Ensure your framework is grounded in a solid theoretical base supported by current literature.
Address and explore any contradictions between your theoretical framework and empirical findings.
Aim for clarity in illustrating how each variable fits into the framework and the nature of their relationships.
Consider and articulate potential alternative explanations for the relationships in your framework.
As your understanding evolves, be prepared to refine and adjust your framework.
Avoid adding too many variables or overly complex relationships that make the framework challenging to understand and operationalize.

8. Conclusion

Conceptual and empirical research, though distinct in methods (theory vs. data), work together to explore and explain phenomena. One builds theories; the other tests them. Researchers who thoughtfully use both frameworks can create stronger, clearer studies that ultimately improve our understanding of complex issues.

FAQs About Conceptual vs. Empirical Research

What is a conceptual framework in qualitative research?

In qualitative research, a conceptual framework acts like a roadmap that guides your study. It clarifies the key concepts (variables) you’ll explore, how they might relate and the theoretical underpinnings that inform your research question. It helps you analyze your qualitative data (interviews, observations, etc.) through the lens of these defined relationships.

How does a conceptual framework differ from a theoretical one in practical terms?

Scope: A conceptual framework is specific to your research question, focusing on the variables and relationships you’re investigating. A theoretical framework is broader, outlining general principles and concepts within a discipline.

Development: A conceptual framework is often developed during the research, informed by your literature review and evolving understanding. A theoretical framework is usually pre-established within a particular field.

Purpose: A conceptual framework helps you make sense of your qualitative data and draw conclusions. A theoretical framework provides a foundation for interpreting your findings within a larger body of knowledge.

Can empirical research have a conceptual framework?

Yes! While less common, empirical research (e.g., quantitative studies) can benefit from a conceptual framework. It helps you define the variables you’ll measure, how they might influence each other, and the hypotheses you want to test.


CBE—Life Sciences Education, Vol. 21(3), Fall 2022

Literature Reviews, Theoretical Frameworks, and Conceptual Frameworks: An Introduction for New Biology Education Researchers

Julie A. Luft

† Department of Mathematics, Social Studies, and Science Education, Mary Frances Early College of Education, University of Georgia, Athens, GA 30602-7124

Sophia Jeong

‡ Department of Teaching & Learning, College of Education & Human Ecology, Ohio State University, Columbus, OH 43210

Robert Idsardi

§ Department of Biology, Eastern Washington University, Cheney, WA 99004

Grant Gardner

∥ Department of Biology, Middle Tennessee State University, Murfreesboro, TN 37132

Abstract

To frame their work, biology education researchers need to consider the role of literature reviews, theoretical frameworks, and conceptual frameworks as critical elements of the research and writing process. However, these elements can be confusing for scholars new to education research. This Research Methods article is designed to provide an overview of each of these elements and delineate the purpose of each in the educational research process. We describe what biology education researchers should consider as they conduct literature reviews, identify theoretical frameworks, and construct conceptual frameworks. Clarifying these different components of educational research studies can be helpful to new biology education researchers and the biology education research community at large in situating their work in the broader scholarly literature.

INTRODUCTION

Discipline-based education research (DBER) involves the purposeful and situated study of teaching and learning in specific disciplinary areas ( Singer et al. , 2012 ). Studies in DBER are guided by research questions that reflect disciplines’ priorities and worldviews. Researchers can use quantitative data, qualitative data, or both to answer these research questions through a variety of methodological traditions. Across all methodologies, there are different methods associated with planning and conducting educational research studies that include the use of surveys, interviews, observations, artifacts, or instruments. Ensuring the coherence of these elements to the discipline’s perspective also involves situating the work in the broader scholarly literature. The tools for doing this include literature reviews, theoretical frameworks, and conceptual frameworks. However, the purpose and function of each of these elements is often confusing to new education researchers. The goal of this article is to introduce new biology education researchers to these three elements, which are important in DBER scholarship and the broader educational literature.

The first element we discuss is a review of research (literature reviews), which highlights the need for a specific research question, study problem, or topic of investigation. Literature reviews situate the relevance of the study within a topic and a field. The process may seem familiar to science researchers entering DBER fields, but new researchers may still struggle in conducting the review. Booth et al. (2016b) highlight some of the challenges novice education researchers face when conducting a review of literature. They point out that novice researchers struggle in deciding how to focus the review, determining the scope of articles needed in the review, and knowing how to be critical of the articles in the review. Overcoming these challenges (and others) can help novice researchers construct a sound literature review that can inform the design of the study and help ensure the work makes a contribution to the field.

The second and third highlighted elements are theoretical and conceptual frameworks. These guide biology education research (BER) studies, and may be less familiar to science researchers. These elements are important in shaping the construction of new knowledge. Theoretical frameworks offer a way to explain and interpret the studied phenomenon, while conceptual frameworks clarify assumptions about the studied phenomenon. Despite the importance of these constructs in educational research, biology education researchers have noted the limited use of theoretical or conceptual frameworks in published work ( DeHaan, 2011 ; Dirks, 2011 ; Lo et al. , 2019 ). In reviewing articles published in CBE—Life Sciences Education ( LSE ) between 2015 and 2019, we found that fewer than 25% of the research articles had a theoretical or conceptual framework (see the Supplemental Information), and at times there was an inconsistent use of theoretical and conceptual frameworks. Clearly, these frameworks are challenging for published biology education researchers, which suggests the importance of providing some initial guidance to new biology education researchers.

Fortunately, educational researchers have increased their explicit use of these frameworks over time, and this is influencing educational research in science, technology, engineering, and mathematics (STEM) fields. For instance, a quick search for theoretical or conceptual frameworks in the abstracts of articles in Educational Research Complete (a common database for educational research) in STEM fields demonstrates a dramatic change over the last 20 years: from only 778 articles published between 2000 and 2010 to 5703 articles published between 2010 and 2020, a more than sevenfold increase. Greater recognition of the importance of these frameworks is contributing to DBER authors being more explicit about such frameworks in their studies.

Collectively, literature reviews, theoretical frameworks, and conceptual frameworks work to guide methodological decisions and the elucidation of important findings. Each offers a different perspective on the problem of study and is an essential element in all forms of educational research. As new researchers seek to learn about these elements, they will find different resources, a variety of perspectives, and many suggestions about the construction and use of these elements. The wide range of available information can overwhelm the new researcher who just wants to learn the distinction between these elements or how to craft them adequately.

Our goal in writing this paper is not to offer specific advice about how to write these sections in scholarly work. Instead, we wanted to introduce these elements to those who are new to BER and who are interested in better distinguishing one from the other. In this paper, we share the purpose of each element in BER scholarship, along with important points on its construction. We also provide references for additional resources that may be beneficial to better understanding each element. Table 1 summarizes the key distinctions among these elements.

Comparison of literature reviews, theoretical frameworks, and conceptual frameworks

Purpose
• Literature reviews: To point out the need for the study in BER and its connection to the field.
• Theoretical frameworks: To state the assumptions and orientations of the researcher regarding the topic of study.
• Conceptual frameworks: To describe the researcher’s understanding of the main concepts under investigation.

Aims
• Literature reviews: A literature review examines current and relevant research associated with the study question. It is comprehensive, critical, and purposeful.
• Theoretical frameworks: A theoretical framework illuminates the phenomenon of study and the corresponding assumptions adopted by the researcher. Frameworks can take on different orientations.
• Conceptual frameworks: The conceptual framework is created by the researcher(s), includes the presumed relationships among concepts, and addresses needed areas of study discovered in literature reviews.

Connection to the manuscript
• Literature reviews: A literature review should connect to the study question, guide the study methodology, and be central in the discussion by indicating how the analyzed data advance what is known in the field.
• Theoretical frameworks: A theoretical framework drives the question, guides the types of methods for data collection and analysis, informs the discussion of the findings, and reveals the subjectivities of the researcher.
• Conceptual frameworks: The conceptual framework is informed by literature reviews, experiences, or experiments. It may include emergent ideas that are not yet grounded in the literature. It should be coherent with the paper’s theoretical framing.

Additional points
• Literature reviews: A literature review may reach beyond BER and include other education research fields.
• Theoretical frameworks: A theoretical framework does not rationalize the need for the study, and a theoretical framework can come from different fields.
• Conceptual frameworks: A conceptual framework articulates the phenomenon under study through written descriptions and/or visual representations.

This article is written for the new biology education researcher who is just learning about these different elements or for scientists looking to become more involved in BER. It is a result of our own work as science education and biology education researchers, whether as graduate students and postdoctoral scholars or newly hired and established faculty members. This is the article we wish had been available as we started to learn about these elements or discussed them with new educational researchers in biology.

LITERATURE REVIEWS

Purpose of a literature review.

A literature review is foundational to any research study in education or science. In education, a well-conceptualized and well-executed review provides a summary of the research that has already been done on a specific topic and identifies questions that remain to be answered, thus illustrating the current research project’s potential contribution to the field and the reasoning behind the methodological approach selected for the study ( Maxwell, 2012 ). BER is an evolving disciplinary area that is redefining areas of conceptual emphasis as well as orientations toward teaching and learning (e.g., Labov et al. , 2010 ; American Association for the Advancement of Science, 2011 ; Nehm, 2019 ). As a result, building comprehensive, critical, purposeful, and concise literature reviews can be a challenge for new biology education researchers.

Building Literature Reviews

There are different ways to approach and construct a literature review. Booth et al. (2016a) provide an overview that includes, for example, scoping reviews, which are focused only on notable studies and use a basic method of analysis, and integrative reviews, which are the result of exhaustive literature searches across different genres. Underlying each of these different review processes are attention to the Search process, Appraisal of articles, Synthesis of the literature, and Analysis: SALSA ( Booth et al. , 2016a ). This useful acronym can help the researcher focus on the process while building a specific type of review.

However, new educational researchers often have questions about literature reviews that are foundational to SALSA or other approaches. Common questions concern determining which literature pertains to the topic of study or the role of the literature review in the design of the study. This section addresses such questions broadly while providing general guidance for writing a narrative literature review that evaluates the most pertinent studies.

The literature review process should begin before the research is conducted. As Boote and Beile (2005 , p. 3) suggested, researchers should be “scholars before researchers.” They point out that having a good working knowledge of the proposed topic helps illuminate avenues of study. Some subject areas have a deep body of work to read and reflect upon, providing a strong foundation for developing the research question(s). For instance, the teaching and learning of evolution is an area of long-standing interest in the BER community, generating many studies (e.g., Perry et al. , 2008 ; Barnes and Brownell, 2016 ) and reviews of research (e.g., Sickel and Friedrichsen, 2013 ; Ziadie and Andrews, 2018 ). Emerging areas of BER include the affective domain, issues of transfer, and metacognition ( Singer et al. , 2012 ). Many studies in these areas are transdisciplinary and not always specific to biology education (e.g., Rodrigo-Peiris et al. , 2018 ; Kolpikova et al. , 2019 ). These newer areas may require reading outside BER; fortunately, summaries of some of these topics can be found in the Current Insights section of the LSE website.

In focusing on a specific problem within a broader research strand, a new researcher will likely need to examine research outside BER. Depending upon the area of study, the expanded reading list might involve a mix of BER, DBER, and educational research studies. Determining the scope of the reading is not always straightforward. A simple way to focus one’s reading is to create a “summary phrase” or “research nugget,” which is a very brief descriptive statement about the study. It should focus on the essence of the study, for example, “first-year nonmajor students’ understanding of evolution,” “metacognitive prompts to enhance learning during biochemistry,” or “instructors’ inquiry-based instructional practices after professional development programming.” This type of phrase should help a new researcher identify two or more areas to review that pertain to the study. Focusing on recent research in the last 5 years is a good first step. Additional studies can be identified by reading relevant works referenced in those articles. It is also important to read seminal studies that are more than 5 years old. Reading a range of studies should give the researcher the necessary command of the subject in order to suggest a research question.

Given that the research question(s) arise from the literature review, the review should also substantiate the selected methodological approach. The review and research question(s) guide the researcher in determining how to collect and analyze data. Often the methodological approach used in a study is selected to contribute knowledge that expands upon what has been published previously about the topic (see Institute of Education Sciences and National Science Foundation, 2013 ). An emerging topic of study may need an exploratory approach that allows for a description of the phenomenon and development of a potential theory. This could, but not necessarily, require a methodological approach that uses interviews, observations, surveys, or other instruments. An extensively studied topic may call for the additional understanding of specific factors or variables; this type of study would be well suited to a verification or a causal research design. These could entail a methodological approach that uses valid and reliable instruments, observations, or interviews to determine an effect in the studied event. In either of these examples, the researcher(s) may use a qualitative, quantitative, or mixed methods methodological approach.

Even with a good research question, there is still more reading to be done. The complexity and focus of the research question dictates the depth and breadth of the literature to be examined. Questions that connect multiple topics can require broad literature reviews. For instance, a study that explores the impact of a biology faculty learning community on the inquiry instruction of faculty could have the following review areas: learning communities among biology faculty, inquiry instruction among biology faculty, and inquiry instruction among biology faculty as a result of professional learning. Biology education researchers need to consider whether their literature review requires studies from different disciplines within or outside DBER. For the example given, it would be fruitful to look at research focused on learning communities with faculty in STEM fields or in general education fields that result in instructional change. It is important not to be too narrow or too broad when reading. When the conclusions of articles start to sound similar or no new insights are gained, the researcher likely has a good foundation for a literature review. This level of reading should allow the researcher to demonstrate a mastery in understanding the researched topic, explain the suitability of the proposed research approach, and point to the need for the refined research question(s).

The literature review should include the researcher’s evaluation and critique of the selected studies. A researcher may have a large collection of studies, but not all of the studies will follow standards important in the reporting of empirical work in the social sciences. The American Educational Research Association ( Duran et al. , 2006 ), for example, offers a general discussion about standards for such work: an adequate review of research informing the study, the existence of sound and appropriate data collection and analysis methods, and appropriate conclusions that do not overstep or underexplore the analyzed data. The Institute of Education Sciences and National Science Foundation (2013) also offer Common Guidelines for Education Research and Development that can be used to evaluate collected studies.

Because not all journals adhere to such standards, it is important that a researcher review each study to determine the quality of published research, per the guidelines suggested earlier. In some instances, the research may be fatally flawed. Examples of such flaws include data that do not pertain to the question, a lack of discussion about the data collection, poorly constructed instruments, or an inadequate analysis. These types of errors result in studies that are incomplete, error-laden, or inaccurate and should be excluded from the review. Most studies have limitations, and the author(s) often make them explicit. For instance, there may be an instructor effect, recognized bias in the analysis, or issues with the sample population. Limitations are usually addressed by the research team in some way to ensure a sound and acceptable research process. Occasionally, the limitations associated with the study can be significant and not addressed adequately, which leaves a consequential decision in the hands of the researcher. Providing critiques of studies in the literature review process gives the reader confidence that the researcher has carefully examined relevant work in preparation for the study and, ultimately, the manuscript.

A solid literature review clearly anchors the proposed study in the field and connects the research question(s), the methodological approach, and the discussion. Reviewing extant research leads to research questions that will contribute to what is known in the field. By summarizing what is known, the literature review points to what needs to be known, which in turn guides decisions about methodology. Finally, notable findings of the new study are discussed in reference to those described in the literature review.

Within published BER studies, literature reviews can be placed in different locations in an article. When included in the introductory section of the study, the first few paragraphs of the manuscript set the stage, with the literature review following the opening paragraphs. Cooper et al. (2019) illustrate this approach in their study of course-based undergraduate research experiences (CUREs). An introduction discussing the potential of CUREs is followed by an analysis of the existing literature relevant to the design of CUREs that allows for novel student discoveries. Within this review, the authors point out contradictory findings among research on novel student discoveries. This clarifies the need for their study, which is described and highlighted through specific research aims.

A literature review can also form a separate section in a paper. For example, the introduction to Todd et al. (2019) illustrates the need for their research topic by highlighting the potential of learning progressions (LPs) and suggesting that LPs may help mitigate learning loss in genetics. At the end of the introduction, the authors state their specific research questions. The review of literature following this opening section comprises two subsections. One focuses on learning loss in general and examines a variety of studies and meta-analyses from the disciplines of medical education, mathematics, and reading. The second section focuses specifically on LPs in genetics and highlights student learning in the midst of LPs. These separate reviews provide insights into the stated research question.

Suggestions and Advice

A well-conceptualized, comprehensive, and critical literature review reveals the understanding of the topic that the researcher brings to the study. Literature reviews should not be so big that there is no clear area of focus; nor should they be so narrow that no real research question arises. The task for a researcher is to craft an efficient literature review that offers a critical analysis of published work, articulates the need for the study, guides the methodological approach to the topic of study, and provides an adequate foundation for the discussion of the findings.

In our own writing of literature reviews, there are often many drafts. An early draft may seem well suited to the study because the need for and approach to the study are well described. However, as the results of the study are analyzed and findings begin to emerge, the existing literature review may be inadequate and need revision. The need for an expanded discussion about the research area can result in the inclusion of new studies that support the explanation of a potential finding. The literature review may also prove to be too broad. Refocusing on a specific area allows for more contemplation of a finding.

It should be noted that there are different types of literature reviews, and many books and articles have been written about the different ways to embark on these types of reviews. Among these different resources, the following may be helpful in considering how to refine the review process for scholarly journals:

  • Booth, A., Sutton, A., & Papaioannou, D. (2016a). Systematic approaches to a successful literature review (2nd ed.). Los Angeles, CA: Sage. This book addresses different types of literature reviews and offers important suggestions pertaining to defining the scope of the literature review and assessing extant studies.
  • Booth, W. C., Colomb, G. G., Williams, J. M., Bizup, J., & Fitzgerald, W. T. (2016b). The craft of research (4th ed.). Chicago: University of Chicago Press. This book can help the novice consider how to make the case for an area of study. While this book is not specifically about literature reviews, it offers suggestions about making the case for your study.
  • Galvan, J. L., & Galvan, M. C. (2017). Writing literature reviews: A guide for students of the social and behavioral sciences (7th ed.). Routledge. This book offers guidance on writing different types of literature reviews. For the novice researcher, there are useful suggestions for creating coherent literature reviews.

THEORETICAL FRAMEWORKS

Purpose of theoretical frameworks.

As new education researchers may be less familiar with theoretical frameworks than with literature reviews, this discussion begins with an analogy. Envision a biologist, chemist, and physicist examining together the dramatic effect of a fog tsunami over the ocean. A biologist gazing at this phenomenon may be concerned with the effect of fog on various species. A chemist may be interested in the chemical composition of the fog as water vapor condenses around bits of salt. A physicist may be focused on the refraction of light to make fog appear to be “sitting” above the ocean. While observing the same “objective event,” the scientists are operating under different theoretical frameworks that provide a particular perspective or “lens” for the interpretation of the phenomenon. Each of these scientists brings specialized knowledge, experiences, and values to this phenomenon, and these influence the interpretation of the phenomenon. The scientists’ theoretical frameworks influence how they design and carry out their studies and interpret their data.

Within an educational study, a theoretical framework helps to explain a phenomenon through a particular lens and challenges and extends existing knowledge within the limitations of that lens. Theoretical frameworks are explicitly stated by an educational researcher in the paper’s framework, theory, or relevant literature section. The framework shapes the types of questions asked, guides the method by which data are collected and analyzed, and informs the discussion of the results of the study. It also reveals the researcher’s subjectivities, for example, values, social experience, and viewpoint ( Allen, 2017 ). It is essential that a novice researcher learn to explicitly state a theoretical framework, because all research questions are being asked from the researcher’s implicit or explicit assumptions of a phenomenon of interest ( Schwandt, 2000 ).

Selecting Theoretical Frameworks

Theoretical frameworks are one of the most contemplated elements in our work in educational research. In this section, we share three important considerations for new scholars selecting a theoretical framework.

The first step in identifying a theoretical framework involves reflecting on the phenomenon within the study and the assumptions aligned with the phenomenon. The phenomenon involves the studied event. There are many possibilities, for example, student learning, instructional approach, or group organization. A researcher holds assumptions about how the phenomenon will be affected, influenced, changed, or portrayed. It is ultimately the researcher’s assumption(s) about the phenomenon that aligns with a theoretical framework. An example can help illustrate how a researcher’s reflection on the phenomenon and acknowledgment of assumptions can result in the identification of a theoretical framework.

In our example, a biology education researcher may be interested in exploring how students’ learning of difficult biological concepts can be supported by the interactions of group members. The phenomenon of interest is the interactions among the peers, and the researcher assumes that more knowledgeable students are important in supporting the learning of the group. As a result, the researcher may draw on Vygotsky’s (1978) sociocultural theory of learning and development that is focused on the phenomenon of student learning in a social setting. This theory posits the critical nature of interactions among students and between students and teachers in the process of building knowledge. A researcher drawing upon this framework holds the assumption that learning is a dynamic social process involving questions and explanations among students in the classroom and that more knowledgeable peers play an important part in the process of building conceptual knowledge.

It is important to state at this point that there are many different theoretical frameworks. Some frameworks focus on learning and knowing, while others focus on equity, empowerment, or discourse. Some frameworks are well articulated, and others are still being refined. For a new researcher, it can be challenging to find a theoretical framework. One of the best ways to find a theoretical framework is through published works that highlight different frameworks.

When a theoretical framework is selected, it should clearly connect to all parts of the study. The framework should augment the study by adding a perspective that provides greater insights into the phenomenon. It should clearly align with the studies described in the literature review. For instance, a framework focused on learning would correspond to research that reported different learning outcomes for similar studies. The methods for data collection and analysis should also correspond to the framework. For instance, a study about instructional interventions could use a theoretical framework concerned with learning and could collect data about the effect of the intervention on what is learned. When the data are analyzed, the theoretical framework should provide added meaning to the findings, and the findings should align with the theoretical framework.

A study by Jensen and Lawson (2011) provides an example of how a theoretical framework connects different parts of the study. They compared undergraduate biology students in heterogeneous and homogeneous groups over the course of a semester. Jensen and Lawson (2011) assumed that learning involved collaboration and more knowledgeable peers, which made Vygotsky’s (1978) theory a good fit for their study. They predicted that students in heterogeneous groups would experience greater improvement in their reasoning abilities and science achievement, with much of the learning guided by the more knowledgeable peers.

In the enactment of the study, they collected data about the instruction in traditional and inquiry-oriented classes, while the students worked in homogeneous or heterogeneous groups. To determine the effect of working in groups, the authors also measured students’ reasoning abilities and achievement. Each data-collection and analysis decision connected to understanding the influence of collaborative work.

Their findings highlighted aspects of Vygotsky’s (1978) theory of learning. One finding, for instance, was that inquiry instruction, as a whole, resulted in reasoning and achievement gains. This links to Vygotsky (1978), because inquiry instruction involves interactions among group members. A more nuanced finding was that group composition had a conditional effect. Heterogeneous groups performed better with more traditional and didactic instruction, regardless of the reasoning ability of the group members. Homogeneous groups worked better during interaction-rich activities for students with low reasoning ability. The authors attributed the variation to the different types of helping behaviors of students. High-performing students provided the answers, while students with low reasoning ability had to work collectively through the material. In terms of Vygotsky (1978), this finding provided new insights into the learning context in which productive interactions can occur for students.

Another consideration in the selection and use of a theoretical framework pertains to its orientation to the study. This can result in the theoretical framework prioritizing individuals, institutions, and/or policies ( Anfara and Mertz, 2014 ). Frameworks that connect to individuals, for instance, could contribute to understanding their actions, learning, or knowledge. Institutional frameworks, on the other hand, offer insights into how institutions, organizations, or groups can influence individuals or materials. Policy theories provide ways to understand how national or local policies can dictate an emphasis on outcomes or instructional design. These different types of frameworks highlight different aspects in an educational setting, which influences the design of the study and the collection of data. In addition, these different frameworks offer a way to make sense of the data. Aligning the data collection and analysis with the framework ensures that a study is coherent and can contribute to the field.

New understandings emerge when different theoretical frameworks are used. For instance, Ebert-May et al. (2015) prioritized the individual level within conceptual change theory (see Posner et al., 1982). In this theory, an individual’s knowledge changes when it no longer fits the phenomenon. Ebert-May et al. (2015) designed a professional development program challenging biology postdoctoral scholars’ existing conceptions of teaching. The authors reported that the biology postdoctoral scholars’ teaching practices became more student-centered as they were challenged to explain their instructional decision making. According to the theory, the biology postdoctoral scholars’ dissatisfaction with their descriptions of teaching and learning initiated change in their knowledge and instruction. These results reveal how conceptual change theory can explain the learning of participants and guide the design of professional development programming.

The communities of practice (CoP) theoretical framework (Lave, 1988; Wenger, 1998) prioritizes the institutional level, suggesting that learning occurs when individuals learn from and contribute to the communities in which they reside. Grounded in the assumption of community learning, the literature on CoP suggests that, as individuals interact regularly with the other members of their group, they learn about the rules, roles, and goals of the community (Allee, 2000). A study conducted by Gehrke and Kezar (2017) used the CoP framework to understand organizational change by examining the involvement of individual faculty engaged in a cross-institutional CoP focused on changing the instructional practice of faculty at each institution. In the CoP, faculty members were involved in enhancing instructional materials within their department, which aligned with an overarching goal of instituting instruction that embraced active learning. Not surprisingly, Gehrke and Kezar (2017) revealed that faculty who perceived the community culture as important in their work cultivated institutional change. Furthermore, they found that institutional change was sustained when key leaders served as mentors and provided support for faculty, and as faculty themselves developed into leaders. This study reveals the complexity of individual roles in a CoP in supporting institutional instructional change.

It is important to explicitly state the theoretical framework used in a study, but elucidating a theoretical framework can be challenging for a new educational researcher. The literature review can help to identify an applicable theoretical framework. Focal areas of the review or central terms often connect to assumptions and assertions associated with the framework that pertain to the phenomenon of interest. Another way to identify a theoretical framework is self-reflection by the researcher on personal beliefs and understandings about the nature of knowledge the researcher brings to the study (Lysaght, 2011). In stating one’s beliefs and understandings related to the study (e.g., students construct their knowledge, instructional materials support learning), an orientation becomes evident that will suggest a particular theoretical framework. Theoretical frameworks are not arbitrary but purposefully selected.

With experience, a researcher may find expanded roles for theoretical frameworks. Researchers may revise an existing framework that has limited explanatory power, or they may decide there is a need to develop a new theoretical framework. These frameworks can emerge from a current study or the need to explain a phenomenon in a new way. Researchers may also find that multiple theoretical frameworks are necessary to frame and explore a problem, as different frameworks can provide different insights into a problem.

Finally, it is important to recognize that choosing “x” theoretical framework does not necessarily mean a researcher chooses “y” methodology and so on, nor is there a clear-cut, linear process in selecting a theoretical framework for one’s study. In part, the nonlinear process of identifying a theoretical framework is what makes understanding and using theoretical frameworks challenging. For the novice scholar, contemplating and understanding theoretical frameworks is essential. Fortunately, there are articles and books that can help:

  • Creswell, J. W. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). Los Angeles, CA: Sage. This book provides an overview of theoretical frameworks in general educational research.
  • Ding, L. (2019). Theoretical perspectives of quantitative physics education research. Physical Review Physics Education Research , 15 (2), 020101-1–020101-13. This paper illustrates how a DBER field can use theoretical frameworks.
  • Nehm, R. (2019). Biology education research: Building integrative frameworks for teaching and learning about living systems. Disciplinary and Interdisciplinary Science Education Research , 1 , ar15. https://doi.org/10.1186/s43031-019-0017-6 . This paper articulates the need for studies in BER to explicitly state theoretical frameworks and provides examples of potential studies.
  • Patton, M. Q. (2015). Qualitative research & evaluation methods: Integrating theory and practice . Sage. This book also provides an overview of theoretical frameworks, but for both research and evaluation.

CONCEPTUAL FRAMEWORKS

Purpose of a conceptual framework.

A conceptual framework is a description of the way a researcher understands the factors and/or variables that are involved in the study and their relationships to one another. The purpose of a conceptual framework is to articulate the concepts under study using relevant literature ( Rocco and Plakhotnik, 2009 ) and to clarify the presumed relationships among those concepts ( Rocco and Plakhotnik, 2009 ; Anfara and Mertz, 2014 ). Conceptual frameworks are different from theoretical frameworks in both their breadth and grounding in established findings. Whereas a theoretical framework articulates the lens through which a researcher views the work, the conceptual framework is often more mechanistic and malleable.

Conceptual frameworks are broader, encompassing both established theories (i.e., theoretical frameworks) and the researchers’ own emergent ideas. Emergent ideas, for example, may be rooted in informal and/or unpublished observations from experience. These emergent ideas would not be considered a “theory” if they are not yet tested, supported by systematically collected evidence, and peer reviewed. However, they do still play an important role in the way researchers approach their studies. The conceptual framework allows authors to clearly describe their emergent ideas so that connections among ideas in the study and the significance of the study are apparent to readers.

Constructing Conceptual Frameworks

Including a conceptual framework in a research study is important, but researchers often opt to include either a conceptual or a theoretical framework. Either may be adequate, but including both provides greater insight into the research approach. For instance, a research team may plan to test a novel component of an existing theory. In their study, they would describe the existing theoretical framework that informs their work and then present their own conceptual framework, within which specific topics portray emergent ideas related to the theory. Describing both frameworks allows readers to better understand the researchers’ assumptions, orientations, and understanding of the concepts being investigated. For example, Connolly et al. (2018) included a conceptual framework that described how they applied the theoretical framework of social cognitive career theory (SCCT) to their study on teaching programs for doctoral students. In their conceptual framework, the authors described SCCT, explained how it applied to the investigation, and drew upon results from previous studies to justify the proposed connections between the theory and their emergent ideas.

In some cases, authors may be able to sufficiently describe their conceptualization of the phenomenon under study in an introduction alone, without a separate conceptual framework section. However, incomplete descriptions of how the researchers conceptualize the components of the study may limit the significance of the study by making the research less intelligible to readers. This is especially problematic when studying topics in which researchers use the same terms for different constructs or different terms for similar and overlapping constructs (e.g., inquiry, teacher beliefs, pedagogical content knowledge, or active learning). Authors must describe their conceptualization of a construct if the research is to be understandable and useful.

There are some key areas to consider regarding the inclusion of a conceptual framework in a study. To begin with, it is important to recognize that conceptual frameworks are constructed by the researchers conducting the study ( Rocco and Plakhotnik, 2009 ; Maxwell, 2012 ). This is different from theoretical frameworks that are often taken from established literature. Researchers should bring together ideas from the literature, but they may be influenced by their own experiences as a student and/or instructor, the shared experiences of others, or thought experiments as they construct a description, model, or representation of their understanding of the phenomenon under study. This is an exercise in intellectual organization and clarity that often considers what is learned, known, and experienced. The conceptual framework makes these constructs explicitly visible to readers, who may have different understandings of the phenomenon based on their prior knowledge and experience. There is no single method to go about this intellectual work.

Reeves et al. (2016) is an example of an article that proposed a conceptual framework about graduate teaching assistant professional development evaluation and research. The authors used existing literature to create a novel framework that filled a gap in current research and practice related to the training of graduate teaching assistants. This conceptual framework can guide the systematic collection of data by other researchers because the framework describes the relationships among various factors that influence teaching and learning. The Reeves et al. (2016) conceptual framework may be modified as additional data are collected and analyzed by other researchers. This is not uncommon, as conceptual frameworks can serve as catalysts for concerted research efforts that systematically explore a phenomenon (e.g., Reynolds et al. , 2012 ; Brownell and Kloser, 2015 ).

Sabel et al. (2017) used a conceptual framework in their exploration of how scaffolds, an external factor, interact with internal factors to support student learning. Their conceptual framework integrated principles from two theoretical frameworks, self-regulated learning and metacognition, to illustrate how the research team conceptualized students’ use of scaffolds in their learning ( Figure 1 ). Sabel et al. (2017) created this model using their interpretations of these two frameworks in the context of their teaching.

Figure 1. Conceptual framework from Sabel et al. (2017).

A conceptual framework should describe the relationship among components of the investigation ( Anfara and Mertz, 2014 ). These relationships should guide the researcher’s methods of approaching the study ( Miles et al. , 2014 ) and inform both the data to be collected and how those data should be analyzed. Explicitly describing the connections among the ideas allows the researcher to justify the importance of the study and the rigor of the research design. Just as importantly, these frameworks help readers understand why certain components of a system were not explored in the study. This is a challenge in education research, which is rooted in complex environments with many variables that are difficult to control.

For example, Sabel et al. (2017) stated: “Scaffolds, such as enhanced answer keys and reflection questions, can help students and instructors bridge the external and internal factors and support learning” (p. 3). They connected the scaffolds in the study to the three dimensions of metacognition and the eventual transformation of existing ideas into new or revised ideas. Their framework provides a rationale for focusing on how students use two different scaffolds, and not on other factors that may influence a student’s success (self-efficacy, use of active learning, exam format, etc.).

In constructing conceptual frameworks, researchers should address needed areas of study and/or contradictions discovered in literature reviews. By attending to these areas, researchers can strengthen their arguments for the importance of a study. For instance, conceptual frameworks can address how the current study will fill gaps in the research, resolve contradictions in existing literature, or suggest a new area of study. While a literature review describes what is known and not known about the phenomenon, the conceptual framework leverages these gaps in describing the current study ( Maxwell, 2012 ). In the example of Sabel et al. (2017) , the authors indicated there was a gap in the literature regarding how scaffolds engage students in metacognition to promote learning in large classes. Their study helps fill that gap by describing how scaffolds can support students in the three dimensions of metacognition: intelligibility, plausibility, and wide applicability. In another example, Lane (2016) integrated research from science identity, the ethic of care, the sense of belonging, and an expertise model of student success to form a conceptual framework that addressed the critiques of other frameworks. In a more recent example, Sbeglia et al. (2021) illustrated how a conceptual framework influences the methodological choices and inferences in studies by educational researchers.

Sometimes researchers draw upon the conceptual frameworks of other researchers. When a researcher’s conceptual framework closely aligns with an existing framework, the discussion may be brief. For example, Ghee et al. (2016) referred to portions of SCCT as their conceptual framework to explain the significance of their work on students’ self-efficacy and career interests. Because the authors’ conceptualization of this phenomenon aligned with a previously described framework, they briefly mentioned the conceptual framework and provided additional citations that provided more detail for the readers.

Within both the BER and the broader DBER communities, conceptual frameworks have been used to describe different constructs. For example, some researchers have used the term “conceptual framework” to describe students’ conceptual understandings of a biological phenomenon. This is distinct from a researcher’s conceptual framework of the educational phenomenon under investigation, which may also need to be explicitly described in the article. Other studies have presented a research logic model or flowchart of the research design as a conceptual framework. These constructions can be quite valuable in helping readers understand the data-collection and analysis process. However, a model depicting the study design does not serve the same role as a conceptual framework. Researchers need to avoid conflating these constructs by differentiating the researchers’ conceptual framework that guides the study from the research design, when applicable.

Explicitly describing conceptual frameworks is essential in depicting the focus of the study. We have found that being explicit in a conceptual framework means using accepted terminology, referencing prior work, and clearly noting connections between terms. This description can also highlight gaps in the literature or suggest potential contributions to the field of study. A well-elucidated conceptual framework can suggest additional studies that may be warranted. This can also spur other researchers to consider how they would approach the examination of a phenomenon and could result in a revised conceptual framework.

It can be challenging to create conceptual frameworks, but they are important. Below are two resources that could be helpful in constructing and presenting conceptual frameworks in educational research:

  • Maxwell, J. A. (2012). Qualitative research design: An interactive approach (3rd ed.). Los Angeles, CA: Sage. Chapter 3 in this book describes how to construct conceptual frameworks.
  • Ravitch, S. M., & Riggan, M. (2016). Reason & rigor: How conceptual frameworks guide research . Los Angeles, CA: Sage. This book explains how conceptual frameworks guide the research questions, data collection, data analyses, and interpretation of results.

CONCLUDING THOUGHTS

Literature reviews, theoretical frameworks, and conceptual frameworks are all important in DBER and BER. Robust literature reviews reinforce the importance of a study. Theoretical frameworks connect the study to the base of knowledge in educational theory and specify the researcher’s assumptions. Conceptual frameworks allow researchers to explicitly describe their conceptualization of the relationships among the components of the phenomenon under study. Table 1 provides a general overview of these components in order to assist biology education researchers in thinking about these elements.

It is important to emphasize that these different elements are intertwined. When these elements are aligned and complement one another, the study is coherent, and the study findings contribute to knowledge in the field. When literature reviews, theoretical frameworks, and conceptual frameworks are disconnected from one another, the study suffers. The point of the study is lost, suggested findings are unsupported, or important conclusions are invisible to the researcher. In addition, this misalignment may be costly in terms of time and money.

Conducting a literature review, selecting a theoretical framework, and building a conceptual framework are some of the most difficult elements of a research study. It takes time to understand the relevant research, identify a theoretical framework that provides important insights into the study, and formulate a conceptual framework that organizes the findings. In the research process, there is often a constant back and forth among these elements as the study evolves. With an ongoing refinement of the review of literature, clarification of the theoretical framework, and articulation of a conceptual framework, a sound study can emerge that makes a contribution to the field. This is the goal of BER and education research.

REFERENCES

  • Allee, V. (2000). Knowledge networks and communities of learning. OD Practitioner, 32(4), 4–13.
  • Allen, M. (2017). The Sage encyclopedia of communication research methods (Vols. 1–4). Los Angeles, CA: Sage. https://doi.org/10.4135/9781483381411
  • American Association for the Advancement of Science. (2011). Vision and change in undergraduate biology education: A call to action. Washington, DC.
  • Anfara, V. A., Mertz, N. T. (2014). Setting the stage. In Anfara, V. A., Mertz, N. T. (Eds.), Theoretical frameworks in qualitative research (pp. 1–22). Sage.
  • Barnes, M. E., Brownell, S. E. (2016). Practices and perspectives of college instructors on addressing religious beliefs when teaching evolution. CBE—Life Sciences Education, 15(2), ar18. https://doi.org/10.1187/cbe.15-11-0243
  • Boote, D. N., Beile, P. (2005). Scholars before researchers: On the centrality of the dissertation literature review in research preparation. Educational Researcher, 34(6), 3–15. https://doi.org/10.3102/0013189x034006003
  • Booth, A., Sutton, A., Papaioannou, D. (2016a). Systematic approaches to a successful literature review (2nd ed.). Los Angeles, CA: Sage.
  • Booth, W. C., Colomb, G. G., Williams, J. M., Bizup, J., Fitzgerald, W. T. (2016b). The craft of research (4th ed.). Chicago, IL: University of Chicago Press.
  • Brownell, S. E., Kloser, M. J. (2015). Toward a conceptual framework for measuring the effectiveness of course-based undergraduate research experiences in undergraduate biology. Studies in Higher Education, 40(3), 525–544. https://doi.org/10.1080/03075079.2015.1004234
  • Connolly, M. R., Lee, Y. G., Savoy, J. N. (2018). The effects of doctoral teaching development on early-career STEM scholars’ college teaching self-efficacy. CBE—Life Sciences Education, 17(1), ar14. https://doi.org/10.1187/cbe.17-02-0039
  • Cooper, K. M., Blattman, J. N., Hendrix, T., Brownell, S. E. (2019). The impact of broadly relevant novel discoveries on student project ownership in a traditional lab course turned CURE. CBE—Life Sciences Education, 18(4), ar57. https://doi.org/10.1187/cbe.19-06-0113
  • Creswell, J. W. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). Los Angeles, CA: Sage.
  • DeHaan, R. L. (2011). Education research in the biological sciences: A nine decade review (Paper commissioned by the NAS/NRC Committee on the Status, Contributions, and Future Directions of Discipline Based Education Research). Washington, DC: National Academies Press. Retrieved May 20, 2022, from www7.nationalacademies.org/bose/DBER_Meeting2_commissioned_papers_page.html
  • Ding, L. (2019). Theoretical perspectives of quantitative physics education research. Physical Review Physics Education Research, 15(2), 020101.
  • Dirks, C. (2011). The current status and future direction of biology education research. Paper presented at: Second Committee Meeting on the Status, Contributions, and Future Directions of Discipline-Based Education Research, 18–19 October (Washington, DC). Retrieved May 20, 2022, from http://sites.nationalacademies.org/DBASSE/BOSE/DBASSE_071087
  • Duran, R. P., Eisenhart, M. A., Erickson, F. D., Grant, C. A., Green, J. L., Hedges, L. V., Schneider, B. L. (2006). Standards for reporting on empirical social science research in AERA publications: American Educational Research Association. Educational Researcher, 35(6), 33–40.
  • Ebert-May, D., Derting, T. L., Henkel, T. P., Middlemis Maher, J., Momsen, J. L., Arnold, B., Passmore, H. A. (2015). Breaking the cycle: Future faculty begin teaching with learner-centered strategies after professional development. CBE—Life Sciences Education, 14(2), ar22. https://doi.org/10.1187/cbe.14-12-0222
  • Galvan, J. L., Galvan, M. C. (2017). Writing literature reviews: A guide for students of the social and behavioral sciences (7th ed.). New York, NY: Routledge. https://doi.org/10.4324/9781315229386
  • Gehrke, S., Kezar, A. (2017). The roles of STEM faculty communities of practice in institutional and departmental reform in higher education. American Educational Research Journal, 54(5), 803–833. https://doi.org/10.3102/0002831217706736
  • Ghee, M., Keels, M., Collins, D., Neal-Spence, C., Baker, E. (2016). Fine-tuning summer research programs to promote underrepresented students’ persistence in the STEM pathway. CBE—Life Sciences Education, 15(3), ar28. https://doi.org/10.1187/cbe.16-01-0046
  • Institute of Education Sciences & National Science Foundation. (2013). Common guidelines for education research and development. Retrieved May 20, 2022, from www.nsf.gov/pubs/2013/nsf13126/nsf13126.pdf
  • Jensen, J. L., Lawson, A. (2011). Effects of collaborative group composition and inquiry instruction on reasoning gains and achievement in undergraduate biology. CBE—Life Sciences Education, 10(1), 64–73.
  • Kolpikova, E. P., Chen, D. C., Doherty, J. H. (2019). Does the format of preclass reading quizzes matter? An evaluation of traditional and gamified, adaptive preclass reading quizzes. CBE—Life Sciences Education, 18(4), ar52. https://doi.org/10.1187/cbe.19-05-0098
  • Labov, J. B., Reid, A. H., Yamamoto, K. R. (2010). Integrated biology and undergraduate science education: A new biology education for the twenty-first century? CBE—Life Sciences Education, 9(1), 10–16. https://doi.org/10.1187/cbe.09-12-0092
  • Lane, T. B. (2016). Beyond academic and social integration: Understanding the impact of a STEM enrichment program on the retention and degree attainment of underrepresented students. CBE—Life Sciences Education, 15(3), ar39. https://doi.org/10.1187/cbe.16-01-0070
  • Lave, J. (1988). Cognition in practice: Mind, mathematics and culture in everyday life. New York, NY: Cambridge University Press.
  • Lo, S. M., Gardner, G. E., Reid, J., Napoleon-Fanis, V., Carroll, P., Smith, E., Sato, B. K. (2019). Prevailing questions and methodologies in biology education research: A longitudinal analysis of research in CBE—Life Sciences Education and at the Society for the Advancement of Biology Education Research. CBE—Life Sciences Education, 18(1), ar9. https://doi.org/10.1187/cbe.18-08-0164
  • Lysaght, Z. (2011). Epistemological and paradigmatic ecumenism in “Pasteur’s quadrant:” Tales from doctoral research. In Official Conference Proceedings of the Third Asian Conference on Education in Osaka, Japan. Retrieved May 20, 2022, from http://iafor.org/ace2011_offprint/ACE2011_offprint_0254.pdf
  • Maxwell, J. A. (2012). Qualitative research design: An interactive approach (3rd ed.). Los Angeles, CA: Sage.
  • Miles, M. B., Huberman, A. M., Saldaña, J. (2014). Qualitative data analysis (3rd ed.). Los Angeles, CA: Sage.
  • Nehm, R. (2019). Biology education research: Building integrative frameworks for teaching and learning about living systems. Disciplinary and Interdisciplinary Science Education Research, 1, ar15. https://doi.org/10.1186/s43031-019-0017-6
  • Patton, M. Q. (2015). Qualitative research & evaluation methods: Integrating theory and practice. Los Angeles, CA: Sage.
  • Perry, J., Meir, E., Herron, J. C., Maruca, S., Stal, D. (2008). Evaluating two approaches to helping college students understand evolutionary trees through diagramming tasks. CBE—Life Sciences Education, 7(2), 193–201. https://doi.org/10.1187/cbe.07-01-0007
  • Posner, G. J., Strike, K. A., Hewson, P. W., Gertzog, W. A. (1982). Accommodation of a scientific conception: Toward a theory of conceptual change . Science Education , 66 ( 2 ), 211–227. [ Google Scholar ]
  • Ravitch, S. M., Riggan, M. (2016). Reason & rigor: How conceptual frameworks guide research . Los Angeles, CA: Sage. [ Google Scholar ]
  • Reeves, T. D., Marbach-Ad, G., Miller, K. R., Ridgway, J., Gardner, G. E., Schussler, E. E., Wischusen, E. W. (2016). A conceptual framework for graduate teaching assistant professional development evaluation and research . CBE—Life Sciences Education , 15 ( 2 ), es2. https://doi.org/10.1187/cbe.15-10-0225 [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Reynolds, J. A., Thaiss, C., Katkin, W., Thompson, R. J. Jr. (2012). Writing-to-learn in undergraduate science education: A community-based, conceptually driven approach . CBE—Life Sciences Education , 11 ( 1 ), 17–25. https://doi.org/10.1187/cbe.11-08-0064 [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Rocco, T. S., Plakhotnik, M. S. (2009). Literature reviews, conceptual frameworks, and theoretical frameworks: Terms, functions, and distinctions . Human Resource Development Review , 8 ( 1 ), 120–130. https://doi.org/10.1177/1534484309332617 [ Google Scholar ]
  • Rodrigo-Peiris, T., Xiang, L., Cassone, V. M. (2018). A low-intensity, hybrid design between a “traditional” and a “course-based” research experience yields positive outcomes for science undergraduate freshmen and shows potential for large-scale application . CBE—Life Sciences Education , 17 ( 4 ), ar53. https://doi.org/10.1187/cbe.17-11-0248 [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Sabel, J. L., Dauer, J. T., Forbes, C. T. (2017). Introductory biology students’ use of enhanced answer keys and reflection questions to engage in metacognition and enhance understanding . CBE—Life Sciences Education , 16 ( 3 ), ar40. https://doi.org/10.1187/cbe.16-10-0298 [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Sbeglia, G. C., Goodridge, J. A., Gordon, L. H., Nehm, R. H. (2021). Are faculty changing? How reform frameworks, sampling intensities, and instrument measures impact inferences about student-centered teaching practices . CBE—Life Sciences Education , 20 ( 3 ), ar39. https://doi.org/10.1187/cbe.20-11-0259 [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Schwandt, T. A. (2000). Three epistemological stances for qualitative inquiry: Interpretivism, hermeneutics, and social constructionism . In Denzin, N. K., Lincoln, Y. S. (Eds.), Handbook of qualitative research (2nd ed., pp. 189–213). Los Angeles, CA: Sage. [ Google Scholar ]
  • Sickel, A. J., Friedrichsen, P. (2013). Examining the evolution education literature with a focus on teachers: Major findings, goals for teacher preparation, and directions for future research . Evolution: Education and Outreach , 6 ( 1 ), 23. https://doi.org/10.1186/1936-6434-6-23 [ Google Scholar ]
  • Singer, S. R., Nielsen, N. R., Schweingruber, H. A. (2012). Discipline-based education research: Understanding and improving learning in undergraduate science and engineering . Washington, DC: National Academies Press. [ Google Scholar ]
  • Todd, A., Romine, W. L., Correa-Menendez, J. (2019). Modeling the transition from a phenotypic to genotypic conceptualization of genetics in a university-level introductory biology context . Research in Science Education , 49 ( 2 ), 569–589. https://doi.org/10.1007/s11165-017-9626-2 [ Google Scholar ]
  • Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes . Cambridge, MA: Harvard University Press. [ Google Scholar ]
  • Wenger, E. (1998). Communities of practice: Learning as a social system . Systems Thinker , 9 ( 5 ), 2–3. [ Google Scholar ]
  • Ziadie, M. A., Andrews, T. C. (2018). Moving evolution education forward: A systematic analysis of literature to identify gaps in collective knowledge for teaching . CBE—Life Sciences Education , 17 ( 1 ), ar11. https://doi.org/10.1187/cbe.17-08-0190 [ PMC free article ] [ PubMed ] [ Google Scholar ]


Theoretical vs Conceptual Framework

What they are & how they’re different (with examples)

By: Derek Jansen (MBA) | Reviewed By: Eunice Rautenbach (DTech) | March 2023

If you’re new to academic research, sooner or later you’re bound to run into the terms theoretical framework and conceptual framework . These are closely related but distinctly different things (despite some people using them interchangeably) and it’s important to understand what each means. In this post, we’ll unpack both theoretical and conceptual frameworks in plain language along with practical examples , so that you can approach your research with confidence.

Overview: Theoretical vs Conceptual

  • What is a theoretical framework?
  • Example of a theoretical framework
  • What is a conceptual framework?
  • Example of a conceptual framework

  • Theoretical vs conceptual: which one should I use?

A theoretical framework (also sometimes referred to as a foundation of theory) is essentially a set of concepts, definitions, and propositions that together form a structured, comprehensive view of a specific phenomenon.

In other words, a theoretical framework is a collection of existing theories, models and frameworks that provides a foundation of core knowledge – a “lay of the land”, so to speak, from which you can build a research study. For this reason, it’s usually presented fairly early within the literature review section of a dissertation, thesis or research paper .


Let’s look at an example to make the theoretical framework a little more tangible.

If your research aims involve understanding what factors contributed toward people trusting investment brokers, you’d need to first lay down some theory so that it’s crystal clear what exactly you mean by this. For example, you would need to define what you mean by “trust”, as there are many potential definitions of this concept. The same would be true for any other constructs or variables of interest.

You’d also need to identify what existing theories have to say in relation to your research aim. In this case, you could discuss some of the key literature in relation to organisational trust. A quick search on Google Scholar using some well-considered keywords generally provides a good starting point.


Typically, you’ll present your theoretical framework in written form , although sometimes it will make sense to utilise some visuals to show how different theories relate to each other. Your theoretical framework may revolve around just one major theory , or it could comprise a collection of different interrelated theories and models. In some cases, there will be a lot to cover and in some cases, not. Regardless of size, the theoretical framework is a critical ingredient in any study.

Simply put, the theoretical framework is the core foundation of theory that you’ll build your research upon. As we’ve mentioned many times on the blog, good research is developed by standing on the shoulders of giants . It’s extremely unlikely that your research topic will be completely novel and that there’ll be absolutely no existing theory that relates to it. If that’s the case, the most likely explanation is that you just haven’t reviewed enough literature yet! So, make sure that you take the time to review and digest the seminal sources.



A conceptual framework is typically a visual representation (although it can also be written out) of the expected relationships and connections between various concepts, constructs or variables. In other words, a conceptual framework visualises how the researcher views and organises the various concepts and variables within their study. This is typically based on aspects drawn from the theoretical framework, so there is a relationship between the two.

Quite commonly, conceptual frameworks are used to visualise the potential causal relationships and pathways that the researcher expects to find, based on their understanding of both the theoretical literature and the existing empirical research . Therefore, the conceptual framework is often used to develop research questions and hypotheses .
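To make this concrete, a conceptual framework can be thought of as a small directed graph: constructs are nodes, and each hypothesis is a labelled edge between them. The sketch below is a hypothetical illustration (the construct names and hypotheses H1–H3 are invented for this example, not drawn from any particular study):

```python
# A conceptual framework sketched as a directed graph: nodes are
# constructs, edges are the hypothesized relationships between them.
# All construct names and hypotheses below are invented for illustration.

framework = {
    "constructs": [
        "Perceived Expertise", "Communication Quality",
        "Trust", "Investment Intention",
    ],
    "hypotheses": [
        # (source construct, target construct, expected direction, label)
        ("Perceived Expertise", "Trust", "+", "H1"),
        ("Communication Quality", "Trust", "+", "H2"),
        ("Trust", "Investment Intention", "+", "H3"),
    ],
}

def describe(framework):
    """Render each hypothesized link as a readable sentence."""
    lines = []
    for source, target, direction, label in framework["hypotheses"]:
        effect = "positively" if direction == "+" else "negatively"
        lines.append(f"{label}: {source} {effect} influences {target}")
    return lines

for line in describe(framework):
    print(line)
```

Writing the framework down in a structured form like this forces you to state every expected relationship explicitly, which is exactly what the visual version of a conceptual framework does with boxes and arrows.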

Let’s look at an example of a conceptual framework to make it a little more tangible. In many conceptual frameworks, the hypotheses themselves are integrated into the visual, helping to connect the rest of the document to the framework.


Conceptual frameworks often make use of different shapes , lines and arrows to visualise the connections and relationships between different components and/or variables. Ultimately, the conceptual framework provides an opportunity for you to make explicit your understanding of how everything is connected . So, be sure to make use of all the visual aids you can – clean design, well-considered colours and concise text are your friends.

Theoretical framework vs conceptual framework

As you can see, the theoretical framework and the conceptual framework are closely related concepts, but they differ in terms of focus and purpose. The theoretical framework is used to lay down a foundation of theory on which your study will be built, whereas the conceptual framework visualises what you anticipate the relationships between concepts, constructs and variables may be, based on your understanding of the existing literature and the specific context and focus of your research. In other words, they’re different tools for different jobs , but they’re neighbours in the toolbox.

Naturally, the theoretical framework and the conceptual framework are not mutually exclusive . In fact, it’s quite likely that you’ll include both in your dissertation or thesis, especially if your research aims involve investigating relationships between variables. Of course, every research project is different and universities differ in terms of their expectations for dissertations and theses, so it’s always a good idea to have a look at past projects to get a feel for what the norms and expectations are at your specific institution.

Want to learn more about research terminology, methods and techniques? Be sure to check out the rest of the Grad Coach blog . Alternatively, if you’re looking for hands-on help, have a look at our private coaching service , where we hold your hand through the research process, step by step.




A Framework for Undertaking Conceptual and Empirical Research

  • First Online: 28 September 2017


  • Susanne Wiatr Borg
  • Louise Young


Marketing scholars have repeatedly called for more conceptual work. Despite this, the number of conceptual contributions within the discipline of marketing is declining. This chapter argues that one strategy to change this is the development of methodological frameworks that can guide and accredit the creation of conceptual scientific knowledge. This chapter offers a framework—the Conceptual and Empirical Research (CER) model—to guide conceptual and empirical research. The model consists of three embedded layers—ultimate presumptions, abductive logic and research design—which describe and interrelate the processes of conceptual as well as empirical research and show how knowledge creation is an emergent process. A range of conceptual research strategies is proposed that facilitate both the discovery and justification of conceptual insights.





Author information

Susanne Wiatr Borg, University of Southern Denmark, Kolding, Denmark

Louise Young, University of Southern Denmark, Kolding, Denmark; Western Sydney University, Sydney, Australia

Corresponding author: Louise Young

Editor information

Per Vagn Freytag


Copyright information

© 2018 Springer Nature Singapore Pte Ltd.

About this chapter

Borg, S.W., Young, L. (2018). A Framework for Undertaking Conceptual and Empirical Research. In: Freytag, P., Young, L. (eds) Collaborative Research Design. Springer, Singapore. https://doi.org/10.1007/978-981-10-5008-4_4

Print ISBN: 978-981-10-5006-0

Online ISBN: 978-981-10-5008-4



Online Tesis

Conceptual Research and its differences with Empirical Research

by Bastis Consultores | Sep 20, 2021 | Methodology


Conceptual research, as the name suggests, is research related to abstract concepts and ideas. It does not involve practical experimentation, but is based on the researcher analyzing the available information on a given topic. Conceptual research has been widely used in the study of philosophy to develop new theories, counter existing theories, or interpret existing theories in a different way.

Components of Conceptual Research

Conceptual Research Framework

A conceptual research framework is constructed from existing literature and studies from which inferences can be drawn. The study is carried out to reduce existing knowledge gaps on a particular topic and to make relevant and reliable information available.

To create a conceptual research framework, the following steps can be followed:

Defining a research topic

The first step of the framework is to clearly define the topic of your research. Most researchers will choose a topic related to their field of expertise.

Collecting and organizing relevant research

Since conceptual research is based on pre-existing studies and literature, researchers should collect all pertinent information related to their topic.

It is important to use reliable sources and data from reputable scientific journals or research papers. Because conceptual research does not involve practical experimentation, analyzing reliable, fact-based studies becomes all the more important.

Identifying variables for research

The next step is to select the variables relevant to the research. These variables are the measures against which inferences will be drawn. They give a new scope to the research and help to identify how the different variables may be affecting the subject of the research.

Creating the framework

The last step is to create the research framework using the relevant literature, variables and any other relevant material. The statement of the main question/problem of the research becomes your research framework.
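The four steps above can be sketched as a simple data structure. This is a hypothetical illustration of the workflow, not a tool described in the article; the example topic, source, and variables are filled in from the Malthus discussion that follows only for demonstration:

```python
from dataclasses import dataclass, field

# Each field mirrors one step of building a conceptual research framework:
# define a topic, collect reliable sources, identify variables, then state
# the framing question. All example values below are for illustration only.

@dataclass
class ConceptualFramework:
    topic: str = ""                                # Step 1: research topic
    sources: list = field(default_factory=list)    # Step 2: pre-existing literature
    variables: list = field(default_factory=list)  # Step 3: variables of interest
    research_question: str = ""                    # Step 4: the framing question

    def is_complete(self):
        """A framework needs a topic, at least one source,
        at least one variable, and a stated research question."""
        return bool(self.topic and self.sources and self.variables
                    and self.research_question)

fw = ConceptualFramework(
    topic="Population growth and food supply",
    sources=["Malthus (1798), An Essay on the Principle of Population"],
    variables=["population size", "food production"],
    research_question="Can food production keep pace with population growth?",
)
print(fw.is_complete())
```

The point of the sketch is simply that the framework is not done until all four elements are in place; leaving out, say, the variables leaves the inferences with nothing to rest on.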

Conceptual Research Example

An example of conceptual research is the theory Thomas Malthus set forth in his book “An Essay on the Principle of Population”. In it, Malthus theorized that disease, famine, war, and other calamities would eventually halt the expansion of the human population.

His theory was based on observations about human population growth and the growth of food production. He claimed that the human population increased geometrically while food production only increased arithmetically. To reach this conclusion he used existing population and food statistics. Based on this information, he assumed that humans would end up being unable to produce enough food to support themselves.
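Malthus’s contrast between the two growth patterns is easy to see numerically. The sketch below uses illustrative ratios, not Malthus’s own figures: population doubles each period (geometric growth) while food production gains a fixed increment (arithmetic growth), so food per capita inevitably shrinks:

```python
# Geometric growth multiplies by a constant ratio each period;
# arithmetic growth adds a constant increment each period.
# Starting values and rates are arbitrary, chosen to show the divergence.

def geometric(start, ratio, periods):
    return [start * ratio**t for t in range(periods)]

def arithmetic(start, increment, periods):
    return [start + increment * t for t in range(periods)]

population = geometric(1, 2, 6)   # 1, 2, 4, 8, 16, 32
food = arithmetic(1, 1, 6)        # 1, 2, 3, 4, 5, 6

for t, (p, f) in enumerate(zip(population, food)):
    print(f"period {t}: population={p}, food={f}, food per capita={f/p:.2f}")
```

After six periods the population is 32 units against 6 units of food, which is the arithmetic core of Malthus’s pessimistic conclusion.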

Malthus's theory turned out to be wrong for several reasons. One of the most important is that it did not take technological advances into account, understandably, given the era in which he wrote. Technological advances and global interconnection enabled a massive increase in food production and stimulated the flow of food from one country to another.

Although Malthus's theory was based on the best statistics of his time, his predictions turned out to be false.

Advantages of Conceptual Research

It saves time and resources: Compared to forms of research that require practical experimentation, conceptual research demands few resources.

It is a convenient form of research: As this form of research only requires the evaluation of the existing literature, it turns out to be a relatively convenient form of research.

Disadvantages of Conceptual Research

Questionable reliability and validity: Theories based on existing literature rather than experimentation and observation draw conclusions that rest less directly on facts and cannot always be considered reliable.

It is more prone to error and subjectivity: We often see philosophical theories refuted or revised because their conclusions are inferences drawn from existing texts rather than from practical experimentation.

Conceptual Research vs Empirical Research

Scientific research is usually divided into two classes: conceptual research and empirical research. In the past, a researcher prided himself on being one or the other, praising his own method and disparaging the alternative. Today the distinction is not so clear.

Conceptual research focuses on the concept or theory that explains or describes the phenomenon studied. What causes the disease? How can we describe the movement of the planets? What are the basic components of matter? The conceptual researcher sits at his desk with a pen in his hand and tries to solve these problems by thinking about them.

He doesn’t do experiments, but he can use the observations of others, since this is the mass of data he tries to make sense of. Until recently, conceptual research methodology was considered the most honorable form of research: it required using the brain, not the hands. Researchers who did experiments, like alchemists, were considered little better than blacksmiths: “disgusting empiricals.”

What is Empirical Research?

Despite their high status, conceptual researchers regularly produced theories that were wrong. Aristotle taught that heavy objects fall to earth faster than light ones, and many generations of professors repeated his teaching until Galileo proved him wrong. Galileo was an empiricist of the best kind, who conducted original experiments not only to destroy old theories but to provide the basis for new ones.

The backlash against the ivory tower theorists culminated in those claiming to have no use for theory, arguing that the empirical acquisition of knowledge was the only path to truth. A pure empiricist simply graphed the data and checked whether he got a straight-line relationship between the variables. If so, he had a good "empirical" relationship that allowed useful predictions to be made. The theory behind the correlation was irrelevant.

Conceptual Questions and Empirical Questions

Conceptual Questions

Philosophical questions tend to be conceptual in nature. This means that they cannot be answered simply by giving facts or information. A concept is the object of a thought, not something that is present to the senses.

Concepts are not a mystery, and although they are “abstract,” we use them all the time to organize our thinking. We literally couldn’t think or communicate without concepts. Some common examples of concepts are “justice,” “beauty,” and “truth,” but also “seven,” “blue,” or “big.”

When we ask a philosophical conceptual question, we usually inquire into the nature of something, or ask why something is the way it is. Ancient philosophers such as Plato posed conceptual questions such as "What is justice?" as the basis of philosophy. The statements "That action is wrong" and "Knowledge is justified true belief" are conceptual statements.

In papers, you will often be asked to consider concepts, analyze and describe how philosophers use them, and perhaps compare them across texts. For example, you may be asked, "Do animals have rights?" This question asks you to consider what a right is and whether it is the kind of thing an animal could or should have. It does not ask whether any laws actually grant such rights, nor does it ask for your opinion; it asks for a reasoned position grounded in philosophical concepts and texts.

Empirical Questions

The word “empirical” means “obtained through experience.” Scientific experiments and observation give rise to empirical data. The scientific theories that organize the data are conceptual. Historical records or the results of sociological or psychological surveys are empirical. Making sense of those records or results requires the use of concepts.

Empirical questions can be answered by giving facts or information. Examples of empirical questions are: “What is the chemical composition of water?” or: “When did the French Revolution occur?” or: “Which education system gives rise to the highest literacy rate?”

The Cycle of Empirical Research

The empirical research cycle is a five-phase cycle that describes the systematic process of conducting empirical research. It was developed by the Dutch psychologist A. D. de Groot in the 1940s and outlines five stages that together form a hypothetico-deductive approach to empirical research.

In the methodological cycle of empirical research, all processes are interconnected and no phase is more important than another. The cycle clearly outlines the phases involved in generating research hypotheses and in systematically testing them against empirical data.

Observation

It is the process of collecting empirical data for research. In this phase, the researcher collects relevant empirical data using qualitative or quantitative observation methods, and this serves to support the hypotheses of the research.

Induction

At this stage, the researcher uses inductive reasoning to reach a probable general conclusion based on his observations. He formulates a general hypothesis that attempts to explain the empirical data and goes on to examine the data in light of this hypothesis.

Deduction

This is the stage of deductive reasoning. Here the researcher derives testable predictions by applying logic and rationality to his observations.

Testing

Here the researcher tests the hypotheses using qualitative or quantitative research methods. At this stage, the researcher combines the relevant instruments of systematic research with empirical methods to arrive at objective results that support or refute the research hypotheses.

Evaluation

Evaluation is the final stage of an empirical research study. It presents the empirical data, the conclusions of the research and the arguments supporting them, along with any problems encountered during the research process.

This information is useful for future research.

Examples of Empirical Research

An empirical research study can be conducted to determine whether listening to upbeat music improves people’s mood. The researcher may have to conduct an experiment that involves exposing individuals to upbeat music to see if this improves their mood.

The results of such an experiment will provide empirical evidence that confirms or disproves the hypotheses.
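One simple way to analyze such a two-group experiment is a permutation test, which asks how often a difference as large as the observed one would arise if group labels were assigned at random. The mood scores below are invented for illustration, not real experimental data:

```python
import random
from statistics import mean

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical mood scores (0-100) for the two groups
upbeat  = [72, 81, 65, 90, 77, 84, 69, 88]
control = [60, 55, 70, 62, 58, 74, 66, 59]

observed = mean(upbeat) - mean(control)  # observed group difference

# Permutation test: shuffle group labels many times and count how often
# a difference at least as large arises by chance alone.
pooled = upbeat + control
n_extreme = 0
N_PERMUTATIONS = 10_000
for _ in range(N_PERMUTATIONS):
    random.shuffle(pooled)
    diff = mean(pooled[:len(upbeat)]) - mean(pooled[len(upbeat):])
    if diff >= observed:
        n_extreme += 1

p_value = n_extreme / N_PERMUTATIONS
print(f"mean difference = {observed:.2f}, p ~ {p_value:.4f}")
```

A small p-value suggests the observed mood difference is unlikely under chance alone; a real study would of course also need proper randomization and controls.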

An empirical research study may also be conducted to determine the effects of a new drug on specific groups of people. The researcher may expose research subjects to controlled amounts of the drug and observe the effects over a specific period of time to gather empirical data.

Another example of empirical research is the measurement of noise pollution levels in an urban area to determine the average levels of sound exposure experienced by its inhabitants. In this case, the researcher may have to administer questionnaires or conduct a survey to collect relevant data based on the experiences of the research subjects.

Empirical research can also be conducted to determine the relationship between seasonal migration and the body mass of flying birds. A researcher may need to observe the birds and carry out the observation and experimentation necessary to arrive at objective results that answer the research question.

Methods of Data Collection in Empirical Research

Empirical data can be collected using qualitative and quantitative data collection methods. Quantitative data collection methods are used for numerical data collection, while qualitative data collection processes are used to collect empirical data that cannot be quantified, i.e. non-numerical data.

The following are common methods of data collection in empirical research:

Survey/Questionnaire

The survey is a data collection method typically employed by researchers to gather large data sets from a specified number of respondents on a research topic. This method is most often used for quantitative data collection, although it can also be used in qualitative research.

A survey contains a set of questions that can range from closed questions to open-ended questions, along with other types of questions that revolve around the research topic. A survey can be administered physically or with the use of online data collection platforms.

Experiment

Empirical data can also be collected by conducting an experiment. An experiment is a controlled procedure in which one or more research variables are manipulated through a set of interconnected processes in order to confirm or refute the research hypotheses.

An experiment is a useful method for measuring causality, i.e., cause and effect between dependent and independent variables in a research environment. It is a comprehensive method of data collection in an empirical research study because it involves testing carefully formulated assumptions to arrive at the most valid data and research results.

Case Studies

The case study method is another common method of data collection in an empirical research study. It consists of examining and analyzing relevant cases and real-life experiences on the topic or variables of the research to discover in-depth information that can serve as empirical data.

Observation

The observation method is a qualitative data collection method that requires the researcher to study the behaviors of research variables in their natural environments to gather relevant information that can serve as empirical data.

Main Differences Between Conceptual Research and Empirical Research

Definition

Conceptual research is a type of research that is usually related to abstract ideas or concepts, while empirical research is any research study in which the conclusions are drawn from evidence verifiable by observation or experience, rather than from theory or pure logic.

Nature

Conceptual research deals with abstract ideas and concepts and does not involve practical experiments. Empirical research, on the other hand, involves phenomena that are observable and measurable.

Type of studies

Philosophical research studies are examples of conceptual research studies, while empirical research includes both quantitative and qualitative studies.

Conclusion

The main difference between conceptual and empirical research is that conceptual research involves abstract ideas and concepts, while empirical research involves research based on observation, experiments, and verifiable evidence.

The Scientific Method: A Bit of Both

The modern scientific method is actually a combination of empirical and conceptual research. From known experimental data, a scientist formulates a working hypothesis to explain some aspect of nature. He then conducts new experiments designed to test the theory's predictions, to support or disprove it. Einstein is often cited as an example of a conceptual researcher, but he based his theories on experimental observations and proposed experiments, real and thought, that would test them.

On the other hand, Edison is often considered an empiricist, the "Edisonian method" being a byword for trial and error. But Edison appreciated the work of theorists and hired some of the best. Random screening of a myriad of possibilities remains valuable: pharmaceutical companies looking for new drugs still do it, sometimes with great success.




Published: 22 August 2024

Conceptual structure and the growth of scientific knowledge

Kara Kedrick (ORCID: 0000-0002-3410-5834), Ekaterina Levitskaya, and Russell J. Funk (ORCID: 0000-0001-6670-4981)

Nature Human Behaviour (2024)


  • Computer science
  • Science, technology and society

How does scientific knowledge grow? This question has occupied a central place in the philosophy of science, stimulating heated debates but yielding no clear consensus. Many explanations can be understood in terms of whether and how they view the expansion of knowledge as proceeding through the accretion of scientific concepts into larger conceptual structures. Here we examine these views empirically by analysing 2,605,224 papers spanning five decades from both the social sciences (Web of Science) and the physical sciences (American Physical Society). Using natural language processing techniques, we create semantic networks of concepts, wherein noun phrases become linked when used in the same paper abstract. We then detect the core/periphery structures of these networks, wherein core concepts are densely connected sets of highly central nodes and periphery concepts are sparsely connected nodes that are highly connected to the core. For both the social and physical sciences, we observe increasingly rigid conceptual cores accompanied by the proliferation of periphery concepts. Subsequently, we examine the relationship between conceptual structure and the growth of scientific knowledge, finding that scientific works are more innovative in fields with cores that have higher conceptual churn and with larger cores. Furthermore, scientific consensus is associated with reduced conceptual churn and fewer conceptual cores. Overall, our findings suggest that while the organization of scientific concepts is important for the growth of knowledge, the mechanisms vary across time.



Data availability

The WoS data and the APS data are available from the Web of Science and the American Physical Society, respectively, but restrictions apply to the availability of these data, which were used under licence for the current study and so are not publicly available. If you are interested in accessing the WoS data, you can request access to the API through Clarivate, which requires an additional subscription or permission ( https://clarivate.com/products/scientific-and-academic-research/research-discovery-and-workflow-solutions/webofscience-platform/web-of-science-core-collection/ ). For access to the APS data, you can request permission directly from their website ( https://journals.aps.org/datasets/ ).

Code availability

The Python v.3 and Stata v.18 code we used to analyse and visualize the data for the current study are publicly available via Zenodo at https://doi.org/10.5281/zenodo.11533199 (ref. 49 ).

Price, D. J. d. S. Science since Babylon (Yale Univ. Press, 1961).

Price, D. J. d. S. Little Science, Big Science (Columbia Univ. Press, 1963).

Bornmann, L., Devarakonda, S., Tekles, A. & Chacko, G. Are disruption index indicators convergently valid? The comparison of several indicator variants with assessments by peers. Quant. Sci. Stud. 1 , 1242–1259 (2020).


Milojević, S. Quantifying the cognitive extent of science. J. Informetr. 9 , 962–973 (2015).

Tabah, A. N. Literature dynamics: studies on growth, diffusion, and epidemics. Annu. Rev. Inf. Sci. Technol. 34 , 249–286 (1999).


Kuhn, T. S. The Structure of Scientific Revolutions (Univ. Chicago Press, 1962).

Lakatos, I. & Musgrave, A. Criticism and the Growth of Knowledge: Proceedings of the International Colloquium in the Philosophy of Science, London, 1965 Vol. 4 (Cambridge Univ. Press, 1970).

Laudan, L. Progress and Its Problems: Toward a Theory of Scientific Growth (Univ. California Press, 1978).

Popper, K. R. Conjectures and Refutations: The Growth of Scientific Knowledge (Routledge & Kegan Paul, 2002).

Cole, S. Why sociology doesn’t make progress like the natural sciences. Sociol. Forum 9 , 133–154 (1994).

Cole, S. Disciplinary knowledge revisited: the social construction of sociology. Am. Sociol. 37 , 41–56 (2006).

Gonzalez, W. J. Prediction and Novel Facts in the Methodology of Scientific Research Programs 103–124 (Springer International, 2015).

Chu, J. S. G. & Evans, J. A. Slowed canonical progress in large fields of science. Proc. Natl Acad. Sci. USA 118 , e2021636118 (2021).


Newman, M. E. J. Scientific collaboration networks. ii. Shortest paths, weighted networks, and centrality. Phys. Rev. E 64 , 016132 (2001).


Latour, B. Science in Action: How to Follow Scientists and Engineers through Society (Harvard Univ. Press, 1987).

Lakatos, I., Worrall, J., Currie, G. & Currie, P. The Methodology of Scientific Research Programmes: Philosophical Papers Vol. 1 (Cambridge Univ. Press, 1978).

Kojaku, S. & Masuda, N. Finding multiple core–periphery pairs in networks. Phys. Rev. E 96 , 052313 (2017).


Borgatti, S. P. & Everett, M. G. Models of core/periphery structures. Soc. Netw. 21 , 375–395 (2000).

Funk, R. J. & Owen-Smith, J. A dynamic network measure of technological change. Manage. Sci. 63 , 791–817 (2017).

Mulkay, M. J., Gilbert, G. N. & Woolgar, S. Problem areas and research networks in science. Sociology 9 , 187–203 (1975).

Wimsatt, W. C. Reductionism and its heuristics: making methodological reductionism honest. Synthese 151 , 445–475 (2006).

Wu, L., Wang, D. & Evans, J. A. Large teams develop and small teams disrupt science and technology. Nature 566 , 378–382 (2019).


Shwed, U. & Bearman, P. S. The temporal structure of scientific consensus formation. Am. Sociol. Rev. 75 , 817–840 (2010).


Mayo, L. C., McCue, S. W. & Moroney, T. J. Gravity-driven fingering simulations for a thin liquid film flowing down the outside of a vertical cylinder. Phys. Rev. E 87 , 053018 (2013).

Jones, B. F. The burden of knowledge and the ‘death of the Renaissance Man’: is innovation getting harder? Rev. Econ. Stud. 76 , 283–317 (2009).

Gordon, R. J. The Rise and Fall of American Growth: The U.S. Standard of Living since the Civil War revised edn (Princeton Univ. Press, 2016).

Bhattacharya, J. & Packalen, M. Stagnation and Scientific Incentives Working Paper No. 26752 (National Bureau of Economic Research, 2020).

Fink, T., Reeves, M., Palma, R. & Farr, R. S. Serendipity and strategy in rapid innovation. Nat. Commun. 8 , 2002 (2017).

Tria, F., Loreto, V., Servedio, V. & Strogatz, S. The dynamics of correlated novelties. Sci. Rep. 4 , 5890 (2014).

Bloom, N., Jones, C. I., Van Reenen, J. & Webb, M. Are ideas getting harder to find? Am. Econ. Rev. 110 , 1104–1144 (2020).

Horgan, J. The End of Science: Facing the Limits of Knowledge in the Twilight of the Scientific Age (Basic Books, 2015).

Jones, B. F. & Weinberg, B. A. Age dynamics in scientific creativity. Proc. Natl Acad. Sci. USA 108 , 18910–18914 (2011).

Duncker, K. On problem solving. Psychol. Monogr . 58 , i–113 (1945).

Jansson, D. G. & Smith, S. M. Design fixation. Des. Stud. 12 , 3–11 (1991).

Maier, N. R. F. Reasoning in humans: II. The solution of a problem and its appearance in consciousness. J. Compar. Psychol. 12 , 181–194 (1931).

Smith, S. M., Ward, T. B. & Schumacher, J. S. Constraining effects of examples in a creative generation task. Mem. Cogn. 21 , 837–845 (1993).

Cole, S. Making Science: Between Nature and Society (Harvard Univ. Press, 1995).

Van Rossum, G. & Drake, F. L. Python 3 Reference Manual. (CreateSpace, 2009).

MariaDB Foundation. MariaDB. https://mariadb.com/ (2023).

Mongeon, P. & Paul-Hus, A. The journal coverage of Web of Science and Scopus: a comparative analysis. Scientometrics 106 , 213–228 (2016).

Tennant, J. P. Web of Science and Scopus are not global databases of knowledge. Eur. Sci. Ed. 46 , e51987 (2020).

Christianson, N. H., Sizemore Blevins, A. & Bassett, D. S. Architecture and evolution of semantic networks in mathematics texts. Proc. R. Soc. A 476 , 20190741 (2020).

Dworkin, J. D., Shinohara, R. T. & Bassett, D. S. The emergent integrated network structure of scientific research. PLoS ONE 14 , e0216146 (2019).

Rule, A., Cointet, J.-P. & Bearman, P. S. Lexical shifts, substantive changes, and continuity in State of the Union discourse, 1790–2014. Proc. Natl Acad. Sci. USA 112 , 10837–10844 (2015).

Honnibal, M., Montani, I., Van Landeghem, S. & Boyd, A. spaCy: industrial-strength natural language processing in Python. Zenodo https://zenodo.org/records/10009823 (2020).

DeWilde, B. textacy documentation (Chartbeat, Inc., 2021).

Hofstra, B. et al. The diversity–innovation paradox in science. Proc. Natl Acad. Sci. USA 117 , 9284–9291 (2020).

Kojaku, S. & Masuda, N. Core–periphery structure requires something else in the network. New J. Phys. 20 , 043012 (2018).

Kedrick, K., Levitskaya, E. & Funk, R. J. Conceptual structure and the growth of scientific knowledge. Zenodo https://doi.org/10.5281/zenodo.11533199 (2024).

Davis, R. L. Quantum turbulence. Phys. Rev. Lett. 64 , 2519–2522 (1990).


Acknowledgements

We thank the National Science Foundation for financial support of work related to this project (grants no. 1829168 and no. 1932596 to R.J.F.). The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript. We also thank D. Hirschman, M. Park and Y. J. Kim for feedback on an earlier version of this work, and T. Gebhart for many helpful conversations and assistance with data and computation. Our work was presented as a poster at the 2nd Annual International Conference on the Science of Science and Innovation, as a poster at the 43rd Annual Meeting of the Cognitive Science Society, as a lightning talk at Networks 2021: A Joint Sunbelt and NetSci Conference, and as a poster at the 3rd North American Social Networks Conference.

Author information

Authors and affiliations

Institute for Complex Social Dynamics, Carnegie Mellon University, Pittsburgh, PA, USA

Kara Kedrick

The Coleridge Initiative, New York, NY, USA

Ekaterina Levitskaya

Carlson School of Management, University of Minnesota, Minneapolis, MN, USA

Russell J. Funk


Contributions

The study was conceptualized and designed by K.K., E.L. and R.J.F. The data analysis was conducted by K.K. and R.J.F. The manuscript was initially drafted by K.K., E.L. and R.J.F., with subsequent revisions made by K.K. and R.J.F.

Corresponding author

Correspondence to Russell J. Funk .

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Human Behaviour thanks Sadamori Kojaku, Marc Santolini and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1: Concepts extracted from the text of an abstract.

This figure shows an example abstract from the APS data; the highlighted text indicates single-word and multi-word noun phrases identified as concepts using our extraction algorithm. Reproduced with permission from ref. 50 , American Physical Society.

Supplementary information

Supplementary Information

Supplementary Figs. 1–9 and Tables 1–3.

Reporting Summary

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Cite this article

Kedrick, K., Levitskaya, E. & Funk, R.J. Conceptual structure and the growth of scientific knowledge. Nat Hum Behav (2024). https://doi.org/10.1038/s41562-024-01957-x


Received: 29 September 2022

Accepted: 16 July 2024

Published: 22 August 2024

DOI: https://doi.org/10.1038/s41562-024-01957-x




Day Two: Placebo Workshop: Translational Research Domains and Key Questions

July 11, 2024

July 12, 2024

Day 1 Recap and Day 2 Overview

ERIN KING: All right. It is 12:01 so we'll go ahead and get started. And so on behalf of the Co-Chairs and the NIMH Planning Committee, I'd like to welcome you back to day two of the NIMH Placebo Workshop, Translational Research Domains and Key Questions. Before we begin, I will just go over our housekeeping items again. So attendees have been entered into the workshop in listen-only mode with cameras disabled. You can submit your questions via the Q&A box at any time during the presentation. And be sure to address your question to the speaker you would like to have respond.

For more information on today's speakers, their biographies can be found on the event registration website. If you have technical difficulties hearing or viewing the workshop, please note these in the Q&A box and our technicians will work to fix the problem. And you can also send an e-mail to [email protected]. And we'll put that e-mail address in the chat box for you. This workshop will be recorded and posted to the NIMH event web page for later viewing.

Now I would like to turn it over to our workshop Co-Chair, Dr. Cristina Cusin, for today's introduction.

CRISTINA CUSIN: Thank you so much, Erin. Welcome, everybody. It's very exciting to be here for this event.

My job is to provide you a brief recap of day one and to introduce you to the speakers of day two. Let me share my slides.

Again, thank you to the amazing Planning Committee. Thanks to their effort, we think this is going to be a success. I learned a lot of new information and a lot of ideas for research proposals and research projects from day one. Very briefly, please go and watch the videos. They are going to be uploaded in a couple of weeks if you missed them.

But we had an introduction from Tor, my Co-Chair. We had an historic perspective on clinical trials from the industry regulatory perspective. We had the current state from the FDA on placebo.

We had an overview of how hard it is to provide the right sham for device-based trials, and the challenges for TMS. We saw some new data on the current state of placebo in psychosocial trials and what the equivalent of a placebo pill is for psychosocial trials, and a social neuroscience approach to placebo analgesia. We have come a long way from snake oil, and we are still trying to figure out what placebo is.

Tor, my Co-Chair, presented some data on the neurocircuitry underlying the placebo effect, and how the placebo response is a mixture of different elements, including regression to the mean, sampling bias, selective attrition in human studies, the natural history of illness, and the placebo effect per se, which can be related to expectations, context, learning and interpretation.

We have seen a little bit about the impact on clinical trial design, and how we know that something really works, whatever this "it" is. And why does the placebo effect even exist? It's a fascinating idea that placebo exists as a predictive control, to anticipate threats and the opportunity to respond in advance, and to provide causal inference, a constructed perception to infer the underlying state of body and world.

We had a historical perspective: Ni Aye Khin and Mike Detke provided an overview of 25 years of randomized controlled trials, drawing on data mining of major depressive disorder and schizophrenia trials, and the lessons we have learned.

We saw strategies, both historical and novel, to decrease the placebo response in clinical trials, and their results: starting from trial design, with SPCD, a lead-in placebo phase, and flexible dosing; the use of different scales; statistical approaches like last observation carried forward or MMRM; centralized ratings, self-ratings, and computer ratings for different assessments; and further issues in clinical trials related to patient selection and professional patients.

Last, but not least, the dream of finding biomarkers for psychiatric conditions and of tying clinical response to biomarkers. And we saw how difficult it is to compare more recent studies with studies that were started in the '90s.

We had the FDA perspective from Tiffany Farchione, with placebo being a huge issue for the FDA. The discussion toward the end of the day, especially, was on how to blind psychedelics.

We have seen an increasing placebo response rate in randomized controlled trials, in adolescents as well, and the considerations from the FDA about novel design models in collaboration with industry. We had examples of drugs approved for other, non-psychiatric disorders, which made me realize how little we know about the true pathophysiology of psychiatric disorders, likely also heterogeneous conditions.

It made me very jealous of other fields, because they have objective measures: they have biology, histology, imaging, lab values. We are far behind, and we are not really able to explain to our patients why our medications are supposed to work or how they really work.

We heard from Holly Lisanby and Zhi-De Deng about the sham problem, the difficulty of producing the right sham for each type of device, because most devices have auxiliary effects that are separate from the clinical effect, like the noise or the scalp stimulation of TMS.

And it's critical to obtain true blinding and to separate sham from verum. We saw how, in clinical trials for devices, patient expectancy, the high-tech environment, and prolonged contact with clinicians and staff may play a role. And we saw how difficult it is to develop the best possible sham for TMS and tDCS studies. It's really complicated, and it's also difficult to compare published studies in meta-analyses because they have used very different types of sham. Not all shams are created equal, and some of them could have been biologically active, thereby invalidating the results or making the study uninformative.

Then we moved on to another fascinating topic with Dr. Rief and Dr. Atlas: what is the impact of psychological factors when you're studying a psychological intervention? Expectations, specific or nonspecific factors in clinical trials, and what is the interaction between those factors?

We also learned about the potential nocebo effect of standard medical care, or of being on a wait list versus being in the active arm of a psychotherapy trial, and the fact that we are not accurately measuring the side effects of psychotherapy itself. And we heard a fascinating talk about the neurocircuitry mediating the placebo effect: salience, affective value, cognitive control. And how perception of the provider, of his or her warmth and competence, and other social factors can affect response and placebo response, and can induce bias in the evaluation of the acute pain of others. Another very interesting field of study.

From the perspective of a clinician, and of someone who conducts clinical trials, all this was extremely informative, because in many cases, no matter how good the treatment is, our patients have severe psychosocial stressors. They have difficulty accessing food, treatment, transportation, or they live in an extremely stressful environment. So disentangling the psychosocial factors from the treatment and from the biology is going to be critical to figuring out how best to treat our patients.

And there is so much more work to do. Each of us approaches the placebo topic from a different research perspective, and like the blind men trying to understand what an elephant is, we have to talk to each other, we have to collaborate, and we have to understand better the underlying biology and the different aspects of the placebo phenomenon.

And this leads us to the overview of day two. We are going to hear about more topics that are just as exciting: the placebo and nocebo effects and other predictive factors in the laboratory setting; the genetics of the placebo response in clinical trials; and more on the physiological, psychological, and neural mechanisms of analgesia. And after a brief break around 1:30, we are going to hear about novel biological and behavioral approaches to the placebo effect.

We are going to hear about brain mapping, about other findings from imaging, and about preclinical modeling; there were some questions yesterday about animal models of placebo. And last, but not least, please stay around, because in the panel discussion we are going to tackle some of your questions. We are going to have two wonderful moderators, Ted Kaptchuk and Matthew Rudorfer, and all of the panelists from yesterday and today are going to be present. So please stay with us and ask questions. We'd love to see more challenges for our speakers. Thank you so much.

Now we're going to move on to our first speaker of the day. If I am correct according to the last -- Luana.

Measuring & Mitigating the Placebo Effect

LUANA COLLOCA: Thank you very much, Cristina. First, I would love to thank the organizers. This is a very exciting opportunity to raise awareness about this important phenomenon for clinical trials and clinical practice.

And today I wish to give you a very brief overview of the psychoneurobiological mechanisms of placebo and nocebo, a description of some pharmacological studies, and a little bit of information on social learning, a topic that was mentioned a little bit yesterday. And finally, the translational part: can we translate what we learn from mechanistic approaches to placebo and nocebo in terms of disease and symptomatology, and eventually find predictors? That is the bigger question.

So we learned yesterday that placebo effects are generated by verbal suggestion ("this medication has strong antidepressant effects"); by prior therapeutic experience, merely taking a medication weeks or days before it is substituted with a placebo or sham treatment; by observation of a benefit in other people; by contextual and treatment cues; and by interpersonal interactions.

Especially in the field of pain, where we can simulate nociception and painful experience in the laboratory setting, we have learned a lot about placebo-related modulation. In particular, expectation can produce activation of parts of the brain like the frontal areas, the nucleus accumbens, and the ventral striatum. And this kind of mechanism can generate descending modulation that makes the painful nociceptive stimulus less intense.

The experience of analgesia at the level of pain mechanisms translates into a modulation, a reduction of pain intensity, but most importantly of pain unpleasantness and the affective components of pain. I will show you today some information about the psychological factors, demographic factors, and genetic factors that can be predictive of placebo effects in the context of pain.

On the other hand, there is growing interest in nocebo effects, the negative counterpart of this phenomenon. When we talk about nocebo effects, we refer to an increase or worsening of outcomes and symptoms related to negative expectations; prior negative therapeutic experience; observing a negative outcome in others; or even mass psychogenic modeling, such as some nocebo-related responses during the pandemic. Also treatment leaflets, with their descriptions of all the side effects related to a medication; patient-clinician communication; the informed consent, where we list all the side effects of a procedure or medication; as well as contextual cues in clinical encounters.

And importantly, internal factors like emotion, mood, maladaptive cognitive appraisal, negative valence, personality traits, somatosensory features, and omics can be predictive of the worsening of symptoms and outcomes related to placebo and nocebo effects. In terms of nocebo, very briefly, there is again a lot of attention on brain imaging, with beautiful data showing that the brainstem, the spinal cord, and the hippocampus play a critical role during nocebo hyperalgesic effects.

And importantly, we have learned about placebo and nocebo through different approaches, including brain imaging, as we saw yesterday, but also pharmacological approaches. We started from realizing that placebo effects are really neurobiological effects, using agonists or antagonists.

In other words, we can use a drug and then mimic the action of that drug when we replace it with a saline solution, for example. In the cartoon here, you can see a brief pharmacological conditioning with apomorphine, a dopamine agonist. After three days of administration, apomorphine was replaced with saline solution in the operating room, to let us see whether we could mimic the effects of apomorphine at the level of the neuronal response.

So, in brief, these are patients undergoing implantation of deep brain stimulation electrodes in the subthalamic nucleus. You can see here the electrode reaching the subthalamic nucleus: after crossing the thalamus, the zona incerta, the STN, and the substantia nigra, the surgeon localizes the area of stimulation. Because we have two subthalamic nuclei, we can use one as control and the other as target to study, in this case, the effects of saline solution given after three days of apomorphine.

What we found was that in those people who responded, there was a consistent reduction of clinical symptoms. As you can see here: on the UPDRS, a common scale to measure rigidity in Parkinson's; in the frequency of discharge at the level of the neurons; and in self-perception, with patients saying things like "I feel like after Levodopa, I feel good." This feeling good translated into less rigidity and less tremor in the surgical room.

On the other hand, some participants didn't respond. Consistently, we found no clinical improvement, no difference at the level of single-unit activity, and no self-perception of a benefit. These effects started to trigger the question: why do some people respond to placebo after pharmacological conditioning while others don't? I will try to address this question in the second part of my talk.

On the other hand, we have learned a lot about the endogenous modulation of pain and placebo effects by using, in this case, an antagonist. The goal in this experiment was to create a painful sensation through a tourniquet. Week one, no treatment. Week two, we pre-injected healthy participants with morphine. Week three, the same morphine. And week four, we replaced morphine with placebo.

And you can see that placebo increased pain tolerance. This was not a carryover effect; in fact, the control at week five showed no differences. Part of the participants were pre-injected with the antagonist naloxone; when we use naloxone at a high dose, we can block the opioid delta and kappa receptors. You can see that pre-injecting naloxone blocks placebo analgesia, this morphine-like effect related to placebo given after morphine.

We then started to consider this phenomenon: is this a way of tapering opioids? We called this sort of drug-like effect the dose-extending placebo. The idea is that if we use a pharmacological treatment, morphine or apomorphine as I showed you, and then replace the treatment with a placebo, we can create a pharmacological memory, and this can translate into a clinical benefit. Therefore, the dose-extending placebo can be used to extend the benefit of the drug, but also to reduce the side effects related to the active drug.

In particular, for placebo given after morphine, you can see on this graph that the effect is similarly strong whether we repeat the morphine one day apart or one week apart. Interestingly, this is the best model to be used in animal research.

Here at the University of Maryland, in collaboration with Todd Degotte, we created a model of anhedonia in mice, and we conditioned the animals with ketamine. The goal was to replace ketamine with a placebo. There are several controls, as you can see, but what is important for us: we conditioned the animals with ketamine in weeks one, three, and five, and then we substituted ketamine with saline along with the CS; the conditioned stimulus was a low light. And we wanted to compare this with an injection of ketamine given at week seven.

As you can see here, ketamine of course induced a benefit as compared to saline. But interestingly, when we compared ketamine at week seven with saline replacing the ketamine, we found no difference, suggesting that even in animals, in mice, we were able to create drug-like effects, in this case a ketamine antidepressant-like placebo effect. These effects are also dimorphic, in the sense that we observed this in males but not in females.

Another approach using an agonist, besides the apomorphine in Parkinson's patients that I mentioned, was to use vasopressin and oxytocin to increase placebo effects. In this case, we used verbal suggestion, which in our experience, especially with healthy participants, tends to create very small placebo analgesic effect sizes. We knew from the literature that there is a dimorphic effect for these hormones. So we gave people intranasal vasopressin, saline, low-dose oxytocin, or no treatment. You can see there was a drug effect in women, whereby vasopressin boosted placebo analgesic effects, but not in men, where we found an effect of the manipulation but not a drug effect.

Importantly, vasopressin affects dispositional anxiety as well as cortisol, and there is a negative correlation between anxiety and cortisol in relationship to vasopressin-induced placebo analgesia.

Another question was: do we need medication to study placebo in the laboratory setting, or can we study placebo and nocebo without any medication? One example is to manipulate the intensity of the painful stimulation. We used thermal stimulation tailored to three different levels: 80 out of 100 on a visual analog scale, 50, or 20, as you can see from the thermometer.

We also combined the level of pain with a face. So first, to emphasize, there are three levels of pain, and participants see an anticipatory cue just before the thermal stimulation. Ten seconds of thermal stimulation provide the experience of analgesia with the green cue and of hyperalgesia with the red, as compared to the control, the yellow condition.

The next day, we moved to the fMRI. The goal was to understand to what extent expectation is relevant for placebo and nocebo effects. We mismatched what participants anticipated and had learned the day before. But also, as you can see, we tailored the intensity to the same identical level: 50 for each participant.

We found that when expectation matched the cues, the anticipatory cue and the face, there were strong nocebo and placebo effects. You can see in red that although the levels of pain were identical, participants perceived the red-related stimuli as higher in intensity, and the green-related stimuli as lower, when compared to the control. By mismatching what they expected with what they saw, we blocked the placebo effects completely, and still the nocebo persisted.

So far I have shown you that we can use conditioning in animals and in humans to create placebo effects, and also verbal suggestion, as in the vasopressin example. Another important model for studying placebo effects in the laboratory is social observation. We see something happen to other people; we are not told what we are seeing, and we don't experience the thermal stimulation ourselves. That is the setting: a demonstrator receiving painful or non-painful stimulation, and someone observing that stimulation.

When we tested the observers, you can see, the levels of pain were tailored to the same identical intensity, and these were the effects. In 2009, when we first launched this line of research, this was quite surprising. We didn't anticipate that merely observing someone else could boost expectations and create this long-lasting analgesic effect. This drew our attention to the brain mechanisms: what is so important during this transfer of placebo analgesia?

So we scanned participants while they were observing, this time, a video of a demonstrator receiving a control cream and a placebo cream. We counterbalanced the colors and controlled for many variables. During the observation of another person, while the observers were not being stimulated and didn't receive the cream, there is an activation of the left and right temporoparietal junction, a differential activation of the amygdala with the two creams, and, importantly, an activation of the periaqueductal gray, which, as I showed you, is critical in modulating placebo analgesia.

Afterwards, we applied both placebo creams, with the two different colors, and tailored the level of pain to the identical intensity. And we saw how placebo effects are generated through observation: they create strongly different expectations and anxiety. Importantly, we found that the functional connectivity between the dorsolateral prefrontal cortex and the temporoparietal junction, which was active during observation, mediated the behavioral results, suggesting that there is a mechanism here that may be relevant to exploit in clinical trials and clinical practice.

From this, I wish to switch to a more translational approach. Can we replicate these results, observed in healthy participants for nociception, in people suffering from chronic pain? We chose as a population facial pain, an orphan disease with no consensus on how to treat it, which also affects the youngest, including children.

Participants came to the lab, and as you can see, we used the same identical thermal stimulation, the same electrodes, the same conditioning that I showed you. We measured expectations before and after the manipulation. The very first question was: can we achieve a similar distribution of placebo analgesia in people suffering chronically from pain and comorbidities? You can see that we found no difference between TMD patients and controls. Also, we observed that some people responded to the placebo manipulation with hyperalgesia; we call this a nocebo effect.

Importantly, these effects are less pronounced than the benefit, which can sometimes be extremely strong, as shown in both healthy controls and TMD. We run our experiments in a beautifully ecological environment where we are diverse: the lab, the experimenters, as well as the population we recruit have a very good distribution of race and ethnicity.

So the very first question was that we needed to control for this factor, and this turned out to be a beautiful model to study race and ethnicity in the lab. When chronic pain patients were studied by an experimenter of the same race, in dark blue, we observed a larger placebo effect. This tells us something about disparities in medicine. In fact, we didn't see these effects in our controls.

In chronic pain patients, we also saw a sex concordance influence, but in the opposite sense: in women studied by a male experimenter, placebo effects were larger. Such an effect was not seen in men.

The other question we had was about the contribution of psychological factors. At that stage, there were many different surveys used by different labs, each tapping different domains, and in some studies there were trends, with observed effects of traits like neuroticism or positive and negative affect. But no single survey, and we now have a beautiful meta-analysis on this, turns out to be predictive of placebo effects on its own.

So we used the RDoC model suggested by the NIMH, and with a sophisticated approach we were able to combine these measures into four valences: emotional distress, reward-seeking, pain-related fear and catastrophizing, and empathy and openness. These four valences were then interrelated to predict placebo effects. You can see that emotional distress is associated with a lower magnitude of placebo effects, extinction over time, and a lower proportion of placebo responsivity.

Also, people who tend to catastrophize display a lower magnitude of placebo effects. In terms of expectations, it is also interesting: patients expect to benefit; they have this desire for a reward. And those who are more open and characterized by empathy tend to form larger expectations. But this doesn't necessarily translate into larger placebo effects, hinting that the two phenomena are not necessarily linked.

Because we study chronic pain patients, they come with their own baggage of disease comorbidities. Dr. Wang in the department looked at insomnia: people suffering from insomnia tend to have lower placebo analgesic effects, along with those who have poor sleep patterns, suggesting that clinical factors can be relevant when we wish to predict placebo effects.

Another question we addressed was whether simple SNPs, single nucleotide polymorphism variants in three regions that have been published, can be predictive of placebo effects. In particular, I'm referring to OPRM1, the gene linked to endogenous opioids; COMT, linked to endogenous dopamine; and FAAH, linked to endogenous cannabinoids. We will learn more about these in the next talk.

And you can see that there is a prediction. These are ROC curves, which can be interesting. We modeled all participants, with verbal suggestion alone or with conditioning. There isn't really a huge difference between using one SNP versus two or three. What truly had an impact, and made the prediction stronger, was accounting for the procedure we used to induce placebo effects, whether suggestion alone versus conditioning. When we added the manipulation, the prediction became stronger.
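[Editor's note: the ROC logic described here can be sketched with a toy computation. All labels and scores below are invented for illustration, not study data; the "genotype score" and "procedure indicator" are hypothetical stand-ins. AUC is the probability that a randomly chosen placebo responder receives a higher predicted score than a randomly chosen non-responder, with ties counted as half.]

```python
# Toy ROC AUC for predicting placebo responders from a score (invented data).
def auc(scores, labels):
    """AUC = P(score of a random responder > score of a random non-responder)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 1, 0]              # 1 = placebo responder (made up)
snp_score = [2, 3, 1, 1, 0, 2, 3, 0]           # genotype-only score (made up)
snp_plus_procedure = [4, 5, 2, 1, 0, 2, 5, 1]  # adds a conditioning indicator

print(auc(snp_score, labels), auc(snp_plus_procedure, labels))  # 0.875 0.96875
```

On these made-up numbers, adding a conditioning-versus-suggestion term to the genotype score raises the AUC, mirroring the point that accounting for the induction procedure strengthens the prediction.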

More recently, we started looking at the gene expression, the transcriptomic profile, associated with placebo effects. From the 402 participants, we randomly selected 54 and extracted their transcriptomic profiles. We also selected a validation cohort to see if we could replicate what we discovered in terms of mRNA sequencing. We found over 600 genes associated with placebo effects in the discovery cohort. In blue are the genes downregulated, and in red, the upregulated.

We chose the top 20 genes and did a PCA to validate them. We found that six of them replicated, and they include the genes you see here. SELENOM was particularly interesting for us, as well as PI3, CCDC85B, FBXL15, HAGHL, and TNFRSF4. So with this --

LUANA COLLOCA: Yes, I'm done. With this, the goal is probably, one day, with AI and other approaches, to combine clinical, psychological, brain imaging, and other characteristics and behaviors to predict the level of response to placebo. That may guide us in clinical trials and clinical practice to tailor treatment. Therefore, the placebo and nocebo biological responses can to some extent be predicted, and identifying those who respond to placebo can help tailor drug development and symptom management.

Thank you to my lab, to all of you, and to the funding agencies. And finally, for those who would like to read more about placebo, this book is available for free download, and it includes many of the speakers from this two-day event as contributors. Thank you very much.

CRISTINA CUSIN: Thank you so much, Luana. It was a wonderful presentation. We have one question in the Q&A.

Elegant studies demonstrating powerful phenomena. Two questions: Is it possible to extend or sustain the placebo-boosting effect? And what is the dose-response relationship for placebo or nocebo?

LUANA COLLOCA: Great questions. The goal is to boost placebo effects, and one way, as I showed, was for example using intranasal vasopressin. As for extending the placebo effect, we know that we need a minimum of three or four administrations of the drug before this sort of pharmacological memory is established. And the longer the administration of the active drug before we replace it with placebo, the larger the placebo effect.

For nocebo, with our collaborators we have shown a similar relationship. So again, the longer we condition, the stronger the placebo or nocebo effects. Thank you so much.

CRISTINA CUSIN: I wanted to ask, do you have any theory or interpretation about the potential to transmit a placebo response from person to person, between the demonstrator and the observer? Do you have any interpretation of this phenomenon?

LUANA COLLOCA: It is not completely new in the literature. There are a lot of studies showing that we can transfer pain in both animal models and humans.

So the transfer of analgesia is a natural continuation of that line of research. Mimicking things that we see in other people is the most basic form of learning when we grow up. And from an evolutionary point of view, it protects us from predators; for animals and for us as human beings, observing is a very good mechanism for acquiring behaviors, and in this case placebo effects. Thank you.

CRISTINA CUSIN: Okay. We will have more time to ask questions.

We are going to move on to the next speaker. Dr. Kathryn Hall.

KATHRYN HALL: Thank you. Can you see my screen okay? Great.

So I'm going to build on Dr. Colloca's talk to really kind of give us a deeper dive into the genetics of the placebo response in clinical trials.

So I have no disclosures. As we heard, and as we have been hearing over the last two days, there are physiological drivers of placebo effects, whether opioid signaling or dopamine signaling, and these can be potentiated by the administration of saline pills, saline injections, or sugar pills. What's really interesting here, I think, is this discussion about how drugs impact the drivers of placebo response. In particular, we heard yesterday about naloxone and proglumide.

What I really want to do today is think about the next layer: how do the genes that shape our biology, and that drive or influence those physiological drivers of placebo response, A, modify our placebo response? But also, how are they modifying the effect of the drugs and the placebos on this basic network?

And if you think about it, we really don't know much about all of the many interactions that are happening here. And I would actually argue that it goes even beyond genetic variation to other factors that lead to heterogeneity in clinical trials. Today I'm going to really focus on genes and variations in the genome.

So let's go back, so we have the same terminology. I'm going to be talking about the placebo response in trials. We saw this graph, or a version of it, yesterday: in clinical trials, when we want to assess the effect of a drug, we subtract the outcomes in the placebo arm from the outcomes in the drug treatment arm. And there is a basic assumption here that the placebo response is additive to the drug response.

What I want to do today is really challenge that assumption, challenge that expectation. Because I think we have enough literature, and enough studies that have already been done, demonstrating that things are not as simple as that, and that we might be missing a lot with this basic averaging and subtracting.

So the placebo response is the bold line there, which includes the placebo effects we have been focusing on here. But it also includes the natural history of the disease or condition, and phenomena such as statistical regression to the mean, blinding and bias, and Hawthorne effects. We lump all of those together in the placebo arm of the trial and subtract the placebo response from the drug response to really understand the drug effect.
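[Editor's note: the subtraction logic described here can be written out in a few lines; the improvement scores below are hypothetical, purely to make the additivity assumption concrete.]

```python
# Hypothetical symptom-improvement scores illustrating the standard trial
# arithmetic: estimated drug effect = mean(drug arm) - mean(placebo arm).
drug_arm = [12, 9, 14, 11, 10, 13]  # improvement under drug (invented)
placebo_arm = [7, 5, 8, 6, 7, 9]    # improvement under placebo (invented)

def mean(xs):
    return sum(xs) / len(xs)

# Each arm's mean bundles together placebo effect, natural history,
# regression to the mean, and Hawthorne effects.
drug_response = mean(drug_arm)        # 11.5
placebo_response = mean(placebo_arm)  # 7.0
drug_effect = drug_response - placebo_response  # 4.5

print(drug_response, placebo_response, drug_effect)
```

The subtraction isolates the drug effect only if those bundled components are identical in both arms and simply add to the drug's pharmacological effect, which is exactly the assumption being challenged here.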

So one way to ask how genes affect this is to look at candidate genes. As Dr. Colloca pointed out, having done some very elegant studies in this area, genes like COMT, opioid receptor genes like OPRM1, and the FAAH endocannabinoid signaling genes are all candidate genes that we can examine in clinical trials, asking: do these genes modify what we see in the placebo arm of trials?

We did some studies on COMT, and I want to show you those so you can get a sense of how genes can influence placebo outcomes. COMT is catechol-O-methyltransferase, an enzyme that metabolizes dopamine, which, as you saw, is important in mediating the placebo response. COMT also metabolizes epinephrine, norepinephrine, and catechol estrogens. So the fact that COMT might be involved in the placebo response is really interesting, because it might be doing more than just metabolizing dopamine.

So we asked: what happens if we look at COMT genetic variation in clinical trials of irritable bowel syndrome? Working with Ted Kaptchuk and Tony Lembo at Beth Israel Deaconess Medical Center, we did just that: we looked at COMT effects in a randomized clinical trial of irritable bowel syndrome. What we saw was that for the polymorphism rs4680, people with the weak version of the COMT enzyme actually had more placebo response; these are the met/met people shown here by this arrow. And the people with less dopamine, because for this polymorphism the enzyme works more efficiently, had less of a placebo response in one of the treatment arms. We would later replicate this in another clinical trial that concluded in 2021.

To give you a sense: as you can see, we started off somewhat limited by what was available in the literature. So we wanted to expand on that, to say more about genes that might be associated with the placebo response. We went back and found 48 studies in the literature in which a gene was examined that modified the placebo response.

When we mapped those to the interactome, which is this constellation of all gene products and their physical interactions, we saw that the placebome, the placebo module, had certain very interesting characteristics. Two of those characteristics that are relevant here today are that these putative placebo genes overlapped with the targets of drugs, whether analgesics, antidepressive drugs, or anti-Parkinson's agents: placebo genes overlapped with drug treatment targets.

They also overlapped with disease-related genes. What that suggests is that when we look at the outcomes of a clinical trial, there might be a lot more going on that we are missing.

Let's just think about that for a minute. On the left is what we expect: we are going to see an effect of the drug, it's going to be greater than the effect of the placebo, and that difference is what we want, the drug effect. But what we often see is on the right, where there is really no difference between drug and placebo. So we are left to scratch our heads. Many companies go out of business; many sections of companies close. And, quite frankly, patients are left in need. Money is left on the table because we can't discern between drug and placebo.

And I think it's interesting that this has been a theme since yesterday: oh, if only we had better physiological markers, or better genes that targeted physiology, then maybe we could see a difference and move forward with our clinical trials.

But what I'm going to argue today is actually what we need to do is to think about what is happening in the placebo arm, what is contributing to the heterogeneity in the placebo arm, and I'm going to argue that when we start to look at that compared to what is happening in the drug treatment arm, oftentimes -- and I'm going to give you demonstration after demonstration. And believe me, this is just the tip of the iceberg.

What we are seeing is that there are differential effects by genotype in the drug treatment arm and the placebo treatment arm, such that if you average out what's happening in these drug and placebo arms, you would basically see that there is no difference. But actually there are some people benefiting from the drug but not placebo, and conversely, some benefiting from placebo but not drug. It averages out to no difference.
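To make that concrete, here is a toy simulation -- the genotype labels mirror the COMT examples in this talk, but the effect sizes and sample sizes are entirely hypothetical, not data from any trial discussed. Two genotype subgroups show a full crossover (val/val benefits from drug, met/met from placebo), yet the pooled arm means are indistinguishable:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # participants per genotype per arm (hypothetical)

# Hypothetical effect sizes: val/val benefits from drug, met/met from placebo.
effects = {("valval", "drug"): 1.0, ("valval", "placebo"): 0.0,
           ("metmet", "drug"): 0.0, ("metmet", "placebo"): 1.0}

def arm_mean(arm):
    # Pool both genotypes, as a standard trial analysis would.
    scores = np.concatenate([
        effects[(g, arm)] + rng.normal(0, 1, n) for g in ("valval", "metmet")
    ])
    return scores.mean()

drug, placebo = arm_mean("drug"), arm_mean("placebo")
# Pooled, the arms look identical even though each genotype shows a
# one-standard-deviation crossover effect.
print(round(drug - placebo, 2))  # near zero
```

The point of the sketch is only that a pooled drug-minus-placebo contrast is blind to subgroup crossovers; a genotype-stratified analysis of the same simulated data would recover both effects.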

Let me give you some examples. We had this hypothesis, and we started to look around to see if we could find partners who had already done clinical trials and happened to have genotyped COMT. And what we saw in this clinical trial for chronic fatigue syndrome, where adolescents were treated with clonidine, was that when we looked in the placebo arm, the val/val patients -- this is the high-activity COMT genotype -- had the largest increase in the number of steps they were taking per week. In contrast, the met/met people, the people with the weaker COMT, had almost no change in the number of steps they were taking per week.

So you would look at this and you would say, oh, the val/val people were the placebo responders and the met/met people didn't respond to placebo. But what we saw when we looked into the drug treatment arm was very surprising. We saw that clonidine literally erased the effect that we were seeing in placebo for the val/val participants in this trial. And clonidine basically was having no effect on the heterozygotes, the val/mets or on the met/mets. And so this trial rightly concluded that there was no benefit for clonidine.

But if they hadn't taken this deeper look at what was happening, they would have missed that clonidine may potentially be harmful to people with chronic fatigue in this particular situation. What we really need to do, I think, is look not just in the placebo arm or just in the drug treatment arm, but in both arms, to understand what is happening there.

And I'm going to give you another example. And, like I said, the literature is replete with these examples. On the left is an example from a drug that was tested on cognitive scales, Tolcapone, which actually targets COMT. And what you can see here again on the left is differential outcomes in the placebo arm and in the drug treatment arm, such that if you were to just average these two you would not see the differences.

On the right is a really interesting study looking at alcohol use among people with alcohol use disorder -- percent drinking days. And they looked at both COMT and OPRM1. And this is what Dr. Colloca was just talking about: there seemed to be not just gene-placebo-drug interactions but gene-gene-drug-placebo interactions. This is a complicated space. And I know we like things to be very simple. But I think what these data are showing is that we need to pay more attention.

So let me give you another example, because you could argue, okay, those were subjective outcomes. Let's take a look at the Women's Health Study -- arguably one of the largest studies of aspirin versus placebo in history. 30,000 women were randomized to aspirin or placebo. And lo and behold, after 10 years of following them, the p value was nonsignificant. There was no difference between drug and placebo.

So we went to this team, and we asked them: could we look at COMT? Because we had a hypothesis that COMT might modify the outcomes in the placebo arm and potentially differentially modify the outcomes in the drug treatment arm. You might be saying that can't have anything to do with the placebo effect, and we completely agree. If we did find it, it would suggest that there might be something about the placebo response that is related to natural history. And I'm going to show you what we found.

So when we compared the outcomes in the placebo arm to the aspirin arm, what we found was that the met/met women randomized to placebo had the highest rates of cardiovascular disease of everybody -- meaning the highest rates of myocardial infarction, stroke, revascularization, and death from a cardiovascular cause. In contrast, the met/met women on aspirin had benefit: a statistically significant reduction in these rates.

Conversely, the val/val women on placebo did the best, but the val/val women on aspirin had the highest rates, had significantly higher rates than the val/val women on placebo. What does this tell us? Well, we can't argue that this is a placebo effect because we don't have the control for placebo effects, which is a no treatment control.

But we can say that these are striking differences that, like I said before, if you don't pay attention to them, you miss the point that there are subpopulations for benefit or harm because of differential outcomes in the drug and placebo arms of the trial.

And so I'm going to keep going. There are other examples of this. We also partnered with a group at Brigham and Women's Hospital that had done the CAMP study, the Childhood Asthma Management Program. And in this study, they randomized patients to placebo, budesonide, or nedocromil for five years and studied asthma outcomes.

Now, what I was showing you previously were candidate gene analyses. This was a GWAS. We wanted to be agnostic and ask: are there genes that modify the placebo outcomes, and are those outcomes different when we look in the drug treatment arm? And so that little inset is a picture of all of the genes that were looked at in the GWAS. And we had a borderline genome-wide significant hit called BBS9. And when we looked at BBS9 in the placebo arm, those white boxes at the top are the baseline levels of coughing and wheezing among these children. And in gray are their levels of coughing and wheezing at the end of treatment.

And what you can see here is that participants with the AA genotype were the ones that benefited from placebo, whereas for the patients with the GG genotype there was really no significant change.

Now, when we looked in the drug treatment arms, we were surprised. The outcomes were the same, of course, at baseline -- everybody starts out about the same. But you can see the differential responses depending on the genotype. And so, again, by not paying attention to these gene-drug-placebo interactions, we miss another story that is happening here among our patients.

Now, I added this one because it is important to realize that this is not just about gene-drug-placebo interactions. There are also epigenetic effects. And so here is the same study on alcohol use disorder that I showed earlier. They didn't just stop at looking at the polymorphisms, the genetic variants. This team also went so far as to look at methylation of OPRM1 and COMT.

So methylation is basically when the promoter region of a gene is blocked because it has a methyl group on some of the nucleotides in that region, so you can't make the protein as efficiently. And if you look on the right, you can see the three models that they looked at. They looked at other genes as well -- they also looked at SLC6A3, which is involved in dopamine transport. And what you can see here is that there are significant gene-by-group-by-time interactions for all three of these candidate genes.

And even more fascinating, there are gene-by-gene interactions. Basically, it is saying that you cannot say what the outcome is going to be unless you know the patient's or the participant's COMT or OPRM1 genotype and also how methylated the promoter regions of these genes are. So this makes for a very complicated story. And I know we like very simple stories.
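A gene-by-group-by-time interaction of the kind just described is, at its core, a three-way contrast: does the drug-versus-placebo change over time itself differ by genotype? Here is a minimal sketch with made-up group means (the outcome label echoes the percent-drinking-days example, but every number is illustrative, not from the study):

```python
# Hypothetical mean outcomes (e.g., percent drinking days) by
# genotype x arm x time; all numbers are illustrative only.
means = {
    ("metmet", "drug"):    (60, 30),  # (baseline, end of treatment)
    ("metmet", "placebo"): (60, 55),
    ("valval", "drug"):    (60, 58),
    ("valval", "placebo"): (60, 35),
}

def change(genotype, arm):
    pre, post = means[(genotype, arm)]
    return post - pre

# Gene x group x time interaction contrast: the drug-vs-placebo change
# in one genotype minus the drug-vs-placebo change in the other.
interaction = (change("metmet", "drug") - change("metmet", "placebo")) \
            - (change("valval", "drug") - change("valval", "placebo"))
print(interaction)  # nonzero -> genotype modifies the treatment effect
```

In a real analysis this contrast would be estimated in a mixed model with a genotype x arm x time term and tested against its standard error; the sketch only shows what the interaction quantity is.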

But I'm just adding to the picture that we had before, to say that it's not just the genes' polymorphisms; as Dr. Colloca just elegantly showed, it is transcription as well as methylation that might be modifying what is happening in the drug treatment arm and the placebo treatment arm. And to add to this, it might also be about the natural history of the condition.

So BBS9 is actually a gene that is involved in the formation and activity of cilia, which is really important for breathing in the nasal canal. And so you can see that it is not just about what's happening in the moment when you are running the placebo or drug clinical trial; the genes might also be modifying where the patient starts out and how the patient develops over time. So, in essence, we have a very complicated playground here.

But I think I have shown you that genetic variation, whether it is polymorphisms in the gene, gene-gene interactions or epigenetics or all of the above can modify the outcomes in placebo arms of clinical trials. And that this might be due to the genetic effects on placebo effects or the genetic effects on natural history. And this is something I think we need to understand and really pay attention to.

And I also think I've shown you -- and these are just a few examples; there are many more -- that genetic variation can differentially modify drugs and placebos, and that these potential interactive effects really challenge the basic assumption of additivity that I would argue we have held for far too long and really need to rethink.

TED KAPTCHUK: (Laughing) Very cool.

KATHRYN HALL: Hi, Ted.

TED KAPTCHUK: Oh, I didn't know I was on.

KATHRYN HALL: Yeah, that was great. That's great.

So, in summary: can we use these gene-placebo-drug interactions to improve clinical trials? Can we change our expectations about what is happening? And perhaps, as we have been saying for the last two days, we don't need new drugs with clear physiological effects; what we need is to understand drug and placebo interactions, how they impact subpopulations, and how they can reveal who benefits or is harmed by therapies.

And finally, as we started to talk about in the last talk, can we use drugs to boost placebo responses? Perhaps some drugs already do. Conversely, can we use drugs to block placebo responses? And perhaps some drugs already do.

So I just want to thank my collaborators. There was Ted Kaptchuk, one of my very close mentors and collaborators. And really, thank you for your time.

CRISTINA CUSIN: Thank you so much. It was a terrific presentation. And definitely the captured laugh from Ted was one of the best spontaneous laughs.

We have a couple of questions coming through the chat. One is about the heterogeneity of response in placebo arms. It is not uncommon to see quite a dispersion of responses in trials. As a thought experiment, if one looks at the fraction of high responders in the placebo arm, would one expect to see enrichment for some of the genetic markers of placebo response?

KATHRYN HALL: I absolutely think so. We haven't done that. And I would argue that, you know, we have been having kind of quiet conversation here about Naloxone because I think as Lauren said yesterday that the findings of Naloxone is variable. Sometimes it looks like Naloxone is blocking placebo response and sometimes it isn't.

We need to know more about who is in that trial, right? I could have gone on and shown you that there are differences by gender, right? And so this heterogeneity that is coming into clinical trials is not just coming from the genetics. It's coming from race, ethnicity, gender, population -- like, are you in Russia or China or the U.S. when you're conducting your clinical trial? We really need to start unpacking this and paying attention to it. And because we are not paying attention to it, I think we are wasting a lot of money.

CRISTINA CUSIN: And epigenetics is another way to consider traumatic experiences and adverse event learning. That is another component that we are not tracking accurately in clinical trials. I don't think it is one of the elements routinely collected. Especially in antidepressant clinical trials, it is just now coming to the surface.

KATHRYN HALL: Thank you.

CRISTINA CUSIN: Another question concerns the two different approaches: GWAS versus the candidate gene approach.

How do you start to think about genes that have a potential implication in neurophysiological pathways and choose candidates to test, versus taking a more agnostic GWAS approach?

KATHRYN HALL: I believe you have to do both because you don't know what you're going to find if you do a GWAS and it's important to know what is there.

At the same time, I think it's also good to test our assumptions and to replicate our findings, right? So once you do the GWAS and you have a finding -- for instance, our BBS9 finding would be amazing to replicate or to try and test in another cohort. But, of course, it is really difficult to do a whole clinical trial again. These are very expensive, and they last many years.

And so, you know, I think replication is something that is tough to do in this space, but it is really important. And I would do both.

CRISTINA CUSIN: Thank you. We got a little short on time. We are going to move on to the next speaker. Thank you so much.

FADEL ZEIDAN: Good morning. It's me, I imagine. Or good afternoon.

Let me share my screen. Yeah, so good morning. This is going to be a tough act to follow. Dr. Colloca and Dr. Hall's presentations were really elegant. So manage your expectations for mine. And, Ted, please feel free to unmute yourself, because I think your laugh is incredibly contagious, and I think we were all laughing as well.

So my name is Fadel Zeidan, I'm at UC San Diego. And I'll be discussing mostly unpublished data that we have under review, examining if and how mindfulness meditation assuages pain and whether the mechanisms supporting mindfulness meditation-based analgesia are distinct from placebo.

And so, you know, this is kind of a household slide. We are all here because we all appreciate how much of an epidemic chronic pain is, how significant it is, and how much it impacts our society and the world. And it is considered a silent epidemic because of the catastrophic and staggering cost to our society. That is largely due to the fact that the subjective experience of pain is modulated and constructed by a constellation of interactions between sensory, cognitive, and emotional dimensions, genetics -- the list can go on.

And so what we've been really focused on for the last 20 years or so is to appreciate if there is a non-pharmacological approach, a self-regulated approach that can be used to directly assuage the experience of pain to acutely modify exacerbated pain.

And to that extent, we've been studying meditation, mindfulness-based meditation. And mindfulness is a very nebulous construct. If you go from one lab to another lab to another lab, you are going to get a different definition of what it is. But obviously my lab's definition is the correct one. And so the way that we define it is awareness of arising sensory events without reaction, without judgment.

And we can develop this construct, this disposition, by practicing mindfulness-based meditation, which I'll talk about here in a minute. And we've seen -- this is an old slide -- a lot of new, converging evidence demonstrating that eight weeks of manualized mindfulness-based interventions can produce pretty robust improvements in chronic pain and opioid misuse. These are mindfulness-based stress reduction programs, mindfulness-oriented recovery enhancement, and mindfulness-based cognitive therapy, which are about eight weeks long, with two hours of formalized didactics a week and 45 minutes a day of homework.

There is yoga, mental imagery, breathing meditation, walking meditation, a silent retreat, and about a $600 tab. So although these programs are incredibly effective, they may not be reaching demographics and folks who don't have the time and resources to participate in such an intense program.

And to that extent and, you know, as an immigrant to this country I've noticed that we are kind of like this drive-thru society where, you know, we have a tendency to eat our lunches and our dinners in our cars. We're attracted to really brief interventions for exercise or anything really, pharmaceuticals, like ":08 Abs" and "Buns of Steel." And we even have things called like the military diet that promise that you'll lose ten pounds in three days without dying.

So we seemingly are attracted to these fast-acting interventions. And to this extent, we've worked for quite some time to develop a very user-friendly, very brief mindfulness-based intervention. This is an intervention of about four sessions, 20 minutes each. We remove all religious aspects, all spiritual aspects. And we really don't even call it meditation; we call it mindfulness-based mental training.

And our participants are taught to sit in a straight posture, close their eyes, and focus on the changing sensations of the breath as they arise. And what we've seen is that this repetitive practice enhances cognitive flexibility and the ability to sustain attention. And when individuals' minds drift away from focusing on the breath, they are taught to acknowledge distracting thoughts, feelings, and emotions without judging themselves or the experience, doing so by returning their attention back to the breath.

So there is really a one-two punch here: A, you're focusing on the breath and enhancing cognitive flexibility; and, B, you're training yourself not to judge discursive events, which we believe enhances emotion regulation. So, much as with physical training, we would call this mental training. Now that we have the advent of imaging, we can actually see that there are changes in the brain related to this.

But as many of you know, mindfulness is kind of like a household term now. It's all over our mainstream media. You know, we have, you know, Lebron meditating courtside. Oprah meditating with her Oprah blanket. Anderson Cooper is meditating on TV. And Time Magazine puts, you know, people on the cover meditating. And it's just all over the place.

And so these types of images and, I guess, insinuations could elicit nonspecific effects related to meditation. And for quite some time I've been trying to appreciate not whether meditation is more effective than placebo, although that's interesting, but whether mindfulness meditation engages mechanisms that are also shared by placebo. So the belief that you are meditating could elicit analgesic responses.

The majority of the manualized interventions use terms in their manuals like "the power of meditation," which I guarantee you is analgesic. To focus on the breath, we need to slow the breath down -- not explicitly, it just happens naturally. And slow breathing can also reduce pain. Facilitator attention, social support, conditioning -- all factors that are shared with other therapies and interventions, but in particular are also part of meditation training.

So the question, because of all this, is: is mindfulness meditation merely -- or not merely, after these two rich days of dialogue -- engaging processes that are also shared by placebo?

So if I apply a placebo cream to someone's calf and then put them in the scanner, versus asking someone to meditate, the chances are very high that the brain processes are going to be distinct. So we wanted to create and validate an operationally matched mindfulness meditation intervention that we coined sham mindfulness meditation. It's not sham meditation, because it is meditation -- a type of meditative practice called pranayama.

But in this intervention, we tell folks that they've been randomized to a genuine mindfulness meditation intervention. Straight posture, eyes closed. And every two to three minutes they are instructed to, quote-unquote, "take a deep breath as we sit here in mindfulness meditation." We even match the time spent giving instructions between the genuine and the sham mindfulness meditation interventions.

So the only difference between the sham mindfulness and the genuine mindfulness is that the genuine mindfulness group is taught to explicitly focus on the changing sensations of the breath without judgment. The sham mindfulness group is just taking repetitive deep, slow breaths. So if the magic part of mindfulness, the active component, is this nonjudgmental awareness, then we should be able to see disparate mechanisms between these.

And we also use a third arm, a book-listening control group -- "The Natural History of Selborne," a very boring, arguably emotionally pain-evoking book -- for four days. And this is meant to control for facilitator attention and the time elapsed in the other groups' interventions.

So we use a very high level of noxious heat to the back of the calf. And we do so because imaging is quite expensive, and we want to ensure that we can see pain-related processing within the brain. Here and across all of our studies, we use ten 12-second plateaus of 49 degrees to the calf, which is pretty painful.

And then we assess pain intensity and pain unpleasantness using a visual analog scale, where the participants just see red -- the more they pull on the algometer, the more pain they are in. But on the back, the numbers fluoresce, where 0 is no pain and 10 is the worst pain imaginable.

So pain intensity can be considered the sensory dimension of pain, and pain unpleasantness is more like -- I don't want to say pain affect -- more like the bothersome component of pain. So what we did was combine all of our studies that used the mindfulness, sham mindfulness, and book-listening control conditions, to see whether mindfulness meditation is more effective than sham mindfulness meditation at reducing pain.

We also combined two different fMRI techniques. Blood oxygen level-dependent (BOLD) signaling allows us higher temporal resolution and signal-to-noise ratio than, say, a perfusion imaging technique, and allows us to look at connectivity. However, meditation is also predicated on changes in respiration rate, which can elicit pretty dramatic breathing-related artifacts in the brain, specifically related to CO2 output.

So using a perfusion-based fMRI technique like arterial spin labeling is really advantageous as well; although it's not as temporally resolved as BOLD, it provides a direct, quantifiable measurement of cerebral blood flow.

So straight to the results. On the Y axis we have the pain ratings, and on the X axis are the book-listening control, sham mindfulness meditation, and mindfulness meditation -- these are large sample sizes. Blue is intensity and red is unpleasantness. These are the post-intervention fMRI scans, where from the first half of the scan to the second half our control participants are simply resting and pain just increases, because of pain sensitization and being in a claustrophobic MRI environment.

And you can see here that sham mindfulness meditation does produce a pretty significant reduction in pain intensity and unpleasantness, more than the control book. But mindfulness meditation is more effective than sham mindfulness and the controls at reducing pain intensity and pain unpleasantness.

There does seem to be some kind of additive component to the genuine intervention, even though the sham technique is a really easy practice.

So for folks that have maybe fatigue or cognitive deficits or just aren't into doing mindfulness technique, I highly recommend this technique, which is just a slow breathing approach, and it's dead easy to do.

Anyone who has practiced mindfulness for the first time or a few times can tell you that it can be quite difficult and -- what's the word? -- involved, right?

So what happened in the brain? These are our CBF maps from two studies that we replicated in 2011 and 2015. We found that higher activity -- higher CBF -- in the right anterior insula, which is ipsilateral to the stimulation site, and in the rostral anterior cingulate cortex/subgenual ACC, was associated with greater relief of pain intensity. In the context of pain unpleasantness, higher orbitofrontal cortical activity was associated with lower pain. And this is very reproducible: we see that greater thalamic deactivation predicts greater analgesia on the unpleasantness side.

These areas -- the right anterior insula in conjunction with other areas -- are associated with interoceptive processing, awareness of somatic sensations. And the ACC and the OFC are associated with higher-order cognitive flexibility and emotion regulation processes. And the thalamus is really the gatekeeper from the body to the brain. Nothing enters the brain unless it goes through the thalamus, except the sense of smell.

So it's really like this gatekeeper of arising nociceptive information.

So the take-home here is that mindfulness engages multiple neural processes to assuage pain. It's not just one singular pathway.

Our BOLD studies were also pretty insightful. Here we ran a PPI analysis -- a psychophysiological interaction analysis -- and this was whole-brain, to see what brain regions are associated with pain relief in the context of the BOLD technique. And we find that greater ventromedial prefrontal cortical deactivation is associated with lower pain. The vmPFC is a highly evolved area associated with higher-order processes relating to the self. It's one of the central nodes of the so-called default mode network, a network supporting self-referential processing. But in the context of the vmPFC, I like the way that Tor and Mathieu describe the vmPFC as being more related to affective meaning, and there is a really nice paper showing that the vmPFC is uniquely involved in, quote-unquote, self-ownership or subjective value. That is particularly interesting in the context of pain, because pain is a very personal experience that is directly related to the interpretation of arising sensations and what they mean to us.

And seemingly -- I apologize for the reverse inferencing here -- but seemingly, mindfulness meditation, based on our qualitative assessments as well, is reducing the ownership, the intrinsic or contextual value, of those painful sensations. That is, the pain is there, but it doesn't bother our participants as much, which is quite interesting as a manipulation.

We also ran our connectivity analysis between the contralateral thalamus and the whole brain, and we found that greater decoupling between the contralateral thalamus and the precuneus, another central node of the default mode network predicted greater analgesia.

Taken together, this is a really cool mechanism, I think: two separate analyses indicating that the default mode network could be an analgesic system, which we haven't seen before. We have seen the DMN involved in chronic pain and pain-related exacerbations, but I don't think we've seen it as part of a pain-relieving mechanism. Interestingly, the thalamus and precuneus together are the first two nodes to go offline when we lose consciousness, and the first two to come back online when we recover consciousness, suggesting that the thalamus and precuneus are involved in self-referential awareness, consciousness of self, things of this nature.

Again, multiple processes are involved in meditation-based pain relief, which maybe explains why we consistently see that meditation can elicit long-lasting improvements in pain unpleasantness in particular, as compared to sensory pain -- although it does that as well.

And also the data gods were quite kind on this because these mechanisms are also quite consistent with the primary premises of Buddhist and contemplative scriptures saying that the primary principle is that your experiences are not you.

Not that there is no self, but that the processes that arise in our moment to moment experience are merely reflections and interpretations in judgments, and that may not be the true inherent nature of mind.

And so before I get into more philosophical discourse, I'm going to keep going for the sake of time. Okay.

So what happened with the sham mindfulness meditation intervention?

We did not find any neural processes that significantly predicted analgesia during sham mindfulness meditation. What did predict analgesia during sham mindfulness was slower breathing rate, which we've never seen with mindfulness -- we've never seen a significant, or even close to significant, relationship between mindfulness meditation-based analgesia and slow breathing. But over and over we see that sham mindfulness-based analgesia is related to slower breathing, which provides this really cool distinction: mindfulness is engaging higher-order, top-down processes to assuage pain, while sham mindfulness may be engaging a more bottom-up response to assuage pain.

I'm going to move on to some other new work, and this is in great collaboration with the lovely Tor Wager, and he's developed, with Marta and Woo, these wonderful signatures, these machine learned multivariate pattern signatures that are remarkably accurate at predicting pain over I think like 98, 99 percent.

His seminal paper on the Neurologic Pain Signature, published in the New England Journal of Medicine, showed that these signatures can predict nociceptive-specific pain -- in this case, thermal heat pain -- with incredible accuracy.

And it's not modulated by placebo or affective components, per se. And then the SIIPS is a machine learned signature that is, as they put it, associated with cerebral contributions to pain. But if you look at it closely, these are markers that are highly responsive to the placebo response.

So the SIIPS can be used this way -- he has this beautiful preprint out showing that it responds with incredible accuracy to placebo, to varieties of placebo.

So we used these MVPA signatures to see if meditation engages the signatures supporting placebo responses.
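Mechanically, applying a pre-trained multivariate pattern signature to a new scan is essentially a dot product between a fixed voxel-wise weight map and the participant's activation map. Here is a minimal sketch with random stand-in weights (the real NPS/SIIPS weights come from Wager and colleagues' training data; everything below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 1000

# Stand-in for a fixed, pre-trained signature: one weight per voxel.
signature_weights = rng.normal(0, 1, n_voxels)

def signature_response(activation_map):
    # The signature "response" for a scan is just the dot product of the
    # activation map with the signature's weight map.
    return float(activation_map @ signature_weights)

# A map that partly expresses the signature pattern scores higher than
# a pure-noise map, which is the basis of the prediction accuracy.
expressing = 0.5 * signature_weights + rng.normal(0, 1, n_voxels)
noise_only = rng.normal(0, 1, n_voxels)
assert signature_response(expressing) > signature_response(noise_only)
```

In practice the comparison of interest here is the change in signature response from before to after an intervention, per signature (NPS, SIIPS, negative affect), per arm.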

And then Marta Ceko's latest paper with Tor, published in Nature Neuroscience, found that the negative affect signature predicts pain responses above and beyond nociceptive-related processes. So this is pain related to negative affect, which again contributes to the multimodal processing of pain, and now we can use these elegant signatures to start to disentangle which components of pain meditation and other techniques assuage. Here's the design.

We combined two studies, one with BOLD and one with ASL. So this would be the first ASL study with these MVPA signatures.

And we had the mindfulness intervention that I described before, the book-listening intervention I described before, and a placebo cream intervention, which I'll describe now, all in response to 49-degree thermal stimuli.

So again, across all of our studies we use the same methods. And the placebo group -- I'll try to be quick about this -- is kind of a combination of Luana Colloca's, Don Price's, and Tor's placebo conditioning interventions. We tell our participants that we're testing a new form of lidocaine, and that the reason it's new is that the more applications of the cream, the stronger the analgesia.

And so in the conditioning sessions, they come in, we administer 49 degrees, apply and remove this cream -- which is just petroleum jelly -- after 10 minutes, and then we covertly reduce the temperature to 48.

And then they come back in for sessions two and three, and after 49 degrees and removing the cream, we lower the temperature to 47. And then on the last conditioning session, after we remove the cream, we lower the temperature to 46.5, which is a qualitatively completely different experience than 49.

And we do this to lead our participants to believe that the cream is actually working.

And then in a post-intervention MRI session, after we remove the cream, we don't modulate the temperature; we just keep it at 49. And that's how we measured placebo in these studies. And then so here, again -- oops -- John Dean and Gabe are co-leading this project.
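The covert conditioning schedule described here can be sketched as follows. This is a minimal illustration: the temperatures (in °C) are the ones stated in the talk, but the session labels and function name are assumptions for readability.

```python
# Sketch of the covert placebo-conditioning schedule from the talk.
# Temperatures are from the speaker; session labels are illustrative.
CALIBRATION_TEMP = 49.0  # temperature participants believe they always receive

# After the "lidocaine" cream (petroleum jelly) is removed, the thermode is
# covertly lowered across conditioning sessions, then held at 49 C post-test.
covert_schedule = {
    "conditioning_1": 48.0,
    "conditioning_2": 47.0,
    "conditioning_3": 47.0,
    "conditioning_4": 46.5,
    "post_test_mri": 49.0,  # no modulation: this is where placebo is measured
}

def perceived_cream_benefit(session: str) -> float:
    """Temperature drop participants experience after cream application."""
    return CALIBRATION_TEMP - covert_schedule[session]
```

The point of the design is visible in the last entry: at post-test the cream confers no actual temperature reduction, so any reported relief is the conditioned placebo response.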

Here, pain intensity is on this axis and pain unpleasantness on that axis; controls, from the beginning of the scan to the end of the scan, significantly go up in pain.

Placebo cream was effective at reducing intensity and unpleasantness, but we see mindfulness meditation was more effective than all the conditions at reducing pain. For the signatures, we see that on the nociceptive-specific signature, the controls go up in pain here.

No change in the placebo condition, and mindfulness meditation, you can see here, produces a pretty dramatic reduction in the nociceptive-specific signature.

The same is true for the negative affective pain signature. Mindfulness meditation uniquely modifies this signature as well, and I believe this is one of the first studies to show something like this.

But it does not modulate the placebo signature. What does modulate the placebo signature is our placebo cream, which is a really nice manipulation check for these signatures.

So here, taken together, we show that mindfulness meditation, again, is engaging multiple processes: it is reducing pain by directly assuaging nociceptive-specific markers as well as markers supporting negative affect, but it is not modulating placebo-related signatures, providing further credence that it's not a placebo-type response. We're also demonstrating this granularity between a placebo mechanism that's not being shared by another active mechanism. While we all assume that active therapies and techniques use a shared subset of mechanisms or processes with placebo, here we're providing accruing evidence that mindfulness is separate from a placebo.

I'll try to be very quick on this last part. This is all not technically related to placebo, but I would love to hear everyone's thoughts on these new data we have.

So we've seen elegantly that pain relief by placebo, distraction, acupuncture, transcranial magnetic stimulation, and prayer is largely driven by endogenous opioidergic release. And, yes, there are other systems. A prime other system is the (indiscernible) system, the serotonergic system, dopamine; the list can go on. But it's considered by most of us that the endogenous opioidergic system is the central pain modulatory system.

And the way we test this is by antagonizing endogenous opioids, employing incredibly high dosages of naloxone.

And I think this wonderful paper by Ciril Etnes's (phonetic) group provides a nice primer on the appropriate dosages for naloxone to antagonize opiates. And I think a lot of the discussions here where we see differences in naloxone responses are really actually reflective of differences in dosages of naloxone.

It metabolizes so quickly that I would highly recommend a super large bolus with a maintenance infusion IV.

And we've seen this to be a quite effective way to block endogenous opioids. And across four studies now, we've seen that mindfulness-based pain relief is not mediated by endogenous opioids. It's something else. We don't know what that something else is, but we don't think it's endogenous opioids. But what if it's sex differences that could be driving these opioidergic versus non-opioidergic differences?

We've seen that females exhibit higher rates of chronic pain than males. They are prescribed opiates at a higher rate than men. And when you control for weight, they require higher dosages than men. Why?

Well, there's excellent literature in rodent and preclinical models demonstrating that male rodents engage endogenous opioids to reduce pain but female rodents do not.

And this is a wonderful study by Ann Murphy that basically shows that male rodents, in response to morphine, have a greater paw-withdrawal latency, and not so much females.

But when you add naloxone to the picture, with morphine, the latency goes down. It basically blocks the analgesia in male rodents but enhances analgesia in female rodents.

We basically -- Michaela, an undergraduate student doing an odyssey thesis, asked this question: Are males and females in humans engaging distinct systems to assuage pain?

She really took off with this and here's the design. We had heat, noxious heat in the baseline.

CRISTINA CUSIN: Doctor, you have one minute left. Can you wrap up?

FADEL ZEIDAN: Yep. Basically we asked, are there sex differences between males and females during meditation in response to noxious heat? And there are.

Baseline, just change in pain. Green is saline; red is naloxone. You can see that with naloxone onboard, there's greater analgesia in females -- we reversed the analgesia. Largely, there are no differences between baseline and naloxone in males, and the males are reducing pain during saline.

We believe this is the first study to show something like this in humans. Super exciting. It also blocked the stress reduction response in males but not so much in females. Let me just acknowledge our funders. Some of our team. And I apologize for the fast presentation. Thank you.

CRISTINA CUSIN: Thank you so much. That was awesome.

We're a little bit short on time.

I suggest we go into a short break, ten minutes, until 1:40. Please continue to add your questions in the Q&A. Our speakers are going to answer, or we'll bring some of those questions directly to the discussion panel at the end of the session today. Thank you so much.

Measuring & Mitigating the Placebo Effect (continued)

CRISTINA CUSIN: Hello, welcome back. I'm really honored to introduce our next speaker, Dr. Marta Pecina. And she's going to talk about mapping expectancy-mood interactions in antidepressant placebo effects. Thank you so much.

MARTA PECINA: Thank you, Cristina. It is my great pleasure to be here. And just I'm going to switch gears a little bit to talk about antidepressant placebo effects. And in particular, I'm going to talk about the relationship between acute expectancy-mood neural dynamics and long-term antidepressant placebo effects.

We all know that depression is a very prevalent disorder: in 2020 alone, major depressive disorder affected 21 million adults in the U.S. and 280 million adults worldwide. And current projections indicate that by the year 2030 it will be the leading cause of disease burden globally.

Now, response rates to first-line antidepressant treatments are approximately 50%, and complete remission is only achieved in 30 to 35% of individuals. Also, depression tends to be a chronic disorder, with 50% of those recovering from a first episode having an additional episode, and 80% of those with two or more episodes having another recurrence.

And so for patients who are nonresponsive to two interventions, remission rates with subsequent therapy drop significantly, to 10 to 25%. So, in summary, we're facing a disorder that is very resistant or becomes resistant very easily. And in this context, one would expect that antidepressant placebo effects would actually be low. But we all know that this is not the case: the response rate to placebo is approximately 40%, compared to 50% response rates to antidepressants. And obviously this varies across studies.

But what we do know, and learned yesterday as well, is that response rates to placebos have increased approximately 7% over the last 40 years. And this high prevalence of placebo response in depression has significantly contributed to the current psychopharmacology crisis, where large pharma companies have reduced at least by half the number of clinical trials devoted to CNS disorders.

Now, antidepressant placebo response rates among individuals with depression are higher than in any other psychiatric condition. And this was recently shown again in a meta-analysis of approximately 10,000 psychiatric patients. Other disorders where placebo response rates are also prevalent are generalized anxiety disorder, panic disorder, ADHD, or PTSD. And they are less frequent, although still there, in schizophrenia or OCD.

Now, importantly, placebo effects appear not only in response to pills but also to surgical interventions or devices, as was also mentioned yesterday. And this is particularly important today, where there is very large development of device-based interventions for psychiatric conditions. So, for example, in this study of deep brain stimulation, also mentioned yesterday, patients with resistant depression were assigned to six months of either active or sham DBS. And this was followed by open-label DBS.

As you can see here in this table, patients from both groups improved significantly compared to baseline, but there were no significant differences between the two groups. And for this reason, DBS has not yet been approved by the FDA for depression, even though it's been approved for OCD or Parkinson's disease as we all know.

Now what is a placebo effect -- that's one of the main questions of this workshop -- and how does it work from a clinical neuroscience perspective? Well, as has been mentioned already, most of what we know about the placebo effect comes from the field of placebo analgesia. In summary, classical theories of the placebo effect have consistently argued that placebo effects result from either positive expectancies regarding the potential beneficial effects of a drug, or classical conditioning, where the pairing of a neutral stimulus (in this case the placebo pill) with an unconditioned stimulus (in this case the active drug) results in a conditioned response.

Now more recently, theories of the placebo effect have used computational models to predict placebo effects. These theories posit that individuals update their expectancies as new sensory evidence is accumulated, by signaling the discrepancy between what is expected and what is perceived; this information is then used to refine future expectancies. Now these conceptual models have been incorporated into trial-by-trial manipulations of both expectancies of pain relief and the pain sensory experience. And this has rapidly advanced our understanding of the neural and molecular mechanisms of placebo analgesia.

And so, for example, meta-analytic studies using these experiments have revealed two patterns of distinct activations: decreases in brain activity in regions involved in pain processing, such as the dorsomedial prefrontal cortex, the amygdala, and the thalamus; and increases in brain activity in regions involved in affective appraisal, such as the vmPFC, the nucleus accumbens, and the PAG.

Now what happens in depression? Well, in the field of antidepressant placebo effects, the long-term dynamics of mood and antidepressant responses have not allowed us to do such trial-by-trial manipulations of expectancies. So instead researchers have studied broad brain changes in the context of a randomized controlled trial or a placebo lead-in phase, which has, to some extent, limited the progress of the field.

Now despite these methodological limitations, these studies provide important insights about the neural correlates of antidepressant placebo effects. In particular, from two early studies we can see that placebo was associated with increased activations broadly in cortical regions and decreased activations in subcortical regions. And these deactivations in subcortical regions were actually larger in patients who were assigned to an SSRI drug treatment.

We also demonstrated that, similar to pain, antidepressant placebo effects were associated with enhanced endogenous opioid release during placebo administration, predicting the response to open-label treatment after ten weeks. And we and others have demonstrated that increased connectivity between the salience network and the rostral anterior cingulate during antidepressant placebo effects can actually predict short-term and long-term placebo effects.

Now an important limitation, as I already mentioned, is the delayed mechanism of action of common antidepressants and the slow dynamics of mood, which really limit the possibility of actively manipulating antidepressant expectancies.

So to address this important gap, we developed a trial-by-trial manipulation of antidepressant expectancies to be used inside the scanner. And the purpose was really to be able to further dissociate expectancy and mood dynamics during antidepressant placebo effects.

And so the basic structure of this task involves an expectancy condition, where subjects are presented with a four-second infusion cue followed by an expectancy rating cue, and a reinforcement condition, which consists of 20 seconds of sham neurofeedback followed by a mood rating cue. Now the expectancy and the reinforcement conditions map onto the classical theories of the placebo effect that I explained earlier.

During the expectancy condition, the antidepressant infusions are compared to periods of calibration where no drug is administered. During the reinforcement condition, on the other hand, sham neurofeedback of positive sign is presented 80% of the time, as compared to sham neurofeedback of baseline sign 80% of the time. And so this two-by-two study design results in four different conditions: antidepressant reinforced, antidepressant not reinforced, calibration reinforced, and calibration not reinforced.
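The two-by-two design just described can be written out explicitly. This is only an illustrative sketch of the factorial structure: the condition names follow the talk, but the dictionary layout and the 0.2 probability for the not-reinforced cells are assumptions (the talk states 80% for the dominant feedback sign in each cell).

```python
from itertools import product

# The 2x2 factorial design: infusion cue crossed with reinforcement.
INFUSION = ("antidepressant", "calibration")
REINFORCEMENT = ("reinforced", "not_reinforced")

# Probability that a given trial shows *positive* sham neurofeedback
# (reinforced cells show positive feedback 80% of the time; the complement
# for not-reinforced cells is an assumption here).
P_POSITIVE_FEEDBACK = {"reinforced": 0.8, "not_reinforced": 0.2}

conditions = [
    {"infusion": inf,
     "reinforcement": reinf,
     "p_positive_feedback": P_POSITIVE_FEEDBACK[reinf]}
    for inf, reinf in product(INFUSION, REINFORCEMENT)
]
```

Crossing the two factors yields exactly the four conditions named in the talk: antidepressant reinforced, antidepressant not reinforced, calibration reinforced, and calibration not reinforced.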

And so the cover story is that we tell participants that we are testing the effects of a new fast-acting antidepressant compared to a conventional antidepressant, but in reality, they are both saline. And then we tell them that they will receive multiple infusions of these drugs inside of the scanner while we record their brain activity which we call neurofeedback. So then patients learn that positive neurofeedback compared to baseline is more likely to cause mood improvement. But they are not told that the neurofeedback is simulated.

Then we place an intravenous line for the administration of the saline infusion, and we bring them inside the scanner. For these kinds of experiments we recruit individuals who are 18 through 55, with or without anxiety disorders, who have a HAM-D depression rating scale score greater than 16, consistent with moderate depression. They're antidepressant-medication free for at least 21 days, and we use consenting procedures that involve authorized deception.

Now, as expected, behavioral results during this task consistently show that antidepressant expectancies are higher during the antidepressant infusions compared to the calibration, especially when they are reinforced by positive sham neurofeedback. Mood responses are also significantly higher during positive sham neurofeedback compared to baseline, and this is further enhanced during the administration of the antidepressant infusions.

Now interestingly, these effects are moderated by depression severity, such that the effects of the task conditions on expectancy and mood ratings are weaker in more severe depression, even though overall expectancies are higher and overall mood is lower.

Now at a neural level, what we see is that the presentation of the infusion cue is associated with increased activation in the occipital cortex and the dorsal attention network, suggesting greater attentional processing engaged during the presentation of the treatment cue. And similarly, the reinforcement condition revealed increased activations in the dorsal attention network, with additional responses in the ventral striatum, suggesting that individuals processed the sham positive neurofeedback cue as rewarding.

Now an important question for us was: now that we can manipulate acute antidepressant placebo responses, can we use this experiment to understand the mechanisms implicated in short-term and long-term antidepressant placebo effects? And so, as I mentioned earlier, there was emerging evidence suggesting that placebo analgesia could be explained by computational models, in particular reinforcement learning.

And so we tested the hypothesis that antidepressant placebo effects could be explained by similar models. So as you know, under these theories, learning occurs when an experienced outcome differs from what is expected; this difference is called the prediction error. And then the expected value of the next possible outcome is updated with a portion of this prediction error, as reflected in this Q-learning rule.

Now in the context of our experiment, model-predicted expectancies for each of the four trial conditions would be updated every time the antidepressant or the calibration infusion cue is presented and an outcome, whether positive or baseline neurofeedback, is observed, based on a similar learning rule.
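The trial-by-trial update described here is the standard delta rule used in reinforcement-learning models. A minimal sketch, assuming a single learning rate and outcomes coded as 1.0 for positive and 0.0 for baseline neurofeedback; the learning rate value and the trial sequence below are purely illustrative, not the study's fitted parameters:

```python
def update_expectancy(expected_value: float, outcome: float,
                      learning_rate: float = 0.3) -> float:
    """One delta-rule update: V <- V + alpha * (outcome - V).

    `outcome` is the observed neurofeedback (1.0 positive, 0.0 baseline);
    the prediction error is the difference between outcome and expectancy.
    """
    prediction_error = outcome - expected_value
    return expected_value + learning_rate * prediction_error

# Illustrative trial sequence for one condition (e.g., antidepressant
# reinforced, where positive feedback appears on most trials):
v = 0.5  # initial expectancy
for outcome in [1.0, 1.0, 0.0, 1.0, 1.0]:
    v = update_expectancy(v, outcome)
# After mostly positive feedback, the expectancy has risen above 0.5.
```

The alternative models the talk mentions simply elaborate this rule, e.g., separate learning rates for antidepressant versus calibration cues, or adding mood responses as an extra reward term.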

Now this basic model was then compared against two alternative models. One which included differential learning rates to account for the possibility that learning would depend on whether participants were updating expectancies for the placebo or the calibration. And then an additional model to account for the possibility that subjects were incorporating positive mood responses as mood rewards.

And then finally, we constructed an additional model to allow for the combination of models two and three. And so using Bayesian model comparison, we found that the fourth model, which included placebo-biased learning and reinforcement by mood, dominated all the other alternatives after correction for the Bayesian omnibus risk.

Now we then mapped the expected value and reward prediction error signals from our reinforcement learning models onto our raw data. And what we found was that expected value signals mapped onto the salience network raw responses, whereas reward prediction errors mapped onto the dorsal attention network raw responses. So altogether, the combination of our model-free and model-based results reveals that the processing of the antidepressant infusion cue increases activation in the dorsal attention network, whereas the encoding of the expectancies takes place in the salience network once salience has been attributed to the cue.

And then furthermore, we demonstrated that the reinforcement-learning model-predicted expectancies encoded in the salience network triggered mood changes that are perceived as reward signals. And these mood reward signals further reinforce antidepressant expectancies through the formation of expectancy-mood dynamics defined by models of reinforcement learning, an idea that could possibly contribute to the formation of long-lasting antidepressant placebo effects.

And so the second aim was really to look at how to use behavioral and neural responses of placebo effects to predict long-term placebo effects in the context of a clinical trial. Our hypothesis was that during placebo administration, greater salience attribution to the contextual cue in the salience network would transfer to regions involved in mood regulation to induce mood changes. In particular, we hypothesized that the DMN would play a key role in belief-induced mood regulation.

And why the DMN? Well, we knew that activity in the rostral anterior cingulate, which is a key node of the DMN, is a robust predictor of mood responses to both active antidepressants and placebos, implying its involvement in nonspecific treatment response mechanisms. We also knew that the rostral anterior cingulate is a robust predictor of placebo analgesia, consistent with its role in cognitive appraisals, predictions, and evaluation. And we also had evidence that SN-to-DMN functional connectivity appears to be a predictor of placebo and antidepressant responses over ten weeks of treatment.

And so in our clinical trial, which you can see in the cartoon diagram here, we randomized six individuals to placebo or escitalopram 20 milligrams. And this table is just to say there were no significant differences between the two groups with regard to gender, race, age, or depression severity. But what we found interesting is that there were also no significant differences in correct belief of assignment, with approximately 60% of subjects in each group guessing that they were receiving escitalopram.

Now as you can see here, participants showed lower MADRS scores at eight weeks in both groups, but there were no significant differences between the two groups. However, when split by their last drug-assignment belief, subjects with a drug-assignment belief improved significantly compared to those with a placebo-assignment belief.

And so the next question was: can we use neuroimaging to predict these responses? And what we found was that, at a neural level, during expectancy processing the salience network had an increased pattern of functional connectivity with the DMN, as well as with other regions, including the brainstem and the thalamus. We also found that increased SN-to-DMN functional connectivity predicted expectancy ratings during the antidepressant placebo fMRI task, such that higher connectivity was associated with greater modulation of expectancy ratings by the task conditions.

Now we also found that enhanced functional connectivity between the SN and the DMN predicted the response to eight weeks of treatment, especially in individuals who believed that they were in the antidepressant group. These data support that during placebo administration, greater salience attribution to the contextual cue is encoded in the salience network, whereas belief-induced mood regulation is associated with increased functional connectivity between the SN and DMN. Altogether, these data suggest that enhanced SN-to-DMN connectivity enables the switch from greater salience attribution to the treatment cue to DMN-mediated mood regulation.

And so finally, and this is going to be brief, the next question for us was: can we modulate these networks to actually enhance placebo-related activity? In particular, we decided to use theta burst stimulation, which can potentiate or depotentiate brain activity in response to brief periods of stimulation. And so in this study participants undergo three counterbalanced sessions of TBS: continuous, intermittent, or sham, known to depotentiate, potentiate, and have no effect, respectively.

So each TBS session is followed by an fMRI session with the antidepressant placebo task, which happens approximately an hour after stimulation. The inclusion criteria are very similar to all of our other studies. And our pattern of stimulation is pretty straightforward: we do two blocks of TBS. During the first block, stimulation intensity is gradually escalated in 5% increments in order to enhance tolerability. And during the second block, the stimulation is maintained constant at 80% of the motor threshold.

Then we use a modified cTBS protocol consisting of bursts of three stimuli applied at intervals of 30 hertz, with bursts repeated at 6 hertz, for a total of 600 stimuli in a continuous train of 33.3 seconds. The iTBS session consists of bursts of three stimuli applied at intervals of 50 hertz, with bursts repeated at 5 hertz, for a total of 600 stimuli during 192 seconds. We also use a sham condition, where 50% of subjects are assigned to sham TBS simulating the iTBS stimulus pattern and 50% are assigned to sham TBS simulating the cTBS pattern.
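The burst arithmetic in those two protocols can be checked directly. The sketch below encodes only the numbers stated in the talk; the 2-second-on / 8-second-off train structure assumed for iTBS is the standard intermittent protocol, which reproduces the stated 192 seconds, and the function names are illustrative.

```python
def ctbs_duration(total_stimuli=600, pulses_per_burst=3, burst_rate_hz=6.0):
    """Continuous TBS: bursts delivered back-to-back at burst_rate_hz."""
    n_bursts = total_stimuli / pulses_per_burst   # 600 / 3 = 200 bursts
    return n_bursts / burst_rate_hz               # 200 / 6 ~= 33.3 s

def itbs_duration(total_stimuli=600, pulses_per_burst=3,
                  burst_rate_hz=5.0, train_s=2.0, gap_s=8.0):
    """Intermittent TBS: short trains of bursts separated by gaps
    (2 s on / 8 s off is the standard protocol, assumed here)."""
    bursts_per_train = burst_rate_hz * train_s                        # 10
    n_trains = total_stimuli / (pulses_per_burst * bursts_per_train)  # 20
    return n_trains * train_s + (n_trains - 1) * gap_s                # 192 s
```

Both stated durations fall out of the stated burst counts: 200 bursts at 6 Hz give the 33.3-second continuous train, and 20 intermittent trains give 40 seconds of stimulation plus 152 seconds of gaps, i.e., 192 seconds.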

Now our target is the dmPFC, which is the cortical target for the DMN. And we chose this target based on the results from the antidepressant placebo fMRI task.

And so this target corresponds to our Neurosynth-based scalp location, which is located 30% of the nasion-to-inion distance forward from the vertex and 5% left, corresponding to EEG location F1. And the connectivity map of this region actually results in activation of the DMN. We can also show here the E-field map of this target, which basically demonstrates nice coverage of the DMN.

And so what we found here is that iTBS, compared to sham and cTBS, enhances the effect of the reinforcement condition on mood responses. We also found that at a neural level, iTBS compared to cTBS shows significantly greater BOLD responses during expectancy processing within the DMN, with sham responses in the middle but not significantly different from iTBS. And increased BOLD responses in the ventromedial prefrontal cortex were associated with a greater effect of the task conditions on mood responses.

And so all together, our results suggest that, first, trial-by-trial modulation of antidepressant expectancies effectively dissociates expectancy-mood dynamics. Antidepressant expectancies are predicted by models of reinforcement learning and are encoded in the salience network. We also showed that enhanced SN-to-DMN connectivity enables the switch from greater salience attribution to treatment cues to DMN-mediated mood regulation, contributing to the formation of acute expectancy-mood interactions and long-term antidepressant placebo effects. And iTBS potentiation of the DMN enhances placebo-induced mood responses and expectancy processing.

With this, I would just like to thank my collaborators that started this work with me at the University of Michigan and mostly the people in my lab and collaborators at the University of Pittsburgh as well as the funding agencies.

CRISTINA CUSIN: Wonderful presentation. Really a terrific way of trying to untangle the different mechanisms of placebo response in depression, which is not an easy feat.

There are no specific questions in the Q&A. I would encourage everybody attending the workshop to please post your question to the Q&A. Every panelist can answer in writing. And then we will answer more questions during the discussion, but please don't hesitate.

I think I will move on to the next speaker. We have only a couple of minutes so we're just going to move on to Dr. Schmidt. Thank you so much. We can see your slides. We cannot hear you.

LIANE SCHMIDT: Can you hear me now?

CRISTINA CUSIN: Yes, thank you.

LIANE SCHMIDT: Thank you. So I'm Liane Schmidt. I'm an INSERM researcher and team leader at the Paris Brain Institute. And I'm working on placebo effects but understanding the appetitive side of placebo effects. And what I mean by that I will try to explain to you in the next couple of slides.

NIMH Staff: Can you turn on your video?

LIANE SCHMIDT: Sorry?

NIMH Staff: Can you please turn on your video, Dr. Schmidt?

LIANE SCHMIDT: Yeah, yes, yes, sorry about that.

So it's about the appetitive side of placebo effects -- that is, placebo effects on cognitive processes such as motivation and biases in belief updating -- because these processes also play a role when patients respond to treatment and when we measure placebo effects, basically when placebo effects matter in the clinical setting.

And this is done at the Paris Brain Institute. And I'm working also in collaboration with the Pitie-Salpetriere Hospital Psychiatry department to get access to patients with depression, for example.

So my talk will be organized around three parts. In the first part, I will show you some data about appetitive placebo effects on taste pleasantness, hunger sensations, and reward learning. And this will make the bridge to the second part, where I will show you some evidence for asymmetrical learning biases that are tied to reward learning and that can emerge after fast-acting antidepressant treatment in depression.

And why is this important? In the third part, I will try to link the first two parts to elaborate some perspectives on the synergies between expectations, expectation updating through learning mechanisms, motivational processes, and drug experiences -- synergies which we might harness by using computational models such as Rescorla-Wagner models, as Marta just showed you in her work.

The appetitive side of placebo effects is actually known very well from the field of consumer psychology and marketing research, where price labels or quality labels, for example, can affect decision-making processes and also experiences like taste pleasantness. And since we are in France, one of the most salient examples of these kinds of effects comes from wine tasting. Many studies have shown that basically the price of a wine can influence how pleasant it tastes.

And we and other people have shown that this is mediated by activation in what is called the brain valuation system -- regions that encode expected and experienced reward. One of the most prominent hubs in this brain valuation system is the ventromedial prefrontal cortex, which you see here on the SPM on the slide; it basically translates these price-label effects into taste pleasantness. And what is also interesting is its sensitivity to monetary reward: the vmPFC activates when you obtain a monetary reward by surprise.

And the participants who activate the vmPFC more for these kinds of positive surprises are also the participants in whom the vmPFC encoded more strongly the difference between expensive and cheap wines. This makes a nice parallel to what we know from placebo analgesia, where it has also been shown that the sensitivity of the reward system in the brain can moderate placebo analgesia, with participants with higher reward sensitivity in the ventral striatum, for example -- another region of this system -- showing stronger placebo analgesia.

So this is basically to let you appreciate that these effects nicely parallel what we know from placebo effects in pain and in disease. So we went further, beyond just taste liking, which is basically experiencing rewards such as wine: could placebos also affect motivational processes per se -- when we, for example, want something more?

And one way to study this is to study a basic motivation such as hunger. Eating behavior, for instance, has long been conceptualized as driven by homeostatic hormonal markers, such as ghrelin and leptin, that signal satiety and energy stores. As a function of these different hormonal markers in our blood, we're going to go and look for food and eat food. But we also know from the placebo effects on taste pleasantness that there is a possibility that our higher-order beliefs about our internal states, not our hormones, can influence whether we want to eat food, whether we engage in these types of very basic motivations. And we tested that -- and other people also, so this is a replication.

In this study, we gave healthy participants, who came into the lab in a fasted state, a glass of water. And we told one group, well, water sometimes can stimulate hunger by stimulating the receptors in your mouth. Another group was told that you can drink a glass of water to kill your hunger. And a third group, a control group, was given a glass of water and told it's just water; it does nothing to hunger. And then we asked them to rate how hungry they felt over the course of the experiment. It's a three-hour experiment, everybody has fasted, and they have to do a food choice task in an fMRI scanner, so everybody gets hungry over these three hours.

But what was interesting, and what you see here on this raincloud plot, is that participants who drank the water suggested to be a hunger killer increased in their hunger ratings less than participants who believed the water would enhance their hunger. So this is a nice replication of what we already know from the field; other people have shown this too.

And the interesting thing is that it also affected food wanting, this motivational process of how much you want to eat food. When people lay in the fMRI scanner, they saw different food items and were asked whether they wanted to eat each one or not, for real, at the end of the experiment. So it's incentive compatible. And what you see here is basically what we call stimulus value: how much you want to eat this food.

And the hunger sensation ratings that I just showed you parallel what we find here. People in the decreased hunger suggestion group wanted to eat the food less than those in the increased hunger suggestion group, showing that it is not only an effect on subjective self-reports of how you feel your body's hunger signals. It's also about what you would actually prefer; your subjective food preference is influenced by the placebo manipulation. And it also influences how your brain's valuation system encodes the value underlying your food preference. That's what you see on this slide.

On this slide you see the ventromedial prefrontal cortex. The more yellow the voxels, the more strongly they correlate with food wanting. And on the right side you see the temporal time courses of the vmPFC: the food-wanting encoding is stronger in the increased hunger suggestion group than in the decreased hunger suggestion group.

So basically what I've shown you here are three placebo effects: placebo effects on subjective hunger ratings, placebo effects on food choices, and placebo effects on how the brain encodes food preferences and food choices. And you could wonder -- these are readouts, behavioral readouts and neural readouts -- what is the mechanism behind them? What sits between the placebo intervention and the behavioral and neural readouts of this effect?

And one snippet of the answer to this question comes when you look at the expectation ratings. Expectations have long been shown to be one of the cognitive mediators of placebo effects across domains. And that's what we see here too, especially in the hunger killer suggestion group: the participants who believed more strongly that the drink would kill their hunger were also those whose hunger increased less over the course of the experiment.

And this moderated activity in the region that you see here, the medial prefrontal cortex, which basically activated when people saw food on the screen and thought about whether they wanted to eat it or not. That activity was positively moderated by the strength of the expectancy that the glass of water would decrease their hunger. So the more you expect the water to decrease your hunger, the more the mPFC activates when you see food on the screen.

It's an interesting brain region because it sits right between the ventromedial prefrontal cortex, which encodes the value, the food preference, and the dorsolateral prefrontal cortex. And it has been shown by past research to connect to the vmPFC when participants exert self-control, especially during food decision-making paradigms.

But another way to answer the question about the mechanism of how the placebo intervention can affect these behavioral and neural effects is to use computational modeling to better understand preference formation. One way is drift diffusion modeling. These drift diffusion models come from perceptual research, for understanding perception, and they have recently also been used to better understand preference formation. They assume that your preference for a yes or no food choice, for example, arises from a noisy accumulation of evidence.

And there are two types of evidence you accumulate in these decision-making paradigms: basically how tasty and how healthy the food is. How much you like the taste, how much you consider the health. And this could influence the slope of your evidence accumulation, basically how rapidly you reach a threshold toward yes or no.

It could be that the placebo manipulation influences this slope. But the model lets us test several other hypotheses. It could be that the placebo intervention affected just the threshold, which reflects how carefully you make the decision toward a yes or no choice. It could be your initial bias, that is, whether you were initially biased toward a yes or a no response. Or it could be the non-decision time, which reflects more the sensory-motor integration.
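The drift diffusion setup described here can be sketched in a few lines of Python. This is an illustrative simulation only; the function, parameter names, and values are mine, not the ones fitted in the study:

```python
import random

def simulate_ddm_trial(taste, health, w_taste, w_health,
                       threshold=1.0, start_bias=0.0,
                       non_decision_time=0.3, noise=0.1, dt=0.01,
                       seed=0):
    """One food choice as noisy evidence accumulation.

    The drift (slope) is a weighted sum of taste and health evidence;
    the placebo manipulation is modelled as changes in the weights and
    the starting bias. Accumulation runs until evidence crosses
    +threshold (yes) or -threshold (no).
    """
    rng = random.Random(seed)
    drift = w_taste * taste + w_health * health
    evidence = start_bias
    t = 0.0
    while abs(evidence) < threshold:
        evidence += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    choice = "yes" if evidence > 0 else "no"
    return choice, t + non_decision_time

# "Increased hunger" group: taste weighted heavily, biased toward yes.
choice, rt = simulate_ddm_trial(taste=0.8, health=-0.3,
                                w_taste=1.0, w_health=0.2,
                                start_bias=0.2)
```

With a strong taste weight and a positive starting bias, the simulated choice lands on yes nearly every run; shifting weight onto health and the bias toward no would mimic the hunger-killer group.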

And the answer to this question is that three parameters were influenced by the placebo manipulation: how much you integrated healthiness, how much you integrated tastiness, and your initial starting bias. So you paid more attention to healthiness when you believed you were on a hunger killer, and more to tastiness when you believed you were on a hunger enhancer. And similarly, participants were initially biased toward accepting food more when they believed they were on a hunger enhancer than on a hunger killer.

Interestingly, this basically shows that the decision-making process is biased by the placebo intervention, including how much you filter the information that is most relevant. When you are hungry, taste is very relevant for your choices. When you believe you are less hungry, you pay less attention to taste and can pay more attention to the healthiness of food.

And one way to show that this might be a filtering of expectation-relevant information is to use psychophysiological interaction analyses that basically look at the brain activity in the vmPFC, our seed region: where in the brain does it connect when participants see food on a computer screen and have to think about whether they want to eat this food or not?

And what we observed is that it connected to the dlPFC, the dorsolateral prefrontal cortex. This is a region of interest that we first localized, in a separate Stroop localizer task, to be sure it is actually a region that activates during interference resolution, basically when we have to filter the information that is most relevant to a task.

So the vmPFC connects more strongly to this dlPFC interference resolution region, and this is moderated, especially in the decreased hunger suggestion group, by how much participants weighed the healthiness against the tastiness of food.

To wrap this part up: we replicated findings from previous studies on appetitive placebo effects by showing that expectancies about the efficacy of a drink can affect hunger sensations, how participants form their food preferences and make food choices, and value encoding in the ventromedial prefrontal cortex.

But we also provided evidence for underlying neurocognitive mechanisms: the medial prefrontal cortex is moderated by the strength of the hunger expectation, and food choice formation is biased in the form of an attention-filtering mechanism toward expectancy-congruent information, which is taste for the increased hunger suggestion group and healthiness for the decreased hunger suggestion group. And this is implemented by regions that are linked to interference resolution but also to valuation and preference encoding.

And so why should we care? In the real world, it is not very relevant to provide people with deceptive information about hunger-influencing ingredients of drinks. But studies like this one provide insights into the cognitive mechanisms of beliefs about internal states, and how these beliefs can affect interoceptive sensations and also associated motivations such as economic choices, for example.

And this can also give us insights into the synergy between drug experiences and outcome expectations, a synergy that could be harnessed, translated basically via motivational processes, and that might lead us to better understand active treatment susceptibility.

And I'm going to elaborate on this in the next part of the talk, where I'm going a little bit farther afield and not showing evidence about placebo effects per se. But before that, one more point.

Links to these motivational processes have long been suggested to be part of the mechanisms of placebo effects. That is called the placebo-reward hypothesis, and it's based on findings in Parkinson's disease showing that when you give Parkinson's patients a placebo but tell them it's a dopaminergic drug, you can measure dopamine in the brain: the marker for dopamine, its binding potential, decreases. That is what you see here in these PET scan results.

And that suggests that the brain must have released endogenous dopamine. Dopamine is very important for expectations and for learning, basically learning from reward. And clinical benefit is the kind of reward that patients expect. So it is possible that when a patient expects the reward of clinical benefit, their brain releases dopamine in reward-related regions such as the vmPFC or the ventral striatum.

And we have shown in the past that the behavioral consequence of such an endogenous dopamine release under placebo could indeed be linked to reward learning. What we know is that Parkinson patients have a deficit in learning from reward when they are off dopaminergic medication, but this normalizes when they are on active dopaminergic medication.

So we wondered, if based on these PET studies the brain releases dopamine under placebo, does this also have behavioral consequences for reward learning ability? And that is what you see here on the right side of the screen: Parkinson patients tested on placebo show similar reward learning abilities as under the active drug.

And this again was underpinned by an increased correlation of the ventromedial prefrontal cortex, again this hub of the brain valuation system, with the learned reward value, which was stronger in the placebo and active drug conditions compared to the off-drug baseline condition.

And I want to make a link now to another type of disease where motivation is also deficient, which is depression. Depression is hypothesized to be maintained by a triad of very negative beliefs about the world, the future, and one's self, which is very insensitive to belief-disconfirming information, especially if that information is positive, so favorable. And this has been shown by cognitive neuroscience studies to be reflected in a sort of lack of the good news/bad news bias, or optimism bias, in belief updating in depression. This good news/bad news bias is basically a bias healthy people have to consider favorable information that contradicts initial negative beliefs more than negative information.

And this is healthy because it avoids the reversal of beliefs. It also includes a form of motivational process, because good news has motivational salience: it should be more motivating to update beliefs about the future, especially if these beliefs are negative, when we get information that disconfirms them and learn that our beliefs are way too negative. But depressed patients lack this good news/bad news bias. So we wondered what happens when patients respond to antidepressant treatments that give immediate sensory evidence about being on an antidepressant.

These are the new fast-acting antidepressants such as Ketamine, where patients know right away whether they got the treatment, through the dissociative experiences. So could this affect the cognitive model of depression? This was the main question of the study. And then we wondered again what the computational mechanism is: is it linked, as in the previous studies, to reward learning mechanisms, so biased updating of beliefs? And is it linked to clinical antidepressant effects and potentially to outcome expectations, which makes the link to placebo effects?

So patients performed a belief updating task three times: before receiving Ketamine infusions, after the first infusion, and one week after the third infusion. At each testing time we measured depression with the Montgomery-Asberg Depression Rating Scale (MADRS). In the belief updating task, patients were presented with different negative life events, like getting a disease or, in this example, losing a wallet.

And they were asked to estimate their probability of experiencing this life event in the near future. Then they were presented with evidence about the frequency of this event in the general population, what we call the base rate. And then they had the possibility to update their belief, now knowing the base rate.

This, for example, is a good news trial, where participants initially overestimated the chance of losing a wallet and then learn it's much less frequent than they initially thought, and update, for example, by 15%. In a bad news trial, you initially underestimated your probability of experiencing the adverse life event. And if you have a good news/bad news bias, you're going to consider this information to a lesser degree than in a good news trial.

And that's exactly what happens in the healthy controls, whom you see on the leftmost part of the screen. I don't know whether you can see the numbers, but we have belief updating on the Y axis, for healthy controls age-matched to the patients: updating after good news, updating after bad news. We tested the participants several times within a week, and you can see the bias; it decreases a little bit with sequential testing in the healthy controls. But importantly, in the patients the bias is not there before Ketamine treatment.

But it becomes much stronger after Ketamine treatment; it emerges, basically. So patients become more optimistically biased after Ketamine treatment. And this correlates with the MADRS scores: patients who improve more with treatment are also those who show a stronger good news/bad news bias after one week of treatment.

And we wondered again about the computational mechanisms. One way to get at this is using a Rescorla-Wagner reinforcement learning model, which basically assumes that updating is proportional to your surprise, which is called the estimation error.

That is the difference between the initial estimate and the base rate, and it is weighted by a learning rate. The important thing here is that the learning rate has two components, a scaling parameter and an asymmetry parameter. The asymmetry parameter basically weighs how much the learning rate differs after good news, after positive estimation errors, versus after negative estimation errors.

And what we can see is that in healthy controls there is a stronger learning rate for positive estimation errors and a weaker one for negative estimation errors, translating this good news/bad news bias into an asymmetrical learning mechanism. In the patients, the learning is non-asymmetrical before Ketamine treatment, and it becomes asymmetrical, as reflected in the learning rates, after Ketamine treatment.
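A minimal sketch of this asymmetric updating rule in Python; the parameter values and names are illustrative, not the ones fitted in the study:

```python
def update_belief(estimate, base_rate, lr_scale=0.6, asymmetry=0.3):
    """One belief update: new estimate = old + learning_rate * error.

    The estimation error is base_rate - estimate (percent risk of an
    adverse event). A negative error is 'good news' (the event is less
    likely than feared). The learning rate is lr_scale +/- asymmetry,
    larger after good news, which produces the good news/bad news bias.
    """
    error = base_rate - estimate
    good_news = error < 0
    lr = lr_scale + asymmetry if good_news else lr_scale - asymmetry
    return estimate + lr * error

# Good news: believed 40% chance of losing the wallet, base rate is 25%.
after_good = update_belief(40.0, 25.0)   # moves ~13.5 points toward 25
# Bad news: believed 10%, base rate is 25%.
after_bad = update_belief(10.0, 25.0)    # moves only ~4.5 points toward 25
```

Setting `asymmetry=0.0` gives the non-asymmetrical updating seen in the patients before Ketamine treatment.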

So what we take from that is basically that Ketamine induced an optimism bias. But an interesting question is what comes first: is it the improvement in depression that we measured with the Montgomery-Asberg Depression Rating Scale, or is it the optimism bias that emerged and triggered the improvement? Since it's a correlation, we don't know what comes first.

And an interesting aside that we put in the supplement was that in 16 patients, which is a very low sample size, the expectancy about getting better also correlated with the clinical improvement after Ketamine treatment. We have two expectancy ratings here: about the efficacy of Ketamine, and about what patients expect the intensity of their depression will be after Ketamine treatment.

So that suggested that the clinical benefit seems to interact, in part or synergistically, with the drug experience that generates an optimism bias. To test this more, we continued data collection on just the expectancy ratings, and basically wondered how the clinical improvement after the first infusion links to the clinical improvement after the third infusion.

And we know from this that patients who improve after the first infusion are also those who improve after the third infusion. But is this mediated by their expectancy about the Ketamine treatment? That's indeed what we found: the more patients expected to get better, the more they got better after one week of treatment. And it mediated the link between the first drug experience and the later drug experiences, suggesting that there might not be an additive effect but, as other panelists already put forward today, a synergistic link.

And one way to get at these synergies is again to use computational models. This idea came up already yesterday: there could be self-fulfilling prophecies that contribute to treatment responsiveness and treatment adherence. These self-fulfilling prophecies are asymmetrically biased learning mechanisms that are more biased when you have initial positive treatment experiences, and they might then contribute to how well you adhere to the treatment in the long term and also how much you benefit from it in the long term. So it's both drug experience and expectancy.

And so this is unpublished work where we played with this idea using a reinforcement learning model. It is also very inspired by what we know from placebo analgesia: Tor and Luana Kuven have a paper showing that self-fulfilling prophecies can be captured with biased Bayesian and reinforcement learning models. The idea of these models is that there are two learning rates, alpha plus and alpha minus, and these learning rates weigh differently into the updating of your expectation after a drug experience.

LIANE SCHMIDT: Okay, yeah, I'm almost done.

So they weigh these drug experiences and expectations differently as a function of whether the initial experience was congruent with your expectations: a positive experience versus a negative one. And here are some simulations of this model, showing basically that your expectation gets updated more when you are positively biased than when you are negatively biased. And these are some predictions of the model concerning depression improvement.
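The two-learning-rate idea can be simulated in a few lines. Again, this is a sketch under my own assumed parameter values, not the model from the unpublished work:

```python
def expectation_trajectory(drug_experiences, expectation=0.5,
                           alpha_plus=0.5, alpha_minus=0.1):
    """Update an outcome expectation after each drug experience.

    alpha_plus applies when the experience beats the expectation
    (positive prediction error), alpha_minus when it falls short.
    With alpha_plus > alpha_minus, early positive experiences ratchet
    the expectation upward: a self-fulfilling-prophecy-like dynamic.
    """
    trajectory = [expectation]
    for experience in drug_experiences:
        error = experience - expectation
        alpha = alpha_plus if error > 0 else alpha_minus
        expectation += alpha * error
        trajectory.append(expectation)
    return trajectory

# Three good first experiences: positively biased vs. symmetric learner.
biased = expectation_trajectory([0.9, 0.9, 0.9])
unbiased = expectation_trajectory([0.9, 0.9, 0.9],
                                  alpha_plus=0.1, alpha_minus=0.1)
```

After the same three positive experiences, the positively biased learner ends with a markedly higher expectation than the symmetric one, which is the qualitative pattern the simulations on the slide show.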

To wrap this up, the conclusion is that asymmetrical learning can capture self-fulfilling prophecies and could be a mechanism that translates between expectations and drug experiences, potentially across domains, from placebo hypoalgesia to antidepressant treatment responsiveness. The open question, obviously, is to challenge the predictions of these models with more empirical data, in pain but also in mood disorders, as Marta does and as we do currently at Cypitria, where we test the mechanisms of belief updating biases in depression with fMRI and these mathematical models.

And this has a direct clinical implication, because it could help us better understand how these fast-acting antidepressants work and what makes patients adhere and respond to them. Thank you for your attention. We are the control-interoception-attention team. And thank you to all the funders.

CRISTINA CUSIN: Fantastic presentation. Thank you so much. Without further ado, let's move on to the next speaker. Dr. Greg Corder.

GREG CORDER: Did that work? Is it showing?

GREG CORDER: Awesome, all right. One second. Let me just move this other screen. Perfect. All right.

Hi, everyone. My name is Greg Corder. I'm an Assistant Professor at the University of Pennsylvania. I guess I get to be the final scientific speaker in this session over what has been an amazing two-day event. So thank you to the organizers for also having me get the honor of representing the entire field of preclinical placebo research as well.

And so I'm going to give a bit of an overview of work from some of my friends and colleagues over the last few years, and then tell you a bit about how we're leveraging a lot of current neuroscience technologies to identify the cell types and circuits, building from the human fMRI literature that has really honed in on these key circuits for expectations and belief systems as well as endogenous antinociceptive systems, in particular opioid cell types.

So the work I'm going to show from my lab has really been driven by these two amazing scientists: Dr. Blake Kimmey, an amazing post-doc in the lab, and Lindsay Ejoh, who just last week received her D-SPAN F99/K00 on placebo circuitry. And we think this might be one of the first NIH-funded animal projects on placebo. So congratulations, Lindsay, if you are listening.

Okay. So why use animals? We've heard an amazing set of stories really nailing down the specific circuits in humans, leveraging MRI, fMRI, EEG and PET imaging, that give us this really nice roadmap and idea of how beliefs in analgesia might be encoded within different brain circuits, and how those might change over time with different types of patient modeling or updating of different experiences.

And we love this literature; in the lab we read it in as much depth as we can. And we use it as a roadmap in our animal studies, because we can take advantage of animal models that allow us to dive deep into very specific circuits using techniques like those on the screen here, from RNA sequencing to electrophysiology, really showing that those functional connections measured with fMRI truly exist as axons projecting from one region to another.

And then we can manipulate those connections and projections using things like optogenetics and chemogenetics, which give us really tight temporal control to turn cells on and off. And we can see the effects of that intervention in real time on animal behavior. And that's really the tricky part: we don't get to ask the animals, do you feel pain? Do you feel less pain? It's hard to give verbal suggestions to animals.

And so we have to rely on a lot of different tricks and really get into the head of what it's like to be a small prey animal existing in a world with a lot of large monster human beings around it. So we have to be very careful about how we design our experiments. And it's hard. Placebo in animals is not an easy subject to get into, and this is reflected in the fact that, as far as we can tell, there are only 24 published studies to date on placebo analgesia in animal models.

However, I think this is an excellent opportunity to take advantage of what has been a golden age of neuroscience technologies exploding over the last 10-15 years, and to revisit a lot of open questions: when are opioids released, and are they released at all? Can animals have expectations? Can they have something like a belief structure, and violations of those expectations that lead to different types of prediction errors encoded in different neural circuits? We have a chance to really do that.

But I think the most critical first question is how we begin to behaviorally model placebo in these preclinical models. So I want to touch on a couple of things from some of my colleagues. On the left here is a graph that has been shown in several presentations over the past two days, from Benedetti, using these tourniquet pain models where you can provide pharmacological conditioning with an analgesic drug like morphine to increase pain tolerance.

And then if it is covertly switched out for saline, you can see that there is an elevation in pain tolerance reflective of something like a placebo analgesic response. And this is sensitive to Naloxone, the mu-opioid receptor antagonist, suggesting endogenous opioids are indeed involved in this type of placebo-like response.

And my colleague Dr. Matt Banghart at UCSD has done a fantastic job of recapitulating this exact model in mice, where you can basically use morphine and other analgesics to condition them. So let me dive in a little bit into Matt's model here.

You can have a mouse that will sit on a noxious hot plate. You know, it's an environment that's unpleasant. You can have contextual cues like different types of patterns on the wall. And you can test the pain behavior responses like how much does the animal flick and flinch and lick and bite and protect itself to the noxious hot plate.

And then you can switch the contextual cues, provide an analgesic drug like morphine, and see reductions in those pain behaviors. And then, as in the Benedetti studies, you switch out the morphine for saline but keep the contextual cues. So the animal has effectively created a belief: when I am in this environment, when I'm in this doctor's office, I'm going to receive something that is going to reduce my perception of pain.

And, indeed, Matt sees a quite robust effect here, where this sort of placebo response shows an elevated paw withdrawal latency, indicating that there is endogenous antinociception occurring with this protocol. And it happens, again, pretty robustly; most of the animals going through this conditioning protocol demonstrate this type of antinociceptive behavioral response. This is a perfect example of how we can leverage what we learn from human studies in rodent studies of acute pain.

And this is also really great for probing the effects of placebo in chronic neuropathic pain models. This is work by Dr. Damien Boorman, who was with Professor Kevin Keay in Australia and is now with Lauren Martin in Toronto.

And here Damien really amped up the contextual cues. This is an animal that has had an injury to the sciatic nerve, this chronic constriction injury, so the animal is experiencing something like a tonic chronic neuropathic pain state. Once you let the pain develop, the animals enter this placebo pharmacological conditioning paradigm, where they go onto these thermal plates, either hot or cool, in rooms that have a large number of visual, tactile, and odorant cues. And these are paired with either morphine or a saline control.

Again, the morphine is switched for saline on the last day. And what Damien has observed is that a subset of the animals, about 30%, are responder populations that show decreased pain behavior, which we interpret as something like analgesia. So overall you can use these types of pharmacological conditioning for both acute and chronic pain.

So now, what we're doing in our lab is a bit different. And I'm really curious to hear the field's thoughts, because everything I'm about to show is completely unpublished. Here we use an experimenter-free, drug-free paradigm of instrumental conditioning to instill something like a placebo effect.

And so this is what Blake and Lindsay have been working on since about 2020. And this is our setup in one of our behavior rooms here. Our apparatus is this tiny little device down here. And everything else are all the computers and optogenetics and calcium imaging techniques that we use to record the activity of what's going on inside the mouse's brain.

But simply, this is just two hot plates whose temperature we can control. We allow a mouse to freely explore this apparatus, and with a series of cameras and tracking devices we can plot the place preference of the animal within the apparatus. We can also record, with high-speed videography, these highly conserved protective and recuperative pain-like behaviors that we think are indicative of the negative affect of pain.

So let me walk you through our little model here real quick. Okay. So we call this the placebo analgesia conditioning assay or PAC assay. So here is our two-plate apparatus here. So plate number one, plate number two. And the animal can always explore whichever plate it wants. It's never restricted to one side. And so we have a habituation day, let the animal familiarize itself. Like oh, this is a nice office, I don't know what's about to happen.

And then we have a pretest. In this pretest, importantly, we make both of these plates, both environments, a noxious 45 degrees centigrade. This allows the animal to form an initial expectation that the entire environment is noxious and it's going to hurt. So both sides are noxious. Then, for conditioning, we make one side of the chamber non-noxious, just room temperature, but keep the other side noxious. So now there is a new expectation: the animal learns that it can instrumentally move its body from one side to the other to avoid and escape feeling pain.

And so we'll do this over three days, twice per day. And then on our post-test or placebo day, we make both environments hot again. Now we start the animal off over here, and the animals get to freely choose: do they want to go to the side that they expect to be non-noxious? And what happens?

If you just look at the place preference over the course of conditioning, we can see that the animals, unsurprisingly, choose the environment that is non-noxious; they spend basically 100% of their time there. But when we flip the conditions such that everything is noxious on the post-test day, the animals still spend a significant amount of time on the expected analgesia side. So I'm going to show you some videos now, and you are all going to become mouse pain behavior experts by the end of this.
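The place preference readout described here boils down to counting tracked frames per side. A toy sketch, assuming a normalized x coordinate per video frame; the function names, frame rate, and numbers are illustrative, not the lab's actual pipeline:

```python
def side_preference(x_positions, boundary=0.5):
    """Fraction of frames spent on the expected-analgesia side
    (x below the boundary between the two plates)."""
    return sum(1 for x in x_positions if x < boundary) / len(x_positions)

def preference_time_course(x_positions, fps=30, bin_seconds=30,
                           boundary=0.5):
    """Per-time-bin preference, to see when the effect breaks down
    (e.g. after the first ~90 seconds of the post test)."""
    frames = fps * bin_seconds
    return [side_preference(x_positions[i:i + frames], boundary)
            for i in range(0, len(x_positions), frames)]

# Toy trace: first 30 s on the placebo side, next 30 s off it.
trace = [0.1] * 900 + [0.9] * 900
overall = side_preference(trace)         # 0.5
per_bin = preference_time_course(trace)  # [1.0, 0.0]
```

Binning the preference in time is what lets us say the effect holds for roughly the first 90 seconds and then breaks.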

So what I'm going to show you are both side by side examples of conditioned and unconditioned animals. And try to follow along with me as you can see what the effect looks like. So on this post test day. Oh, gosh, let's see if this is going to -- here we go. All right. So on the top we have the control animal running back and forth. The bottom is our conditioned animal.

And you'll notice we start the animal over here and it's going to go to the side that it expects it to not hurt. Notice the posture of the animals. This animal is sitting very calm. It's putting its entire body down on the hot plate. This animal, posture up, tail up. It's running around a little bit frantically. You'll notice it start to lick and bite and shake its paws. This animal down here might have a couple of flinches so it's letting you know that some nociception is getting into the nervous system overall.

But over the course of this three-minute test, the animals will reliably choose to spend more time over here. And if we quantify these types of behaviors in both conditions, what we find is a pretty significant reduction in these nociceptive behaviors. But it does not hold across the entire duration of this placebo or post-test day.

So this trial is three minutes long. And what we see is that this antinociceptive and preference choice only exists for about the first 90 seconds of this assay. So in the video I just showed, the animal goes to the placebo side, spends a lot of its time there, and does not seem to be displaying pain-like behaviors.

And then around 90 seconds, the animal -- it's like -- it's almost like the belief or the expectation breaks. And at some point the animal realizes, oh no, this is actually quite hot. It starts to run around and to show some of the more typical nociceptive-like behaviors. And we really like this design because it is really, really amenable to doing different types of calcium imaging, electrophysiology, and optogenetics, because now we have a really tight timeline over which we can observe changing neural dynamics at speeds that we can correlate with some type of behavior.
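The window analysis described here -- time spent on the expected-analgesia side and nociceptive-behavior counts, split at the 90-second mark -- can be sketched in a few lines. This is an illustrative pure-Python example with made-up numbers, not the lab's actual analysis code:

```python
# Hypothetical sketch of the post-test window analysis; toy data only.

def window_summary(side_per_sec, behavior_times, split=90, total=180):
    """side_per_sec: one 'placebo'/'control' label per second of the test.
    behavior_times: seconds at which nociceptive behaviors (licks, shakes,
    bites) were scored. Summarizes the early vs late window."""
    early = side_per_sec[:split]
    late = side_per_sec[split:total]
    return {
        "placebo_frac_early": early.count("placebo") / len(early),
        "placebo_frac_late": late.count("placebo") / len(late),
        "behaviors_early": sum(1 for t in behavior_times if t < split),
        "behaviors_late": sum(1 for t in behavior_times if split <= t < total),
    }

# Toy conditioned animal: prefers the expected-analgesia side and shows few
# nociceptive behaviors early; the "expectation" breaks around 90 seconds.
sides = ["placebo"] * 80 + ["control"] * 10 + ["control"] * 60 + ["placebo"] * 30
events = [20, 95, 100, 110, 130, 150, 170]
summary = window_summary(sides, events)
```

With these invented inputs, the summary shows a strong early-side preference with one early nociceptive event, and the reverse in the latter 90 seconds.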

Okay. So what are those circuits that we're interested in overall that could be related to this form of placebo? Again, we like to use the human findings as a wonderful roadmap. And Tor has demonstrated, and many other people have demonstrated this interconnected distributed network involving prefrontal cortex, nucleus accumbens, insula, thalamus, as well as the periaqueductal gray.

And so today I'm going to talk about just the periaqueductal gray. Because there is evidence that there is also release of endogenous opioids within this system here. And so we tend to think that the placebo process and the encoding, whatever that is, the placebo itself is likely not encoded in the PAG. The PAG is kind of the end of the road. It's the thing that gets turned on during placebo and we think is driving the antinociceptive or analgesic effects of the placebo itself.

So the PAG, for anyone who's not as familiar: we like it because it's conserved across species. We can look at it in a mouse; there's one in a human. So potentially it's really good for translational studies as well. It has a very storied past, where it's been demonstrated that the PAG subarchitecture has these beautiful anterior-to-posterior columns, such that if you electrically stimulate different parts of the PAG, you can produce active versus passive coping mechanisms, as well as analgesia that's dependent on opioids or on endocannabinoids.

And then the PAG is highly connected, both to ascending nociception from the spinal cord and to descending control systems from prefrontal cortex as well as the amygdala. So with regard to opioid analgesia: if you micro-infuse morphine into the posterior part of the PAG, you can produce an analgesic effect in rodents that is across the entire body. So it's super robust analgesia from this very specific part of the PAG.

If you look at the PAG back there and use these techniques to look for histological indications that the mu opioid receptor is there, it is indeed there. There is a large amount of mu opioid receptor -- the gene is OPRM1. And it's largely on glutamatergic neurons, the excitatory cells, not the inhibitory cells -- though it is on some of them.

And as far as e-phys data goes as well, we can see that the mu opioid receptor is there. So with DAMGO, an opioid agonist, we can see activation of inhibitory GIRK currents in those cells. So the system is wired up for placebo analgesia to happen in that location. Okay. So how are we actually going to start to tease this out? By finding these cells, where they go throughout the brain, and then understanding their dynamics during placebo analgesia.

So last year we teamed up with Karl Deisseroth's lab at Stanford to develop a new toolkit that leverages the genetics of the opioid system, in particular the promoter for the mu opioid receptor. And we were able to take the genetic sequence for this promoter and package it into adeno-associated viruses along with a range of different tools that allow us to turn cells on or off or record their activity. And so we can use this mu opioid receptor promoter to gain genetic access, throughout the brain or the nervous system, to wherever the mu opioid receptors are. And we can do so with high fidelity.

This is just an example of our mu opioid virus in the central amygdala, which is a highly mu-opioid-specific area. But Blake used this tool, using the promoter to drive a range of different transgenes within the periaqueductal gray. And right here, this is GCaMP. So this is a calcium indicator that allows us to assess, in real time, the calcium activity of PAG mu opioid cells.

And so what Blake did was he took a mouse, and he recorded the nociceptive responses within that cell type and found that the mu opioid cell types are actually nociceptive. They respond to pain, and they do so with increasing activity to stronger and stronger and more salient and intense noxious stimuli. So these cells are actually nociceptive.

And if we look at a ramping hot plate, we can see that those same mu opioid cell types in the PAG increase their activity as the temperature on this hot plate increases. Those cells decrease that activity if we infuse morphine.
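The relationship described here -- cell activity ramping with plate temperature -- is the kind of thing typically summarized with a simple correlation between the temperature ramp and the fluorescence trace. A minimal sketch with invented numbers, not real recordings:

```python
# Hypothetical sketch: correlate a hot-plate temperature ramp with a
# GCaMP activity trace. Toy data; illustrative only.

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

temps = [30 + 0.5 * t for t in range(40)]       # ramp from 30 C upward
dff = [0.05 * (temp - 30) + 0.1 for temp in temps]  # activity tracks the ramp
r = pearson_r(temps, dff)
```

For a trace that tracks the ramp this tightly, r is near 1; a morphine infusion that flattens the trace would pull r toward zero.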

Unsurprisingly, they express the mu opioid receptor and they're indeed sensitive to morphine. If we give naltrexone to block the mu opioid receptors, we see greater activity to the noxious stimuli, suggesting that there could be an opioid tone, some type of endogenous opioid system, that's keeping this system in check, repressing its activity. So when we block it, we actually enhance that activity. This is going to be really important here: the activity of these mu opioid PAG cells correlates with affective measures of pain.

When animals are licking, shaking, biting, when they want to escape away from noxious stimuli, that's when we see activity within those cells. So this is just correlating different types of behavior with when we see peak amplitudes within those cell types. So let me skip that real quick.

Okay. So we have this ability to peek into the activity of mu opioid cell types. Let's go back to that placebo assay, our PAC assay I mentioned before. If we record from the PAG on that post-test day in an animal that has not undergone conditioning, when the plates are super hot, we see a lot of nociceptive activity in these cells here. They're bouncing up and down.

But if we look at the activity of the nociception in an animal undergoing placebo, what we see is a suppression of neural activity within that first 90 seconds. And this actually does seem to extinguish within the latter 90 seconds. So it kind of tracks along with the behavior of those animals: when they're showing antinociceptive behavior, that's when those cells are quiet.

When the pain behavior comes back, that's when those cell types are ramping up. But what about the opioids, too? The mu opioid receptor cell types are decreasing their activity, but what about the opioids themselves here? The way to do this in animals has been to use microdialysis -- a fantastic technique, but it's got some limitations to it. This is a way of sampling peptides in real time and then using liquid chromatography to tell if the peptide was present. However, the sampling rate is about 10 minutes.

And in terms of brain processing, 10 minutes might as well be an eternity if we're talking about milliseconds here. But we want to know what these cells here, these red dots, are doing. These are the enkephalinergic cells in the PAG. We needed a revolution in technologies. One of those came several years ago from Dr. Lin Tian, who developed some of the first sensors for dopamine. Some of you may have heard of it; it's called dLight.

This is a version of dLight, but it's actually an enkephalin opioid sensor. What Lin did to genetically engineer this was to take the delta opioid receptor, which is highly selective for enkephalin, and link it with this GFP molecule here, such that when enkephalin binds to the sensor, it will fluoresce.

We can capture that fluorescence with microscopes that we implant over the PAG, and we can see when enkephalin is being released with subsecond resolution. And so what we did was to see if enkephalin is indeed being released onto those mu-opioid-receptor-expressing, pain-encoding neurons in the PAG. What I showed you before is that those PAG neurons ramp up their activity as the nociception increases -- a mouse standing on a hot plate, we see nociception ramp up. What do you all think happened with the opioids?

It wasn't what we expected. It actually drops. So what we can tell is that there's a basal opioid tone within the PAG, but that as acute nociception increases, we see a suppression of opioid peptide release.
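Fluorescence traces from sensors like this are conventionally normalized as dF/F, the fractional change in fluorescence relative to a baseline period, before comparing conditions. A minimal sketch of that normalization with an invented trace (not the lab's pipeline):

```python
# Hypothetical sketch: convert a raw photometry trace to dF/F.
from statistics import median

def dff_trace(raw, baseline_window):
    """raw: fluorescence samples. The median of the first
    baseline_window samples serves as F0; returns (F - F0) / F0."""
    f0 = median(raw[:baseline_window])
    return [(f - f0) / f0 for f in raw]

raw = [100, 101, 99, 100, 120, 140, 130]  # toy release event after sample 4
trace = dff_trace(raw, baseline_window=4)
```

In a toy trace like this, the baseline samples sit near zero dF/F and the release event shows up as a positive transient; a drop in peptide release would appear as a negative deflection instead.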

We think this has to do with work that Tor has published previously, that the PAG is more likely involved in updating prediction errors. We think this acute pain phenomenon reflects the need to experience pain in order to update your priors about feeling pain and to bias the selection of the appropriate behaviors, like affect-related things, to avoid pain. However, what happens in our placebo assay?

We actually see the opposite. So if we condition animals to expect pain relief within that PAC assay, we actually see an increase from the delta opioid sensor, suggesting that there is an increase in enkephalin release post-conditioning. So there can be differential control of the opioid system within this brain region. So this next part is the fun thing you can do with animals. What if we just bypassed the need to do the placebo assay?

If we know that we just need to cause release of enkephalin within the PAG to produce pain relief, we could just do that directly with optogenetics. So we used a mouse line that allows us to put a red-light-sensitive opsin protein into the enkephalinergic interneurons in the PAG.

When we shine red light on top of these cells, they turn on and start to release their neurotransmitters. These are GABAergic and enkephalinergic, so they're dumping out GABA and dumping out enkephalin into the PAG. We can visualize that using the delta opioid sensor from Lin Tian.

So here is an example of optogenetically released enkephalin within the PAG over 10 minutes. The weird thing that we still don't fully understand is that this signal continues after the optogenetic stimulation. So can we harness the placebo effect in mice? At least it seems we can. So if we turn on these cells strongly, cause them to release enkephalin, and put animals back on these ramping hot plate tests, we don't see any changes in the latency to detect pain, but we see specific ablation or reduction of these affective-motivational pain-like behaviors overall. Moderator: You have one minute remaining.

GREGORY CORDER: Cool. In this last minute -- people are skeptical: can we actually test these higher-order cognitive processes in animals? And for anyone who is not a behavioral preclinical neuroscientist, you might not be aware there's an absolute revolution happening in behavior with the use of deep learning models that can precisely and accurately quantify animal behavior. So this is an example of a deep learning tracking system.

We've built the Light Automated Pain Evaluator, which can capture a range of different pain-related behaviors, fully automated without any human intervention, and which can be paired with brain recording techniques like calcium imaging. That allows us to fit a lot of different computational models to understand what the activity of single neurons might be doing, let's say, in the cingulate cortex, that might be driving that placebo response.
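Pipelines like this typically turn pose-estimation keypoints into per-frame behavior labels before any statistics are run. The sketch below is a deliberately simplified stand-in -- threshold rules on invented keypoint features, not the actual classifier -- just to illustrate the shape of that downstream quantification:

```python
# Hypothetical sketch: label frames from pose-tracking features.
# Feature names, thresholds, and values are all invented.

def label_frames(paw_speeds, snout_paw_dist, run_thresh=8.0, lick_dist=1.5):
    """paw_speeds: hind-paw speed (px/frame) per video frame.
    snout_paw_dist: snout-to-paw distance (px) per frame.
    Returns one coarse behavior label per frame."""
    labels = []
    for speed, dist in zip(paw_speeds, snout_paw_dist):
        if dist < lick_dist:
            labels.append("lick/bite")   # snout is at the paw
        elif speed > run_thresh:
            labels.append("locomotion")  # fast movement
        else:
            labels.append("still")
    return labels

labels = label_frames([0.5, 12.0, 0.3, 0.2], [10.0, 9.0, 1.0, 1.2])
```

Real systems replace these hand-set thresholds with learned classifiers, but the output -- a behavior label per frame that can be aligned to neural recordings -- has the same form.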

We can really start to tie in now, at single-cell resolution, the activity of prefrontal cortex to drive these placebo effects and see if that alters antinociceptive behavior endogenously. I'll stop there and thank all the amazing people -- Blake, Greg, and Lindsay -- who did this work, as well as all of our funders and the numerous collaborators who have helped us do this. So thank you.

CRISTINA CUSIN: Terrific talk. Thank you so much. We're blown away. I'll leave the discussion to our two moderators. They're going to gather some of the questions from the chat and some of their own questions for all the presenters from today and from yesterday as well.

TED KAPTCHUK: Matt, you start gathering questions. I got permission to say a few moments of comments. I wanted to say this is fantastic. I actually learned an amazing amount of things. The amount of light that was brought forward about what we know about placebos and how we can possibly control placebo effects, how we can possibly harness placebo effects.

There was so much light and new information. What I want to do in my four minutes of comments is look to the future. What I mean by that is -- I want to give my comments and you can take them or leave them but I've got a few minutes.

What I want to say is we got the light, but we didn't put it together. There's no way we could have; we needed to be more in the same room, asking: how does this fit in with your model? It's hard to do. What I mean by putting things together is, I'll give you an example, in terms of how we control placebo effects in clinical trials. I not infrequently get asked by the pharmaceutical industry: when you look at our placebo data -- we just blew it. Placebo was as good as, or almost as good as, the drug.

And the first thing I say is I want to talk to experts in that disease. I want to know the natural history. I want to know how you made your entry criteria so I can understand regression to the mean.

I want to know what's the relationship of the objective markers and subjective markers, so I can begin to think about how much is the placebo response. I always tell them I don't know. If I knew how to reduce -- increase the difference between drug and placebo, I'd be a rich man, I wouldn't be an academic. What I usually wind up saying is: get a new drug. And they pay me pretty well for that. And the reason is that they don't know anything about natural history. We're trying to harness something, and I just want to say -- I've done a lot of natural history controls, and that's more interesting than the rest of the experiments, because the amount of improvement people show after entering the trial without any treatment is unbelievable.

I just want to say we need to look at other things besides the placebo effect if we want to control the placebo response in a randomized controlled trial. I want to say that going forward. But I also want to say that we need a little bit of darkness. We need to be able to say, you know, I disagree with you, I think this other data says otherwise. One of the things I've learned doing placebo research is that there's a paper that contradicts your paper real quickly, and there's lots of contradictory information. It's very easy to say you're wrong, and we don't say it enough.

I want to take one example -- please forgive me -- I know that of my own research it could be said, Ted, you're wrong. But I just want to say something. Consistently over these two days of talks, everyone talks about the increase of the placebo response over time. No one refers to the article published in 2022 in BMJ; the first author was Mark Stone and the senior author was Irving Kirsch. They analyzed all FDA data -- Mark Stone is in the Division of Psychiatry at CDER at the FDA -- all data of placebo-controlled trials in major depressive disorder. They had over 230 trials, way more than 70,000 patients, and they analyzed the trend over time, from 1979 to the publication. There was no increase in the placebo effect.

Are they right or are other people right? Nothing is one hundred percent clear right now and we need to be able to contradict each other when we get together personally and say, I don't think that's right, maybe that's right. I think that would help us. And the last thing I want to say is that some things were missing from the conference that we need to include in the future. We need to have ethics. Placebo is about ethics. If you're a placebo researcher in placebo controlled trials, that's an important question:

What are we talking about in terms of compromising ethics? There was no discussion -- we didn't have time -- but in the future, let's do that.

And the last thing I would say is, we need to ask patients what their experience is. I've got to say, I've been around for a long time. But the first time I started asking patients what their experiences were -- they were in double-blind placebo or open-label placebo trials -- I did it way after they finished the trial; the trial was over, and I actually took notes and went back and talked to people. They told me things I didn't even know about. And we need to have that in conferences. What I want to say, along those lines, is I feel so much healthier because I'm an older person, and this younger crowd here is significantly younger than me.

Maybe Matt and I are the same age, I don't know, but I think this is really one of the best conferences I ever went to. It was real clear data. We need to do lots of other things in the future. So with that, Matt, feed me some questions.

MATTHEW RUDORFER: Okay. Thanks. I didn't realize you were also 35. But okay. [LAUGHTER].

MATTHEW RUDORFER: I'll start off with a question of mine. The recent emergence of intravenous ketamine for resistant depression has introduced an interesting methodologic approach that we have not seen in a long time, and that is the active placebo. Where the early trials just used saline, more recently we have seen the benzodiazepine midazolam -- while not really mimicking the full dissociative effect that many people get from ketamine, the idea is for people to feel something, some kind of buzz, so that they might believe they're on some active compound and not just saline. And I wonder if the panel has any thoughts about the merits of using an active placebo, and is that something the field should be looking into more?

TED KAPTCHUK: I'm going to say something. Irving Kirsch published a meta-analysis of studies that used atropine as a control in depression studies. He felt that it made it difficult to detect a placebo-drug difference. But another meta-analysis said that was not true. That was common in the '80s; people started thinking about that. But I have no idea how to answer your question.

MICHAEL DETKE: I think that's a great question. In the presentations yesterday about devices, Dr. Lisanby was talking about the ideal sham, and I think it's very similar: the ideal active placebo would have none of the efficacy of the drug in question, but would have exactly the same side effects and all other features. Of course that's attractive, but of course we would probably never have a drug that's exactly like that. I think midazolam was a great thing to try with ketamine. It's still not exactly the same. But I'd also add that it's not black and white. It's not like we need to do this with ketamine and ignore it for all of our other drugs. All of our drugs have side effects.

Arguably, if you take really big chunks, like classes of relatively modern antidepressants, antipsychotics and psychostimulants, those are in order of bigger effect sizes in clinical trials, psychostimulants versus antipsychotics versus -- and they're also roughly, I would argue, in order of unblinding, of functional unblinding. And in terms of magnitude, Zyprexa will make you hungry. And also speed of onset of some of the adverse effects: stimulants and some of the second-generation-and-beyond antipsychotics have pretty noticeable side effects for many subjects, and relatively rapidly. So I think those are all important features to consider.

CRISTINA CUSIN: Dr. Schmidt?

LIANE SCHMIDT: I think using midazolam could give some sensory sensations, so the patients actually can say there's some effect on the body, like, immediately. But this actually raises the question whether the dissociations we observe in some patients with ketamine infusions play a role in the antidepressant response. It's still an open question, so I don't have the answer to that question. And midazolam doesn't really induce dissociations. I don't know, maybe you can isolate the dissociations you get on ketamine. But patients might even be educated to expect dissociative experiences, and basically when they don't have them, they make the midazolam experience something negative. So yeah, self-fulfilling prophecies might come into play.

CRISTINA CUSIN: I want to add, for five seconds, because I ran a large ketamine clinic. We know very little about the role of placebo in maintaining an antidepressant response, while the dissociation often wears off over time -- it's completely separate from the antidepressant effect. We don't have long-term placebo studies; the studies are extremely short-lived and we study the acute effect. But we don't know how to sustain or maintain the response, or what the role of the placebo effect is in long-term treatments. So that's another field that really is open to investigation. Dr. Rief.

WINFRIED RIEF: Following up on the issue of active placebos, I just want to mention that we did a study comparing active placebos to passive placebos and showing that active placebos are really more powerful. And I think the really disappointing part of this news is that it questions the blinding of our typical RCTs comparing antidepressants versus placebos, because many patients who are in the active group, the drug group, perceive these onset effects, and this will further boost the placebo mechanisms in the drug group -- mechanisms that do not exist in the passive placebo group. This is a challenge that further questions the validity of our typical RCTs.

CRISTINA CUSIN: Marta.

MARTA PECINA: Just a quick follow-up to what Cristina was saying, too: we need to clarify whether we want to find an active control for the dissociative effects or for the antidepressant effects. I think the approach will be very different. And this applies to ketamine but also to psychedelics, because we're having this discussion as well. So when thinking about how to control for or how to blind -- these treatments are very complicated; they have multiple effects. We just need to have the discussion of what we are trying to blind, because the mechanism of action of the blinding drug will be very different.

TED KAPTCHUK: Can I say something about blinding? Robertson, who is the author of the 1970 -- no -- 1993 New England Journal paper saying that the placebo effect is a myth.

In 2022, he published in BMJ the largest -- he called it a mega meta-analysis -- on blinding. And he took 144 randomized controlled trials that included nonblinded evidence on the drug versus blinded evidence on the drug. I'm not going to tell you the conclusion, because it's unbelievable. But you should read it, because it really influences -- it would influence what we think about blinding. That study was just recently replicated, on a different set of patients with procedures, in JAMA Surgery three months ago. And blinding, like placebo, is more complicated than we think. That's what I wanted to say.

MATTHEW RUDORFER: Another clinical factor that's come up during our discussion has been the relationship of the patient to the provider. We saw data showing that a warm relationship seemed to enhance therapeutic response, I believe, to most interventions. And I wonder what the panel thinks about, on the one hand, the rise of shortened clinical visits -- now that, for example, antidepressants are mostly given by busy primary care physicians and not specialists, and the so-called med check is a really kind of quickie visit -- and, especially since the pandemic, the rise of telehealth, where a person might not ever even meet their provider in person. Is it possible we're on our way to where a clinical trial could involve, say, mailing medication every week to a patient, having them do their weekly ratings online, and eliminating a provider altogether, just looking at the pharmacologic effect?

I mean, that probably isn't how we want to actually treat people clinically, but in terms of research, say, early phase efficacy, is there merit to that kind of approach?

LUANA COLLOCA: I'll comment on this, Dr. Rudorfer. We're very interested to see how telemedicine or virtual reality can affect placebo effects, and we're modeling in the lab placebo effects induced via in-person interaction.

There's also an avatar in virtual reality. And actually we found placebo effects in both settings. However, when we look at empathy, the avatar doesn't elicit any empathy in the relationship; we truly need the in-person connection to have empathy. So that suggests there are outcomes that are affected by having in-person versus telemedicine or remote interactions, but the placebo effects persist in both settings. The empathy is differently modulated, and interestingly, in our data, empathy mediated placebo effects only in the in-person interactions. There is still a value in telemedicine: effects that bypass empathy completely.

MATTHEW RUDORFER: Dr. Hall.

KATHRYN HALL: Several of the large studies, like the Women's Health Study, the Physicians' Health Study and, more recently, VITAL, did exactly that, where they mailed these pill packs. And the population, obviously, is clinicians, so they are very well trained and well behaved. And they followed them for years, but there's very little contact with the providers, and you still have these giant -- I don't know if you can call them placebo effects -- but certainly in many of these trials the drugs they're studying have not proven to be more effective than placebo.

MATTHEW RUDORFER: Dr. Atlas.

LAUREN ATLAS: I wanted to chime in briefly on this important question. I think the data that was presented yesterday on first impressions of providers is relevant here, because it suggests that even when we use things like soft dot (phonetic) to select physicians and we have headshots, we're really making these decisions about who to see based on these kinds of first impressions and facial features. Having the actual interactions with providers is critical for getting beyond that kind of factor that may drive selection. So I think if we have situations where there's a reduced chance to interact, first of all, people are bringing expectations to the table based on what they know about the provider, and then you don't really have the chance to build on that without the actual therapeutic alliance. That's why I think, even though our study was done in an artificial setting, it really does show how we make these choices when there are bios for physicians and things available for patients to select from. I think there's a really important expectation being brought to the table before the treatment even occurs.

MATTHEW RUDORFER: Thanks. Dr. Lisanby.

SARAH “HOLLY” LISANBY: Thanks for raising this great question, Matt. I have a little bit of a different take on it. Equity in access to mental health care is a challenge, and the more we can leverage technology to extend the reach of mental health care, the better. And so telemedicine and telepsychiatry -- we've been thrust into this era by the pandemic, but it existed before the pandemic as well. And it's not just about telepsychotherapy or teleprescription and remote monitoring of pharmacotherapy; digital remote neuromodulation is also a thing now. There are neuromodulation interventions that can be done at home that are being studied, and so there have been trials of transcranial direct current stimulation at home with remote monitoring. There are challenges in those studies differentiating between active and sham. But I think you're right that we may have to rethink how we control remote studies when the intensity of the clinician contact is very different. I do think that we should explore these technologies so that we can extend the reach and extend access to research and to care for people who are not able to come into the research lab setting.

TED KAPTCHUK: May I add something on this? It's also criticizing myself. In 2008, I did this very nice study showing you could increase the doctor/patient relationship. And as you increase it, the placebo effect got bigger and bigger, like a dose response. A team in Korea that I worked with replicated that. I just published that replication.

The replication came out with the exact opposite results: less doctor/patient relationship -- less intrusive, less empathy -- got better effects. We're dealing with very complicated, culturally constructed issues, and I just want to put it out there: the sand is soft. I'm really glad that somebody contradicted a major study that I did.

LUANA COLLOCA: Exactly. The cultural context is so critical: what we observe in one context, in one country, even within the same in-group or out-group, can be completely different in Japan, China or somewhere else, or the Americas, South Africa. So we need larger studies and more cross-country collaborations.

MATTHEW RUDORFER: Dr. Schmidt.

LIANE SCHMIDT: I just wanted to raise a point -- it's more like a comment. There's also very interesting research going on into the interactions between humans and robots, and usually humans treat robots very badly. So I wonder: here we focus on very human traits, like the empathy and competence we look at. But when it comes to artificial intelligence, for example, and when we have to interact with algorithms, all these social interactions might turn out completely different, actually, and have different effects on placebo effects. Just a thought.

MATTHEW RUDORFER: Dr. Rief.

WINFRIED RIEF: Yesterday, I expressed a belief in showing more warmth and competence, but I'll modify it a little bit today, because I think the real truth became quite visible today, and that is that there is an interaction between these nonspecific placebo effects and the drug effect -- in many cases, at least. We don't know whether there are exceptions to this rule, but in many cases we have an interaction. And to learn about the interaction, we need study designs that modulate not only drug intake versus placebo intake, but also the placebo mechanisms, the expectation mechanisms, the context of the treatment. Only if we have these 2 by 2 designs, modulating drug intake and modulating context and psychological factors, do we learn about the interaction. You cannot learn about the interaction if you modulate only one factor.

And, therefore, I think, as Luana and others have said, an interaction can be quite powerful and effective in one context but maybe even misleading in another context. I think this is proven; we have to learn more about that. And all the studies that have been shown, from basic science to application, where there could be an interaction -- they're all pointing in this direction, and to the necessity that we use more complex designs to learn about the interaction.
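The 2 by 2 logic described here boils down to a single interaction contrast: does the drug effect differ between context conditions? A minimal sketch with invented cell means, just to make the contrast concrete:

```python
# Hypothetical sketch: the interaction term in a 2x2 (drug x context)
# design, computed from toy cell means. Numbers are illustrative only.

def interaction(means):
    """means[(drug, context)] -> mean outcome, with drug and context
    each coded 0/1. Interaction = (drug effect under enhanced context)
    minus (drug effect under neutral context)."""
    drug_eff_ctx1 = means[(1, 1)] - means[(0, 1)]
    drug_eff_ctx0 = means[(1, 0)] - means[(0, 0)]
    return drug_eff_ctx1 - drug_eff_ctx0

cells = {(0, 0): 2.0, (1, 0): 5.0,   # neutral context: drug adds 3
         (0, 1): 6.0, (1, 1): 12.0}  # enhanced context: drug adds 6
ix = interaction(cells)
```

A nonzero contrast like this is exactly what a one-factor design cannot reveal: modulating only the drug, or only the context, leaves the interaction unidentified.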

MATTHEW RUDORFER: Yes. And the rodent studies we've seen, I think, have a powerful message for us just in terms of being able to control a lot of variables that are just totally beyond our control in our usual human studies. It always seemed to me, for example, if you're doing just an antidepressant versus placebo trial in patients, well, for some people going into the clinic once a week to get ratings, that might be the only day of the week that they get up and take a shower, get dressed, have somebody ask them how they're doing, have some human interaction. And so showing up for your Hamilton rating could be a therapeutic intervention that, of course, we usually don't account for in the pharmacotherapy trial. And the number of variables really can escalate in a hurry when we look at our trials closely.

TED KAPTCHUK: Tor wants to say something.

TOR WAGER: Thanks, Ted.

I wanted to add on to the interaction issue, which came up yesterday, which Winfried and others just commented on, because it seems like it's really a crux issue. If the psychosocial or expectation effects and other things like that are entangled with specific effects, so that one can influence the other and they might interact, then, yeah, we need more studies that independently manipulate specific drug or device effects and other kinds of psychological effects. And I wanted to bring this back up again because this is an idea that's been out there for a long time. I think the first review on this was in the '70s, like '76 or something, and it hasn't really been picked up, for a couple of reasons. One, it's hard to do the studies. But second, when I talk to people who are in industry and pharma, they are very concerned about changing the study designs at all for FDA approval.

And since we had some, you know, FDA and regulatory perspectives here yesterday, I wanted to bring that up and see what people think, because I think that's been a big obstacle. And if it is, then that may be something that would be great for NIH to fund instead of pharma companies because then there's a whole space of drugs, psychological or neurostimulation psychological interactions, that can be explored.

MATTHEW RUDORFER: We also had a question. Yesterday there was discussion of sex differences in placebo response in a naloxone trial. And we wonder if there are any further thoughts on studies of sex differences, or diversity in general, in placebo trials. Yes.

LUANA COLLOCA: We definitely see sex differences in placebo effect, and I showed also, for example, that women responded to arginine vasopressin in a way that we don't observe in men.

But also you asked about diversity. Actually, in our paper just accepted today, we look at where people are living in the state of Maryland, and even the location where they are based makes a difference in placebo effects. People who live in the most distressed areas, in the greater Baltimore area, tended to have lower placebo effects as compared to less distressed locations. We defined that by a radius criterion, and it is not simply about race, because we take into account the education, the income and so on. So it is interesting, because across studies we consistently see an impact of diversity. And in that sense, I echo the comment that we need to find a way to reach out to these people and truly improve access and the opportunity for diversity. Thank you for asking.

MATTHEW RUDORFER: Thank you. Another issue that came up yesterday had to do with pharmacogenomics. And there was a question, or a question/comment, about using candidate approaches and whether they are problematic.

KATHRYN HALL: What approaches?

MATTHEW RUDORFER: Candidate genes.

KATHRYN HALL: I think we have to start where we are. The psychiatric field has had a really tough time with genetics. They've invested a lot and, sadly, don't have as much to show for it as they would like to. And I think that has really tainted this quest for genetic markers of placebo and related studies, these interaction factors. But it's really important, I think, not to use that to stop us from looking forward and identifying what's there. Because when you start to scratch the surface, there are interactions. You can see them. They're replete in the literature. And what's really fascinating is that everybody who finds them doesn't see them when they report their study. Even some of these vasopressin studies, not yours, Tor, obviously, but I was reading one the other day where they had seen tremendous differences by genetics in response to arginine vasopressin. And they totally ignored what they were seeing in placebo and talked about who responds to drug. So I think that not only do we need to start looking for what's happening, we need to start being more open minded and paying attention to what we're seeing in the placebo arm, and accounting for that, taking it into account to understand what we're seeing across a trial in total.

CRISTINA CUSIN: I'll take a second to comment on patient selection, and trying to figure out, depending on the site, who are the patients who enter a depression clinical trial. If we eliminate from the discussion the professional patients, we think about the patients who are more desperate, patients who don't have access to care, patients who are more likely to have psychosocial stressors, or, at the other extreme, patients who are highly educated and seek trials out. But they're certainly not representative of the general populations we see in the clinical setting.

They are somewhat different. And then if you think about the psychedelics trial, they go from 5,000 patients applying for a study and the study ends up recruiting 20, 30. So absolutely not representative of the general population we see in terms of diversity, in terms of comorbidities, in terms of psychosocial situations. So that's another factor that adds to the complexity of differentiating what happens in the clinical setting versus artificial setting like a research study. Tor.

MATTHEW RUDORFER: The question of who enters trials, and I think the larger issue of diagnosis in general, has really been a challenge to the field for many years. Ted and I go back a ways. Just looking at depression, which of course has dominated a lot of our discussion these last couple of days, with good reason: my understanding is that the good database of placebo controlled trials goes back to the late '90s, which is what we heard yesterday. And if you go back further, the tricyclic era not only dealt with different medications, which we don't want to go back to, but think about practice patterns then. Most nonspecialists steered clear of the tricyclics; they required a lot of hands on care. They required slow upward titration. They had some concerning toxicities, and so it was typical that psychiatrists would prescribe them but family docs would not. And that also had the effect of a naturalistic screening, that is, people would have to reach a certain level of severity before they were referred to a psychiatrist to get a prescription for medication.

More mildly ill people either wound up, probably inappropriately, on tranquilizers or no treatment at all, and moderately to severely ill people wound up on tricyclics, and of course inpatient stays were common in those days, which again was another kind of screening. So it was the sort of thing, I mean, in the old days I heard people talk about how, if you went to the inpatient ward, you could easily collect people to be in a clinical trial, and you kind of knew that they were vetted already, that they had severe depression. The general sense was that the placebo response would be low, though there's no real evidence for that. But once we had the SSRIs, the market vastly expanded, because they're considered more broad spectrum. People with milder illness and anxiety disorders now are appropriate candidates, and they're easier to dispense. The concern about overdose is much less, and so they're mostly prescribed by nonspecialists. So we've seen a lot of large clinical trials where it doesn't take much to reach the threshold for entry. If I go way back, and this is just one of my personal concerns over many years, the Feighner criteria, which I think were the first good set of diagnostic criteria based on data, based on literature, were published in 1972, and to have a diagnosis of major depression they called for four weeks of symptoms. Actually, literally, I think it said one month.

DSM-III came out in 1980, and it called for two weeks of symptoms. I don't know -- I've not been able to find any documentation of how the one month went to two weeks, except that the DSM, of course, is the manual that's used in clinical practice. And you can understand: you might not want to have too high a bar to treat people who are seeking help. But I think one of the challenges of the DSM is that it was not meant as a research manual, though that's often how it's used. So ever since that time, those two weeks have gotten reified, and my point is that it doesn't take much to reach diagnostic criteria for major depression in what is now DSM-5-TR. So if someone is doing a clinical trial of an antidepressant, it is tempting to enroll people who honestly meet those criteria, but the criteria are not very strict. So I wonder whether that contributes to the larger placebo effect that we see today.

End of soapbox. I'd like to revisit an excellent point that Dr. Lisanby raised yesterday, which has to do with the Research Domain Criteria, the RDoC criteria. I don't know if anyone on the panel has had experience using them in any trials and whether you see any merit there. Could RDoC criteria essentially enrich the usual DSM-type clinical criteria, in terms of trying to more finely differentiate subtypes of depression that might respond differently to different treatments?

MODERATOR: I think Tor has been patient on the hand off. Maybe next question, Tor, I'm not sure if you had comments on previous discussion.

TOR WAGER: Sure, thanks. I wanted to make a comment on the candidate gene issue. And I think it links to what you were just saying as well, doctor, in a sense. It relates to the issue of predicting individual differences in placebo effects and using that to enhance clinical trials, which has been a really difficult issue. In genetics, I think what happened, as many of us know, is that there were many findings on particular candidate genes, especially COMT and a particular set of other genes, in Science and Nature, and none of those really replicated when larger genome-wide association studies started being done. And the field of genetics really focused in on reproducibility and replicability and on larger sample sizes. My genetics colleagues tell me something like 5,000 is a minimum for even making it into their database of genetic associations. And so that makes it really difficult to study placebo effects in sample sizes like that. At the same time, there's been this trend in psychology, and in science in general, really, towards reproducibility and replicability, probably in part evoked by John Ioannidis's provocative claims that most findings are false, but there's something really there.

There have been many teams of people who have tried to pull together, like Brian Nosek's work with the Open Science Framework and the many-labs studies to replicate effects in psychology with much higher power. So there's this increasing effort to pull together consortia to really test these things rigorously. And I wonder if -- we might not have a GWA study of placebo effects in 100,000 people or something, which is what would convince a geneticist that there's some kind of association. I'm wondering what the ways forward are, and I think one way is to increasingly come together to pool studies, or do larger studies that are preregistered, and even registered reports, which are reviewed before they're published, so that we can test some of these associations that have emerged in what we might call early studies of placebo effects.

And I think if we preregistered and found something in sufficiently large and diverse samples, that might make a dent in convincing the wider world that there is something we can use going forward in clinical trials, and that pharma might be interested in as well. That's my take on that, and I'm wondering what people think.
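The sample-size pressure described here can be made concrete with a standard power approximation for a two-group comparison. The standardized effect size of 0.1 and the choice of thresholds below are illustrative assumptions, not numbers from the discussion; the point is only how sharply the genome-wide significance threshold inflates the required sample:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size_d, alpha, power=0.8):
    """Approximate per-group sample size (normal approximation) needed to
    detect a standardized mean difference d at two-sided level alpha."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the test
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size_d ** 2)

# The same small (hypothetical) effect at a conventional threshold
# versus the genome-wide significance threshold of 5e-8.
print(n_per_group(0.1, 0.05))
print(n_per_group(0.1, 5e-8))
```

The second number comes out several times larger than the first, which is one way to see why candidate-gene-sized placebo studies cannot meet genetics-style evidential standards without pooling across consortia.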

KATHRYN HALL: My two cents. I completely agree with you. I think the way forward is to pool our resources to look at this and not simply stop. When things don't replicate, we need to understand why they don't replicate. I think there's a taboo on looking beyond: if you prespecified it and you don't see it, then it should be over. But at least at this early stage, when we're trying to understand what's happening, I think we need to allow ourselves deeper dives, not for action but for understanding.

So I agree with you. Let's pool our resources and start looking at this. The other thing I would like to point out that's interesting is that when we've looked at the placebo arm of some of these clinical trials, we actually learn a lot about natural history. We just did one in Alzheimer's disease, and in the placebo arm the genome-wide significant hit was CETP, which is now a clinical target in Alzheimer's disease. You can learn a lot by looking at the placebo arms of these studies, not just about whether or how the drug is working, but about what's happening in the natural history of these patients that might change the effect of the drug.

TED KAPTCHUK: Marta, did you have something to say; you had your hand up.

MARTA PECINA: Just a follow up to what everybody is saying. I do think the issue of individual variability is important. One thing that maybe explains some of what was also being said at the beginning, that there's a little bit of a lack of consistency, or of a way to put all of these findings together, is the fact that we think about it as one single placebo effect. And we do know that there's not one single placebo effect, even across different clinical conditions. Is the neural placebo effect the same in depression as it is in pain?

Or are there aspects that are the same, for example expectancy processing, but other things that are very specific to the clinical condition, whether it's pain processing, mood or something else? So I think we face the reality that, from a neurobiology perspective, a lot of the research has been done in pain, and there's still very little being done, at least in psychiatry, across many other clinical conditions, so we just don't know. And we don't really even know how the placebo effect looks when you have both pain and depression, for example.

And so those are still very open questions that kind of reflect our state, right, that we're making progress but there's a lot to do.

TED KAPTCHUK: Winfried, did you want to say something? You have your hand up.

WINFRIED RIEF: I wanted to come back to the question of whether we really understand this increase of placebo effects. I don't know whether you have (indiscernible) for that. But as a scientist, I can't believe that people nowadays react more to placebos than they did 20 years ago. So there might be other explanations for this effect, like changes in trial designs. We have more control visits nowadays, maybe, compared to 30 years ago. But there could also be other factors, like publication bias, which was maybe more frequent 30 years ago than it is nowadays, with the need for trial registration. So there are a lot of methodological issues that could explain this increase of placebo effects, or of responses in the placebo groups. I would be interested in whether you think this increase is well explained, or what your explanations for it are.

TED KAPTCHUK: Winfried, I want to give my opinion. I did think about this issue. I remember the first time it was reported, by scientists in Cleveland, in 40, 50 patients, and I said, oh, my God, okay, and the newspapers had it all over: the placebo effect is increasing. There's this boogeyman around, and everyone started believing it. I've been collecting papers, and I've consistently found as many papers saying there's no change over time as saying there are changes over time. When I read the original article, I said, of course there are differences. The patients that got recruited in 1980 were different than the patients in 1990 or 2010. They were either more chronic or less chronic.

They were recruited in different ways, and that's really an easy explanation of why things change. Natural history changes. People's health problems are different. And I actually think that Stone's meta-analysis, with 70,033 patients, says it very clearly: it's a flat line from 1979. And the more data you have, the more you have to believe it. That's all. That's my personal opinion. And I think we actually are very deeply influenced by the media. I mean, I can't believe this:

The mystery of the placebo. We know more about placebo effects than about many drugs on the market, at least. That's my opinion. Thanks, Winfried, for letting me say it.

MATTHEW RUDORFER: Thanks, Ted.

We have a question for Greg. The question is, I wonder what the magic of 90 seconds is? Is there a physiologic basis to the turning point when the mouse changes behavior?

GREGORY CORDER: I think I addressed it in a written post somewhere. We don't know. We see a lot of variability in those animals. In this putative placebo phase, some mice will remain on that conditioned side for 40 seconds, 45 seconds, 60 seconds, or they'll stay there the entire three minutes of the test. We're not exactly sure what's driving the difference between those animals. These are both males and females; we see the effect in both male and female C57BL/6 mice, a genetically inbred strain. We always try to restrict the time of day of testing. We do reverse-light testing, so this is during the animals' wake cycle.

And there are things like dominance hierarchies within the cages, alphas versus betas; they may have different pain thresholds. But as for the breaking of whatever the antinociceptive effect is: they're standing on a hot plate for quite a long time. At some point those nociceptors in the periphery are going to become sensitized and signal. And at some point it's to the animal's advantage to pay attention to pain. You don't necessarily want to go around not paying attention to something that's potentially very dangerous or harmful to you. We would have to scale up the number of animals substantially, I think, to really start to parse out the differences that would account for that. But that's an excellent point, though.

MATTHEW RUDORFER: Carolyn.

CAROLYN RODRIGUEZ: I want to thank all of today's speakers for wonderful presentations. I just wanted to go back for a second to Dr. Pecina's point about the placebo effect not being a monolith, and also thinking about individual disorders.

And so I'm a clinical trialist and do research in obsessive compulsive disorder, and one of the things written in the literature and meta-analyses is that OCD has one of the lowest placebo response rates. And so, from what we gathered today, to turn the question on its head: is that the case, why is that the case, and does that say something about OCD pathology? Right? How can we get more refined in terms of different domains and really thinking about the placebo effect?

So I just want to say thank you again for a lot of food for thought.

MATTHEW RUDORFER: Thanks. As we're winding down, one of the looming questions on the table remains what are research gaps and where do you think the next set of studies should go. And I think if anyone wants to put some ideas on the table, they'd be welcome.

MICHAEL DETKE: One of the areas that I mentioned in my talk that is hard for industry to study, because there's not a big incentive: I talked about having third-party reviewers review source documents and videos or audios of the HAM-D, MADRS, whatever, and that there's not much controlled evidence for it.

And, you know, it's a fairly simple design: within a large controlled trial, do this with half the sites and don't do it with the other half.

Blinding isn't perfect. I haven't thought this through, and it can probably be improved upon a lot, but imagine you're the sponsor who's paying $20 million over three years to run this clinical trial. You want to test your drug as fast as you possibly can. You don't really want to be paying for this methodology.

So that might be -- earlier on, Tor or someone mentioned there might be some specific areas where this might be something for NIH to consider picking up. Because that methodology, the third-party remote reviewer, is being used in hundreds of trials today, I think. So there's an area to think about.

MATTHEW RUDORFER: Thanks. Holly.

SARAH “HOLLY” LISANBY: Yeah. Carolyn just mentioned one of the gap areas, really trying to understand why some disorders are more amenable to the placebo response than others and what can that teach us. That sounds like a research gap area to me.

Also, throughout these two days we've heard a number of research gap areas having to do with methodology: how to do placebos or shams, how to assess outcome, how to protect the blind, how to select what your outcome measures should be.

And then also today my mind was going very much toward what preclinical models can teach us, and the genetics, the biology underlying individual differences in placebo response.

There may be clues there. Carolyn, to your point about placebo response being lower in OCD: and yet there are some OCD patients who respond. What's different about them that makes them responders?

And so, studies that look within placebo response versus nonresponse, or gradations of response, or durability of response, and the mechanisms behind that.

These are questions that I think may ultimately facilitate getting drugs and devices to market, but certainly are questions that might be helpful to answer at the research stage, particularly at the translational research stage, in order to inform the design of pivotal trials that you would ultimately do to get things to market.

So it seems like there are many stages before getting to the ideal pivotal trial. So I really appreciate everyone's input. Let me stop talking because I really want to hear what Dr. Hall has to say.

KATHRYN HALL: I wanted to come back to one of my favorite gaps, this question of the increasing placebo effect. I think it's an important one because so many trials are failing these days. And not all trials are the same.

And what's really fascinating to me is that you see really great results in Phase II clinical trials, and then what's the first thing you do as a pharma company when you get a good result? You put out a press release.

And what's the first thing you're going to do when you enroll in a clinical trial? You're going to read that press release. You're going to read as much as you can about the drug or the trial you're enrolling in. And how placebo-boosting is it going to be to see that this trial had amazing effects on the condition you're struggling with?

Then, lo and behold, we go to Phase III, and -- we're actually writing a paper on this -- how many times we see the words "unexpected results." I think we saw them here today, today or yesterday. This should not be unexpected. When your Phase III trial fails, you should not be surprised, because this is what's happening time and time again.

And I think -- yeah, I agree, Ted, this is a modern time, but there's so much information out there, so much information to sway us toward placebo responses, that I think that's a piece of the problem. And finding out what the problem is, I think, is a really critical gap.

MATTHEW RUDORFER: Winfried.

WINFRIED RIEF: Yeah. May I follow up, since I think it fits quite nicely with what has been said before, and I want to answer directly to Michael Detke.

At first glance, it seems less expensive to do the trials the way we do, with one placebo group and one drug arm, and we try to keep the context constant. But this is the problem. We have a constant context without any variation, so we don't learn under which context conditions this drug is really effective, and under which context conditions the drug might not be effective at all.

And therefore I think the current strategy is more like a lottery. By chance you can land in the little window where the drug shows its most positive efficacy, but you can also land in the little window, or the big window, where the drug is not able to show its efficacy.

And therefore I think, at second glance, it's a very expensive strategy to use only one single context to evaluate a drug.

MATTHEW RUDORFER: If I have time for--

TED KAPTCHUK: Marta speak, and then Liane should speak.

MARTA PECINA: I just wanted to add a minor comment here, which is that we're going to have to move on from the idea that giving someone a placebo is enough to induce positive expectancies, and recognize the fact that expectancies evolve over time.

So at least in some of the data that we've shown, and it's a small sample, but still, we see that 50% of the subjects who are given a placebo don't have drug assignment beliefs. And so that is a very large amount of variability that we are confusing with everything else.

And so I do think that it is really important, whether in clinical trials or in research, to really develop new ways of measuring expectancies, and to allow expectancies to be measured over time, because they do change. We have some prior expectancies, and then we have some expectancies that are learned based on experience. And I do think this is an area where the field could improve relatively easily: assess expectancies better, measure expectancies better.
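One minimal way to operationalize the point about measuring expectancies repeatedly rather than once is a simple trial-by-trial delta-rule update, a common modeling choice for learned expectancies. The learning rate and the relief ratings below are entirely hypothetical:

```python
def update_expectancy(expectancy, outcome, learning_rate=0.3):
    """Delta-rule update: move the expectancy toward the experienced outcome."""
    return expectancy + learning_rate * (outcome - expectancy)

# Hypothetical trial-by-trial relief ratings (0-10) reported by a participant.
outcomes = [2, 3, 6, 7, 7, 8]

expectancy = 5.0  # prior expectancy, before any experience with the treatment
trajectory = [expectancy]
for o in outcomes:
    expectancy = update_expectancy(expectancy, o)
    trajectory.append(expectancy)

print([round(e, 2) for e in trajectory])
# The expectancy first drops after early disappointing outcomes,
# then recovers as experienced relief improves.
```

A single baseline measurement would record only the prior (5.0 here) and miss the whole trajectory, which is exactly the variability the panel is discussing.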

TED KAPTCHUK: Liane, why don't you say something, and Luana, and then Cristina.

LIANE SCHMIDT: So I wanted to -- maybe another open gap is about cognition: how can studying placebo help us better understand human reasoning, and vice versa? All the biases we have, cognitive processes like motivation, for example, or memory, and the good-news and optimism biases: how do they contribute to placebo effects on the patient side, but also on the clinician side, when clinicians have to make a diagnosis or judge treatment efficacy based on some clinical scale?

So basically, using tools from cognition, from psychology or cognitive neuroscience, to better understand the cognitive processes that intervene between an expectation and a behavioral readout, a symptom or a neural activation: what comes in between, how is it translated, basically, from cognition to prediction?

LUANA COLLOCA: I think we have tended to treat expectation as a static measurement, when in reality we know that what we expected at the beginning of this workshop is slightly different from what we expect by the end, after what we have been hearing and learning.

So expectation is a dynamic phenomenon, and the assumption that we can predict placebo effects with one measurement of expectation can be very limiting in terms of applications. Rather, it is important to measure expectation over time, and also to realize that there are so many nuances of expectations, as Liane just mentioned.

There are people who say, I don't expect anything, I've tried everything, or people who say, oh, I truly want to feel better. And these are also problematic patients, because having an unrealistic expectation can often destroy placebo effects through a violation of expectancies, as I showed.

TED KAPTCHUK: Are we getting close? Do you want to summarize? Or who's supposed to do that? I don't know.

CRISTINA CUSIN: I think I have a couple of minutes for remarks. There's so much going on, and more questions than answers, of course.

This has been a fantastic symposium, and I was trying to pitch an idea about possibly organizing a summit with all the panelists, all the presenters, and everyone else who wants to join us. Because I think that with a coffee or a tea in our hands, talking not through a Zoom video, we could actually come up with some great ideas and some collaboration projects.

Anyone who wants to email us, we'll be happy to answer. And we're always open to collaborating, starting a new study, bouncing new ideas off each other. This is what we do for a living. So we're very enthusiastic about people asking difficult questions.

And some of the questions that are ongoing, and I think will be future areas, are what we were talking about a few minutes ago. We don't know if a placebo responder in a migraine study, for example, would be a placebo responder in a depression study or an IBS study. We don't know whether this person is going to be a universal placebo responder, or whether the context includes the type of disease they're suffering from, so it's going to be fairly different. And why do some disorders have a lower placebo response rate overall compared to others? Is it chronicity? Does a relapsing-remitting disorder have a higher chance of placebo response because the system can be modulated, versus a disorder that is considered more chronic and stable? A lot of this is not known about the natural history.

It also comes to mind that we almost never have a threshold for the number of prior episodes of depression to enter a trial, or for how chronic it has been, or years of depression, or other factors that can clearly change the probability of responding to a treatment.

We heard about methodology for clinical trial design, and how patients could be responsive to placebo or sham, or responsive to drug. How about patients who could respond to both? We have no idea how many of those patients, universal responders, are undergoing a trial, unless we do a crossover. And we know that crossover is not a popular design for drug trials.

So we need to figure out also aspects of methodology: how to assess outcome, what the best way is to assess the outcome that we want, whether it is clinically relevant, how to protect the blind, and how to assess expectations and how they change over time.

We didn't hear much during the discussion about the role of mindfulness in pain management, and I would like to hear much more about how we're doing in identifying the brain areas involved, and whether we can actually intervene on those areas with devices to help with pain management. That's one of the biggest problems we have in terms of clinical care.

In the eating disorders area, creating computational models to influence food choices, and, again, devices or treatments specifically shifting the balance toward healthier food choices: I can see an entire field developing. Because most of the medications we prescribe for psychiatric disorders affect food choices, and there's weight gain, potentially leading to obesity and cardiovascular complications. So there's an entire field of research we have not touched on.

And on the role of animal models in translational research: I don't know if animal researchers, like Greg, talk much with clinical trialists. I think that would be a much-needed cross-fertilization, and we can definitely learn from each other.

And it has just been fantastic. I thank all the panelists for their willingness to work with us, for their time and dedication, and for so many meetings to discuss and agree on the program and to divide and conquer different topics. It has been a phenomenal experience, and I'm very, very grateful.

And the NIMH staff have also been amazing; collaborating with all of them, they were so organized. Just a fantastic panel. Thank you, everybody.

MATTHEW RUDORFER: Thank you.

TOR WAGER: Thank you.

NIMH TEAM: Thanks from the NIMH team to all of our participants here.

(Meeting adjourned)


Workshop description: Day Two: Placebo Workshop: Translational Research Domains and ... The National Institute of Mental Health (NIMH) hosted a virtual workshop on the placebo effect. The purpose of this workshop was to bring together experts in neurobiology, clinical trials, and regulatory science to examine placebo effects in drug, device, and psychosocial interventions for mental health conditions. Topics included interpretability of placebo signals within the context of ...