Heuristics: Definition, Examples, And How They Work

Benjamin Frimodig

Science Expert

B.A., History and Science, Harvard University

Ben Frimodig is a 2021 graduate of Harvard College, where he studied the History of Science.


Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Every day our brains must process and respond to thousands of problems, both large and small, at a moment’s notice. It might even be overwhelming to consider the sheer volume of complex problems we regularly face in need of a quick solution.

While one might wish there was time to methodically and thoughtfully evaluate the fine details of our everyday tasks, the cognitive demands of daily life often make such processing logistically impossible.

Therefore, the brain must develop reliable shortcuts to keep up with the stimulus-rich environments we inhabit. Psychologists refer to these efficient problem-solving techniques as heuristics.


Heuristics can be thought of as general cognitive frameworks humans rely on regularly to reach a solution quickly.

For example, if a student needs to decide what subject she will study at university, her intuition will likely be drawn toward the path that she envisions as most satisfying, practical, and interesting.

She may also think back on her strengths and weaknesses in secondary school or perhaps even write out a pros and cons list to facilitate her choice.

It’s important to note that these heuristics broadly apply to everyday problems, produce sound solutions, and help simplify otherwise complicated mental tasks. These are the three defining features of a heuristic.

While the concept of heuristics dates back to Ancient Greece (the term is derived from the Greek word for “to discover”), most of the information known today on the subject comes from prominent twentieth-century social scientists.

Herbert Simon’s study of a notion he called “bounded rationality” focused on decision-making under restrictive cognitive conditions, such as limited time and information.

This concept of optimizing an inherently imperfect analysis frames the contemporary study of heuristics and leads many to credit Simon as a foundational figure in the field.

Kahneman’s Theory of Decision Making

The immense contributions of psychologist Daniel Kahneman to our understanding of cognitive problem-solving deserve special attention.

As context for his theory, Kahneman put forward the estimate that an individual makes around 35,000 decisions each day! To reach these resolutions, the mind relies on either “fast” or “slow” thinking.


The fast thinking pathway (system 1) operates mostly unconsciously and aims to reach reliable decisions with as little cognitive strain as possible.

While system 1 relies on broad observations and quick evaluative techniques (heuristics!), system 2 (slow thinking) requires conscious, continuous attention to carefully assess the details of a given problem and logically reach a solution.

Given the sheer volume of daily decisions, it’s no surprise that around 98% of problem-solving uses system 1.

Thus, it is crucial that the human mind develops a toolbox of effective, efficient heuristics to support this fast-thinking pathway.

Heuristics vs. Algorithms

Those who’ve studied the psychology of decision-making might notice similarities between heuristics and algorithms. However, remember that these are two distinct modes of cognition.

Heuristics are methods or strategies which often lead to problem solutions but are not guaranteed to succeed.

They can be distinguished from algorithms, which are methods or procedures that will always produce a solution sooner or later.

An algorithm is a step-by-step procedure that can be reliably used to solve a specific problem. While the concept of an algorithm is most commonly used in reference to technology and mathematics, our brains rely on algorithms every day to resolve issues (Kahneman, 2011).

The important thing to remember is that algorithms are a set of mental instructions unique to specific situations, while heuristics are general rules of thumb that can help the mind process and overcome various obstacles.

For example, if you are thoughtfully reading every line of this article, you are using an algorithm.

On the other hand, if you are quickly skimming each section for important information or perhaps focusing only on sections you don’t already understand, you are using a heuristic!
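The distinction can be sketched in code. This is a minimal illustration, not from the source: the functions and data are invented, with an exhaustive scan standing in for an algorithm and a skim of every third element standing in for a heuristic.

```python
# An algorithm examines every option and is guaranteed correct;
# a heuristic checks a cheaper subset and is usually, but not always, right.

def find_max_algorithm(numbers):
    """Algorithm: examine every element; always returns the true maximum."""
    best = numbers[0]
    for n in numbers[1:]:
        if n > best:
            best = n
    return best

def guess_max_heuristic(numbers, sample_every=3):
    """Heuristic: skim every third element; fast, but not guaranteed."""
    return max(numbers[::sample_every])

data = [4, 9, 2, 7, 5, 8, 1]
print(find_max_algorithm(data))   # 9 -- guaranteed
print(guess_max_heuristic(data))  # 7 -- the skim skipped the 9
```

The heuristic does far less work and is often right, but, as here, it can miss the true answer; that tradeoff is exactly the one described above.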

Why Heuristics Are Used

Heuristic use usually occurs when one of five conditions is met (Pratkanis, 1989):

  • When one is faced with too much information
  • When the time to make a decision is limited
  • When the decision to be made is unimportant
  • When there is access to very little information to use in making the decision
  • When an appropriate heuristic happens to come to mind at the same moment

When studying heuristics, keep in mind both the benefits and unavoidable drawbacks of their application. The ubiquity of these techniques in human society makes such weaknesses especially worthy of evaluation.

More specifically, in expediting decision-making processes, heuristics also predispose us to a number of cognitive biases.

A cognitive bias is an incorrect but pervasive judgment derived from an illogical pattern of cognition. In simple terms, a cognitive bias occurs when one internalizes a subjective perception as a reliable and objective truth.

Heuristics are reliable but imperfect; in applying broad decision-making “shortcuts” to specific situations, occasional errors are inevitable and can catalyze persistent mistakes.

For example, consider the risks of faulty applications of the representativeness heuristic discussed below. While the technique encourages one to assign situations to broad categories based on superficial characteristics and past experiences for the sake of cognitive expediency, such thinking is also the basis of stereotypes and discrimination.

In practice, these errors result in the disproportionate favoring of one group and/or the oppression of other groups within a given society.

Indeed, the most impactful research relating to heuristics often centers on the connection between them and systematic discrimination.

The tradeoff between thoughtful rationality and cognitive efficiency encompasses both the benefits and pitfalls of heuristics and represents a foundational concept in psychological research.

When learning about heuristics, keep in mind their relevance to all areas of human interaction. After all, the study of social psychology is intrinsically interdisciplinary.

Many of the most important studies on heuristics relate to flawed decision-making processes in high-stakes fields like law, medicine, and politics.

Researchers often draw on a distinct set of already established heuristics in their analysis. While dozens of unique heuristics have been observed, brief descriptions of those most central to the field are included below:

Availability Heuristic

The availability heuristic describes the tendency to make choices based on information that comes to mind readily.

For example, children of divorced parents are more likely to have pessimistic views towards marriage as adults.

Importantly, this heuristic can also involve assigning more importance to more recently learned information, largely because such information is easier to recall.

Representativeness Heuristic

This technique allows one to quickly assign probabilities to and predict the outcome of new scenarios using psychological prototypes derived from past experiences.

For example, juries are less likely to convict individuals who are well-groomed and wearing formal attire (under the assumption that stylish, well-kempt individuals typically do not commit crimes).

This is one of the most studied heuristics by social psychologists for its relevance to the development of stereotypes.

Scarcity Heuristic

This method of decision-making is predicated on the perception of less abundant, rarer items as inherently more valuable than more abundant items.

We rely on the scarcity heuristic when we must make a fast selection with incomplete information. For example, a student deciding between two universities may be drawn toward the option with the lower acceptance rate, assuming that this exclusivity indicates a more desirable experience.

The concept of scarcity is central to behavioral economists’ study of consumer behavior (a field that evaluates economics through the lens of human psychology).

Trial and Error

This is the most basic and perhaps most frequently cited heuristic. Trial and error can be used to solve a problem that has a discrete number of possible solutions; it involves simply attempting each option until the correct solution is identified.

For example, if an individual was putting together a jigsaw puzzle, he or she would try multiple pieces until locating a proper fit.
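The puzzle loop above is essentially a first-fit search. A minimal sketch, with invented names and a toy numeric "fit" test standing in for the puzzle:

```python
# Trial and error: attempt each possible option until one works.

def first_fit(candidates, fits):
    """Try each candidate in turn; return the first that satisfies `fits`."""
    for candidate in candidates:
        if fits(candidate):
            return candidate
    return None  # every option exhausted without success

# Toy puzzle: which piece length completes an edge of length 10,
# given that 6 units are already in place?
pieces = [3, 8, 6, 4]
print(first_fit(pieces, lambda p: 6 + p == 10))  # 4
```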

This technique is commonly taught in introductory psychology courses due to its simple representation of the central purpose of heuristics: the use of reliable problem-solving frameworks to reduce cognitive load.

Anchoring and Adjustment Heuristic

Anchoring refers to the tendency to formulate expectations relating to new scenarios relative to an already ingrained piece of information.


Put simply, anchoring allows one to form reasonable estimations around uncertainties. For example, if asked to estimate the number of days in a year on Mars, many people would first call to mind the fact that Earth’s year is 365 days (the “anchor”) and adjust accordingly.
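The Mars example can be sketched as a toy model of anchoring. The 50% adjustment fraction is an assumption chosen for illustration, not a claim from the article:

```python
# Toy model of anchoring-and-adjustment: an estimate starts at the anchor
# and moves only a fraction of the way toward the true value, so
# insufficient adjustment leaves the estimate biased toward the anchor.

def anchored_estimate(anchor, true_value, adjustment=0.5):
    """Adjust `adjustment` fraction of the distance from anchor to truth."""
    return anchor + adjustment * (true_value - anchor)

# Mars's year is roughly 687 Earth days; anchoring on Earth's 365-day
# year and adjusting only halfway lands well short of the truth.
print(anchored_estimate(365, 687))  # 526.0
```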

This tendency can also help explain the observation that ingrained information often hinders the learning of new information, a concept known as proactive inhibition.

Familiarity Heuristic

This technique can be used to guide actions in cognitively demanding situations by simply reverting to previous behaviors successfully utilized under similar circumstances.

The familiarity heuristic is most useful in unfamiliar, stressful environments.

For example, a job seeker might recall behavioral standards in other high-stakes situations from her past (perhaps an important presentation at university) to guide her behavior in a job interview.

Many psychologists interpret this technique as a slightly more specific variation of the availability heuristic.

How to Make Better Decisions

Heuristics are ingrained cognitive processes utilized by all humans and can lead to various biases.

Both of these statements are established facts. However, this does not mean that the biases that heuristics produce are unavoidable. As the wide-ranging impacts of such biases on societal institutions have become a popular research topic, psychologists have emphasized techniques for reaching more sound, thoughtful and fair decisions in our daily lives.

Ironically, many of these techniques are themselves heuristics!

To focus on the key details of a given problem, one might create a mental list of explicit goals and values. To clearly identify the consequences of a choice, one should imagine its impacts one year in the future and from the perspective of all parties involved.

Most importantly, one must gain a mindful understanding of the problem-solving techniques used by our minds and the common mistakes that result. Mindfulness of these flawed yet persistent pathways allows one to quickly identify and remedy the biases (or otherwise flawed thinking) they tend to create!

Further Information

  • Shah, A. K., & Oppenheimer, D. M. (2008). Heuristics made easy: an effort-reduction framework. Psychological bulletin, 134(2), 207.
  • Marewski, J. N., & Gigerenzer, G. (2012). Heuristic decision making in medicine. Dialogues in clinical neuroscience, 14(1), 77.
  • Del Campo, C., Pauser, S., Steiner, E., & Vetschera, R. (2016). Decision making styles and the use of heuristics in decision making. Journal of Business Economics, 86(4), 389-412.

What is a heuristic in psychology?

A heuristic in psychology is a mental shortcut or rule of thumb that simplifies decision-making and problem-solving. Heuristics often speed up the process of finding a satisfactory solution, but they can also lead to cognitive biases.

Bobadilla-Suarez, S., & Love, B. C. (2017, May 29). Fast or frugal, but not both: Decision heuristics under time pressure. Journal of Experimental Psychology: Learning, Memory, and Cognition.

Bowes, S. M., Ammirati, R. J., Costello, T. H., Basterfield, C., & Lilienfeld, S. O. (2020). Cognitive biases, heuristics, and logical fallacies in clinical practice: A brief field guide for practicing clinicians and supervisors. Professional Psychology: Research and Practice, 51(5), 435–445.

Dietrich, C. (2010). Decision making: Factors that influence decision making, heuristics used, and decision outcomes. Inquiries Journal/Student Pulse, 2(02).

Groenewegen, A. (2021, September 1). Kahneman fast and slow thinking: System 1 and 2 explained. SUE Behavioural Design. Retrieved March 26, 2022, from https://suebehaviouraldesign.com/kahneman-fast-slow-thinking/

Kahneman, D., Lovallo, D., & Sibony, O. (2011). Before you make that big decision.

Kahneman, D. (2011). Thinking, fast and slow. Macmillan.

Pratkanis, A. (1989). The cognitive representation of attitudes. In A. R. Pratkanis, S. J. Breckler, & A. G. Greenwald (Eds.), Attitude structure and function (pp. 71–98). Hillsdale, NJ: Erlbaum.

Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.


Heuristic evaluation: Definition, case study, template


Imagine yourself faced with the challenge of assembling a tricky puzzle but not knowing where to start. Elements such as logical reasoning and meticulous attention to detail become essential, requiring an approach that goes beyond the surface level to achieve effectiveness. When evaluating the user experience of an interface, it is no different.


In this article, we will cover the fundamental concepts of heuristic evaluation, how to properly perform a heuristic evaluation, and the positive effects it can bring to your UX design process. Let’s learn how you can solve challenges with heuristic evaluation.

What is heuristic evaluation?


The heuristic evaluation method’s main goal is to evaluate the usability of an interface against a set of principles drawn from UX best practices. From the problems identified, it is possible to provide practical recommendations and consequently improve the user experience.

So where did heuristic evaluation come from, and how do we use these principles? Read on.

Heuristic evaluation was created by Jakob Nielsen, recognized worldwide for his significant contributions to the field of UX. The method created by Nielsen is based on a set of heuristics from human-computer interaction (HCI) and psychology to inspect the usability of user interfaces.

Therefore, Nielsen’s 10 usability heuristics form the principles of heuristic evaluation, establishing carefully considered foundations. These foundations serve as a practical guide covering the main usability problems in projects. The heuristics mirror the cognitive shortcuts the brain uses for efficient decision-making and are especially useful in redesign projects. They also complement the UX process by helping teams understand user problems and by supporting UX research and evaluation.

When preparing to conduct a heuristic evaluation, the first step is to set clear goals. During the evaluation itself, note the usability issues you find, always grounded in the criteria. Once this is done, prepare a report that indicates which issues to tackle first, which makes the evaluation even more useful. All these steps help ensure interfaces match what users want and expect, leading to better interactions overall.

Preparation for the heuristic evaluation: Defining usability objectives and criteria

As with the puzzle example in the intro, fully understanding the problem is critical to applying heuristic evaluation effectively. Thus, during the preparation phase, you need to establish the evaluation criteria, also defining how these criteria will be evaluated.

Select evaluators based on their experience. By involving a diverse set of evaluators, you can obtain different perspectives on the same challenge. Although an expert is able to point out most of the problems in a heuristic evaluation, collaboration is essential to generate more comprehensive recommendations.

Although it follows a set of heuristics, the evaluation is less formal and less expensive than a user test, making it faster and easier to conduct. Therefore, heuristic evaluation can be performed in the early stages of design and development when making changes is more cost effective.

Nielsen’s usability heuristics are like a tactical set for methodically making things work, providing valuable clues that designers and creators follow to piece together the usability puzzle. These heuristics act as master guides, helping us intelligently fit each piece of the puzzle together so that everything makes sense and is easy to understand to create amazing experiences in the products and websites we use.


Here are Nielsen’s 10 usability heuristics, each with its own relevance and purpose:

1. System status visibility

Continuously inform the user about what is happening.

Mac Loading Icon

2. Correspondence between the system and the real world

Use words and concepts familiar to the user.

Yahoo Search Bar

3. User control and freedom

Allow users to undo actions and explore the system without fear of making mistakes.

Gmail Undo Trash

4. Consistency and standards

Maintain a consistent design throughout the system, so users can apply what they learned in one part to the rest.

ClickUp Management System

5. Error prevention

Design in a manner that prevents users from committing mistakes or provides means to easily correct wrong decisions.

Confirm Deletion

6. Recognition instead of memorization

Provide contextual hints and tips to help users accomplish tasks without needing to remember specific information.

Siri Listening

7. Flexibility and efficiency of use

Allow users to customize keyboard shortcuts or create custom profiles to streamline their interactions.

Adobe Photoshop Undo

8. Aesthetics and minimalist design

Keep the design clean and simple, focusing on the most relevant information and using proper spacing, colors, and typography to avoid overwhelming users.

Airbnb Website

9. Help and documentation

Provide helpful and accessible support in case users need extra guidance.

WhatsApp Help Center

10. Help users recognize, diagnose, and recover from errors

Express error messages in plain language, precisely indicate the problem, and constructively suggest a solution.

Together, these pieces of the usability heuristics puzzle help us build a complete picture of digital experiences. Thus, by following these guidelines, evaluators can identify problems and prioritize them for correction at the evaluation stage.

In the evaluation phase, evaluators should look at the product or system interface and document any usability issues based on heuristics. By using heuristics consistently across different parts of the interface, it is still possible to balance conflicting heuristics to find optimal design solutions.

Challenges may arise during the evaluation phase, so it is important that evaluators suggest strategies for overcoming them as part of defining priorities. Evaluators should then discuss, by consensus, how these heuristics can be applied to identify and address usability problems.

One interesting way to do heuristic evaluation is through real-time collaboration tools like Miro. With the template below, you can collaborate with your team in real time to conduct heuristic evaluations of your project, evaluating problems by criteria and dividing them by color based on the level of complexity to be solved.

Heuristic Evaluation Template

You can download the Miro Heuristic Evaluation template for free.

After performing a heuristic assessment, evaluators should analyze the findings and prioritize usability issues, trying to identify the underlying causes of usability issues rather than just addressing surface symptoms.

Usability issues discovered during the assessment can be given severity ratings to prioritize fixes.

Below is an example of categorization by severity according to the challenge presented:

  • High severity : Prevents the user from performing one or more tasks
  • Medium severity : Requires user effort and affects performance
  • Low severity : May be noticeable to the user but does not impede execution or performance
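Once severity ratings are assigned, prioritization can be as simple as a sort. A minimal sketch assuming the three levels above; the issues listed are invented for illustration:

```python
# Sort heuristic-evaluation findings so the highest-impact issues come first.
SEVERITY_RANK = {"high": 0, "medium": 1, "low": 2}

issues = [
    {"issue": "No undo after deleting a folder", "severity": "medium"},
    {"issue": "Checkout button unresponsive", "severity": "high"},
    {"issue": "Inconsistent icon spacing", "severity": "low"},
]

prioritized = sorted(issues, key=lambda i: SEVERITY_RANK[i["severity"]])
for item in prioritized:
    print(f"[{item['severity'].upper()}] {item['issue']}")
```

In practice a team might track these in a board or spreadsheet rather than code, but the ordering principle is the same.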

This classification gives the team clarity about which issues are most important to address, considering their impact on the user experience. By prioritizing the most critical issues first, the team can allocate its effort more effectively throughout the project.

Finally, during the reporting phase, evaluators should present their findings and recommendations to stakeholders and facilitate discussions on identified issues.

Evaluators typically conduct multiple iterations of the assessment to uncover different issues in subsequent rounds based on the need for the project and the issues identified.

Heuristic evaluation provides qualitative data, making it important to interpret the results with a deeper understanding of user behavior. When reporting and communicating the results of a heuristic assessment, assessors should follow best practices by presenting findings in visual representations that are easy to read and understand, and that highlight key findings, whether using interactive boards, tables or other visuals.

Problem descriptions should be clear and concise so they can be actionable. Rather than reporting generic problems, break them into distinct parts that are easier to act on. Where necessary, analyze the interface component and its details, thinking not just abstractly but with the understanding that the problem will ultimately be solved by a UX designer who must consider all of its elements. In this scenario, well-applied context makes all the difference.

It is also important to involve stakeholders and facilitate discussions around identified issues. As a popular saying goes: a problem communicated is a problem half solved.

The Dropbox team really nails it when it comes to giving users a smooth and user-friendly experience. Let’s dive into a few ways they have put these heuristic evaluation principles to work in their platform:

Dropbox keeps things clear by using concise labels to show the status of your uploaded files. They also incorporate a convenient progress bar that provides a time estimate for the completion of the upload. This real-time feedback keeps you informed about the ongoing status of your uploads on the platform:

Heuristic Applied

The ease of moving, deleting, and renaming files between different folders and sharing with other people means that Dropbox offers users control over fundamental actions, allowing them to work in a personalized way, increasing their sense of ownership:

Making it a breeze for users to navigate whether they’re on a computer or a mobile device, Dropbox keeps its design consistent across both the website and the mobile app:

Dropbox Across Mediums

To prevent errors from happening, Dropbox has implemented an interesting feature. If a user attempts to upload a file that’s too large, Dropbox triggers an error message. This message is quite helpful, as it guides the user to select a smaller file and clearly explains the issue. It’s a nifty feature that ensures users know exactly which steps to take next:

Error Prevention

Dropbox cleverly employs affordances to ensure that users can easily figure out how to navigate the app. Take, for instance, the blue button located at the top of the screen — it’s your go-to for creating new files and folders. This is a familiar and intuitive pattern that users can quickly grasp:

Dropbox Navigation

Consider now flexibility and efficiency. On Dropbox, users can access their files from any device and keep working even when offline, without worrying about losing anything. It makes staying productive a breeze, no matter where users find themselves:

Dropbox Access

Dropbox has a clean and minimalist design that’s a breeze to use and get around in. Plus, it’s available in different languages, ensuring accessibility for people all around the world:

Dropbox Design

Dropbox goes the extra mile by using additional methods alongside heuristic evaluation, demonstrating a high positive impact on their services. All this dedication to applying the best heuristics on their products has made Dropbox one of the most popular storage services globally.

Heuristic evaluation fits into the broader UX design process and can be conducted iteratively throughout the design lifecycle, despite being commonly used early in the design process.

It provides valuable insights to inform design decisions and improvements and enables UX designers to effectively identify and address usability issues.

Conclusion and key takeaways

In this article, we have seen that heuristic evaluation is a systematic and valuable approach to identifying usability problems in systems and products. Through the use of general usability guidelines, it is possible to highlight gaps in the user experience, addressing areas such as clarity, consistency and control. This evaluation is conducted by a multidisciplinary team, and the problems identified are recorded in detail, allowing for further prioritization and refinement.

Much like a complex puzzle, improving usability and user experience requires identifying patterns and providing instructive feedback when working collaboratively.

Checking interfaces using heuristic evaluation can uncover many issues, but it’s not a replacement for what you learn from watching actual users. Think of it as an extra tool to understand users better.

Remember that heuristic evaluation not only reveals challenges but also empowers you as a UX professional to create more intuitive and impactful solutions.

When you mix in heuristic evaluation while making your designs, you can end up with products and systems that are more helpful and user-friendly without spending too much. This helps make your products or services even better by following good user experience tips.

So don’t hesitate: make the most of the potential of heuristic evaluation to push usability to the next level in your UX project.





What Are Heuristics?

These mental shortcuts lead to fast decisions—and biased thinking



If you need to make a quick decision, there's a good chance you'll rely on a heuristic to come up with a speedy solution. Heuristics are mental shortcuts that allow people to solve problems and make judgments quickly and efficiently. Common types of heuristics rely on availability, representativeness, familiarity, anchoring effects, mood, scarcity, and trial-and-error.

Think of these as mental "rule-of-thumb" strategies that shorten decision-making time. Such shortcuts allow us to function without constantly stopping to think about our next course of action.

However, heuristics have both benefits and drawbacks. These strategies can be handy in many situations but can also lead to  cognitive biases . Becoming aware of this might help you make better and more accurate decisions.


History of the Research on Heuristics

Nobel Prize-winning economist and cognitive psychologist Herbert Simon originally introduced the concept of heuristics in psychology in the 1950s. He suggested that while people strive to make rational choices, human judgment is subject to cognitive limitations. Purely rational decisions would involve weighing every alternative's potential costs and possible benefits.

However, people are limited by the amount of time they have to make a choice and the amount of information they have at their disposal. Other factors, such as overall intelligence and accuracy of perceptions, also influence the decision-making process.

In the 1970s, psychologists Amos Tversky and Daniel Kahneman presented their research on cognitive biases. They proposed that these biases influence how people think and make judgments.

Because of these limitations, we must rely on mental shortcuts to help us make sense of the world.

Simon's research demonstrated that humans are limited in their ability to make rational decisions, but it was Tversky and Kahneman's work that introduced the systematic study of heuristics and the specific ways of thinking that people rely on to simplify the decision-making process.

How Heuristics Are Used

Heuristics play important roles in both  problem-solving  and  decision-making , as we often turn to these mental shortcuts when we need a quick solution.

Here are a few different theories from psychologists about why we rely on heuristics.

  • Attribute substitution : People substitute simpler but related questions in place of more complex and difficult questions.
  • Effort reduction : People use heuristics as a type of cognitive laziness to reduce the mental effort required to make choices and decisions.
  • Fast and frugal : People use heuristics because they can be fast and accurate in certain contexts. Some theorists argue that heuristics are more often accurate than biased.

In order to cope with the tremendous amount of information we encounter and to speed up the decision-making process, our brains rely on these mental strategies to simplify things so we don't have to spend endless amounts of time analyzing every detail.

You probably make hundreds or even thousands of decisions every day. What should you have for breakfast? What should you wear today? Should you drive or take the bus? Fortunately, heuristics allow you to make such decisions with relative ease and without a great deal of agonizing.

There are many heuristics examples in everyday life. When trying to decide if you should drive or ride the bus to work, for instance, you might remember that there is road construction along the bus route. You realize that this might slow the bus and cause you to be late for work. So you leave earlier and drive to work on an alternate route.

Heuristics allow you to think through the possible outcomes quickly and arrive at a solution.

Are Heuristics Good or Bad?

Heuristics aren't inherently good or bad, but there are pros and cons to using them to make decisions. While they can help us reach a solution to a problem faster, they can also lead to inaccurate judgments about people or situations. Understanding these trade-offs may help you use heuristics more effectively.

Types of Heuristics

There are many different kinds of heuristics. Each plays a role in decision-making, but they tend to operate in different contexts. Knowing the types can help you recognize which one you are using and when.

Availability

The availability heuristic  involves making decisions based upon how easy it is to bring something to mind. When you are trying to make a decision, you might quickly remember a number of relevant examples.

Since these are more readily available in your memory, you will likely judge these outcomes as being more common or frequently occurring.

For example, imagine you are planning to fly somewhere on vacation. As you are preparing for your trip, you might start to think of a number of recent airline accidents. You might feel like air travel is too dangerous and decide to travel by car instead. Because those examples of air disasters came to mind so easily, the availability heuristic leads you to think that plane crashes are more common than they really are.

Familiarity

The familiarity heuristic refers to how people tend to have more favorable opinions of things, people, or places they've experienced before as opposed to new ones. In fact, given two options, people may choose something they're more familiar with even if the new option provides more benefits.

Representativeness

The representativeness heuristic  involves making a decision by comparing the present situation to the most representative mental prototype. When you are trying to decide if someone is trustworthy, you might compare aspects of the individual to other mental examples you hold.

A soft-spoken older woman might remind you of your grandmother, so you might immediately assume she is kind, gentle, and trustworthy. However, this is an example of heuristic bias: you can't know whether someone is trustworthy on the basis of age alone.

Affect

The affect heuristic involves making choices that are influenced by an individual's emotions at that moment. For example, research has shown that people are more likely to see decisions as having benefits and lower risks when in a positive mood.

Negative emotions, on the other hand, lead people to focus on the potential downsides of a decision rather than the possible benefits.

Anchoring

The anchoring bias involves the tendency to be overly influenced by the first bit of information we hear or learn. This can make it more difficult to consider other factors and lead to poor choices. For example, anchoring bias can influence how much you are willing to pay for something, causing you to jump at the first offer without shopping around for a better deal.

Scarcity

Scarcity is a heuristic principle in which we view things that are scarce or less available to us as inherently more valuable. Marketers often use the scarcity heuristic to influence people to buy certain products. This is why you'll often see signs that advertise "limited time only," or that tell you to "get yours while supplies last."

Trial and Error

Trial and error is another type of heuristic in which people use a number of different strategies to solve something until they find what works. Examples of this type of heuristic are evident in everyday life.

People use trial and error when playing video games, finding the fastest driving route to work, or learning to ride a bike (or any new skill).
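In computational terms, trial and error amounts to generating candidate solutions and testing each one until something passes. Here is a minimal sketch in Python; the toy problem is our own illustration, not from the article:

```python
import random

def trial_and_error(candidates, works, max_tries=1000):
    """Try random candidates until one passes the test, or give up."""
    for _ in range(max_tries):
        guess = random.choice(candidates)
        if works(guess):
            return guess
    return None

# Toy problem: find a number from 0-99 whose square ends in 6.
result = trial_and_error(range(100), lambda n: (n * n) % 10 == 6)
```

Unlike a guaranteed algorithm, this strategy offers no certainty: it may take many attempts, and with an unlucky run (or an unsolvable test) it simply returns None.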

Difference Between Heuristics and Algorithms

Though the terms are often confused, heuristics and algorithms are distinct concepts in psychology.

Algorithms are step-by-step procedures that lead to predictable, reliable outcomes, whereas heuristics are mental shortcuts that amount to best guesses. An algorithm, correctly followed, guarantees an accurate outcome; a heuristic does not.

Examples of algorithms include instructions for how to put together a piece of furniture or a recipe for cooking a certain dish. Health professionals also create algorithms or processes to follow in order to determine what type of treatment to use on a patient.
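The distinction can be made concrete with a short sketch, assuming a hypothetical shopping task (the items, prices, and budget below are invented for illustration). The first function is an algorithm: it checks every option and is guaranteed to find the cheapest item. The second is a heuristic: it grabs the first item within budget, which is fast but not guaranteed to be best.

```python
prices = {"lamp": 40, "chair": 25, "desk": 90, "rug": 30}

def cheapest_algorithm(prices):
    """Algorithm: examine every option; guaranteed to find the minimum."""
    return min(prices, key=prices.get)

def first_under_budget_heuristic(prices, budget):
    """Heuristic: take the first acceptable option; fast, but may miss better ones."""
    for item, price in prices.items():
        if price <= budget:
            return item
    return None

print(cheapest_algorithm(prices))                # chair
print(first_under_budget_heuristic(prices, 50))  # lamp
```

The heuristic settles for the lamp because it is the first item within budget, even though the chair is cheaper: a "good enough" answer rather than a guaranteed best one.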

How Heuristics Can Lead to Bias

Heuristics can certainly help us solve problems and speed up our decision-making process, but that doesn't mean they are always a good thing. They can also introduce errors, bias, and irrational decision-making. As in the examples above, heuristics can lead to inaccurate judgments about how commonly things occur and how representative certain things may be.

Just because something has worked in the past does not mean that it will work again, and relying on a heuristic can make it difficult to see alternative solutions or come up with new ideas.

Heuristics can also contribute to stereotypes and  prejudice . Because people use mental shortcuts to classify and categorize people, they often overlook more relevant information and create stereotyped categorizations that are not in tune with reality.

How to Make Better Decisions

While heuristics can be a useful tool, there are ways you can improve your decision-making and avoid cognitive bias at the same time.

We are more likely to make an error in judgment if we are trying to make a decision quickly or are under pressure to do so. Taking a little more time to make a decision can help you see things more clearly—and make better choices.

Whenever possible, take a few deep breaths and do something to distract yourself from the decision at hand. When you return to it, you may find a fresh perspective or notice something you didn't before.

Identify the Goal

We tend to focus automatically on what works for us and make decisions that serve our best interest. But take a moment to know what you're trying to achieve. Consider some of the following questions:

  • Are there other people who will be affected by this decision?
  • What's best for them?
  • Is there a common goal that can be achieved that will serve all parties?

Thinking through these questions can help you figure out your goals and the impact that these decisions may have.

Process Your Emotions

Fast decision-making is often influenced by emotions from past experiences that bubble to the surface. Anger, sadness, love, and other powerful feelings can sometimes lead us to decisions we might not otherwise make.

Is your decision based on facts or emotions? While emotions can be helpful, they may affect decisions in a negative way if they prevent us from seeing the full picture.

Recognize All-or-Nothing Thinking

When making a decision, it's a common tendency to believe you have to pick a single, well-defined path, and there's no going back. In reality, this often isn't the case.

Sometimes there are compromises involving two choices, or a third or fourth option that we didn't even think of at first. Try to recognize the nuances and possibilities of all choices involved, instead of using all-or-nothing thinking .

Heuristics are common and often useful. We need this type of decision-making strategy to help reduce cognitive load and speed up many of the small, everyday choices we must make as we live, work, and interact with others.

But it pays to remember that heuristics can also be flawed and lead to irrational choices if we rely too heavily on them. If you are making a big decision, give yourself a little extra time to consider your options and try to consider the situation from someone else's perspective. Thinking things through a bit instead of relying on your mental shortcuts can help ensure you're making the right choice.

Vlaev I. Local choices: Rationality and the contextuality of decision-making .  Brain Sci . 2018;8(1):8. doi:10.3390/brainsci8010008

Hjeij M, Vilks A. A brief history of heuristics: how did research on heuristics evolve? Humanit Soc Sci Commun . 2023;10(1):64. doi:10.1057/s41599-023-01542-z

Brighton H, Gigerenzer G. Homo heuristicus: Less-is-more effects in adaptive cognition .  Malays J Med Sci . 2012;19(4):6-16.

Schwartz PH. Comparative risk: Good or bad heuristic?   Am J Bioeth . 2016;16(5):20-22. doi:10.1080/15265161.2016.1159765

Schwikert SR, Curran T. Familiarity and recollection in heuristic decision making .  J Exp Psychol Gen . 2014;143(6):2341-2365. doi:10.1037/xge0000024

AlKhars M, Evangelopoulos N, Pavur R, Kulkarni S. Cognitive biases resulting from the representativeness heuristic in operations management: an experimental investigation .  Psychol Res Behav Manag . 2019;12:263-276. doi:10.2147/PRBM.S193092

Finucane M, Alhakami A, Slovic P, Johnson S. The affect heuristic in judgments of risks and benefits . J Behav Decis Mak . 2000; 13(1):1-17. doi:10.1002/(SICI)1099-0771(200001/03)13:1<1::AID-BDM333>3.0.CO;2-S

Teovanović P. Individual differences in anchoring effect: Evidence for the role of insufficient adjustment .  Eur J Psychol . 2019;15(1):8-24. doi:10.5964/ejop.v15i1.1691

Cheung TT, Kroese FM, Fennis BM, De Ridder DT. Put a limit on it: The protective effects of scarcity heuristics when self-control is low . Health Psychol Open . 2015;2(2):2055102915615046. doi:10.1177/2055102915615046

Mohr H, Zwosta K, Markovic D, Bitzer S, Wolfensteller U, Ruge H. Deterministic response strategies in a trial-and-error learning task . Inman C, ed. PLoS Comput Biol. 2018;14(11):e1006621. doi:10.1371/journal.pcbi.1006621

Grote T, Berens P. On the ethics of algorithmic decision-making in healthcare .  J Med Ethics . 2020;46(3):205-211. doi:10.1136/medethics-2019-105586

Bigler RS, Clark C. The inherence heuristic: A key theoretical addition to understanding social stereotyping and prejudice. Behav Brain Sci . 2014;37(5):483-4. doi:10.1017/S0140525X1300366X

del Campo C, Pauser S, Steiner E, et al.  Decision making styles and the use of heuristics in decision making .  J Bus Econ.  2016;86:389–412. doi:10.1007/s11573-016-0811-y

Marewski JN, Gigerenzer G. Heuristic decision making in medicine .  Dialogues Clin Neurosci . 2012;14(1):77-89. doi:10.31887/DCNS.2012.14.1/jmarewski

Zheng Y, Yang Z, Jin C, Qi Y, Liu X. The influence of emotion on fairness-related decision making: A critical review of theories and evidence .  Front Psychol . 2017;8:1592. doi:10.3389/fpsyg.2017.01592

By Kendra Cherry, MSEd Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


What Is the Affect Heuristic? | Example & Definition

Published on December 28, 2022 by Kassiani Nikolopoulou . Revised on November 3, 2023.

The affect heuristic occurs when our current emotional state or mood influences our decisions. Instead of evaluating the situation objectively, we rely on our “gut feelings” and respond according to how we feel. As a result, the affect heuristic can lead to suboptimal decision-making.



The affect heuristic is a type of cognitive bias that plays a role in decision-making. Instead of using objective information, we rely upon our emotions to evaluate a situation. This can also serve as a shortcut to solve a problem quickly. Here, affect can be viewed as:

  •  a feeling state that people experience, such as happiness or sadness.
  •  a quality associated with a stimulus, or anything that can trigger us to act, such as sounds, words, or temperature changes.

When people need to make a choice under time pressure, they are likely to feel the need to be efficient, or to simply go with what seems the best option. This leads them to rely on heuristics or mental shortcuts. The affect heuristic causes us to consult our emotions and feelings when we need to form a judgment but lack the information or time to reflect more deeply.

Consider, for example, deciding whether to take up skating again as an adult:

  • If you skated as a kid and have many positive memories, you might feel that the benefit (fun) outweighs the risk (falling), and so be inclined to try it again.
  • If, on the other hand, you fell and broke your arm while skating as a kid, you most likely associate skating with danger and feel that it's a bad idea.

More specifically, the affect heuristic impacts our decision-making by influencing how we perceive risks and benefits related to an action. In other words, when we like an activity, we tend to judge its risk as low, and its benefit as high.

The opposite is true when we dislike something. Here, we tend to judge its risk as high and its benefit as low. In this way, how we feel about something directs our judgment of risk and benefit. This, in turn, motivates our behavior.

Similarly, our mood can influence our decisions. When we are in a good mood, we tend to be optimistic about decisions and focus more on the benefits. When we are in a bad mood, we focus more on the risks and the perceived lack of benefits related to a decision.

The affect heuristic occurs due to emotional or affective reactions to a stimulus. These are often the very first reactions we have. They occur automatically and rapidly, influencing how we process and evaluate information. For example, you can probably sense the different feelings associated with the word “love” as opposed to the word “hate.”

When we subconsciously let these feelings guide our decisions, we rely on the affect heuristic. This is because we perceive reality in two fundamentally different ways or systems. Various names are used to describe them:

  • One is often labeled as intuitive, automatic, and experiential. 
  • The other is labeled as analytical, verbal , and rational. 

While the rational way of comprehending reality relies on logic and evidence, the experiential one relies on feelings we’ve come to associate with certain things. Through the experiential system, we store events or concepts in our minds, “tagging” them with positive or negative feelings. When faced with a decision, we consult our “pool”, containing all the positive and negative tags. These then serve as cues for our judgment.

Although deeper analysis is certainly important in some decision-making contexts, using our emotions is a quicker, easier, and more efficient way to navigate a complex, uncertain, or sometimes even dangerous world.

Although the affect heuristic allows us to make decisions quickly and efficiently (similarly to the availability heuristic or anchoring bias ), it can also deceive us. There are two important ways that the affect heuristic can lead us astray:

  • One occurs when others try to manipulate our emotions in an attempt to affect or control our behavior. For example, politicians often appeal to fear in order to make the public feel that the country will suffer dire consequences if they aren’t elected or certain policies aren’t implemented.
  • The other results from the natural limitations of the experiential system. For instance, we can’t find the correct answer to a math problem by relying on our feelings. Besides, if it was always enough to follow our intuition, there would be no need for the rational/analytic system of thinking.

The affect heuristic is a possible explanation for a range of purchase decisions, such as buying insurance.

In one study, two groups of participants were each presented with one of the following scenarios:

  • An antique clock that no longer works and can’t be repaired. However, it has sentimental value: it was a gift to you from your grandparents on your 5th birthday. You learned how to tell time from it, and have always loved it very much.
  • An antique clock that no longer works and can’t be repaired. It does not have much sentimental value to you. It was a gift from a remote relative on your 5th birthday. You didn’t like it very much then, and you still don’t have any particular feelings towards it now.

Both groups of participants were then asked to indicate the maximum amount they were willing to pay for insurance against loss in a shipment to a new city. In the event of loss, the insurance paid $100 in both cases.

Due to the affect heuristic, how people feel about something drives their judgment. Communicators, such as public relations professionals, know this and can use it to influence our opinions.

By using terms like “smart bombs” and “peacekeeper missiles” for nuclear weapons and “excursions” for reactor accidents, proponents of nuclear energy downplay the risks of nuclear applications and highlight their benefits. Although not without resistance, they attempt to frame nuclear concepts in neutral or positive ways using this language. As a result, the public attaches a neutral or positive sentiment to the technology, leading to a framing effect .

The affect heuristic is a helpful shortcut, but it can also cloud our judgment. Here are a few steps you can take to minimize the negative impact of the affect heuristic:

  • Acknowledge that emotions can influence our decisions, no matter how rational we think we are. This is especially true when we lack the information or time to think things through.
  • Slow down your thinking process if possible. Instead of making a snap judgment, take the time to analyze all the information at hand and consider all the options before reaching your conclusion.
  • Avoid making an important decision when your emotions are running high. Regardless of whether the emotion is positive or negative, try to delay decision-making until you are in a “regular” state of mind.

Cognitive bias

  • Confirmation bias
  • Baader–Meinhof phenomenon
  • Availability heuristic
  • Halo effect
  • Framing effect
  • Affect heuristic
  • Anchoring heuristic

Selection bias

  • Sampling bias
  • Ascertainment bias
  • Attrition bias
  • Self-selection bias
  • Survivorship bias
  • Nonresponse bias
  • Undercoverage bias
  • Hawthorne effect
  • Observer bias
  • Omitted variable bias
  • Publication bias
  • Pygmalion effect
  • Recall bias
  • Social desirability bias
  • Placebo effect
  • Actor-observer bias
  • Ceiling effect
  • Ecological fallacy

When customers are asked if they want to extend the warranty for a laptop they’ve just bought, few of them seriously think about relevant factors (e.g., the probability that the laptop will be damaged or the likely cost of repair).

Most people rely on the affect heuristic : the more they cherish their new laptop, the more willing they are to pay for an extended warranty.

Even though the affect heuristic and the availability heuristic are different, they are closely linked. This is because availability occurs not only through ease of recall or imaginability, but because remembered and imagined images come tagged with affect or emotion.

For example, availability can explain why people overestimate certain highly publicized causes of death like accidents, homicides, or tornadoes and underestimate others, such as diabetes, asthma, or stroke. The highly publicized ones are more emotionally charged and, thus, more likely to receive attention.

In other words, the affect heuristic is essentially a type of availability mechanism in which emotionally charged events quickly spring to mind.

Affect in psychology is any experience of feeling, emotion, or mood. It is often described as positive or negative. Affect colors how we see the world and how we feel about people, objects, and events.

Because of this, it can also impact our social interactions, behaviors, and judgments. For example, we often make decisions based on our “gut feeling.” When we do this, we rely on what is called the affect heuristic .

Sources in this article

We strongly encourage students to use sources in their work. You can cite our article (APA Style) or take a deep dive into the articles below.

Nikolopoulou, K. (2023, November 03). What Is the Affect Heuristic? | Example & Definition. Scribbr. Retrieved August 21, 2024, from https://www.scribbr.com/research-bias/affect-heuristic/
Finucane, M. L., Alhakami, A. S., Slovic, P., & Johnson, S. M. (2000). The affect heuristic in judgments of risks and benefits. Journal of Behavioral Decision Making , 13 (1), 1–17. https://doi.org/10.1002/(sici)1099-0771(200001/03)13:1
Hsee, C. K. (2006). The Affection Effect in Insurance Decisions. Journal of Risk and Uncertainty , 20 (2). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=930041
Skagerlund, K., Forsblad, M., Slovic, P., & Västfjäll, D. (2020). The Affect Heuristic and Risk Perception – Stability Across Elicitation Methods and Individual Cognitive Abilities. Frontiers in Psychology , 11 . https://doi.org/10.3389/fpsyg.2020.00970


  • Review Article
  • Open access
  • Published: 17 February 2023

A brief history of heuristics: how did research on heuristics evolve?

  • Mohamad Hjeij   ORCID: orcid.org/0000-0003-4231-1395 1 &
  • Arnis Vilks 1  

Humanities and Social Sciences Communications volume  10 , Article number:  64 ( 2023 ) Cite this article


Heuristics are often characterized as rules of thumb that can be used to speed up the process of decision-making. They have been examined across a wide range of fields, including economics, psychology, and computer science. However, scholars still struggle to find substantial common ground. This study provides a historical review of heuristics as a research topic before and after the emergence of the subjective expected utility (SEU) theory, emphasising the evolutionary perspective that considers heuristics as resulting from the development of the brain. We find it useful to distinguish between deliberate and automatic uses of heuristics, but point out that they can be used consciously and subconsciously. While we can trace the idea of heuristics through many centuries and fields of application, we focus on the evolution of the modern notion of heuristics through three waves of research, starting with Herbert Simon in the 1950s, who introduced the notion of bounded rationality and suggested the use of heuristics in artificial intelligence, thereby paving the way for all later research on heuristics. A breakthrough came with Daniel Kahneman and Amos Tversky in the 1970s, who analysed the biases arising from using heuristics. The resulting research programme became the subject of criticism by Gerd Gigerenzer in the 1990s, who argues that an ‘adaptive toolbox’ consisting of ‘fast-and-frugal’ heuristics can yield ‘ecologically rational’ decisions.


Introduction

Over the past 50 years, the notion of ‘heuristics’ has considerably gained attention in fields as diverse as psychology, cognitive science, decision theory, computer science, and management scholarship. While for 1970, the Scopus database finds a meagre 20 published articles with the word ‘heuristic’ in their title, the number has increased to no less than 3783 in 2021 (Scopus, 2022 ).

We take this to be evidence that many researchers in the aforementioned fields find the literature that refers to heuristics stimulating and that it gives rise to questions that deserve further enquiry. While there are some review articles on the topic of heuristics (Gigerenzer and Gaissmaier, 2011 ; Groner et al., 1983 ; Hertwig and Pachur, 2015 ; Semaan et al., 2020 ), a somewhat comprehensive and non-partisan historical review seems to be missing.

While interest in heuristics is growing, the very notion of heuristics remains elusive to the point that, e.g., Shah and Oppenheimer ( 2008 ) begin their paper with the statement: ‘The word “heuristic” has lost its meaning.’ Even if one leaves aside characterizations such as ‘rule of thumb’ or ‘mental shortcut’ and considers what Kahneman ( 2011 ) calls ‘the technical definition of heuristic,’ namely ‘a simple procedure that helps find adequate, though often imperfect, answers to difficult questions,’ one is immediately left wondering how simple it has to be, what an adequate but imperfect answer is, and how difficult the questions need to be, in order to classify a procedure as a heuristic. Shah and Oppenheimer conclude that ‘the term heuristic is vague enough to describe anything’.

However, one feature does distinguish heuristics from certain other, typically more elaborate procedures: heuristics are problem-solving methods that do not guarantee an optimal solution. The use of heuristics is, therefore, inevitable where no method to find an optimal solution exists or is known to the problem-solver, in particular where the problem and/or the optimality criterion is ill-defined. However, the use of heuristics may be advantageous even where the problem to be solved is well-defined and methods do exist which would guarantee an optimal solution. This is because definitions of optimality typically ignore constraints on the process of solving the problem and the costs of that process. Compared to infallible but elaborate methods, heuristics may prove to be quicker or more efficient.
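This trade-off is easy to exhibit on a toy knapsack problem (our own illustration, not drawn from the literature reviewed here): exhaustive search over all subsets is guaranteed optimal but takes time exponential in the number of items, whereas a greedy value-per-weight heuristic is fast yet can miss the optimum.

```python
from itertools import combinations

items = [("a", 6, 30), ("b", 5, 20), ("c", 5, 20)]  # (name, weight, value)
CAPACITY = 10

def exhaustive(items, cap):
    """Check all 2^n subsets: guaranteed optimal, but exponential time."""
    feasible = (s for r in range(len(items) + 1)
                for s in combinations(items, r)
                if sum(w for _, w, _ in s) <= cap)
    best = max(feasible, key=lambda s: sum(v for _, _, v in s))
    return sum(v for _, _, v in best)

def greedy(items, cap):
    """Heuristic: take items by value/weight ratio; fast, but fallible."""
    weight = value = 0
    for _, w, v in sorted(items, key=lambda t: t[2] / t[1], reverse=True):
        if weight + w <= cap:
            weight, value = weight + w, value + v
    return value

print(exhaustive(items, CAPACITY))  # 40 (items b and c together)
print(greedy(items, CAPACITY))      # 30 (locks in the best-ratio item a)
```

The greedy rule commits to item a, the best value-per-weight choice, and thereby blocks the combination b + c that exhaustive search finds; with more items, however, the exhaustive method quickly becomes infeasible while the heuristic remains cheap.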

Nevertheless, the range of what has been called heuristics is very broad. Application of a heuristic may require intuition, guessing, exploration, or experience; some heuristics are rather elaborate, others are truly shortcuts, some are described in somewhat loose terms, and others are well-defined algorithms.

One procedure of decision-making that is commonly not regarded as a heuristic is the application of the full-blown theory of subjective expected utility (SEU) in the tradition of Ramsey ( 1926 ), von Neumann and Morgenstern ( 1944 ), and Savage ( 1954 ). This theory is arguably spelling out what an ideally rational decision would be, but was already seen by Savage (p. 16) to be applicable only in what he called a ‘small world’. Quite a few approaches that have been called heuristics have been explicitly motivated by SEU imposing demands on the decision-maker, which are utterly impractical (cf., e.g., Klein, 2001 , for a discussion). As a second defining feature of the heuristics we want to consider, therefore, we take them to be procedures of decision-making that differ from the ‘gold standard’ of SEU by being practically applicable in at least a number of interesting cases. Along with SEU, we also leave aside the rules of deductive logic, such as Aristotelian syllogisms, modus ponens, modus tollens, etc. While these can also be seen as rules of decision-making, and the universal validity of some of them is not quite uncontroversial (see, e.g., Priest, 2008 , for an introduction to non-classical logic), they are widely regarded as ‘infallible’. By stark contrast, it seems characteristic for heuristics that their application may fail to yield a ‘best’ or ‘correct’ result.

By taking heuristics to be practically applicable, but fallible, procedures for problem-solving, we will also neglect the literature that focuses on the adjective ‘heuristic’ instead of on the noun. When, e.g., Suppes ( 1983 ) characterizes axiomatic analyses as ‘heuristic’, he is not suggesting any rule, but he is saying that heuristic axioms ‘seem intuitively to organize and facilitate our thinking about the subject’ (p. 82), and proceeds to give examples of both heuristic and nonheuristic axioms. It may of course be said that many fundamental equations in science, such as Newton’s force = mass*acceleration, have some heuristic value in the sense indicated by Suppes, but the research we will review is not about the property of being heuristic.

Given that heuristics can be assessed against the benchmark of SEU, one may distinguish broadly between heuristics suggested pre-SEU, i.e., before the middle of the 20th century, and the later research on heuristics that had to face the challenge of an existing theory of allegedly rational decision-making. We will review the former in the section “Deliberate heuristics—the art of invention” below, and devote sections “Herbert Simon: rationality is bounded”, “Heuristics in computer science” and “Daniel Kahneman and Amos Tversky: heuristics and biases” to the latter.

To cover the paradigmatic cases of what has been termed ‘heuristics’ in the literature, we have to take ‘problem-solving’ in a broad sense that includes decision-making and judgement, but also automatic, instinctive behaviour. We, therefore, feel that an account of research on heuristics should also review the main views on how observable behaviour patterns in humans—or maybe animals in general—can be explained. This we do in the section “Automatic heuristics: learnt or innate?”.

While our brief history cannot aim for completeness, we selected the scholars to be included based on their influence and contributions to different fields of research related to heuristics. Our focus, however, will be on the more recent research that may be said to begin with Herbert Simon.

That problem-solving according to SEU will, in general, be impractical, was clearly recognized by Herbert Simon, whose notion of bounded rationality we look at in the section “Herbert Simon: rationality is bounded”. In the section “Heuristics in computer science”, we also consider heuristics in computer science, where the motivation to use heuristics is closely related to Simon’s reasoning. In the section “Daniel Kahneman and Amos Tversky: heuristics and biases”, we turn to the heuristics identified and analysed by Kahneman and Tversky; while their assessment was primarily that the use of those heuristics often does not conform to rational decision-making, the approach by Gigerenzer and his collaborators, reviewed in the section “Gerd Gigerenzer: fast-and-frugal heuristics” below, takes a much more affirmative view on the use of heuristics. Section “Critiques” explains the limitations and critiques of the corresponding ideas. The final section “Conclusion” contains the conclusion, discussion, and avenues for future research.

The evolutionary perspective

While we focus on the history of research on heuristics, it is clear that animal behaviour patterns evolved and were shaped by evolutionary forces long before the human species emerged. Thus ‘heuristics’ in the mere sense of behaviour patterns were in use long before humans engaged in any kind of conscious reflection on decision-making, let alone systematic research. However, evolution endowed humans with brains that allow them to make decisions in ways that are quite different from animal behaviour patterns. According to Gibbons (2007), the peculiar evolution of the human brain began when ancient humans discovered fire and started cooking food, which reduced the amount of energy the body needed for digestion. This paved the way for a smaller intestinal tract and meant that the excess calories supported the development of larger tissues and eventually a larger brain. Through this organ, intelligence increased exponentially, resulting in advanced communication that allowed Homo sapiens to collaborate and form relationships that other primates at the time could not match. According to Dunbar (1998), it was in the period between 400,000 and 100,000 years ago that the ability to hunt more effectively took humans from the middle of the food chain right to the top.

It does not seem to be known when and how exactly the human brain developed the ability to consciously reflect on decisions, but it is now widely recognized that in addition to the fast, automatic, and typically nonconscious type of decision-making that is similar to animal behaviour, humans also employ another, rather different type of decision-making that can be characterized as slow, conscious, controlled, and reflective. The former type is known as ‘System 1’ or ‘the old mind’, and the latter as ‘System 2’ or ‘the new mind’ (Evans, 2010; Kahneman, 2011), and both systems have clearly evolved side by side throughout the evolution of the human brain. According to Gigerenzer (2021), humans as well as other organisms evolved to acquire what he calls ‘embodied heuristics’, which can be either innate or learnt rules of thumb and which supply the agility to respond to a lack of information with fast judgements. These ‘embodied heuristics’ draw on mental capacities, including the motor and sensory abilities, that start to develop from the moment of birth.

While a detailed discussion of the ‘dual-process theories’ of the mind is beyond the scope of this paper, we find it helpful to point out that one may distinguish between ‘System 1 heuristics’ and ‘System 2 heuristics’ (Kahneman, 2011, p. 98). While some ‘rules of decision-making’ may be hard-wired into the human species by its genes and physiology, others are complicated enough that their application typically requires reflection and conscious mental effort. Upon reflection, however, the two systems are not as separate as they may seem. For example, participants in the Mental Calculation World Cup perform mathematical tasks instantly that ordinary people could carry out only with pen and paper or a calculator. Today, many people cannot multiply large numbers or extract a square root using only pen and paper but can easily do so with the calculator app on their smartphone. Thus, what some can do by spontaneous, effortless calculation may for others require the application of a more or less complicated theory.

Nevertheless, one can loosely characterize the heuristics that have been explained and recommended for more or less well-specified purposes over the course of history as System 2 or deliberate heuristics.

Deliberate heuristics—the art of invention

Throughout history, scholars have investigated methods to solve complex tasks. In this section, we review those attempts to formulate ‘operant and voluntary’ heuristics to solve demanding problems—in particular, to generate new insights or do research in more or less specified fields. Most of the heuristics in this section have been suggested before the emergence of the SEU theory and the associated modern definition of rationality, and none of them deals with the kind of decision problems that are assumed as ‘given’ in the SEU model. The reader will notice that some historical heuristics were suggested for problems that, today, may seem too general to be solved. However, through the development of such attempts, later scholars were inspired to develop a more concrete understanding of the notion of heuristics.

The Greek origin

The term heuristic originates from the Greek verb heurísko, which means to discover or find out. The Greek word heúrēka, allegedly exclaimed by Archimedes upon discovering how to measure the volume of an irregularly shaped object by means of water, derives from the same verb and can be translated as I found it! (Pinheiro and McNeill, 2014). Heuristics can thus be said to be etymologically related to the discipline of discovery, the branch of knowledge based on investigative procedures, and are naturally associated with trial techniques, including what-if scenarios and simple trial and error.

While the term heurísko does not seem to be used in this context by Aristotle, his notion of induction (epagôgê) can be seen as a method to find, but not prove, true general statements and thus as a heuristic. At any rate, Aristotle considered inductive reasoning as leading to insights and as distinct from logically valid syllogisms (Smith, 2020).

Pappus (4th century)

While a brief, somewhat cryptic, mention of analysis and synthesis appears in Book 13 of some, but not all, editions of Euclid’s Elements, a clearer explanation of the two methods was given in the 4th century by the Greek mathematician and astronomer Pappus of Alexandria (cf. Heath, 1926; Polya, 1945; Groner et al., 1983). While synthesis is what today would be called deduction from known truths, analysis is a method that can be used to try to find a proof. Two slightly different explanations are given by Pappus. They boil down to this: in order to find a proof for a statement A, one can deduce another statement B from A, continue by deducing yet another statement C from B, and so on, until one comes upon a statement T that is known to be true. If all the inferences are convertible, the converse deductions evidently constitute a proof of A from T. While Pappus did not mention the condition that the inferences must be convertible, his second explanation of analysis makes it clear that one must be looking for deductions from A which are both necessary and sufficient for A. In Polya’s paraphrase of Pappus’ text: ‘We enquire from what antecedent the desired result could be derived; then we enquire again what could be the antecedent of that antecedent, and so on, until passing from antecedent to antecedent, we come eventually upon something already known or admittedly true.’ Analysis thus described is hardly a ‘shortcut’ or ‘rule of thumb’, but quite clearly it is a heuristic: it may help to find a proof of A, but it may also fail to do so.
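Pappus’ procedure can be read as a simple backward search. The following minimal Python sketch is purely illustrative (the statements and the antecedent relation are hypothetical, not from Pappus): to establish a goal statement, one walks backwards through convertible, i.e. if-and-only-if, inferences until a statement already known to be true is reached, or the search fails.

```python
# Known truths and a hypothetical 'antecedent_of' relation:
# antecedent_of[X] = Y means X and Y are deducible from each other
# (the inference is convertible, as Pappus requires).
known_truths = {"T"}
antecedent_of = {"A": "B", "B": "C", "C": "T"}

def analyse(goal):
    """Return the chain goal -> ... -> known truth, or None on failure."""
    chain = [goal]
    current = goal
    while current not in known_truths:
        if current not in antecedent_of:
            return None  # the heuristic may fail, exactly as noted above
        current = antecedent_of[current]
        chain.append(current)
    return chain

print(analyse("A"))  # ['A', 'B', 'C', 'T']
print(analyse("X"))  # None -- analysis found no route to a known truth
```

Reversing a successful chain gives the synthesis: the deduction of A from T.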

Al-Khawarizmi (9th century)

In the 9th century, the Persian thinker Mohamad Al-Khawarizmi, who resided in Baghdad’s centre of knowledge, the House of Wisdom, used stepwise methods for problem-solving; the concept of the algorithm was later named after him (Boyer, 1991). Although a heuristic orientation has sometimes been contrasted with an algorithmic one (Groner and Groner, 1991), it is worth noting that an algorithm may well serve as a heuristic—certainly in the sense of a shortcut, and also in the sense of a fallible method. After all, an algorithm may fail to produce a satisfactory result. We will return to this issue in the section “Heuristics in computer science” below.

Zairja (10th century)

Heuristic methods were created by medieval polymaths in their attempts to solve the complex problems they faced—science not yet being divorced from what today would appear as theology or astrology. Perhaps the first tangible example of a heuristic based on a mechanical device was the use of an ancient tool called a zairja, which Arab astrologers employed before the 11th century (Ritchey, 2022). It was designed to reconfigure notions into ideas through randomization and resonance and thus to produce answers to questions mechanically (Link, 2010). The word zairja may have originated from the Persian combination zaicha-daira, which means horoscope-circle. According to Ibn Khaldun, ‘zairja is the technique of finding out answers from questions by means of connections existing between the letters of the expressions used in the question; they imagine that these connections can form the basis for knowing the future happenings they want to know’ (Khaldun, 1967).

Ramon Llull (1305)

The Majorcan philosopher Ramon Llull (or Raimundus Lullus), who was well acquainted with Arabic culture, used the zairja as a starting point for his ars inveniendi veritatem, which was meant to complement the ars demonstrandi of medieval Scholastic logic and on which he worked from around 1270 until 1305, when he finished his Ars Generalis Ultima (or Ars Magna) (Link, 2010; Llull, 1308; Ritchey, 2022). Llull transformed the astrological and combinatorial components of the zairja into a religious system that took the fundamental ideas of the three Abrahamic faiths of Islam, Christianity, and Judaism and analysed them through symbolic and numeric reasoning. Llull tried to broaden his theory across all fields of knowledge and combine all sciences into a single science that would address all human problems. His thoughts influenced great thinkers, such as Leibniz, and even the modern theory of computation (Fidora and Sierra, 2011). Llull’s approach may be considered a clear example of heuristic methods applied to complicated and even theological questions (Hertwig and Pachur, 2015).

Joachim Jungius (1622)

Arguably, the German mathematician and philosopher Joachim Jungius was the first to use the term heuretica, in a call to establish a research society in 1622. Jungius distinguished between three degrees or levels of learning and cognition: empirical, epistemic, and heuristic. Those who have reached the empirical level believe that what they have learned is true because it corresponds to experience. Those who have reached the epistemic level know how to derive their knowledge from principles with rigorous evidence. But those who have reached the highest level, the heuristic level, have a method of solving unsolved problems, finding new theorems, and introducing new methods into science (Ritter et al., 2017).

René Descartes (1637)

In 1637, the French philosopher René Descartes published his Discourse on Method (one of the first major philosophical works not written in Latin). Descartes argued that humans could utilize mathematical reasoning as a vehicle for progress in knowledge. He proposed four simple steps to follow in problem-solving: first, to accept as true only what is indubitable; next, to divide the problem into as many smaller subproblems as possible and helpful; after that, to conduct one’s thoughts in an orderly fashion, beginning with the simplest and gradually ascending to the most complex; and finally, to make enumerations so complete that one is assured of having omitted nothing (Descartes, 1998). Building on these methods, Descartes (1908) started working on heuristic rules to transform every problem, when possible, into algebraic equations, thus creating a mathesis universalis or universal science. In his unfinished ‘Rules for the Direction of the Mind’ or Regulae ad directionem ingenii, Descartes suggested 21 heuristic rules (of a planned 36) for scientific research, such as simplifying the problem, restating it in geometrical form, and identifying the knowns and the unknowns. Although Leibniz criticized Descartes’ rules for being too general (Leibniz, 1880), this treatise laid the basis for later work on complex problems in several disciplines.

Gottfried Wilhelm Leibniz (1666)

Influenced by the ideas of Llull, Jungius, and Descartes, the German polymath Gottfried Wilhelm Leibniz suggested an original approach to problem-solving in his Dissertatio de Arte Combinatoria, published in Leipzig in 1666. His aim was to create a new universal language into which all problems could be translated, together with a standard solving procedure that could be applied regardless of the type of problem. Leibniz also defined an ars inveniendi, a method for finding new truths, distinguishing it from an ars iudicandi, a method for evaluating the validity of alleged truths. Later, in 1673, he invented a calculating machine that could execute all four arithmetic operations and thus find ‘new’ arithmetic truths (Pombo, 2002).

Bernard Bolzano (1837)

In 1837, the Czech mathematician and philosopher Bernard Bolzano published his four-volume Wissenschaftslehre (Theory of Science). The fourth part of this work, which he called ‘Erfindungskunst’ or the art of invention, mentions in the introductory section 322 that ‘heuristic’ is just the Greek translation of that term. Bolzano explains that the rules he is going to state are not at all entirely new, but have always been used ‘by the talented’—although mostly not consciously. He then explains 13 general and 33 special rules one should follow when trying to find new truths. Among the general rules are, e.g., that one should first decide on the question one wants to answer and the kind of answer one is looking for (section 325), or that one should choose suitable symbols to represent one’s ideas (section 334). Unlike the general rules, the special ones are meant to be helpful for special mental tasks only. For example, in order to solve the task of finding the reason for any given truth, Bolzano advises first to analyse or dissect the truth into its parts and then use those to form truths which are simpler than the given one (section 378). Another example is Bolzano’s special rule 28, explained in section 386, which is meant to help identify the intention behind a given action. To do so, Bolzano advises exploring the agent’s beliefs about the effects of his action at the time he decided to act, and explains that this will require investigating the agent’s knowledge, his degree of attention and deliberation, any erroneous beliefs the agent may have had, and ‘many other circumstances’. Bolzano goes on to point out that any effect the agent may have expected to result from his action will not be an intended one if he considered it neither as an obligation nor as advantageous. While Bolzano’s rules can hardly be considered ‘shortcuts’, he mentions again and again that they may fail to solve the task at hand adequately (cf. Hertwig and Pachur, 2015; Siitonen, 2014).

Frank Ramsey (1926)

In Ramsey’s pathbreaking paper ‘Truth and Probability’, which laid the foundation of subjective probability theory, a final section that has received little attention in the literature is devoted to inductive logic. While he does not use the word ‘heuristic’, he characterizes induction as a ‘habit of the mind’, explaining that he uses ‘habit in the most general possible sense to mean simply rule or the law of behaviour, including instinct’, but also including ‘acquired rules’. Ramsey gives the following pragmatic justification for being convinced by induction: ‘our conviction is reasonable because the world is so constituted that inductive arguments lead on the whole to true opinions’, and states more generally that ‘we judge mental habits by whether they work, i.e., whether the opinions they lead to are for the most part true, or more often true than those which alternative habits would lead to’ (Ramsey, 1926). In modern terminology, Ramsey was pointing out that mental habits—such as inductive inference—may be more or less ‘ecologically rational’.

Karl Duncker (1935)

Karl Duncker was a pioneer in the experimental investigation of human problem-solving. In his 1935 book Zur Psychologie des produktiven Denkens, he discussed both heuristics that help to solve problems and hindrances that may block the solution of a problem—and reported on a number of experimental findings. Among the heuristics were a situational analysis aimed at uncovering the reasons for the gap between the status quo and the problem-solver’s goal, an analysis of the goal itself and of the sacrifices the problem-solver is willing to make, an analysis of the prerequisites for the solution, and several others. Among the hindrances to problem-solving was what Duncker called functional fixedness, illustrated by the famous candle problem, in which he asked participants to fix a candle to the wall and light it without allowing the wax to drip. The available tools were a candle, matches, and a box filled with thumbtacks. The solution was to empty the box of thumbtacks, fix the empty box to the wall using the thumbtacks, put the candle in the box, and finally light the candle. Participants who were given the empty box as a separate item could solve this problem, while those given the box filled with thumbtacks struggled to find a solution. Through this experiment, Duncker illustrated an inability to think outside the box and the difficulty of using a device in a way that differs from the usual one (Glaveanu, 2019). Duncker emphasized that success in problem-solving depends on a complementary combination of both the internal mind and the external problem structure (cf. Groner et al., 1983).

George Polya (1945)

The Hungarian mathematician George Polya can aptly be called the father of problem-solving in modern mathematics and education. In his 1945 book How to Solve It, Polya writes that ‘heuristic…or ‘ars inveniendi’ was the name of a certain branch of study…often outlined, seldom presented in detail, and as good as forgotten today’, and he attempts to ‘revive heuristic in a modern and modest form’. According to his four principles of mathematical problem-solving, it is first necessary to understand the problem, then devise a plan, carry out the plan, and finally reflect and search for opportunities for improvement. Among the more detailed suggestions for problem-solving explained by Polya are to ask questions such as ‘can you find the solution to a similar problem?’, to use inductive reasoning and analogy, and to choose a suitable notation. Procedures inspired by Polya’s (1945) book and several later ones (e.g., Induction and Analogy in Mathematics of 1954) also informed the field of artificial intelligence (AI) (Hertwig and Pachur, 2015).

Johannes Müller (1968)

In 1968, the German scientist Johannes Müller introduced the concept of systematic heuristics while working on his postdoctoral thesis at the Chemnitz University of Technology. Systematic heuristics is a framework for improving the efficiency of intellectual work using problem-solving processes in the fields of science and technology.

The main idea of systematic heuristics is to solve repeated problems with previously validated solutions. These methods are called programmes and are gathered in a library that can be accessed by the main programme, which receives the requirements, prepares the execution plan, determines the required procedures, executes the plan, and finally evaluates the results. Müller’s team was dismissed for ideological reasons, and his programme was terminated after a few years, but his findings went on to be successfully applied in many projects across different industries (Banse and Friedrich, 2000).
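The architecture just described—a library of validated procedures consulted by a dispatching main programme—can be sketched in a few lines of Python. This is an illustrative analogy only, not Müller’s actual system; the problem type, task format, and example procedure are all hypothetical.

```python
# A library mapping a problem type to a previously validated
# solution procedure ('programme' in Mueller's terminology).
library = {}

def programme(problem_type):
    """Decorator registering a solution procedure in the library."""
    def register(func):
        library[problem_type] = func
        return func
    return register

@programme("percentage_change")
def percentage_change(task):
    # A validated, reusable procedure for one recurring problem type.
    old, new = task["old"], task["new"]
    return (new - old) / old * 100

def main_programme(task):
    """Receive the requirements, pick the procedure, and execute it."""
    proc = library.get(task["type"])
    if proc is None:
        raise LookupError("no validated programme for this problem type")
    return proc(task)

print(main_programme({"type": "percentage_change", "old": 50, "new": 60}))  # 20.0
```

The point of the sketch is the division of labour: recurring problems are routed to stored, already-tested procedures rather than being solved from scratch each time.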

Imre Lakatos (1970)

In his ‘Methodology of Scientific Research Programmes’, which turned out to be a major contribution to the Popper–Kuhn controversy about the rationality of non-falsifiable paradigms in the natural sciences, Lakatos introduced the interesting distinction between a ‘negative heuristic’ that is given by the ‘hard core’ of a research programme and the ‘positive heuristic’ of the ‘protective belt’. While the latter suggests ways to develop the research programme further and to predict new facts, the ‘hard core’ of the research programme is treated as irrefutable ‘by the methodological decision of its protagonists: anomalies must lead to changes only in the ‘protective’ belt’ of auxiliary hypotheses. The Lakatosian notion of a negative heuristic seems to have received little attention outside of the Philosophy of Science community but may be important elsewhere: when there are too many ways to solve a complicated problem, excluding some of them from consideration may be helpful.

Gerhard Kleining (1982)

The German sociologist Gerhard Kleining suggested a qualitative heuristic as the appropriate research method for qualitative social science. It is based on four principles: (1) open-mindedness of the scientist, who should be ready to revise his preconceptions about the topic of study; (2) openness of the topic of study, which is initially defined only provisionally and allowed to be modified in the course of the research; (3) maximal variation of the research perspective; and (4) identification of similarities within the data (Kleining, 1982, 1995).

Automatic heuristics: learnt or innate?

Unlike the deliberate, and in some cases quite elaborate, heuristics reviewed above, at least some System 1 heuristics are often applied automatically, without any kind of deliberation or conscious reflection on the task that needs to be performed or the question that needs to be answered. One may view them as mere patterns of behaviour, and as such their scientific examination has been a long cumulative process through different disciplines, even though explicit reference to heuristics was not often made.

Traditionally, the examination of behaviour patterns in living creatures, like any study concerning thoughts, feelings, or cognitive abilities, was regarded as the task of biologists. However, the birth of psychology as a separate discipline paved the way for an alternative outlook. Evolutionary psychology views human behaviour as having been shaped through time and experience to promote survival throughout the long history of the human struggle with nature. With many factors to consider, scholars have been interested in the evolution of the human brain, patterns of behaviour, and problem-solving (Buss and Kenrick, 1998).

Charles Darwin (1873)

Charles Darwin himself may qualify for the title of first evolutionary psychologist, as his insights laid the foundations for a field that would continue to grow over a century later (Ghiselin, 1973).

In 1873, Darwin claimed that the brain’s capacities for expression and emotion have probably developed in much the same way as its physical traits (Baumeister and Vohs, 2007). He acknowledged that personal displays or expressions have a high capacity for communication with peers of the same species. For example, an aggressive look signals an eagerness to fight yet leaves the recipient with the option of retreating without either party being harmed. Additionally, Darwin, as well as his predecessor Lamarck, constantly emphasized the role of environmental factors in ‘the struggle for existence’ that could shape an organism’s traits in response to changes in its environment (Sen, 2020). The famous example of giraffes growing long necks in response to trees growing taller illustrates such a major environmental effect. Similarly, cognitive skills, including heuristics, must also have been shaped by the environment for humans to survive and reproduce.

Darwin’s ideas impacted the early advancement of brain science, psychology, and all related disciplines, including the topic of cognitive heuristics (Smulders, 2009).

William James (1890)

In 1890, the father of American psychology, William James, introduced the notion of evolutionary psychology in his 1200-page text The Principles of Psychology, which later became a standard reference on the subject and helped establish psychology as a science. At its core, James reasoned that many human actions demonstrate the operation of instincts, the evolutionarily embedded inclinations to react to specific incentives in adaptive ways. With this idea, James added an important building block to the foundation of heuristics as a scientific topic.

A simple example of such hard-wired behaviour patterns is the sneeze, the preprogrammed reaction of convulsive nasal expulsion of air from the lungs through the nose and mouth to remove irritants (Baumeister and Vohs, 2007).

Ivan Pavlov (1897)

Triggered by scientific curiosity, or the instinct for research as he called it, the first Russian Nobel laureate, Ivan Pavlov, introduced classical conditioning, which occurs when a stimulus that has a predictive relationship with a reinforcer is used, resulting in a change in the response to that stimulus (Schreurs, 1989). This learning process was demonstrated through experiments conducted with dogs. In the experiments, a bell (a neutral stimulus) was paired with food (a potent stimulus), ultimately resulting in the dogs salivating at the ringing of the bell—a conditioned response. Pavlov’s experiments remain paradigmatic cases of the emergence of behaviour patterns through association learning.

William McDougall (1909)

At the start of the 20th century, the Anglo-American psychologist William McDougall was one of the first to write about the instinct theory of motivation. McDougall argued that instincts trigger many critical social practices. He viewed instincts as extremely sophisticated faculties in which specific provocations, such as social impediments, can drive a person’s state of mind in a particular direction, for example towards a state of hatred, envy, or anger, which in turn may increase the probability of specific practices such as hostility or violence (McDougall, 2015).

However, in the early 1920s, McDougall’s view of human behaviour as driven by instincts faded remarkably as scientists supporting the concept of behaviourism began to attract more attention with original ideas (Buss and Kenrick, 1998).

John B. Watson (1913)

The pioneer of the psychological school of behaviourism, John B. Watson, who conducted the controversial ‘Little Albert’ experiment by imposing a phobia on a child to demonstrate classical conditioning in humans (Harris, 1979), argued against the ideas of McDougall, even in public debates (Stephenson, 2003). Unlike McDougall, Watson considered the brain an empty page (the tabula rasa described by Aristotle). According to him, all personality traits and behaviours directly result from accumulated experience that starts at birth. Thus, the story of the human mind is a continuous writing process shaped by surrounding events and factors. This view was supported in the following years of the 20th century by anthropologists who documented very different social standards in different societies, and numerous social researchers argued that the wide variety of cross-cultural differences should lead to the conclusion that there is no mental content built in from birth, and that all knowledge therefore comes from individual experience or perception (Farr, 1996). In stark contrast to McDougall, Watson suggested that human intuitions and behaviour patterns are the product of a learning process that starts blank.

B. F. Skinner (1938)

Inspired by the work of Pavlov, the American psychologist B.F. Skinner took the classical conditioning approach to a more advanced level by modifying a key aspect of the process. According to Skinner, human behaviour depends on the outcomes of past activities: if the outcome is bad, the action will probably not be repeated; if the outcome is good, the likelihood of the activity being repeated is relatively high. Skinner called this process reinforcement learning (Schacter et al., 2011). Based on reinforcement learning, Skinner also introduced the concept of operant conditioning, a type of associative learning process through which the strength of a behaviour is adjusted by reinforcement or punishment. Considering, for example, a parent’s response to a child’s behaviour, the probability of the child repeating an action will be highly dependent on the parent’s reaction (Zilio, 2013). Effectively, Skinner argues that the intuitive System 1 may get edited and that a heuristic cue may become more or less ‘hard-wired’ in the subject’s brain as a stimulus leading to an automatic response.

The DNA and its environment (1953 onwards)

Today, there seems to be wide agreement that behaviour patterns in humans and other species are to some extent ‘in the DNA’, the structure of which was discovered by Francis Crick and James Watson in 1953, but that they also to some extent depend on ‘the environment’, including the social environment in which the agent lives and has problems to solve. It therefore seems safe to say that the methods of problem-solving that humans apply are neither completely innate nor completely the result of environmental stimuli, but rather the product of the complex interaction between genes and the environment (Lerner, 1978).

Herbert Simon: rationality is bounded

Herbert Simon is well known for his contributions to several fields, including economics, psychology, computer science, and management. Simon proposed a remarkable theory that led him to be awarded the Nobel Prize for Economics in 1978.

Bounded rationality and satisficing

In the mid-1950s, Simon published A Behavioural Model of Rational Choice, which focused on bounded rationality: the idea that people must make decisions with limited time, mental resources, and information (Simon, 1955). He clearly identified the triangle of limitations constraining every decision-making process: the availability of information, time, and cognitive ability (Bazerman and Moore, 1994). Simon’s ideas are considered an inspiring foundation for many technologies in use today.

Instead of conforming to the idea that economic behaviour can be seen as rational and dependent on all accessible data (i.e., as optimization), Simon suggested that the dynamics of decision-making are essentially ‘satisficing’, a notion synthesized from ‘satisfy’ and ‘suffice’ (Byron, 1998). During the 1940s, scholars had noticed the frequent failure of two assumptions required for ‘rational’ decision-making. The first is that data is never complete and may be far from perfect, yet people routinely make decisions on the basis of incomplete data. Second, people do not assess every feasible option before settling on a decision. This conduct is highly correlated with the cost of data collection, since data becomes progressively harder and costlier to accumulate. Rather than trying to find the ideal option, people choose the first acceptable or satisfactory option they find. Simon described this procedure as satisficing and concluded that the human brain in the decision-making process would, at best, exhibit restricted abilities (Barros, 2010).

Since people can neither obtain nor process all the data needed to make a completely rational decision, they use the limited data they possess to determine an outcome that is ‘good enough’, an approach later elaborated in fast-and-frugal heuristics such as take-the-best. Simon’s view that people are bounded by their cognitive limits is usually known as the theory of bounded rationality (cf. Gigerenzer and Selten, 2001).

Herbert Simon and AI

With the cooperation of Allen Newell of the RAND Corporation, Simon attempted to create a computer simulation of human decision-making. In 1956, they created a ‘thinking’ machine called the Logic Theorist. This early smart device was a computer programme able to prove theorems in symbolic logic; it was perhaps the first man-made programme to simulate some human reasoning abilities in solving actual problems (Gugerty, 2006). A few years later, Simon, Newell, and J.C. Shaw proposed the General Problem Solver (GPS), a landmark early AI programme. Their aim was to create a single programme that could solve all problems with the same unified algorithm. However, while the GPS was efficient on sufficiently well-structured problems like the Towers of Hanoi (a puzzle with three rods and different-sized disks to be moved), it could not handle real-life scenarios with all their complexities (A. Newell et al., 1959).

By 1965, Simon was confident that ‘machines will be capable of doing any work a man can do’ (Vardi, 2012 ). Therefore, Simon dedicated most of the remainder of his career to the advancement of machine intelligence. The results of his experiments showed that, like humans, certain computer programmes make decisions using trial-and-error and shortcut methods (Frantz, 2003 ). Quite explicitly, Simon and Newell ( 1958 , p. 7) referred to heuristics being used by both humans and intelligent machines: ‘Digital computers can perform certain heuristic problem-solving tasks for which no algorithms are available… In doing so, they use processes that are closely parallel to human problem-solving processes’.

Additionally, the importance of the environment was also clearly observed in Newell and Simon’s ( 1972 ) work:

‘Just as scissors cannot cut paper without two blades, a theory of thinking and problem-solving cannot predict behaviour unless it encompasses both an analysis of the structure of task environments and an analysis of the limits of rational adaptation to task requirements’ (p. 55).

Accordingly, the term ‘task environment’ describes the formal structure of the universe of choices and results for a specific problem. At the same time, Newell and Simon do not treat the agent and the environment as two isolated entities, but rather as highly related. Consequently, they tend to believe that agents with different cognitive abilities and choice repertoires will inhabit different task environments even though their physical surroundings and intentions might be the same (Agre and Horswill, 1997 ).

Heuristics in computer science

Computer science as a discipline may have the biggest share of deliberately applied heuristics. Since heuristic problem-solving has often been contrasted with algorithmic problem-solving (even by Simon and Newell, 1958), it is worth recalling that the very notion of ‘algorithm’ was clarified only in the first half of the 20th century, when Alan Turing (1937) defined what was later named the ‘Turing machine’. In essence, he defined ‘mechanical’ computation as computation that can be carried out by a (stylized) machine. Since ‘mechanical’ corresponds to what is today called algorithmic, one can say that any procedure that can be performed by a digital computer is algorithmic. Nevertheless, many such procedures are also heuristics, because an algorithm may fail to produce an optimal solution to the problem it is meant to solve. This may be so either because the problem is ill-defined or because the computations required to produce the optimal solution are not feasible with the available resources. If the problem is ill-defined, as it often is, e.g., in natural language processing, the algorithm has to rely on a well-defined model that does not capture the vagueness and ambiguities of the real-life problem, which is typically stated in natural language. If the problem is well-defined but finding the optimal solution is not feasible, algorithms that would find it may exist ‘in principle’ but require too much time or memory to be practically implemented.

In fact, there is today a rich theory of complexity classes that distinguishes between types of (well-defined) problems according to how fast the time or memory space required to find the optimal solution increases with increasing problem size. E.g., for problem types of the complexity class P, any deterministic algorithm that produces the optimal solution has a running time bounded by a polynomial function of the input size, whereas, for problems of complexity class EXPTIME, the running time is bounded by an exponential function of the input size. In the jargon of computer science, problems of the latter class are considered intractable, although the input size has to become sufficiently large before the computation of the optimal solution becomes practically infeasible (cf. Harel, 2000 ; Hopcroft et al., 2007 ). Research indicates that the computational complexity of problems can also reduce the quality of human decision-making (Bossaerts and Murawski, 2017 ).

Shortest path algorithms

A classic optimization problem that may serve to illustrate the issues of optimal solutions, complexity, and heuristics is the travelling salesman problem (TSP), first introduced in 1930. Given several cities and the distances between each pair, the goal is to find the shortest possible path that visits all cities and returns to the starting point. For a small input size, i.e., a small number of cities, the ‘brute-force’ algorithm is easy to use: write down all the possible paths through all the cities, calculate their lengths, and choose the shortest. However, the number of steps required by this procedure increases rapidly with the number of cities. The TSP is today known to belong to the complexity class NP, which lies between P and EXPTIME (see Footnote 1). To solve the TSP, Jon Bentley (1982) proposed the greedy (or nearest-neighbour) algorithm, which yields an acceptable result, though not necessarily the optimal one, within a relatively short time. This approach always picks the nearest unvisited city as the next one to visit, without regard to possibly non-optimal later steps. It is thus a good-enough solution obtained quickly. Bentley argued that while better solutions may exist, the greedy result approximates the optimal one. Many other heuristic algorithms have been explored since. There is no assurance that the solution found by a heuristic algorithm will be the ideal answer to the given problem, but it is acceptable and adequate (Pearl, 1984).
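The contrast between the exhaustive and the greedy approach can be sketched in a few lines of Python; the five-city distance matrix below is invented purely for illustration:

```python
from itertools import permutations

# Hypothetical symmetric distance matrix for 5 cities (illustrative data).
DIST = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]

def tour_length(tour):
    """Total length of a closed tour that returns to its starting city."""
    return sum(DIST[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def brute_force(n):
    """Examine all (n-1)! tours from city 0: optimal, but cost explodes with n."""
    best = min(permutations(range(1, n)), key=lambda p: tour_length((0,) + p))
    return (0,) + best

def nearest_neighbour(n, start=0):
    """Greedy heuristic: always visit the closest unvisited city next."""
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda c: DIST[tour[-1]][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tuple(tour)

print(tour_length(brute_force(5)))       # 26 (optimal)
print(tour_length(nearest_neighbour(5))) # 28 (good enough, found much faster)
```

On this toy instance the greedy tour is slightly longer than the optimum, which is exactly the trade-off described above: an acceptable answer at a fraction of the cost of enumerating all tours.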

Heuristic shortest-path algorithms, such as the A* search algorithm, are used today by GPS systems and self-driving vehicles to choose the best route from any point of departure to any destination. More advanced algorithms can also take additional elements into account, including traffic, speed limits, and road quality; they may thus yield not only the shortest routes in terms of distance but also the fastest in terms of driving time.
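A minimal sketch of A*-style search on a small grid may show how a heuristic (here, Manhattan distance to the goal) guides the search toward promising directions; the grid and coordinates are illustrative assumptions, not real routing data:

```python
import heapq

def astar(grid, start, goal):
    """A* search on a 4-connected grid (0 = free cell, 1 = wall).
    Manhattan distance never overestimates the remaining cost here,
    so the path found is optimal, yet the heuristic lets the search
    skip most of the cells a blind exhaustive search would visit."""
    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f = g + h, g, node, path)
    seen = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(frontier,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable
```

Real navigation systems work on weighted road graphs rather than grids, but the principle is the same: an admissible estimate of the remaining distance steers the search.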

Computer chess

While the TSP is a whole family of problems that differ in the number of cities and the distances between them, determining the optimal strategy for chess is a single problem of fixed size. The rules of chess make it a finite game, and Ernst Zermelo proved in 1913 that it is ‘determined’: if it were played between perfectly rational players, it would always end with the same outcome: either White can force a win, or Black can force a win, or both sides can force a draw (Zermelo, 1913). To this day, it is not known which of the three holds, which reflects the fact that a brute-force algorithm going through all possible plays of chess is practically infeasible: it would have to explore far too many potential moves, and the required memory would quickly be exhausted (Schaeffer et al., 2007). Inevitably, a chess-playing machine has to use algorithms that are ‘shortcuts’, which can be more or less intelligent.

While Simon and Newell had predicted in 1958 that within ten years the world chess champion would be a computer, it took until 1997, when the chess-playing machine Deep Blue, developed by IBM, defeated grandmaster Garry Kasparov. Although able to analyse millions of positions thanks to their computing power, today’s chess-playing machines apply a heuristic approach to eliminate unlikely moves and focus on those with a high probability of defeating the opponent (Newborn, 1997).

Machine learning

One of the main features of machine learning is the ability of the model to predict a future outcome based on past data points. Machine learning algorithms build a knowledge base similar to human experience from previous experiences in the dataset provided. From this knowledge base, the model can derive educated guesses.

A good demonstration is the card game Top Trumps, in which a model can learn to play and keep improving until it dominates the game. It does so by following a learning path: in each step it picks two random cards from the deck and then analyses and compares them on randomly chosen criteria. Depending on which card wins, the model iteratively updates its knowledge base, much as a human does, following the rule that ‘practice makes perfect.’ The model thus plays, collects statistics, updates, and iterates, becoming more accurate with each increment (Volz et al., 2016).

Natural language processing

In the world of language understanding, current technologies are far from perfect, but models are becoming more reliable all the time. When analysing a search phrase entered into the Google search engine, a background model tries to make sense of the search criteria. Word stemming, context analysis, phrase affiliation, previous searches, and autocorrect/autocomplete can be combined in a heuristic algorithm to display the most relevant results in less than a second. Heuristic methods can be used in algorithms that work out what the user is trying to express when searching for a phrase. For example, using word affiliation, an algorithm tries to narrow down the meaning of words as far as possible toward the user’s intention, particularly when a word has several meanings that change with context. A search for ‘apple pie’ thus allows the algorithm to infer that the user is interested in recipes rather than in the technology company (Sullivan, 2002).

Search and big data

Search is a good example to appreciate the value of time, as one of the most important criteria is retrieving acceptable results in an acceptable timeframe. In a full search algorithm, especially in large datasets, retrieving optimal results can take a massive amount of time, making it necessary to apply heuristic search.

Heuristic search is a type of search algorithm that is used to find solutions to problems in a faster way than an exhaustive search. It uses specific criteria to guide the search process and focuses on more favourable areas of the search space. This can greatly reduce the number of nodes required to find a solution, especially for large or complex search trees.

Heuristic search algorithms work by evaluating the possible paths or states in a search tree and selecting the more promising ones to explore further. They use a heuristic function, a measure of how close a given state is to the goal state, to guide the search. This allows the algorithm to prioritize certain paths or states over others and to avoid exploring areas of the search space that are unlikely to lead to a solution. The solution reached is not necessarily the best; however, a ‘good enough’ one is found within a ‘fast enough’ time. This technique exemplifies the trade-off between optimality and speed (Russell et al., 2010).

Today, there is a rich literature on heuristic methods in computer science (Martí et al., 2018 ). As the problem to be solved may be the choice of a suitable heuristic algorithm, there are also meta-heuristics that have been explored (Glover and Kochenberger, 2003 ), and even hyper-heuristics which may serve to find or generate a suitable meta-heuristic (Burke et al., 2003 ). As Sörensen et al. ( 2018 ) point out, the term ‘metaheuristic’ may refer either to an ‘algorithmic framework that provides a set of guidelines or strategies to develop heuristic optimization algorithms’—or to a specific algorithm that is based on such a framework. E.g., a metaheuristic to find a suitable search algorithm may be inspired by the framework of biological evolution and use its ideas of mutation, reproduction and selection to produce a particular search algorithm. While this algorithm will still be a heuristic one, the fact that it has been generated by an evolutionary process indicates its superiority over alternatives that have been eliminated in the course of that process (cf. Vikhar, 2016 ).

Daniel Kahneman and Amos Tversky: heuristics and biases

Inspired by the concepts of Herbert Simon, psychologists Daniel Kahneman and Amos Tversky initiated the heuristics and biases research programme in the early 1970s, which emphasized how individuals make judgements and the conditions under which those judgements may be inaccurate (Kahneman and Klein, 2009 ).

In addition, Kahneman and Tversky emphasized information processing to elaborate on how real people with limitations can decide, choose, or estimate (Kahneman, 2011 ).

The landmark article Judgment under Uncertainty: Heuristics and Biases, published in 1974, is considered the key that opened the door to research on this topic, although it was and still is considered controversial (Kahneman, 2011). In their research, Kahneman and Tversky identified three heuristics by which probabilities are often assessed: availability, representativeness, and anchoring and adjustment. In passing, they mention that other heuristics are used to form non-probabilistic judgements; for example, the distance of an object may be assessed according to the clarity with which it is seen. Other researchers subsequently introduced further types of heuristics, but availability, representativeness, and anchoring are still considered fundamental for judgements under uncertainty.

Availability

According to the psychological definition, availability or accessibility is the ease with which a specific thought comes to mind or can be inferred. Many people use this type of heuristic when judging the probability of an event that may have happened or will happen in the future. Hence, people tend to overestimate the likelihood of a rare event if it easily comes to mind because it is frequently mentioned in daily discussions (Kahneman, 2011 ). For instance, individuals overestimate their probability of being victims of a terrorist attack while the real probability is negligible. However, since terrorist attacks are highly available in the media, the feeling of a personal threat from such an attack will also be highly available during our daily life (Kahneman, 2011 ).

This concept is also present in business: we remember the successful start-ups whose founders quit college to pursue their dreams, such as Steve Jobs and Mark Zuckerberg, and ignore the thousands of ideas, start-ups, and founders that failed. This is because successful companies are a hot topic and receive broad coverage in the media, while failures do not. Similarly, broad media coverage is known to create top-of-mind awareness (TOMA) (Farris et al., 2010). Moreover, the availability heuristic has been offered as an explanation for illusory correlations, in which individuals wrongly judge two events to be related when they are not. Tversky and Kahneman explained that individuals judge such relationships based on the ease of imagining the two events together (Tversky and Kahneman, 1973).

Representativeness

The representativeness heuristic is applied when individuals assess the probability that an object belongs to a particular class or category based on how much it resembles the typical case or prototype representing this category (Tversky and Kahneman, 1974 ). Conceptually, this heuristic can be decomposed into three parts. The first one is that the ideal case or prototype of the category is considered representative of the group. The second part judges the similarity between the object and the representative prototype. The third part is that a high degree of similarity indicates a high probability that the object belongs to the category, and a low degree of similarity indicates a low probability.

While the heuristic is often applied automatically within an instant and may be compelling in many cases, Tversky and Kahneman point out that the third part of the heuristic will often lead to serious errors or, at any rate, biases.

In particular, the representativeness heuristic can give rise to what is known as the base-rate fallacy. As an example, Tversky and Kahneman consider an individual named Steve, who is described as shy, withdrawn, and somewhat pedantic. People asked to assess, based on this description, whether Steve is more likely to be a librarian or a farmer invariably consider it more likely that he is a librarian, ignoring the fact that there are many more farmers than librarians, a fact that any estimate of the respective probabilities must take into account.

Another example involves a taxicab engaged in an accident. The data indicate that 85% of the city’s taxicabs are green and 15% blue. An eyewitness claims that the cab involved was blue. The court tests the witness’s reliability and finds that he identifies colours correctly 80% of the time and incorrectly 20% of the time. What, then, is the probability that the cab involved was blue, given that the witness identified it as blue?

To evaluate this case correctly, people should consider the base rate, 15% of the cabs being blue, and the witness accuracy rate, 80%. Of course, if the number of cabs is equally split between colours, then the only factor in deciding is the reliability of the witness, which is an 80% probability.

However, regardless of the colours’ distribution, most participants responded with 80%. Even participants who tried to take the base rate into account estimated a probability of more than 50%, while the correct answer, obtained by Bayesian inference, is 41% (Kahneman, 2011).
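The Bayesian calculation behind the 41% figure can be written out in a few lines, using the numbers stated in the problem:

```python
# Worked Bayesian solution to the taxicab problem described above.
# Priors: 85% of cabs are green, 15% are blue; the witness is 80% reliable.
p_blue, p_green = 0.15, 0.85
p_says_blue_if_blue = 0.80    # witness correctly identifies a blue cab
p_says_blue_if_green = 0.20   # witness mistakes a green cab for blue

# Bayes' rule: P(blue | witness says blue)
posterior = (p_blue * p_says_blue_if_blue) / (
    p_blue * p_says_blue_if_blue + p_green * p_says_blue_if_green
)
print(round(posterior, 2))  # 0.41
```

The base rate pulls the answer well below the witness’s 80% accuracy: most cabs are green, so even a fairly reliable ‘blue’ report is wrong a substantial fraction of the time.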

In relation to the representativeness heuristic, Kahneman (2011) illustrated the ‘conjunction fallacy’ with the following example: based only on a detailed description of a character named Linda, doctoral students in the decision-science programme of the Stanford Graduate School of Business, all of whom had taken several advanced courses in probability, statistics, and decision theory, were asked to rank various further descriptions of Linda according to their probability. Even Kahneman and Tversky were surprised to find that 85% of the students ranked ‘Linda is a bank teller and active in the feminist movement’ as more likely than ‘Linda is a bank teller’.
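The ranking contradicts a basic law of probability: a conjunction can never be more probable than either of its conjuncts. Writing A for ‘Linda is a bank teller’ and B for ‘Linda is active in the feminist movement’:

```latex
P(A \cap B) \le P(A) \quad\text{and}\quad P(A \cap B) \le P(B)
```

Ranking the conjunction above ‘bank teller’ alone violates the first inequality, however representative the combined description feels.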

From these and many other examples, one must conclude that even sophisticated humans use the representativeness heuristic to make probability judgements without referring to what they know about probability.

Representativeness is used to make probability judgements and judgements about causality. The similarity of A and B neither indicates that A causes B nor that B causes A. Nevertheless, if A precedes B and is similar to B, it is often judged to be B’s cause.

Adjustment and anchoring

According to Tversky and Kahneman’s interpretation, the anchor is the first number introduced in a question; it forms the centre of a range (upwards or downwards) within which the best answer is assumed to lie (Baron, 2000). This has been used and tested in several academic and real-world settings, including business negotiations, where parties anchor their prices to establish the range of acceptance within which they can close the deal, deriving the ceiling and floor from the anchor. The effect is more dominant when parties lack the time to analyse actions thoroughly.

Significantly, even if the anchor is way beyond logical boundaries, it can still bias the estimated numbers by all parties without them even realizing that it does (Englich et al., 2006 ).

In one of their experiments, Tversky and Kahneman (1974) asked one group of participants to quickly estimate the product of the numbers from 1 to 8 and another group to do so from 8 down to 1. Since the time was limited to 5 s, participants had to extrapolate from a partial calculation. The group that started from 1 gave a median estimate of 512, while the group that started from 8 gave a median estimate of 2250. The correct answer is 40,320.

This is perhaps one of the least clear-cut cognitive heuristics introduced by Kahneman and Tversky, as it can just as well be regarded as a bias rather than a heuristic. The problem is that the mind tends to fixate on the anchor and adjust relative to it, whether it was introduced implicitly or explicitly. Some scholars even believe that such a bias/heuristic is unavoidable. For instance, in one study, participants were asked whether they believed that Mahatma Gandhi died before or after the age of nine, or before or after the age of 140. These anchors are obviously unrealistic. Nevertheless, when the participants were later asked to estimate Gandhi’s age at death, the group anchored to nine years gave an average estimate of 50, while the group anchored to 140 estimated the age at death to be as high as 67 (Strack and Mussweiler, 1997).

Gerd Gigerenzer: fast-and-frugal heuristics

The German psychologist Gerd Gigerenzer is one of the most influential figures in the field of decision-making, with a particular emphasis on the use of heuristics. He has built much of his research on the theories of Herbert Simon and considers that Simon’s theory of bounded rationality was unfinished (Gigerenzer, 2015 ). As for Kahneman and Tversky’s work, Gigerenzer has a different approach and challenges their ideas with various arguments, facts, and numbers.

Gigerenzer explores how people make sense of their reality with constrained time and data. Since the world around us is highly uncertain, complex, and volatile, he suggests that probability theory cannot stand as the ultimate concept and is incapable of interpreting everything, particularly when probabilities are unknown. Instead, people tend to use the effortless approach of heuristics. Gigerenzer introduced the concept of the adaptive toolbox, which is a collection of mental shortcuts that a person or group of people can choose from to solve a current problem (Gigerenzer, 2000 ). A heuristic is considered ecologically rational if adjusted to the surrounding ecosystem (Gigerenzer, 2015 ).

A daring argument of Gigerenzer, which very much opposes the heuristics and biases approach of Kahneman and Tversky, is that heuristics cannot be considered irrational or inferior to a solution by optimization or probability calculation. He explicitly argues that heuristics are not gambling shortcuts that are faster but riskier (Gigerenzer, 2008 ), but points to several situations where less is more, meaning that results from frugal heuristics, which neglect some data, were nevertheless more accurate than results achieved by seemingly more elaborate multiple regression or Bayesian methods that try to incorporate all relevant data. While researchers consider this counterintuitive since a basic rule in research seems to be that more data is always better than less, Gigerenzer points out that the less-is-more effect (abbreviated as LIME) could be confirmed by computer simulations. Without denying that in some situations, the effect of using heuristics may be biased (Gigerenzer and Todd, 1999 ), Gigerenzer emphasizes that fast-and-frugal heuristics are basic, task-oriented choice systems that are a part of the decision-maker’s toolbox, the available collection of cognitive techniques for decision-making (Goldstein and Gigerenzer, 2002 ).

Heuristics are considered economical because they are easy to execute, seek limited data, and do not include many calculations. Contrary to most traditional decision-making models followed in the social and behavioural sciences, models of fast-and-frugal heuristics portray not just the result of the process but also the process itself. They comprise three simple building blocks: the search rule that specifies how information is searched for, the stopping rule that specifies when the information search will be stopped, and finally, the decision rule that specifies how the processed information is integrated into a decision (Goldstein and Gigerenzer, 2002 ).

Rather than characterizing heuristics as rules of thumb or mental shortcuts that can cause biases and must therefore be regarded as irrational, Gigerenzer and his co-workers emphasize that fast-and-frugal heuristics are often ecologically rational, even if their conjunction is not logically consistent (Gigerenzer and Todd, 1999).

According to Goldstein and Gigerenzer ( 2002 ), a decision maker’s pool of mental techniques may contain logic and probability theory, but it also embraces a set of simple heuristics. It is compared to a toolbox because just as a wood saw is perfect for cutting wood but useless for cutting glass or hammering a nail into a wall, the ingredients of the adaptive toolbox are intended to tackle specific scenarios.

For instance, there are specific heuristics for choice tasks, estimation tasks, and categorization tasks. In what follows, we will discuss two well-known examples of fast-and-frugal heuristics: the recognition heuristic (RH), which utilizes the absence of data, and the take-the-best heuristic (TTB), which purposely disregards the data.

Both examples of heuristics can be connected to decision assignments and to circumstances in which a decision-maker needs to decide which of two options has a higher reward on a quantitative scale.

Typical scenarios would be deducing which of two stock shares will yield a higher return next month, which of two cars is more suitable for a family, or who is the better candidate for a particular job (Goldstein and Gigerenzer, 2002).

The recognition heuristic

The recognition heuristic has been examined extensively with a famous experiment asking which of two cities has the larger population. The experiment was conducted in 2002 with undergraduate students: one group in the USA and one in Germany. The question was: which has more inhabitants, San Diego or San Antonio? Given the cultural difference between the groups and their level of knowledge about American cities, one might expect the American students to be more accurate than their German peers. Indeed, most German students did not even know that San Antonio is an American city (Goldstein and Gigerenzer, 2002). Surprisingly, Goldstein and Gigerenzer found the opposite of what was expected: all of the German students answered correctly, while the American students achieved an accuracy rate of about 66%. Remarkably, the German students who had never heard of San Antonio gave more correct answers. Their lack of knowledge allowed them to apply the recognition heuristic, which states that if one of two objects is recognized and the other is not, one should infer that the recognized object has the higher value on the relevant criterion. The American students could not use the recognition heuristic because they were familiar with both cities. Ironically, they knew too much.
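The rule itself is simple enough to state as a short sketch; the set of recognized cities below stands in for a hypothetical German student’s knowledge:

```python
def recognition_heuristic(a, b, recognized):
    """If exactly one of two options is recognized, infer that it has
    the higher criterion value; otherwise the heuristic does not apply."""
    if a in recognized and b not in recognized:
        return a
    if b in recognized and a not in recognized:
        return b
    return None  # both or neither recognized: fall back to another strategy

# A student who has heard of San Diego but never of San Antonio:
recognized = {"San Diego"}
print(recognition_heuristic("San Diego", "San Antonio", recognized))  # San Diego
```

Note the `None` branch: like the American students in the experiment, a decision-maker who recognizes both cities cannot use this heuristic at all.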

The recognition heuristic is a powerful tool. It is often used for swift decisions, since recognition is usually systematic rather than arbitrary. Useful applications include city populations, players’ performance in major leagues, or writers’ productivity. However, the heuristic is less effective for criteria that correlate only weakly with recognition, such as a city mayor’s age or a city’s altitude above sea level (Gigerenzer and Todd, 1999).

Take-the-best heuristic

When the recognition heuristic is not applicable because the decision-maker has enough information about both options, another important heuristic can be used that relies on cues to arrive at a decision. The take-the-best (TTB) heuristic relies only on specific cues or signals and does not require any complex calculations. In practice, it often boils down to a one-reason decision rule, a type of heuristic in which judgements are based on a single good reason only, ignoring other cues (Gigerenzer and Gaissmaier, 2011). Under the TTB heuristic, decision-makers select the attributes that are important to them and sort these cues by importance to create a hierarchy for the decision. Alternatives are then compared on the first, i.e., the most important, cue; if one alternative is best on this cue, the decision is taken. Otherwise, the decision-maker moves on and checks the next level of cues. In other words, the decision is based on the most important attribute that discriminates between the alternatives (Gigerenzer and Goldstein, 1996). Although this lexicographic preference ordering is well known from traditional economic theory, it appears there mainly as a counterexample to the existence of a real-valued utility function (Debreu, 1959). Surprisingly, however, it seems to be used in many critical situations. For example, at many airports, customs officials may decide whether a traveller is selected for a further check by looking only at the most important attributes, such as the city of departure, nationality, or luggage weight (Pachur and Marinello, 2013). Moreover, a 2012 study explored voters’ views of how US presidential candidates would deal with the single issue each voter viewed as most significant, for example, the state of the economy or foreign policy. A model based on this attribute alone picked the winner in most cases (Graefe and Armstrong, 2012).

The TTB heuristic employs a stopping rule that applies once the search reaches a discriminating cue: if the most important cue discriminates, there is no need to search further, and only that one cue is considered. Otherwise, the next most important cue is examined. If no discriminating cue is found, the heuristic resorts to a random guess (Gigerenzer and Gaissmaier, 2011).
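The search, stopping, and decision rules can be sketched directly; the city attributes and cue ordering below are hypothetical, chosen only to show the mechanics:

```python
import random

def take_the_best(option_a, option_b, cues):
    """Take-the-best sketch.
    Search rule: look up cues in order of validity (most important first).
    Stopping rule: stop at the first cue that discriminates.
    Decision rule: choose the option favoured by that cue;
    guess at random if no cue discriminates."""
    for cue in cues:
        va, vb = cue(option_a), cue(option_b)
        if va != vb:
            return option_a if va > vb else option_b
    return random.choice([option_a, option_b])

# Hypothetical binary cues for two cities (1 = attribute present, 0 = absent).
cities = {
    "A": {"capital": 0, "major_airport": 1, "university": 1},
    "B": {"capital": 0, "major_airport": 0, "university": 1},
}
cues = [
    lambda c: cities[c]["capital"],        # most valid cue: does not discriminate
    lambda c: cities[c]["major_airport"],  # first discriminating cue: decides
    lambda c: cities[c]["university"],     # never examined in this comparison
]
print(take_the_best("A", "B", cues))  # A
```

Here the first cue ties, so the search continues; the second cue discriminates, the search stops, and the third cue is never consulted, which is precisely what makes the heuristic frugal.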

Empirical evidence on fast-and-frugal heuristics

Numerous studies have investigated fast-and-frugal heuristics, using analytical methods and simulations to establish when and why heuristics yield accurate results, and using experiments and observational methods to find out whether and when people actually use them (Luan et al., 2019). Structured comparisons and benchmarking against standard models, for example regression or Bayesian models, have shown that the accuracy of fast-and-frugal heuristics depends on the structure of the information environment (e.g., the distribution of cue validities and the intercorrelations between cues). In many situations, fast-and-frugal heuristics perform well, particularly in generalization, when making predictions for new cases that have not previously been experienced. Empirical examinations show that people use fast-and-frugal heuristics under time constraints and when data is hard to obtain or must be retrieved from memory. Remarkably, some studies have examined how individuals adapt to various situations by learning: Rieskamp and Otto (2006) found that individuals apparently learn to select the heuristic that performs best in a specific domain, and Reimer and Katsikopoulos (2004) found that individuals also apply fast-and-frugal heuristics when making inferences in groups.

While interest in heuristics has been increasing, much of the literature has been critical. In particular, the heuristics and biases programme introduced by Kahneman and Tversky has been the target of more than one critique (Reisberg, 2013).

The criticism runs mainly in two directions. The first is that the main focus is on coherence standards such as rationality, and that the detection of biases ignores the contextual and environmental factors in which judgements occur (B.R. Newell, 2013). The second is that notions such as availability or representativeness are vague and ill-defined, and say little about the processes underlying the judgements (Gigerenzer, 1996). For example, it has been argued that the responses in the famous Linda-the-bank-teller experiment could be considered sensible rather than biased if one applies conversational or colloquial standards instead of formal probability theory (Hilton, 1995).

The argument that the explanations of certain phenomena are vague can be illustrated by the following two scenarios. People tend to believe that the opposite outcome will occur after a streak of the same outcome (e.g., that 'heads' should come next in a coin-flipping game after many consecutive 'tails'). This is called the gambler's fallacy (Barron and Leider, 2010). By contrast, the hot-hand fallacy (Gilovich et al., 1985) holds that people tend to believe that a streak of the same outcome will continue when someone is having a lucky day (e.g., a basketball player shooting after a series of successful attempts). Ayton and Fischer (2004) argued that, although these two beliefs are quite opposite, both have been classified under the heuristic of representativeness. In both cases, a flawed idea of random events leads observers to expect a certain stream of results to be representative of the whole process. In the coin-flipping scenario, people believe that a long streak of tails should not occur, so heads is predicted; in the case of the sports player, the streak of the same outcome is expected to continue (Gilovich et al., 1985). Therefore, representativeness cannot be diagnosed without knowing the expected results in advance. Moreover, the heuristic does not clarify why people believe that a stream of random events should be representative of the underlying process when in reality it need not be (Ayton and Fischer, 2004).
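
The statistical point behind the gambler's fallacy can be checked directly: for independent fair coin flips, the relative frequency of heads after a run of tails stays near one half. The following Monte Carlo sketch (invented for illustration, with an arbitrary run length of three) makes this concrete.

```python
import random

# Simulate a fair coin and look at what follows a run of three tails.
random.seed(0)
flips = [random.random() < 0.5 for _ in range(200_000)]  # True = heads

after_three_tails = heads_next = 0
for i in range(3, len(flips)):
    if not any(flips[i - 3:i]):        # previous three flips were all tails
        after_three_tails += 1
        heads_next += flips[i]         # count heads on the next flip

# The ratio stays close to 0.5: past tails do not make heads 'due'.
print(round(heads_next / after_three_tails, 3))
```

The run has no influence on the next flip, which is why expecting a "correction" (gambler's fallacy) or a "continuation" (hot-hand fallacy, in a purely random setting) are both misreadings of the same independence.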

Nevertheless, the most common critique of Kahneman and Tversky is the idea that 'we cannot be that dumb': the heuristics and biases programme is said to be overly pessimistic in its assessment of average human decision-making. Moreover, humans have collectively accumulated many achievements and discoveries throughout history that would not have been possible if their capacity for adequate decision-making were so limited (Gilovich and Griffin, 2002).

Similarly, the probabilistic mental models (PMM) theory of human inference, inspired by Simon and pioneered by Gigerenzer, has also been criticized (B.R. Newell et al., 2003). Indeed, the enticing character of heuristics, namely that they are both easy to apply and efficient, has made them popular within different domains. However, it has also made them vulnerable to replications or variations of the experiments that challenge the original results. For example, Daniel Oppenheimer (2003) argues that the recognition heuristic (RH) did not yield satisfactory results when he replicated the city-population experiment. He claims that the participants' judgements failed to obey the RH not just when there were cues other than, and stronger than, mere recognition, but also in circumstances where recognition would have been the best cue available. One could counter that there are numerous methods in the adaptive toolbox and that, under certain conditions, people may prefer to use heuristics other than the RH. However, this reply is also questionable, since many heuristics thought to exist in the adaptive toolbox take the RH as an initial step (Gigerenzer and Todd, 1999). Hence, if individuals are not using the RH, they cannot use many of the other heuristics in the adaptive toolbox (Oppenheimer, 2003). Likewise, Newell et al. (2003) question whether fast-and-frugal heuristics accurately describe actual human behaviour. In two experiments, they challenged the take-the-best (TTB) heuristic, as it is considered a building block of the PMM framework. The outcomes of these experiments, together with others, such as those of Jones et al. (2000) and Bröder (2000), show that the TTB heuristic is not a reliable account even in circumstances favouring its use.
In a somewhat heated debate published in Psychological Review in 1996, Gigerenzer's criticism of Kahneman and Tversky, that many of the so-called biases 'disappear' if frequencies rather than probabilities are assumed, was countered by Kahneman and Tversky (1996) by means of a detailed re-examination of the conjunction fallacy (or Linda problem). Gigerenzer (1996) remained unconvinced and was in turn blamed by Kahneman and Tversky (1996, p. 591) for just reiterating 'his objections … without answering our main arguments'.

Our historical review has revealed a number of issues that have received little attention in the literature.

Deliberate vs. automatic heuristics

We have differentiated between deliberate and automatic heuristics, which often seem to be confused in the literature. It is a widely shared view today that the human brain often relies heavily on the fast and effortless 'System 1' in decision-making but can also use the more demanding tools of 'System 2', and it has been acknowledged, e.g., by Kahneman (2011, p. 98), that some heuristics belong to System 1 and others to System 2. Yet the two systems are not as clearly distinct as it may seem. In fact, the very wide range of what one may call 'heuristics' shows that there is a whole spectrum of fallible decision-making procedures, ranging from the probably innate problem-solving strategy of the baby that cries whenever it is hungry or has some other problem to the most elaborate and sophisticated procedures of, e.g., Polya, Bolzano, or contemporary chess engines. One may be tempted to characterize instinctive procedures as subconscious and sophisticated ones as conscious, but a deliberate heuristic can very well become a subconsciously applied 'habit of the mind' or learnt routine with experience and repetition. Vice versa, automatic, subconscious heuristics can well be raised to consciousness and applied deliberately. E.g., the 'inductive inference' from tasty strawberries to the assumption that all red berries are sweet and edible may be quite automatic and subconscious in little children, but the philosophical literature on induction shows that it can be elaborated into something quite conscious. However, while the notion of consciousness may be crucial for an adequate understanding of heuristics in human cognition, for the time being it remains a philosophical mystery (Harley, 2021; Searle, 1997), and once programmed, sophisticated heuristic algorithms can be executed by automata.

The deliberate heuristics that we reviewed also illustrate that some of them can hardly be called 'simple', 'shortcuts', or 'rules of thumb'. E.g., the heuristics of Descartes, Bolzano, and Polya each consist of a structured set of suggestions, and, e.g., 'to devise a plan' for a mathematical proof is certainly not a shortcut. Llull (1308, p. 329), to take another example, wrote of his 'ars magna' that 'the best kind of intellect can learn it in two months: one month for theory and another month for practice'.

Heuristics vs. algorithms

Our review of heuristics also allowed us to clarify the distinction between heuristics and algorithms. As evidenced by our glimpse at computer science, there are procedures that are quite obviously both an algorithm and a heuristic. Within computer science, they are in fact quite common. Algorithms of the heuristic type may be required for certain problems even though an algorithm that finds the optimal solution exists 'in principle', as in the case of determining the optimal strategy in chess, where the brute-force method of enumerating all possible plays of chess is just not practically feasible. In other cases, heuristic algorithms are used because an exhaustive search, while practically feasible, would be too costly or time-consuming. Clearly, for many problems, there are also problem-solving algorithms that always do produce the optimal solution in a reasonable time frame. Given our definition of a heuristic as a fallible method, algorithms of this kind are counterexamples to the complaint that the notion has become so wide that 'any procedure can be called a heuristic'. However, as we have seen, there are also heuristic procedures that are non-algorithmic. These may be necessary either because the problem to be solved is not sufficiently well-defined to allow for an algorithm, or because an algorithm that would solve the problem at hand is not known or does not exist. Kleining's qualitative heuristics is an example of non-algorithmic heuristics necessitated by the ill-defined problems of research in the social sciences, while Polya's heuristic for solving mathematical problems is an example of the latter: an algorithm that would allow one to decide whether a given mathematical conjecture is a theorem does not exist (cf. Davis, 1965).
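
A procedure that is both an algorithm and a heuristic can be illustrated with the travelling salesman problem (TSP): the nearest-neighbour rule always terminates with a tour (so it is an algorithm), but the tour it returns may be far from optimal (so it is a heuristic). The sketch below is a standard textbook illustration; the coordinates are invented for the example.

```python
import math

def nearest_neighbour_tour(points, start=0):
    """Greedy TSP heuristic: repeatedly visit the closest unvisited point."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (0, 1), (1, 1), (5, 0)]
print(nearest_neighbour_tour(cities))  # → [0, 1, 2, 3]
```

Unlike the brute-force enumeration of all tours, which is infeasible for large inputs, this runs in quadratic time; the price of that speed is fallibility.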

Pre-SEU vs. post-SEU heuristics

As we noted in the introduction, the emergence of the SEU theory can be regarded as a kind of watershed for the research on heuristics, as it came to be regarded as the standard definition of rational choice. Post-SEU, fallible methods of decision-making would have to face comparison with this standard. Gigerenzer's almost belligerent criticism of SEU shows that even today it seems difficult to discuss the pros and cons of heuristics unless one relates them to the backdrop of SEU. However, his criticism of SEU is mostly en passant and seems to assume that the SEU model requires 'known probabilities' (e.g., Gigerenzer, 2021), ignoring the fact that the model relies, in general, on subjective probabilities, as derived from the agent's preferences among lotteries (cf. e.g., Jeffrey, 1967, or Gilboa, 2011). In fact, when applied to an ill-defined decision problem in, e.g., management, the SEU theory may well be regarded as a heuristic: it asks you to consider the possible consequences of the relevant set of actions, your preferences among those consequences, and the likelihood of those consequences. To the extent that one may get all of these elements wrong, SEU is a fallible method of decision-making. To be sure, it is not a fast and effortless heuristic, but our historical review of pre-SEU heuristics has illustrated that heuristics may be quite elaborate and require considerable effort and attention.
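
Read as a decision procedure, SEU amounts to: list the actions, attach a subjective probability and a utility to each consequence, and choose the action with the highest expectation. A minimal numerical sketch, with all probabilities and utilities invented for illustration:

```python
def seu(action_outcomes):
    """Subjective expected utility of one action.

    action_outcomes: list of (subjective probability, utility) pairs,
    one per possible consequence of the action.
    """
    return sum(p * u for p, u in action_outcomes)

# Hypothetical management decision: launch a product now or wait a year.
actions = {
    "launch_now": [(0.6, 100), (0.4, -50)],  # market ready / not ready
    "wait":       [(0.9, 40),  (0.1, 0)],    # safer but smaller payoff
}

best = max(actions, key=lambda a: seu(actions[a]))
for name, outcomes in actions.items():
    print(name, seu(outcomes))   # launch_now 40.0, wait 36.0
print("choose:", best)           # choose: launch_now
```

Every number here is a judgement the decision-maker can get wrong, which is precisely the sense in which SEU, applied to an ill-defined problem, is itself a fallible method.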

It is quite true, of course, that the SEU heuristic will hardly be helpful in problem-solving that is not ‘just’ decision-making. If, e.g., the problem to be solved is to find a proof for a mathematical conjecture, the set of possible actions will in general be too vast to be practically contemplated, let alone evaluated according to preferences and probabilities.

Positive vs. negative heuristics

To the extent that the study of heuristics aims at understanding how decisions are actually made, it is not only positive heuristics that need to be considered. It will also be necessary to investigate the conditions that may prevent the agent from adopting certain courses of action. As we saw, Lakatos used the notion of negative heuristics quite explicitly to characterize research programmes, but we also briefly reviewed Duncker's notion of 'functional fixedness' as an example of a hindrance to adequate problem-solving. A systematic study of such negative heuristics seems to be missing in the literature, and we believe it may be a helpful complement to the study of positive heuristics that has dominated the literature we reviewed.

To the extent that heuristics are studied with the normative aim of identifying effective heuristics, it may also be useful to consider approaches that should not be taken. ‘Do not try to optimize!’ might be a negative heuristic favoured by the fast-and-frugal school of thought.

Heuristics as the product of evolution

Clearly, heuristics have always existed throughout the development of human knowledge due to the 'old mind's' evolutionary roots and the frequent necessity to apply fast and sufficiently reliable behaviour patterns. However, unlike the behaviour patterns of other animals, the methods used by humans in problem-solving are sufficiently diverse that dual-process theory was suggested to provide some structure to the rich 'toolbox' humans can and do apply. As all human DNA is the product of evolution, it is not only the intuitive inclinations to react to certain stimuli in a particular way that must be seen as the product of evolution, but also our ability to abstain from following our gut feelings when there is reason to do so, and to reflect on and analyse the situation before we embark on a particular course of action. Quite frequently, we experience a tension between our intuitive inclinations and our analytic mind's judgement, but both are somehow the product of evolution, our biography, and the environment. Thus, pointing out that gut feelings are an evolved capacity of the brain in no way supports their superiority over the reflective mind.

Moreover, compared to the speed at which problems change within human lifetimes, biological evolution is very slow. The evolved capacities of the human brain may have been well-adapted to the survival needs of our ancestors some 300,000 years ago, but there is little reason to believe that they are uniformly well-adapted to human problem-solving in the 21st century.

Resource-bounded and ecological rationality

Throughout our review, the reader will have noticed that many heuristics have been suggested for specific problem areas. The methods of the ancient Greeks were mainly centred around solving geometrical problems. Llull was primarily concerned with theological questions, Descartes and Leibniz pursued 'mechanical' solutions to philosophical issues, Polya suggested heuristics for mathematics, Müller for engineering, and Kleining for social science research. This already suggests that heuristics suitable for one type of problem need not be suitable for a different type. Likewise, the automatic heuristics that both the Kahneman-Tversky and the Gigerenzer schools focused on are triggered by particular tasks. Simon's observation that the success of a given heuristic depends on the environment in which it is employed is undoubtedly an important one; it has motivated Gigerenzer's notion of ecological rationality and is strikingly absent from the SEU model. If 'environment' is taken in a broad sense that includes the available resources and the cost of time and effort, the notion seems to cover what has been called resource-rational behaviour (e.g., Bhui et al., 2021).

Avenues of further research

A comprehensive study describing the current status of the research on heuristics and their relation to SEU seems to be missing and is beyond the scope of our brief historical review. Insights into their interrelationship can be expected from recent attempts at formal modelling of human cognition that take the issues of limited computational resources and context-dependence of decision-making seriously. E.g., Lieder and Griffiths (2020) do this from a Bayesian perspective, while Busemeyer et al. (2011) and Pothos and Busemeyer (2022) use a generalization of standard Kolmogorov probability theory that is also the basis of quantum mechanics and quantum computation. While it may seem at first glance that such modelling assumes even more computational power than the standard SEU model of decision-making, the computational power is not assumed on the part of the human decision-maker. Rather, the claim is that the decision-maker behaves as if s/he were solving an optimization problem under additional constraints, e.g., on computational resources. The 'as if' methodology employed here is well known to economists (Friedman, 1953; Mäki, 1998) and also to mathematical biologists, who have used Bayesian models to explain animal behaviour (McNamara et al., 2006; Oaten, 1977; Pérez-Escudero and de Polavieja, 2011). Evolutionary arguments might be invoked to support this methodology if a survival disadvantage could be shown to result from behaviour patterns that are not Bayesian-optimal, but we are not aware of research that substantiates such arguments. However, attempting to do so by embedding formal models of cognition in models of evolutionary game theory may be a promising avenue for further research.

NP stands for ‘nondeterministic polynomial-time’, which indicates that the optimal solution can be found by a nondeterministic Turing-machine in a running time that is bounded by a polynomial function of the input size. In fact, the TSP is ‘NP-hard’ which means that it is ‘at least as hard as the hardest problems in the category of NP problems’.

Agre P, Horswill I (1997) Lifeworld analysis. J Artif Intell Res 6:111–145

Ayton P, Fischer I (2004) The hot hand fallacy and the gambler's fallacy: two faces of subjective randomness? Mem Cogn 32:8

Banse G, Friedrich K (2000) Konstruieren zwischen Kunst und Wissenschaft. Edition Sigma, Idee‐Entwurf‐Gestaltung, Berlin

Baron J (2000) Thinking and deciding. Cambridge University Press

Barron G, Leider S (2010) The role of experience in the Gambler’s Fallacy. J Behav Decision Mak 23:1

Barros G (2010) Herbert A Simon and the concept of rationality: boundaries and procedures. Braz J Political Econ 30:3

Baumeister RF, Vohs KD (2007) Encyclopedia of social psychology, vol 1. SAGE

Bazerman MH, Moore DA (1994) Judgment in managerial decision making. Wiley, New York

Bentley JL (1982) Writing efficient programs Prentice-Hall software series. Prentice-Hall

Bhui R, Lai L, Gershman S (2021) Resource-rational decision making. Curr Opin Behav Sci 41:15–21. https://doi.org/10.1016/j.cobeha.2021.02.015

Bolzano B (1837) Wissenschaftslehre. Seidelsche Buchhandlung, Sulzbach

Bossaerts P, Murawski C (2017) Computational complexity and human decision-making. Trends Cogn Sci 21(12):917–929

Boyer CB (1991) The Arabic Hegemony. A History of Mathematics. Wiley, New York

Bröder A (2000) Assessing the empirical validity of the “Take-the-best” heuristic as a model of human probabilistic inference. J Exp Psychol Learn Mem Cogn 26:5

Burke E, Kendall G, Newall J, Hart E, Ross P, Schulenburg S (2003) Hyper-heuristics: an emerging direction in modern search technology. In: Glover F, Kochenberger GA (eds) Handbook of metaheuristics. International series in operations research & management science, vol 57. Springer, Boston, MA

Busemeyer JR, Pothos EM, Franco R, Trueblood JS (2011) A quantum theoretical explanation for probability judgment errors. Psychol Rev 118(2):193

Buss DM, Kenrick DT (1998) Evolutionary social psychology. In: D T Gilbert, S T Fiske, G Lindzey (eds.), The handbook of social psychology. McGraw-Hill, p. 982–1026

Byron M (1998) Satisficing and optimality. Ethics 109:1

Davis M (ed) (1965) The undecidable. Basic papers on undecidable propositions, unsolvable problems and computable functions. Raven Press, New York

Debreu G (1959) Theory of value: an axiomatic analysis of economic equilibrium. Yale University Press

Descartes R (1908) Rules for the direction of the mind. In: Adam C, Tannery P (eds) Oeuvres de Descartes, vol 10. J Vrin, Paris

Descartes R (1998) Discourse on the method for conducting one’s reason well and for seeking the truth in the sciences (1637) (trans and ed: Cress D). Hackett, Indianapolis

Dunbar RIM (1998) Grooming, gossip, and the evolution of language. Harvard University Press

Duncker K (1935) Zur Psychologie des produktiven Denkens. Springer

Englich B, Mussweiler T, Strack F (2006) Playing dice with criminal sentences: the influence of irrelevant anchors on experts’ judicial decision making. Personal Soc Psychol Bull 32:2

Evans JSB (2010) Thinking twice: two minds in one brain. Oxford University Press

Farr RM (1996) The roots of modern social psychology, 1872–1954. Blackwell Publishing

Farris PW, Bendle N, Pfeifer P, Reibstein D (2010) Marketing metrics: the definitive guide to measuring marketing performance. Pearson Education

Fidora A, Sierra C (2011) Ramon Llull, from the Ars Magna to artificial intelligence. Artificial Intelligence Research Institute, Barcelona

Frantz R (2003) Herbert Simon. Artificial intelligence as a framework for understanding intuition. J Econ Psychol 24:2. https://doi.org/10.1016/S0167-4870(02)00207-6

Friedman M (1953) The methodology of positive economics. In: Friedman M (ed) Essays in positive economics. University of Chicago Press

Ghiselin MT (1973) Darwin and evolutionary psychology. Science (New York, NY) 179:4077

Gibbons A (2007) Paleoanthropology. Food for thought. Science (New York, NY) 316:5831

Gigerenzer G (1996) On narrow norms and vague heuristics: a reply to Kahneman and Tversky (1996). Psychol Rev 103(3):592–596

Gigerenzer G (2000) Adaptive thinking: rationality in the real world. Oxford University Press, USA

Gigerenzer G (2008) Why heuristics work. Perspect Psychol Sci 3:1

Gigerenzer G (2015) Simply rational: decision making in the real world. Evol Cogn

Gigerenzer G (2021) Embodied heuristics. Front Psychol https://doi.org/10.3389/fpsyg.2021.711289

Gigerenzer G, Gaissmaier W (2011) Heuristic decision making. Annu Rev Psychol 62:451–482

Gigerenzer G, Goldstein DG (1996) Reasoning the fast and frugal way: models of bounded rationality. Psychol Rev 103:4

Gigerenzer G, Selten R (eds) (2001) Bounded rationality: the adaptive toolbox. MIT Press

Gigerenzer G, Todd PM (1999) Simple heuristics that make us smart. Oxford University Press, USA

Gilboa I (2011) Making better decisions. Decision theory in practice. Wiley-Blackwell

Gilovich T, Griffin D (2002) Introduction—heuristics and biases: then and now in heuristics and biases: the psychology of intuitive judgment (8). Cambridge University Press

Gilovich T, Vallone R, Tversky A (1985) The hot hand in basketball: on the misperception of random sequences. Cogn Psychol 17:3

Glaveanu VP (2019) The creativity reader. Oxford University Press

Glover F, Kochenberger GA (eds) (2003) Handbook of metaheuristics. International series in operations research & management science, vol 57. Springer, Boston, MA

Goldstein DG, Gigerenzer G (2002) Models of ecological rationality: the recognition heuristic. Psychol Rev 109:1

Graefe A, Armstrong JS (2012) Predicting elections from the most important issue: a test of the take-the-best heuristic. J Behav Decision Mak 25:1

Groner M, Groner R, Bischof WF (1983) Approaches to heuristics: a historical review. In: Groner R, Groner M, Bischof WF (eds) Methods of heuristics. Erlbaum

Groner R, Groner M (1991) Heuristische versus algorithmische Orientierung als Dimension des individuellen kognitiven Stils. In: Grawe K, Semmer N, Hänni R (Hrsg) Üher die richtige Art, Psychologie zu betreiben. Hogrefe, Göttingen

Gugerty L (2006) Newell and Simon’s logic theorist: historical background and impact on cognitive modelling. In: Proceedings of the human factors and ergonomics society annual meeting. Symposium conducted at the meeting of SAGE Publications. Sage, Los Angeles, CA

Harel D (2000) Computers Ltd: what they really can’t do. Oxford University Press

Harley TA (2021) The science of consciousness: waking, sleeping and dreaming. Cambridge University Press

Harris B (1979) Whatever happened to little Albert? Am Psychol 34:2

Heath TL (1926) The thirteen books of Euclid’s elements. Introduction to vol I, 2nd edn. Cambridge University Press

Hertwig R, Pachur T (2015) Heuristics, history of. In: International encyclopedia of the social behavioural sciences. Elsevier, pp. 829–835

Hilton DJ (1995) The social context of reasoning: conversational inference and rational judgment. Psychol Bull 118:2

Hopcroft JE, Motwani R, Ullman JD (2007) Introduction to Automata Theory, languages, and computation. Addison Wesley, Boston/San Francisco/New York

Jeffrey R (1967) The logic of decision, 2nd edn. McGraw-Hill

Jones S, Juslin P, Olsson H, Winman A (2000) Algorithm, heuristic or exemplar: Process and representation in multiple-cue judgment. In: Proceedings of the 22nd annual conference of the Cognitive Science Society. Symposium conducted at the meeting of Erlbaum, Hillsdale, NJ

Kahneman D (2011) Thinking, fast and slow. Farar, Straus and Giroux

Kahneman D, Klein G (2009) Conditions for intuitive expertise: a failure to disagree. Am Psychol 64:6

Kahneman D, Tversky A (1996) On the reality of cognitive illusions. Psychol Rev 103(3):582–591

Khaldun I (1967) The Muqaddimah. An introduction to history (trans: Arabic by Rosenthal F). Abridged and edited by Dawood NJ. Princeton University Press

Klein G (2001) The fiction of optimization. In: Gigerenzer G, Selten R (eds) Bounded Rationality: The Adaptive Toolbox. MIT Press Editors

Kleining G (1982) Umriss zu einer Methodologie qualitativer Sozialforschung. Kölner Z Soziol Sozialpsychol 34:2

Kleining G (1995) Von der Hermeneutik zur qualitativen Heuristik. Beltz

Lakatos I (1970) Falsification and the methodology of scientific research programmes. In: Lakatos I, Musgrave A (eds) Criticism and the growth of knowledge. Cambridge University Press

Leibniz GW (1880) Die philosophischen Schriften von GW Leibniz, vol IV. Gerhardt CI (ed)

Lerner RM (1978) Nature Nurture and Dynamic Interactionism. Human Development 21(1):1–20. https://doi.org/10.1159/000271572

Lieder F, Griffiths TL (2020) Resource-rational analysis: understanding human cognition as the optimal use of limited computational resources. Behavioral and Brain Sciences. Vol 43, e1. Cambridge University Press

Link D (2010) Scrambling TRUTH: rotating letters as a material form of thought. Variantology 4, p. 215–266

Llull R (1308) Ars generalis ultima (trans: Dambergs Y). https://lullianarts.narpan.net/

Luan S, Reb J, Gigerenzer G (2019) Ecological rationality: fast-and-frugal heuristics for managerial decision-making under uncertainty. Acad Manag J 62:6

Mäki U (1998) As if. In: Davis J, Hands DW, Mäki U (ed) The handbook of economic methodology. Edward Elgar Publishing

Martí R, Pardalos P, Resende M (eds) (2018) Handbook of heuristics. Springer, Cham

McDougall W (2015) An introduction to social psychology. Psychology Press

McNamara JM, Green RF, Olsson O (2006) Bayes’ theorem and its applications in animal behaviour. Oikos 112(2):243–251. http://www.jstor.org/stable/3548663

Newborn M (1997) Kasparov versus Deep Blue: computer chess comes of age. Springer

Newell A, Shaw JC, Simon HA (1959) Report on a general problem-solving program. In: R. Oldenbourg (ed) IFIP congress. UNESCO, Paris

Newell A, Simon HA (1972) Human problem solving. Prentice-Hall, Englewood Cliffs, NJ

Newell BR (2013) Judgment under uncertainty. In: Reisberg D (ed) The Oxford handbook of cognitive psychology. Oxford University Press

Newell BR, Weston NJ, Shanks DR (2003) Empirical tests of a fast-and-frugal heuristic: not everyone “takes the best”. Organ Behav Hum Decision Processes 91:1

Oaten A (1977) Optimal foraging in patches: a case for stochasticity. Theor Popul Biol 12(3):263–285

Oppenheimer DM (2003) Not so fast! (and not so frugal!): rethinking the recognition heuristic. Cognition 90:1

Pachur T, Marinello G (2013) Expert intuitions: how to model the decision strategies of airport customs officers? Acta Psychol 144:1

Pearl J (1984) Heuristics: intelligent search strategies for computer problem solving. Addison-Wesley Longman Publishing Co, Inc

Pérez-Escudero A, de Polavieja G (2011) Collective animal behaviour from Bayesian estimation and probability matching. Nature Precedings

Pinheiro CAR, McNeill F (2014) Heuristics in analytics: a practical perspective of what influences our analytical world. Wiley Online Library

Polya G (1945) How to solve it. Princeton University Press

Polya G (1954) Induction and analogy in mathematics. Princeton University Press

Pombo O (2002) Leibniz and the encyclopaedic project. In: Actas do Congresso Internacional Ciência, Tecnologia Y Bien Comun: La atualidad de Leibniz

Pothos EM, Busemeyer JR (2022) Quantum cognition. Annu Rev Psychol 73:749–778

Priest G (2008) An introduction to non-classical logic: from if to is. Cambridge University Press

Ramsey FP (1926) Truth and probability. In: Braithwaite RB (ed) The foundations of mathematics and other logical essays. McMaster University Archive for the History of Economic Thought. https://EconPapers.repec.org/RePEc:hay:hetcha:ramsey1926

Reimer T, Katsikopoulos K (2004) The use of recognition in group decision-making. Cogn Sci 28:6

Reisberg D (ed) (2013) The Oxford handbook of cognitive psychology. Oxford University Press

Rieskamp J, Otto PE (2006) SSL: a theory of how people learn to select strategies. J Exp Psychol Gen 135:2

Ritchey T (2022) Ramon Llull and the combinatorial art. https://www.swemorph.com/amg/pdf/ars-morph-1-draft-ch-4.pdf

Ritter J, Gründer K, Gabriel G, Schepers H (2017) Historisches Wörterbuch der Philosophie online. Schwabe Verlag

Russell SJ, Norvig P, Davis E (2010) Artificial intelligence: a modern approach, 3rd edn. Prentice-Hall series in artificial intelligence. Prentice-Hall

Savage LJ (ed) (1954) The foundations of statistics. Courier Corporation

Schacter D, Gilbert D, Wegner D (2011) Psychology, 2nd edn. Worth

Schaeffer J, Burch N, Bjornsson Y, Kishimoto A, Muller M, Lake R, Lu P, Sutphen S (2007) Checkers is solved. Science 317(5844):1518–1522

Schreurs BG (1989) Classical conditioning of model systems: a behavioural review. Psychobiology 17:2

Scopus (2022) Search “heuristics”. https://www.scopus.com/standard/marketing.uri (TITLE-ABS-KEY(heuristic) AND (LIMIT-TO (SUBJAREA,"DECI") OR LIMIT-TO (SUBJAREA,"SOCI") OR LIMIT-TO (SUBJAREA,"BUSI"))) Accessed on 16 Apr 2022

Searle JR (1997) The mystery of consciousness. Granta Books

Semaan G, Coelho J, Silva E, Fadel A, Ochi L, Maculan N (2020) A brief history of heuristics: from Bounded Rationality to Intractability. IEEE Latin Am Trans 18(11):1975–1986. https://latamt.ieeer9.org/index.php/transactions/article/view/3970/682

Sen S (2020) The environment in evolution: Darwinism and Lamarckism revisited. Harvest Volume 1(2):84–88. https://doi.org/10.2139/ssrn.3537393

Shah AK, Oppenheimer DM (2008) Heuristics made easy: an effort-reduction framework. Psychol Bull 134:2

Siitonen A (2014) Bolzano on finding out intentions behind actions. In: From the ALWS archives: a selection of papers from the International Wittgenstein Symposia in Kirchberg am Wechsel

Simon HA (1955) A behavioural model of rational choice. Q J Econ 69:1

Simon HA, Newell A (1958) Heuristic problem solving: the next advance in operations research. Oper Res 6(1):1–10. http://www.jstor.org/stable/167397

Smith R (2020) Aristotle’s logic. In: Zalta EN (ed) The Stanford encyclopedia of philosophy, 2020th edn. Metaphysics Research Lab, Stanford University

Smulders TV (2009) Darwin 200: special feature on brain evolution. Biology Letters 5(1), p. 105–107

Sörensen K, Sevaux M, Glover F (2018) A history of metaheuristics. In: Martí R, Pardalos P, Resende M (eds) Handbook of heuristics. Springer, Cham

Stephenson N (2003) Theoretical psychology: critical contributions. Captus Press

Strack F, Mussweiler T (1997) Explaining the enigmatic anchoring effect: mechanisms of selective accessibility. J Person Soc Psychol 73:3

Sullivan D (2002) How search engines work. SEARCH ENGINE WATCH, at http://www.searchenginewatch.com/webmasters/work.Html (Last Updated June 26, 2001) (on File with the New York University Journal of Legislation and Public Policy). http://www.searchenginewatch.com

Suppes P (1983) Heuristics and the axiomatic method. In: Groner R et al (ed) Methods of Heuristics. Routledge

Turing A (1937) On computable numbers, with an application to the entscheidungsproblem. Proc Lond Math Soc s2-42(1):230–265

Article   MathSciNet   MATH   Google Scholar  

Tversky A, Kahneman D (1973) Availability: a heuristic for judging frequency and probability. Cogn Psychol 5:2

Tversky A, Kahneman D (1974) Judgment under uncertainty: heuristics and biases. Science (New York, NY) 185::4157

Vardi MY (2012) Artificial intelligence: past and future. Commun ACM 55:1

Vikhar PA (2016) Evolutionary algorithms: a critical review and its future prospects. Paper presented at the international conference on global trends in signal processing, information computing and communication (ICGTSPICC). IEEE, pp. 261–265

Volz V, Rudolph G, Naujoks B (2016) Demonstrating the feasibility of automatic game balancing. Paper presented at the proceedings of the Genetic and Evolutionary Computation Conference, pp. 269–276

von Neumann J, Morgenstern O (1944) Theory of games and economic behaviour. Princeton University Press, Princeton, p. 1947

Zermelo E (1913) Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels. In: Proceedings of the fifth international congress of mathematicians. Symposium conducted at the meeting of Cambridge University Press, Cambridge. Cambridge University Press, Cambridge

Zilio D (2013) Filling the gaps: skinner on the role of neuroscience in the explanation of behavior. Behavior and Philosophy, 41, p. 33–59

Download references

Acknowledgements

We would like to extend our sincere thanks to the reviewers for their valuable time and effort in reviewing our work. Their insightful comments and suggestions have greatly improved the quality of our manuscript.

Open Access funding enabled and organized by Projekt DEAL.

Author information

Authors and Affiliations

HHL Leipzig Graduate School of Management, Leipzig, Germany

Mohamad Hjeij & Arnis Vilks


Corresponding author

Correspondence to Mohamad Hjeij .

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical approval

This article does not contain any studies with human participants performed by any of the authors.

Informed consent

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Hjeij, M., Vilks, A. A brief history of heuristics: how did research on heuristics evolve? Humanit Soc Sci Commun 10, 64 (2023). https://doi.org/10.1057/s41599-023-01542-z


Received : 25 July 2022

Accepted : 30 January 2023

Published : 17 February 2023

DOI : https://doi.org/10.1057/s41599-023-01542-z


This article is cited by

Janhonen, J. Socialisation approach to AI value acquisition: enabling flexible ethical navigation with built-in receptiveness to social influence. AI and Ethics (2023)


Nielsen Norman Group: World Leaders in Research-Based User Experience

The Theory Behind Heuristic Evaluations

November 1, 1994

Heuristic evaluation   (Nielsen and Molich, 1990; Nielsen 1994) is a usability engineering method for finding the usability problems in a user interface design so that they can be attended to as part of an iterative design process. Heuristic evaluation involves having a small set of evaluators examine the interface and judge its compliance with recognized usability principles (the "heuristics").

In general, heuristic evaluation is difficult for a single individual to do because one person will never be able to find all the usability problems in an interface. Luckily, experience from many different projects has shown that different people find different usability problems. Therefore, it is possible to improve the effectiveness of the method significantly by involving multiple evaluators. Figure 1 shows an example from a case study of heuristic evaluation where 19 evaluators were used to find 16 usability problems in a voice response system allowing customers access to their bank accounts (Nielsen 1992). Each of the black squares in Figure 1 indicates the finding of one of the usability problems by one of the evaluators. The figure clearly shows that there is a substantial amount of nonoverlap between the sets of usability problems found by different evaluators. It is certainly true that some usability problems are so easy to find that they are found by almost everybody, but there are also some problems that are found by very few evaluators. Furthermore, one cannot just identify the best evaluator and rely solely on that person's findings. First, it is not necessarily true that the same person will be the best evaluator every time. Second, some of the hardest-to-find usability problems (represented by the leftmost columns in Figure 1) are found by evaluators who do not otherwise find many usability problems. Therefore, it is necessary to involve multiple evaluators in any heuristic evaluation (see below for a discussion of the best number of evaluators). My recommendation is normally to use three to five evaluators since one does not gain that much additional information by using larger numbers.


Heuristic evaluation is performed by having each individual evaluator inspect the interface alone. Only after all evaluations have been completed are the evaluators allowed to communicate and have their findings aggregated. This procedure is important in order to ensure independent and unbiased evaluations from each evaluator. The results of the evaluation can be recorded either as written reports from each evaluator or by having the evaluators verbalize their comments to an observer as they go through the interface. Written reports have the advantage of presenting a formal record of the evaluation, but require an additional effort by the evaluators and the need to be read and aggregated by an evaluation manager. Using an observer adds to the overhead of each evaluation session, but reduces the workload on the evaluators. Also, the results of the evaluation are available fairly soon after the last evaluation session since the observer only needs to understand and organize one set of personal notes, not a set of reports written by others. Furthermore, the observer can assist the evaluators in operating the interface in case of problems, such as an unstable prototype, and help if the evaluators have limited domain expertise and need to have certain aspects of the interface explained.

In a user test situation, the observer (normally called the "experimenter") has the responsibility of interpreting the user's actions in order to infer how these actions are related to the usability issues in the design of the interface. This makes it possible to conduct user testing even if the users do not know anything about user interface design. In contrast, the responsibility for analyzing the user interface is placed with the evaluator in a heuristic evaluation session, so a possible observer only needs to record the evaluator's comments about the interface, but does not need to interpret the evaluator's actions.

Two further differences between heuristic evaluation sessions and traditional user testing are the willingness of the observer to answer questions from the evaluators during the session and the extent to which the evaluators can be provided with hints on using the interface. For traditional user testing, one normally wants to discover the mistakes users make when using the interface; the experimenters are therefore reluctant to provide more help than absolutely necessary. Also, users are requested to discover the answers to their questions by using the system rather than by having them answered by the experimenter. For the heuristic evaluation of a domain-specific application, it would be unreasonable to refuse to answer the evaluators' questions about the domain, especially if nondomain experts are serving as the evaluators. On the contrary, answering the evaluators' questions will enable them to better assess the usability of the user interface with respect to the characteristics of the domain. Similarly, when evaluators have problems using the interface, they can be given hints on how to proceed in order not to waste precious evaluation time struggling with the mechanics of the interface. It is important to note, however, that the evaluators should not be given help until they are clearly in trouble and have commented on the usability problem in question.

Typically, a heuristic evaluation session for an individual evaluator lasts one or two hours. Longer evaluation sessions might be necessary for larger or very complicated interfaces with a substantial number of dialogue elements, but it would be better to split up the evaluation into several smaller sessions, each concentrating on a part of the interface.

During the evaluation session, the evaluator goes through the interface several times and inspects the various dialogue elements and compares them with a list of recognized usability principles (the heuristics). These heuristics are general rules that seem to describe common properties of usable interfaces. In addition to the checklist of general heuristics to be considered for all dialogue elements, the evaluator obviously is also allowed to consider any additional usability principles or results that come to mind that may be relevant for any specific dialogue element. Furthermore, it is possible to develop category-specific heuristics that apply to a specific class of products as a supplement to the general heuristics. One way of building a supplementary list of category-specific heuristics is to perform competitive analysis and user testing of existing products in the given category and try to abstract principles to explain the usability problems that are found (Dykstra 1993).

In principle, the evaluators decide on their own how they want to proceed with evaluating the interface. A general recommendation would be that they go through the interface at least twice, however. The first pass would be intended to get a feel for the flow of the interaction and the general scope of the system. The second pass then allows the evaluator to focus on specific interface elements while knowing how they fit into the larger whole.

Since the evaluators are not using the system as such (to perform a real task), it is possible to perform heuristic evaluation of user interfaces that exist on paper only and have not yet been implemented (Nielsen 1990). This makes heuristic evaluation suited for use early in the usability engineering lifecycle.

If the system is intended as a walk-up-and-use interface for the general population or if the evaluators are domain experts, it will be possible to let the evaluators use the system without further assistance. If the system is domain-dependent and the evaluators are fairly naive with respect to the domain of the system, it will be necessary to assist the evaluators to enable them to use the interface. One approach that has been applied successfully is to supply the evaluators with a typical usage scenario , listing the various steps a user would take to perform a sample set of realistic tasks. Such a scenario should be constructed on the basis of a task analysis of the actual users and their work in order to be as representative as possible of the eventual use of the system.

The output from using the heuristic evaluation method is a list of usability problems in the interface with references to those usability principles that were violated by the design in each case in the opinion of the evaluator. It is not sufficient for evaluators to simply say that they do not like something; they should explain why they do not like it with reference to the heuristics or to other usability results. The evaluators should try to be as specific as possible and should list each usability problem separately. For example, if there are three things wrong with a certain dialogue element, all three should be listed with reference to the various usability principles that explain why each particular aspect of the interface element is a usability problem. There are two main reasons to note each problem separately: First, there is a risk of repeating some problematic aspect of a dialogue element, even if it were to be completely replaced with a new design, unless one is aware of all its problems. Second, it may not be possible to fix all usability problems in an interface element or to replace it with a new design, but it could still be possible to fix some of the problems if they are all known.
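The problem list described above can be captured in a simple record structure, one entry per problem. This is a hypothetical sketch (the field names are not prescribed by the method), showing how several separately listed problems on the same dialogue element de-duplicate when evaluators' reports are aggregated:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    """One usability problem noted by one evaluator."""
    evaluator: str    # who reported it
    element: str      # the dialogue element where the problem appears
    heuristic: str    # the usability principle the design violates
    description: str  # why this particular aspect is a problem

# Each problem is listed separately, even when several concern
# the same dialogue element:
findings = [
    Finding("E1", "login dialog", "Consistency and standards",
            "Inconsistent upper/lower case typography on buttons"),
    Finding("E1", "login dialog", "Error prevention",
            "No confirmation before discarding entered text"),
    Finding("E2", "login dialog", "Error prevention",
            "No confirmation before discarding entered text"),
]

# Aggregating across evaluators: distinct problems remain, keyed by
# (element, heuristic, description)
unique_problems = {(f.element, f.heuristic, f.description) for f in findings}
```

Three reports from two evaluators collapse to two distinct problems here, mirroring the overlap pattern discussed for Figure 1.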

Heuristic evaluation does not provide a systematic way to generate fixes to the usability problems or a way to assess the probable quality of any redesigns. However, because heuristic evaluation aims at explaining each observed usability problem with reference to established usability principles, it will often be fairly easy to generate a revised design according to the guidelines provided by the violated principle for good interactive systems. Also, many usability problems have fairly obvious fixes as soon as they have been identified.

For example, if the problem is that the user cannot copy information from one window to another, then the solution is obviously to include such a copy feature. Similarly, if the problem is the use of inconsistent typography in the form of upper/lower case formats and fonts, the solution is obviously to pick a single typographical format for the entire interface. Even for these simple examples, however, the designer has no information to help design the exact changes to the interface (e.g., how to enable the user to make the copies or on which of the two font formats to standardize).

One possibility for extending the heuristic evaluation method to provide some design advice is to conduct a debriefing session after the last evaluation session. The participants in the debriefing should include the evaluators, any observer used during the evaluation sessions, and representatives of the design team. The debriefing session would be conducted primarily in a brainstorming mode and would focus on discussions of possible redesigns to address the major usability problems and general problematic aspects of the design. A debriefing is also a good opportunity for discussing the positive aspects of the design, since heuristic evaluation does not otherwise address this important issue.

Heuristic evaluation is explicitly intended as a "discount usability engineering" method. Independent research (Jeffries et al. 1991) has indeed confirmed that heuristic evaluation is a very efficient usability engineering method. One of my case studies found a benefit-cost ratio for a heuristic evaluation project of 48: The cost of using the method was about $10,500 and the expected benefits were about $500,000 (Nielsen 1994). As a discount usability engineering method, heuristic evaluation is not guaranteed to provide "perfect" results or to find every last usability problem in an interface.


Determining the Number of Evaluators

In principle, individual evaluators can perform a heuristic evaluation of a user interface on their own, but the experience from several projects indicates that fairly poor results are achieved when relying on single evaluators. Averaged over six of my projects, single evaluators found only 35 percent of the usability problems in the interfaces. However, since different evaluators tend to find different problems, it is possible to achieve substantially better performance by aggregating the evaluations from several evaluators. Figure 2 shows the proportion of usability problems found as more and more evaluators are added. The figure clearly shows that there is a nice payoff from using more than one evaluator. It would seem reasonable to recommend the use of about five evaluators, but certainly at least three. The exact number of evaluators to use would depend on a cost-benefit analysis. More evaluators should obviously be used in cases where usability is critical or when large payoffs can be expected due to extensive or mission-critical use of a system.

Nielsen and Landauer (1993) present such a model based on the following prediction formula for the number of usability problems found in a heuristic evaluation:

ProblemsFound(i) = N(1 − (1 − l)^i)

where ProblemsFound(i) indicates the number of different usability problems found by aggregating reports from i independent evaluators, N indicates the total number of usability problems in the interface, and l indicates the proportion of all usability problems found by a single evaluator. In six case studies (Nielsen and Landauer 1993), the values of l ranged from 19 percent to 51 percent with a mean of 34 percent. The values of N ranged from 16 to 50 with a mean of 33. Using this formula results in curves very much like that shown in Figure 2, though the exact shape of the curve will vary with the values of the parameters N and l, which again will vary with the characteristics of the project.
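The prediction formula is a one-liner in code. A sketch, using the mean values from the six case studies (N = 33, l = 0.34) as defaults:

```python
def problems_found(i, N=33, l=0.34):
    """Expected number of distinct usability problems found by
    aggregating the reports of i independent evaluators
    (Nielsen and Landauer 1993). N is the total number of problems
    in the interface; l is the proportion found by one evaluator.
    Defaults are the means reported for the six case studies."""
    return N * (1 - (1 - l) ** i)
```

With these means, one evaluator finds about 11 of the 33 problems, five evaluators together find about 29, and the curve flattens from there, which is the shape shown in Figure 2.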

In order to determine the optimal number of evaluators, one needs a cost-benefit model of heuristic evaluation. The first element in such a model is an accounting for the cost of using the method, considering both fixed and variable costs. Fixed costs are those that need to be paid no matter how many evaluators are used; these include time to plan the evaluation, get the materials ready, and write up the report or otherwise communicate the results. Variable costs are those additional costs that accrue each time one additional evaluator is used; they include the loaded salary of that evaluator as well as the cost of analyzing the evaluator's report and the cost of any computer or other resources used during the evaluation session. Based on published values from several projects, the fixed cost of a heuristic evaluation is estimated to be between $3,700 and $4,800, and the variable cost of each evaluator is estimated to be between $410 and $900.

The actual fixed and variable costs will obviously vary from project to project and will depend on each company's cost structure and on the complexity of the interface being evaluated. For illustration, consider a sample project with fixed costs for heuristic evaluation of $4,000 and variable costs of $600 per evaluator. In this project, the cost of using heuristic evaluation with i evaluators is thus $(4,000 + 600 i ).

The benefits from heuristic evaluation are mainly due to the finding of usability problems, though some continuing education benefits may be realized to the extent that the evaluators increase their understanding of usability by comparing their own evaluation reports with those of other evaluators. For this sample project, assume that it is worth $15,000 to find each usability problem, using a value derived by Nielsen and Landauer (1993) from several published studies. For real projects, one would obviously need to estimate the value of finding usability problems based on the expected user population. For software to be used in-house, this value can be estimated based on the expected increase in user productivity; for software to be sold on the open market, it can be estimated based on the expected increase in sales due to higher user satisfaction or better review ratings. Note that real value only derives from those usability problems that are in fact fixed before the software ships. Since it is impossible to fix all usability problems, the value of each problem found is only some proportion of the value of a fixed problem.

Figure 3 shows the varying ratio of the benefits to the costs for various numbers of evaluators in the sample project. The curve shows that the optimal number of evaluators in this example is four, confirming the general observation that heuristic evaluation seems to work best with three to five evaluators. In the example, a heuristic evaluation with four evaluators would cost $6,400 and would find usability problems worth $395,000.
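The optimum can be reproduced with a short script. This is an illustration only: it plugs the mean values N = 33 and l = 0.34 into the prediction formula together with the sample project's cost figures, which are not necessarily the exact parameters behind Figure 3, so the dollar amounts come out slightly different from those quoted above:

```python
def problems_found(i, N=33, l=0.34):
    # Nielsen and Landauer (1993) prediction formula
    return N * (1 - (1 - l) ** i)

def benefit_cost_ratio(i, fixed=4000, variable=600, value_per_problem=15000):
    """Value of the problems found divided by the cost of running
    the heuristic evaluation with i evaluators (sample-project figures)."""
    benefit = value_per_problem * problems_found(i)
    cost = fixed + variable * i
    return benefit / cost

# The ratio rises, peaks, then falls as more evaluators are added.
best = max(range(1, 16), key=benefit_cost_ratio)
```

With these parameters the ratio peaks at four evaluators, in line with the general three-to-five recommendation.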

  • Dykstra, D. J. (1993). A Comparison of Heuristic Evaluation and Usability Testing: The Efficacy of a Domain-Specific Heuristic Checklist. Ph.D. diss., Department of Industrial Engineering, Texas A&M University, College Station, TX.
  • Jeffries, R., Miller, J. R., Wharton, C., and Uyeda, K. M. (1991). User interface evaluation in the real world: A comparison of four techniques. Proceedings ACM CHI'91 Conference (New Orleans, LA, April 28-May 2), 119-124.
  • Molich, R., and Nielsen, J. (1990). Improving a human-computer dialogue. Communications of the ACM 33, 3 (March), 338-348.
  • Nielsen, J. (1990). Paper versus computer implementations as mockup scenarios for heuristic evaluation. Proc. IFIP INTERACT'90 Third Intl. Conf. Human-Computer Interaction (Cambridge, U.K., August 27-31), 315-320.
  • Nielsen, J. (1992). Finding usability problems through heuristic evaluation. Proceedings ACM CHI'92 Conference (Monterey, CA, May 3-7), 373-380.
  • Nielsen, J. (1994). Heuristic evaluation. In Nielsen, J., and Mack, R. L. (Eds.), Usability Inspection Methods. John Wiley & Sons, New York, NY.
  • Nielsen, J., and Landauer, T. K. (1993). A mathematical model of the finding of usability problems. Proceedings ACM/IFIP INTERCHI'93 Conference (Amsterdam, The Netherlands, April 24-29), 206-213.
  • Nielsen, J., and Molich, R. (1990). Heuristic evaluation of user interfaces. Proc. ACM CHI'90 Conf. (Seattle, WA, April 1-5), 249-256.



Heuristic Analysis

Heuristic means "to discover"; heuristics are rules of thumb or methods that help us think through a problem and reach a solution by elimination, trial and error, and similar means. Heuristic Analysis is conducted by experts against a set of such rules; it is popularly used in user experience and user interface design to evaluate a website, portal, or app for its conformance to heuristic principles.

Quick details: Heuristic Analysis

Structure: Structured

Preparation: Subject for heuristic evaluation

Deliverables: Report, Recommendations

Popular Heuristic Principles

Below are a few examples of popular heuristics that act as guiding principles for designers across the world:

  • Jakob Nielsen’s Heuristics for User Interface Design
  • Ben Shneiderman’s Eight Golden Rules of Interface Design
  • Jill Gerhardt-Powals’ 10 Cognitive Engineering Principles
  • Christian Bastien and Dominique Scapin's 18 Ergonomic Criteria for the Evaluation of Human-Computer Interfaces
  • Bruce Tognazzini’s First principles of interaction design
  • Alan Cooper’s About face 2.0: The essentials of interaction design.

Jakob Nielsen’s Heuristics

The 10 Usability Heuristics for User Interface Design by Jakob Nielsen are the most widely accepted and used within the design community. They are called "heuristics" because they are broad rules of thumb and not specific usability guidelines.

The heuristics are as follows:

1. Visibility of system status

The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.

2. Match between system and the real world

The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.

3. User control and freedom

Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.

4. Consistency and standards

Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.

5. Error prevention

Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.

6. Recognition rather than recall

Minimize the user’s memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.

7. Flexibility and efficiency of use

Accelerators — unseen by the novice user — may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.

8. Aesthetic and minimalist design

Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.

9. Help users recognize, diagnose, and recover from errors

Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.

10. Help and documentation

Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user’s task, list concrete steps to be carried out, and not be too large.
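In practice, the ten headings above function as a literal checklist that each dialogue element is inspected against. A minimal sketch (the helper function and element names are hypothetical, not part of any published tooling):

```python
# The ten Nielsen heuristics, used as an inspection checklist.
NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

def evaluation_sheet(elements):
    """One checklist cell per (dialogue element, heuristic) pair."""
    return [(e, h) for e in elements for h in NIELSEN_HEURISTICS]

# An evaluator walks this grid during a session:
sheet = evaluation_sheet(["login form", "search results"])
```

Two dialogue elements checked against ten heuristics yield a grid of twenty cells; the evaluator records a finding wherever a cell reveals a violation.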

How to conduct Heuristic Analysis

Depending on which school of thought the designer subscribes to, compliance with the chosen heuristics can be determined and analyzed.

The first step in conducting heuristic analysis is to list the heuristics that results will be measured against. Once this is done, the researcher can recruit either domain experts or other user types (depending on the nature of the heuristics) to measure the performance of a system, an idea, a concept, or a prototype.

The same heuristics should be evaluated by different users to collect a wider spectrum of issues that may not be entirely clear to just one user. As heuristic analysis is mostly conducted to detect usability issues, users of different skill sets help in arriving at a holistic analysis of usability.

Advantages of Heuristic analysis

1. Online heuristics

With online tools to conduct heuristics, a large amount of data can be collected from a large sample size.

2. Detailed Analysis

When heuristic results are collected from a large number of users who have identified different issues relevant to their usability, the analysis is a lot more detailed covering different aspects that would have been neglected by fewer users.

Disadvantages of Heuristic analysis

1. Correct choice of heuristics

Without a correct choice of heuristics, the analysis obtained will not be accurate or relevant to the study.

2. Time-consuming

Heuristic analysis done by a large number of users can be time-consuming.

3. Cost increases with the number of users

As more expert users are recruited, the cost of recruitment rises accordingly.

Think Design's recommendation

Heuristic Analysis is a handy method that helps in analyzing a website or application through a structured and widely accepted framework. Heuristic Analysis doesn't provide all the answers, especially when we are seeking user experience insights. Also, complying with heuristics doesn't necessarily ensure a better experience.

It is important to understand that heuristics are a set of measures that speak to a website or application's usability and user interface design. Use Heuristic Analysis when you need objectivity and the intention is to analyze a product's usability through metrics that are less subjective and more widely accepted.

If your intention is to assess local, cultural, or experience-related context, Heuristic Analysis is not your go-to method.


Why do we take mental shortcuts?

What are heuristics?

Heuristics are mental shortcuts that can facilitate problem-solving and probability judgments. These strategies are generalizations, or rules of thumb, that reduce cognitive load. They can be effective for making immediate judgments; however, they often result in irrational or inaccurate conclusions.

Heuristics

Where this bias occurs


We use heuristics in all sorts of situations. One type of heuristic, the availability heuristic, often comes into play when we're attempting to judge the frequency with which a certain event occurs. Say, for example, someone asked you whether more tornadoes occur in Kansas or Nebraska. Most of us can easily call to mind an example of a tornado in Kansas: the tornado that whisked Dorothy Gale off to Oz in L. Frank Baum's The Wizard of Oz. Although it's fictional, this example comes to us easily. On the other hand, most people have a lot of trouble calling to mind an example of a tornado in Nebraska. This leads us to believe that tornadoes are more common in Kansas than in Nebraska. However, the two states actually report similar levels. 1

Individual effects


The thing about heuristics is that they aren't always wrong. As generalizations, there are many situations where they can yield accurate predictions or result in good decision-making. However, even if the outcome is favorable, it was not achieved through logical means. When we use heuristics, we risk ignoring important information and overvaluing what is less relevant. There's no guarantee that using heuristics will work out and, even if it does, we'll be making the decision for the wrong reason. Instead of being based on reason, our behavior results from a mental shortcut with no real rationale to support it.

Systemic effects

Heuristics become more concerning when applied to politics, academia, and economics. We may all resort to heuristics from time to time, something that is true even of members of important institutions who are tasked with making large, influential decisions. It is necessary for these figures to have a comprehensive understanding of the biases and heuristics that can affect our behavior, so as to promote accuracy on their part.

How it affects product

Heuristics can be useful in product design. Specifically, because heuristics are intuitive to us, they can be applied to create a more user-friendly experience, one that is more valuable to the customer. For example, color psychology describes how our experiences with different colors and color families can prime certain emotions or behaviors. Taking advantage of the representativeness heuristic, one could choose passive colors (blue or green) or more active colors (red, yellow, orange) depending on the goals of the application or product. 18 For example, if a developer is trying to evoke a feeling of calm in an app that provides guided meditations, they may choose light blues and greens as the program's primary colors. Colors like red and orange are more emotionally energizing and may be useful in settings like gyms or CrossFit programs.

By integrating heuristics into products, we can enhance the user experience. If an application, device, or item includes features that make it feel intuitive, easy to navigate, and familiar, customers will be more inclined to continue using it and recommend it to others. By appealing to those mental shortcuts, we can minimize the chances of user error or frustration with a product that is overly complicated.

Heuristics and AI

Artificial intelligence and machine learning tools already use the power of heuristics to inform their output. In a nutshell, simple AI tools operate based on a set of built-in rules and, sometimes, heuristics. These are encoded within the system, aiding in decision-making and the presentation of learning material. Heuristic algorithms can be used to solve advanced computational problems, providing efficient, approximate solutions. As in humans, the use of heuristics can result in error, and thus they must be used with caution. However, machine learning tools and AI can be useful in supporting human decision-making, especially when it is clouded by emotion, bias, or irrationality due to our own susceptibility to heuristics.

Why it happens

In their paper “Judgment Under Uncertainty: Heuristics and Biases” 2 , Daniel Kahneman and Amos Tversky identified three different kinds of heuristics: availability, representativeness, as well as anchoring and adjustment. Each type of heuristic is used for the purpose of reducing the mental effort needed to make a decision, but they occur in different contexts.

Availability heuristic

The availability heuristic, as defined by Kahneman and Tversky, is the mental shortcut used for making frequency or probability judgments based on “the ease with which instances or occurrences can be brought to mind”. 3 This was touched upon in the previous example, judging the frequency with which tornadoes occur in Kansas relative to Nebraska. 3

The availability heuristic occurs because certain memories come to mind more easily than others. In Kahneman and Tversky's example, participants were asked whether more words in the English language start with the letter K or have K as the third letter. Interestingly, most participants responded with the former when, in actuality, it is the latter that is true. The explanation is that it is much more difficult to think of words that have K as the third letter than it is to think of words that start with K. 4 In this case, words that begin with K are more readily available to us than words with K as the third letter.

Representativeness heuristic

Individuals tend to classify events into categories, which, as illustrated by Kahneman and Tversky, can result in our use of the representativeness heuristic. When we use this heuristic, we categorize events or objects based on how they relate to instances we are already familiar with.  Essentially, we have built our own categories, which we use to make predictions about novel situations or people. 5 For example, if someone we meet in one of our university lectures looks and acts like what we believe to be a stereotypical medical student, we may judge the probability that they are studying medicine as highly likely, even without any hard evidence to support that assumption.

The representativeness heuristic is associated with prototype theory, a prominent theory in cognitive science that explains object and identity recognition. 6 It suggests that we categorize different objects and identities in our memory. For example, we may have a category for chairs, a category for fish, a category for books, and so on. Prototype theory posits that we develop prototypical examples for these categories by averaging every example of a given category we encounter. As such, our prototype of a chair should be the most average example of a chair possible, based on our experience with that object. This process aids in object identification because we compare every object we encounter against the prototypes stored in our memory. The more the object resembles the prototype, the more confident we are that it belongs in that category.

Prototype theory may give rise to the representativeness heuristic: when a particular object or event is viewed as similar to the prototype stored in our memory, we classify the object or event into the category represented by that prototype. To go back to the previous example, if your peer closely resembles your prototypical example of a med student, you may place them into that category based on the prototype theory of object and identity recognition. In doing so, however, you are using the representativeness heuristic.

Anchoring and adjustment heuristic

Another heuristic put forth by Kahneman and Tversky in their initial paper is the anchoring and adjustment heuristic. 7 This heuristic describes how, when estimating a certain value, we tend to give an initial value and then adjust it upward or downward. However, we often get stuck on that initial value (referred to as anchoring), which results in our making insufficient adjustments. Thus, the adjusted value is biased in favor of the initial value we have anchored to.

In an example of the anchoring and adjustment heuristic, Kahneman and Tversky gave participants questions such as "estimate the number of African countries in the United Nations (UN)." A wheel labeled with numbers from 0-100 was spun, and participants were asked to say whether the number the wheel landed on was higher or lower than their answer to the question. Then, participants were asked to estimate the number of African countries in the UN, independent of the number they had spun. Regardless, Kahneman and Tversky found that participants tended to anchor onto the random number obtained by spinning the wheel: when the number obtained was 10, the median estimate given by participants was 25, while, when the number obtained was 65, participants' median estimate was 45. 8

A 2006 study by Epley and Gilovich, "The Anchoring and Adjustment Heuristic: Why the Adjustments are Insufficient," 9 investigated the causes of this heuristic. They illustrated that anchoring often occurs because the new information we anchor to is more accessible than other information. Furthermore, they provided empirical evidence that our adjustments tend to be insufficient because they require significant mental effort, which we are not always motivated to dedicate to the task. They also found that providing incentives for accuracy led participants to make more adequate adjustments. So, this particular heuristic generally occurs when there is no real incentive to provide an accurate response.

Quick and easy

Though different in their explanations, these three types of heuristics allow us to respond automatically without much effortful thought. They provide an immediate response and do not use up much of our mental energy, which allows us to dedicate mental resources to other matters that may be more pressing. In that way, heuristics are efficient, which is a big reason why we continue to use them. That being said, we should be mindful of how much we rely on them because there is no guarantee of their accuracy.

Why it is important

As illustrated by Tversky and Kahneman, using heuristics can cause us to engage in various cognitive biases and commit certain fallacies. 10 As a result, we may make poor decisions, as well as inaccurate judgments and predictions. Awareness of heuristics can aid us in avoiding them, which will ultimately lead us to engage in more adaptive behaviors.

How to avoid it


Heuristics arise from automatic System 1 thinking. It is a common misconception that errors in judgment can be avoided by relying exclusively on System 2 thinking. However, as pointed out by Kahneman, neither System 1 nor System 2 is infallible. 11 While System 1 can lead us to rely on heuristics, producing certain biases, System 2 can give rise to other biases, such as the confirmation bias. 12 In truth, Systems 1 and 2 complement each other, and using them together can lead to more rational decision-making. That is, we shouldn't make judgments automatically, without a second thought, but we also shouldn't overthink things to the point where we're looking for specific evidence to support our stance. Thus, heuristics can be avoided by making judgments more effortfully, but in doing so, we should attempt not to overanalyze the situation.

How it all started

The first three heuristics – availability, representativeness, and anchoring and adjustment – were identified by Tversky and Kahneman in their 1974 paper, "Judgment Under Uncertainty: Heuristics and Biases". 13 In addition to presenting these heuristics and their relevant experiments, they listed the respective biases each can lead to.

For instance, upon defining the availability heuristic, they demonstrated how it may lead to illusory correlation , which is the erroneous belief that two events frequently co-occur. Kahneman and Tversky made the connection by illustrating how the availability heuristic can cause us to over- or under-estimate the frequency with which certain events occur. This may result in drawing correlations between variables when in reality there are none.  

Kahneman and Tversky also discussed how the illusion of validity, our tendency to overestimate our accuracy when making probability judgments, is facilitated by the representativeness heuristic. The more representative an object or event is, the more confident we feel in predicting certain outcomes. The illusion of validity, as it works with the representativeness heuristic, can be demonstrated by our assumptions about others based on past experiences. If you have only ever had good experiences with people from Canada, you will be inclined to judge most Canadians as pleasant. In reality, your small sample size cannot account for the whole population. Representativeness is not the only factor in determining the probability of an outcome or event, meaning we should not be so confident in our predictive abilities.

Example 1 – Advertising

Those in the field of advertising should have a working understanding of heuristics as consumers often rely on these shortcuts when making decisions about purchases. One heuristic that frequently comes into play in the realm of advertising is the scarcity heuristic . When assessing the value of something, we often fall back on this heuristic, leading us to believe that the rarity or exclusiveness of an object contributes to its value.

A 2011 study by Praveen Aggarwal, Sung Yul Jun, and Jong Ho Huh evaluated the impact of “scarcity messages” on consumer behavior. They found that both “limited quantity” and “limited time” advertisements influence consumers’ intentions to purchase, but “limited quantity” messages are more effective. This explains why people get so excited over the one-day-only Black Friday sales, and why the countdowns of units available on home shopping television frequently lead to impulse buys. 14

Knowledge of the scarcity heuristic can help businesses thrive, as “limited quantity” messages make potential consumers competitive and increase their intentions to purchase. 15 This marketing technique can be a useful tool for bolstering sales and bringing attention to your business.

Example 2 – Stereotypes

One of the downfalls of heuristics is that they have the potential to lead to stereotyping, which is often harmful. Kahneman and Tversky illustrated how the representativeness heuristic might result in the propagation of stereotypes. The researchers presented participants with a personality sketch of a fictional man named Steve, followed by a list of possible occupations. Participants were tasked with ranking the likelihood of each occupation being Steve's. Since the personality sketch described Steve as shy, helpful, introverted, and organized, participants tended to indicate that it was probable he was a librarian. 16 In this particular case the stereotype is less harmful than many others; however, it accurately illustrates the link between heuristics and stereotypes.

Published in 1989, Patricia Devine’s paper “Stereotypes and Prejudice: Their Automatic and Controlled Components” illustrates how, even among people who are low in prejudice, rejecting stereotypes requires a certain level of motivation and cognitive capacity. 17 We typically use heuristics in order to avoid exerting too much mental energy, specifically when we are not sufficiently motivated to dedicate mental resources to the task at hand. Thus, when we lack the mental capacity to make a judgment or decision effortfully, we may rely upon automatic heuristic responses and, in doing so, risk propagating stereotypes.

Stereotypes are an example of how heuristics can go wrong. Broad generalizations do not always apply, and their continued use can have serious consequences. This underscores the importance of effortful judgment and decision-making, as opposed to automatic.

Heuristics are mental shortcuts that allow us to make quick judgment calls based on generalizations or rules of thumb.

Heuristics, in general, occur because they are efficient ways of responding when we are faced with problems or decisions. They come about automatically, allowing us to allocate our mental energy elsewhere. Specific heuristics occur in different contexts; the availability heuristic happens because we remember certain memories better than others, the representativeness heuristic can be explained by prototype theory, and the anchoring and adjustment heuristic occurs due to lack of incentive to put in the effort required for sufficient adjustment.

The scarcity heuristic, which refers to how we value items more when they are limited, can be used to the advantage of businesses looking to increase sales. Research has shown that advertising objects as “limited quantity” increases consumers' competitiveness and their intentions to buy the item.

While heuristics can be useful, we should exert caution, as they are generalizations that may lead us to propagate stereotypes ranging from inaccurate to harmful.

Putting more effort into decision-making instead of making decisions automatically can help us avoid heuristics. Doing so requires more mental resources, but it will lead to more rational choices.

Related TDL articles

What are heuristics?

This interview with The Decision Lab’s Managing Director Sekoul Krastev delves into the history of heuristics, their applications in the real world, and their consequences, both positive and negative.

10 Decision-Making Errors that Hold Us Back at Work

In this article, Dr. Melina Moleskis examines the common decision-making errors that occur in the workplace. From taking in feedback provided by customers to cracking the problems of on-the-fly decision-making, Dr. Moleskis delivers workable solutions that anyone can implement.

  • Gilovich, T., Keltner, D., Chen, S., & Nisbett, R. (2015). Social Psychology (4th edition). W.W. Norton and Co. Inc.
  • Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.
  • Mervis, C. B., & Rosch, E. (1981). Categorization of natural objects. Annual Review of Psychology, 32(1), 89–115. https://doi.org/10.1146/annurev.ps.32.020181.000513
  • Epley, N., & Gilovich, T. (2006). The anchoring-and-adjustment heuristic. Psychological Science, 17(4), 311–318.
  • System 1 and System 2 Thinking. The Marketing Society. https://www.marketingsociety.com/think-piece/system-1-and-system-2-thinking
  • Aggarwal, P., Jun, S. Y., & Huh, J. H. (2011). Scarcity messages. Journal of Advertising, 40(3), 19–30.
  • Devine, P. G. (1989). Stereotypes and prejudice: Their automatic and controlled components. Journal of Personality and Social Psychology, 56(1), 5–18. https://doi.org/10.1037/0022-3514.56.1.5
  • Kuo, L., Chang, T., & Lai, C.-C. (2022). Research on product design modeling image and color psychological test. Displays, 71, 102108. https://doi.org/10.1016/j.displa.2021.102108

About the Authors


Dan is a Co-Founder and Managing Director at The Decision Lab. He is a bestselling author of Intention - a book he wrote with Wiley on the mindful application of behavioral science in organizations. Dan has a background in organizational decision making, with a BComm in Decision & Information Systems from McGill University. He has worked on enterprise-level behavioral architecture at TD Securities and BMO Capital Markets, where he advised management on the implementation of systems processing billions of dollars per week. Driven by an appetite for the latest in technology, Dan created a course on business intelligence and lectured at McGill University, and has applied behavioral science to topics such as augmented and virtual reality.


Dr. Sekoul Krastev

Sekoul is a Co-Founder and Managing Director at The Decision Lab. He is a bestselling author of Intention - a book he wrote with Wiley on the mindful application of behavioral science in organizations. A decision scientist with a PhD in Decision Neuroscience from McGill University, Sekoul's work has been featured in peer-reviewed journals and has been presented at conferences around the world. Sekoul previously advised management on innovation and engagement strategy at The Boston Consulting Group as well as on online media strategy at Google. He has a deep interest in the applications of behavioral science to new technology and has published on these topics in places such as the Huffington Post and Strategy & Business.


Heuristic Methods: The Definition, Use Case, and Relevance for Enterprises

What is it?

Heuristic methods are problem-solving strategies that use practical, common-sense principles to address complex issues. These methods are often used in artificial intelligence to help machines make decisions and solve problems in a more human-like manner. Heuristic methods allow AI to reason and learn from past experiences, making them more efficient and effective in a variety of tasks.

In business, heuristic methods can be incredibly valuable for decision-making and problem-solving. They allow AI systems to quickly evaluate large amounts of data and make informed choices, which can lead to more optimized processes and better outcomes. For example, in marketing, AI can use heuristic methods to analyze customer behavior and make personalized recommendations. In finance, heuristic methods can be used to identify patterns and trends in market data. Overall, heuristic methods help businesses leverage the power of AI to make smarter decisions and achieve better results.

How does it work?

Heuristic methods are rules or guidelines that AI uses to make decisions, rather than following a strict set of instructions. For example, you can think of heuristic methods like a set of general guidelines that a coach might give to a sports team. Instead of telling the players exactly what to do in every situation, the coach gives them some general guidelines to follow based on their experience and knowledge of the game.

In the same way, AI uses heuristic methods to make decisions based on its experience and knowledge of the data it has been given. This allows AI to be flexible and adapt to new situations, rather than being limited to a rigid set of instructions.

For example, let’s say you run a retail business and you want to use AI to predict which products are likely to sell well in the coming months. You could use heuristic methods to analyze past sales data, customer behavior, and market trends to come up with a set of general rules that the AI can use to make predictions. This allows the AI to make informed decisions without needing to be explicitly told what to do in every situation.
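The retail example above can be sketched as a tiny rule-based scorer. The rules, field names, and thresholds below are invented for illustration; a real system would derive them from historical sales data rather than hard-code them:

```python
# A minimal rule-based sketch of the retail example. All rules and
# thresholds are illustrative assumptions, not derived from real data.

def predict_demand(product):
    """Score a product's expected demand using simple heuristic rules."""
    score = 0
    # Rule of thumb 1: items that sold well recently tend to keep selling.
    if product["recent_sales"] > 100:
        score += 2
    # Rule of thumb 2: seasonal items spike during their season.
    if product["in_season"]:
        score += 2
    # Rule of thumb 3: heavy discounting usually lifts volume.
    if product["discount"] >= 0.2:
        score += 1
    return "high" if score >= 3 else "low"

print(predict_demand({"recent_sales": 150, "in_season": True, "discount": 0.1}))  # high
```

Because the rules are general guidelines rather than exact instructions, the same scorer can be applied to products it has never seen, which is exactly the flexibility the coach analogy describes.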


Benefits:

  • Heuristic methods can provide quick, approximate solutions to complex problems, making them useful in decision-making and problem-solving scenarios.
  • They are relatively easy to implement and can be applied in a wide range of domains, from computer science to psychology to economics.
  • Heuristic methods can help in generating new ideas and insights by providing a different perspective on a problem.

Drawbacks:

  • They are not always guaranteed to produce the best solution, and their results may be suboptimal in some cases.
  • Heuristic methods can be biased and may lead to errors or inaccuracies, especially when dealing with incomplete or uncertain information.
  • There is a risk of over-reliance on heuristic methods, which may limit the exploration of alternative problem-solving approaches.
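The trade-off in the list above, speed versus optimality, is easy to see with the classic nearest-neighbour heuristic for building a travelling-salesman tour: always visit the closest unvisited point next. The coordinates here are made up for illustration:

```python
import math

# Nearest-neighbour tour construction: fast and simple, but with no
# guarantee of finding the shortest possible tour.

def tour_length(points, order):
    """Total length of the closed tour visiting points in the given order."""
    return sum(
        math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def nearest_neighbour(points, start=0):
    """Greedily visit the closest unvisited point until all are visited."""
    unvisited = set(range(len(points))) - {start}
    order = [start]
    while unvisited:
        last = points[order[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

points = [(0, 0), (0, 1), (2, 0), (2, 1), (1, 3)]
order = nearest_neighbour(points)
print(order, round(tour_length(points, order), 2))  # [0, 1, 3, 2, 4] 10.32
```

The greedy choice at each step takes linear time per city, so the whole tour is built quickly; the price is that early "obviously good" moves can force long jumps later, which is the suboptimality the bullet list warns about.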

Applications and Examples

Heuristic methods are commonly used in artificial intelligence to solve complex problems or make decisions based on limited information. For example, in route planning apps, heuristic methods may be used to find the most efficient route by considering factors such as traffic conditions, distance, and historical travel data.

Another example of heuristic methods in artificial intelligence is in medical diagnosis systems. These systems may use heuristic methods to determine the likelihood of a certain disease based on a patient’s symptoms, medical history, and demographic information.

In both of these examples, heuristic methods in artificial intelligence allow for more efficient and accurate decision-making when faced with complex and uncertain situations.
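The route-planning case can be sketched with A* search, a standard algorithm that uses a distance heuristic to guide exploration toward the goal. Below is a toy grid version using Manhattan distance as the heuristic; the grid itself is an invented example:

```python
import heapq

# A* on a small 0/1 grid (1 = wall). Manhattan distance to the goal is an
# admissible heuristic here, so the returned path length is optimal.

def astar(grid, start, goal):
    """Return the length of the shortest path from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # heuristic
    open_set = [(h(start), 0, start)]  # (estimated total cost, cost so far, node)
    best = {start: 0}
    while open_set:
        _, g, node = heapq.heappop(open_set)
        if node == goal:
            return g
        if g > best.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
print(astar(grid, (0, 0), (2, 0)))  # 6
```

The heuristic never overestimates the true remaining distance, so A* expands far fewer cells than a blind search while still finding the shortest route, mirroring how navigation apps prune implausible detours.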

History and Evolution

The term "heuristic" derives from the Greek heuriskein, meaning "to find" or "to discover." It was popularized as a problem-solving concept by the mathematician George Pólya in his 1945 book How to Solve It, and brought into psychology and decision science by Herbert Simon, whose work on bounded rationality described how people rely on practical, intuitive strategies that are efficient even if not always guaranteed to produce the most optimal solution. Heuristic methods were initially used in cognitive psychology to explain how individuals make decisions in complex or uncertain situations, based on their intuition and experience.

Over time, the term "heuristic methods" has become a key concept in the field of artificial intelligence. In AI, heuristic methods refer to algorithms that use rules of thumb, trial and error, or domain-specific knowledge to guide the search for solutions to complex problems. The use of heuristic methods has evolved into a crucial component of various AI applications, such as heuristic search algorithms, heuristic evaluation techniques, and heuristic optimization methods. The term's application within AI has expanded to encompass a wide range of problem-solving approaches that prioritize efficiency and speed over optimality.

Heuristic methods in artificial intelligence are crucial for businesses to understand and utilize.

These methods involve using practical and experience-based techniques to solve complex problems, making them essential for decision-making and problem-solving processes within a business. By leveraging heuristic methods, businesses can improve their operational efficiency, gain insights into customer behavior, and enhance their overall decision-making processes.

Additionally, understanding heuristic methods in AI can assist businesses in developing more effective strategies for marketing, customer engagement, and product development. With the ability to analyze large volumes of data and identify patterns, heuristic methods allow businesses to make more informed and strategic decisions, ultimately leading to improved performance and competitiveness in the market.

Overall, the importance of heuristic methods in AI cannot be overstated, as they hold the potential to revolutionize the way businesses operate and compete in the modern business landscape.


Case Studies: Using Heuristics

  • First Online: 14 April 2017


  • Val Lowndes,
  • Ovidiu Bagdasar &
  • Stuart Berry

Part of the book series: Simulation Foundations, Methods and Applications (SFMA)


The aim of these case studies is to demonstrate how large and complex decision-making problems can be “solved” using heuristic methods.
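As a small illustration of the kind of heuristic "solving" such case studies describe, here is a greedy value-density rule for the 0/1 knapsack problem, a classic large decision problem. The items are invented; note that the greedy answer (value 160) is not the true optimum (220, from items b and c), which is exactly the quoted "solved":

```python
# Greedy value-density heuristic for the 0/1 knapsack problem:
# take items in decreasing value-per-unit-weight order while they fit.
# Fast and usually good, but not guaranteed optimal.

def greedy_knapsack(items, capacity):
    """items: list of (name, value, weight). Returns (chosen names, total value)."""
    chosen, value, remaining = [], 0, capacity
    for name, v, w in sorted(items, key=lambda it: it[1] / it[2], reverse=True):
        if w <= remaining:
            chosen.append(name)
            value += v
            remaining -= w
    return chosen, value

items = [("a", 60, 10), ("b", 100, 20), ("c", 120, 30)]
print(greedy_knapsack(items, 50))  # (['a', 'b'], 160)
```

An exact solver would need to examine exponentially many item subsets; the greedy rule makes a single sorted pass, which is why heuristics like this scale to the large problems these case studies target even though the answer may be merely good rather than best.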



Authors and Affiliations

University of Derby, Kedleston Road, Derby, DE22 1GB, UK

Val Lowndes

College of Engineering and Technology, University of Derby, Kedleston Road, Derby, DE22 1GB, UK

Ovidiu Bagdasar & Stuart Berry


Corresponding author

Correspondence to Stuart Berry .

Editor information

Editors and Affiliations

Department of Computing and Mathematics, College of Engineering and Technology, University of Derby, Derby, United Kingdom

Stuart Berry

University of Derby, Derby, United Kingdom

Marcello Trovati


Copyright information

© 2017 Springer International Publishing AG

About this chapter

Lowndes, V., Bagdasar, O., Berry, S. (2017). Case Studies: Using Heuristics. In: Berry, S., Lowndes, V., Trovati, M. (eds) Guide to Computational Modelling for Decision Processes. Simulation Foundations, Methods and Applications. Springer, Cham. https://doi.org/10.1007/978-3-319-55417-4_6

DOI: https://doi.org/10.1007/978-3-319-55417-4_6

Published: 14 April 2017

Publisher Name: Springer, Cham

Print ISBN: 978-3-319-55416-7

Online ISBN: 978-3-319-55417-4

eBook Packages: Computer Science (R0)


COMMENTS

  1. Heuristics In Psychology: Definition & Examples

    Psychologists refer to these efficient problem-solving techniques as heuristics. A heuristic in psychology is a mental shortcut or rule of thumb that simplifies decision-making and problem-solving. Heuristics often speed up the process of finding a satisfactory solution, but they can also lead to cognitive biases.

  2. Heuristic evaluation: Definition, case study, template

    The main goal of the heuristic evaluation method is to assess the usability of an interface against a set of principles grounded in UX best practices. From the problems identified, practical recommendations can be made, improving the user experience.

  3. Redefining Case Study

    We also propose a more precise and encompassing definition that reconciles various definitions of case study research: case study is a transparadigmatic and transdisciplinary heuristic that involves the careful delineation of the phenomena for which evidence is being collected (event, concept, program, process, etc.).

  4. Heuristics: Definition, Examples, and How They Work

    Effort reduction: People use heuristics as a type of cognitive laziness to reduce the mental effort required to make choices and decisions. Fast and frugal: People use heuristics because they can be fast and correct in certain contexts. Some theories argue that heuristics are actually more accurate than they are biased.

  5. Redefining Case Study

    Our proposed definition of case study—case study is a transparadigmatic and transdisciplinary heuristic that involves the careful delineation of the phenomena for which evidence is being collected (event, concept, program, process etc.)—is concomitant with Flyvbjerg's suggestions on what case study offers society.

  6. Heuristics and biases: The science of decision-making

    "Heuristic" comes from a Greek word meaning "to discover". It is an approach to problem-solving that takes one's personal experience into account. Heuristics provide strategies to scrutinize a limited number of signals and/or alternative choices in decision-making, diminishing the work of retrieving and storing information in memory.

  7. What Is the Affect Heuristic?

    The affect heuristic is a possible explanation for a range of purchase decisions, such as buying insurance. Example: in a study examining how people's feelings affect their willingness to buy insurance, participants were presented with two scenarios regarding an antique clock.

  8. A brief history of heuristics: how did research on heuristics evolve

    The representativeness heuristic is applied when individuals assess the probability that an object belongs to a particular class or category based on how much it resembles the typical case.

  9. To state that a case study is a heuristic means that: a. it represents

    To state that a case study is a heuristic means that: (a) it represents a strategy employed by scholars; (b) it relies on participant knowledge; (c) it shares something new about the phenomenon; (d) it supports a priori hypotheses.

  10. Heuristics Overview, Types & Examples

    Some heuristics, for example, rely on decision-making strategies based on past memories. But memories can be problematic because the information they hold may be limited, incomplete, or wrong.

  11. Heuristic

    Gigerenzer & Gaissmaier (2011) state that sub-sets of strategy include heuristics, regression analysis, and Bayesian inference. [14] A heuristic is a strategy that ignores part of the information, with the goal of making decisions more quickly, frugally, and/or accurately than more complex methods (Gigerenzer and Gaissmaier [2011], p. 454; see also Todd et al. [2012], p. 7).

  12. The Use of Heuristics in Decision Making Under Risk and ...

    The recognition heuristic (another lexicographic heuristic) is one of the most-researched heuristics. Its formal rule states that, given a definite number of alternatives, all recognized alternatives are ranked higher on the criterion than the unrecognized ones (Goldstein & Gigerenzer, 2002).

  13. Heuristics

    Heuristic evaluation is a method that helps identify usability problems in the user interface (UI) of digital products such as software, mobile applications, and websites. It is especially beneficial when done in the early stages of design.

  14. The Theory Behind Heuristic Evaluations, by Jakob Nielsen

    Independent research (Jeffries et al. 1991) has indeed confirmed that heuristic evaluation is a very efficient usability engineering method. One of my case studies found a benefit-cost ratio for a heuristic evaluation project of 48: The cost of using the method was about $10,500 and the expected benefits were about $500,000 (Nielsen 1994).

  15. Heuristics and Evidences Decision (HeED) Making: a Case Study in a

    Studies often compare heuristics-based and evidence-based decision-making approaches; however, the two are inseparable and are applied in parallel. The objective of this paper is to provide a qualitative analysis of a systems-thinking framework that defines a transition path from either a heuristic-dominated or an evidence-dominated style of decision-making.

  16. Heuristic Analysis in User Research

    "Heuristics" is synonymous with rules or methods; the word means "to discover". Heuristics help one think through problems to reach a solution by elimination, trial and error, and similar means. Heuristic analysis is conducted by experts based on the rules of heuristics and is popularly used in user experience research.

  17. Heuristics

    Heuristics are mental shortcuts that can facilitate problem-solving and probability judgments. These strategies are generalizations, or rules of thumb, that reduce cognitive load. They can be effective for making immediate judgments; however, they often result in irrational or inaccurate conclusions.

  18. Heuristic Methods: The Definition, Use Case, and Relevance for

    Over time, the term "heuristic methods" has become a key concept in the field of artificial intelligence. In AI, heuristic methods refer to algorithms that use rules of thumb, trial and error, or domain-specific knowledge to guide the search for solutions to complex problems.

  19. 10 Heuristic Evaluation with real life examples

    Here are 10 commonly used heuristics in heuristic evaluation, along with real-life examples. 1. Visibility of system status: this heuristic emphasizes keeping users informed about what is going on through appropriate, timely feedback.

  20. Workshop DUXAIT: Conducting Efficient Heuristic Evaluations

    These studies served as a base from which the authors proposed software that helps UX practitioners conduct heuristic evaluations, and other types of usability evaluations, more efficiently. The software, named DUXAIT-NG, supports various evaluations and reduces the time needed to process the final results, even with a large group of participants.

  21. Case Studies: Using Heuristics

    This case study aims to show how the mathematical programming model for this problem is used to develop, and validate, a heuristic means of deriving optimal solutions. The initial models consider a case with a single product, where all work is carried out during normal working hours.
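The recognition heuristic described above (entry 12) has a rule simple enough to sketch in code: recognized alternatives are ranked above unrecognized ones on the criterion. The sketch below is a minimal illustration of that rule only; the city names and the `known` set are hypothetical example data, not drawn from any of the studies cited.

```python
def rank_by_recognition(alternatives, recognized):
    """Rank all recognized alternatives above all unrecognized ones,
    preserving the original order within each group (a stable partition)."""
    hits = [a for a in alternatives if a in recognized]
    misses = [a for a in alternatives if a not in recognized]
    return hits + misses

# Hypothetical example: which cities does the decision maker recognize?
cities = ["Detroit", "Osnabrueck", "Milwaukee", "Bielefeld"]
known = {"Detroit", "Milwaukee"}

print(rank_by_recognition(cities, known))
# → ['Detroit', 'Milwaukee', 'Osnabrueck', 'Bielefeld']
```

Under the heuristic, the decision maker then infers that the recognized alternatives score higher on the criterion (e.g., city population) than the unrecognized ones, ignoring all other available information.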