Making Decisions - Risk, Uncertainty and Rationality

obyvatel

From some of the topics in this cognitive science board - like "You Are Not So Smart" and "Thinking, Fast and Slow" - we have been acquainted with the topic of cognitive illusions and how our minds are fooled by certain problems and situations. Psychologist Gerd Gigerenzer has a different view from Kahneman's regarding cognitive illusions. What he has to say about the nature of gut feelings - how they sometimes work and when they don't - goes to the heart of how we humans make decisions. Decision making touches every aspect of our lives, so the importance of this topic needs no emphasis. The aim of starting this thread is to have an interactive discussion and deepen our collective understanding of this topic. Gigerenzer's "Risk Savvy: How to Make Good Decisions" and "Gut Feelings: The Intelligence of the Unconscious" are a couple of books that I intend to use as source material.

Let us start with risk and probability (now don't panic if you are not a math whiz). Kahneman showed examples where the mind is fooled by probabilities and mostly left it at that, making us feel somewhat stupid and helpless in the face of "cognitive illusions". Gigerenzer shows a method which works quite well for getting unfooled and which does not require knowledge of Bayes' theorem and base rates. To illustrate this with an example, here is a commonly encountered situation in healthcare:

Mammograms are said to be effective in the early detection of breast cancer. Suppose the prevalence of breast cancer among the women being screened is 10%. A mammogram has a 90% chance of correctly detecting cancer when it is present. It also has a false positive rate of 9% - that is, it falsely indicates cancer in 9% of the women who do not actually have it. Someone goes for a mammogram and gets a positive result. What is the chance that she has cancer?

A commonly given answer to this question is 90% - several doctors and healthcare professionals give this answer according to Gigerenzer's research. This answer is wrong. Those who have read "Thinking, Fast and Slow" may vaguely recall Kahneman's advice to take "base rates" into account. But unless one remembers Bayes' rule, it is not readily apparent how to solve this problem. Gigerenzer says that the difficulty lies in how the information is presented. The human mind is far more effective at dealing with natural frequencies and counting, and can make sense of the information when it is provided that way instead of as raw probabilities.

Take a starting number of 1000 women. Of them, 10% - that is, 100 - are likely to have breast cancer, and 90 of those would be correctly identified by the mammogram. Of the 900 women without cancer, 9% - that is, about 81 - would show a false positive mammogram. So the chance of actually having cancer given a positive mammogram is

90 / (90 + 81) ≈ 0.53, or about 53%.
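For anyone who wants to check the arithmetic, here is a minimal Python sketch using the made-up numbers from the example above. It shows that counting natural frequencies and grinding through Bayes' theorem give the same answer:

[code]
# Assumed numbers from the example above:
# prevalence 10%, hit rate 90%, false positive rate 9%.
prevalence = 0.10       # P(cancer)
hit_rate = 0.90         # P(positive test | cancer)
false_positive = 0.09   # P(positive test | no cancer)

# Natural frequency version: start with 1000 women and count.
n = 1000
with_cancer = n * prevalence                        # 100 women
true_positives = with_cancer * hit_rate             # 90 of them test positive
without_cancer = n - with_cancer                    # 900 women
false_positives = without_cancer * false_positive   # about 81 of them test positive

by_counting = true_positives / (true_positives + false_positives)
print(f"P(cancer | positive) by counting: {by_counting:.2f}")    # about 0.53

# Bayes' theorem gives the same number directly from the probabilities.
by_bayes = (hit_rate * prevalence) / (
    hit_rate * prevalence + false_positive * (1 - prevalence))
print(f"P(cancer | positive) by Bayes' rule: {by_bayes:.2f}")    # about 0.53
[/code]

The counting version is the one our minds handle easily; the Bayes version is included only to show that the two agree.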



This kind of information is useful to know since it helps put risks in perspective, and based on Gigerenzer's research, the doctors and healthcare professionals advising us are often unable to interpret these numbers correctly themselves. The social cost of a positive mammogram is not that significant - but that of a positive HIV test is, and people have been known to commit suicide on learning they tested positive.

Moving on, here is something to think about. This example is from a real-life game show, but we can take the understanding gained from it to much more relevant real-life situations.

Consider a game where there are 3 closed doors. Behind 2 of the 3 doors there are goats. Behind one door is a grand prize - a Cadillac. A participant chooses one of the 3 closed doors. The game show host, who knows which door hides what, opens a different door to display a goat. Now the participant is given a final choice: stick with what he picked initially or switch. What should he do? In this scenario, does switching his choice increase his chances of winning the grand prize (assume he likes Cadillacs more than goats)?

This is a tricky problem. Think for some time before reading on for the answer.


In this situation, the participant has a better chance of winning the prize if he switches his original choice. It seems counter-intuitive - at least it did to me; I thought the chances of winning were equal whether he switched or not. Gigerenzer uses a similar strategy, natural frequencies, to approach the problem. Consider 3 participants, each choosing one of three doors 1, 2 and 3. Door 2 has the Cadillac, while 1 and 3 have goats. For the participant choosing door 1, the show host would open door 3, and vice versa. For the participant choosing 2, the show host can open either 1 or 3. Now, for both initial choices of 1 and 3, switching to 2 when given the option would win the prize. Only for the initial choice of 2 - which was the correct choice - would switching to 1 or 3 lose the prize. Since two of our three participants stand to gain from switching, we can conclude that in this situation a participant increases his chance of winning if he switches his choice.
This problem is known as the Monty Hall problem.
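If the door-counting argument still feels slippery, a quick simulation makes it concrete. Here is a minimal Python sketch assuming the classic rules - the host always opens a goat door and always offers the switch:

[code]
import random

def play(switch: bool) -> bool:
    """Play one round of the classic Monty Hall game; return True if the car is won."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
for switch in (False, True):
    wins = sum(play(switch) for _ in range(trials))
    print(f"switch={switch}: win rate {wins / trials:.3f}")
# Expected output: roughly 0.333 without switching, roughly 0.667 with switching.
[/code]

The simulated win rates come out near 1/3 for staying and 2/3 for switching, matching the three-participant argument above.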

This is the type of problem that can be "solved" using logic. Now consider the case where the game show host does not act in the same way in every situation - and he may even change the parameters.

[quote author=Risk Savvy]
The Monty Hall problem posed by Marilyn and others before her involves a world of risk, not uncertainty. Probability theory provides the best answer only when the rules of the game are certain, when all alternatives, consequences, and probabilities are known or can be calculated.

Here is my question: Is switching also the best choice on the real game show?

The crucial issue is whether Monty always offered his guests the chance to switch. For instance, if Monty were mischievous by nature, he would make the offer only if contestants picked the door with the grand prize behind it. Switching would then always lead to the goat and NBC could keep the big prize for the next show.
[/quote]

The distinction drawn in the quote above has implications for decision making in real life: knowing whether a specific situation belongs to the "world of risk", where a lot is known, or to the "world of uncertainty", where less is known, calls for different strategies. In the world of uncertainty, simple rules of thumb can often outperform complex solutions - as Gigerenzer shows with multiple examples pertaining to the world of finance (stock picking, for example) as well as to other more common, everyday experiences.

Coming back to the game show example

[quote author=Risk Savvy]
Is the best decision under risk also the best one on the real show? As Monty himself explained, it can be the worst. After one contestant picked door 1, Monty opened door 3, revealing a goat. While the contestant thought about switching to door 2, Monty pulled out a roll of bills and offered $3,000 in cash not to switch.

“I’ll switch to it,” insisted the contestant.

“Three thousand dollars,” Monty Hall repeated, “Cash. Cash money. It could be a car, but it could be a goat. Four thousand.”

The contestant resisted the temptation. “I’ll try the door.”

“Forty-five hundred. Forty-seven. Forty-eight. My last offer: Five thousand dollars.”

“Let’s open the door.” The contestant again rejected the offer.

“You just ended up with a goat,” Monty Hall said, opening the door. And he explained: “Now do you see what happened there? The higher I got, the more you thought that the car was behind door 2. I wanted to con you into switching there, because I knew the car was behind 1. That’s the kind of thing I can do when I’m in control of the game.”

In the real game, probability theory is not enough. Good intuitions are needed, which can be more challenging than calculations. One way to reduce uncertainty is to rely on rules of thumb. For instance, the “minimax rule” says:

Choose the alternative that avoids the worst outcome.

Ending up with a goat and foregoing the money is the worst possible outcome. That can only happen if the contestant switches. For that reason, the rule advises taking the money and sticking with door 1. It’s called “minimax” because it aims at minimizing your losses if the maximum loss scenario happens (here, opening the door with a goat). This simple rule would have cut straight through Monty’s psychological game and got the contestant the money— and the car to boot.

Intuitive rules are not foolproof, but neither are calculations. A second way to reduce uncertainty is to guess Monty’s motivation, which is harder, particularly when nervously standing in the spotlight before TV cameras. It requires putting oneself into his mind. Monty appears to have offered the switch because he knew the contestant had chosen the winning door, and then offered money not to switch in order to insinuate that the car was behind the other door. This psychological reflection leads you to stick with your door, the same choice you would make when using the minimax rule.

[/quote]
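To make the minimax rule concrete, here is a minimal sketch of the contestant's two alternatives. The dollar values assigned to the car and the goat are invented for illustration; only the structure of the rule matters:

[code]
# Minimax: for each alternative, look at its worst possible outcome, then choose
# the alternative whose worst case is least bad. Payoff numbers are illustrative.
CAR, GOAT, CASH = 20_000, 0, 5_000   # assumed values; the final cash offer was $5,000

alternatives = {
    # Each alternative lists its possible outcomes; the contestant cannot know
    # which one will actually occur.
    "stick with door 1 and take the cash": [CAR + CASH, GOAT + CASH],
    "switch to door 2 and forgo the cash": [CAR, GOAT],
}

best = max(alternatives, key=lambda alt: min(alternatives[alt]))
print(best)   # -> stick with door 1 and take the cash
[/code]

Sticking and taking the money has the better worst case (a goat plus $5,000, versus a goat and nothing), which is exactly the reasoning in the quote.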

Finding simple rules which work well in specific environmental conditions relates to the pursuit of ecological rationality. Rather than focusing on the human mind and logic alone - as Kahneman and others have done in their treatment of cognitive illusions - ecological rationality involves looking at rules of thumb, or heuristics, and how they fit both the human mind and the environment it is operating in. The advantage of heuristics is that they can apply to real-world problems whose natural complexity makes an "optimal" (meaning the best possible) solution often unknown or computationally intractable.

Our unconscious mind uses heuristics. We make mistakes when a heuristic which is supposed to be applied in one environment or situation is misapplied in another situation. Identifying and bringing such rules to consciousness, choosing the right rules from the toolbox and applying them judiciously in keeping with the situation at hand is the goal of ecological rationality.
We are constantly faced with the challenge of making decisions with incomplete and often scarce information. To navigate this challenge successfully, we need not only to gather inputs from different sources but also to learn what to treat as data and what to treat as irrelevant to the context (noise). This requires cooperation between rigorous analytical skills and intuitive gut feelings; when they work together, we get higher degrees of ecological rationality. The quality of gut feelings can be improved - and if Gigerenzer is on the right track, then understanding the rules or heuristics (for pattern recognition activities) which intuition uses can help in this regard.
 
Thanks, obyvatel, very interesting topic.

I especially liked the mammography example - I was thinking 90% as well. The problem is that we are not used to thinking "statistically", as Kahneman wrote in his book, the best example being the "librarian stereotype". I find it very hard to think these things through; my System 1 has long since jumped to a conclusion and drowned out the process.

As to the heuristic process, it seems to me to be easier in a way, because it cannot be worked out with logic; it requires (as you have written) intuition and multiple inputs. And this is probably also the way we can improve our heuristic process - through the network. A bit like in computational neural networks, where increasing the number of nodes increases the quality of the output - up to a certain point.

Might need to put this book on my reading list, problem is, the way my list is growing, I might not be reading it before 2017.

:D

Thanks again!
 
obyvatel said:
The aim of starting this thread is to have an interactive discussion and deepen our collective understanding of this topic (Making Decisions - Risk, Uncertainty and Rationality).

Seeing as how you also use the phrase "ecological rationality", I presume the presence of additional aims like enhancing rationality and determining efficient outcomes. Is that fair to say? I'm just wondering because I'm familiar with these authors and many of their works and was looking for a way into the discussion, so to speak.

obyvatel said:
Let us start with risk and probability (now don't panic if you are not a math whiz). Kahneman showed examples where the mind is fooled by probabilities and mostly left it at that, making us feel somewhat stupid and helpless in the face of "cognitive illusions". Gigerenzer shows a method which works quite well for getting unfooled and which does not require knowledge of Bayes' theorem and base rates.

BTW, there are even simpler ways to explain and demonstrate Gerd's ideas for 'getting unfooled'. More on that later, if interested.

Also, I've found that even when people get their "aha!" moments here, the Monty Hall problem still doesn't click for many people for some reason. You can actually run through the 6 possible games in your head, though, and see how switching is advantageous and will win the Cadillac for the majority of the games. As for me, I thought, "Well, I've got 1 out of 3 chances for having chosen the Cadillac on the first try and 2 out of 3 chances for being wrong on the first try. I'll go with that and switch, since I'm not that lucky." :D
 
Buddy said:
obyvatel said:
The aim of starting this thread is to have an interactive discussion and deepen our collective understanding of this topic (Making Decisions - Risk, Uncertainty and Rationality).

Seeing as how you also use the phrase "ecological rationality", I presume the presence of additional aims like enhancing rationality and determining efficient outcomes. Is that fair to say? I'm just wondering because I'm familiar with these authors and many of their works and was looking for a way into the discussion, so to speak.

Hi Buddy,
Yes. What I found useful in Gigerenzer's work is the investigation and discovery of simple rules tailored to the specific environment in which someone is trying to make decisions. For many of us, over-thinking and getting lost in complexity is common. This results in a loss of energy, as well as in decisions which could be improved upon for our own benefit and that of others. Feel free to add your inputs.

Continuing on.

There is a dog I see in a park who unerringly chases down a frisbee thrown in the air by his owner. He runs behind it and times his jump exquisitely to catch the frisbee in mid-air. Every time I see this I am impressed. The dog is not a trained circus dog. How does he sense where the frisbee is going in the presence of unpredictable air currents and effortlessly follow its complex trajectory? Similar examples exist in other areas of our experience.

One view is that the unconscious mind performs a series of very complex calculations - the dog knows calculus and more but is not consciously aware of it. Gigerenzer provides another explanation - the gaze heuristic. The dog runs after the frisbee in a way that the angle of gaze remains constant. The dog runs adjusting his speed and direction such that the image of the frisbee moves in a straight line at a constant speed. Baseball players use the same heuristic to chase down balls. This skill can be taught easily.

[quote author=Gut Feelings]
The gaze heuristic exemplifies how a complex problem that no robot could match a human in solving—catching a ball in real time—can be easily mastered. It ignores all causal information relevant to computing the ball’s trajectory and only attends to one piece of information, the angle of gaze. Its rationale is myopic, relying on incremental changes, rather than on the ideal of first computing the best solution and thereafter acting on it.
[/quote]
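To see why holding the angle constant is enough, here is a toy illustration in Python - not Gigerenzer's code, and every number in it is invented. A catcher who simply stays wherever the descending ball sits at one fixed angle of gaze is pulled, step by step, to exactly the spot where the ball comes down, without ever computing the trajectory:

[code]
import math

g = 9.81
vx = 10.0                        # horizontal speed of the ball (m/s), assumed
start_height = 12.0              # height at which the ball starts to descend (m), assumed
gaze_angle = math.radians(25)    # the fixed angle of gaze the catcher maintains, assumed

t, dt = 0.0, 0.01
bx, by = 0.0, start_height
while by > 0:
    t += dt
    bx = vx * t                              # ball drifts sideways...
    by = start_height - 0.5 * g * t * t      # ...while falling under gravity
    # The spot from which the ball appears at the chosen gaze angle:
    # stand behind the ball by height / tan(angle).
    cx = bx + max(by, 0.0) / math.tan(gaze_angle)

print(f"ball lands near x = {bx:.1f} m; the constant-angle catcher is at x = {cx:.1f} m")
[/code]

As the ball's height shrinks to zero, the offset height/tan(angle) shrinks with it, so the catcher's position converges on the landing spot. In practice the dog or the fielder achieves the same thing by speeding up or slowing down whenever the gaze angle starts to drift.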

Let us go from dogs to stock-picking skills. The stock market is an uncertain environment. Recall from the previous post the distinction between an environment where risk can be calculated and an uncertain environment where it cannot. Harry Markowitz won the Nobel Prize in economics in 1990 for his work on optimal asset allocation using a complex mathematical formula. However, when his own investing was studied, it was found that he did not use his Nobel Prize-winning formula but instead used a simple heuristic of fair sharing:

Allocate your money equally to each of N funds.

This is known as the 1/N rule.

[quote author=Gut Feelings]
A recent study compared a dozen optimal asset allocation policies, including that of Markowitz, with the 1/N rule in seven allocation problems. The funds were mostly portfolios of stocks. One problem consisted of allocating one’s money to the ten portfolios tracking the sectors comprising the Standard & Poor’s 500 index, and another one to ten American industry portfolios. Not a single one of the optimal theories could outperform the simple 1/N rule, which typically made higher gains than the complex policies did.
[/quote]
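Here is a rough, self-contained sketch of why this can happen. It is not the study quoted above - every number in it is invented - but it shows the mechanism: when the covariance matrix has to be estimated from a short, noisy history, the "optimal" minimum-variance portfolio built on those estimates tends to be no steadier out of sample than simply splitting money equally.

[code]
import numpy as np

rng = np.random.default_rng(42)
n_assets, n_train, n_test = 10, 60, 120   # 60 months of history, 120 months out of sample

# In truth all assets here behave alike, so the genuinely optimal allocation is 1/N.
returns = rng.normal(0.005, 0.05, size=(n_train + n_test, n_assets))
train, test = returns[:n_train], returns[n_train:]

# "Optimal" minimum-variance weights estimated from the training window:
# w proportional to inv(Cov) * 1, normalized to sum to one.
cov = np.cov(train, rowvar=False)
w_opt = np.linalg.solve(cov, np.ones(n_assets))
w_opt /= w_opt.sum()

w_equal = np.full(n_assets, 1.0 / n_assets)   # the 1/N rule

for name, w in [("estimated minimum-variance", w_opt), ("1/N rule", w_equal)]:
    out_of_sample = test @ w
    print(f"{name:>26}: out-of-sample volatility {out_of_sample.std():.4f}")
[/code]

The complex policy is only as good as its estimates, and in an uncertain environment the estimation error tends to eat up whatever advantage the optimization was supposed to deliver.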

Using the 1/N rule still involves choosing N (how many different funds) and which particular funds/stocks to pick. In another experiment, the stock picks of financial industry experts were pitted against those of 100 Berlin pedestrians, 50 men and 50 women, who chose stocks based on the recognition heuristic:

If one recognizes the name, it has higher value.

The 1/N strategy, where the N stocks were picked simply on the basis of name recognition by ignorant pedestrians who knew little about the stock market, consistently outperformed portfolios picked by finance experts under different market conditions.

Before proceeding further, it is worthwhile to think about why these strategies worked and where they would fail. The recognition heuristic implies limited knowledge - if one is intimately familiar with all or most of the stocks among which one is making the choice, then the heuristic no longer works to one's advantage.

When American and German students were asked "Which of the two cities has a larger population: Detroit or Milwaukee?", American students did worse than Germans. Most German students had only heard of Detroit and picked it, which happened to be the right answer. Americans had heard of both cities, so they could not use the recognition heuristic. And of course, it can be shown how the recognition heuristic would go wrong: pair a small but famous city with a large but obscure one, or change the question from "which has a higher population" to "which is further from the sea". In the latter situation, recognizing the name of a city is not much help in finding the answer.
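The heuristic itself is almost trivially simple to state in code - the interesting question is when it applies. In this sketch, the "recognized" sets are made-up stand-ins for what a given person happens to have heard of; they are not data:

[code]
recognized_by_german_student = {"Detroit"}                  # has heard of only one city
recognized_by_american_student = {"Detroit", "Milwaukee"}   # has heard of both

def pick_larger(city_a, city_b, recognized):
    """Recognition heuristic: if exactly one city is recognized, infer it is larger."""
    known = [c for c in (city_a, city_b) if c in recognized]
    if len(known) == 1:
        return known[0]   # the heuristic applies
    return None           # both or neither recognized: the heuristic is silent

print(pick_larger("Detroit", "Milwaukee", recognized_by_german_student))    # Detroit
print(pick_larger("Detroit", "Milwaukee", recognized_by_american_student))  # None
[/code]

When the heuristic is silent, one has to fall back on other knowledge - which is exactly the position the American students were in.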

[quote author=Gut Feelings]

Environmental structures are the key to how well or poorly a rule of thumb works. For instance, the recognition heuristic takes advantage of situations where name recognition matches the quality of products or the size of cities. A gut feeling is not good or bad, rational or irrational per se. Its value is dependent on the context in which the rule of thumb is used.
[/quote]

This is what Benjamin Franklin told his nephew about how to pick a life partner.

If you doubt, set down all the Reasons, pro and con, in opposite Columns on a Sheet of Paper, and when you have considered them two or three Days, perform an Operation similar to that in some questions of Algebra; observe what Reasons or Motives in each Column are equal in weight, one to one, one to two, two to three, or the like, and when you have struck out from both Sides all the Equalities, you will see in which column remains the Balance. . . . This kind of Moral Algebra I have often practiced in important and dubious Concerns, and tho’ it cannot be mathematically exact, I have found it extremely useful. By the way, if you do not learn it, I apprehend you will never be married.

Franklin's method forms the foundation of the rational decision-making process, especially in economics, where the goal is to maximize a utility function. It works well where one has a lot of relevant information and uncertainty is low. Gut feelings, on the contrary, work very differently. One good reason is often enough to sway the decision, ignoring all other information.

[quote author=Gut Feelings]
Can following your gut feelings lead to some of the best decisions? It seems naive, even ludicrous, to think so. For decades, books on rational decision making, as well as consulting firms, have preached “look before you leap” and “analyze before you act.” Pay attention. Be reflective, deliberate, and analytic. Survey all alternatives, list all pros and cons, and carefully weigh their utilities by their probabilities, preferably with the aid of a fancy statistical software package. Yet this scheme does not describe how actual people— including the authors of these books— reason. A professor from Columbia University was struggling over whether to accept an offer from a rival university or to stay. His colleague took him aside and said, “Just maximize your expected utility— you always write about doing this.” Exasperated, the professor responded, “Come on, this is serious.”
[/quote]

Seems like highly learned people like Markowitz and this professor did not really trust what they publicly professed when making decisions in their own personal life.

We will look at some more heuristic rules that are shown to work well in real life situations next.
 
obyvatel said:
From some of the topics in this cognitive science board - like "You Are Not So Smart" and "Thinking, Fast and Slow" - we have been acquainted with the topic of cognitive illusions and how our minds are fooled by certain problems and situations. Psychologist Gerd Gigerenzer has a different view from Kahneman's regarding cognitive illusions. What he has to say about the nature of gut feelings - how they sometimes work and when they don't - goes to the heart of how we humans make decisions. Decision making touches every aspect of our lives, so the importance of this topic needs no emphasis. The aim of starting this thread is to have an interactive discussion and deepen our collective understanding of this topic. Gigerenzer's "Risk Savvy: How to Make Good Decisions" and "Gut Feelings: The Intelligence of the Unconscious" are a couple of books that I intend to use as source material.

Let us start with risk and probability (now don't panic if you are not a math whiz). Kahneman showed examples where the mind is fooled by probabilities and mostly left it at that, making us feel somewhat stupid and helpless in the face of "cognitive illusions". Gigerenzer shows a method which works quite well for getting unfooled...

And if I may elaborate on this line of thought, we might add Gerd's distinctions concerning information and information formats and cognitive algorithms. These relationships are important in the overall ecology of rationality, I think.

The following info might be a little long and boring. I'm including it so that when interested readers broaden their understanding of this topic and come upon mentions of opposing views regarding cognitive illusions they can refer back to this post for some background info. Those in a hurry may skip down to the tl;dr below.

First, a little information to show an existing conflict between researchers prominent in this field (the information for this post comes from How to Improve Bayesian Reasoning Without Instruction by Gerd Gigerenzer and Ulrich Hoffrage).

During the Enlightenment, classical probabilists believed the mind easily does Bayesian inference. Condorcet, Poisson, and Laplace equated probability theory with the common sense of educated people, who were known then as "hommes éclairés."

Laplace (1814/1951) declared that "the theory of probability is at bottom nothing more than good sense reduced to a calculus which evaluates that which good minds know by a sort of instinct, without being able to explain how with precision".

Ward Edwards and his colleagues (Edwards, 1968; Phillips & Edwards, 1966; and earlier, Rouanet, 1961) were the first to test experimentally whether human inference follows Bayes' theorem. Edwards concluded that inferences, although "conservative," were usually proportional to those calculated from Bayes' theorem.

However, not everyone agreed.

Kahneman and Tversky (1972, p. 450), arrived at the opposite conclusion: "In his evaluation of evidence, man is apparently not a conservative Bayesian: he is not Bayesian at all."

In the 1970s and 1980s, proponents of their "heuristics-and-biases" program concluded that people systematically neglect base rates in Bayesian inference problems.

"The genuineness, the robustness, and the generality of the base-rate fallacy are matters of established fact." (Bar-Hillel, 1980, p. 215) Bayes' theorem, like Bernoulli’s theorem, was no longer thought to describe the workings of the mind. But passion and desire were no longer blamed as the causes of the disturbances. The new claim was stronger. The discrepancies were taken as tentative evidence that "people do not appear to follow the calculus of chance or the statistical theory of prediction" (Kahneman & Tversky, 1973, p. 237).

It was proposed that as a result of "limited information-processing abilities" (Lichtenstein, Fischhoff, & Phillips, 1982, p. 333), people are doomed to compute the probability of an event by crude, nonstatistical rules such as the "representativeness heuristic." Blunter still, the paleontologist Stephen J. Gould summarized what has become the common wisdom in and beyond psychology: "Tversky and Kahneman argue, correctly I think, that our minds are not built (for whatever reason) to work by the rules of probability." (Gould, 1992, p. 469)

So, it seems that some researchers would have us believe the mind is predisposed against Bayesian inference. Previous research on base rate neglect tends to suggest that our mind lacks appropriate cognitive algorithms, therefore our reasoning processes are believed to be under the influence of many cognitive illusions.

This is a less than full picture of the situation, however. Furthermore, any claim against the existence of cognitive algorithms, whether Bayesian or otherwise, must be evaluated within the information format for which the algorithm was designed to operate.

To summarize, here is the problem:

There are contradictory claims as to whether people naturally reason according to Bayesian inference. The two extremes are represented by the Enlightenment probabilists and by proponents of the heuristics-and-biases program. Their conflict cannot be resolved by finding further examples of good or bad reasoning; text problems generating one or the other can always be designed.

Some have proposed a compromise and suggest that maybe the mind does reasoning with a little of both, Bayesian algorithms and quick-and-dirty inference. While this proposal avoids the polarization of views, it makes no headway on the theoretical front.

The solution to this dilemma is currently thought to lie in a realization that both views are based on an incomplete analysis: they focus on cognitive processes, Bayesian or otherwise, without making the connection between what we will call a cognitive algorithm and an information format. This is important to understand, and Richard Feynman made the point in a more general form in The Character of Physical Law (1967), where he placed great emphasis on the importance of deriving different formulations of the same physical law, even if they are mathematically equivalent.

Back to Gerd's distinctions mentioned earlier...

Gerd and company agree with Feynman's understanding, and the assertion that mathematically equivalent representations can make a difference to human understanding is the key to their analysis of intuitive Bayesian inference.

Remember, the two basic information formats are:
1) Standard Probability Format and
2) frequency, or Natural Sampling of Frequencies.

We can show that Bayesian algorithms are computationally simpler in frequency formats, and that frequency formats are natural for humans as well as other animals. In fact, "Bayesian algorithms" and "Bayesian inference" are sometimes both used to refer to natural cognitive functions we share with the animal kingdom, sometimes also called inductive inference. In more formal probability contexts, however, we normally find the phrase Natural Sampling of Frequencies.

Some background support:

Evolutionary theory asserts that the design of the mind and its environment evolve in tandem. Assume that humans have evolved cognitive algorithms that can perform statistical inferences. These algorithms, however, would not be tuned to probabilities or percentages as input format. For what information format were these algorithms designed?

We assume that as humans evolved, the "natural" format was frequencies as actually experienced in a series of events, rather than probabilities or percentages (Cosmides & Tooby, in press; Gigerenzer, 1991b, 1993a).

From animals to neural networks, systems seem to learn about contingencies through sequential encoding and updating of event frequencies (Brunswik, 1939; Gallistel, 1990; Hume, 1739/1951; Shanks, 1991). For instance, research on foraging behavior indicates that bumblebees, ducks, rats, and ants behave as if they were good intuitive statisticians, highly sensitive to changes in frequency distributions in their environments (Gallistel, 1990; Real, 1991; Real & Caraco, 1986).

Similarly, research on frequency processing in humans indicates that humans, too, are sensitive to frequencies of various kinds, including frequencies of words, single letters, and letter pairs (e.g., Barsalou & Ross, 1986; Hasher & Zacks, 1979; Hintzman, 1976; Sedlmeier, Hertwig, & Gigerenzer, 1995).

The sequential acquisition of information by updating event frequencies without artificially fixing the marginal frequencies (e.g., of disease and no-disease cases) is what we refer to as natural sampling (Kleiter, 1994). Brunswik’s (1955) "representative sampling" is a special case of natural sampling.

In contrast, in experimental research the marginal frequencies are typically fixed a priori. For instance, an experimenter may want to investigate 100 people with disease and a control group of 100 people without disease. This kind of sampling with fixed marginal frequencies is not what we refer to as natural sampling.

The evolutionary argument that cognitive algorithms were designed for frequency information, acquired through natural sampling, has implications for the computations an organism needs to perform when making Bayesian inferences.

tl;dr:

Here is the question to be answered: Assume an organism acquires information about the structure of its environment by the natural sampling of frequencies. What computations would the organism need to perform to draw inferences the Bayesian way?

At this point, the reader may wish to refer to the attached image of the Natural Sampling Tree.

Imagine an old, experienced physician in an illiterate society. She has no books or statistical surveys and therefore must rely solely on her experience. Her people have been afflicted by a previously unknown and severe disease. Fortunately, the physician has discovered a symptom that signals the disease, although not with certainty. In her lifetime, she has seen 1,000 people, 10 of whom had the disease. Of those 10, 8 showed the symptom; of the 990 not afflicted, 95 did. Now a new patient appears. He has the symptom.

What is the probability that he actually has the disease?

The physician in the illiterate society does not need a pocket calculator to estimate the Bayesian posterior. All she needs is the number of cases that had both the symptom and the disease (here, 8) and the number of symptom cases (here, 8 + 95). So, go ahead and sum the 8 plus 95 and you get 103. All you need now is to put 8 on top of a "divided by" sign and 103 on the bottom then do the simple division. You will get .078. The same answer you will get if you use a much more complicated probability formula. This example shows that the physician does not need to keep track of the base rate of the disease.
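For completeness, the same check in code, using only the counts from the story (10 diseased out of 1,000, 8 of whom show the symptom; 95 of the 990 healthy also show it):

[code]
# Natural-frequency shortcut: hits / (hits + false alarms).
hits, false_alarms = 8, 95
print(hits / (hits + false_alarms))            # about 0.078

# The probability form of Bayes' rule gives exactly the same posterior.
p_disease = 10 / 1000
p_symptom_given_disease = 8 / 10
p_symptom_given_healthy = 95 / 990
posterior = (p_symptom_given_disease * p_disease) / (
    p_symptom_given_disease * p_disease
    + p_symptom_given_healthy * (1 - p_disease))
print(posterior)                               # the same, about 0.078
[/code]

Two numbers and one division, versus three conditional probabilities and a formula - the answer is identical either way.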

In fact, humans and other organisms do not need to keep track of the whole natural sampling tree, but only of the two pieces of information contained in the bold circles. These are the hit and false alarm frequencies (not to be confused with hit and false alarm rates).

Further, this clinical example, and others cited in the paper referenced, illustrates that the standard probability format is a convention rather than a necessity. Clinical studies often collect data that have the structure of frequency trees, as in the attached image. Such information can always be represented in frequencies as well as probabilities.

These converging results do have implications for how to understand so-called cognitive illusions. Were cognitive illusions due to the mind’s inherent inability to reason statistically, or if they were simply the result of wishful thinking or other motivational deficits, then a frequency format should make no difference. The evidence so far, however, suggests that frequency format can make quite a difference.

Thanks for reading.
 

Attachment: natural sampling tree.PNG
We saw earlier that neither the Nobel Prize-winning economist nor the professor of decision theory used their own complex "rational" formulas when actually making decisions in their lives. So how do people make decisions, and what general principles can be abstracted from them?

Take the simple example of shopping for a pair of trousers. How does one go about it? It is very simple in certain parts of the world. In the US, where the "deal culture" is prevalent, there are people who spend an inordinate amount of time and energy keeping track of the best possible deals and striking the best possible bargain in terms of quality and price. This is the strategy of "maximizing", where one wants the best and would be mortified to settle for anything less. Do such maximizing strategies lead to happiness? If one tries to maximize, then after all the effort, when one does settle on "the" deal, there is a nagging feeling: "did I miss something?" And if a "friend" were to say "you paid that much for those trousers - I paid only this at the store on the other side of town", well, any tiny bit of satisfaction at having the new trousers heads straight out of the window, and with grim-faced determination the maximizer vows not to be outdone in this department again.

The above can seem like an exaggeration, but it is not too far from reality, imo. So what are the alternative approaches? One is called "satisficing". Unlike the maximizer, one who is "satisficing" settles for the good enough instead of going for the best. One sets an aspiration level, and the first product which meets the aspiration level is selected. A satisficer, for example, could set a budget, choose a brand known for quality and reliability using the recognition heuristic, find a match and stop there. If he cannot use the recognition heuristic, he could ask a suitable salesperson - not what they would recommend, but "what would you buy if you were in my place?" He could also imitate someone he knows and trusts to have a good sense of clothes. In using these strategies, he would perhaps spend less time and energy in making the decision and would be happier with the end result than the maximizer. A satisficer, in the end, is satisfying an internal goal - the good enough - which is within his control to meet; a maximizer is trying to satisfy an external goal which depends on uncertain external conditions beyond his control, and so has less chance of succeeding in his goal of getting the "best". This basic dynamic can be taken and tailored to various situations in life.
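As a minimal sketch of the difference in stopping rules (the products and scores below are invented for illustration):

[code]
options = [
    {"name": "brand A", "price": 45, "quality": 7},
    {"name": "brand B", "price": 60, "quality": 9},
    {"name": "brand C", "price": 40, "quality": 6},
    {"name": "brand D", "price": 55, "quality": 8},
]

def satisfice(options, budget, min_quality):
    """Take the first option that meets the aspiration level, then stop looking."""
    for option in options:
        if option["price"] <= budget and option["quality"] >= min_quality:
            return option
    return None

def maximize(options):
    """Examine every option and insist on the best quality per dollar."""
    return max(options, key=lambda o: o["quality"] / o["price"])

print(satisfice(options, budget=50, min_quality=6))   # stops at the first acceptable option
print(maximize(options))                               # searches all four before deciding
[/code]

The satisficer's aspiration level (the budget and the minimum quality) is under his own control; the maximizer's standard - "the best deal out there" - is not.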

Imitation was mentioned as one of the possible strategies. In general, where does imitation stand more chance of succeeding and where does it fail? Gigerenzer writes

[quote author=Gut Feelings]
The success of imitation depends again on the structure of the environment. Structural features that can make imitation adaptive include

- a relatively stable environment,
- lack of feedback, and
- dangerous consequences for mistakes.
[/quote]

Imitation works better in relatively stable environments where past successful strategies followed by others are likely to provide good results. If the environment is changing rapidly, then previously successful strategies are no longer in sync with the environment and imitation would not work.

In situations where feedback about our actions is not readily available - we cannot easily know whether what we are doing is good or bad in the long term - imitating successful people can be a useful strategy. Child rearing is one such example. Rapid feedback from the environment is conducive to individual learning by trial and error, and in such cases imitation may not be the strategy of choice.

Imitation is a fundamental instinct in humans. We do it often and do not realize we are doing it. It is easier to see in others. Knowing when and whom to imitate in appropriate situations makes the process more conscious and deliberate. It is a powerful tool in our adaptive toolbox when used judiciously.
 
obyvatel said:
[quote author=Gut Feelings]
Can following your gut feelings lead to some of the best decisions? It seems naive, even ludicrous, to think so. For decades, books on rational decision making, as well as consulting firms, have preached “look before you leap” and “analyze before you act.” Pay attention. Be reflective, deliberate, and analytic. Survey all alternatives, list all pros and cons, and carefully weigh their utilities by their probabilities, preferably with the aid of a fancy statistical software package. Yet this scheme does not describe how actual people— including the authors of these books— reason. A professor from Columbia University was struggling over whether to accept an offer from a rival university or to stay. His colleague took him aside and said, “Just maximize your expected utility— you always write about doing this.” Exasperated, the professor responded, “Come on, this is serious.”

Seems like highly learned people like Markowitz and this professor did not really trust what they publicly professed when making decisions in their own personal life.[/quote]

This is probably the most disturbing part of this topic. Why would these people write one thing and do another? I suppose there must be some prestige associated with 'Decision Science' such that people who connect themselves with it can siphon off some respect, fame and money for themselves just by putting their thoughts in a linear sequence that matches the accepted logic structures and mapping out this idealized thinking process for others.

Of course, there is a time and place for rationality as defined that way, but as Gerd and others point out, only "Laplace's Demon" has all the time in the world to gather maximum information, correlate and integrate, weigh, measure, sum and whatever else is needed to arrive at the perfect conclusion before acting.

In contrast, Gerd's work focuses on the need to act, and to do so with limited information, and on discovering that one can make a choice or decision that rivals in accuracy any conclusion formulated the long way.

I think fear also plays a part, either preventing some people from trusting their gut feelings or distorting them; otherwise, what is the point of political correctness or the desire to avoid malpractice suits, etc.?

There is a story that illustrates the possible tragic consequences of trying to force a decision-making process that privileges information over a process that privileges action intended to save a life. You've probably read it in Gerd's book.

I here quote from an article Nathaniel A. Rivers once wrote in defense of Gut Feelings in his elaboration of Carolyn R. Miller’s critique of decision science (I don't have a url reference at the moment).

A 21 month old boy is admitted to a hospital: he is underweight, he isn’t eating, he has constant ear infections, and he is withdrawn. One of his guardians is not in the picture, and the other “sometimes missed feeding him altogether” (Gigerenzer 20). The doctor is not comfortable ordering the invasive testing required to diagnose the boy. The doctor instead works to provide a supportive environment and encourages the boy to eat, which the boy does, and his condition improves.

The doctor, however, has supervisors who discourage this unconventional, intuitive effort. They demand detailed information about the boy’s condition, which necessitates a battery of tests: “CT scans, barium swallow, numerous biopsies and cultures of blood, six lumbar punctures, ultrasounds, and dozens of other clinical tests” (21). The tests reveal nothing, but under such treatment the boy stops eating again. “If he dies without a diagnosis, then we have failed,” the thinking went (21).

The young boy dies before yet another scheduled test. An autopsy is performed to “find the hidden cause” (21). Nothing is found. One doctor remarks, “Why, at one time he had three IV drips going at once! He was spared no test to find out what was really going on. He died in spite of everything we did!”

The unspoken irony here might be humorous if it were not deeply troubling and tragic.

The imperative to diagnose and the desire to formalize the attending doctor’s intuition into a set of procedures designed to expose the best solution to the problem resulted in the death of a child who simply was not eating.

Rather than deliberate upon the value of all the diagnostic tests, the doctor’s supervisors and the specialists they employed enacted a deadly form of decision science.

Like you've already said in different words, obyvatel, gut feelings and decision science are not opposites; they can be combined for maximum effectiveness, yet decision science seeks to rid decision making of gut feelings and the values that Gerd's Gut Feelings work holds forth as paramount.

As Rivers states it,

Conceptually, gut feelings make salient the values that operate within any method of deciding.
 
Buddy said:
obyvatel said:
Seems like highly learned people like Markowitz and this professor did not really trust what they publicly professed when making decisions in their own personal life.

This is probably the most disturbing part of this topic. Why would these people write one thing and do another? I suppose there must be some prestige associated with 'Decision Science' such that people who connect themselves with it can siphon off some respect, fame and money for themselves just by putting their thoughts in a linear sequence that matches the accepted logic structures and mapping out this idealized thinking process for others.

I think there is pressure to conform to the environment; human behavior cannot be understood without taking into account the environment in which it takes place. As Gigerenzer writes
[quote author=Gut Feelings]
An ant rushes over a sandy beach on a path full of twists and turns. It turns right, left, back, then halts, and moves ahead again. How can we explain the complexity of the path it chose? We can think up a sophisticated program in the ant’s brain that might explain its complex behavior, but we’ll find that it does not work. What we have overlooked in our efforts to speculate about the ant’s brain is the ant’s environment. The structure of the wind-and-wave-molded beach, its little hills and valleys, and its obstacles shape the ant’s path. The apparent complexity of the ant’s behavior reflects the complexity of the ant’s environment, rather than the ant’s mind. The ant may be following a simple rule: get out of the sun and back to the nest as quickly as possible, without wasting energy by climbing obstacles such as sand mountains and sticks. Complex behavior does not imply complex mental strategies.
[/quote]

So in the "publish or perish" world of professional academia, one chooses topics which are more likely to attract funding and are easier for generating publications. The imitation heuristic is at play for fresh entrants into the field - so they generally follow what others have done and are doing to secure funding and academic prestige. Then there is the don't break ranks moral rule. The result is what Gurdjieff called the separation of the line of knowledge and line of being. The knowledge acquired does not touch life but is relegated to academic papers and abstract complex theories.

Don't break ranks is another fundamental rule that holds sway over human behavior in many different areas. This rule is an outgrowth of what morality researchers like Jonathan Haidt consider one of the foundations of human morality (these foundations are regarded as evolved capacities) - that of "in-group loyalty". Moral Foundations Theory is an interesting topic and deserves dedicated attention imo. For the discussion here, it suffices to say that for the doctors and their actions, "in-group loyalty" comes into conflict with another fundamental moral foundation, the "care/harm" foundation. While "in-group loyalty" dictates "don't break ranks", the "care/harm" foundation urges acting in a way which puts patient care and well-being above other considerations. This is the internal human conflict between two different moral foundations, both of which have evolved down the ages and have adaptive value. The environment dictates to a large extent which moral rule has more chance of winning the conflict.

The litigation-ridden environment of medical practice where the best intentions of doctors can easily fall prey to malpractice lawsuits has played a significant role in setting up the procedures that are required to be followed in every situation. So the doctor has to order a battery of tests - many unnecessary from their expert perspective - to make sure if things go wrong as they often do in that uncertain environment, they or the hospital would not be sued out of existence. Along with being a person seeking care, every patient becomes a potential litigator against whom the doctor and the institution needs to protect themselves. In this situation there is a divide that is created - on one side lies the hospital and the medical personnel and on the other side is the patient/potential litigator. Any doctor who acts contrary to established procedures, which are set up to protect the interests of the hospital and medical personnel rather than serve the best interests of the patient, violates "in-group loyalty" and breaks ranks. Few are willing to risk it.

Edit: For anyone interested in Haidt and others' research on MFT, here is a brief synopsis with link to source materials.
 
obyvatel said:
The litigation-ridden environment of medical practice where the best intentions of doctors can easily fall prey to malpractice lawsuits has played a significant role in setting up the procedures that are required to be followed in every situation. So the doctor has to order a battery of tests - many unnecessary from their expert perspective - to make sure if things go wrong as they often do in that uncertain environment, they or the hospital would not be sued out of existence. Along with being a person seeking care, every patient becomes a potential litigator against whom the doctor and the institution needs to protect themselves. In this situation there is a divide that is created - on one side lies the hospital and the medical personnel and on the other side is the patient/potential litigator. Any doctor who acts contrary to established procedures, which are set up to protect the interests of the hospital and medical personnel rather than serve the best interests of the patient, violates "in-group loyalty" and breaks ranks. Few are willing to risk it.

I agree, but it's a Fool's Choice, conceptually similar to that in Crucial Conversations. There's always a third option.

In the mid-1990s, a rural hospital in Michigan was having problems with overcrowding in the coronary care unit. University of Michigan researchers wanting to help discovered the same situation in all the other hospitals in the area. Attempting to solve the problem, the researchers gathered 59 items related to heart disease, took 50 of them and created a chart to prioritize them. The idea was to train the doctors to use this chart, having them check the presence or absence of combinations of seven symptoms and insert the relevant probabilities into a pocket calculator, which then gave the probability that a patient had acute heart disease.

The formula used was a logistic regression algorithm that combines and weighs the binary (yes/no) information for the seven symptom combinations. Incredible! Using this chart, the doctors' decisions improved, and the coronary care unit went from 90% occupancy, where only 25% of the patients actually had heart disease, down to 60%, where almost all of them did - but this charting procedure created another dilemma. Doctors are not trained in logistic regression, the process was time-consuming, and doctors didn't like it. They asked: should patients in life-or-death situations be classified by intuitions that are natural but in this case suboptimal, or by complex calculations that are alien but might be more accurate?

The reader might see a connection here between this medical dilemma and other areas of life as well, including financial decision making.

To try to address the doctors' concerns, an experiment was tried, with unexpected results. Doctors made their intuitive diagnosis first, then made diagnoses with the chart to compare. Then the chart was taken away again and doctors relied on their intuition alone.

Overall result? After having been exposed to the chart, doctors improved their intuitive diagnoses and the improvement remained, even when the chart was taken away! Why? The researchers didn't know for sure but they suspected the doctors simply picked up on the most important variables since they couldn't possibly have memorized the entire chart.

From this experiment, the concept of smart heuristics entered the Michigan hospitals. Now only three questions need to be asked, and the resulting decision about whether or not to send the patient to the coronary care unit is as accurate as, or more accurate than, the charting technique ever was.
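Here is a sketch of what such a three-question rule looks like in code - a so-called fast and frugal tree. Treat the specific cues below as illustrative stand-ins rather than as the hospitals' actual protocol:

[code]
def send_to_coronary_care(st_segment_changed: bool,
                          chief_complaint_is_chest_pain: bool,
                          any_other_risk_factor: bool) -> bool:
    """One question at a time; each answer can settle the decision on its own."""
    if st_segment_changed:
        return True                       # first cue alone decides: coronary care
    if not chief_complaint_is_chest_pain:
        return False                      # second cue alone decides: regular bed
    return any_other_risk_factor          # third and last cue decides

# A chest-pain patient with no ST change but one additional risk factor:
print(send_to_coronary_care(False, True, True))   # -> True
[/code]

No weighing, no adding, no calculator: each question either settles the matter or passes it to the next one, which is what makes the rule fast to apply and easy to remember under pressure.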

Go smart heuristics!


Here is a brief summary of the above; the actual source I used is the second reference at the bottom of that page.
_http://www.unc.edu/courses/2010spring/psyc/433/001/tutorials/snow.html
 
obyvatel said:
Our unconscious mind uses heuristics.

Yes, that's adaptive. From what I've learned, gut feelings for Gigerenzer are patterns of response; based on rules of thumb, they are focused on action rather than knowledge. They foreground action by stressing their function - the how and why of filtering information - rather than the implicit value of information itself.

obyvatel said:
We make mistakes when a heuristic which is supposed to be applied in one environment or situation is misapplied in another situation.

Like when we learn as children that attempts to understand why something is so are interpreted by certain adults as "talking back", and we get slapped. So, after a few of these experiences we might form a rule of thumb whose building blocks consist of someone making an assertion with a certain degree of force, using certain tones of voice and body language - certain postures and facial expressions - and the strong possibility of a painful physical or emotional impact.

Can we call this an 'environment' consisting of 'cues' which invoke uncomfortable and maybe vague or unconscious memories of physical or emotional pain? Perhaps as an adaptive response, we form a rule of thumb for a response that requires us to go silent, become meek and freeze the throat before words escape?

When we have this reaction later in life, we are misapplying it: a formerly adaptive response to a specific environment and situation is now an out-of-context reflex.

obyvatel said:
Identifying and bringing such rules to consciousness, choosing the right rules from the toolbox and applying them judiciously in keeping with the situation at hand is the goal of ecological rationality.

Like standing your ground, but in a way that is both honest or candid and respectful until there is reason to act differently? At least, maybe when there's a crucial confrontation involved.
 
Buddy said:
obyvatel said:
We make mistakes when a heuristic which is supposed to be applied in one environment or situation is misapplied in another situation.

Like when we learn as children that attempts to understand why something is so are interpreted by certain adults as "talking back", and we get slapped. So, after a few of these experiences we might form a rule of thumb whose building blocks consist of someone making an assertion with a certain degree of force, using certain tones of voice and body language - certain postures and facial expressions - and the strong possibility of a painful physical or emotional impact.

Can we call this an 'environment' consisting of 'cues' which invoke uncomfortable and maybe vague or unconscious memories of physical or emotional pain? Perhaps as an adaptive response, we form a rule of thumb for a response that requires us to go silent, become meek and freeze the throat before words escape?

When we have this reaction later in life, we are misapplying it: a formerly adaptive response to a specific environment and situation is now an out-of-context reflex.

I agree. Great example. From the perspective of Moral Foundations Theory, our minds come equipped with the capacity to learn moral values and norms. One of the moral foundations is identified as "authority" - we come prepared to recognize, accept and follow legitimate authority, a feature we share with higher primates, who also have a hierarchical social structure. Material from the experience you described gets deposited on the authority foundation of our personal unconscious mind, and the resulting structure perhaps encodes something like "questioning authority is dangerous". Now, as one develops, inherited traits as well as life experiences determine how the overall attitude towards authority shapes up.

Someone may have a lot of material deposited on the moral foundations of "loyalty" and "sanctity" in addition to "authority" with little development on the "care" and "fairness" foundations. He would likely become an authoritarian follower type holding more conservative and puritan views about life. Such people often become instruments of evil deeds in active or passive ways - depending on what the authorities/institutions who dominate their lives ask of them.

If on the other hand, the person has strong foundations of "care" and "fairness" (perhaps from genetic constitution or otherwise) then he would grow up with an unconscious distrust and fear of authority due to the early experiences. He may have an ambivalent relationship to authority in some ways - conforming in certain areas of exterior life and being a rebel in other areas. He may trust ideals and principles more than institutions and authority figures, especially if he is intellectually inclined. The fundamental distrust of authority would prevent him from being a seriously committed card-carrying member of any hierarchical institution.

There are of course very many variations possible - I mentioned these two as I have known such people in my life, especially the latter type.

[quote author=Buddy]
obyvatel said:
Identifying and bringing such rules to consciousness, choosing the right rules from the toolbox and applying them judiciously in keeping with the situation at hand is the goal of ecological rationality.

Like standing your ground, but in a way that is both honest or candid and respectful until there is reason to act differently? At least, maybe when there's a crucial confrontation involved.
[/quote]

Yes, that is part of it. Some find it easier than others - especially if they have strong foundations in "care" and "fairness" that can overcome fears and hesitations. The type with strong "care" and "fairness" morals who have faced early traumatic injury from authorities may find it more challenging to put their trust in legitimate authorities or institutions at a deeper level due to the earlier inferred unconscious rule that authorities are not to be trusted.

OSIT
 
obyvatel said:
I agree. Great example.

Thanks.

obyvatel said:
From the perspective of the moral foundation theory our minds come equipped with the capacity to learn moral values and norms. One of the moral foundations is identified as "authority" - we come prepared to recognize, accept and follow legitimate authority...

What is 'legitimate authority' to Haidt or other MFT proponents?
 
Buddy said:
obyvatel said:
From the perspective of the moral foundation theory our minds come equipped with the capacity to learn moral values and norms. One of the moral foundations is identified as "authority" - we come prepared to recognize, accept and follow legitimate authority...

What is 'legitimate authority' to Haidt or other MFT proponents?

I have not found any elaboration on the term "legitimate" yet from the MFT standpoint apart from its mention here

[quote author=moralfoundations.org]
Authority/subversion: This foundation was shaped by our long primate history of hierarchical social interactions. It underlies virtues of leadership and followership, including deference to legitimate authority and respect for traditions.
[/quote]

Gigerenzer references the MFT in "Gut Feelings" and both he and MFT researchers like Haidt state that this thesis is descriptive in nature and deals with morality as it is observed in human societies (as well as in precursor form in animal societies). It does not deal with the normative question of "how morals should be" in an ideal world.

So I started looking for animal studies. Social dominance hierarchies are seen in animal groups. A new member of a group figures out this hierarchy and maintains proper behavior in order to be accepted within the group. Alpha males are often the "legitimate" authority in an animal group by virtue of their physical strength and aggressive instincts. However, there are exceptions to the alpha male rule - elephant society, for example. A typical elephant society is led by an experienced matriarch.


http://www.livescience.com/42576-elephant-matriarchs-guide-society.html

The elephants in Amboseli exhibit a complex and dynamic fluid, fission-fusion society in which group membership changes over time as individuals come together (fusion) and then part (fission). These sorts of societies are rarely observed in animals other than humans or non-human primates. And, the oldest and most experienced females take the lead. Ms. Ogden notes, "[G]roup size is constantly changing, responding to the seasons, the availability of food and water, and the threat from predators. An adult female elephant might start the day feeding with 12 to 15 individuals, be part of a group of 25 by mid-morning, and 100 at midday, then go back to a family of 12 in the afternoon, and finally settle for the night with just her dependent offspring." And, other research in Proceedings of the Royal Society B has shown that "the more closely related individuals are, the more time they tend to spend with one another."

Furthermore, there appears to be a survival advantage for groups led by older matriarchs. Vicki Fishlock, a resident scientist with the AERP notes, "Good matriarch decisions balance the needs of the group, avoiding unnecessary travel while remembering when and where good resources are available. ... The matriarch has a very strong influence on what everybody does." Indeed, "Studies in Amboseli have revealed that families with older, larger matriarchs range over larger areas during droughts, apparently because these females better remember the location of rare food and water resources." A number of different studies have shown that groups benefit from the presence of "wise old matriarchs" and that "elephants defer to the knowledge of their elders, and that matriarchs call the shots when it comes to deciding what anti-predator strategy to adopt" according to elephant expert Karen McComb at the UK's University of Sussex.
...................

"Older matriarchs also seem to be better at judging 'stranger danger' from other elephants. At Amboseli, each family group encounters some 25 other families in the course of the year, representing about 175 other adult females. Encounters with less familiar groups can be antagonistic, and if a family anticipates possible harassment it assumes a defensive formation called bunching." McComb tested whether a matriarch's age influenced her ability to discriminate between contact calls. They discovered that because older matriarchs have a better memory for various elephant voices families who were led by them were less reactive to vocalizations by less familiar elephants.

The legitimacy of the authority of older matriarchs seems to be based on superior knowledge and the instinct of caring for others. Group members defer to such a matriarch naturally.

Coming back to humans, a good case study would be a hunter-gatherer society which is supposedly more egalitarian in organization. Haidt and others note

[quote author=Haidt et al]
Authority is a particularly interesting case in that hunter-gatherer societies are generally egalitarian. Yet as Boehm (1999) explains, it’s not that they lack the innate cognitive and emotional structures for implementing hierarchical relationships, because such relationships emerge very rapidly when groups take up agriculture. Rather, hunter-gatherers generally find cultural mechanisms of suppressing the ever-present threat of alpha-male behavior, thereby maintaining egalitarian relationships among adult males in spite of the hierarchical tendencies found among most primates, including humans.
[/quote]

From the above, it seems that egalitarian societies may not readily deposit material on the "authority" foundation of morality. This brings back the question: is such a foundation innate? The argument put forward for innateness comes from observing such tendencies (hierarchical society formation) in the animal world, from their widespread cultural existence, and from their potential adaptive benefits. The legitimacy of an authority would be determined by different criteria in different environments.
 
Some clarifications and insights into rationality in the face of uncertainty.

[quote author=Gerd Gigerenzer in Rationality for Mortals: How People Cope with Uncertainty]

“Human rational behavior is shaped by a scissors whose two blades are the structure of task environments and the computational capabilities of the actor” (Simon, 1990: 7). Just as one cannot understand how scissors cut by looking only at one blade, one will not understand human behavior by studying either cognition or the environment alone.

The two key concepts are adaptive toolbox and ecological rationality. The analysis of the adaptive toolbox is descriptive, whereas that of ecological rationality is normative.

The adaptive toolbox contains the building blocks for fast and frugal heuristics. A heuristic is fast if it can solve a problem in little time and frugal if it can solve it with little information. Unlike as-if optimization models, heuristics can find good solutions independent of whether an optimal solution exists. As a consequence, using heuristics rather than optimization models, one does not need to “edit” a real-world problem in order to make it accessible to the optimization calculus (e.g., by limiting the number of competitors and choice alternatives, by providing quantitative probabilities and utilities, or by ignoring constraints).

Heuristics work in real-world environments of natural complexity, where an optimal strategy is often unknown or computationally intractable. A problem is computationally intractable if no mind or machine can find the optimal solution in reasonable time, such as a lifetime or a millennium.
............

The study of ecological rationality answers the question: In what environments will a given heuristic work? Where will it fail? Note that this normative question can only be answered if there is a process model of the heuristic in the first place, and the results are gained by proof or simulation.
[/quote]

The gaze heuristic discussed earlier is one example of a fast and frugal heuristic. Its applicability is limited by the structure of the specific problem being solved - how to track a flying object.
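To make it concrete, here is a toy simulation in Python. The numbers and the simple proportional speed adjustment are my own illustrative choices, not Gigerenzer's formal model; the rule itself is just "once the ball is high in the air, fix your gaze on it and adjust your running speed so the gaze angle stays constant".

[code]
# Toy simulation of the constant-angle gaze heuristic (made-up numbers).
import math

dt = 0.01        # simulation step (s)
g = 9.81         # gravity (m/s^2)
gain = 40.0      # how strongly the fielder's speed responds to gaze-angle drift

# Ball already at the top of its arc, drifting toward the fielder,
# who stands a little farther out along its line of flight.
ball_x, ball_h = 0.0, 20.0     # horizontal position and height (m)
ball_vx, ball_vh = 5.0, 0.0    # horizontal and vertical speed (m/s)
fielder_x = 8.0                # fielder's starting position (m)

def gaze_angle(fx, bx, bh):
    """Elevation angle from the fielder up to the ball."""
    return math.atan2(bh, abs(fx - bx))

target = gaze_angle(fielder_x, ball_x, ball_h)   # angle to hold constant

while ball_h > 0.0:
    # Simple projectile motion for the ball (no air resistance).
    ball_x += ball_vx * dt
    ball_vh -= g * dt
    ball_h += ball_vh * dt

    # The heuristic: if the gaze angle rises, back up; if it falls, run in.
    error = gaze_angle(fielder_x, ball_x, ball_h) - target
    fielder_x += gain * error * dt

print(f"ball lands near x = {ball_x:4.1f} m; fielder ends near x = {fielder_x:4.1f} m")
[/code]

With these made-up numbers the fielder should end up near the landing spot without ever estimating distance, speed or wind - which is the point of the heuristic.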

The 1/N fund allocation in investing, combined with the recognition heuristic to pick the N funds or stocks, is another example of a fast and frugal heuristic. The recognition heuristic is practically useful only in the case of incomplete knowledge of the market. As described in the earlier quote, an expert in the market cannot use the recognition heuristic to his advantage.
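A minimal sketch of how the two heuristics combine, with hypothetical fund names:

[code]
# Recognition heuristic + 1/N allocation: keep the funds a layperson recognizes,
# then split the money equally among them. Illustrative names and amounts only.

def one_over_n_portfolio(candidates, recognized, budget):
    """Allocate `budget` equally across the candidate funds the investor recognizes."""
    picks = [fund for fund in candidates if fund in recognized]
    if not picks:                       # the heuristic is silent if nothing is recognized
        return {}
    share = budget / len(picks)         # the 1/N rule: equal weights, no forecasting
    return {fund: share for fund in picks}

# Hypothetical example: only three of the five names ring a bell for this investor.
candidates = ["Fund A", "Fund B", "Fund C", "Fund D", "Fund E"]
recognized = {"Fund A", "Fund C", "Fund E"}
print(one_over_n_portfolio(candidates, recognized, budget=9000))
# -> {'Fund A': 3000.0, 'Fund C': 3000.0, 'Fund E': 3000.0}
[/code]

Note that there is no forecasting step at all: recognition does the screening and the 1/N rule does the weighting.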

Imitation is a very commonly used, instinctive fast and frugal heuristic: when in doubt, look at what others (preferably the experienced and knowledgeable) are doing. The imitation heuristic tends to pay off when the environment has characteristics like the following (a small sketch follows the list):
- the environment is relatively stable and does not change quickly;
- feedback is slow or absent, in the sense that it is not immediately apparent whether a particular choice leads to good or bad outcomes;
- mistakes have dangerous consequences.
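As a rough illustration of the rule itself (the peer data and the "copy the most experienced" tie-breaker are my own assumptions; imitate-the-majority is another common variant, and the conditions above are left to the decision maker's judgment):

[code]
# A minimal sketch of an "imitate the experienced" rule with hypothetical data.

def imitate(own_experience, peers):
    """If we have no direct experience, copy the choice of the most experienced peer;
    otherwise stick with what worked for us before."""
    if own_experience is not None:
        return own_experience
    most_experienced = max(peers, key=lambda p: p["years_experience"])
    return most_experienced["choice"]

peers = [
    {"name": "novice",  "years_experience": 1,  "choice": "shortcut trail"},
    {"name": "veteran", "years_experience": 20, "choice": "marked trail"},
]
print(imitate(None, peers))   # no experience of our own -> "marked trail"
[/code]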

Maximizing and satisficing provide different strategies applicable to different environments. Maximizing involves identifying key variables, assigning them relative importance (weights) and calculating the resulting value for different combinations. It is a tried and tested method in economics and utility theories and works well in situations where there is a lot of relevant information and low uncertainty. Relevant information means that when information is presented, its importance and impact are known and can be roughly estimated. It is like answering a puzzle or solving a math problem in a structured setting.
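A minimal sketch of such a maximizing (weighted additive) strategy, with made-up attributes and weights:

[code]
# Weighted additive scoring: rate each option on the key variables,
# weight the ratings by importance, and pick the highest total.

def weighted_score(option, weights):
    return sum(weights[attr] * option[attr] for attr in weights)

# Hypothetical apartment choice: attributes rated 0-10, weights express relative importance.
weights = {"price": 0.5, "commute": 0.3, "size": 0.2}
options = {
    "Apartment 1": {"price": 7, "commute": 4, "size": 8},
    "Apartment 2": {"price": 5, "commute": 9, "size": 6},
}
best = max(options, key=lambda name: weighted_score(options[name], weights))
print(best, {name: weighted_score(opt, weights) for name, opt in options.items()})
[/code]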

Satisficing or one-rule decision making is a fast and frugal approach that often outperforms maximizing in real-life, uncontrolled situations where one does not know whether or how much a particular piece of information is relevant to the given problem.
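One concrete instance of such one-reason decision making is the take-the-best heuristic from Gigerenzer and Goldstein: consult cues in order of how well they have discriminated in the past, and let the first cue that distinguishes the two options decide. A simplified sketch with invented cues and cities (in the full heuristic, recognition is checked before the cue hierarchy):

[code]
# Take-the-best: the first discriminating cue decides; the rest are ignored.
# Cue names, ordering and the city data are illustrative assumptions.

CUES = [
    ("recognized by name",    lambda city: city["recognized"]),
    ("has a top-league team", lambda city: city["top_team"]),
    ("is a state capital",    lambda city: city["capital"]),
]

def take_the_best(a, b):
    """Infer which of two options scores higher on the criterion (e.g. population)."""
    for name, cue in CUES:
        va, vb = cue(a), cue(b)
        if va != vb:                       # first discriminating cue decides
            return (a if va else b), name
    return None, "no cue discriminates"    # guess, or fall back on something else

city_a = {"name": "City A", "recognized": True, "top_team": False, "capital": False}
city_b = {"name": "City B", "recognized": True, "top_team": True,  "capital": False}
winner, reason = take_the_best(city_a, city_b)
print(winner["name"], "chosen because of the cue:", reason)
[/code]

Everything below the first discriminating cue is simply ignored - which leads to the hypothesis stated next.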

There is empirical evidence available from research which can be summarized in this somewhat counter-intuitive hypothesis:

For judgements under significant uncertainty, one has to ignore information to make good predictions.

If there is interest in this topic, we can have more discussion.
 