Introduction for Students and Professionals in Cognitive Science

The following is a sneak preview from Jack Pelham’s upcoming book, Reality-Based Thinking: How everyone–including you–can think better. This special introduction (to be distinguished from the book’s “Brief” and “Thorough” introductions) is for cognitive scientists and students of cognitive science.


—————————————————————

INTRODUCTION

It seems useful to provide a special introduction for those readers who are working or studying in the field of cognitive science, so that you can quickly understand both the intent of and the need for this book. Let me state at the outset that I am not a scientist by profession. Rather, I am an extraordinarily interested human who has been something of a frustrated activist, long stymied by how hard it is to influence individuals to correct their own thoughts, decisions, and beliefs, even when the facts self-evidently favor that correction. This failure to self-correct is a widespread and serious problem in our culture, and it is fueled by both a lack of learning in the strategies of epistemic rationality and a lack of moral diligence concerning the outcomes of one’s own thinking strategies.

Meanwhile, recent studies in psychology give considerable cause to believe that humans of normal brain health are generally equipped with the cognitive faculties for discerning reality/truth from unreality/falsehood, as well as for regulating the certainty of their own beliefs to fit the sum of the available evidence. The common impediments are threefold:

  1. a thinking disposition that frequently avoids System-2/Algorithmic mental processes (cognitive miserliness),
  2. a lack of learning in certain thinking skills (particularly in logic and probability theory), and
  3. a disposition toward cognitive biases.

It appears from the literature that all these impediments can be overcome by learning and diligence, yet even so, there exists no public initiative aimed at encouraging epistemic rationality as a sustained strategy for life. Hence, the need for this book.

My thirteen-year journey to understand why it is so hard to help other people change their minds reached a milepost when I discovered What Intelligence Tests Miss: The psychology of rational thought, by Keith E. Stanovich. Stanovich’s tripartite model of the mind was quite useful in distinguishing the Reflective Mind as a third process beyond the standard dual-process model, one that is in charge of cognitive quality control and that is the seat of the thinking disposition. Though Stanovich made no explicit mention of it in his book, the implication was fairly obvious that the cognitive output in his tripartite model is the result not only of nature and nurture, but also of choice. If the skills of epistemic rationality can be learned, it follows that the thinker may choose to use them if he or she likes.

Choice being a subject germane to ethics and morality, it seemed obvious to me that something is generally missing from the standard discussion of cognitive miserliness: namely, that it has an underlying condition or cause that lies within the control of the human. If it is the thinking disposition of certain individuals to be particularly miserly with their cognitive energies, especially when it comes to correcting themselves on matters about which they do, could, or should know that they are wrong, then their cognitive miserliness is, at least in part, the result of moral miserliness.

On this point, the work of Dan Ariely was most helpful. In particular, his book The (Honest) Truth About Dishonesty tells of several experiments on moral choices, demonstrating that most people are quite willing to participate in activities that they themselves consider immoral (particularly lying, cheating, and stealing), but that they tend to curb these activities at the point where continuing would make it too difficult to maintain a positive view of themselves. In other words, there must be some mechanism in the mind for making judgments about the trends of one’s own moral behavior, and about the standards by which those trends should be measured. In my view, this looks quite a lot like Stanovich’s tripartite model, wherein the Reflective Mind makes judgments about how and when the Algorithmic Mind will be utilized.

Ariely goes on to note two very interesting facts that emerged from his research:

  1. Even subjects suffering from ego depletion were still capable of doing math problems when asked. [Dan Ariely. The (Honest) Truth About Dishonesty. HarperCollins. 2012. Page 106] Math problems, of course, require the Algorithmic Mind (System 2 / Type 2 processing), the very processes that the cognitive miser tends to avoid; yet we see that they can still be implemented even at a moment when the subject is not generally disposed to implement them.
  2. Experimental subjects were considerably less inclined to cheat (when given an opportunity) after having been reminded of some manner of moral code. [Dan Ariely. The (Honest) Truth About Dishonesty. HarperCollins. 2012. Pages 39-53] This suggests that the ability to resist the strategy of cheating was always there, but that a reminder to engage that resistance sometimes makes the difference in whether it is engaged or not. I noted that if that resistance can be sparked from without, it can certainly be sparked from within when a subject makes it his or her habit to self-remind about cognitive and moral standards.

These two findings square well with several of Stanovich’s observations about how thinkers tend to think better when cued to do so (emphasis added):

  • “More intelligent people appear to reason better only when you tell them in advance what good thinking is!” [What Intelligence Tests Miss: The psychology of rational thought. Keith E. Stanovich. 2009. Page 38.]
  • “In short, subjects of higher intelligence are somewhat less likely to show irrational framing effects when cued…that an issue of consistency is at stake; but they are no more likely to avoid framing without such cues.” [What Intelligence Tests Miss: The psychology of rational thought. Keith E. Stanovich. 2009. Page 99.]
  • “Intelligent people perform better only when you tell them what to do! (I am referring here specifically to the domain of rational thought and action.)” [What Intelligence Tests Miss: The psychology of rational thought. Keith E. Stanovich. 2009. Page 99.]

All this raises two questions: 1) whether a person will do his or her best rational thinking at a particular moment, and 2) what it will take to make that happen. Again, if a subject can engage in epistemic rationality but chooses not to, then this is a question either of an initial moral/behavioral choice, or of one’s ability to maintain a moral/behavioral choice while under temptation.

Walter Mischel describes the temptation-fighting role of Executive Function in his book The Marshmallow Test: Mastering Self-Control. In a description that brings to mind Stanovich’s model of the Reflective Mind, Mischel details three strategies of Executive Function (EF) that helped his test subjects resist eating a marshmallow when they knew that waiting meant they could have two marshmallows later (emphasis added):

“Each child who waited successfully had a distinctive methodology for self-control, but they all shared three features of EF: First, they had to remember and actively keep in mind their chosen goal and the contingency (‘If I eat the one now, I don’t get the two later’). Second, they had to monitor their progress toward their goal and make the necessary correction by shifting their attention and cognitions flexibly between goal-oriented thoughts and temptation-reducing techniques. Third, they had to inhibit impulsive responses—like thinking about how appealing the temptations were or reaching out to touch them—that would prevent them from attaining their goal.”

This self-monitoring, self-correcting, self-distracting (from hot temptations) strategy enabled Mischel’s subjects to succeed, just as Stanovich’s cues helped thinkers to execute their rationality skills, and as Ariely’s moral code reminders helped test takers not to cheat.

In my view, all this fuses together into a model in which the reminded human does better in the interlaced areas of rationality/morality/self-control than does the unreminded human, and in which the self-reminding human can do better still, inasmuch as he or she does not require an external reminder. Again, all this seems to put the human firmly in the position of a moral agent who may decide how he or she will think, decide, believe, and behave.

It is by this reasoning that I have devised my own Self Correction Ethic:

Self correction is a natural function of the human mind, and is therefore the rightful duty of all humans.

The first clause of this ethic is a matter of fact, based on the findings of cognitive science. The second clause is, of course, a matter of moral philosophy. It assumes that just as a society works better when each person provides for his or her own needs and regulates his or her own behavior, it will also work better for the individual and the group alike when each individual corrects his or her own thoughts, decisions, and beliefs. I liken a human not making thorough use of his or her Algorithmic (System 2) and Reflective Minds to a horse deciding not to make use of one of its legs. The latter we would consider odd and “unnatural”. The former, however, is culturally “normal” in our society, so we tend to lose sight of just how unnatural it is. The fact of the matter is that our world cultures have adopted a strangely low goal for cognitive maturity without any natural cause for it except the avoidance of either the cognitive work involved or the emotional shock that might ensue from investigating matters more diligently.

The Self Correction Ethic raises the need for a standard for self correction. That is, to what standard shall we correct ourselves? Ariely recognizes this question when he quotes Oscar Wilde: “Morality, like art, means drawing a line somewhere.” Ariely aptly follows with, “The question is: Where is the line?” [Dan Ariely. The (Honest) Truth About Dishonesty. HarperCollins. 2012. Page 28]

In this present book, I propose a model in which the line should be drawn at reality, which I define as follows: the state of things as they actually exist, as opposed to an idealistic or notional idea of them. [Compact Oxford English Dictionary of Current English, Oxford University Press, 2005.] My position, therefore, is that humans should, as a matter of deliberate and consistent habit:

  1. Reject what is not really true;
  2. Regulate their beliefs and behaviors to fit the evidence, rather than maintaining beliefs and behaviors that are at odds with reality.

This does not preclude the judicious use of the imagination to devise ways to alter our situations as needed—such as adding onto or moving out of a too-small house, rather than taking an attitude of “passively submitting to the reality that it is too small.” It does, however, preclude such behaviors as taking what is not really yours, promoting what is not really true, repeating what did not really work, and so forth.

This, therefore, is the unabashed philosophical stance of this book:

Since we live in a real world, things naturally go better when we think, decide, and believe in such a way as to be deliberately responsible to reality.

Activism
I believe it is fairly obvious that most of the problems that plague individuals and societies alike are caused by the under-use of epistemic rationality. It is my aim, therefore, to encourage people to learn and to use epistemic rationality, and then both to remind them to do better, as well as to give them the tools by which they may remind themselves. When people become self-correcting, life becomes much more efficient and no longer requires all the “overhead” of oversight and external regulation. Thus do I deem this the most efficient form of activism, as well as that most likely to succeed.

Most activists simply seek to make some specific change in the end product of what people think. That is, they seek to change what a person thinks about this or that. Such a strategy generally fails to bring about sweeping changes in a culture, however—and particularly in a culture in which people make little use of their own cognitive faculties. The better strategy, it seems, is to teach them how to think at the foundational level; from there, they can do their own adjusting of their final positions on various matters of importance.

Because the term Epistemic Rationality is not easily grasped by the layman, who knows little of the nomenclature of philosophy or of cognitive science, I have adopted the previously little-used term Reality-Based Thinking in its place. I focus far more on thinking, deciding, and believing than on behavior because I see the latter as the natural result of the former. Further, to focus on specific bad outward behaviors seems a sure-fire way of evoking the backfire effect. I would rather sell a man a vacuum cleaner for his own use than presume to go to his house and clean up his mess for him. For this reason, I strive to stay away from political and religious topics with few exceptions, for these tend to be where messily derived beliefs are held most adamantly, and where people are least likely to aspire to being responsible to reality.

Panglossian or Meliorist?
If you haven’t figured it out by now, I fall squarely on the Meliorist side of the ongoing Panglossian/Meliorist debate about rationality. Where the Panglossian believes that “all is for the best in this best of possible worlds” [http://www.merriam-webster.com/dictionary/panglossian], I believe that a lot of what ails us can and should be meliorated to an appreciable extent. And even if it is only barely true that our cultures can be improved, it is most certainly true that the individual life of any particular person can be improved. Consider the one-liner that lies beneath what I call the Activist’s Curse:

The Activist’s Curse
If I can change my mind, and if you can change your mind, then who are we to declare that it is too late for the rest of our society to change its mind, too?

No accomplished Reality-Based Thinker—which is what I call a Realitan—will look at the evidence that individuals can change, and then proclaim that we live in the “best possible world”. Let it not escape our notice that this “best possible world” notion squarely implies that things are as good as they’re going to get. This notion is easily disproved at the level of the individual. For example, when I first answered many of the questions of the sort that are likely to appear on the upcoming Rationality Quotient (RQ) Test, I got them wrong. Since then, however, I have considerably improved my skills in epistemic rationality, as well as my resolve in applying those skills as often as they are needed. My own success in learning, therefore, makes the scientific evidence that epistemic rationality is learnable all the more believable.

Beyond that, however, there are a few society-wide improvements to which we can point as evidence that wide-scale improvement is indeed possible. Among them are the decreases (not extirpation, mind you, but decreases) in smoking, pollution, and racism. If a culture can change its paradigms regarding such things, what evidence is there that it cannot change its paradigms with regard to its thinking?

At the conclusion of his book, Rationality and the Reflective Mind, Keith Stanovich put it this way:

“…rational thought is related to real-world consequences and it can be taught.”
[Page 246]

Stanovich even believes that the public’s view of intelligence can be altered so as to highlight the distinction between intelligence and rationality:

“I think that folk psychology does now distinguish between intelligence and rationality somewhat, but that folk psychology could be reformed to do this even more.”
[What Intelligence Tests Miss: The psychology of rational thought. Keith E. Stanovich. 2009. Page 55.]

This, in part, is what I hope to help with.

Don’t Forget the Culture!
Just as the ongoing nature vs. nurture debate needs to make room for choice, we had better start looking at the influence of culture on an individual’s use of epistemic rationality. In this book, I talk about culture a good deal. And I’m not alone in this, as many in philosophy and cognitive science have written about “meme culture” or “hearsay culture”. Indeed, if reminders are useful in bolstering rationality, morality, and perseverance under temptation, then they are also bound to be useful in influencing people to do the opposite of these needful tasks. Studies on peer influence abound, and there is no need for me to tell you about them. I bring it up, however, to emphasize the importance of keeping the forces of culture ever in mind as we ponder normal human behavior. If Billy is behaving as a particularly miserly thinker, we should resist the temptation to chalk it up solely to Billy’s DNA or his family’s paradigms, for Billy exists in a culture of cognitive misers, where those traits are frequently affirmed as “normal”.

Overgeneralizations Among Scientists
Not every branch of human knowledge is fully developed, of course. The current state of knowledge about rationality and dysrationalia still has quite a way to go before it is anywhere near complete, and the advent of the upcoming RQ Test will be invaluable toward this end. In the meantime, many champions in cognitive science are busy gathering and analyzing data—lots and lots of data about what we humans tend to be like in our thinking. I am concerned, however, that we all be careful not to draw hasty conclusions about humankind. For example, we could look at results from the Wason 4-card Selection Task and see that 90% of humans make one or more errors of judgment on that task. But will we rush to a hasty overgeneralization of humanity on account of these data? Will we declare that people just aren’t good at this sort of problem? Or will we take the higher road and declare instead that some people are good at this sort of problem while most are not?
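For readers unfamiliar with the task, here is a minimal sketch of the logic involved. The card faces and rule are the classic textbook version (E, K, 4, 7), assumed here purely for illustration rather than drawn from any particular study cited in this book; the point is simply that the normatively correct response is well defined, and some minority of subjects do find it.

```python
# A minimal sketch of the classic Wason 4-card selection task.
# Card values are the textbook version (E, K, 4, 7), assumed for
# illustration only.

def is_vowel(face):
    return face in "AEIOU"

def is_odd_number(face):
    return face.isdigit() and int(face) % 2 == 1

# Rule under test: "If a card has a vowel on one side, then it has an
# even number on the other side."  Only cards that could falsify the
# rule need to be turned over: a vowel (it might hide an odd number)
# and an odd number (it might hide a vowel).
def cards_to_turn(visible_faces):
    return [face for face in visible_faces
            if is_vowel(face) or is_odd_number(face)]

print(cards_to_turn(["E", "K", "4", "7"]))  # -> ['E', '7']
```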

In all my reading, I remember coming across one account of an experiment in which only about 25% of subjects took the appropriate moral action in what they thought was an emergency situation. One author, however, summed up this experiment by proclaiming that people tend to excuse themselves from responsibility in an emergency when they know that others are present and are also aware of the need. When I read this conclusion, I was shocked, for we had just been told that 25% had not excused themselves, but had done the right thing.

Why this overgeneralization? Why this cognitive error in a book about cognitive errors? Well, obviously—as even the author in question said elsewhere himself—we are all prone to cognitive errors. I am simply urging that in discussions of these matters, we exert extra diligence to avoid giving inaccurate impressions of “what we humans are like”. In particular, I want to guard against any bias toward ignoring the cognitive successes among the experimental results. In fact, I want to know how those 25% managed to avoid the self-excusing exercised by the remaining 75%. In my view, such things should be studied and emulated, rather than dismissed and ignored. Those who avoid the common cognitive errors (of any type) should not be dismissed as of little statistical importance, but studied as prototypes of what others may also achieve.

As one might expect from a guy writing a book on Reality-Based Thinking, I favor a reality-based view of the data, as opposed to a biased one. I also favor a reality-based view of what can be improved with careful intervention. Let me give an example based on the assumption that the percentages on the Wason Task hold true worldwide. If 10% can get Wason right, then a huge portion of the world population might be helping to teach others the very skills that they use themselves. And with that, there’s a reasonable hope that this number can grow from 10% to 11% and beyond with a deliberate initiative to this end. Let me remind you that 10% of the world’s population (the group we would expect to succeed at Wason) comes to roughly 700 million people. To increase that group to the 11% mark would be to help another 70 million people learn to do a type of reasoning that they are currently failing to do.
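For the curious, here is the back-of-the-envelope arithmetic behind those figures, assuming a world population of roughly 7 billion (the figure implied by the numbers above):

```python
# Back-of-the-envelope arithmetic for the Wason figures above,
# assuming a world population of roughly 7 billion.
WORLD_POPULATION = 7_000_000_000

one_percentage_point = WORLD_POPULATION // 100       # 70,000,000 people
currently_succeeding = 10 * one_percentage_point     # 700,000,000 people

print(f"Expected to succeed at Wason now (10%): {currently_succeeding:,}")
print(f"Newly helped by moving from 10% to 11%: {one_percentage_point:,}")
```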

Making a Difference
When was the last time you helped 70 million people to improve their lives? I ask this question because I find it a personal blessing to help even one person to learn something new with which he may improve his own life. So while I’m quite aware that about 6,300,000,000 people on Earth may get the Wason task wrong, I’m far more interested in the 700 million who get it right, and what they might do to help others learn the same skills.

Or consider the famous Bat & Ball question, on which about 80% fail, primarily through the simple failure to check their math before finalizing their answer. Let us suppose that a public initiative aimed at teaching people the habit of checking their thinking were launched in the United States (~315 million people), and that it resulted in a modest 10% of the people beginning to check their math 10% more often than they currently do. This would directly affect 31.5 million people, sparing them a significant number of their current errors—not only in math, but in whatever other areas of life in which they decided to “check their math”. Then there’s the incalculable matter of the indirect value of this improvement. How many dollars that are currently lost to error would be retained? How many cases of error-based misinformation would be avoided?
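As a rough illustration, here is a minimal sketch of both points: the “check your math” habit applied to the standard bat-and-ball wording (a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball; those dollar amounts are the textbook version, assumed here rather than quoted from this book), followed by the reach arithmetic above.

```python
# "Checking your math" on the classic bat-and-ball question:
# a bat and a ball cost $1.10 together; the bat costs $1.00 more
# than the ball.  (Textbook wording, assumed for illustration.)

def answer_checks_out(ball_cost):
    bat_cost = ball_cost + 1.00
    return abs((bat_cost + ball_cost) - 1.10) < 1e-9

print(answer_checks_out(0.10))  # False -- the common intuitive answer fails
print(answer_checks_out(0.05))  # True  -- the checked answer holds up

# Reach arithmetic from the paragraph above: ~315 million people in the
# United States, 10% of whom adopt the checking habit.
US_POPULATION = 315_000_000
directly_affected = US_POPULATION // 10
print(f"People directly affected: {directly_affected:,}")  # 31,500,000
```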

The outlook for meaningful improvement is positive in my view, yet there is not one public initiative for the promotion of rationality, excepting my own modest startup, The Society for Reality-Based Thinking. It remains to be seen what can be achieved with it, of course, but the task of taking the message to the public is a worthy one, even if relatively few people prove to take advantage of it to improve their own lives.

A Gateway
This book is intended to be a gateway for learning. It is written for the lay audience as a summary of this expansive topic of rationality and of the harm that is done to our cultures by a deficit in rationality. My hope is to encourage readers to take a personal interest in the topic, and to begin some manner of self-directed program of reading the relevant books that cognitive scientists are publishing. Even a modest program of reading one book related to epistemic rationality each year could go a very long way in helping people to improve their own lives.

My own reading list appears in the appendix of this book, along with very brief summaries of the books I have read in studying this topic. Further, the website for the Society for Reality-Based Thinking (SRBT) has an Amazon-affiliate store where these books will be promoted and sold. In short, while the scientists are busy doing the science, I plan to be busy promoting not only the science, but the useful application of it in the real world.

Call For Feedback
Even if I were a cognitive scientist, the likelihood of having made one or more errors in this book would be high. If you spot any errors or mischaracterizations, please do not hesitate to inform me. My email address is jackpelham@realitybasedthinking.org.

Further, I’m always open to new scholarly articles—particularly those appropriate for lay audiences—for posting or linking at the website. I do not view myself as a competitor in this field, but as a facilitator for the public, so if you think of any way I can do that better, I’m all ears.