you've traveled many times, you are absorbing information about the environment and putting it into immediate action. As we say, much of this happens on autopilot. The processes involved are reliable in most ordinary circumstances, which is why most of us can do something dangerous like drive a car without a major mishap. Of course, each of these processes is itself composed of highly complex sub-processes, and each of those is composed of still more moving parts, most of which do their jobs without our conscious effort. In normal operations, for example, our brain weeds out what isn't coherent with our prior experiences, feelings and whatever else we think we know. This happens on various levels. At the most basic one, we (again, unconsciously) tend to compare the deliverances of our senses, and we reject the information we are receiving if it doesn't match.
This sort of automatic filtering that accompanies our receptive states of mind is described by Daniel Kahneman and other researchers as the product of "system 1" cognitive processes. System 1 information processing is automatic and unconscious, without reflective awareness. It includes not only quick situational assessment but also automatic monitoring and inference. Among the jobs of system 1 are "distinguishing the surprising from the normal," making quick inferences from limited data and integrating current experience into a coherent (or what seems like a coherent) story. 4 In many everyday circumstances, this sort of unconscious filtering (coherence and incoherence detection) is an important factor in determining whether our belief-forming practices are reliable. Think again about driving your car to work. Part of what allows you to navigate the various obstacles is not only that your sensory processes are operating effectively to track the facts, but that your coherence filters are working to weed out what is irrelevant and make quick sense of what is.
Yet the very same "fast thinking" processes that help us navigate our environment also lead us into making predictable and systematic errors. System 1, so to speak, "looks" for coherence in the world, looks for it to make sense, even when it has very limited information. That's why people are so quick to jump to conclusions. Consider: How many animals of each kind did Moses take into the ark? Ask someone this question out of the blue (it is often called the "Moses Illusion") and most won't be able to spot what is wrong with it, namely, that it was Noah, not Moses, who supposedly built the ark. Our fast system 1 thinking expects something biblical given the context, and "Moses" fits that expectation: it coheres with our expectations well enough for it to slip by. 5 Something similar can happen even on a basic perceptual level; we can fail to perceive what is really there because we selectively attend to some features of our environment and not others. In a famous experiment, researchers Christopher Chabris and Daniel Simons asked people to watch a short video of six people passing a basketball around. 6 Subjects were asked to count how many passes the people made. During the video, a person in a gorilla suit walks into the middle of the screen, beats its chest, and then leaves, something you'd think people would notice. But in fact, half of the people asked to count the passes missed the gorilla entirely.
So, the "fast" receptive processes we treat with default trust are reliable in certain circumstances, but they are definitely not reliable in all. This is a lesson we need to remember about the cognitive processing we use as we surf the Internet. Our ways of receiving information online (Google-knowing) are already taking on the hallmarks of receptivity. We are already treating it more like perception. Three simple facts suggest this. First, as the last section illustrated, Google-knowing is