In the study of cognitive science, we would like to think that humans employ a fairly formal system of reasoning: that computations in the mind are somehow form-invariant, that words are not minced, and that we take what was said exactly as it was said. At least, that is what we used to think. Language is such a fundamental basis of cognition that it is often overlooked; its neutrality has become an implicit assumption that people rarely question. Yet it is precisely the subtle and pervasive nature of language that makes us stumble in the formalization of human cognition: not only is language often an uncontrollable variable in the black box of the human mind, it also has profound effects on the way we investigate cognition. A distinction should be made here between our spoken language and “language” in general, because mathematical symbols, for example, are also used as a language, a more precise one at that, though not without problems of its own. This essay will first look at some of the issues we face with spoken language, and then try to generalize to the broader sense of the word.

Language is the basis of standardized communication; moreover, it is the form in which we construct internal thoughts. Such is the importance of language in reasoning: it is the vehicle for it. This should be intuitive if not obvious, so I will move on to the problems we face in cognitive science when its importance is ignored. Without language, we cannot reason with each other, and we would probably do a poor job reasoning with ourselves. Reasoning, at least in its classical form, is a deductive process, i.e. “if A then B”: a systematic reduction of possibilities in pursuit of a relationship between cause and effect. Deduction therefore requires precise symbols and syntax, such that no new information is introduced by the ambiguities of their application. This is exactly where spoken language creates problems in our study of cognition: not so much that the expectation of precision is not met, but that the gap between expectation and reality is often overlooked, and thus unaccounted for.
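To make the requirement concrete, the deductive step “if A then B” can be written in standard propositional notation (my own illustration, not a formulation from the readings) as modus ponens:

\[
\big( (A \rightarrow B) \land A \big) \vdash B
\]

Every symbol here admits exactly one reading: given the rule and the premise, the conclusion follows, and nothing about who said it or how can change what follows. Spoken language offers no such guarantee.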

Grice’s maxims of cooperative communication, as introduced in Kahneman & Tversky, nicely model our expectations in the use of language. The maxim of quantity, for example, licenses the listener to infer hidden meaning from the message, on the assumption that the speaker would not bother to say things the listener could already infer. The details of the maxims, revelatory as they are, matter less than the key takeaway: listeners routinely make inferences beyond what is explicitly said, which breaks the requirement of precision in the symbols and syntax of a step of logical deduction. It is actually quite remarkable that we are able to use spoken language to reason at all, though of course a different set of expectations may govern strictly deductive conversations, such as academic discourse. Curiously, the inference of hidden meaning from an explicit message is itself a form of reasoning, in which we use contextual information to draw logical conclusions about the intended message. This is a sort of meta-reasoning: reasoning about the vehicle of reasoning itself. In such cases, it is possible that we are not using language in our internal reasoning at all, quickly bypassing explicit formulations of the object in question in favor of some fuzzier integrator. Nonetheless, this creates problems for our investigation of cognitive processes, as raised in both Tversky and Gigerenzer, since the explicit inputs and outputs do not necessarily reflect the internal processes.

As imprecise as spoken language is, could we expect to do better with a more precise system of communication, such as mathematical symbols? Yes, but not always. Mathematical symbols have almost zero ambiguity in the message they convey: a ‘+’ is always a sum of two things. Actually, I should not say zero ambiguity; as Tversky points out, there is still contextual information, and an operation in base 2 is completely different from the same operation in base 10. With mathematical and other formal systems, however, it is much easier to pinpoint the implicit contextual information, which is often represented by the axioms of the system itself. An interesting result common to all three readings is that when two representations of the exact same mathematical message are translated into spoken language, our ability to reason depends heavily on which representation is seen; this is the presented argument between probabilists and frequentists. It would seem extraordinary that two equivalent representations could produce such dissimilar results; it is like saying a calculator suddenly forgets how to compute angles when switched from degrees to radians. On further investigation, though, the result should not be so surprising. In a strictly mathematical system, 1/2, 50/100, and 0.5 all mean literally the same thing, because the conventions of the system state that the particular numerator and denominator are to be ignored; they are merely representations of the same final value, 50%. When this message is carried into the world of spoken language, however, the explicit rule is lost. Or rather, it drowns in the sea of contextual information humans deem more important in the specific situation, such as the chance that a hunt will be successful. As in the example raised in the reading, we can be far more confident when we catch dinner 50 times out of 100 than when we catch it 1 time out of 2.
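The formal equivalence that spoken language discards can be stated in one line (a standard fact of arithmetic, not specific to the readings):

\[
\frac{1}{2} = \frac{50}{100} = 0.5 = 50\%, \qquad \text{since } \frac{a}{b} = \frac{c}{d} \iff ad = bc .
\]

And the discarded context is not mere noise. If the ratios are read as observed frequencies, the estimate from 50 successes in 100 trials really is more reliable than the one from 1 success in 2: the standard error of an estimated proportion, $\sqrt{p(1-p)/n}$, shrinks as the sample size $n$ grows. The spoken-language intuition quietly tracks information that the bare ratio throws away.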

As I near the end of this essay, I have undergone a minor shift in stance. Language is still central to our ability to reason and to communicate our reasoning, but it is not the only component that enables reasoning; we are quite adept at reasoning implicitly with the contextual information available to us. Beyond our inability to control specific interpretations of language, we are often unaware that an entirely different system was used to reason. Perhaps this explains what now look like poorly contrived experimental designs of the past: we wrongly assumed that language is the only vehicle of reasoning, and hence that if we interpret the language used in a “think-aloud” experiment intelligently, even given the ambiguities of interpretation, we should still be able to draw reasonable conclusions about the cognitive processes in play. The truth may be that we employ other vehicles of reasoning that simply are not reflected in the explicit, conscious formulations of Language (big L, including mathematics and the like). This also changes the definition of reasoning. Historically, we have thought of reasoning as a higher-order function that necessarily recruits a conscious thought process; after all, Aristotle claimed that reasoning is the pinnacle of human intelligence. In reality, much of our reasoning, especially in everyday life, occurs subconsciously, far removed from explicit representation in language. In this sense, there is no reason to think that animals do not reason, as every animal is able to draw conclusions from prior and contextual evidence. It is with the realization of having committed the mortal sin of anthropocentricity that I shamefully conclude my essay.