Which came first: the chicken or the egg?
Is similarity the basis of categorization? Or does categorization inevitably lead to similarity? Will we ever know? Is this even a good question? In addressing this century-old question in psychology, I will first go over this week's four readings, summarizing their relevant contributions to the discussion. Then, in the second half, I will attempt to define what I think categories and features are, and what kinds of problems my definitions face. In short, I agree with the fuzzy, theory-driven model of concept formation, though my version is a bit more extreme than the one Medin presents. Skip ahead to "The Category of Features" if you'd rather bypass the summaries.
Tversky and Tenenbaum
We begin with Tversky's set-theoretic approach to similarity. In response to the classical geometric model, Tversky shows that its three assumptions – minimality, symmetry, and the triangle inequality – do not hold up in an array of scenarios. In their place he proposes a more general theory of feature matching, built to align with the empirically observed data. In a nutshell, Tversky's contrast model considers not only the features two objects share, but also the features each has that the other lacks, in both directions; by adjusting the weights on these three terms, he shows reliable prediction of behavioral data. He also gives consideration to some observed human nuances, such as diagnosticity and what makes a good metaphor. Overall, Tversky's theory is a fair attempt to move away from the rather rigid geometric model, as the free parameters do a great job of capturing situations such as when humans take context into account. Unfortunately, because the theory is so general in attempting to capture very abstract processes, all while being mathematically "defined", it feels rather empty on the inside: the harder he tries to prove the theory with experimental data, the more contrived it feels.
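To make the contrast model concrete, here is a minimal sketch, assuming set cardinality for Tversky's salience function f and hand-picked weights; the Korea/China feature sets below are my own toy stand-ins for the famous asymmetry example, not data from the paper.

```python
# Minimal sketch of Tversky's contrast model:
#   S(a, b) = theta*f(A & B) - alpha*f(A - B) - beta*f(B - A)
# with f taken to be set cardinality and weights chosen by hand.

def tversky_similarity(A, B, theta=1.0, alpha=0.7, beta=0.3):
    """Contrast-model similarity between feature sets A and B."""
    common = len(A & B)   # features the two objects share
    a_only = len(A - B)   # distinctive features of the subject A
    b_only = len(B - A)   # distinctive features of the referent B
    return theta * common - alpha * a_only - beta * b_only

# With alpha > beta, the subject's distinctive features count against it
# more, so the direction with the sparser subject scores higher:
# "North Korea is like China" beats "China is like North Korea".
korea = {"asian", "communist", "small"}                         # toy features
china = {"asian", "communist", "large", "populous", "ancient"}  # toy features
print(tversky_similarity(korea, china))  #  0.4
print(tversky_similarity(china, korea))  # -0.4
```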
Tenenbaum & Griffiths, on the other hand, commit to being (largely) a mathematical discussion. While Tenenbaum also provides examples of application, he doesn't claim that the human brain IS doing any kind of Bayesian analysis. The paper presents a Bayesian model of generalization and extends it to multiple prior observations. It also shows that the model can handle arbitrary (non-continuous) stimulus structures, and is in fact quite similar to the set-theoretic model. In the end, Tenenbaum concedes that we have very little clue about where the priors come from, but that the model remains useful nonetheless, given good estimates of them. Here, the magic broom of unknown priors cunningly saves the theory from unnecessary conjectures about the wide range of human behaviors, leaving it with a narrow but useful range of application.
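As a concrete toy version of the model, the sketch below implements Bayesian generalization with the size principle over a hand-built hypothesis space; the space itself is my own minimal stand-in, in the spirit of Tenenbaum's number game, not the paper's actual hypothesis space.

```python
# Minimal sketch of Bayesian generalization with the size principle:
#   p(y in C | X) = sum of p(h | X) over hypotheses h that contain y,
# where p(X | h) = (1/|h|)^n if h contains all n examples, else 0.

def p_generalize(y, X, hypotheses, prior):
    """Probability that y belongs to the same concept as the examples X."""
    def likelihood(h):
        # Size principle: each example is drawn uniformly from h,
        # so smaller consistent hypotheses get sharply higher weight.
        return (1.0 / len(h)) ** len(X) if all(x in h for x in X) else 0.0

    posterior = {h: likelihood(h) * prior[h] for h in hypotheses}
    Z = sum(posterior.values())
    return sum(p for h, p in posterior.items() if y in h) / Z

# Toy hypothesis space over the numbers 1..20 with a uniform prior.
hypotheses = [
    frozenset({2, 4, 8, 16}),      # powers of two
    frozenset(range(2, 21, 2)),    # even numbers
    frozenset(range(1, 21)),       # any number up to 20
]
prior = {h: 1.0 / len(hypotheses) for h in hypotheses}
print(p_generalize(6,  [2, 4, 8], hypotheses, prior))  # ~0.07: probably not
print(p_generalize(16, [2, 4, 8], hypotheses, prior))  # 1.0: every consistent h has it
```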
Medin and Rosch
The Medin and Rosch papers were much more palatable, perhaps because they a) were more general reviews, and b) shied away from any mathematical formalization. Rosch breaks open the discussion by renouncing the classical theory of well-defined sets and introducing fuzzy, graded category membership. She provides a list of evidence for why graded features are more natural for humans, and coins the term conceptual prototype, which can take various forms: an average example of class membership, an extreme, or just some discrete prior instances. She then handles some of the criticisms of the model, as well as problems involving operations on fuzzy features. Smartly, she stays away from mathematically defined "fuzzy logic".
Medin, for his part, nicely summarizes the differing perspectives of the classical category view and the probabilistic (fuzzy) view. As a proponent of the fuzzy model, he tackles some of the issues facing prototype versus exemplar representations. Then, in the discussion I found most reasonable and insightful, he raises the question of just what exactly concepts and features are, and how their intricate relationship complicates our pursuit of a generalized theory of categorization. This is where I will begin my own discussion.
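Before that, as a concrete aside: here is a minimal sketch of what the prototype/exemplar contrast amounts to computationally, under my own assumptions (items as numeric feature vectors, exponential similarity); neither the representation nor the parameters come from the readings.

```python
import math

# Toy contrast between prototype and exemplar accounts of graded membership.
# Items are (redness, sweetness) feature vectors; similarity decays
# exponentially with distance.

def dist(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def prototype_score(item, exemplars):
    """Similarity to the category's average member (a summary prototype)."""
    proto = [sum(dim) / len(exemplars) for dim in zip(*exemplars)]
    return math.exp(-dist(item, proto))

def exemplar_score(item, exemplars):
    """Summed similarity to every stored instance of the category."""
    return sum(math.exp(-dist(item, e)) for e in exemplars)

apples = [(0.9, 0.8), (0.7, 0.9), (0.8, 0.6)]  # made-up stored apples
print(prototype_score((0.8, 0.8), apples))  # close to the average: high
print(exemplar_score((0.8, 0.8), apples))
```

Either score yields graded, rather than all-or-none, membership, which is the fuzzy view's core claim; the two accounts come apart mainly in what gets stored (one summary versus every instance).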
The Category of Features
It seems that the definition of what a feature is should be a very important thing to settle on, but instead the older literature has skimmed over it rather haphazardly, going for the easy and intuitive candidates, like color or some visible physical property. Only later is it discussed that goal-driven features are also possible, such as "things to eat while dieting". We can easily extrapolate further: in fact, the (degree of) satisfaction of any arbitrary criterion can be deemed a feature. Medin briefly discusses the way people use features of different depths to describe drawings, but the space of possible features is not only deep, it is broad. I really want to emphasize this point, that any information-bearing construct can be used as a feature, because it should then be obvious that humans don't have an intrinsic set of features for any category. Sure, there are common ones, ones we can easily draw into consciousness, such as "redness" and "sweetness" for the category of apples, but we also have the remarkable ability to judge an object's possession of spontaneous, arbitrary, and even senseless features (think of the happiness of a fork). What, then, enables us to make that judgement after the fact of having categorized a certain object?
Features of a Category
Conversely, what constitutes a category? In the most general sense, a category is an abstraction: a projection from an infinite-dimensional space onto a lower-dimensional and, more importantly, relevant space. It's ironic that our excellence at generalizing is exactly what hinders the study of generalization: if we could find a dimension onto which humans are incapable of projecting, we would have reached some sort of hardware limit, which we could use as the basis, or atomic unit, of the cognitive process of generalization. Just to exemplify how much we generalize: why do we think a physical object is constant across time? Really, every infinitesimal time step should produce a new instantiation of the object-category, rather than "being" the object itself. The apple I'm holding is not the same apple it was half a second ago, and it will be different from what it will be half a second later. Of course, this is rather extreme, and thinking about it this way serves little practical use, but the point is that we do this so implicitly that maybe categorization, in its most basic form, occurs on a subconscious level, which the conscious mind simply rationalizes after the fact. The schoolchildren's drawing study in Medin is a great example: rather than "learning" the features of a particular category (farm vs. city, etc.), people just latch onto the differentiating features and rationalize them after the fact.
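As a toy rendering of "categorization as projection", the sketch below judges two objects identical only within the subspace a category cares about; everything in it (dimension names, values) is invented for illustration.

```python
# Toy rendering of a category as a projection onto relevant dimensions:
# identity is judged only in the subspace the category cares about.

def project(obj, dims):
    return tuple(obj[d] for d in dims)

# The same apple at two instants; only the time coordinate differs.
apple_t0 = {"red": 0.9, "sweet": 0.8, "age_s": 0.0}
apple_t1 = {"red": 0.9, "sweet": 0.8, "age_s": 0.5}

relevant = ("red", "sweet")  # the "apple" category simply ignores time
print(project(apple_t0, relevant) == project(apple_t1, relevant))
# True: projected onto the relevant dimensions, it's "the same apple"
```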
It’s a Formless Cloud of Relations
Now that I've posited that categories and features can be anything and everything, what next? Well, instead of categories being the vertices and features being the edges of a connective framework, which is how we typically think about them, they should really be equivalent kinds of entities. After all, "red" is as much of a concept as "apple" is, and each can serve as a feature of the other. A great line in Medin: "properties in general are not independent but rather are linked by a variety of interproperty relations." But instead of just properties, it's really all things. Is it possible to know what an object (let's call it A) is without knowing what not-A is? Hence, an object shouldn't really have anything to call its "own"; rather, it is defined by its relationships with other objects. In short, I agree more with what the theory-driven model has to say about categorization, at least in principle. But I'm uncertain whether such a conscious process as theory-forming should be used as the basis for a behaviour that is much more pervasive.
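A minimal sketch of this "cloud of relations" picture, with an invented toy store: "red" and "apple" are nodes of exactly the same kind, and a "feature" is nothing more than a labeled edge between two concepts.

```python
# Toy relational store: concepts are all nodes of the same kind, and a
# "feature" is just a labeled edge between two concepts. "red" is as
# much of a concept as "apple", and each can serve as a feature of the other.

relations = {
    ("apple", "red"):      "typically-has-color",
    ("apple", "sweet"):    "typically-tastes",
    ("red", "apple"):      "is-exemplified-by",
    ("fork", "happiness"): "can-be-ascribed",  # arbitrary, minted on demand
}

def features_of(concept):
    """A concept's 'features' are nothing but its outgoing relations."""
    return {other: rel for (node, other), rel in relations.items() if node == concept}

print(features_of("apple"))  # {'red': ..., 'sweet': ...}
print(features_of("red"))    # 'apple' shows up as a feature of 'red'
```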
For The Sake of Science
I may have arrived at a spiritually satisfying answer regarding what categories and features are, but it suffers from the same flaw that, in my opinion, plagues Tversky's paper: it is too general and all-encompassing, and therefore useless for studying cognition without further constraints. I won't even begin to formalize a definition of similarity from my definition of categories, so let me conclude with some possible constraints. First, (human) cognition seems to be intimately intertwined with perception, such that representation occurs even in the lowest neural circuits (directional preference in retinal neurons, texture sensitivity in fingertips, etc.). This implies that there could be a lower end to the spectrum of abstraction, i.e., we can't really describe some fundamental things, like what red is, aside from the fact that it's not green or yellow. Second, there is an upper bound on the number of objects we can represent, because there is a finite number of cells and connections in our brains. For all practical purposes this number is so big as to be infinite, but it isn't quite. This means that while we can theoretically have an infinite number of relationships between objects, we're constrained by our biological hardware, and one day we may finally be able to do a full accounting of all the relationships that exist between mental representations of objects. And lastly (not because it's actually last, but this paper has to end somewhere), the process of acquiring new representations, or learning, is a physical phenomenon occurring in the molecules and cells of our brains. This last constraint may be the saving grace of the study of categories, as it enables us to say (more) definitively how concepts and relationships are stored in the brain. Obviously, seeing how the constraints are all biological in nature, this discussion is full of the biases of a neuroscience student, but I do believe that uncovering the mechanisms of our behaviour is a leg up from merely characterizing the input-output responses of the black box that is the human brain.