
Conceptual Leaps and Definition Change

In each iteration of the Define/Destroy method, definitions of terms are changed in order to reflect new ideas. At the beginning of an iteration, we may have a definition D1 of a term T, and at the end, we have D2. The reason we change D1 into D2 is to align the documented meaning of T more closely with the ideas we want it to represent (and thus, when we use it, to convey to others). Because the first iteration of Define/Destroy involved taking standard, dictionary definitions of terms, all changes during that and later iterations are due to the need to accommodate innovation. In other words, the definition of some term T (and T might be a neologism – this makes no difference here) needs to convey new idea(s), which is why the standard definition no longer applies.
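The iteration just described can be pictured as a versioned glossary. The sketch below is my own illustration, not an implementation from the original posts; the term and the definitions in it are made up for the example:

```python
# Toy sketch of the definition change each Define/Destroy iteration
# performs: the glossary keeps, per term, the current definition and
# the trail of earlier ones it replaced (D1, then D2, ...).

glossary = {}  # term -> list of definitions, newest last

def define(term: str, definition: str) -> None:
    """Record a (re)definition of a term."""
    glossary.setdefault(term, []).append(definition)

def current(term: str) -> str:
    """The documented meaning of the term as of the latest iteration."""
    return glossary[term][-1]

# First iteration starts from a standard, dictionary definition (D1)...
define("Service", "work done for a customer")
# ...a later iteration replaces it to accommodate innovation (D2).
define("Service", "a configurable bundle of outcomes offered on a marketplace")

print(current("Service"))
print(len(glossary["Service"]))  # two definitions so far: D1 and D2
```

Keeping the superseded definitions around, rather than overwriting them, is the point: the trail from D1 to D2 records why the standard meaning no longer applied.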

What is the mechanism by which that change occurs? That is, what is the mechanism by which we go from one concept to a new (changed) one, or introduce an entirely new concept, and therefore need to move from one definition of it to another?

Yayoi Kusama, Infinity Mirrored Room—The Souls of Millions of Light Years Away, 2013; collection of the artist; The Broad Art Foundation, Los Angeles

In “The Origin of Concepts”, Carey presents a theory of concept development that rests on three theses, summarized below. Such a theory needs to explain the starting point – what one has before developing new concepts – and the processes through which new concepts are added and existing ones changed.

1: The starting point, termed “core cognition”, is the foundation on which one develops concepts. Core cognition is innate in the sense that it is not the output of learning, whereby “learning […] build[s] representations of the world on the basis of computations on input that is itself representational” [1]. The assumption is that core cognition is at least in part shaped by causal relationships between things in the world and representations in the mind, which are selected through evolution; to put it in simpler terms, hopefully without distorting the idea too much, core cognition seems to be determined by mechanisms that connected perception, mental representation, and behavior in ways that resulted in better fitness to the environment, i.e., got selected through survival. I’ll not go into whether this is all there is to core cognition – the topic is complex, and I highly recommend reading Carey’s “The Origin of Concepts”.

2: Assuming there is core cognition, learning is the term used to denote processes for expanding and changing the set of concepts one has. Concepts develop through learning. The problem, then, is of course what the learning mechanisms are, and what limits learning has: can we learn what we cannot represent? For the purposes here, I’ll consider that all learning is hypothesis testing of some kind. If we learn by hypothesis testing, then we have to have the representations in order to pose hypotheses to test; this is called the “continuity thesis”: “The continuity thesis is that all the representational structures and inferential capacities that underlie adult belief systems either are present throughout development or arise through processes such as maturation.” [1] Carey argues that there are learning mechanisms that generate conceptual outputs that are more powerful (more expressive) than the inputs; and if this is true, then it is possible to pose hypotheses over concepts that are not representable via the vocabulary one has at the outset of a learning episode. Such changes, where the output is more expressive than the inputs, are called discontinuities in conceptual development. Carey takes the position that these are possible, contra the continuity thesis (and she provides empirical evidence supporting that position).

3: If there is core cognition, and conceptual leaps (discontinuities in conceptual development) are possible, then it is necessary to explain how leaps occur. Carey uses the term “bootstrapping” for the mechanism(s) that result in conceptual leaps. I summarize below the key steps in her explanation of bootstrapping that leads a child to learn the numeral list representation (1, and then n+1 for successor integers) from knowing “‘one, two, three, four, five…’ as a list of meaningless lexical items”.

  1. “The problem of how the child builds a numeral list representation decomposes into the related subproblems of learning the ordered list itself (‘one, two, three, four, five, six…’), learning the meaning of each symbol on the list (e.g., “three” means three and “seven” means seven), and learning how the list itself represents number, such that the child can infer the meaning of a newly mastered numeral symbol (e.g., ‘eleven’) from its position in the numeral list.”
  2. “English-learning children first learn the distinction between singular and plural – ‘a block’ versus ‘some blocks’ – by noting that ‘a block’ applies when the speaker is referring to an array that contains only a single individual, whereas “some blocks” applies when the speaker is referring to an array that contains more than one individual.”
  3. “Second, children learn that the word ‘one’ is a quantifier that picks out individuals by noting that it applies in the same situations as “a,” which they have already learned.”
  4. “Children also learn that the other number words are quantifiers that pick out sets of individuals, by noting that they apply in the same situations as ‘some,’ always in the presence of the plural marker.”
  5. “[C]hildren learn that the word ‘two’ applies only to a subset of plural representations: those in which the speaker is referring to an array that can be put into 1–1 correspondence with a set that contains two individual files.”
  6. “Later, children create a model in long-term memory that analogously supports the meaning of ‘three,’ again supported by whatever processes allow children to construct meanings for trial markers in languages that have them. Finally […] they also create a long-term memory model to support the meaning of ‘four.’ Note that up through this step, the count list and the counting routine play no role in the process of children’s constructing numerical interpretations of the first four numerals.”
  7. “Independently of the above steps (though perhaps concurrently with them), children learn the count sequence as a meaningless routine. They note the identity of the words ‘one,’ ‘two,’ ‘three,’ and ‘four,’ which now have numerical meaning, and the first words in the otherwise meaningless counting list.”
  8. “At this point, the stage is set for the crucial induction. The child must notice an analogy between next in the numeral list and next in the series of models ({i}, {j k}, {m n o}, {w x y z}) related by adding an individual. Remember, core cognition supports the comparison of two sets simultaneously held in memory on the basis of 1-1 correspondence, so the child has the capacity to represent this latter basis of ordering sets. This analogy licenses the crucial induction: if ‘x’ is followed by ‘y’ in the counting sequence, adding an individual to a set with cardinal value x results in a set with cardinal value y. This generalization does not yet embody the arithmetic successor function, but one additional step is all that is needed. Since the child has already mapped single individuals onto ‘one,’ adding a new individual is equivalent to adding one.”
  9. “I have argued here that the numeral list representation of number is a representational resource with power that transcends any single representational system available to prelinguistic infants. When the child, at around age [3 and 1/2], has mastered how the count sequence represents number, he or she can in principle precisely represent any positive integer. Before that, he or she has only the quantificational resources of natural languages, parallel individuation representations that implicitly represent small numbers, and analog magnitude representations that provide approximate representations of the cardinal values of sets.”
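The steps above can be condensed into a toy sketch. This is my own illustration, not Carey’s formalism: the child holds a memorized count list plus set-based models for the first few numerals, notices that “next word in the list” lines up with “add one individual to the set”, and generalizes that alignment into the successor rule, after which a numeral’s meaning follows from its position alone.

```python
# The memorized count list (step 7): initially a meaningless routine.
count_list = ["one", "two", "three", "four", "five", "six", "seven"]

# Long-term memory models for the first numerals (steps 5-6): each word
# is paired with a model set; the word applies to any array that can be
# put in 1-1 correspondence with its model.
models = {
    "one": {"i"},
    "two": {"j", "k"},
    "three": {"m", "n", "o"},
    "four": {"w", "x", "y", "z"},
}

def next_in_list_adds_one(count_list, models):
    """The analogy that licenses the induction (step 8): for every
    adjacent pair of words with known models, the later model has
    exactly one more individual than the earlier one."""
    known = [w for w in count_list if w in models]
    return all(
        len(models[b]) == len(models[a]) + 1
        for a, b in zip(known, known[1:])
    )

def cardinal_value(word, count_list):
    """After the induction (step 9), a numeral's meaning follows from
    its position in the list alone: 'one' means 1, and each successive
    word means one more than its predecessor."""
    return count_list.index(word) + 1

if next_in_list_adds_one(count_list, models):
    # 'seven' has no stored model, yet its position yields its meaning.
    print(cardinal_value("seven", count_list))  # prints 7
```

The point of the sketch is the discontinuity: `models` only reaches four, but once the analogy holds, `cardinal_value` is defined for every word on the list, and for any list extension.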

Not all concepts form in the same way, and that realization is important here because it relates the Define/Destroy method and Plastic Definitions to Carey’s general explanation of the mechanisms that produce conceptual leaps.

“Some concepts, such as object and number, arise in some form over evolutionary time. Other concepts, such as kayak, fraction, and gene, spring from human cultures, and the construction process must be understood in terms of both human individuals’ learning mechanisms and sociocultural processes. Humans create complex artifacts, as well as religious, political, and scientific institutions, that themselves become part of the process by which further representational resources are created.” [1]

Define/Destroy, with Plastic Definitions that it shapes, is intended as one among many means to stimulate concept development of the second type: new concepts formed quickly, thrown away as fast, and replaced, until they are stable enough to be useful; eventually, if innovations they support are successful, the new concepts may become part of the cultural stock on which new ones develop.

Although the case of learning integers above is interesting, it does not say much about the broad problem of where hypotheses to test come from; how are hypotheses formed when learning, or when developing new concepts?

“[I]n cases of developmental discontinuity, the learner does not initially have the representational resources to state the hypotheses that will be tested, to represent the variables that could be associated or could be input to a Bayesian learning algorithm. […] bootstrapping is one learning process that can create new representational machinery, new concepts that articulate hypotheses previously unstatable. In […] bootstrapping episodes, mental symbols are established that correspond to newly coined or newly learned explicit symbols. These are initially placeholders, getting whatever meaning they have from their interrelations with other explicit symbols. As is true of all word learning, newly learned symbols must necessarily be initially interpreted in terms of concepts already available. But at the onset of a bootstrapping episode, these interpretations are only partial – the learner (child or scientist) does not yet have the capacity to formulate the concepts the symbols will come to express. The bootstrapping process involves modeling the phenomena in the domain, represented in terms of whatever concepts the child or scientist has available, in terms of the set of interrelated symbols in the placeholder structure. Both structures provide constraints, some only implicit and instantiated in the computations defined over the representations. These constraints are respected as much as possible in the course of the modeling activities, which include analogy construction and monitoring, limiting case analyses, thought experiments, and inductive inference.” [2]

Besides children being somewhat like scientists and the other way around, there are two important ideas to take away: one, new concepts have to relate to old ones, without being remixes of them; and two, new concepts do develop. It is hard to escape what you already know, but you still can.

The new concepts, those that play a role in explaining your inventions in innovation, come from your work on the placeholders. The placeholders, in turn, come from distinctions: differences between what you observe and what you think does or should exist. Simplifying: there is a concept of an apple with seeds, and you want one without, so there is a placeholder for a seedless apple, waiting to be thought out further – which other properties, beyond the absence of seeds (the difference from the observed), do you want? The further you think it through, the more substance the placeholder has, so to speak, and the less it is a placeholder. If something you know is not how you think things should be, then creating a placeholder is about deciding what exactly should be different, so that things would turn out how you think they should.
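The seedless-apple example can be put in code. The sketch below is my own illustration, under the assumption that a concept is usefully approximated as a named bundle of properties: a placeholder starts as an existing concept plus the one difference you care about, and gains substance as further properties are decided.

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """A concept approximated as a name plus a bundle of properties."""
    name: str
    properties: dict = field(default_factory=dict)

def make_placeholder(base: Concept, name: str, difference: dict) -> Concept:
    """A placeholder: the base concept with the observed difference
    applied; everything else is inherited and still open to revision."""
    props = dict(base.properties)
    props.update(difference)
    return Concept(name, props)

# The observed concept.
apple = Concept("apple", {"has_seeds": True, "color": "red"})

# The difference from the observed: no seeds. Nothing else is decided yet.
seedless = make_placeholder(apple, "seedless apple", {"has_seeds": False})

# Thinking it through adds substance; the more is decided, the less of
# a placeholder it is.
seedless.properties["shelf_life_days"] = 30
```

The design choice worth noting is that the placeholder is built *from* the old concept: it cannot be stated except in terms of what is already known, which is exactly the constraint the quoted passage describes.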

“The process of construction involved positing placeholder structures and modeling processes that aligned the placeholders with the new phenomena. In all three cases, this process took years. For Kepler, the hypothesis that the sun was somehow causing the motion of the planets was a placeholder until the analogies with light and magnetism allowed him to formulate the concept vis motrix. For Darwin, the source analogies were artificial selection and Malthus’s analysis of the implications of a population explosion for the earth’s capacity to sustain human beings. For Maxwell, a much more elaborate placeholder structure was given by the mathematics of Newtonian forces in a fluid medium. These placeholders were formulated in external symbols: natural language, mathematical language, and diagrams.” [1]

How does this relate to Define/Destroy, and Plastic Definitions it shapes? 

Firstly, innovation, being a process, must have a starting point. We can see that starting point by analogy to core cognition: a basis that will both enable and limit possible outcomes. This is only an analogy, since core cognition is about innate mechanisms, whereas an innovation process starts from a social, technological, and economic setup (to name some aspects of context, broadly speaking). The analogy is there to highlight that innovation will be influenced by its conceptual foundations, if you will, which are determined by the people involved – their knowledge, assumptions, culture, and abilities, in a very broad sense.

Secondly, invention is another name for the outcomes of discontinuity in concept development. Innovation can be seen as a learning process, conditioned by its starting point, as mentioned above, and by the ability to make and fill placeholders – for concepts and for relationships between concepts.

Thirdly, where do placeholders come from? They come from differences that participants in and carriers of innovation are able to identify, and from their ability to hypothesize about the causes and consequences of these differences. In Define/Destroy, to name placeholders is to have terms whose definitions do not match perception and ideas, and to define or redefine terms is to work to fill placeholders in ways that achieve some coherence of the conceptual framework reflected by the glossary of terms, some terms being about concepts, others about relationships between these concepts.

Fourthly, it is encouraging to see, at least from Carey’s work, that cognitive science theories of the origin and development of concepts allow for new concepts not to equate to remixes of old. There is, as anyone who does innovation wants to believe, room for true conceptual leaps, as witnessed by significant developments of complex conceptual frameworks in mathematics, natural sciences, and other disciplines. 

Finally, there are and always will be some concepts that reflect core cognition or come from long-term evolutionary processes; these have very broad applicability (e.g., “object”), and their definitions are stable, or change infrequently (although this does not mean that they have a single, universally agreed meaning). Others may be newer and less stable. And then we have the most unstable ones, local to a delimited context (e.g., {Service} in the business services marketplace example), which Plastic Definitions are for, and which Define/Destroy is used to rapidly stabilize.

References

  1. Susan Carey. “Précis of the origin of concepts”. In: Behavioral and Brain Sciences 34.3 (2011), pp. 113–124.
  2. Susan Carey. The origin of concepts. Oxford University Press, 2009.
