
What Is an Explanation?

People have spent centuries trying to build good explanations, and trying to distinguish good ones from bad. While there is no consensus on what an “explanation” is (always and everywhere), it is worth knowing what good explanations tend to have in common. It helps you develop a taste in explanations, which is certainly useful given how frequently you may need to explain something, and how often others offer explanations to you.

Rembrandt – The Anatomy Lesson of Dr Nicolaes Tulp (source: Wikipedia)

When you explain something that happened, “you give people reasons for it, especially in an attempt to justify it” [1]; to explain is to “make plain, manifest, or intelligible; to clear of obscurity; to illustrate the meaning of” [2].

The purpose of this short text is to help develop a taste in explanations, or at the very least help you tell good explanations from bad ones. Explanations are so common that it would be regrettable if you could not tell the difference.

In Marvel’s 2016 movie “Captain America: Civil War” [12], a band of superheroes is asked to accept United Nations oversight of their activities; as they discuss the possible implications, one of them, Vision, offers an explanation of why such oversight may have become an idea at all [13]:

Vision: In the eight years since Mr. Stark announced himself as Iron Man, the number of known enhanced persons [i.e., same as ‘superheroes’ for all practical purposes] has grown exponentially. And during the same period, the number of potentially world-ending events has risen at a commensurate rate.

Steve Rogers: Are you saying it’s our fault?

Vision: I’m saying there may be a causality. Our very strength invites challenge. Challenge incites conflict. And conflict . . . breeds catastrophe. Oversight . . . oversight is not an idea that can be dismissed out of hand.

In the more specialized setting of the philosophy of science, there is no unifying or dominant account of what constitutes a scientific explanation. It is useful, as a matter of context, to briefly mention below oft-cited criteria for what is needed for something to be called a scientific explanation; you can see these as aspirational, as everyday explanations rarely satisfy them.

Deductive-Nomological Model of Scientific Explanation

Although oft-cited and much discussed, the Deductive-Nomological Model of scientific explanation remains central [3] (DNM below; a.k.a. the “Hempel–Oppenheim model” and the “Covering Law Model” [4]).

“We divide an explanation into two major constituents, the explanandum and the explanans. By explanandum, we understand the sentence describing the phenomenon to be explained (not the phenomenon itself); by the explanans, the class of those sentences which are adduced to account for the phenomenon. […] [The] explanans falls into two subclasses; one of these contains certain sentences C1, C2, . . . , Ck which state specific antecedent conditions; the other is a set of sentences L1, L2, . . . , Lr which represent general laws.”

An explanation must be a sound deductive argument in which at least one of the premises has the status of a law; this latter condition distinguishes explanations from other sound deductive arguments (such as those resting on an accidentally true generalization). Here is some of what Hempel and Oppenheim say about laws:

“The explanans must contain general laws, and these must actually be required for the derivation of the explanandum. […] The explanans must have empirical content; i.e., it must be capable, at least in principle, of test by experiment or observation.”
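Schematically, Hempel and Oppenheim’s division can be rendered as a deductive argument, in the standard textbook presentation of DNM, with the explanans above the line and the explanandum below it:

```latex
\[
\begin{array}{ll}
C_1, C_2, \ldots, C_k & \text{(antecedent conditions)} \\
L_1, L_2, \ldots, L_r & \text{(general laws)} \\
\hline
E & \text{(explanandum)}
\end{array}
\]
```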

While DNM may remain relevant among models of scientific explanation for observable physical, chemical, and biological phenomena, many explanations we have to deal with elsewhere will not satisfy its criteria.

The difficulty is not with the logical form of explanations for, say, movements of stock prices, changes in the structure of or relationships within social groups, or decisions that people take; these can all be formulated as deductive arguments. Instead, the problem lies in the difficulty of meeting the following two conditions (themselves imprecisely stated as well):

  1. The explanans must include statements which represent general laws of the sort we find in physics, that is, “laws of nature” [5]; this brings into play hard questions about what qualifies as a law of nature (and when and why qualifying anything as such makes sense). Laws of this kind appeal to the idea of non-accidental regularity.
  2. Since an explanation must be a sound deductive argument, its premises must be true, and in turn, their truth must be established from available empirical evidence.

In short, a scientific explanation has to have a certain logical form, its premises must include something that represents a “law of nature”, and there must be empirical evidence for the truth of its premises. An explanation which does all this will, at the same time, also act as a prediction: whenever the premises are true, the conclusion must also be true, and so whenever we observe phenomena that confirm all the premises, we should also observe the phenomenon that the conclusion represents.

An explanation is of something, for someone: it exists to provide someone with an understanding of something. The term “explanation” is, in that sense, pragmatic. De Regt and Dieks emphasized the role of explanation for understanding as follows [6]:

“Understanding is an inextricable element of the aims of science. As another illustration of this claim, and as a further step towards a characterisation of scientific understanding, contrast a scientific theory with a hypothetical oracle whose pronouncements always prove true. In the latter case empirical adequacy would be ensured, but we would not speak of a great scientific success (and perhaps not even of science tout court) because there is no understanding of how the perfect predictions were brought about. An oracle is nothing but a ‘black box’ that produces seemingly arbitrary predictions. Scientists want more: in addition they want insight, and therefore they need to open the black box and consider the theory that generates the predictions. Whatever this theory looks like, it should not be merely another black box producing the empirically adequate descriptions and predictions (on pain of infinite regress). In contrast to an oracle, a scientific theory should be intelligible: we want to be able to grasp how the predictions are generated, and to develop a feeling for the consequences the theory has in concrete situations.”

Three subsequent accounts have received particular attention: the causal-mechanical model, the unificationist theory of explanation, and the contextual theory of scientific understanding. They are outlined below.

Causal-Mechanical Model

According to the causal-mechanical model [11,7,8], explanation works by providing an account of how some events cause others that we are interested in.

“Mechanisms are entities and activities organized such that they are productive of regular changes from start or set-up to finish or termination conditions. For example, in the mechanism of chemical neurotransmission, a presynaptic neuron transmits a signal to a post-synaptic neuron by releasing neurotransmitter molecules that diffuse across the synaptic cleft, bind to receptors, and so depolarize the post-synaptic cell. In the mechanism of DNA replication, the DNA double helix unwinds, exposing slightly charged bases to which complementary bases bond, producing, after several more stages, two duplicate helices. Descriptions of mechanisms show how the termination conditions are produced by the set-up conditions and intermediate stages. To give a description of a mechanism for a phenomenon is to explain that phenomenon, i.e., to explain how it was produced.”

You can think of a mechanistic explanation as a model of a system, the model representing the system’s behavior, and that behavior generating the phenomenon being explained. Abusing the idea somewhat, it is as if the explanation were a specification (in the computer science sense) of that behavior. This turn towards seeing mechanisms as systems, as something dynamic that generates the explained phenomena, is apparently a novelty of the last few decades in the philosophy of science [14]. The following alternative definition of mechanism highlights that to explain is to specify the system generating the behavior.

“A mechanism for a behavior is a complex system that produces that behavior by the interaction of a number of parts, where the interactions between parts can be characterized by direct, invariant, change-relating generalizations.” [15]

Causality and inference (as in the deductive argument of DNM) are fundamentally different kinds of relationships, not least because the latter holds between representations while the former holds between phenomena. We can describe causal mechanisms using deductive arguments, but describing a sequence of events in a deductive argument does not make those events causally related.
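Pushing the specification analogy a little further, a mechanism in the sense quoted above can be sketched as a system that transforms set-up conditions into termination conditions through an ordered sequence of stages. The sketch below uses the neurotransmission example from the quote; all function and variable names are invented for illustration and come from no cited source:

```python
# Illustrative sketch: a mechanism as set-up conditions transformed into
# termination conditions by an ordered sequence of stages (activities).
# The stage names below are invented, loosely following the
# neurotransmission example.

def release_neurotransmitter(state):
    """Presynaptic neuron releases molecules into the synaptic cleft."""
    state["cleft_molecules"] = state.pop("vesicle_molecules")
    return state

def bind_to_receptors(state):
    """Molecules diffuse across the cleft and bind to receptors."""
    state["bound_receptors"] = state.pop("cleft_molecules")
    return state

def depolarize(state):
    """Bound receptors depolarize the post-synaptic cell."""
    state["postsynaptic_depolarized"] = state.pop("bound_receptors") > 0
    return state

def run_mechanism(setup, stages):
    """Trace how the stages produce the termination conditions from the
    set-up conditions; on the mechanistic view, that trace is the
    explanation of the terminating phenomenon."""
    state = dict(setup)
    for stage in stages:
        state = stage(state)
    return state

setup = {"vesicle_molecules": 1000}
stages = [release_neurotransmitter, bind_to_receptors, depolarize]
print(run_mechanism(setup, stages))  # {'postsynaptic_depolarized': True}
```

The point of the sketch is only structural: a mechanistic explanation names the parts, the activities, and their order, so that the termination condition is seen to be produced rather than merely deduced.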

Unificationist Theory of Explanation

According to the unificationist theory of explanation [9,10], scientific explanation helps us understand something by removing unnecessary phenomena, identifying the necessary and sufficient ones, and describing the relationships between them. It is by simplifying, by reducing to fewer parameters and their interactions, all the while preserving broad applicability, that scientific explanations play a special role in helping us understand what we experience.

Contextual Theory of Scientific Understanding

De Regt and Dieks’ contextual theory of scientific understanding [6] is based on the thesis that explanation serves understanding. In turn, they argued, understanding an explanation requires that the explanation is intelligible. They require a scientific explanation to satisfy two criteria:

  1. Criterion for Understanding Phenomena (CUP): “A phenomenon P can be understood if a theory T of P exists that is intelligible (and meets the usual logical, methodological and empirical requirements).”
  2. Criterion for the Intelligibility of Theories (CIT): “A scientific theory T is intelligible for scientists (in context C) if they can recognise qualitatively characteristic consequences of T without performing exact calculations.”

The term “theory” in CUP and CIT is, for all purposes here, equivalent to the term “explanation”. CUP says that something we want to explain, some phenomenon P, can be understood if there is an explanation of it and that explanation is intelligible; it qualifies as a scientific explanation if it satisfies the “usual logical, methodological and empirical requirements” of science (e.g., there is a repeatable procedure for collecting evidence, measuring outcomes, controlling for various factors, and so on, as current scientific methods in specific fields demand). CUP does not require a particular format for an explanation: it may, but need not, be a deductive argument as in DNM, for example. CIT then says when an explanation is intelligible, by appealing to one’s ability to recognize or predict the consequences of observing things as the explanation describes them. The idea behind “recognise qualitatively characteristic consequences” is that an explanation is intelligible to you if you can recognize, or more strongly anticipate, the consequences of observing or assuming the premises of the explanation to be true, without having to go through the exact logical and mathematical calculations that would have achieved the same in more detail. You could, in other words, do all the calculations, but, De Regt and Dieks insist, an explanation should not work only by telling you what to compute; it should enable you to relate what it says to what you experience.

Desirable Properties of an Explanation

The following conditions for explanation were mentioned in the four theories or models of scientific explanation cited earlier:

  • Logical form: An explanation’s logical form is that of an argument; it is a pair of one or more propositions, called the explanans (the premises of the argument), and another proposition, called the explanandum (the conclusion of the argument).
  • Law-like premises: One or more premises in the explanation must state natural laws.
  • Empirical evidence: There must be empirical evidence for the truth of the premises in an explanation.
  • Causality: Explanations refer to causal relationships between the phenomena that the premises describe and the phenomenon that the conclusion describes and which is being explained.
  • Minimality: Premises should be necessary and sufficient to explain (in the sense above) the phenomenon.
  • Unification: As much as possible, an explanation should refer to and build on existing explanations; this may be by relating the same primitives, reusing premises, and so on.
  • Intelligibility: One’s understanding of an explanation depends on what one knows and assumes when consuming it, what one wants to do having learned it, and the various other factors which determine context. An explanation needs to be formulated in a way which accounts for the context in which it will be used; it will be considered intelligible if those who use it can, when they recognize the phenomena described in the premises, also anticipate and recognize the phenomena that the explanandum is about.

When others give you explanations, knowing the above helps you see how rough those explanations may be; it can help you evaluate just how much confidence to have in them, and therefore how knowing them should affect what you think and do. Why not, then, attempt to meet all of the above when providing any explanation?


  1. “Explanation”, Collins Dictionary, https://www.collinsdictionary.com/dictionary/english/explanation
  2. “Explain”, Wiktionary, https://en.wiktionary.org/wiki/explain
  3. Carl G Hempel and Paul Oppenheim. “Studies in the Logic of Explanation”. In: Philosophy of science 15.2 (1948), pp. 135–175.
  4. Fred Eidlin. “The deductive-nomological model of explanation”. In: Encyclopedia of Case Study Research, SAGE Publications (2010).
  5. John W. Carroll. “Laws of Nature”. In: The Stanford Encyclopedia of Philosophy. Ed. by Edward N. Zalta. Fall 2016. Metaphysics Research Lab, Stanford University, 2016.
  6. Henk W De Regt and Dennis Dieks. “A contextual approach to scientific understanding”. In: Synthese 144.1 (2005), pp. 137–170.
  7. Wesley C Salmon. Scientific explanation and the causal structure of the world. Princeton University Press, 1984.
  8. James Woodward. Making things happen: A theory of causal explanation. Oxford university press, 2005.
  9. Michael Friedman. “Explanation and scientific understanding”. In: The Journal of Philosophy 71.1 (1974), pp. 5–19.
  10. Philip Kitcher. “Explanatory unification”. In: Philosophy of science 48.4 (1981), pp. 507–531.
  11. Peter Machamer, Lindley Darden, and Carl F. Craver. “Thinking about mechanisms”. In: Philosophy of Science 67.1 (2000), pp. 1–25.
  12. Marvel, “Captain America: Civil War”, 2016, https://www.imdb.com/title/tt3498820/
  13. Transcripts Wiki, “Captain America: Civil War” Fan Transcript, https://transcripts.fandom.com/wiki/Captain_America:_Civil_War
  14. Laura Felline. “Mechanisms meet structural explanation”. In: Synthese 195.1 (2018), pp. 99–114.
  15. Stuart Glennan. “Rethinking mechanistic explanation”. In: Philosophy of Science 69.S3 (2002), pp. S342–S353.
