
CADDIS Volume 1

About Causal Assessment

A Conceptual and Historical Explanation of Our Causal Approach

Causation is a difficult and controversial concept. Thus, any causal methodology needs a strong conceptual foundation to be useful and defensible. This page summarizes the conceptual basis for CADDIS, drawing largely on the reviews presented in the third and fourth tabs of this page. The Causal Concepts tab summarizes how scientists and philosophers have described causation; those concepts are discussed further on the Causal History tab. Links to relevant sections and glossary definitions are provided in the Related Information box.

CADDIS users should read this to:
  • Better understand why the CADDIS method is designed as it is.
  • Understand the background well enough to modify the method for their own scenarios.
  • Learn about alternative causal analysis methods proposed by others.

The Basis for CADDIS

The Issue

How can environmental assessors and managers determine the causes of environmental impairments? This question is difficult to answer for several reasons:
  • Ecosystems are complex and environmental evidence is diverse. However, most fundamental causal analysis work has addressed simple situations with only one candidate cause.
  • Most published causal analysis methods deal with general rather than specific causation. For example, these methods might evaluate if silt can decrease caddisflies, instead of whether silt caused decreased caddisflies at a specific site. CADDIS addresses causation for specific, individual cases.
  • No single method can accommodate the range and diversity of evidence often available.

A Philosophical System

Our strategy draws upon pragmatism, a general system of philosophy developed by Charles Sanders Peirce and his followers, William James and John Dewey. Pragmatism is based on the premise that thinking is for doing: logic should lead to the action that results in the desired outcome.

Peirce added a third type of inference, abduction, to deduction and induction (Hacking 2001, Josephson and Josephson 1996). Abductive inference identifies the hypothesis that best explains available information. In CADDIS, abduction identifies the candidate causes that best account for observed effects.

Peirce also incorporated deduction and induction into the process. Abductive inferences should be followed by deduction of the consequences of acting on the abduction. Induction from subsequent observations should then be used to support or refute the deductive predictions.

For example, we may determine that an effluent is the likely cause of an impairment using abduction. We then can deduce that eliminating the effluent would result in biotic recovery. Environmental monitoring could then be used inductively to infer that effluent was indeed the cause. Effectively, this is adaptive management (Holling 1978, Walters 1986).

Peirce believed that no method reliably delivers truth, but science can provide useful approximations of truth. Our general approach to causal assessment is pragmatic: it connects logic with action.

Causal Metaphysics

We began by accepting causation as a logically primitive concept (i.e., it is accepted without proof or derivation). This is justified pragmatically because we could accomplish nothing without a causal relationship to be manipulated.

It is also justified by the recognition that humans inherently think causally, without need of experience, logic, or training. This recognition was formalized by Hume and Kant. Ruse presented causal explanation as an epigenetic rule; according to Ruse, causal thinking is true in the sense that causally thinking ancestors had a selective advantage.



An Approach to Causal Inference

Our approach is an example of causal pluralism. We accept multiple concepts of causation, all relevant evidence, and a variety of methods for turning data into evidence.

Comparison of Candidate Causes

We can never prove, and can seldom disprove, a cause. However, we can apply abductive inference to determine which cause is best supported by the evidence. After defining the case (Step 1), we list the plausible candidate causes (Step 2).

Weighing of Evidence

We believe that all relevant evidence should be considered. Evidence comes from diverse sources of information. Common sources include site observations, regional monitoring studies, environmental manipulations, laboratory experiments, and general scientific knowledge. Information may come from the literature or may be generated ad hoc. Evidence may in turn be generated from information by various methods, including interpretation of reported observations, summary statistics, and statistical and mathematical modeling.

The modern tradition of weighing causation evidence is based on Hill's “criteria.” For transparency and consistency, CADDIS adopts the scoring system developed by Susser (and introduced to ecologists by Fox).

Weighing evidence requires that evidence be categorized. CADDIS uses 17 types of evidence from the site (Step 3) and from elsewhere (Step 4). Scores represent evidence relevance and quality, based on different outcomes for each type of evidence.

Rejection

Like Popper, we recognize that one can more confidently eliminate than accept a causal hypothesis. For example, suppose an effect occurs downstream of a source. This provides weak supporting evidence for emissions from that source as a cause. However, if the effect occurs upstream of the source, that candidate cause can be rejected with confidence.

Therefore, we begin weighing evidence by rejecting as many causes as possible. Rejection requires evidence from the site (Step 3) and is expressed as an R score in the Strength of Evidence tables. An R score is sufficient to negate all positive evidence for a candidate cause. However, we can never reject all but one candidate cause.

Diagnosis

Diagnosis is the determination of a cause based on characteristics of the effects. These characteristics might be symptoms such as lesions or eroded fins, or chemical accumulation in organs. Pathologists often use diagnosis to investigate fish and wildlife kills. Community-level diagnostics have been attempted in ecological research. However, to date, their application and reliability have been very limited. In CADDIS, diagnosis is treated as an extreme case of symptomatic evidence. It is given a D score. A high-quality diagnosis is sufficient to negate all other evidence for a candidate cause.
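
To make these rules concrete, the sketch below shows one way typed evidence scores might be combined for competing candidate causes. It is a minimal illustration in Python: the score symbols, weights, candidate causes, and scores are all hypothetical, not the official CADDIS scoring tables.

    # Hypothetical scoring sketch: "R" negates all positive evidence for a
    # cause, a high-quality diagnosis "D" is decisive, and remaining symbols
    # are tallied. Illustrative only; not the CADDIS Strength of Evidence tables.
    WEIGHTS = {"++": 2, "+": 1, "0": 0, "-": -1, "--": -2}

    def evaluate(scores):
        if "R" in scores:              # site evidence refutes the cause
            return "rejected"
        if "D" in scores:              # diagnostic symptoms identify the cause
            return "diagnosed"
        return "net score {:+d}".format(sum(WEIGHTS[s] for s in scores))

    evidence = {                       # hypothetical evidence scores by cause
        "copper": ["+", "++", "+", "-"],
        "elevated temperature": ["+", "++", "R"],
        "organophosphate insecticide": ["D"],
    }
    for cause, scores in evidence.items():
        print(cause, "->", evaluate(scores))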

Synthesis of Evidence

After evaluating each candidate cause for all available types of evidence, assessors must compare candidate causes. In many cases, one candidate cause will clearly be more consistent with the evidence. If not, assessors should consider potential sources of uncertainty. These sources include lack of data, poor data quality, poorly defined impairments, and multiple causes. In difficult cases, condensing types of evidence to a few characteristics may be helpful (Step 5). If assessors have identified the most likely cause(s) with sufficient confidence, remedial actions may be taken.

Summary

The CADDIS approach to causal inference involves comparison of alternate candidate causes. These causes are evaluated to determine which is best supported by the totality of evidence. This standardized process provides transparency and reduces inferential errors, without restricting the types of evidence used.


Causal Concepts

Causation is an ambiguous and contentious concept that is important to philosophers and to pure and applied scientists. As a result, causation has its own terminology, including arcane terms as well as common terms that are used in uncommon ways. These terms convey important concepts that should be considered when developing a method for causal analysis. This section explains how we view these concepts and how we address them within the CADDIS methodology (“we” are the three EPA developers of the inferential approach in CADDIS). We define each term, explain our position concerning the concept, and provide a historical background for the concept. Throughout this volume, causes and effects, as logical propositions or statistical parameters, are represented as C and E, respectively.

Agent Causation

Definition

Agent causation is the concept that only “things” have the power to change the world (i.e., to be causes). Agent causation requires the specification of a “thing” that caused the effect. For example, the brick broke the window. Modern versions of agent causation are largely limited to purposeful agents that are self-directed (teleology). The brick did not break the window; the person throwing it did. Agent causation has been largely supplanted by event causation.

Our Position

We believe that the agent/event dichotomy in causal philosophy is analogous to the structure/function dichotomy in biology or the particle/wave dichotomy in quantum physics. They are different ways of looking at a phenomenon. Causal hypotheses (candidate causes) are often defined as agents (e.g., cadmium), but an event is at least implicit and should be described (e.g., aqueous exposure to cadmium) as well. This is because much of the logic of causality (e.g., time order and directionality) depends on describing causation as a relationship between events.

In addition, the same agent may be the cause of opposite effects depending on whether it is removed (decreasing) or applied (increasing), which are different events. However, the agent/event distinction may blur. For example, we may say that a storm caused benthic invertebrates to be scoured from the stream. The “storm” may be interpreted as an agent (perhaps even a named tropical storm) or as an event (the occurrence of a certain rate of precipitation for a certain duration). This definition of a cause as both an agent and an event or process has been termed dualistic causation (Campaner and Galavotti 2007). CADDIS assessments may define causes either way, depending on which is clearest and most natural to the case. In general, however, causes in CADDIS should be defined as agents that participated in defined events.

Background

Aristotle and other ancient philosophers were concerned primarily with the nature of the agents that caused effects. The requirement that an agent be specified when defining a cause became an important issue in science when Newton proposed his theory of gravitation without specifying an agent that caused the force. Leibniz considered this a fatal flaw in the theory, but Newton famously refused to frame hypotheses. Currently, agent causation as a philosophy of science is largely limited to psychology. If humans have free will, they cause things to happen by acting as free agents—not as part of a sequence of causal events (Botham 2008, O'Connor 1995).



Analogy

Definition

Similar causes have similar effects. For example, if the impairment of concern involves a large and rapid decline in the abundance of aquatic insects, and if insecticides have been found to cause similar declines in other cases, then by analogy, that evidence supports an insecticide as the cause.

Our Position

Although analogy is potentially a useful method for generating evidence, it has seldom been used in our case studies. However, analogies can be used to identify candidate causes or to support a candidate cause. Analogies begin with a well-defined causal relationship that serves as a model (Cm caused Em in conditions Xm). If a similar effect Es occurs in a case with similar circumstances Xs, then, by analogy, anything similar to Cm is supported as the cause in that case.

Background

Literary analogies are at least as old as written literature, and some of these have been causal. The most famous use of analogy in scientific causation is Darwin's analogy between selection by animal breeders and the processes that have caused the evolution of life (i.e., artificial and natural selection). Analogy appears as one of Hill's (1965) criteria for causation in epidemiology. However, it has been sharply criticized: “Whatever insight might be derived from analogy is handicapped by the inventive imagination of scientists who can find analogies everywhere. At best, analogy provides a source of more elaborate hypotheses about the association under study; absence of analogies only reflects lack of imagination or lack of evidence” (Rothman and Greenland 1998). However, analogy has been formalized by various means in the field of artificial intelligence, where it is referred to as case-based reasoning. Case-based reasoning uses the following general process (sketched in code after the list):
  1. Retrieve the most similar case(s) by comparing the new case to the library of past cases
  2. Use the retrieved case to try to solve the current problem
  3. Revise and adapt the proposed solution if necessary
  4. Retain the final solution in the library of cases
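
The following minimal Python sketch illustrates that retrieve-reuse-revise-retain cycle. The case features, similarity measure, and case library are hypothetical; a real case-based reasoner would use much richer case descriptions and domain-specific adaptation.

    from dataclasses import dataclass

    @dataclass
    class Case:
        features: dict   # e.g., {"decline_speed": "rapid", "taxa": "insects"}
        cause: str       # the cause identified in the past case

    def similarity(a, b):
        """Count matching feature values (a deliberately crude measure)."""
        return sum(1 for k in a if b.get(k) == a[k])

    library = [
        Case({"decline_speed": "rapid", "taxa": "insects"}, "insecticide"),
        Case({"decline_speed": "gradual", "taxa": "insects"}, "siltation"),
    ]

    def solve(new_features):
        best = max(library, key=lambda c: similarity(new_features, c.features))  # 1. retrieve
        proposed = best.cause                                                    # 2. reuse
        # 3. revise: an assessor would adapt the proposed cause to the new case here
        library.append(Case(new_features, proposed))                             # 4. retain
        return proposed

    print(solve({"decline_speed": "rapid", "taxa": "insects"}))  # -> insecticide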

Examples include diagnostic systems that retrieve past medical cases with similar symptoms and assessment systems that determine the values of variables by searching for similar implementations of a model.



Associationist Causation

Definition

A cause and effect must be associated in space and time. If association does not occur, given allowance for time delays and action at a distance, causation can be rejected. Causation is inferred if the association is regular (i.e., occurs in all relevant cases). In practice, it is also inferred in single cases if the association creates a distinct impression of causality, particularly if a mechanism is apparent (e.g., one might infer that an anvil on someone's head is the cause of death, even if that association has not been witnessed regularly). Synonyms include conjunction and co-occurrence.

Our Position

In specific cases, CADDIS requires that the candidate cause and the effect be associated in space and time (allowing for time lags and movement during those lags). Lack of association can refute a cause. However, association is weak positive evidence, particularly in specific cases. Regular association in similar cases (e.g., other streams in the region) is used as evidence that the association is causal in general.

Background

Hume famously argued that we observe association and infer causation. That is, the association is real and open to the senses, but causation is only an inference that we draw from associations. Most of the subsequent literature on causation has been an elaboration of or a response to that argument.



Confounding

Definition

Confounding is a bias in the analysis of causal relationships due to the influence of extraneous factors (confounders). Confounding may result from a common cause of both the putative cause and the effect or of the putative cause and the true cause. A synonym is spurious correlation, but that term is broader.

Our Position

Confounding is a common problem in ecoepidemiological studies. For example, we may wish to determine the effect of flashy hydrology on stream communities, but, because flashy flow patterns are found in urban and suburban streams, flow is confounded by temperature, channel modification, lawn chemicals, and other factors. Assessors may treat confounders as background, attempt to correct for them, censor them from the data set, or use a multivariate model that treats them all as causes. All of these are options in CADDIS. However, all but the first option require that the confounders be identified and quantified, which may not be possible in typical data-limited cases.

Confounding is reduced by random assignment of treatments in experiments. However, it can still occur due to unintended factors in experimental treatments or bias in the administration of treatments. This is a particular problem in field experiments. For example, if experimentalists spread shade cloth over streams to reduce temperature, the effects of reduced temperature would be confounded by effects of reduced light for photosynthesis and exclusion of avian predators. Many experimental studies of the effect of diversity on the productivity of ecosystems suffered from confounding of the manipulation of diversity (Huston 1997). Hence, assessors must be on the lookout for confounding in all sources of data.
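
Where confounders can be identified and quantified, the multivariate-model option mentioned above can be illustrated with a small simulation. This Python sketch uses invented variables and effect sizes and is not a CADDIS procedure; it simply shows a naive estimate being corrected by including the confounder.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 1000
    urbanization = rng.uniform(0, 1, n)                       # confounder
    flashiness = 2.0 * urbanization + rng.normal(0, 0.3, n)   # putative cause
    # In this simulation richness responds only to urbanization, not flashiness.
    richness = 30.0 - 10.0 * urbanization + rng.normal(0, 1, n)

    def slopes(y, *predictors):
        """Least-squares coefficients, intercept dropped."""
        X = np.column_stack([np.ones(len(y)), *predictors])
        return np.linalg.lstsq(X, y, rcond=None)[0][1:]

    print("naive flashiness effect:   ", slopes(richness, flashiness))
    print("adjusted flashiness effect:", slopes(richness, flashiness, urbanization))
    # The naive slope is strongly negative (spurious); the adjusted slope is near zero.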

Background

The first description of confounding in scientific studies occurs in Mill (1843). The solution of randomized experiments with controls was developed by Fisher (1937). Greenland et al. (1999b) defined confounding in the context of counterfactual theory and distinguished the usual definition from the related concepts, non-collapsibility and aliasing. Renton (1993) listed three ways to identify confounders: (1) other factors known to cause the effect may confound the cause of interest, (2) factors known to be frequently associated with the cause may be confounders, and (3) factors that are known to interact with the mechanism of the cause may be confounders.



Counterfactual Causation

Definition

Had C not occurred, E would not have occurred; therefore, C must be a cause of E. For example, if the water had not been anoxic, the fish kill would not have occurred. This is the opposite of regularity of association as a definition of causation. A synonym is contrary-to-fact conditional.

Our Position

Counterfactual arguments have little direct applicability to causal analysis for cases of ecological impairment, so they are not discussed in the CADDIS methods. Because counterfactual arguments refer to a hypothetical state, we do not have counterfactual evidence concerning the causes of events that have occurred. That is, we do not have evidence of what would not have caused the impairment if it had not been present. For example, if a stream community is impaired and temperature is a candidate cause due to the lack of shading, we have no observation of that community without elevated temperature with which to evaluate the counterfactual case. Removing a candidate cause and observing the response is a manipulationist approach that does not directly address the counterfactual status of candidate causes of the specific observed impairment. It can answer the related counterfactual question: “Without the candidate cause, will the impairment continue?” Hence, it is relevant but indirect evidence. Counterfactuals can be directly evaluated in experiments and in experiments on models, as in Lewis's (1973) neuron diagrams or Pearl's (2000) directed acyclic graphs (see Network Models).

Although counterfactual arguments are difficult when identifying causes, the manipulationist variant is inevitable when planning remedial actions and assessing risks and benefits. In practice, this analysis is performed using an exposure-response model. For example, if the suspended sediment concentration is reduced from x to x/2 (a future equivalent of a counterfactual condition), will the estimated number of taxa rise above the threshold for impairment?
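
A minimal Python sketch of that kind of calculation follows. The model form, coefficients, current concentration, and impairment threshold are all invented for illustration.

    import math

    def expected_taxa(tss):
        """Hypothetical exposure-response model: taxa richness declines with
        the log of total suspended solids (TSS, mg/L)."""
        return 30.0 - 8.0 * math.log10(tss)

    threshold = 18.0   # hypothetical impairment threshold (number of taxa)
    x = 60.0           # current TSS concentration, mg/L

    for tss in (x, x / 2):
        taxa = expected_taxa(tss)
        verdict = "above" if taxa > threshold else "below"
        print("TSS", tss, "mg/L ->", round(taxa, 1), "taxa,", verdict, "threshold")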

Background

In the Enquiries, Hume proposed a counterfactual definition of causation but did not develop it as he did the associationist/regularist theory: “One object followed by another… where, if the first had not existed, the second had never existed.” Counterfactual arguments are inherently problematical because they depend on characterizing events that did not occur. This concept was revived relatively recently by Lewis (1973). The counterfactual definition is popular with philosophers because it seems to have fewer logical problems than regularist accounts of causation (Collins et al. 2004). However, there are conceptual objections as well as practical ones. One problem with the original alternative worlds approach is that it requires hypothesizing possible worlds in which C did not occur and demonstrating that in every one E did not occur. Clearly, defining an appropriate set of possible worlds presents difficulties, because, in general, a world without C would differ in other ways that are necessary to bring about the absence of C, which would have other consequences. Hence, Lewis developed the concept of similarity of worlds and of the nearest possible world. Also, counterfactual accounts of causation can result in paradoxes involving loss of transitivity, preemption and overdetermination (Cartwright 2007). For example, if the water had not been green, the DO sag that killed the fish would not have occurred (a good counterfactual argument). However, the green color was an effect of the algal bloom that caused the DO sag and was not itself a cause. Also, if two chemicals—each at lethal concentrations—are spilled into a stream, neither one is the counterfactual cause of the subsequent fish kill because even if one was absent the other would still have killed the fish. Menzies (2004) pointed out that counterfactual theories suffer from “the problem of profligate causes.” Many conditions must hold for a particular effect to occur, so which should be left out of the alternative world? Finally, to determine what would have happened in the counterfactual condition, philosophers appeal to causal laws or knowledge, so “counterfactual theories seem to require the knowledge they were intended to provide” (Wolff 2007).

In statistics, the potential outcomes analysis developed in the 1920s by Jerzy Neyman provided a method to analyze the difference between outcomes with and without a potentially causal factor (Rubin 1990). Holland (1986) demonstrated in his review of statistical approaches to causality that statistics can determine only the effects of causes (the difference between treatments) not the causes of effects, and it can do that only if homogeneity and independence of units can be assumed. That limits counterfactual statistics to replicated and randomized experiments. He also argues that attributes cannot be causes in the counterfactual sense. That is, we cannot say “Cheryl is empathetic because she is a girl,” because her gender could not be otherwise. Counterfactual causes must be things that could be, in principle, experimental treatments.

Susser (2001) stated that counterfactual analysis is “unattainable in practice” in epidemiology but may be approximated by Bayesian adjustments. Greenland (2005) considers Neyman's potential outcome model to be equivalent to the sufficient-component cause models that are used in epidemiology. However, he admits that there are problems (particularly confounding) with using a reference case as equivalent to the case of concern without the cause.

Pearl (2000) resolved many of the conceptual problems with counterfactual accounts of causality by employing structural graphs, and he demonstrated that potential outcomes analysis and structural equation modeling are consistent. However, Pearl's approach still has limitations (Cartwright 2007). In addition, although Pearl showed that one can cleanly create a counterfactual condition in a model by “surgery on variables,” that does not resolve the problem of identifying real-world counterfactual cases.



Covering Law

Definition
Causes are instances of the action of scientific laws in relevant circumstances. That is, a causal explanation consists of deduction from one or more laws and one or more facts concerning relevant conditions. Synonyms include the deductive-nomological model and the inductive-statistical model (for deterministic and probabilistic laws, respectively) as well as Hempel's model, the Hempel-Oppenheim model and the Popper-Hempel model.

Our Position
Laws are seldom available to causally explain events in the environment—except in trivial cases (e.g., the polluted water flowed between points A and B because of gravitation (the law) and the slope between the points [the fact]). CADDIS treats empirical generalizations derived from environmental data as “evidence”, rather than as “laws”.

Background
This model of scientific explanation is implicit in scientific practice dating back at least to Newton, who famously refused to frame a hypothesis of what gravity or any of the other physical variables in his laws might actually be. The law itself was sufficient. The formal development of the idea is attributed to Hempel (1965), who considered it a complete theory of causal explanation. It was popular in the mid-twentieth century with philosophers of science, but since then its limitations have been recognized (Woodward 2003). In particular, causation in biological and social systems is too complex to be defined by scientific laws.



Criteria, Causal

Definition

Criteria are considerations that are employed to assist judgment concerning causation. Synonyms include guidelines, postulates, and considerations.

Our Position

Evaluation of the evidence in terms of a set of considerations (commonly termed criteria) is the best available method for weighing multiple lines of evidence. However, CADDIS evaluates “types of evidence,” which we distinguish from sources of evidence, qualities of evidence, and characteristics of causation. We follow Susser and Fox in evaluating the degree to which evidence meets the criteria by a scoring system.

Background

Mill (1843) provided the first set of causal criteria. Koch's postulates (also known as the Henle-Koch postulates) are a set of three or four criteria (depending on the version) that together constitute a standard of proof for infectious agents as causes of disease. The Surgeon General's Committee and Austin B. Hill developed criteria to demonstrate that the body of evidence supported cigarettes as a cause of lung cancer (Hill 1965, U.S. Department of Health, Education and Welfare 1964). Susser (1986a) extended Hill's criteria and added a scoring system. Many other authors, particularly epidemiologists, have developed lists of criteria, but these are the most often cited. Criteria have been adopted and adapted by ecologists for ecoepidemiology (Fox 1991, Suter 1990, Suter 1998, U.S. EPA 1998, U.S. EPA 2000).



Deterministic Causation

Definition

  1. Natural determinism is the position that the state of a system can, in principle, be fully explained by its state at the prior instant and knowledge of natural laws.
  2. Causal determinism holds that the cause always induces the effect in appropriate conditions. However, because causation may be complex and nonlinear, causal determinism does not necessarily imply predictability.

Our Position

CADDIS is based on pragmatic determinism. In our conceptual approach, the cause determines the effect in the given context. We hold this position despite quantum indeterminacy, chaos theory, and uncertainty.

Quantum indeterminacy is the only source of true randomness, and it is irrelevant to us. Phenomena at our level are buffered from quantum indeterminacy—apparently by the effect of large numbers. We can predict and manipulate causal events at macro levels, because they are determinate.

Chaotic systems are effectively unpredictable because of imperfect knowledge of initial conditions and the properties of nonlinear systems that amplify small differences in conditions. However, there is no indeterminacy in chaotic models.

Determinism does not mean that we are not uncertain—only that our uncertainty is not due to inherent randomness. Our uncertainty is due to lack of knowledge—not a property of the system. Hence, if the cause does not consistently induce the effect, it is because we have incompletely specified the cause or the set of conditions in which it is effective.

Background

In the Physics, Aristotle stated that whatever “we ascribe to chance has some definite cause.” However, his determinism was associated with his teleology. A cause must induce its effect to fulfill its purpose.

Galileo, in the Dialogo sopra i due massimi sistemi del mondo (1632), rejected Aristotle's teleology to present the first scientific concept of causation and presented a mechanistic and apparently deterministic theory of causation. When necessary conditions occur, the effect necessarily follows.

Hume argued in the Treatise that lack of regular association was due to hidden or unknown factors rather than chance. “What the vulgar call chance is nothing but a secret and concealed cause.”

The most famous statement of determinism is found in Pierre-Simon Laplace's Philosophical Essay on Probabilities (1820):

We ought to regard the present state of the universe as an effect of its antecedent state and as the cause of the state that is to follow. An intelligence knowing all the forces acting in nature at a given instant, as well as the momentary positions of all things in the universe, would be able to comprehend in one single formula the motions of the largest bodies as well as the lightest atoms in the world, provided that its intellect were sufficiently powerful to subject all data to analysis; to it nothing would be uncertain, the future as well as the past would be present in its eyes.

This predictability is purely hypothetical, and we are not interested in determining the state of everything in the universe—only that of a relatively small system. Nevertheless, Laplace made the issue of determinism explicit.

Science is now in the situation of believing that the universe is fundamentally both deterministic (relativity theory) and probabilistic (quantum mechanics). In addition, the development of chaos theory showed that even if the universe is driven by deterministic laws, it can appear to be probabilistic.



Diagnosis

Definition

Diagnosis is the identification of a cause by recognizing characteristic signs and symptoms. Differential diagnosis is identification of a disease by comparing all diseases that might plausibly account for the known symptoms.

Our Position

Diagnosis is one of the three types of inference in the Stressor Identification and CADDIS protocols (the others being rejection and weighing of evidence). It is used primarily to examine the cause of death in fish kills. Community diagnostics have been proposed in the literature that would identify causes of impairment based on changes in community composition, but so far they are best used to suggest candidate causes.

Background

The diagnosis of disease based on characteristic symptoms is as old as the practice of medicine. The first fully natural theory of disease and diagnosis comes from the Hippocratic treatises. The current practice of medicine is based on the approach developed by William Osler in the late 19th century, which focuses on developing a diagnosis based on an algorithmic analysis of symptoms and the generation of symptoms through testing. Archibald Garrod extended diagnosis to include individual biochemical and genetic differences. In the last few decades, a theory of diagnosis has been developed within the field of artificial intelligence that is used in diagnostic expert systems (Reiter 1987). In addition, new diagnostic symptoms are being developed based on genomics, metabolomics, and proteomics.

Diagnostic protocols for nonhuman animals and plants are available in the veterinary, wildlife, fishery, and plant pathology literatures. For example, diagnostic criteria for lead poisoning in waterfowl include a hepatic lead concentration of at least 38 ppm and four characteristic symptoms (Beyer et al. 1998). Examples for chemically induced fish kills, from Norberg-King et al. (2005), are listed below (a small lookup sketch follows the table):

Symptom: possible causative agents
  • White film on gills, skin, and mouth: acids, heavy metals, trinitrophenols
  • Sloughing of gill epithelium: copper, zinc, lead, ammonia, detergents, quinoline
  • Clogged gills: turbidity, ferric hydroxide
  • Bright red gills: cyanide
  • Dark gills: phenol, naphthalene, nitrite, hydrogen sulfide, low oxygen
  • Hemorrhagic gills: detergents
  • Distended opercules: phenol, cresols, ammonia, cyanide
  • Blue stomach: molybdenum
  • Pectoral fins in extreme forward position: organophosphates, carbamates
  • Gas bubbles (fins, eyes, skin, etc.): gas supersaturation
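
As a simple illustration of differential diagnosis against such a table, the Python sketch below shortlists the agents consistent with every observed symptom. Only a few rows of the table are transcribed, and the lookup logic is a hypothetical simplification, not the Norberg-King et al. (2005) protocol.

    SYMPTOM_AGENTS = {   # a few rows transcribed from the table above
        "white film on gills, skin, and mouth": {"acids", "heavy metals", "trinitrophenols"},
        "bright red gills": {"cyanide"},
        "distended opercules": {"phenol", "cresols", "ammonia", "cyanide"},
        "gas bubbles": {"gas supersaturation"},
    }

    def candidate_agents(observed_symptoms):
        """Return the agents consistent with all observed symptoms."""
        sets = [SYMPTOM_AGENTS[s] for s in observed_symptoms]
        return set.intersection(*sets) if sets else set()

    # Bright red gills plus distended opercules point to cyanide.
    print(candidate_agents(["bright red gills", "distended opercules"]))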

In some cases, diagnostic syndromes have been identified as a result of ecoepidemiological studies. Perhaps the best known case is the Great Lakes embryo mortality, edema, and deformity syndrome (GLEMEDS) (Gilbertson et al. 1991). This syndrome has been identified in multiple species of fish-eating birds and has been associated with dioxin-like compounds, but it is characterized by more symptoms than the laboratory-induced effect of dioxin—chick edema syndrome.

Munkittrick and Dixon (1989a, 1989b) proposed a system to diagnose the causes of declines in fish populations. The causes are defined as a set of standard causal mechanisms, and the diagnostic criteria are based on a set of metrics commonly obtained in fishery surveys. This method was subsequently refined and expanded (Gibbons and Munkittrick 1994), applied to assessments of Canadian rivers (Munkittrick et al. 2000), and incorporated into the causal analysis component of the Canadian Environmental Effects Monitoring Program (Hewitt et al. 2005). Numerous metrics contribute to the symptomology, but they are condensed to three responses: age distribution, energy expenditure, and energy storage. The most recent list of types of causes is exploitation, recruitment failure, multiple stressors, food limitation, niche shift, metabolic redistribution, chronic recruitment failure, and null response.

Many investigators have attempted to perform community diagnostics by identifying changes in the composition of biotic communities that are symptomatic of particular causal agents. Those efforts have been reasonably successful for the organic loading that characterizes poorly treated sewage (see Hilsenhoff 1987). However, efforts to devise more general systems for community diagnostics have been less successful (see Chessman and McEvoy 1998, Norton et al. 2000, Riva-Murray et al. 2002, Yoder and Rankin 1995).



Hypothesis Testing

Definition

Hypothesis testing is a statistical technique that uses experimental data to determine whether a hypothesis is incorrect. Most commonly, a hypothesis of no effect is tested by determining whether data as extreme as, or more extreme than, those obtained in an experiment would occur with a prescribed low probability, given that the null hypothesis is true.

Our Position

Hypothesis testing is applicable only to experimental studies in which independent replicate systems are randomly assigned to treatments. Observational data, such as those from environmental monitoring studies, are inappropriate for hypothesis testing; replicate samples in such studies are pseudoreplicates. Pseudoexperimental designs such as before-after-control-impact (BACI) can reduce—but not eliminate—the likelihood that the study will be confounded (Stewart-Oaten 1996).

Even when experiments are used as supporting evidence, hypothesis testing is undesirable in CADDIS or any other assessment of environmental causes. The null hypothesis is meaningless, because all environmental variables that would be considered in a causal assessment have some effect that would be “significant” if enough samples were taken. We are interested in determining the relationship between the cause and effect (e.g., estimating a concentration-response relationship from test data), not in rejecting the hypothesis that the cause had no effect.
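
The sample-size point can be demonstrated with a simulation. In this Python sketch (the effect size is invented), a trivially small true difference between a control and an exposed group becomes statistically “significant” once enough samples are taken.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_effect = 0.1   # a trivially small difference in means

    for n in (10, 100, 1000, 10000):
        control = rng.normal(0.0, 1.0, n)
        exposed = rng.normal(true_effect, 1.0, n)
        t, p = stats.ttest_ind(control, exposed)
        print("n =", n, " p =", round(p, 4))
    # p tends toward "significance" as n grows, regardless of practical importance.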

Background

Statistical hypothesis testing was developed by Fisher (1937) to test causal hypotheses (e.g., does fertilizing with sulfur increase alfalfa production?) by asking whether the noncausal hypothesis is credible given the experimental results. Neyman and Pearson (1933) improved on Fisher's approach by testing both the noncausal and causal models. Fisher's probabilistic rejection of hypotheses became even more popular as Popper's rejectionist theory of science caught on in the scientific community (see Rejection). It came to be taught as the standard form of data analysis in biostatistics courses. As a result, it was applied to test causal hypotheses in inappropriate data sets, including those from environmental monitoring programs. A fundamental conceptual flaw in this practice was pointed out by Hurlbert (1984), who invented the term pseudoreplication to describe the practice of treating multiple samples from a system as if they were from replicate systems. More fundamentally, hypothesis testing does not allow one to accept a causal hypothesis, does not indicate the strength of evidence for a causal hypothesis, and is based on the probability of the data given a hypothesis rather than the probability of the hypothesis given the data. Numerous critiques of hypothesis testing have demonstrated its flaws, but they have had little impact on environmental scientists (Anderson et al. 2000, Bailar 2005, Germano 1999, Johnson 1999, Laskowski 1995, Richter and Laster 2005, Roosenburg 2000, Stewart-Oaten 1995, Suter 1996, Taper and Lele 2004).



Interaction

Definition

The cause physically interacts with the affected entity in a way that induces a change (the effect).

Our Position

Causes induce their effects through a physical interaction with the affected entity. The interaction may be described as a process or set of mechanistic events. For example, metals bind to ligands, reducing nutrient element uptake by ion channels, and elevated temperatures denature enzymes, reducing reaction rates. The evidence for interactions is still inferred from associations, but evidence of a mechanism of interaction at a lower level of organization strengthens the inference. Hence, in CADDIS, Evidence of Exposure or Biological Mechanism may strongly support causation.

Background

Hume famously argued that all we know of causation is regular co-occurrence, from which we infer an interaction. While association has been considered sufficient for many scientific purposes (e.g., using empirical correlations), much of the development of science can be described as attempts to provide causal explanations that go beyond association. Newtonian physics and Newton's successors seemed to promise explanations in terms of covering laws (e.g., the force with which the apple hit the ground is caused by laws that cover the fall of apples as a particular case). However, there are no laws to cover most causal relationships of interest—even fairly fundamental ones like protein synthesis. The most common alternative source of causal explanations is reductionism. That is, the causal relationship is explained by processes and events that are more fundamental than the relationship itself. For background on these approaches, see Mechanistic Causation and Process Connection.



INUS

Definition

A cause is an Insufficient but Necessary part of a condition which is, itself, Unnecessary but Sufficient to result in the effect. Synonyms include component causes model and sufficient component causes model.

Our Position

This is an important part of CADDIS’s concept of causality. For example, if a release of copper "causes" a fish kill, that copper is insufficient because other conditions such as low pH, low dissolved organic matter, the presence of fish, the susceptibility of the fish, etc. must also occur. However, copper is necessary because the kill would not occur with only the other conditions. Further, although that set of conditions (copper and the others) is sufficient, it is unnecessary, because other sets of conditions could result in a fish kill. The identified cause is distinguished from the other conditions in the set by being the last to occur, by being the least common, by being anthropogenic, by being of regulatory concern, or by some other criterion.
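
The copper example can be written out as boolean logic. This Python sketch uses hypothetical conditions to exercise each part of the INUS definition.

    def fish_kill(copper, low_pH, low_DOC, fish_present, other_sufficient_set):
        """The effect occurs if ANY sufficient set of conditions is complete."""
        copper_set = copper and low_pH and low_DOC and fish_present
        return copper_set or other_sufficient_set

    assert not fish_kill(True, False, False, False, False)  # copper alone: Insufficient
    assert not fish_kill(False, True, True, True, False)    # but Necessary within its set
    assert fish_kill(True, True, True, True, False)         # the complete set: Sufficient
    assert fish_kill(False, False, False, False, True)      # yet Unnecessary: another set works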

Background

The INUS concept was developed by Mackie (1965, 1974). It is a formalization of the concept that a cause results in an effect only under appropriate conditions, which dates back at least to Hume. However, prior authors tended to treat other conditions as an unchanging background. Others, like Mill (1843), considered all preceding and contributing events and conditions to be causal. Mackie treated some conditions as background but others as variables that must be analyzed along with the nominal cause. He called the background the causal field, and the cause and other modeled conditions are the INUS conditions. Like other formal definitions of causation, INUS fails to describe some sorts of causation and creates illogical results for some conditions (Cartwright 2007, Pearl 2000).

This concept, in less formal or complete terms, occurs in other writings on causation. For example, Rothman (1986) has argued that causes are components of alternative minimal sufficient sets (his sufficient component causes model of causation).

Olsen (2003) argued that the INUS/sufficient component causes concept reconciles determinism with the practice of expressing epidemiological causes as probabilities. That is, probabilities are due to unknown or unmeasured component causes. However, others have argued that the determinism of this concept is unjustified and that it is better to accept inherently probabilistic causes than to hypothesize unknown component causes (Parascandola and Weed 2001).



Manipulationist Causation

Definition

Manipulationist causation is the proposition that we know that a causal relationship exists when we have manipulated C and observed a response E. Further, in cases of a network of multiple factors that jointly affect E, a manipulationist says that the cause is the thing that we manipulate. Symbolically, we distinguish the interventional probability P(E | do(C)) from the simple conditional probability P(E | C). Intervention is often a synonym for manipulation, although some authors distinguish the two.
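
The difference between the two probabilities can be seen in a small simulation. In this Python sketch (all probabilities invented), a confounder Z drives both C and E while C has no causal effect on E, so P(E | C) is inflated relative to P(E | do(C)).

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    Z = rng.random(n) < 0.5                     # confounder
    C = rng.random(n) < np.where(Z, 0.9, 0.1)   # C is common where Z holds
    E = rng.random(n) < np.where(Z, 0.8, 0.2)   # E depends only on Z

    print("P(E | C)     =", round(E[C].mean(), 2))   # ~0.74, inflated by Z

    # Intervention: force C everywhere; E's mechanism is unchanged.
    E_do = rng.random(n) < np.where(Z, 0.8, 0.2)
    print("P(E | do(C)) =", round(E_do.mean(), 2))   # ~0.50, i.e., no causal effect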

Our Position

The goal of CADDIS is to identify causes that may be manipulated to restore the environment, so our causes are at least potentially manipulationist causes. Further, manipulations (both experiments and uncontrolled interventions) can provide particularly good evidence of causation. However, we do not require evidence from manipulations to identify the most likely cause.

Background

Ducheyne (2006) argued that Galileo was the first manipulationist. However, because Galileo's writings on this are ambiguous, the evidence is based primarily on his experimental practices.

Hume believed that the concept of causation arose from people's experience in manipulating things (causes) to achieve ends (effects).

Experiments are controlled manipulations, and Mill is the first philosopher of science to clearly argue the priority of experiments over uncontrolled observations. In A System of Logic, Deductive and Inductive (1843), he wrote “... we have not yet proved that antecedent to be the cause until we have reversed the process and produced the effect by means of that antecedent artificially, and if, when we do, the effect follows, the induction is complete ...”

Since Mill (and particularly since Fisher), experimental science has become the most reliable means of identifying causal relationships. However, when we extrapolate from experimental results to the uncontrolled real world, we run into the same old problem of induction. That is, we have no reliable basis for assuming that the causal relationship that we see in an experiment will hold in a real-world case. In fact, the problem is worse because experimental systems are usually simplifications of the real world. In addition, because of the complexity of ecological systems, the manipulations themselves may be confounded. For example, some experiments to determine whether diversity causes stability have actually revealed effects of fertility levels, nonrandom species selection, or other “hidden treatments” (Huston 1997).

Contemporary philosophers who support manipulationist theories of causation have run into criticisms that the theories are circular, because they make manipulation more fundamental than causation, but manipulation is inherently causal. Further, the concept of manipulation seems anthropocentric. However, these criticisms may be avoided by treating manipulation as a sign or feature rather than a definition of causation and by allowing natural manipulations and even hypothetical manipulations (Woodward 2003).

Pearl (2000) models causal relationships as networks with nodes connected by equations. Manipulation of the networks is performed through “surgery on equations.” This mathematical version of manipulation allows analysts to estimate intervention probabilities from data concerning conditional probabilities from observations.



Mechanistic Causation

Definition

The mechanism is the physical means by which a cause induces the effect. The physical mechanism can be thought of as a series of events at a lower level of organization than the cause and effect events. In other words, “effects are produced by mechanisms whose operations consist of ordered sequences of activities engaged in by their parts” (Bogen 2004).

Our Position

The term mechanism is used in CADDIS as it is in toxicology, pharmacology, and other biological fields to describe the events at a lower level of organization that connect the cause and effect (the mechanism of action). For example, salmon perceive the lack of gravel, which changes their brain state, resulting in a change in behavior, and eggs are not deposited. In sum, a mechanistic analysis of a causal relationship is reductionistic. Every causal relationship in an ecological system can be reduced to a set of events involving entities at a lower level of organization. Because causes have physical mechanisms, observations of the products of a mechanism (e.g., low blood cholinesterase levels) or even the plausibility of a mechanism can be important evidence. However, some interactions are more readily defined as processes rather than as a series of events (i.e., process causation). Knowledge of mechanisms has at least three uses in CADDIS:
  1. Mechanisms provide a description of a causal relationship at a lower level of organization, which, if they are consistent with established science, increase the credibility of the relationship (i.e., Mechanistically Plausible Cause). Actual measurements of components of the mechanism (Evidence of Exposure or Biological Mechanism) provide even stronger evidence.
  2. Mechanistic knowledge at the same level of organization as the hypothesized causal relationship can fill in missing steps in the causal pathway (e.g., increased algal production is a step in the sequence between nutrient releases and low dissolved oxygen, but not a step in the sequence from organic matter releases to low dissolved oxygen).
  3. Knowledge of mechanisms allows the prediction of previously unobserved effects of a hypothesized cause (Verified Predictions).

Background

Enlightenment philosophers such as Leibniz and Laplace were metaphysical mechanists in that they considered the universe to be driven by physical interactions between entities. Newton's theories, which involved action at a distance, supplanted that concept in physics, and the rise of quantum mechanics further diminished the concept of mechanism in physics.

A known plausible mechanism has become one of the criteria for judging an empirical association to be causal in statistics (Mosteller and Tukey 1977) and epidemiology (Hill 1965, Russo and Williamson 2007, Susser 1986a).

An alternative to the concept of mechanism presented here is the definition of mechanism as the chain or network of events that precede the effect (Pearl 2000, Simon and Rescher 1966). A conceptual problem with the concept of mechanisms of a cause is determining when an event is a mechanism for a cause and when it may be considered a cause itself. Simon and Rescher formally addressed this problem in terms of the causal ordering (i.e., causal directionality) of a series of equations that define the mechanism. If we have a series of dependent variables Vi, then the last variable in the series that can be solved without solving for any of its successors can be treated as the cause. This definition is effectively the opposite of the definition of mechanism that we use. That is, the mechanism is the series of events that lead up to—and include—the cause. The events between the cause and effect (which are the mechanism in our sense) can be ignored because, once an action has been taken, that action determines the causal event (the occurrence of the effect).



Model Based Causation

Definition

The most likely cause is the one that, when mathematically modeled, best fits and therefore best explains the data.

Our Position

This is, in theory, a very useful method. However, it requires that all causes be understood sufficiently to determine models and that data be available to parameterize the models. “The fish kill was caused by an unknown pathogen” is a legitimate causal hypothesis, but it does not lend itself to model-based inference. In addition, to statistically compare these models and identify the most likely cause, the same data set should apply to all alternatives. Otherwise, the relative likelihoods may be due to differences in the data applied to the models rather than the models themselves. Finally, there must be enough data to allow the statistics to distinguish among the models. These conditions are often met for biological resource or pest management problems such as setting limits on fisheries but not for contaminated or disturbed sites. Hence, there are no examples of this method in CADDIS.

Background

This approach began with Peirce's concept of the weight of evidence, which was revived by Good (1950). The weight of evidence for a hypothesis expressed as a mathematical relationship is the log of its likelihood relative to the likelihood of alternative hypotheses. This Bayesian statistical approach for comparing models has been largely replaced by an information theoretic approach expressed as the relative magnitudes of Akaike's information criterion for each model (Anderson 2008).
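
A minimal Python sketch of the information-theoretic comparison follows. Both candidate models are fit to the same simulated data set, as required above; the variables, data, and the least-squares AIC formula for Gaussian errors are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(7)
    n = 50
    sediment = rng.uniform(0, 10, n)
    nutrients = rng.uniform(0, 10, n)
    richness = 20.0 - 1.5 * sediment + rng.normal(0, 2, n)  # sediment is the true cause

    def aic(y, *predictors):
        """AIC for a Gaussian linear model fit by least squares."""
        X = np.column_stack([np.ones(len(y)), *predictors])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        k = X.shape[1] + 1   # coefficients plus the error variance
        return len(y) * np.log(rss / len(y)) + 2 * k

    print("AIC, sediment model:", round(aic(richness, sediment), 1))
    print("AIC, nutrient model:", round(aic(richness, nutrients), 1))
    # The lower-AIC (sediment) model is better supported by these data.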



Multiple Causation

Definition

The term multiple causation is applied to two distinct situations:
  1. Plural causation refers to situations in which multiple causes may induce a general effect. For example, many different causes induce the effect “impaired stream” in different streams. Plural causation results from broadly defined effects. If we define the effect as reduced brook trout abundance, the number of causes is reduced. If we further define it as reduced abundance in the first kilometer of Red Brook in 2002, there is only a single cause, but it may be complex. Hence, plural causation is an issue only when deriving general causal relationships, not for specific cases.
  2. Complex causation refers to situations in which the cause has multiple components. For example, a fish kill may be due to the interaction of high temperatures and low dissolved oxygen. This is a single but complex cause.

Our Position

CADDIS is concerned with causation in specific cases, so there is no plural causation. However, there are multiple candidate causes, which should be reduced by defining the effect as specifically as possible. Complex causation may be minimized by carefully defining the set of agents and events that must combine to induce the effect. All constituents of a complex cause that are necessary for the effect must be included—but not background conditions and trivial contributors (see INUS). The point of causal analyses in CADDIS is to determine a sufficient intervention to eliminate the effect, not to completely define the agents in a system and their interactions.

Background

Galileo recognized that there may be multiple (i.e., complex) causes but argued that there will be a primary cause that should be distinguished. He seems to imply an additive model of combined effects. Mill (1843) argued that the real cause of an effect is all antecedent conditions. This extreme of complex causation is in a sense monist. That is, there is only one cause, which is everything that has happened. In a metaphysical sense, most philosophers seem to agree with Mill (Lewis 1973). For example, Salmon (1984) developed an account of objective causation based on the concept of “complete causal structure,” which includes the entire network of causal processes in a convex chunk of space-time such as the universe. However, other philosophers have developed various strategies for reducing this complexity to manageable but multicausal systems (Lewis 1973, Mackie 1974, Pearl 2000).



Network Causation

Definition

Network models represent causation graphically: nodes represent entities or states connected by arrows that represent models of individual causal processes or probabilities of the implied processes. The advantages of network models are that, unlike equations, they convey directionality and make explicit the structure of interactions in multivariate causal relationships. Empirical methods for analyzing causal networks include path analysis, structural equation models, and Bayesian network analysis. Alternatively, a network can be modeled mechanistically through mathematical simulation (e.g., systems of differential equations), but that is the old-fashioned field of systems analysis. Causal diagram theory, based on directed acyclic graphs, can be used to analyze complex causal relationships without parametric assumptions such as linearity that are required by structural equation modeling (Pearl 2000, Spirtes et al. 2000).
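
For illustration, the Python sketch below simulates a small linear causal network and recovers its path coefficients by regressing each node on its assumed parents, in the spirit of path analysis. The network structure, variable names, and coefficients are invented.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 5000
    # Assumed structure: urban -> flash -> rich, plus a direct urban -> rich path.
    urban = rng.normal(0, 1, n)
    flash = 0.8 * urban + rng.normal(0, 1, n)
    rich = -0.5 * flash - 0.7 * urban + rng.normal(0, 1, n)

    def path_coefficients(y, *parents):
        """Least-squares coefficients of a node on its parents (intercept dropped)."""
        X = np.column_stack([np.ones(len(y)), *parents])
        return np.linalg.lstsq(X, y, rcond=None)[0][1:]

    print("urban -> flash:      ", path_coefficients(flash, urban))         # ~[0.8]
    print("flash, urban -> rich:", path_coefficients(rich, flash, urban))   # ~[-0.5, -0.7]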

Our Position

Network models require that causal relationships be known or at least hypothesized. Analysis may be used to test the plausibility of the network structure or to determine the relative strength of the contributions of nodes in the network to the effect of interest, given the assumed causal structure. In general, the causes of ecological impairments are not sufficiently well known to confidently define the network for a particular case, and data are insufficient to quantitatively analyze the network. In addition, the application of network models to specific cases is problematical (Pearl 2000). However, general network models might be used like other general models to support the credibility of specific causal hypotheses. The conceptual models in CADDIS could potentially be subject to quantitative analysis, and we have explored both Bayesian analysis and structural equation modeling. We will continue to consider their utility.

Background

Analysis of causal networks began with Wright (1920, 1921), who developed path analysis (basically, a combination of directed graphs and regression analysis) to analyze the effects of genes and environment on phenotypes. It was first applied broadly by economists and social scientists (beginning with Herbert Simon), where data sets are often large and include quantification of multiple causal factors. However, the most important technical developments and the most influential texts on causal networks come from the field of artificial intelligence (Pearl 2000, Spirtes et al. 2000). Statistical analysis is now typically performed by an extension of path analysis called structural equation modeling. The techniques are now being applied in various fields, and their development is very active and controversial. However, even qualitative analyses of causal networks can help to identify potential confounders and aid in the design of studies (Greenland et al. 1999a).



Pluralism

Definition

  1. Conceptual pluralism holds that causation has multiple distinct definitions that are all potentially legitimate and useful given different questions, bodies of evidence and contexts.
  2. Ontological pluralism holds that there are multiple types of causes and of causation.

Our Position

We agree that there are multiple legitimate definitions of causation and multiple methods of causal analysis. The CADDIS approach is conceptually pluralistic in that we use evidence corresponding to all potentially relevant theories and definitions of causation. For example, we use evidence of association of C and E in the case, regularity of association in the region, and counterfactual evidence from experiments. However, we assume that in any real-world case, an effect has one cause (which may be complex) and that the different definitions are different representations of that actual relationship, not ontological alternatives.

Background

Since the late 1980s, many philosophers, led by Nancy Cartwright (2003), have come to believe that none of the attempts to reduce causality to a particular definition (counterfactual, probability raising, etc.) could succeed. Whether we accept ontological pluralism or not, we can use evidence to investigate causal relationships by applying the most appropriate concept of causality. Russo and Williamson (2007) argue that epistemic pluralism (the application of conceptual pluralism to the development of causal information) applies to the health sciences in practice and that it subsumes ontological pluralism: “The epistemic theory of causality can account for this multifaceted epistemology, since it deems the relationship between various types of evidence and the ensuing causal claims to be constitutive of causality itself. Causality just is the result of this epistemology.” Causal pluralism has been reviewed, and types of causal pluralism have been defined by Campaner and Galavotti (2007) and Hitchcock (2007).

Predictive Performance

Definition

A causal hypothesis displays predictive performance if a prediction deduced from the hypothesis is confirmed by subsequent novel observations. Good predictive performance is considered by some to be an essential characteristic of a good scientific hypothesis or theory.

Our Position

Prediction is not a characteristic of causation or a causal theory, but Verified Prediction is one of the Stressor Identification (SI) and CADDIS types of evidence. We believe that predicted observations are more powerful evidence than ad hoc causal explanations of existing observational data, because predictions cannot be fudged. That is, one can invent an explanation for any observation after the fact to make it fit a preferred causal hypothesis, but, if a prediction is made before the observation, it cannot be changed afterward to fit the results.

Background

In the Philosophical Essays, Leibniz wrote that “It is the greatest commendation of a hypothesis (next to proven truth) if by its help predictions can be made even about phenomena or experiments not tried.” However, Mill (1843) argued that a consequence already known has the same power to support a hypothesis as one that was predicted. Schlick (1931) argued that the formation and verification of predictions provided a greater rigor for the regularity theory of causation. Susser (1986b) wrote that “When it clearly produces new knowledge, the a priori character of the prediction is strongly affirmative, the more so in that it provides little opportunity for post hoc reasoning and avoids many biases that lurk in situations of which the scientist has foreknowledge.” Lipton (2005) argued that there is no fundamental advantage to evidence from predictions; evidence is just as good whether it was generated before or after the hypothesis was framed. However, he identified two legitimate arguments for giving more weight to evidence that is predicted.

  1. The weak argument—The quality of evidence from predictions tends to be better because we design the study to test the prediction. In particular, you cannot choose a good control or reference unless you know what you are controlling for and you do not know that until you have framed the causal hypothesis.
  2. The strong argument—Even the same evidence is better if it is predicted because fudging is precluded.

Probabilistic Causation

Definitions

  1. Metaphysical probabilism—Because of the inherent unpredictability of the world, effects can be predicted only as probabilities.
  2. Epistemological probabilism—Because of incomplete knowledge, causes are not determinate, but C is a cause of E if the occurrence of C increases the probability of E. That is, P(E|C) > P(E). This concept of causation is also referred to as probability raising.
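
In data terms, probability raising is a comparison of a conditional frequency with a marginal frequency. A minimal sketch (with hypothetical binary records of whether a candidate cause C and an effect E were observed at each of 12 sites):

```python
import numpy as np

# Hypothetical site records: 1 = present, 0 = absent
C = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
E = np.array([1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0])

p_E = E.mean()                  # P(E)   = 5/12, about 0.42
p_E_given_C = E[C == 1].mean()  # P(E|C) = 4/6,  about 0.67

# Under the probability-raising definition, C is a (prima facie)
# cause of E, although confounding has not been ruled out.
print(p_E_given_C > p_E)  # True
```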

Our Position

We are not metaphysical probabilists. We do not believe that the macrocosm (things bigger than atoms) is inherently random. Further, chaotic systems (e.g., those with nonlinear dynamics) are unpredictable but inherently deterministic. In practice, chaotic indeterminism is not distinguishable from other sources of noise in field data and does not significantly influence our ability to identify causes. Because this metaphysical position implies that additional data collection and modeling can decrease uncertainty and drive probabilities of causation toward zero or one, CADDIS recommends iterative assessment when results are unclear.

CADDIS does not suggest that probability raising constitutes a definition of causation because the apparent cause Co that is correlated with the effect may actually be correlated with the true cause Ct and methods to prevent confounding are unreliable. However, correlations and other expressions of the probability of association do provide evidence that can be useful in causal analyses.

Finally, epistemological probabilism is important in “population-level” causation if the members of the population (e.g., streams in a region) differ in ways that affect their susceptibility to the cause.

Background

Karl Pearson was the father of causal probabilism. In The Grammar of Science, 3rd edition (1911), Pearson argued that all we know of causation is the probability of association.

In his arguments against smoking as a cause of lung cancer, Fisher pointed out that, in observational studies, correlation does not prove causation. Confounding is always possible. A genetic trait may cause lung cancer and also make a person more susceptible to nicotine addiction, or nicotine craving may be a symptom of early stage lung cancer.

Some modern philosophers have argued that determinism is untenable and, therefore, probability raising (with temporal sequence) is the best definition of causation (Eells 1991a, Eells 1991b, Good 1961, Good 1983, Reichenbach 1958, Suppes 1970).

The major objection to probability raising is that it does not distinguish causal from non-causal associations (Holland 1986). Cartwright (2007) argued that probability raising is a sort of symptom of causation rather than a definition.

Process Connection

Definition

Causal relationships result from interactions that are physical processes such as the exchange of energy or other conserved quantity (e.g., angular momentum between a flying baseball and a window) (Dowe 2000, Salmon 1998). Synonyms include process model, physicalist model, and mechanism.

Our Position

Many causes act through a physical process that exchanges some conserved quantity as in the philosophers' process theories of causation. For example, the transfer of solar energy to the sediment of a shallow stream is followed by a transfer of thermal energy to the water and then to fish, causing the fish to leave in search of cooler water. However, many causal relationships are not easily expressed as such an exchange (e.g., effects of fine sediment on lithophilic stream invertebrates). In such cases, it is more natural to speak of causal mechanisms rather than processes. When evidence of a process connection is available, it is treated as a variant of Evidence of Exposure or Biological Mechanism.

Background

Although Russell famously opposed the idea of causation, he attempted to develop a scientifically defensible theory of causation (Russell 1948). He defined causation as a series of events constituting a “causal line” or process. However, he did not distinguish between causal processes and pseudo-processes (Reichenbach 1958, Salmon 1984). The modern process theory of causation was developed by Salmon (1984, 1994, 1998). Salmon's causation originally involved spatiotemporally continuous processes that transmit an abstract property termed a mark. In response to Dowe (2000) he changed it to an exchange of invariant or conserved quantities such as charge, mass, energy, and momentum. However, some types of causation (e.g., blocking an event) are not causes in this theory, and causation in many fields of science is not easily portrayed as an exchange of conserved quantities (Woodward 2003). Numerous philosophers have published variants and presumed improvements on Salmon's and Dowe's process theory. Some psychologists and psycholinguists have adopted a version of the physical process theory of causation and argue, based on experiments, that people inherently assume that a process connection (their terms are force dynamics or the dynamics model) is involved in causal relationships (Pinker 2008, Wolff 2007). This causal intuition includes Salmon's and Dowe's physics but also implies, by analogy, intrinsic tendencies, powers and even intentions.

Regularity

Definition

Where and when the cause occurs, the effect always occurs. This definition is now usually modified by requiring that conditions be appropriate. This concept is also known as regularist causation or regularity theory. That is, the cause is regularly associated with the effect. However, association is a property of causation in a case, while regular association is a property of general causation.

Our Position

The CADDIS approach implies metaphysical but not epistemic regularity of causation. That is, when the full set of causal conditions occurs, the effect must occur, but we cannot rely on regular association of causes and effects in nature because of the complexity of conditions in nature that may modify or obscure a causal relationship.

Background

The regularity theory of causation is associated with Hume (1748). He defined “a cause to be an object followed by another and where all the objects similar to the first are followed by objects similar to the second.” The regularity theory dominated philosophies of causation until the development of counterfactual theory (Lewis 1973).

Rejectionist Causation

Definition

The only defensible form of scientific inference is to frame and reject hypotheses—including causal hypotheses. No amount of positive evidence can prove a hypothesis (at least one in the form of a scientific law), but, if all but one possible hypothesis can be rejected, the remaining hypothesis may be accepted. To use Popper's famous example, no number of observations of white swans can prove that all swans are white, but one black swan disproves it. Synonyms include falsification and refutation.

Our Position

Rejection is possible in specific cases if a cause is not possible in that case. Elimination of impossible or at least incredible candidate causes is one of the three methods of inference in the Stressor Identification and CADDIS methodology. For example, a cause that requires that contaminated water flow uphill (i.e., the source is downstream of the impairment) or that events in the past can be changed (e.g., the effects began before the cause was invented) can be eliminated. This method cannot identify a cause, but it can shorten the list of candidates. After that, positive evidence must be used.
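
The eliminative step can be pictured as a simple screen over the candidate list. The sketch below is only schematic (the candidate names and attributes are hypothetical, and this is not the CADDIS procedure itself):

```python
# Hypothetical candidate causes with the facts needed for elimination:
# is the source upstream of the impaired reach, and did the candidate
# exist before the effects were first observed?
candidates = [
    {"name": "copper effluent",   "source_upstream": True,  "predates_effect": True},
    {"name": "downstream quarry", "source_upstream": False, "predates_effect": True},
    {"name": "new pesticide",     "source_upstream": True,  "predates_effect": False},
]

# Eliminate physically or temporally impossible candidates; positive
# evidence must then be weighed for those that remain.
remaining = [c["name"] for c in candidates
             if c["source_upstream"] and c["predates_effect"]]
print(remaining)  # ['copper effluent']
```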

However, rejection is seldom helpful in general causal analysis (the sort of inference proposed by Popper and Platt) for environmental effects, because most biological effects have plural causes. For example, a nonsmoker with lung cancer does not disprove the hypothesis that smoking causes lung cancer. Rejection can be useful for general causation in cases with very specific effects that have only one cause.

Background

Rejection is a relatively recent concept, but it has been extremely influential. Popper (1968) and Platt (1964) argued that induction is unreliable and that the only reliable inferences are deductive. That is, we frame a causal hypothesis, deduce an observable consequence of the hypothesis, and then test it in a way that makes it likely to fail. If the hypothesis is rejected by the test, another hypothesis is derived and tested in the same way. We cannot prove that a hypothesis is true; we can only tentatively accept the hypothesis that has withstood the strongest tests.

Popper's and Platt's arguments were popular in the 1970s, but they have less influence today. Most scientists and philosophers of science now accept the need to make inductions from positive evidence. Many scientists—but not Popper and Platt—have treated Fisher's tests of null hypotheses as an implementation of the rejectionist philosophy. However, if you allow probabilistic rejection, you may as well allow induction (Greenland 1988).

Specific Causation

Definition

  1. The concept of causation applies to specific events, not to general categories. That is, the cause of each event is unique.
  2. The concept that causal analysis for specific events must be different from analysis of general causes. Synonyms for specific causation include singular causation, actual causation, case causation, single-event causation and token-level causation.

Our Position

CADDIS is concerned with causes of specific effects in specific instances, and every instance of causation in ecosystems is, at some level, different. Although fine sediments have been shown to cause reduced trout abundance, that finding does not mean that reduced trout abundance at a site is due to fine sediments. However, the concept of general causation is useful and can contribute to the identification of specific causes. That is because we treat general causal relationships as supporting evidence from elsewhere—not as proof of causation.

Background

Most of the writings on causation are concerned with general causation, leading to useful generalizations and even scientific laws. Hume (1739) and others have argued that the problem with specific causation is that, without repetition, there is no basis for determining what aspects of an event are causal (i.e., necessary or sufficient). Hence, many philosophers have argued that we must derive general causal relationships and assume that instances follow general causal laws (i.e., covering laws). Singularists argue to the contrary that specific causal events are all we really know and that general relationships are unreliable abstractions (Armstrong 2004). Pearl (2000) argues that the distinction between specific causation and general causation is a matter of degree of detail in the causal model. “Thus, the distinction between type and token claims is a matter of degree in the structural account. The more episode-specific evidence we gather, the closer we come to the ideals of token claims and actual causes.” Mackie's (1965) INUS theory and Lewis's (1973) counterfactual theory were both developed to address specific causation.

Teleological Causation

Definition

Teleological causation is the idea that causes are purposeful.

Our Position

Because CADDIS provides evidence of natural causes, we do not include teleological arguments.

Background

The earliest concepts of causation were teleological (Frankfort 1946). That is, events were believed to be caused by conscious agents (gods, humans, demons, spirits, etc.). Hence, causal explanation was a matter of assigning responsibility.

One of Aristotle's four types of cause is the final cause of a thing, which is its purpose (telos).

Teleological causation in science was dismissed by Galileo in the Dialogue Concerning the Two Chief World Systems (1632). Teleology is now primarily associated with explaining human actions in psychology and the social sciences and with theology.

Temporality

Definition

Causes precede their effects. A particularly cogent explanation of the concept is provided by Renton (1993): “A cause is unable to produce its effect before it exists itself and is therefore, by definition, existentially prior to it.” Synonyms include time order and antecedence.

Our Position

Temporality is a necessary result of event causation. That is, an effect event cannot precede its causal event. There may be some confusion within agent causation. An affected entity may precede (i.e., be older than) its causal entity. For example, the coral may be older than the anchor that destroys it. However, when event temporality is violated, the candidate cause is refuted. For clarity, temporality is called Temporal Sequence in CADDIS.

Background

Temporality is on everyone's list of characteristics of causation. Most theories of causation also include temporality. For example, Suppes's (1970) theory of probabilistic causality requires that a cause raise the probability of its effect and also that it precede the effect. However, in a process model of causation, events need not be invoked, and temporality may be expressed as the simultaneous involvement of the cause and effect in the physical process rather than as time order (Wolff 2007).

A Chronological History of Causation for Environmental Scientists

This brief historical review is intended to provide a basis for deeper inquiry into causation. The history is based on individual contributors and is divided into four sections:

  • Major Pre-Contemporary Authors - (those who published before the mid-20th century) are presented chronologically, with original sources cited by title and date (when known). This section is for those who want a deep background in causation.
  • Recent Philosophy of Causation - includes professional philosophers, plus philosophical writings by statisticians and scientists.
  • Recent Epidemiology and Causation - is reviewed because human health epidemiology is the major source of methods for ecological epidemiology—including CADDIS.
  • Recent Applied Ecology and Causation - is reviewed for ecologists who want to know how other ecologists have proposed to determine causation.

Note: Recent authors are listed chronologically within disciplinary categories by their earliest relevant publication, and their works are cited conventionally.

Inevitably, this review leaves out many authors who addressed causation. Pre-contemporary authors are included if they made an important contribution to concepts of causation in the contemporary philosophy of science. Contemporary authors are included if they propose ideas of causation that are potential contributors to our approach or potentially viable alternatives.

The review also neglects non-Western philosophies. This is not because they do not address the issue of causation. Indian philosophy has been particularly concerned with the nature of causation, notably in the doctrine of Karma. However, these philosophies have not contributed to scientific concepts of causality in general or to our approach in particular.

Throughout this review, causes and effects, as logical propositions or statistical parameters, are represented as C and E, respectively.

Major Pre-Contemporary Authors

This section addresses authors who are deceased and published their relevant work prior to the mid-20th century. Authors are arranged chronologically by the dates of their most important publications on causation. All disciplines are included, but most of these authors were philosophers.

Highlights of this period begin with the development by Galileo of a scientific concept of causation involving necessary and sufficient conditions. David Hume showed that regular association could not prove causation, but provided the basis for belief that a relationship is causal. J.S. Mill was the first to make experimentation the definitive method for determining causation. John Herschel was the first to define causation in terms of a set of characteristics, which set the stage for Hill’s criteria and the CADDIS method. Koch’s postulates provide an alternative set of causal criteria. Pearson developed probabilistic causation, which is expressed as correlation. Fisher linked Pearson’s probabilistic causation with Mill’s experimental approach.

Pre-Ancients

Prior to the ancient philosophers, causation was attributed to conscious agents (gods, humans, demons, etc.). Hence, causal explanation was a matter of assigning responsibility (Frankfort 1946, Mithen 1998).

Plato (427-347 BCE)

In Phaedo, Plato attributed to Socrates the rejection of empirical explanations of why things come into being and pass away, and reliance on reason. Things are as they are because they participate in a form, but the forms are eternal and uncaused (Mittelstrass 2007). In Parmenides, he wrote “whatever becomes must necessarily become, owing to some cause; without a cause it is impossible for anything to achieve becoming.” However, he did not indicate what the causes of things might be.

Aristotle (384-322 BCE)

In Analytics and Physics, Aristotle established the deductive method in science. That is, truths can be derived as a series of consequences from a few fundamental principles.

He defined four kinds of causes: material, formal, efficient, and final. The final cause of a thing is its purpose (telos). The efficient cause (the one that acts) is how a thing happens, which corresponds most nearly to the modern concept. The material cause is that out of which a thing is made. The formal cause is the form into which the thing is made. All are aspects of causation in the sense of “why is something the way it is” (Mittelstrass 2007). Aristotle's analysis of causation was intended to be deductive. A sculpture is caused to be, because a sculptor (efficient cause) takes a piece of stone (material cause) and imposes the form of his patron (formal cause) to earn a fee (final cause).

In Metaphysics, he wrote “that which is called Wisdom is concerned with the primary causes and principles.” He differed from modern philosophers of causation in his concern with telos and in lacking a concept of a regular functional relationship. His teleology referred to an internal principle that guides natural processes, not to a supernatural director of nature. Whatever “we ascribe to chance has some definite cause” (Physics).

Roman and Medieval Periods

In imperial Rome, and in both medieval Europe and the Muslim world, commentaries on and elaborations of Plato and Aristotle dominated writings on causation (see Ch. 3-5 in Machamer and Wolters (2007)).

Galileo Galilei (1564-1642)

In Discourses and Mathematical Demonstrations Concerning the Two New Sciences (Discorsi, 1638), Galileo distinguished description from explanation. A description is an empirical law; explanation is teleological. He advocated empirical description and warned against seeking causes in the Aristotelian, teleological sense.

Galileo also made the important distinction between necessary and sufficient conditions. In Dialogue on the Two Chief World Systems (Dialogo, 1632) he wrote, “If it is true that an effect has a single primary cause, and that between the cause and the effect there be a firm and constant connection, then it necessarily follows that whenever a firm and constant alteration is perceived in the effect, there will be a firm and constant alteration in the cause.”

He also wrote, “That and no other is to be called cause, at the presence of which the effect always follows, and at whose removal the effect disappears.” This is a practical definition of Aristotle's efficient cause. It is not clear whether this is the first manipulationist theory of causation or just a counterfactual statement. Ducheyne (2006) argued that Galileo was the first to propose a manipulationist theory of causation based on his scientific practices.

In the Dialogo, Galileo acknowledged that there may be a causal complex, but stated that there is always a unitary primary cause.

Galileo established causal explanation as the heart of natural philosophy (science). On the other hand, he did not hypothesize causes when he could only identify law-like regularities. For example, he determined that the velocity of a falling object was a function of the square of time, but he refused to speculate as to whether something (a causal agent) was pulling or pushing the falling object.

Francis Bacon (1561-1626)

Bacon was the father of the inductive method in science, but he criticized induction by enumeration of confirming cases as childish (Novum Organum, 1620).

He advocated examining both positive and negative instances, but emphasized the importance of negative instances: “The induction which is to be available for the discovery and the demonstration of sciences and arts must analyze nature by proper rejections and exclusions; and then after a sufficient number of negatives, come to a conclusion on the affirmative instances.”

Bacon also emphasized predictive performance. If a hypothesis (axiom) is “larger and wider” than the existing factual basis, in the sense of making a prediction beyond it, then an experiment verifying the chancy prediction “confirms its wideness and largeness by indicating new particulars as a kind of collateral security” (Novum Organum).

He warned against four delusions: idols of the tribe (perceptual illusions), cave (personal bias), marketplace (linguistic confusion), and theater (dogmatic systems). Each of these may lead to errors in causal assessments.

Isaac Newton (1642-1727)

Causes must be verae causae, known to exist in nature (i.e., based on evidence independent of the phenomena being explained) (Philosophiæ Naturalis Principia Mathematica, 1687). His advice against framing hypotheses (Hypotheses non fingo) had inordinate influence, but apparently he meant hypotheses about unknown or supernatural causes. Gravity is one of those unknown causes. In describing the effects of gravity without hypothesizing what it is, he followed Galileo's practice.

Newton provided his “Rules of Philosophizing” in the Principia:
  • “No more causes of natural things should be admitted than are both true and sufficient to explain their phenomena.”
  • “The causes assigned to natural effects of the same kind must be, as far as possible, the same.”
  • “Those qualities of bodies that cannot be increased or diminished and that belong to all bodies on which experiments can be made should be taken as qualities of all bodies universally.”
  • “In experimental philosophy, propositions gathered from phenomena by induction should be considered either exactly or very nearly true notwithstanding any contrary hypotheses, until yet other phenomena make such propositions either more exact or liable to exceptions.”

Gottfried Leibniz (1646-1716)

In a letter to Huygens, as an argument against Newton's theory of gravitation, which was all mathematical law and not physical mechanism, Leibniz stated “The fundamental principle of reasoning is, nothing without cause” (Gleick 2003). That is, Leibniz required a causal agent.

John Locke (1632-1704)

Locke was the father of empiricism and the empirical epistemology of causation (causation is something we perceive rather than an ideal or entity), followed by Berkeley and Hume. In An Essay Concerning Human Understanding, Book II (1690), he wrote “That which produces any simple or complex idea we denote by the general name 'cause,' and that which is produced, 'effect'.” Locke also suggested an early version of inference to the best explanation.

Baruch Spinoza (1632-1677)

In Improvement of the Understanding (published posthumously), Spinoza broke with Plato, the academicians, and Descartes in arguing that things are self-caused and self-sustained. He recognized two types of efficient causes: intrinsic self-moving and self-sustaining properties (e.g., the planets exist and move without outside intervention) and extrinsic causes (e.g., a collision of an asteroid with a planet would cause its orbit to change).

George Berkeley (1685-1753)

In Concerning Motion (1721), Berkeley argued that causation is purely mental and, therefore, the real causes of motion (i.e., causal agency) are a matter for metaphysics, not mechanics. He thought that Newton had shown that there was no mechanical causation, and terms like attraction and force convey the illusion of an explanation (McMullin 2000).

David Hume (1711-1776)

In An Enquiry Concerning Human Understanding (1748) and A Treatise of Human Nature (1739), Hume combined Locke's empiricism (causation based on impressions from experience) with a neo-skeptical stance and greater attention to the issue of causation. In the Enquiry, he wrote “All reasonings concerning cause and effect are founded on experience, and all reasonings from experience are founded on the supposition that the course of nature will continue uniformly the same.”

We may think that reason and experience can prove causation, but Hume demonstrated that is not true. We must assume “the uniformity of nature” or “conformability to the past” and that assumption may not hold. We believe that the future will be like the past because the future has been like the past in the past, but that is a circular argument. We believe based on “constant conjunction” and “lively or vivid impression.” His terminology is not consistent, but causal criteria are expressed in the Treatise as contiguity, priority, and constant conjunction. He defined causation as “an object, followed by another, and where all the objects similar to the first are followed by objects similar to the second.” Hence, Hume replaced the concept of necessity in causation with regularity. The cause of a unique event cannot be determined, because there can be no constant conjunction. Therefore, singular causal events must be instances of a general causal relationship.

In addition to this regularity definition, Hume also presented in the Enquiry a counterfactual definition, “an object followed by another…where, if the first had not existed, the second had never existed.” He apparently considered this to be equivalent to the regularity definition, but Lewis (1973) showed that they are different.

Hume said that our idea of causation comes from our experience of manipulation.

Hume was skeptical about proving causation, but in the Treatise, he allowed that our inductive processes are genuinely “correspondent” to natural processes and are “essential to the subsistence of human creatures.” He called this inherent propensity to extrapolate from the past an instinct, held in common with the beasts. His solution in the Enquiry was to “explain where you cannot justify.”

He approved of Newton's physics, which quantified relationships (laws of nature) without appealing to unobserved underlying causal mechanisms (ether or phlogiston) (McMullin 2000).

Thomas Bayes (1702-1761)

Bayes argued that truth is objective, but knowledge is subjective. In particular, he accepted deductive logic but realized that it depended on the truth of premises and we typically are not certain of our premises. Hence, we must begin with our belief in the premises and then apply logic, including Bayes's theorem of conditional probability. He was more of an inferential pessimist than Hume in the sense that, while Hume showed that inductive proof is impossible, Bayes showed that deductive proof is also uncertain.

Immanuel Kant (1724-1804)

Causality is an a priori category of understanding (Critique of Pure Reason, 1781). In response to Hume, Kant tried to bridge the gap between rationalist and empiricist approaches by positing that human perception is filtered through innate categories of ideas such as time and space. Hence, “Every event is caused” is a synthetic a priori truth. This innate concept enables us to organize, label, and read experiences and generate causal laws. He followed Hume in arguing that causal laws apply to experience, not things.

Pierre-Simon Laplace (1749-1827)

In the Philosophical Essay on Probabilities, which is an introduction to The Analytical Theory of Probability (1812), Laplace presented a mechanistic and deterministic theory of causality. This theory states that, if we knew the state of the universe at a moment and had sufficient knowledge of natural laws and sufficient computational capability, we could predict future states. This determinism may seem surprising given that Laplace is one of the founders of probability theory. However, he ascribed uncertainty to our state of knowledge: “Ignorance of the different causes involved in the production of events, as well as their complexity, taken together with the imperfection of analysis, prevents our reaching the same certainty about the vast majority of phenomena.”

Georg W. F. Hegel (1770-1831)

In Science of Logic (1813 & 1817), Hegel, a German idealist, struggled with the idea of causation because the real world is so contingent and unpredictable. In particular, he argued that causation does not apply to living things, because they react to external influences and thereby modify their effect. “Whatever has life does not allow the cause to reach its effect, that is, cancels it as a cause.” Also, “That which acts upon living matter is independently changed and transmuted by such matter...Thus it is inadmissible to say that food is the cause of blood, or certain dishes or cold or damp the cause of fever, and so on.”

John Herschel (1792-1871)

In A Preliminary Discourse on the Study of Natural Philosophy (1830), Herschel, an English astronomer, physicist, and philosopher of science, advocated verae causae, causes which are known to exist in nature and not “mere hypotheses or figments of the mind,” an idea taken from Newton. However, he believed that verae causae must be agents that are observed in nature and for which we have inductive evidence that they may cause the sort of phenomenon that we are trying to explain. This redefinition of a vera causa requires direct experience.

Herschel defined five characters of causal relations:
  1. “Invariable antecedence of the cause and consequence of the effect, unless prevented by some counteracting cause.”
  2. “Invariable negation of the effect with the absence of the cause, unless some other cause be capable of producing the same effect.”
  3. “Increase or diminution of the effect with the increased or diminished intensity of the cause.”
  4. “Proportionality of the effect to its cause in all cases of direct unimpeded action.”
  5. “Reversal of the effect with that of the cause.”

He did not believe that these prove causation, but, rather, “That any circumstance in which all the facts without exception agree, may be the cause in question, or, if not, at least a collateral effect of the same cause...”

Herschel was apparently elaborating on Hume, but he appealed to the authority of Bacon and Newton. He was particularly concerned that “counteracting or modifying causes may subsist unperceived and annul the effects of the cause we seek...” He suggested the importance of natural or contrived experiments to resolve those concerns, but did not use the term or emphasize the idea.

He dealt with multiple causes by suggesting that, as each cause is understood, its effects can be “subducted,” leaving “a residual phenomenon to be explained.”

William Whewell (1794-1866)

Whewell was an English astronomer, historian, and philosopher of science but a Kantian. In The Philosophy of the Inductive Sciences Founded Upon Their History (1840), he suggested that we should accumulate observations and then derive a hypothesis that explains the data. He called this colligation, or seeing facts in a new light. Tests of colligations are that they
  1. Are empirically adequate, accounting for all available data.
  2. Provide successful predictions.
  3. Provide consilience of inductions (bring unrelated hypotheses together), which implies that simplicity is the goal.

He condemned Darwin for starting with his hypothesis, then gathering cases to support it. However, following Kant, he believed that the evidence simply stimulated the mind to recognize self-evident a priori truths. Hence, he opposed Herschel and Mill, who believed that the fundamental ideas of science were inductions from experience (Hull 1983).

He developed the concept of consilience—the results of one science should be consistent with those of other sciences. He also applied consilience to the process of identifying theories that cover a variety of phenomena or types of evidence.

As a Kantian, he believed that knowledge was possible only because the mind supplied certain fundamental ideas such as force, time, space, mass, and causality (Scarre 1998).

John Stuart Mill (1806-1873)

As an empiricist, Mill argued that no knowledge exists, independent of experience (Mill 1843). He avoided Hume's problem of induction by assuming uniformity, which he said was justified by experience of the universality of causation (causal relationships are consistent across space and time). He either did not appreciate the circularity of that justification or did not care.

Mill believed that axioms of science are alternative hypotheses that are eliminated by experiment or observation. Axioms are not derived by elimination, but by enumerative induction (simple generalizations from experience).

He identified four qualitative classes of causal explanation.
  • The best "is a law or body of laws that gives necessary and sufficient conditions for any state of a system"
  • Next best is necessary and sufficient conditions for a state.
  • Next, necessary conditions for a state.
  • Finally, mere regular association, “the method of concomitant variation.”

Association comes from observation; the others require experiments. Mill was the first philosopher of science to clearly argue the priority of experiments over uncontrolled observations. He wrote that “…we have not yet proved that antecedent to be the cause until we have reversed the process and produced the effect by means of that antecedent artificially, and if, when we do, the effect follows, the induction is complete…”

However, Mill believed that social sciences are too complex for experimentation. Hence, we must deduce laws for wholes from laws for parts and then test against history. (This approach assumes there are no emergent properties.)

He argued against Whewell (whom he opposed as a Kantian idealist) and for experiments by pointing out that more than one hypothesis might account for observations so only a controlled manipulation could distinguish among them (Scarre 1998).

He did not believe that predicted associations were any better than observed prior associations.

He was a Monist, that is, he believed that there is only one cause of an effect, but in most cases, it is a network of conditions. This belief is associated with the idea that causes are unconditional and necessary. In the System of Logic he wrote, “That which is necessary, that which must be, means that which will be, no matter what supposition we may make in regard to all other things.” Hence, like Hume, he believed that necessity was the feature that distinguishes causation from other associations.

He derived his philosophy of science after he abandoned botany for chemistry and physics (Scarre 1998), perhaps because of the conceptual difficulty of causation in complex and hierarchical living systems.

Charles Darwin (1809-1882)

Darwin tried to follow Herschel but could not resist framing hypotheses. In the end, he was unapologetic. He admitted that he framed his theory of coral reef formation before he had seen a reef and then performed observations needed to confirm his hypothesis (Hull 1983).

“The line of argument often pursued throughout my theory is to establish a point as a probability by induction and to apply it as a hypothesis to other parts and see whether it will solve them” (quoted in Hull 1983). This is analogous to using observations to derive potential causes and then making predictions about results of proposed studies or about existing observational data not used in hypothesis formation.

“He [Hutton] is one of the very few who see that the change of species cannot be directly proved, and that the doctrine must sink or swim according as it groups and explains phenomena” (quoted in Hull 1983).

Charles Sanders Peirce (1839-1914)

Peirce was a geologist, astronomer (U.S. Coast and Geodetic Survey), philosopher (the founder of pragmatism; followers include William James, Thorstein Veblen, and John Dewey), mathematician, and an early contributor to semiotics. His “principle of pragmatism” is that the meaning of a proposition depends on its practical consequences. That is, how might it conceivably modify our conduct? Hence, pragmatism is often described by the aphorism “thinking is for doing.” His writings were voluminous but mostly unpublished in his lifetime, so the standard reference is the Collected Papers.

Peirce argued that there are three types of inference: deduction, induction, and abduction. Abduction, which is his creation, is argument to the best explanation. That is, the hypothesis that best explains the existing information is most likely to be true. The strength of evidence analysis in the Stressor Identification Guidance and in CADDIS is an example of abductive inference.

Peirce strung the three inferential approaches together into a general scientific method, which he termed the inductive method (abductive inference plus the hypothetico-deductive method).
  1. Abduction is used to frame a most likely explanatory hypothesis.
  2. Deduction derives testable consequences from the hypothesis.
  3. Induction evaluates the hypothesis in light of the observed consequences.

According to the Oxford English Dictionary (1971), he also was the first to use the term “weight-of-evidence” in print. He provided a formal definition for the weight-of-evidence for two hypotheses in terms of the odds ratio.
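
A common modern rendering of that odds-ratio definition (our reconstruction, not Peirce's own notation) is the logarithm of the likelihood ratio, i.e., how much more probable the evidence is under one hypothesis than under the other:

```python
import math

def weight_of_evidence(p_e_given_h1, p_e_given_h2):
    """Log likelihood ratio: the weight that evidence E lends H1 over H2."""
    return math.log(p_e_given_h1 / p_e_given_h2)

# Evidence that is four times as probable under H1 as under H2
print(weight_of_evidence(0.8, 0.2))  # log(4), about 1.39; positive favors H1
```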

Peirce recognized that the interpretation of probability as frequency relies on the problem consisting of a series of repeated events (coin tosses, card draws, etc.). However, we want to make probabilistic arguments about unique events. So, he developed the concepts of confidence and confidence intervals, which were made rigorous by Jerzy Neyman.

He believed that we could gain confidence in an inference based on the repetition of measurements or observations, because they will converge. Hence, they are self-correcting. Through “collective enquiry,” they will gradually stabilize on a reliable answer. Hence, science is a belief of the community (see the discussion by Hacking (2001)). However, he was a philosophical objectivist and opposed subjectivism in that sense.

In Illustrations of the Logic of Science, he propounded tychism: there is absolute chance in the universe, so the laws of nature are probabilistic. This belief was due to the influence of Darwin, but Peirce defended the role of chance in evolution as part of a pattern in science in general, citing gas laws and variance in astronomical observations.

Robert Koch (1843-1910)

Koch's postulates (also known as Henle-Koch postulates) provide a standard of proof for identifying pathogens responsible for diseases. However, Koch never wrote them down, so they must be inferred from cases in which he applied them. The four-step version is:
  1. The microorganism in question must be shown to be consistently present in diseased hosts.
  2. The microorganism must be isolated from the diseased host and grown in pure culture.
  3. Microorganisms from pure culture, when injected into a healthy susceptible host, must produce the disease in the host.
  4. Microorganisms must be isolated from the experimentally infected host, grown in pure culture, and compared with the microorganisms in the original culture.

Henri Poincaré (1854-1912)

In Les Méthodes Nouvelles de la Mécanique Céleste (1892), Poincaré anticipated a bit of chaos theory but not the role of nonlinearity. He wrote “A very small cause which escapes our notice determines a considerable effect that we cannot fail to see, and then we say that the effect is due to chance...it may happen that small differences in the initial conditions produce very great ones in the final phenomenon.”

Karl Pearson (1857-1936)

In The Grammar of Science, 1st edition (1900), Pearson took Galton's concept of “co-relation,” developed it as a quantitative tool, and made causation, at most, a subset of it. He argued that everything we can know about causation is contained in contingency tables: “Once the reader realizes the nature of such a table, he will have grasped the essence of the concept of association between cause and effect” (The Grammar of Science, 3rd edition (1911)). Hence, causation is probabilistic consistency of association.
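
For instance, association in Pearson's sense can be read directly from a 2x2 contingency table; the odds ratio below is one standard summary of such a table (the counts are hypothetical):

```python
# Hypothetical 2x2 table of site counts:
#                 effect   no effect
#   cause           30        10
#   no cause        12        28
a, b, c, d = 30, 10, 12, 28

# Odds ratio: how much higher the odds of the effect are when the
# cause is present; 1.0 would indicate no association.
odds_ratio = (a * d) / (b * c)
print(odds_ratio)  # 7.0: strong association, but not proof of causation
```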

Bertrand Russell (1872-1970)

In Mysticism and Logic (1912), Russell was dismissive of causality: “The law of causality...like much that passes muster among philosophers, is a relic of a bygone age, surviving like the monarchy, only because it is erroneously supposed to do no harm.”

He pointed out that the laws of physics are symmetrical (F=ma and a=F/m) so F can be the cause of acceleration (the conventional interpretation) or its effect. In contrast, causation is taken to be asymmetrical. Why do we say that a force causes acceleration and not the other way around? For Russell (1913, cited in Sloman 2005), invariant natural laws are simply mathematical, not causal, because they do not imply agency.

In Principia Mathematica (1910) with A.N. Whitehead, Russell attempted to establish a formal symbolic logic based on set theory, which apparently included all of mathematics and formal logic but not causation. They argued that there is no one relationship that encompasses everything we call causation. The best we can do is “material implication” (the presence of A implies the presence of B).

In Human Knowledge (1948), Russell continued to reject the conventional concept of causation but developed a theory of process causation based on continuous physical processes that form causal lines. “When two events belong to one causal line the earlier may be said to 'cause' the latter. In this way laws of the form 'A causes B' may preserve a certain validity.”

F. A. Moritz Schlick (1882-1936)

Schlick, a philosopher trained in physics, was a founder of the Vienna Circle and an originator of logical positivism. He made Hume's regularity a basis for assuming causation. “Every ordering of events in the temporal direction, of whatever kind it may be, is to be viewed as a causal relation. Only complete chaos, and utter lawlessness, could be described as non-causal occurrence, as pure chance; every trace of order would already signify dependence, and hence causality” (Schlick 1931). Epistemologically, he believed that “the true criterion of regularity, the essential mark of causality, is the fulfillment of predictions” (Schlick 1931). This fits with the logical positivists' belief that the meaning of a proposition is its method of verification.

Ronald Fisher (1890-1962)

Fisher argued that experiments in which treatments are randomly assigned to replicate systems can demonstrate causal relationships, but the relationship is probabilistic due to variance among replicates and treatments and sampling error (Fisher 1937).

He was operationally a falsificationist, providing a method for rejecting a null hypothesis (the hypothesis that there is no relationship between C and E). He allowed acceptance of a causal hypothesis by assuming that if the null hypothesis is rejected, there is only one causal alternative that can then be accepted. This may be justifiable in well-designed and well-controlled experimental systems but not in the uncontrolled real world. He was influenced by Hume and attempted to design an inferential method in which regularity of association is indicative of causation (Armitage 2003).
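
Fisher's logic can be illustrated with a small randomization test on hypothetical data: if treatments were assigned at random and the null hypothesis of no effect is true, the labels are exchangeable, so the observed difference in means can be compared with its permutation distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical responses from a randomized experiment
treated = np.array([4.1, 5.0, 4.8, 5.3, 4.7])
control = np.array([3.2, 3.9, 3.5, 4.0, 3.6])
observed = treated.mean() - control.mean()

# Re-randomize the labels many times under the null hypothesis.
pooled = np.concatenate([treated, control])
diffs = []
for _ in range(10000):
    perm = rng.permutation(pooled)
    diffs.append(perm[:5].mean() - perm[5:].mean())

# One-sided p-value: how often chance alone matches the observed difference.
p = np.mean(np.array(diffs) >= observed)
print(observed, p)  # difference ~1.14, p < 0.01: reject the null hypothesis
```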

Fisher famously opposed the idea that smoking causes cancer. He argued that smoking does not cause lung cancer, because most smokers do not contract lung cancer. He stated that smoking could be the result of lung cancer (e.g., tobacco smoke might relieve symptoms of early lung cancer) or that smoking and lung cancer may have a common cause (e.g., a genetic factor) (Fisher 1959, Salsburg 2001).

Sewall Wright (1889-1988)

A founder, with R.A. Fisher and J.B.S. Haldane, of theoretical population genetics, Wright was the first to publish a causal network model and developed path analysis to quantify it (Wright 1921). He began by modeling the determination of a phenotypic trait (coat pattern in guinea pigs) by incorporating genetic and chance environmental factors.

Karl Popper (1902-1994)

In The Logic of Scientific Discovery (1934 & 1968), Popper famously argued that we cannot prove hypotheses, only disprove them. He denied induction and emphasized the deduction of testable consequences of hypotheses. No number of corroborating cases increases the confidence in a cause, because we may be making the same error repeatedly or observing the same special case. Consistent cases “corroborate” but do not “confirm.” Hence, he was a falsificationist but limited his arguments almost entirely to deterministic science and “universal theories.” He held that, if action is needed, we may tentatively accept those theories that have withstood the most severe tests (which is effectively induction).

He separated the psychological problem of induction from the logical problem. The psychological problem is, why do we make inductive generalizations when they are not justified? The answer is that learning from experience works for us as it does for other animals and even bacteria. Hence, it is a selectively advantageous strategy. That leaves the logical problem, which Popper said we can avoid by avoiding induction and using deduction. That is, (1) make a conjecture, c, (2) deduce a testable implication of c, (3) perform the test, (4) reject (eliminate) c if the test fails, else tentatively accept c.

Hans Reichenbach (1891-1953)

Causality was central to Reichenbach's attempt to create a philosophical explication of relativity theory and quantum mechanics. In The Direction of Time (1956), he espoused a causal theory of time and space in which the directionality of time was due to the directionality of causation. It is a more philosophical version of the causal structure of the Lorentzian manifold, presented as a network of potentially causal interactions of events in space-time.

Reichenbach also developed the common cause principle. A correlation between events E1 and E2 indicates that E1 is a cause of E2, or that E2 is a cause of E1, or that E1 and E2 have a common cause. It has been abbreviated as “no correlation without causation” (Pearl 2000). Reichenbach explained that a common cause screens off the effects. That is, when we account for the common cause, the effects are no longer correlated. If the effects are not fully screened off, the identified common cause is incomplete. The causal Markov condition is a generalization of this principle for analysis of larger numbers of probabilistically associated variables. Reichenbach's networks and common causal principle have been important to the development of models of causal networks (Pearl 2000, Spirtes et al. 2000).
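
Screening off is easy to demonstrate numerically. In the hypothetical simulation below, a common cause Z drives both E1 and E2; the two effects are strongly correlated marginally, but the correlation of their residuals after regressing out Z is near zero:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Common cause Z drives both effects; E1 and E2 do not affect each other.
Z = rng.normal(size=n)
E1 = 0.9 * Z + rng.normal(scale=0.5, size=n)
E2 = 0.9 * Z + rng.normal(scale=0.5, size=n)

def residual(x, z):
    """Remove the linear effect of z from x (account for the common cause)."""
    beta = np.cov(x, z)[0, 1] / np.var(z, ddof=1)
    return x - beta * z

print(np.corrcoef(E1, E2)[0, 1])           # ~0.76: correlated without causing each other
print(np.corrcoef(residual(E1, Z),
                  residual(E2, Z))[0, 1])  # ~0: Z screens off E1 from E2
```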

Carl Hempel (1905-1997)

Like Reichenbach and Schlick, Hempel was a member of the Vienna Circle. With Oppenheim in 1948, he formalized the covering law concept of causal explanation, calling it the deductive-nomological model. That is, a phenomenon requiring an explanation (the explanandum) is explained by premises (the explanans) consisting of at least one scientific law and suitable facts concerning initial conditions. Later, in Aspects of Scientific Explanation, he extended the concept to inductions from probabilistic observations, the inductive-statistical model. This theory made explanations and predictions equivalent by making them implementations of the same causal laws.

Recent Philosophy of Causation

Although this section is dominated by professional philosophers, it also includes philosophical writings by statisticians such as I.J. Good and scientists such as Steven Weinberg. It is organized chronologically by the date of the author's publication that is most important to causal analysis.

Herbert Simon showed that the understanding of complex systems is typically insufficient to develop reliable causal models, but sufficient causal understanding for decision making is attainable. John Mackie clarified the concept of a set of causes which together are sufficient. Judea Pearl and other advocates of causal network modeling are currently highly influential and their methods are being used in ecology.

Herbert Simon (1916-2001)

Simon was a Nobel Laureate and pioneer of econometrics, quantitative political science, computer science (for which he received the Turing Award), theory of science, operations research, and artificial intelligence. He described himself as a monomaniac about decision making. He also made important contributions to analysis of causation including pioneering the application of Wright's path analysis method to other types of causal networks (Simon 1952, Simon 1954, Simon and Rescher 1966). He argued that we can eliminate the philosophical objections to causality by treating it as a property of models. That transfers the problem to defending the legitimacy of causal models, but for this, he appealed to the reassuring respectability of models of electrical and mechanical systems.

Simon also recognized that many systems are too complex to model with sufficient reliability to guide decision making. For such cases, he developed the theory of bounded rationality, dealing with cases of incomplete knowledge (Simon 1983). The key original concept of the theory is satisficing, an alternative to optimizing, in which solutions are found that meet a defined set of criteria. He argued against Pareto that people ordinarily make good enough decisions, not optimal decisions. He also argued that satisficing is consistent with Darwinian evolution. That is, natural selection does not produce the fittest, but those fit enough to persist.

Mario Bunge (1919-)

Beginning in 1959, the Argentine-born physicist and now Canadian philosopher, M. Bunge, tried to revive causality in the face of quantum mechanics and logical positivism in Causality and Modern Science (Bunge 1979). He provided the definition, “Causation is not a category of relation among ideas, but a category of connection and determination corresponding to an actual trait of the factual (external and internal) world, so that it has an ontological status...”

His theses are:
  1. The causal relation is a relation among events
  2. Every effect is somehow produced (generated) by its cause(s)
  3. The causal generation of events is lawful
  4. Causes can modify propensities (in particular, probabilities), but they are not propensities
  5. The world is not strictly causal

Although he attempted to rigorously define causation in a generally applicable way, he recognized that “Almost every philosopher and scientist uses his own definition of cause...”

Clive W. J. Granger (1934-2009)

Beginning in 1962, Granger (2007) developed an information theoretical definition of cause, known as G-causality, to identify causes in time series data. He pointed out that path models by Pearl, Glymour, and others have no time sequence; “Exactly the same graph with the arrows reversed will have the same likelihood” (Granger 2007).

Granger disagrees with manipulationist theories of causation because they assume that the manipulation does not change the causal relationships. That is not true of systems that include humans, and possibly not of any living system.

He defined a good causal theory as one that makes good predictions and, in particular, leads to good decisions in realistic settings. “In an applied area, the effect of a causal definition on the decisions taken by a decision maker in a realistic setting is the only way that its usefulness can be discussed” (Granger 2007).

J. L. Mackie (1917-1981)

Mackie (1965) developed a more realistic version of Hume's regularity account of causation by incorporating the “plurality of causes.” It may be summarized as C is a cause of E if and only if
  1. C and E are both actual
  2. C occurs before E
  3. C is an INUS condition

An INUS condition is an Insufficient but Necessary part of an Unnecessary but Sufficient set of conditions. For example, a fish kill may be caused by elevated copper concentrations (an insufficient condition), along with low pH, susceptible fish, and low levels of dissolved organic matter (the sufficient set). The set would not cause a kill without copper, so copper is a necessary part. However, the set is unnecessary because other sets of conditions also cause fish kills. Mackie recognized that we will not specify all members of the sufficient set; some must be treated as background. He called these unspecified conditions the “causal field.” This approach was more fully developed in Mackie (1974).
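The INUS structure can be made concrete with a small sketch using the fish-kill example above. This assumes each listed sufficient set is minimal (every member is needed for sufficiency); the sets and the helper function are illustrative, not Mackie's notation.

```python
# Each set is a (minimal) sufficient set of conditions for a fish kill.
sufficient_sets = [
    {"copper", "low pH", "susceptible fish", "low DOM"},  # one route to a kill
    {"ammonia", "high temperature"},                      # another route
]

def is_inus(factor, sets):
    """A factor is an INUS condition if some sufficient set needs it
    (a necessary part), other sufficient sets exist without it (the set
    is unnecessary), and the factor alone is not sufficient."""
    needs_it = any(factor in s for s in sets)
    without_it = any(factor not in s for s in sets)
    insufficient = all(s != {factor} for s in sets)
    return needs_it and without_it and insufficient

print(is_inus("copper", sufficient_sets))  # True
```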

Patrick Suppes (1922-2014)

Suppes (1970) developed a probabilistic theory of causality in which C is identified as a prima facie cause if
  1. C precedes E
  2. C is real [i.e., P(C) > 0]
  3. C is correlated with E or raises the probability of E [i.e., P(E|C) > P(E)]

In addition, the relationship must be non-spurious. That is, the relationship must not be explained by any third variable. He defined C and E as events.
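Suppes' conditions translate directly into estimates from event records. A minimal sketch follows, assuming the records already honor the time order of condition 1; the data and function name are invented for illustration.

```python
def prima_facie(records):
    """records: list of (c_occurred, e_occurred) booleans, where any
    occurrence of C precedes the window in which E is scored."""
    n = len(records)
    p_c = sum(c for c, _ in records) / n                      # P(C)
    p_e = sum(e for _, e in records) / n                      # P(E)
    p_e_given_c = (sum(1 for c, e in records if c and e) /
                   max(1, sum(c for c, _ in records)))        # P(E|C)
    # Conditions 2 and 3: C is real and raises the probability of E.
    return p_c > 0 and p_e_given_c > p_e

print(prima_facie([(True, True), (True, True), (False, False), (False, True)]))
```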

He argued that experiments can demonstrate all of these criteria. They demonstrate the reality of the cause and time sequence by imposing the cause on experimental subjects. The results demonstrate the difference in probability given the imposed cause. Good experimental design with replication and randomization eliminates spurious relationships.

Suppes also responded to Russell by pointing out that current physics often refers to causation. This is particularly true of complex physical systems, which are not readily characterized by a mathematical formula. He suggested that in such cases, causation is an abbreviation for the set of processes that determine the effect.

David Lewis (1941-2001)

The modern counterfactual theory of causation dates to Lewis (1973). Counterfactual theory (had C not occurred, E would not have occurred) is an alternative to regularity analysis (Humean association or constant conjunction—E occurred because C occurred). Lewis's solution to the problem of defining the counterfactual situation was to posit possible worlds, which he considered to be actual and not just conceptual conveniences. He distinguished between counterfactual dependence between propositions and causal dependence, which applies the same logic to events (see also his chapters in Collins et al. (2004)).

Counterfactual theory suffers from cases of preemption (C2 would have been the cause but C1 acted first) and overdetermination (C1 and C2 both acted) in which neither is a counterfactual cause.

Donald Rubin (1943-)

Donald Rubin developed a theory of causality for observational studies based on Neyman's concept of potential outcomes (Rubin 1974). In this approach, the effect is defined as the difference between results for two or more treatments of a unit, only one of which is observed. Various statistical techniques are used to estimate those differences, based on the observed outcomes and covariates. His potential outcomes concept, which is known as the Rubin Causal Model, is particularly popular in psychology and the social sciences.
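The potential-outcomes arithmetic can be sketched minimally as follows, assuming random assignment so that a difference in group means estimates the average treatment effect; the data are invented.

```python
import statistics

# For each unit, only one of the two potential outcomes is ever observed,
# so the unit-level effect Y(1) - Y(0) must be estimated from groups.
treated   = [7.2, 6.8, 7.5, 7.0]   # observed outcomes under treatment
untreated = [5.9, 6.1, 5.7, 6.3]   # observed outcomes under control

# Estimated average treatment effect: E[Y(1)] - E[Y(0)].
ate_hat = statistics.mean(treated) - statistics.mean(untreated)
print(round(ate_hat, 2))
```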

Frederick Mosteller (1916-2006) and John Tukey (1915-2000)

In their classic text on regression analysis, Mosteller and Tukey (1977) recognized that regression models and other statistical associations do not demonstrate causation. They suggested that the following ideas are needed to support causation.
  1. Consistency—Other things being equal, the relationship between C and E is consistent across populations in direction and perhaps in amount.
  2. Responsiveness—If we intervene and change C for some individuals, a property E will change accordingly.
  3. Mechanism—C is related, often step by step, with E, in a way that it would be natural to say “this causes that.”

Wesley Salmon (1925-2001)

Salmon (1984) proposed a theory of physical causation based on causal processes and updated it in Salmon (1994, 1998). It involves the transmission of an invariant or conserved quantity (e.g., charge, mass, energy, and momentum) in an exchange between two processes. For example, the breaking of a window by a baseball can be characterized as the exchange of linear momentum between the ball in flight and the window at rest. Salmon considered this theory to be superior to the counterfactual theory (i.e., the window would not have broken if not for being hit by the ball).

Salmon (1998) described two accounts of probabilistic causation. Aleatory causation involves physical processes that may be probabilistic. Statistical causality involves statistical derivation of probabilities of putatively causal associations. Statistical causality is applied to general causal relationships. Aleatory causality, however, can be applied to specific cases as well as general relationships.

Richard Miller

Miller considered causation to be a primitive concept like number or art. Each science has a core concept of acceptable causation, which is extended by subsequent work, just as Jackson Pollock extended the concept of art (Miller 1987). Over time, a science's core causes are extended to create new accepted causes. There are no a priori principles for justifying causal attribution. New causes come from associations and relevant principles, which are themselves sustained by empirical findings or core definitions. Hence, because the list of possible causes is potentially infinite, we should begin by asking what causes are adequate given the circumstances at hand. For example, what must be part of the cause, and what can be considered background? The type of answer that is acceptable depends on the field. Causes must conform to an appropriate “standard causal pattern.”

Marjorie Grene

Hierarchy theory shows that causation is a matter of trios of levels: focal, lower, and higher (Grene 1987). For reproduction, the focal level (organisms) performs reproduction, the lower level (genes) determines what is reproduced, and the higher level (demes) constrains the amount of reproduction. There are two types of evolutionary biological hierarchies: genealogical (taxonomic) and ecological (economic-type interactions).

Paul Humphreys

Humphreys (1989) presented a probability raising theory of causal explanation in science. C is a cause of E if the occurrence of C increases the probability of E when circumstances are compatible. He based his concept of probability on the lack of knowledge of all of the contributing factors, not on frequency. As science increases knowledge of the causal factors in a system, the causal model approaches determinism.

Judea Pearl

Pearl (2009) has been concerned with imparting concepts of causation in artificial intelligence. His fundamental definitions are:
  • Causation = encoding of behavior under interventions
  • Interventions = surgeries on mechanisms
  • Mechanisms = stable functional relationships = physical laws, which are sufficient to determine all events of interest.

These definitions were summarized as “Y is a cause of Z if we can change Z by manipulating Y.” Pearl's scheme for causal computation consists of directed acyclic graphs (DAGs—box and arrow diagrams with no loops) and equations or probability values defining the relationships between nodes in the DAG. His examples of causal models included circuit diagrams and Sewall Wright's phenotype determination. Pearl presented his models as bases for counterfactual analyses, but he argued that other counterfactual analyses fail to properly account for confounding by covariates.
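Pearl's "intervention as surgery on a mechanism" can be sketched with an invented three-node model in which a background variable W affects both C and E; fixing C by intervention cuts the arrow from W to C, so the downstream distribution of E reflects manipulation rather than mere observation.

```python
import random

def sample(do_c=None):
    w = random.gauss(0, 1)                       # background condition
    # Surgery: do(C = c) replaces C's mechanism with a constant.
    c = do_c if do_c is not None else 0.8 * w + random.gauss(0, 1)
    e = 1.5 * c + 0.5 * w + random.gauss(0, 1)   # E's mechanism is untouched
    return e

random.seed(0)
intervened = [sample(do_c=2.0) for _ in range(10000)]
print(sum(intervened) / len(intervened))   # ~3.0, i.e., 1.5 * 2 (W averages to 0)
```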

Michael Ruse

Ruse (1998) argued that causation is an epigenetic rule; those who inferred causes left more offspring. This differs from Kant's a priori concepts in that it is contingent, not necessary. It is Humean. “Hume's propensities correspond exactly to Wilson's epigenetic rules” (Ruse 1998).

Colin Howson

Howson (2000) developed a response to Hume's argument that induction is circular. (We believe that what has happened in the past will happen in the future because, in the past, the future turned out to be like the past.) To avoid circularity, we must insert some assumption, which Howson claims takes the form of a Bayesian prior. “Inductive reasoning is justified to the extent that it is sound, given appropriate premises. These consist of initial assignments of positive probability that cannot themselves be justified in any absolute sense.”

Steven Weinberg

Weinberg is a Nobel laureate in physics who indulges in philosophy of science. He argued that physicists explain regularities while biologists, meteorologists, historians, etc. must explain individual events or phenomena (Weinberg 2002). Hence, physicists can explain things by deducing them from fundamental laws, which are not causes in the usual sense. A fundamental problem for others is to decide which of the many things that affect the state of the environment or other complex system will be considered the cause.

Nancy Cartwright

Cartwright provided often metaphysical and always logical critical analyses of causation in a series of papers for the London School of Economics (Cartwright 2003a, 2003b, 2003c) and a collection of essays (Cartwright 2007). She reviewed the currently dominant accounts of causal laws and found them all wanting: each is legitimate but highly limited in scope by its inherent assumptions. She concluded that there is more than one type of cause and advocated thick causal terms and laws; rather than saying “cause,” use “smother,” “compress,” “attract,” etc.

Cartwright (2007) argued that those who make causal claims must answer three questions. (1) What do they mean? (usually answered by philosophers) (2) How do we confirm them? (usually answered by methodologists) (3) What use can be made of them? (usually answered by policy makers or their consultants). She further argued that these three activities should be integrated and subjected to consistent analyses. For example, inductive analyses of causation in real systems may justify causal hypotheses if assumptions are met.

James Woodward

Woodward (2003) provided a counterfactual theory of causation based on manipulation. That is, we know that E would not have occurred without C because we have withdrawn or added C, because we believe that we could have performed that manipulation, or because we can imagine performing the manipulation. He refers to this extended manipulationist concept as intervention. In this theory, a manipulationist can provide a causal explanation of the extinction of dinosaurs by stating that, had someone diverted the asteroid, the extinction would not have occurred. This is equivalent to Pearl's interventions in network models, but Woodward is concerned with the meaning of causal claims rather than the use of models to identify causal relationships.

Robert Northcott

Northcott (2003) proposed two definitions of causal strength:

Difference magnitude—How much effect did C have? For example, if the elevation in temperature were eliminated, how much would that change the biotic community?
Potency magnitude—How much did C matter? For example, temperature may be capable of impairing a community, but, if the effect is defined as impairment status, and if low flow is impairing the community, then increased temperature does not matter.

He subsequently contrasted analysis of variance as a measure of causal strength with an explicitly causal model: E(C1 & W) - E(C0 & W) (Northcott 2008), where C1 is the actual level of the agent, C0 is the baseline level of the agent, and W is the background conditions. Note that C0 is not the counterfactual absence of C but rather a chosen baseline, because absence of the cause is nonsense for some causes, such as temperature. He stated that this definition is more intuitive, that it is more flexible, and that, unlike ANOVA, it does not give nonsense results. He attributed ANOVA's emphasis on variance rather than differences to Fisher's reaction to logical positivism and its dismissal of causation.
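A minimal sketch of the difference measure follows; the response function and numbers are invented stand-ins for a real system, not Northcott's example.

```python
def effect(level_c, background):
    # Hypothetical response function standing in for the real system.
    return 2.0 * level_c + background

# Actual temperature C1, chosen baseline C0, fixed background conditions W.
c1, c0, w = 28.0, 20.0, 5.0

# Causal strength as a difference: E(C1 & W) - E(C0 & W).
strength = effect(c1, w) - effect(c0, w)
print(strength)   # 16.0: how much effect the elevation from C0 to C1 had
```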

Philip Dawid

Dawid is a statistician with an interest in the philosophy and logic of data analysis. He developed a Bayesian decision theoretic approach to the analysis of causal relationships, which he believed corrects some conceptual problems in the approaches of Rubin, Pearl, and others (Dawid 2000, 2004). In particular, he argued that counterfactual analyses are metaphysical rather than scientific. He also argued that causality is a theoretical concept, in Popper's sense, that only indirectly relates to the physical world. However, the link can be formalized as personal belief defined in terms of de Finetti's exchangeability theory. Probabilistic causal hypotheses can be formulated and tested by calibrating the probabilistic hypothesis against actual outcomes in the world using Borel criteria (i.e., only P = 0 or 1 have direct external referents, so we should create an event to which our theory assigns P = 1 and attempt to falsify it by finding that the event does not occur).

Jim Bogen

Bogen (2004) argued that the regularists are wrong; neither factual nor counterfactual regularity is necessary, because causes may act sporadically. His example was that “women bear children” is true but not invariant. Hence, “the goodness of a causal explanation of one or more cases does not depend in any interesting way upon how many other cases satisfy the relevant generalities.”

D.M. Armstrong (1926-2014)

Armstrong (2004) espoused a singularist theory of causation. Rather than relying on Humean regularity, we should determine the cause in each singular incident. He referred to this as fixing the reference; that is, C causes E in this particular circumstance. Causation is a “theoretical entity” (a term from Menzies), a primitive defined by the platitudes of folk psychology. “Causation is that intrinsic relationship between singular events for which the causal platitudes hold.” The platitudes include regularity, agency (manipulative theories), counterfactual dependence, and probability raising. That is, we know it when we see it.

Armstrong rejected probabilistic causation: in any instance, C causes E, or it does not. However, we still have the concept of a “probability of causing” for general causation.

Steven Sloman

Sloman (2005) advocated Bayesian causal models for cognition (i.e., Bayesian networks).

Contrary to Gigerenzer, he argued that Bayesian causal models are models of the way we think about and identify causal relationships. His argument includes the following points.
  • Causal relations associate events with other events.
  • Mechanisms take causes and induce effects.
  • “No matter how many times A and B occur together, mere co-occurrence cannot reveal whether A causes B, or B causes A, or something else causes both.” Hence, correlation leads only to a class of Markov-equivalent models.
  • Every arrow in a causal graph is a cause.
  • We have an inherent concept of causality that is based on contingency rather than association (i.e., C predicts E).
  • The advantage of experiments is that experimenters manipulate one thing at a time. Their disadvantage is that they manipulate only one thing.
  • The disadvantage of Bayesian network models is that they have difficulty representing continuous processes and cannot represent feedbacks.

Phillip Wolff

Wolff (2007) is an experimental psychologist concerned with how people actually represent causation when making judgments about the causal nature of events. He criticized “dependency models” of causation (i.e., counterfactual and probability models) and espoused his own version of physical process models (like those of Salmon and Dowe), which he terms “dynamics models” after Talmy. He argued that people routinely infer physical dynamics when they observe kinetic interactions and they view the dynamic processes as causation. People view nonphysical (i.e., psychological and social) interactions analogously, as resulting from inferred dynamics. He supported this argument with results from experiments in which subjects view animations and report why the observed events occurred. He stated that dependency models simply capture “the side effects of causation.”

Christopher Hitchcock

Hitchcock (2007) presented various versions of causal pluralism. The most relevant is Philosophical Methodological Pluralism. Just as science uses various methods to elucidate causal relationships for different problems, he advocated using various philosophical concepts of causation depending on the problem. All fail in some circumstances (he gives examples).

Steven Pinker

Pinker (2008) is a linguistic Kantian. That is, he adopted Kant's a priori categories (space, time, substance, and causation) and made them basic linguistic concepts. He rejected the idealistic aspects of Kant (we can engage the real world directly, and the categories are not more real than the world we perceive) but argued that we tend to filter experience through those categorical concepts. The evidence that the concepts are in our heads rather than in the world is that, when we apply them, they work most of the time, but we encounter paradoxes when we analyze atypical cases, advanced science, or philosophical thought experiments. He showed how paradoxes arise in both associationist and counterfactual accounts of causation.

Top of Page

Recent Epidemiology and Causation

The attempts of epidemiologists to determine the causes of observed patterns of disease and injury are closely analogous to the attempts of ecologists (ecoepidemiologists) to determine the causes of biological impairments in the nonhuman environment. However, epidemiologists are concerned more with generic causation (does trichloroethylene cause cancer?) than specific causation (did trichloroethylene cause the cancer cluster in Woburn, Massachusetts?). Ecologists are more likely to address individual cases. This brief review of the voluminous literature on causation in epidemiology is intended to represent the major themes and to emphasize the literature that addresses specific causation. It is organized chronologically by the date of the author's first publication that is important to causal analysis.

The causal criteria of Hill, Susser, and others are a major source of CADDIS’s inferential approach. However, most epidemiologists have focused on developing sophisticated estimates of the degree of association between effects and putative causes. Federica Russo and Jon Williamson pointed out that, in practice, causation is accepted when two criteria are met: a clear correlation and a well-defined mechanism.

J. Yerushalmy and Carroll E. Palmer

Yerushalmy and Palmer (1959) generalized Koch's postulates to causes of disease other than pathogens.
  1. The cause is more common in people with the disease.
  2. The disease is more common in people with the cause.
  3. The association must be tested in other cases for validity (requires specificity).

This weak version for epidemiologists does not require experimental testing, only verification in another case.

U.S. Department of Health, Education, and Welfare

Five criteria were applied to tobacco smoke as a cause of cancer by the Surgeon General's Committee on Smoking and Health: Temporality, Strength, Specificity, Consistency, and Coherence (U.S. Department of Health, Education, and Welfare 1964). This list was expanded by A.B. Hill to form his famous “criteria” (Hill 1965).

Austin Bradford Hill (1897-1991)

A.B. Hill is credited with being a founder of medical statistics. He designed and conducted the first randomized clinical trial and, with Richard Doll, conducted the first epidemiological studies of smoking and lung cancer. After Fisher attacked his smoking studies, he came to realize that statistics alone could not determine causality in epidemiology. In response, he proposed nine criteria (which he called considerations, features, or viewpoints) for causation (Hill 1965).
  • Strength
  • Consistency
  • Specificity
  • Temporality
  • Biological gradient
  • Plausibility
  • Coherence
  • Experiment
  • Analogy

He argued that these considerations were not actually criteria. Rather, they answered the question: “What aspects of this association should we especially consider before deciding that the most likely interpretation of it is causation?”

He took a pragmatic approach to identifying causal relationships, as shown in the following quotations from Chalmers (2003): “…one had to be content with far from perfect evidence and draw on the most likely explanation.” “In the strict sense of the word 'proven' you can prove nothing, but you can make one interpretation of the data more probable than any other (e.g., smoking and cancer of the lung).”

He also argued that the amount of evidence required should depend on the consequences of the interventions that would follow from the causal conclusion.

Richard Doll

Doll was the senior author of the first epidemiological study of smoking and lung cancer (Doll and Hill 1950). He summarized and illustrated his approach to causality in the 23rd Fisher Memorial Lecture (Doll 2002). He believed that causation could be proved beyond a reasonable doubt in epidemiological studies by
  1. Demonstrating that the association cannot reasonably be explained by chance, by methodological bias, or by confounding.
  2. Identifying positive support for causality by applying Hill's guidelines (criteria).

Jack D. Hackney and William S. Linn

Hackney and Linn (1979) adapted Koch's postulates to diseases caused by chemicals in the environment. They used four postulates, but Postulates 2 and 4 are quite different from the usual four-part version.
  1. A definable environmental chemical agent must be plausibly associated with a particular observable adverse health effect.
  2. The environmental agent must be available in the laboratory in a form that permits realistic and ethically acceptable exposure studies to be performed.
  3. Laboratory exposures to realistic concentrations of the agent must be associated with effects comparable to those observed in real-life exposures.
  4. The preceding findings must be confirmed in at least one investigation independent of the original.

Mervyn Susser

Susser (1973, 1986, 1988) clarified and revised Hill's criteria for inferring causation to
  • Strength of association
  • Consistency
  • Specificity in effects of a cause
  • Specificity in causes of an effect
  • Time order
  • Theoretical Coherence
  • Biological Coherence
  • Factual Coherence
  • Statistical Coherence
  • Predictive performance
  • Probability

In addition, he used multiple + and - codes for scoring a cause with respect to each criterion, which increases transparency and encourages rigor.

Susser (1988) responded to Popperians by providing a philosophical justification for Hill's approach. Popper's best hypothesis is one that is refutable, but readily refuted hypotheses are the least useful. However, “we may follow Popper to the extent of claiming somewhat greater decisiveness for falsification than affirmation, but we cannot do without induction and affirmative tests.” He advocated abductive inference without using the term: “The central concern of causal inference is to establish preferences among theories at a given point in time, Pt.”

He also rejected statistical hypothesis testing: “The limits are arbitrary and do not override logical inference based on other criteria” (Susser 1988).

Kenneth Rothman

Rothman is a prominent epidemiologist, author of numerous texts in the field, and a skeptic of causal analyses. He argued that in biological systems, we often know a cause (a contributing event, condition, or characteristic) that contributes to the occurrence of an effect but not the sufficient cause (e.g., smoking causes cancer but is not sufficient) (Rothman 1986, Rothman 1988, Rothman and Greenland 2005).

Rothman rejected probabilistic causation. He argued that probabilistic causation is due to unknown causes. Hence, his underlying model is deterministic, and probabilistic models reflect ignorance.

As a Popperian, he is skeptical of anything but disproof. He rejected Hill's criteria except temporality. The strength of a cause is a function of its rarity relative to other component causes, so strength is not a good criterion.

Rothman's (1988) possible solutions to the problem of induction are:
  • Consensus of experts (e.g., NIH consensus development conferences)
  • Ad hominem appeals to authority or guilt
  • Post hoc ergo propter hoc
  • Disproof (Popperian)
  • Consensus of the community

He approved of Lanes's (1988) proposal to present the data and leave identification of causes to policy makers. He suggested that, given the conceptual problems with determining causality, epidemiologists should focus on estimating the magnitude of effects in a presumed causal relationship (Rothman and Greenland 2005). “Causal inference in epidemiology is better viewed as an exercise in measurement of an effect rather than as a criterion-guided process for determining whether an effect is present or not.”

Ronald E. Gots

Gots (1986) proposed causal criteria for individual human cases, focusing on specific clinical and morphological symptoms:
  • Can the agent cause the disease?
    • Animal data
    • Human data
  • Did it cause the disease in this case?
    • Alternative causes ruled out?
    • Confirmed exposure?
    • Sufficient exposure?
    • Appropriate clinical pattern?
    • Appropriate morphological pattern?
    • Temporality?
    • Appropriate latency?

Douglas L. Weed

Weed (1988) initially argued for Popper and against Hill. Hypotheses must be predictive and testable. Since refutation never gives a cause, we must reach a decision by less reliable means. Weed proposed the two Popperian criteria: predictability (can we deduce consequences from the causal hypothesis?) and testability (can we devise a test that could refute the prediction?) but couched them in terms of critical debates rather than critical experiments.

Weed later accepted and even recommended criteria-based inference, pointing out that epidemiologists have a moral obligation to inform public health decisions (Weed 1997, Weed 2002). That includes nutrients as well as toxicants (Potischman and Weed 1999). He indicated that meeting all of Hill's criteria is too demanding and is antithetical to public health goals and the precautionary principle (Weed 2005). Although this argument would seem to call for reducing the number of criteria or weakening rules of inference, he indicated that we do not know enough about how well the criteria work to modify them. Rather, he argued for greater clarity concerning what criteria are used, why they are used, and how they are used (Parascandola and Weed 2001, Weed 2001). In particular, he argued that epidemiologists should address plausibility more explicitly, defining the possible mechanisms and laying out the evidence for each (Weed and Hursting 1998). He concluded that judgment-free causal analysis is impossible and undesirable (Weed 2007).

Stephan Lanes

Lanes is another Popperian: “We cannot calculate the probability that a theory is true” (Lanes 1988). We cannot even reject a hypothesis probabilistically, despite Fisher, because a statement of probability cannot be refuted.

Lanes attacked subjectivism, weight-of-evidence, degree of belief, Bayes, Hill, etc. He argued that we believe that cigarettes cause lung cancer because alternatives have been effectively refuted, not because of the Surgeon General's and Hill's beliefs.

He argued that Popperianism is about the acceptability of scientific theories. In contrast, the acceptability of actions (i.e., to eliminate a cause) depends on the consequences of an action. Scientists should determine the best explanation or lay out unrefuted alternatives and allow policy makers or the public to weigh the consequences.

Sander Greenland

Greenland is a conceptually oriented statistical epidemiologist. He argued that Popperianism strictly does not allow statistics; if you allow chance, you may as well allow induction (Greenland 1988). Greenland advocated Bayesian analysis. “To a subjective Bayesian, labeling a result as due to chance or random variation is analogous to diagnosing an illness as idiopathic in that it is just a way of making ignorance sound like technical explanation.” In other words, observations that are in conflict with a true hypothesis are due to our inability to take into consideration all conditions. He clarified his disagreement with Popper and the role of induction and deduction in Greenland (1998). Recently, he has propounded a counterfactual approach based on potential outcome models, structural equation models, or analysis of causal diagrams (Greenland et al. 1999, Greenland 2005).

U.S. Environmental Protection Agency

The U.S. EPA (2005) published guidelines for assessing risks from carcinogens that include guidance on evaluating causality in epidemiological studies. They recommend evaluating study design, exposure issues, biological markers, confounding factors, likelihood of the observed effects, and biases, and then evaluating the evidence for causality using Hill's “criteria.” The criteria are not to be treated as mandatory; instead, they “should be used to determine the strength of the evidence for concluding causality.”

P.S. Guzelian

Guzelian et al. (2005) addressed causality in toxicology. They argued that there is no probabilistic causation—either a thing is a cause, or it is not.

They distinguished general causality, addressed by Hill and Susser, from specific causality. Their minimum criteria for specific causation are:
  1. General causation
  2. Dose-response
  3. Temporality
  4. Alternative cause (eliminate confounders)
  5. Coherence

They demanded authoritative rather than comparative criteria. If they are criteria, then they must be satisfied. They condemned consensus and subjective judgments.

Paolo Vineis and David Kriebel

In their review of causation in epidemiology, Vineis and Kriebel (2006) identified two eras of medical causality:
  1. Agent of disease is a single necessary cause, as described by Koch
  2. Chronic diseases such as cancer and heart disease have causal webs

Causal webs are required for individual cases (smoking plus susceptibility, or something else entirely, causes a given case of lung cancer, so smoking is neither necessary nor sufficient), but we can speak of a single necessary cause for a population (smoking is the cause of the increase in the frequency of lung cancer).

We must consider interactions, which the standard multiple regression approach does not do. They recommended Pearl's network models as heuristic tools because they make causal assumptions explicit.

Federica Russo and Jon Williamson

Russo and Williamson (2007) argued that there is only one type of cause in medical inference and practice, but two criteria:
  1. Probabilistic evidence is consistency of association of cause and effect.
  2. Mechanism is an explanation of how the causal relationship occurs.

They divided Hill's criteria between these two types of evidence: Criteria 2, 4, 5, 8, and 9 involve mechanistic considerations; Criteria 1, 3, 7, and 8 involve probabilistic considerations; and Criterion 6 is ambiguous (Criterion 8, experiment, bears on both). They argued that medical science seldom adopts a causal claim until both types of evidence are provided.

Top of Page

Recent Applied Ecology and Causation

The issue of causation in applied ecology received increasing attention in the 1990s because environmental monitoring programs were revealing biological impairments, but the causes of those impairments were unknown or controversial. Most of the proposed methods for ecological causal analysis have been adapted from epidemiology. In particular, Woodman and Cowling, Fox, Suter, Chapman, Kapustka, Beyers, Forbes and Calow, the U.S. EPA, and others have developed variants of Koch’s postulates or Hill’s criteria as evidence of causation. However, others, including Westman, Landis, and Newman, have advocated quantitative or semi-quantitative modeling methods to determine causation.

This review does not include publications that attempted to determine the cause of an ecological impairment but did not advocate a method. It also does not include methods for determining the chemical or chemical classes that are the causes of toxicity such as toxicity identification evaluation (TIE) (Norberg-King et al. 2005). The section is organized chronologically by the date of the author's first publication that is important to causal analysis.

Walter Westman

Westman (1979) used “path analysis to determine the most likely route of causation of the decline in native cover” of coastal sage scrub in Southern California. He concluded that mean annual oxidant concentration was the primary proximate cause and elevation was a secondary cause. This appears to be the first application of path analysis to an environmental impairment.

In a subsequent study of the causes of species distributions in Southern California plant communities, he reverted to the more conventional technique, multiple linear regression (Westman 1981). He did not explain the change in approach, but it seems likely that he could not define a causal path model for the 21 species and 43 potential predictor variables.
In both publications, he acknowledged that the results were only suggestive of likely causes and should be followed by experimental studies. Many ecologists since have used multiple regression and various forms of network analysis to model causal relationships, but usually for species management or academic problems.

James Woodman and Ellis Cowling

Woodman and Cowling (1987) adapted a three-part version of Koch's postulates to determine whether air pollution caused observed effects on forest trees:
  1. The injury or dysfunction symptoms observed in the case of individual trees in the forest must be associated consistently with the presence of the suspected causal factors.
  2. The same injury or dysfunction symptoms must be seen when healthy trees are exposed to the suspected causal factors under controlled conditions.
  3. Natural variance in susceptibility observed in forest trees also must be seen when clones of these same trees are exposed to the suspected causal factors under controlled conditions.

They indicated that these criteria can be achieved for relatively simple systems “that involve one, two, or possibly three interacting causal factors.” For more complex systems, they suggest a synoptic approach. It involves “initial surveys to identify important variables, multiple regression analysis to generate hypotheses, and field tests to verify diagnoses.”

Cowling (1992) subsequently proposed another set of causal criteria for air pollution effects based on Mosteller and Tukey (1977):
  1. a clear pattern of spatial and/or temporal consistency must be found between dysfunction in the system and one or more specific airborne chemicals.
  2. a clear relationship must be found between dose of airborne chemical(s) and response of the system; and
  3. a proven biological mechanism or a series of stepwise biological processes must be found by which dysfunction in the ecosystem can be linked to one or more specific airborne chemicals.

Kelly Munkittrick

Munkittrick and Dixon (1989a, 1989b) proposed that declines in fish populations could be diagnosed as caused by one of a set of standard causal mechanisms, based on metrics commonly obtained in fishery surveys. This method was subsequently refined and expanded (Gibbons and Munkittrick 1994) and applied to assessments of Canadian rivers (Munkittrick et al. 2000). Numerous metrics contribute to the symptomology, but they are condensed to three responses: age distribution, energy expenditure, and energy storage. The most recent list of types of causes is exploitation, recruitment failure, multiple stressors, food limitation, niche shift, metabolic redistribution, chronic recruitment failure, and null response. These are mechanistically based categories of causes, not causes per se. As the authors acknowledge, follow-up studies are needed to determine the actual cause once this method has been used to identify the category of cause. This method has been incorporated into the causal analysis component of the Canadian Environmental Effects Monitoring Program.

Glenn Suter

Suter (1990) proposed a four-part version of Koch's postulates as a general standard of proof of causation in ecological epidemiology. He showed that those requirements were met by many ecoepidemiological studies although the postulates were not employed by those authors (Suter 1993, 1998). A version specifically for toxicants from Suter (1993) is:
  1. The injury, dysfunction, or other putative effects of the toxicant must be regularly associated with exposure to the toxicant and any contributory causal factors.
  2. Indicators of exposure to the toxicant must be found in the affected organisms.
  3. The toxic effects must be seen when healthy organisms are exposed to the toxicant under controlled conditions, and any contributory factors should contribute in the same way during the controlled exposures.
  4. The same indicators of exposure and effects must be identified in the controlled exposures as in the field.

He argued that, when Koch's postulates could not be met, causation should be inferred by eliminating refuted causes and then applying Hill's or Susser's criteria or his own combination of those criteria (Suter 1993, 1998). Subsequently, he joined Susan Cormier and Susan Norton in developing the U.S. EPA's stressor identification guidance and CADDIS support system (see U.S. EPA, below).

Robert Elner and Robert Vadas

Elner and Vadas (1990) argued for hypothesis rejection and particularly strong inference (Platt 1964) as the only way to determine causes of ecological phenomena. They used population explosions of sea urchins and resulting replacement of macroalgae beds with “coralline barrens” as a case study. They showed that prior studies that relied on “weak” inference (i.e., inferences that include positive evidence), particularly those arguing that the cause was loss of lobster predation, had been inconclusive. However, their review of the case also demonstrates that there is no clear limit to the number of hypotheses to be rejected and that the outcomes of field experiments may have multiple interpretations.

Robert Peters

Peters (1991) argued that causality is an unnecessary and potentially misleading concept. The test of good science is prediction, not explanation. He preferred “instrumentalist science,” particularly empirical models such as allometric equations and QSARs, which go directly for prediction. He also argued that causally driven research never ends because one can extrapolate causes back up the chain, identify more steps between cause and effect, or laterally extend to more contributing causes (Peters 1991, Fig. 5.8). Hence, one can never identify “the cause.”

Glen Fox

Fox (1991) proposed that ecologists should use Susser's (1986) 11 criteria for causal analysis. This has been the most influential paper on the topic of causal analysis for applied ecology. That is, in part, because Susser's criteria and scoring system are more appealing to many ecologists than Hill's completely informal criteria, in part, because Fox presented Susser's system in a clear and compelling manner, and in part, because the system was immediately applied to a set of high profile problems: the declines of fishes, birds, and reptiles in the Laurentian Great Lakes.

Kristin Shrader-Frechette

Shrader-Frechette is a philosopher of environmental science who has argued that conventional scientific methods are not useful for environmental problem solving because of the complexity of the systems and the problems (Shrader-Frechette 1985, Shrader-Frechette and McCoy 1993). Her solution is the “method of case studies.” That is, rather than seeking generalities, environmental scientists should focus on solving individual real problems and thereby learn to be good practitioners. In this approach, the solution to the problem of causation is “to subject our inductive and causal judgments to repeated criticism, reevaluation and discussion and to seek independent evidence for them” (Shrader-Frechette and McCoy 1993). In part, this can be accomplished by seeking alternative analyses of the same case, or by seeking evaluations of a particular case study by persons with divergent scientific, epistemological, and personal presuppositions.

Peter Chapman

Chapman (1995) proposed a three-part adaptation of Koch's postulates for determining whether contaminated sediments are causing an effect. He called them exploratory postulates:
  1. Contaminant(s) must be found in all cases of the effect(s) in question
  2. Similar effect(s) must be demonstrable in the laboratory with field-collected sediments; and/or, the field with in situ experimentation
  3. Similar effect(s) must be demonstrable in the laboratory with direct exposure to contaminant(s)

This is different from his better known Sediment Quality Triad in that, as in CADDIS assessments, the observation of an effect is assumed, and the purpose is to identify a cause. However, it is limited to contaminants as potential causes.

Allan Stewart-Oaten

The environmental statistician Stewart-Oaten (1996) reviewed designs for determining whether human activities are causing environmental effects. He concluded that none of the pseudo-experimental designs (Before-After Control-Impact and its variants) is reliable. Hence, he advocated using Hill's criteria to avoid “causal uncertainty.” For the statistical analysis, he recommended estimating differences between exposed and unexposed sites and associated confidence intervals rather than testing causal hypotheses.

Wayne Landis

Landis (1996) briefly presented a concept of causal analysis for toxic effects on ecosystems. First, laboratory studies, including toxicity tests, analysis of contaminant concentrations, and comparative toxicology provide a mechanistic basis. Second, multispecies studies reveal broad patterns of response to particular stressor types. Third, biomarkers and measurements of structure and function are used to establish causes.

More recently, Landis and colleagues have incorporated causal analysis into his relative risk model (Landis et al. 2004, Landis 2005). Alternative sources and stressors are identified for the effects of concern and are assigned ranks using a variety of criteria. The rank for each source is then multiplied by the rank for each connected stressor and by 0 or 1, depending on whether that pathway is connected to the effect. These arithmetically combined ranks for a hypothetical pathway are called stressor rank scores and are taken to represent the relative risk posed by the stressors as causes of the effects. The authors claimed that their relative risk model subsumes both Chapman's triad approach and the use of causal criteria derived from Hill's considerations or Koch's postulates (Landis et al. 2004).
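The rank arithmetic just described can be sketched minimally as follows; the ranks, names, and connection flags are invented for illustration, not taken from a published application.

```python
source_rank   = {"mine drainage": 4, "agriculture": 2}
stressor_rank = {"metals": 6, "nutrients": 2}
# 1 if the source-stressor pathway is connected to the effect, else 0.
connected = {("mine drainage", "metals"): 1,
             ("agriculture", "nutrients"): 1,
             ("mine drainage", "nutrients"): 0}

# Stressor rank score = source rank * stressor rank * connection flag.
scores = {pair: source_rank[pair[0]] * stressor_rank[pair[1]] * flag
          for pair, flag in connected.items()}
print(max(scores, key=scores.get))   # pathway with the highest score
```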

For a study of the cause of decline of a Pacific herring population, Landis and Bryant (2010) used a comparative weight of evidence analysis.

Lawrence Kapustka

Kapustka (1996) presented an adaptation of Koch's postulates for “forensic ecology” of toxic chemicals:
  1. Observe field conditions (injury); characterize the symptoms, magnitude, and extent of the problem in the field.
  2. Identify putative contaminants.
  3. Characterize mode of action and symptomology.
  4. Establish dose-response relationships.
  5. Demonstrate the presence of putative toxic substances in the field within the “effects range” concentrations.
  6. Demonstrate the opportunity for exposure.
He then added two confirmation requirements:
  7. Apply weight-of-evidence criteria to establish the relationship between contaminant and observed effect.
  8. Document the uncertainty of the conclusions.

Robert Bode

Some groups and organizations have attempted to use biotic community characteristics as symptoms to diagnose the causes of community impairments, but with limited success (Norton et al. 2000, Yoder and Rankin 1995). Currently, the best developed and most successful of these is the Impact Source Determination (ISD) system developed for the state of New York by Bode et al. (1996, 2002). ISD attempts to assign the macroinvertebrate community of a stream to one of the following classes:
  • Natural
  • Nonpoint nutrients, pesticides
  • Toxic
  • Organic (sewage effluent or animal waste)
  • Complex (municipal/industrial)
  • Siltation
  • Impoundment

For each of these classes, 5-13 model communities have been created by cluster analysis of relative abundance data from sampled communities judged to belong to the class. The multiple model communities within a class are judged to represent differences in response due to natural factors. An attempt to verify the classifications of sites by ISD found that the system discriminated fairly well among the nonpoint nutrient, siltation, complex, and natural categories, but not the others (Riva-Murray et al. 2002). Difficulties were attributed to the lack of species-level taxonomic data and the lack of data concerning hydrology, habitat, and many chemicals. Because most of these classes are categories of sources rather than causes, and because of the potential for misclassification, the results of ISD would serve primarily to help define candidate causes.
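The classification step can be sketched minimally: compare a sampled community's relative abundances with the model communities and report the class of the most similar model. The communities and the percentage-similarity measure below are invented; ISD's actual models and metrics differ.

```python
models = {
    "organic": {"worms": 0.6, "midges": 0.3, "mayflies": 0.1},
    "natural": {"mayflies": 0.4, "caddisflies": 0.4, "midges": 0.2},
}

def similarity(a, b):
    """Percentage similarity: sum of the shared relative abundances."""
    taxa = set(a) | set(b)
    return sum(min(a.get(t, 0.0), b.get(t, 0.0)) for t in taxa)

sample = {"worms": 0.5, "midges": 0.4, "mayflies": 0.1}
print(max(models, key=lambda m: similarity(sample, models[m])))  # "organic"
```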

C. J. Sindermann

In marine pollution studies, Sindermann (1997) advocated the Hill/Susser/Fox approach to causal analysis, combined with the precautionary principle to justify acting without proof of a cause when information is incomplete or uncertain.

Daniel Beyers

Beyers (1998) combined Hill's 9 criteria with “Suter's second rule” (the second of his adaptations of Koch's postulates) to obtain 10 criteria for causation. He applied them to determining whether an insecticide caused impairment of the benthic invertebrate community in the Little Missouri River, North Dakota.

Bruce Chessman and Paul McEvoy

Chessman and McEvoy (1998) concluded that community diagnostics may be possible for some classes of agents but not others, and that it will require genus- or species-level identification.

U.S. EPA (Risk Assessment Forum)

The Agency's Guidelines for Ecological Risk Assessment endorsed Fox's criteria, both Woodman and Cowling's and Suter's versions of Koch's postulates, and toxicity identification evaluation (TIE) for identifying causes of ecological impairments (U.S. EPA 1998).

Joseph Germano

Germano (1999) was strongly critical of the use of hypothesis testing statistics for causal analysis. He pointed out that hypothesis testing answers a question in which we are seldom interested: “Given that the null hypothesis (H0) is true, what is the probability of these (or more extreme) data?” On the other hand, he believed we should not resort to expert judgment because it is even less reliable than correlations. Rather, we need a better understanding of associations, which comes from fully specifying contingency tables and using Bayesian analyses.

U.S. EPA (S.M. Cormier, S.B. Norton, and G.W. Suter II)

The U.S. EPA (2000) developed the Stressor Identification Guidance to determine the causes of biological impairments in individual aquatic ecosystems. The methodology includes three inferential methods: elimination, diagnosis, and strength of evidence. The strength of evidence method was inspired by Fox (1991) and Susser (1986) but highly modified. In particular, it distinguishes evidence from the case from evidence from elsewhere and evidence based on integrating multiple types of evidence. The methodology and a case study are available in the open literature (Cormier et al. 2002, Norton et al. 2002, Suter et al. 2002).

In response to user feedback, the method was refined and expanded into a decision support system for causal analysis called the Causal Analysis/Diagnosis Decision Information System, or CADDIS.

The methodology changed in three ways from the Stressor Identification Guidance. First, elimination and diagnosis were integrated into the strength-of-evidence analysis to simplify the process. Second, the types of evidence were rewritten to make them clearer to users. Third, to help with the comparison of candidate causes, a set of basic causal characteristics was identified that summarized the 17 types of evidence.

Jerome Diamond

Studies of the causes of impairment of fish, mussel, and benthic macroinvertebrate communities used stepwise regression to relate land uses in the Clinch and Powell River watershed to biological indexes and metrics (Diamond et al. 2002, Diamond and Serveiss 2001). Although statistically significant relationships to land use were found, the model explained little of the biological variance.

Michael Newman

Newman argued that Bayesian conditional probabilities are the appropriate expression of causality. He wrote that “Belief in a causal hypothesis can be determined by simple or iterative application of Bayes's theorem” (Newman and Evans 2002). He repeated this position in Newman et al. (2007).
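The iterative application of Bayes's theorem to belief in candidate causes can be sketched minimally as follows; the priors and likelihoods are invented, not Newman's worked values.

```python
priors = {"silt": 0.5, "metals": 0.5}

# P(evidence | cause) for two successive pieces of evidence.
likelihoods = [
    {"silt": 0.8, "metals": 0.3},   # e.g., co-occurrence data
    {"silt": 0.7, "metals": 0.2},   # e.g., a laboratory test result
]

belief = dict(priors)
for lk in likelihoods:
    # Bayes's theorem: posterior is proportional to prior * likelihood.
    unnorm = {c: belief[c] * lk[c] for c in belief}
    total = sum(unnorm.values())
    belief = {c: p / total for c, p in unnorm.items()}

print(belief)   # belief in "silt" rises with each consistent observation
```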

Newman was critical of Hill's and Fox's weight-of-evidence approaches because they are qualitative and subject to cognitive errors (Newman et al. 2007). However, he has acknowledged that they can be useful and presented a good case study of the application of Hill's criteria to determining the cause of hepatic lesions in English sole from Puget Sound, Washington (Newman 2001).

William Clements

Clements et al. (2002) applied the Stressor Identification methodology (U.S. EPA 2000) to observational and experimental data to demonstrate that metals were responsible for effects on benthic invertebrates in Colorado streams.

Valery Forbes and Peter Calow

Forbes and Calow (2002) modified prior causal criteria, reducing the number to seven and framing them as yes/no questions:
  1. Is there evidence that the target is or has been exposed to the agent?
  2. Is there evidence for correlation between adverse effects on the target and exposure to the agent either in time or space?
  3. Do the measured or predicted environmental concentrations exceed quality criteria for water, sediment, or body burden?
  4. Have the results of controlled experiments in the field or laboratory led to the same effect?
  5. Has removal of the agent led to the amelioration of effects in the target?
  6. Is there an effect in the target known to be specifically caused by exposure to the agent?
  7. Does the proposed causal relationship make sense logically and scientifically?

They also provided a logic diagram based on answering the questions in sequence, which generates a verdict of “unlikely,” “possibly,” “likely,” “very likely,” or “don't know.”
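Sequential yes/no logic of this general kind can be sketched as follows; the particular decision rules below are invented for illustration and are not Forbes and Calow's published diagram.

```python
def verdict(answers):
    """answers: dict mapping question number (1-7) to True, False, or None."""
    if answers.get(1) is False:
        return "unlikely"            # no evidence of exposure to the agent
    if None in answers.values():
        return "don't know"          # an unanswerable question blocks a verdict
    yes = sum(bool(v) for v in answers.values())
    if yes >= 6:
        return "very likely"
    if yes >= 4:
        return "likely"
    return "possibly"

print(verdict({1: True, 2: True, 3: True, 4: True,
               5: False, 6: False, 7: True}))   # "likely"
```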

This method was applied to determining the cause of brown trout declines in Swiss rivers but with modifications of both the questions and the logic (Burkhardt-Holm and Scheurer 2007).

Mark Hewitt

Hewitt et al. (2003, 2005) developed a system for determining if a pulp mill is the cause of apparent effects on fish and invertebrates in Canadian monitoring programs. It has seven increasingly detailed tiers beginning with (1) confirming the effect and (2) relating it to the mill and ending with (7) identifying the specific causative chemicals. In the 2003 version, the second tier is based on Fox's explication of Susser's criteria, and the third tier, which looks for characteristic biological response patterns, is based on community or population diagnostics. In the 2005 version, only the response patterns are used. The higher tiers are based on testing of waste streams, chemical fractions, and individual chemicals in a manner similar to the U.S. EPA's toxicity identification evaluation (TIE) procedures. An earlier version of this system is presented in Ch. 9 of Munkittrick et al. (2000). The system has been applied to Canadian metal mines as well (Munkittrick, personal communication).

Tracy Collier and Marshall Adams

Collier and Adams compiled 14 papers on causal analysis for ecological field studies in a special 2003 issue of Human and Ecological Risk Assessment (Volume 9, Issue 1). They requested that the authors use a set of seven causal criteria:
  1. strength of association,
  2. consistency of association,
  3. specificity of association,
  4. time order,
  5. biological gradient,
  6. experimental evidence, and
  7. biological plausibility.

The papers were inconsistent in their interpretation and use of the criteria, but several of them constitute useful case studies.

Intergovernmental Panel on Climate Change (IPCC)

The IPCC developed a method for “attributing physical and biological impacts to anthropogenic climate change” (Rosenzweig et al. 2008). It used a list of environmental changes that are statistically significantly related to temperature as the effects to be analyzed. The causal analysis consisted of:
  1. determining whether the trend is consistent with temperature as the cause,
  2. determining whether the change spatially co-occurred with increases in temperature, and
  3. elimination of alternative causes.

Hence, their two criteria for causation were mechanistic plausibility and co-occurrence. They applied the same criteria to climate and the alternative causes. They performed source identification by appealing to other IPCC analyses to state that the temperature changes were due to anthropogenic greenhouse gases.

Dick de Zwart and Leo Posthuma

This method uses multivariate linear statistical models for a river basin to diagnose the causes of individual taxon abundances at specific sites, with habitat variables and toxicity as the possible causes (de Zwart et al. 2009). Toxicity is expressed as the acute multistressor Potentially Affected Fraction (msPAF), which is derived from Species Sensitivity Distributions and assumptions about the additivity of the chemicals (de Zwart and Posthuma 2005). Because of statistical limitations, particularly correlations among explanatory variables, the authors consider this to be a “first-tier approach” that may be followed up “by other lines of evidence.” Because of the inevitable data limitations, local causes not included in a regional model, and inherent variability, the predominant cause in their test case is “unknown.”
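One aggregation rule described by de Zwart and Posthuma (2005), response addition across chemicals with dissimilar modes of action, can be sketched minimally as follows; the per-chemical PAF values are invented.

```python
# Potentially Affected Fraction per chemical, taken from species
# sensitivity distributions in the real method (values invented here).
paf = {"copper": 0.10, "zinc": 0.05, "pyrene": 0.02}

# Response addition: combine the fractions unaffected by each chemical.
ms_paf = 1.0
for p in paf.values():
    ms_paf *= (1.0 - p)     # fraction unaffected by this chemical
ms_paf = 1.0 - ms_paf       # fraction affected by at least one chemical

print(round(ms_paf, 3))     # 0.162
```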

Top of Page

References

  • Anderson DR (2008) Model Based Inference in the Life Sciences: A Primer on Evidence. Springer, New York NY.
  • Anderson DR, Burnham KP, Thompson WL (2000) Null hypothesis testing: problems, prevalence, and an alternative. Journal of Wildlife Management 64:912-923.
  • Armitage P (2003) Fisher, Bradford Hill and randomization. International Journal of Epidemiology 32:922-924.
  • Armstrong DM (2004) Going through the open door again: Counterfactual versus singularist theories of causation. Pp. 445-457 in: Collins J, Hall N, Paul LA (Eds). Causation and Counterfactuals. MIT Press, Cambridge MA.
  • Bailar JC (2005) Redefining the confidence interval. Human and Ecological Risk Assessment 11:169-177.
  • Beyer WN, Franson JC, Locke LN, Stroud RK, Sileo L (1998) Retrospective study of the diagnostic criteria in a lead-poisoning survey of waterfowl. Archives of Environmental Contamination and Toxicology 35:506-512.
  • Beyers DW (1998) Causal inference in environmental impact studies. Journal of the North American Benthological Society 17:367-373.
  • Bode RW, Novak M, Abele LA (1996) Quality assurance work plan for biological stream monitoring in New York State. New York State Department of Environmental Conservation, Albany NY.
  • Bode RW, Novak M, Abele LA, Heitzman DL, Smith AJ (2002) Quality assurance work plan for biological stream monitoring in New York State. New York State Department of Environmental Conservation, Albany NY.
  • Bogen J (2004) Regularities, generalizations, causal mechanisms. London School of Economics, London UK. Rpt. 22/04.
  • Botham T (2008) Agent-Causation Revisited. VDM Verlag, Saarbrucken, Germany.
  • Bunge M (1979) Causality and Modern Science (3rd revised edition). Dover Publications, Inc., New York NY.
  • Burkhardt-Holm P, Scheurer K (2007) Application of the weight-of-evidence approach to assess the decline of brown trout (Salmo trutta) in Swiss rivers. Aquatic Sciences 69:51-70.
  • Campaner R, Galavotti MC (2007) Plurality in causality. Pp. 178-199 in: Machamer P, Wolters G (Eds). Thinking About Causes: From Greek Philosophy to Modern Physics. University of Pittsburgh Press, Pittsburgh PA.
  • Cartwright N (2003a) Causation: One word, many things. London School of Economics, London UK. Tech. Rpt. 07/03.
  • Cartwright N (2003b) From causation to explanation and back. London School of Economics, London UK. Tech. Rpt. 09/03.
  • Cartwright N (2003c) How can we know what made the ratman sick? Singular causes and population probabilities. An essay in honor of Adolf Grunbaum. London School of Economics, London UK. Tech. Rpt. 08/03.
  • Cartwright N (2007) Hunting Causes and Using Them: Approaches in Philosophy and Economics. Cambridge University Press, Cambridge UK.
  • Chalmers I (2003) Fisher and Bradford Hill: theory and pragmatism? International Journal of Epidemiology 32:922-924.
  • Chapman PM (1995) Extrapolating laboratory toxicity results to the field. Environmental Toxicology and Chemistry 14:927-930.
  • Chessman BC, McEvoy PK (1998) Towards diagnostic biotic indices for river macroinvertebrates. Hydrobiologia 364:169-182.
  • Clements WH, Carlisle DM, Courtney LA, Harrahy EA (2002) Integrating observational and experimental approaches to demonstrate causation in stream biomonitoring studies. Environmental Toxicology and Chemistry 21(6):1138-1146.
  • Collins J, Hall N, Paul LA (Eds.) (2004) Causation and Counterfactuals. MIT Press, Cambridge MA.
  • Cormier SM, Norton SB, Suter GW II, Altfater D, Counts B (2002) Determining the causes of impairments in the Little Scioto River, Ohio, USA: Part 2. Characterization of causes. Environmental Toxicology and Chemistry 21:1125-1137.
  • Cormier SM, Suter GW II, Norton SB (2010) Causal characteristics for ecoepidemiology. Human and Ecological Risk Assessment 16(1):53-73.
  • Cowling EB (1992) The performance and legacy of NAPAP. Ecological Applications 2:111-116.
  • Dawid AP (2000) Causal inference without counterfactuals (with discussion). Journal of the American Statistical Association 95:407-448.
  • Dawid AP (2004) Probability, causality and the empirical world: a Bayes-de Finetti-Popper-Borel synthesis. Statistical Science 19:44-57.
  • de Zwart D, Posthuma L (2005) Complex mixture toxicity for single and multiple species: proposed methodologies. Environmental Toxicology and Chemistry 24:2665-2676.
  • de Zwart D, Posthuma L, Gevrey M, von der Ohe P, de Dekere E (2009) Diagnosis of ecosystem impairment in a multiple-stress context: how to formulate effective river basin management plans. Integrated Environmental Assessment and Management 5:38-49.
  • Diamond JM, Bressler DW, Serveiss VB (2002) Assessing relationships between human land uses and the decline of native mussels, fish, and macroinvertebrates in the Clinch and Powell River watershed, USA. Environmental Toxicology and Chemistry 21(6):1147-1155.
  • Diamond JM, Serveiss VB (2001) Identifying sources of stress to native aquatic fauna using a watershed ecological risk assessment framework. Environmental Science & Technology 35(24):4711-4718.
  • Doll R (2002) Proof of causality: deduction from epidemiological observation. Perspectives in Biology and Medicine 45:499-515.
  • Doll R, Hill AB (1950) Smoking and carcinoma of the lung: preliminary report. British Medical Journal 2:739-748.
  • Dowe P (2000) Physical Causation. Cambridge University Press, Cambridge UK.
  • Ducheyne S (2006) Galileo's interventionist notion of "cause". Journal of the History of Ideas 67(3):443-464.
  • Eells E (1991) Probabilistic Causality. Cambridge University Press, Cambridge UK.
  • Elner RW, Vadas RL (1990) Inference in ecology: the sea urchin phenomenon in the northwestern Atlantic. American Naturalist 136:108-125.
  • Fisher RA (1937) The Design of Experiments. Macmillan, London UK.
  • Fisher RA (1959) Smoking: The Cancer Controversy. Some Attempts to Assess the Evidence. Oliver and Boyd, Edinburgh, Scotland.
  • Forbes VE, Calow P (2002) Applying weight-of-evidence to retrospective ecological risk assessment when quantitative data are limited. Human and Ecological Risk Assessment 8:1625-1639.
  • Fox GA (1991) Practical causal inference for ecoepidemiologists. Journal of Toxicology and Environmental Health 33:359-373.
  • Frankfort H (1946) Before Philosophy. Harmondsworth Press, London UK.
  • Germano JD (1999) Ecology, statistics, and the art of misdiagnosis: the need for a paradigm shift. Environmental Reviews 7:167-190.
  • Gibbons WN, Munkittrick KR (1994) A sentinel monitoring framework for identifying fish population responses to industrial discharges. Journal of Aquatic Ecosystem Health 3:227-237.
  • Gilbertson M, Kubiak T, Ludwig J, Fox G (1991) Great Lakes embryo mortality, edema, and deformities syndrome (GLEMEDS) in colonial fish-eating birds: similarity to chick-edema disease. Journal of Toxicology and Environmental Health 33:455-520.
  • Gleick J (2003) Isaac Newton. Pantheon Books, New York NY.
  • Good IJ (1950) Probability and the Weighing of Evidence. Hafner's Press, New York NY.
  • Good IJ (1961) A causal calculus. British Journal for the Philosophy of Science 11:305-318.
  • Good IJ (1983) Good Thinking: The Foundations of Probability and Its Applications. University of Minnesota Press, Minneapolis MN.
  • Gots RE (1986) Medical causation and expert testimony. Regulatory Toxicology and Pharmacology 6:95-102.
  • Granger CWJ (2007) Causality in economics. Pp. 284-296 in: Machamer P, Wolters G (Eds). Thinking About Causes: From Greek Philosophy to Modern Physics. University of Pittsburgh Press, Pittsburgh PA.
  • Greenland S (1988) Probability versus Popper: An elaboration of the insufficiency of current Popperian approaches for epidemiological analysis. Pp. 95-104 in: Rothman KJ (Ed.). Causal Inference. Epidemiology Resources Inc., Chestnut Hill MA.
  • Greenland S (1998) Induction versus Popper: substance versus semantics. International Journal of Epidemiology 27(4):543-548.
  • Greenland S (2005) Epidemiological measures and policy formulation: lessons from potential outcomes. Emerging Themes in Epidemiology 2(5).
  • Greenland S, Pearl J, Robins JM (1999a) Causal diagrams for epidemiologic research. Epidemiology 10:37-48.
  • Greenland S, Robins JM, Pearl J (1999b) Confounding and collapsibility in causal inference. Statistical Science 14:29-46.
  • Grene M (1987) Hierarchies in biology. American Scientist 75:504-510.
  • Guzelian PS, Victoroff MS, Halmes NC, James RC, Guzelian CP (2005) Evidence-based toxicology: a comprehensive framework for causation. Human and Experimental Toxicology 24:161-201.
  • Hacking I (2001) An Introduction to Probability and Inductive Logic. Cambridge University Press, Cambridge UK.
  • Hackney JD, Linn WS (1979) Koch's postulates updated: A potentially useful application to laboratory research and policy analysis in environmental toxicology. American Review of Respiratory Disease 119(6):849-852.
  • Hempel CG (1965) Aspects of Scientific Explanation. Free Press, New York NY.
  • Hewitt LM, Dube MG, Culp JM, MacLatchy DL, Munkittrick KR (2003) A proposed framework for investigation of cause for environmental effects monitoring. Human and Ecological Risk Assessment 9:195-211.
  • Hewitt LM, Dube MG, Ribey SC, Culp JM, Lowell R, Hedley K, Kilgour B, Portt C, MacLatchy DL, Munkittrick KR (2005) Investigation of cause for pulp and paper environmental effects monitoring. Water Quality Research Journal of Canada 40:261-274.
  • Hill AB (1965) The environment and disease: Association or causation. Proceedings of the Royal Society of Medicine 58:295-300.
  • Hilsenhoff WL (1987) An improved biotic index of organic stream pollution. Great Lakes Entomologist 20:31-39.
  • Hitchcock C (2007) How to be a causal pluralist. Pp. 200-221 in: Machamer P, Wolters G (Eds). Thinking About Causes: From Greek Philosophy to Modern Physics. University of Pittsburgh Press, Pittsburgh PA.
  • Holland PW (1986) Statistics and causal inference. Journal of the American Statistical Association 81:945-960.
  • Howson C (2000) Hume's Problem: Induction and the Justification of Belief. Oxford University Press, Oxford UK.
  • Hull DL (1983) Darwin and His Critics. University of Chicago Press, Chicago IL.
  • Hume D (1739) A Treatise of Human Nature. Oxford University Press, Oxford UK.
  • Hume D (1748) An Enquiry Concerning Human Understanding. Prometheus Books, Amherst NY.
  • Humphreys P (1989) The Chance of Explanation: Causal Explanation in the Social, Medical and Physical Sciences. Princeton University Press, Princeton NJ.
  • Hurlbert SH (1984) Pseudoreplication and the design of ecological field experiments. Ecological Monographs 54:187-211.
  • Huston MA (1997) Hidden treatments in ecological experiments: re-evaluating the ecosystem function of biodiversity. Oecologia 110:449-460.
  • Johnson DH (1999) The insignificance of statistical significance testing. Journal of Wildlife Management 63:763-772.
  • Kapustka LA (1996) Plant ecotoxicology: The design and evaluation of plant performance in risk assessments and forensic ecology. Pp. 110-121 in: La Point TW, Price FT, Little EE (Eds). Environmental Toxicology and Risk Assessment: Fourth Volume STP 1262. American Society for Testing and Materials, Philadelphia PA.
  • Kim J (1993) Causes and events: Mackie on causation. Pp. 60-74 in: Sosa E, Tooley M (Eds). Causation. Oxford University Press, Oxford UK.
  • Landis WG (1996) The integration of environmental toxicology. SETAC News 1996:15-16.
  • Landis WG (2005) Regional Scale Ecological Risk Assessment Using the Relative Risk Model. CRC Press, Boca Raton FL.
  • Landis WG, Bryant PT (2010) Using weight of evidence characterization and modeling to investigate the cause of the changes in Pacific herring (Clupea pallasi) population dynamics in Puget Sound and at Cherry Point, Washington. Risk Analysis 30:183-202.
  • Landis WG, Duncan PB, Hayes EH, Markiewicz AJ, Thomas JF (2004) A regional retrospective assessment of the potential stressor causing the decline of the Cherry Point Pacific herring run. Human and Ecological Risk Assessment 10:271-297.
  • Lanes SE (1988) The logic of causal inference. Pp. 59-76 in: Rothman KJ (Ed.). Causal Inference. Epidemiology Resources Inc., Chestnut Hill MA.
  • Laskowski R (1995) Some good reasons to ban the use of NOEC, LOEC, and related concepts in ecotoxicology. Oikos 73:140-144.
  • Lewis D (1973) Causation. Journal of Philosophy 70:556-567.
  • Lipton P (2005) Testing hypotheses: prediction and prejudice. Science 307:219-221.
  • Machamer P, Wolters G (Eds.) (2007) Thinking About Causes: From Greek Philosophy to Modern Physics. University of Pittsburgh Press, Pittsburgh PA.
  • Mackie J (1965) Causes and conditions. American Philosophical Quarterly 2(4):245-255.
  • Mackie J (1974) The Cement of the Universe: A Study of Causation. Clarendon Press, Oxford UK.
  • McMullin E (2000) The impact of Newton's Principia on the philosophy of science. Pari Center for New Learning Library.
  • Menzies P (2004) Difference making in context. Pp. 140-180 in: Collins J, Hall N, Paul LA (Eds). Causation and Counterfactuals. MIT Press, Cambridge MA.
  • Mill JS (1843) A System of Logic, Ratiocinative and Inductive. Liberty Fund, Indianapolis IN.
  • Miller RE (1987) Fact and Method: Explanation, Confirmation and Reality in the Natural and Social Sciences. Princeton University Press, Princeton NJ.
  • Mithen S (1998) The Prehistory of the Mind. Phoenix, London UK.
  • Mittelstrass J (2007) The concept of causality in Greek thought. Pp. 1-13 in: Machamer P, Wolters G (Eds). Thinking About Causes: From Greek Philosophy to Modern Physics. University of Pittsburgh Press, Pittsburgh PA.
  • Mosteller F, Tukey JW (1977) Data Analysis and Regression. Addison-Wesley, Reading MA.
  • Munkittrick KR, Dixon DG (1989a) A holistic approach to ecosystem health assessment using fish population characteristics. Hydrobiologia 188:123-135.
  • Munkittrick KR, Dixon DG (1989b) Use of white sucker (Catostomus commersoni) populations to assess the health of aquatic ecosystems exposed to low-level contaminant stress. Canadian Journal of Fisheries and Aquatic Sciences 46:1455-1462.
  • Munkittrick KR, McMaster ME, Van Der Kraak G, Portt C, Gibbons WN, Farwell A, Gray M (2000) Development of Methods for Effects-Driven Cumulative Effects Assessment Using Fish Populations: Moose River Project. SETAC Press, Pensacola FL.
  • Newman MC (2001) Population Ecotoxicology. J. Wiley & Sons, Ltd., Chichester UK.
  • Newman MC, Evans DA (2002) Enhancing belief during causality assessments: cognitive idols or Bayes's theorem. Pp. 73-96 in: Newman MC, Roberts MH Jr., Hale RC (Eds). Coastal and Estuarine Risk Assessment. Lewis Publishers, Boca Raton FL.
  • Newman MC, Zhao Y, Carriger JF (2007) Coastal and estuarine ecological risk assessment: the need for a more formal approach to stressor identification. Hydrobiologia 577:31-40.
  • Neyman J (1923) On the application of probability theory to agricultural experiments: Essay on principles, Sec. 9. Statistical Science 5:465-480.
  • Neyman J, Pearson ES (1933) On the problem of the most efficient tests of statistical hypotheses. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character 231:289-337.
  • Norberg-King T, Ausley LW, Burton DT, Goodfellow WL, Miller JL, Waller WT (2005) Toxicity Reduction and Toxicity Identification Evaluations from Effluents, Ambient Waters, and Other Aqueous Media. SETAC Press, Pensacola FL.
  • Northcott R (2003) Defining causal strength. London School of Economics, London UK. Tech. Rpt. 14/03.
  • Northcott R (2008) Can ANOVA measure causal strength? Quarterly Review of Biology 83:47-55.
  • Norton SB, Cormier SM, Smith M, Jones RC (2000) Can biological assessments discriminate among types of stress? A case study from the Eastern Corn Belt Plains ecoregion. Environmental Toxicology and Chemistry 19(4):1113-1119.
  • Norton SB, Cormier SM, Suter GW II, Subramaniam B, Lin E, Altfater D, Counts B (2002) Determining probable causes of ecological impairment in the Little Scioto River, Ohio, USA: Part 1. Listing candidate causes and analyzing evidence. Environmental Toxicology and Chemistry 21:1112-1124.
  • O'Connor T (1995) Agent causation. In: O'Connor T (Ed.). Agents, Causes and Events. Oxford University Press, New York NY.
  • Olsen J (2003) What characterises a useful concept of causation in epidemiology? Journal of Epidemiology and Community Health 57:86-88.
  • Parascandola M, Weed DL (2001) Causation in epidemiology. Journal of Epidemiology and Community Health 55(12):905-912.
  • Pearl J (2009) Causality: Models, Reasoning, and Inference. Cambridge University Press, New York NY.
  • Peters RH (1991) A Critique for Ecology. Cambridge University Press, Cambridge UK. 366 pp.
  • Pinker S (2008) The Stuff of Thought: Language as a Window into Human Nature. Penguin Group, New York NY.
  • Platt JR (1964) Strong inference. Science 146:347-353.
  • Popper KR (1968) The Logic of Scientific Discovery. Harper and Row, New York NY.
  • Potischman N, Weed DL (1999) Causal criteria in nutritional epidemiology. American Journal of Clinical Nutrition 69(6):1309S-1314S.
  • Reichenbach H (1958) The Direction of Time. University of California Press, Berkeley CA.
  • Reiter R (1987) A theory of diagnosis from first principles. Artificial Intelligence 32:57-95.
  • Renton A (1993) Epidemiology and causation: a realist view. Journal of Epidemiology and Community Health 48:79-85.
  • Richter EC, Laster R (2005) The precautionary principle, epidemiology and the ethics of delay. Human and Ecological Risk Assessment 11:17-27.
  • Riva-Murray K, Bode RW, Phillips PJ, Wall GL (2002) Impact source determination with biomonitoring data in New York State: concordance with environmental data. Northeastern Naturalist 9:127-162.
  • Roosenburg WM (2000) Hypothesis testing, decision theory, and common sense in resource management. Conservation Biology 14:1208-1210.
  • Rosenzweig C, Karoly D, Vicarelli M, Neofotis P, Wu Q, Casassa G, Menzel A, Root TL, Estrella N, Seguin B, Tryjanowski P, Liu C, Rawlins S, Imeson A (2008) Attributing physical and biological impacts to anthropogenic climate change. Nature 453:353-357.
  • Rothman KJ (1986) Modern Epidemiology. Little, Brown, and Co., Boston MA.
  • Rothman KJ (Ed.) (1988) Causal Inference. Epidemiology Resources Inc., Chestnut Hill MA.
  • Rothman KJ, Greenland S (1998) Modern Epidemiology (2nd edition). Lippincott, Williams & Wilkins, Philadelphia PA.
  • Rothman KJ, Greenland S (2005) Causation and causal inference in epidemiology. American Journal of Public Health 95(Suppl. 1):S144-S150.
  • Rubin DB (1974) Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology 66:688-701.
  • Rubin DB (1990) Comment: Neyman (1923) and causal inference in experiments and observational studies. Statistical Science 5:472-480.
  • Rubin DB (2004) Teaching statistical inference for causal effects in experiments and observational studies. Journal of Educational and Behavioral Statistics 29:343-367.
  • Rubin DB (2006) Rejoinder. Statistical Science 21:319-321.
  • Ruse M (1998) Taking Darwin Seriously. Prometheus Books, Amherst NY.
  • Russell B (1948) Human Knowledge, Its Scope and Limits, Part V. Simon and Schuster, New York NY.
  • Russell B (1957) Mysticism and Logic. Doubleday, New York NY.
  • Russo F, Williamson J (2007) Interpreting causality in the health sciences. International Studies in the Philosophy of Science 21:157-170.
  • Salmon W (1984) Scientific Explanation and the Causal Structure of the World. Princeton University Press, Princeton NJ.
  • Salmon W (1994) Causality without counterfactuals. Philosophy of Science 61:297-312.
  • Salmon W (1998) Causality and Explanation. Oxford University Press, Oxford UK.
  • Salsburg D (2001) The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century. W.H. Freeman and Company, New York NY.
  • Scarre G (1998) Mill on induction and scientific method. Pp. 112-138 in: Skorupski J (Ed.). The Cambridge Companion to Mill. Cambridge University Press, Cambridge UK.
  • Schlick M (1931) Causality in contemporary physics. Pp. 176-209 in: Mulder H, van de Velde-Schlick BFB (Eds). Philosophical Papers, Volume II. D. Reidel Publishing Co., Dordrecht, The Netherlands.
  • Shrader-Frechette KS (1985) Risk Analysis and Scientific Method. D. Reidel Publishing Co., Dordrecht, The Netherlands.
  • Shrader-Frechette KS, McCoy ED (1993) Method in Ecology: Strategies for Conservation. Cambridge University Press, Cambridge UK.
  • Simon HA (1952) Logic of causal relations. Journal of Philosophy 49:517-528.
  • Simon HA (1954) Spurious correlation: a causal interpretation. Journal of the American Statistical Association 49:467-479.
  • Simon HA (1983) Reason in Human Affairs. Stanford University Press, Stanford CA.
  • Simon HA, Rescher N (1966) Cause and counterfactual. Philosophy of Science 33(4):323-340.
  • Sindermann CJ (1997) The search for cause and effect relationships in marine pollution studies. Marine Pollution Bulletin 34(4):218-221.
  • Sloman S (2005) Causal Models: How People Think About the World and Its Alternatives. Oxford University Press, Oxford UK.
  • Spirtes P, Glymour C, Scheines R (2000) Causation, Prediction, and Search. MIT Press, Cambridge MA.
  • Stewart-Oaten A (1995) Rules and judgments in statistics: three examples. Ecology 76:2001-2009.
  • Stewart-Oaten A (1996) Problems in the analysis of environmental monitoring data. Pp. 109-131 in: Schmitt RJ, Osenberg CW (Eds). Detecting Environmental Impacts. Academic Press, New York NY.
  • Suppes P (1970) A Probabilistic Theory of Causality. North Holland Publishing, Amsterdam.
  • Susser M (1986a) Rules of inference in epidemiology. Regulatory Toxicology and Pharmacology 6:116-186.
  • Susser M (1986b) The logic of Sir Karl Popper and the practice of epidemiology. American Journal of Epidemiology 124:711-718.
  • Susser M (1988) Falsification, verification and causal inference in epidemiology: reconsideration in light of Sir Karl Popper's philosophy. Pp. 33-58 in: Rothman KJ (Ed.). Causal Inference. Epidemiology Resources Inc., Chestnut Hill MA.
  • Susser M (2001) Glossary: causality in public health science. Journal of Epidemiology and Community Health 55(6):376-378.
  • Suter GW II (1990) Use of biomarkers in ecological risk assessment. Pp. 419-426 in: McCarthy JF, Shugart LL (Eds). Biomarkers of Environmental Contamination. Lewis Publishers, Ann Arbor MI.
  • Suter GW II (1993) Ecological Risk Assessment. Lewis Publishers, Boca Raton FL. 538 pp.
  • Suter GW II (1996) Abuse of hypothesis testing statistics in ecological risk assessment. Human and Ecological Risk Assessment 2:331-349.
  • Suter GW II (1998) Retrospective assessment, ecoepidemiology, and ecological monitoring. Pp. 177-217 in: Calow P (Ed.). Handbook of Environmental Risk Assessment and Management. Blackwell Scientific, Oxford UK.
  • Suter GW II, Norton SB, Cormier SM (2002) A methodology for inferring the causes of observed impairments in aquatic ecosystems. Environmental Toxicology and Chemistry 21:1101-1111.
  • Taper ML, Lele SR (2004) The Nature of Scientific Evidence: Statistical, Philosophical and Empirical Considerations. University of Chicago Press, Chicago IL.
  • U.S. Department of Health, Education, and Welfare (1964) Smoking and Health: Report of the Advisory Committee to the Surgeon General. U.S. Department of Health, Education, and Welfare, Washington DC. Public Health Service Publication 1103.
  • U.S. EPA (1998) Guidelines for Ecological Risk Assessment. U.S. Environmental Protection Agency, Risk Assessment Forum, Washington DC. EPA/630/R-95/002F.
  • U.S. EPA (2000) Stressor Identification Guidance Document. U.S. Environmental Protection Agency, Washington DC. EPA/822/B-00/025.
  • U.S. EPA (2005) Guidelines for carcinogen risk assessment. U.S. Environmental Protection Agency, Risk Assessment Forum, Washington DC. EPA/630/P-03/001F.
  • Vineis P, Kriebel D (2006) Causal models in epidemiology: past inheritance and genetic future. Environmental Health 5:21.
  • Weed DL (1988) Causal criteria and Popperian refutation. Pp. 15-32 in: Rothman KJ (Ed.). Causal Inference. Epidemiology Resources Inc., Chestnut Hill MA.
  • Weed DL (1997) On the use of causal criteria. International Journal of Epidemiology 26:1137-1141.
  • Weed DL (2000) Interpreting epidemiological evidence: how meta-analysis and causal inference methods are related. International Journal of Epidemiology 29(3):387-390.
  • Weed DL (2001) Methods in epidemiology and public health: does practice match theory? Journal of Epidemiology and Community Health 55(2):104-110.
  • Weed DL (2002) Environmental epidemiology: basics and proof of cause-effect. Toxicology 181-182:399-403.
  • Weed DL (2005) Methodological implication of the precautionary principle. Human and Ecological Risk Assessment 11:107-113.
  • Weed DL (2007) The nature and necessity of scientific judgment. Journal of Law and Policy.
  • Weed DL, Hursting SD (1998) Biologic plausibility in causal inference: current method and practice. American Journal of Epidemiology 147(5):415-425.
  • Weinberg S (2002) Can science explain everything? Anything? Pp. 258-272 in: Ridley M (Ed.). The Best American Science Writing 2002. Harper Collins, New York NY.
  • Westman WE (1979) Oxidant effects on California coastal sage scrub. Science 205:1001-1003.
  • Westman WE (1981) Factors influencing the distribution of species of California coastal sage scrub. Ecology 62:439-455.
  • Wolff P (2007) Representing causation. Journal of Experimental Psychology: General 136:82-111.
  • Woodman JN, Cowling EB (1987) Airborne chemicals and forest health. Environmental Science & Technology 21:120-126.
  • Woodward J (2003) Making Things Happen: A Theory of Causal Explanation. Oxford University Press, Oxford UK.
  • Wright S (1920) The relative importance of heredity and environment in determining the piebald pattern of guinea pigs. Proceedings of the National Academy of Sciences USA 6:320-332.
  • Wright S (1921) Correlation and causation. Journal of Agricultural Research 20:557-585.
  • Yerushalmy J, Palmer CE (1959) On the methodology of investigations of etiologic factors in chronic disease. Journal of Chronic Disease 10(1):27-40.
  • Yoder CO, Rankin ET (1995) Biological criteria program development and implementation in Ohio. Pp. 109-144 in: Davis WS, Simon TP (Eds). Biological Assessment and Criteria: Tools for Water Resource Planning and Decision Making. Lewis Publishers, Boca Raton FL.
  • Yoder CO, Rankin ET (1995) Biological response signatures and the area of degradation value: new tools for interpreting multi-metric data. Pp. 263-286 in: Davis WS, Simon TP (Eds). Biological Assessment and Criteria: Tools for Water Resource Planning and Decision Making. Lewis Publishers, Boca Raton FL.
