[ New publication ] Characterizing the Robustness of Science: After the Practice Turn in Philosophy of Science
Axis 6 - PratiScienS Research Project
Springer has published a volume on robustness, to which many members of PratiScienS contributed.
Characterizing the Robustness of Science: After the Practice Turn in Philosophy of Science
Léna Soler, Emiliano Trizio, Thomas Nickles, William Wimsatt (Eds.), Springer
Mature sciences have long been characterized in terms of the "successfulness", "reliability" or "trustworthiness" of their theoretical, experimental or technical accomplishments. Today many philosophers of science talk of "robustness", often without specifying the meaning of this term in a precise way. This lack of clarity is the cause of frequent misunderstandings, since all these notions, and that of robustness in particular, are connected to fundamental issues which concern nothing less than the very nature of science and its specificity with respect to other human practices, the nature of rationality and of scientific progress, and science's claim to be a truth-conducive activity. This book offers for the first time a comprehensive analysis of the problem of robustness and, in general, that of the reliability of science, based on several detailed case studies and on philosophical essays inspired by the so-called practice turn in philosophy of science. Thanks to its rich thematic variety, the book is addressed to the entire science studies community: general philosophers of science, philosophers of physics, biology and the cognitive sciences, and historians, sociologists and anthropologists of science. As the authors carefully explain all the examples discussed in the book, only a general background of scientific knowledge is presupposed.
Table of Contents
Characterizing the Robustness of Science after the Practice Turn in Philosophy of Science
Léna Soler, Emiliano Trizio, Thomas Nickles and William C. Wimsatt (eds.)
Introduction. The Solidity of Scientific Achievements: Structure of the Problem, Difficulties, Philosophical Implications
Léna Soler (Laboratoire d'Histoire des Sciences et de Philosophie – Archives Henri Poincaré, UMR 7117 CNRS – Nancy-Université, Université Nancy 2, 91 avenue de la Libération, BP 454, 54001 Nancy Cedex, France).
The introduction (a) defines robustness and solidity; (b) provides a systematic analysis of the structure of the problem of robustness; (c) stresses several important difficulties, makes suggestions intended to help to overcome them, and points to issues waiting for further work; (d) sketches the philosophical implications related to the solidity problem; (e) gives an overview of the different chapters of the present book.
Chapter 1. Robustness, Reliability, and Overdetermination (1981)
William Wimsatt (Dept. of Philosophy and Conceptual and Historical Studies of Science, The University of Chicago, Center for Philosophy of Science, University of Minnesota, USA. 414 Judd Hall, 5835 S. Kimbark, Chicago, IL., 60637, USA)
Reprinted from M. Brewer and B. Collins (eds.), Scientific Inquiry in the Social Sciences (a festschrift for Donald T. Campbell), San Francisco: Jossey-Bass, 1981, pp. 123–162.
Chapter 2. Robustness: Material, and Inferential, in the Natural and Human Sciences
William Wimsatt (University of Chicago, USA)
I review the scientific situation, with the emergence of population biology, that led Richard Levins to introduce the idea of looking for robust theorems, and the influences that led Donald Campbell to introduce "triangulation". My review ties these two notions together and looks for other convergent methodologies that show some of the same characteristics, which I baptized "robustness analysis". I review the main types, and then turn to a further characterization of material robustness, which has become the primary focus of studies in biology and elsewhere in the last decade. I discuss one key source of this robustness—sexual recombination—and then close with some remarks on robustness, complexity, fragility, and generative entrenchment.
Chapter 3. Achieving Robustness to Confirm Controversial Hypotheses: A Case Study in Cell Biology
Emiliano Trizio (Seattle University, Philosophy Department, USA; Archives Poincaré, Nancy; Archives Husserl, Paris, France).
Recent developments in cellular microscopy provide an interesting example of the role played by what William Wimsatt calls "robustness analysis" in the establishment of experimental results. According to a commonly accepted biochemical model, clathrin-mediated endocytosis (that is, one of the main processes by which external material is internalized by the cell via the plasma membrane) takes place only if the size of the entering object does not exceed 120-150 nm. However, recent studies provide evidence that invasive bacteria whose diameter is far larger than 150 nm can enter host cells in a clathrin-dependent manner. In particular, images obtained by fluorescence microscopy indicate the presence of clathrin molecules and their active role in such processes. Yet, due to the well-known risk that artifacts introduced during the preparation of the sample may influence the results obtained with this technique, the scientific community still does not deem the currently available evidence sufficient to revise the well-established and so far unchallenged model of clathrin-mediated endocytosis. The aim of ongoing research is thus to crosscheck the results of fluorescence microscopy by means of techniques involving transmission electron microscopy. The focus of this paper will be on the methodologies adopted in a type of "correlative microscopy" combining (cryo-) fluorescence microscopy and (cryo-) electron tomography. It will be argued that real cases of robustness analysis offer, in general, a complicated pattern in which multiple derivations are indeed combined, but their independence comes in degrees and the results they yield stand with one another in a relation of partial overlap rather than identity.
It will thus appear that the situation portrayed by Wimsatt's robustness scheme is often to be regarded as an aim to be pursued through a long and stepwise process or even as a regulative ideal directing the researchers' efforts, rather than as a readily available option in their methodological tool-box.
Chapter 4. Multiple Derivability and the Reliability and Stabilization of Theories
H. Nederbragt (Descartes Centre for the History and Philosophy of the Sciences and the Humanities, Utrecht University, The Netherlands).
Multiple derivability (MD) is an inductive strategy to increase the reliability of a theory (Nederbragt, Hist. Phil. Biol. Biomed. Sci., 34:539, 2003). It may be considered as the strategy by which a theory is supported by evidence obtained from two or more independent methods that differ in the background knowledge and technical principles on which they are based. As such, MD is a member of a family of comparable strategies, to which robustness, triangulation and consilience of induction also belong. Triangulation may be roughly defined as the independent use of the same or of different methods to describe an object. Consilience of induction may be described as occurring when a hypothesis explains two or more known or unknown (classes of) independent facts. It may be argued that robustness is the result of MD, triangulation and consilience; this will be investigated in more detail.
Robustness may come in degrees. This can be argued using the definition of MD, in which emphasis is given to the theoretical and technical independence of two methods that make it possible to infer the same theory. The degree to which the two methods differ in their background knowledge and principles determines the degree of robustness. I will confront this with analyses of replication and confirmation.
Finally, obtaining robustness by MD may not always be possible. I will illustrate this by discussing a case of immunohistochemical staining of microscope slides. Some robustness on the level of the method itself may be possible, but not on the level of the theory. In that case the stability of the theory depends on social interactions between the theory, the scientist and the scientific community.
Chapter 5. Robustness of an Experimental Result: The Example of the Tests of Bell's Inequalities
Catherine Dufour (Institut Jean Lamour, P2M Department, UMR 7198, Université Henri Poincaré Nancy, BP 239, 54506 Vandoeuvre Cedex, France).
Bell's inequalities provide a quantitative criterion for testing experimentally the local hidden variables theories (LHVT) versus standard quantum mechanics (SQM). From the early 1970s to the present, a huge number of experimental tests have been performed. We will discuss their independence. The outcomes – except one – are consistent with SQM and inconsistent with LHVT. At first glance, one can consider that the result of the experimental tests of Bell's inequalities is robust if one follows the statement of Wimsatt (1981): "the robustness of a result is characterized by its invariance with respect to a great number of independent derivations". This opinion is implicitly shared by many physicists. However, real experiments differ from the ideal experiment used to derive Bell's theorem in several respects. Two kinds of problems are mainly emphasized. First, in all the experiments an additional assumption is needed, because part of the experimental set-up is not 100% efficient; this leads to the detection loophole. Second, the experiments do not fulfill one of the requirements of the theorem, for example the locality condition; this leads to the locality loophole. Consequently, one cannot strictly speaking conclude that the experimental tests have ruled out the LHVT.
We argue that, for experimental tests of a given theoretical question to be robust, one has to consider the validity of the various independent derivations carefully. In order to find a way to increase robustness, we will discuss the following questions: Do both loopholes have the same importance? Are they both crucial? Is an ultimate experiment closing both loopholes simultaneously necessary to conclude that the result favouring SQM is robust? Or, are a couple of experiments, each one closing a given loophole, enough?
Chapter 6. Scientific Images and Robustness
Catherine Allamel-Raffin and Jean-Luc Gangloff (IRIST, Université de Strasbourg, France)
According to W. C. Wimsatt, the robustness of an experimental result relies on the scientist's use of multiple independent derivations. This definition corresponds to what a laboratory ethnographer may observe concerning research practices in astrophysics. In this field, establishing experimental results or detecting new entities most commonly requires images produced by telescopes functioning on different physical principles.
This study will focus on a specific astrophysics paper in order to demonstrate that: 1) images, contrary to what is usually believed, play a central role in the argumentation, with the main text only serving as a long commentary on the images; 2) researchers establish a series of converging proofs in order to elaborate their conclusions. In other words, we will insist on the fact that it is an inter-instrumental procedure that allows the community of researchers to consider their results as true, until proven otherwise, as suggested by the fallibilist perspective which prevails in scientific practice and to which W. C. Wimsatt subscribes.
Chapter 7. Are we still Babylonians? The Structure of the Foundations of Mathematics from a Wimsattian Perspective
Ralf Krömer (University of Siegen, Germany).
We will investigate the usefulness of Wimsatt's concept of robustness for mathematics. In the experimental sciences, there are no demonstrations in the strict sense but only 'confirmations' of various types of the propositions one believes in. Wimsatt stressed that our conviction in a proposition grows with the number of independent and convergent confirmations. In the paper, we shall not discuss the difference between mathematical proof and confirmation in the experimental sciences, but we shall investigate whether and to what degree Wimsattian robustness is at issue in the practice of mathematicians, namely in the discussion of propositions considered as very useful, desirable, likely etc. but found to be logically independent of the usual well-established bases of deduction. We will study two examples: the proposition P asserting the consistency of set theory (base of deduction: ZFC), and the proposition asserting the relative consistency of a certain large cardinal axiom (base of deduction: ZFC+P). In these cases, one can observe that practitioners make use of non-necessary but multiple, independent and convergent confirmations. The second example might seem quite technical, but its discussion is useful for judging the relevance of such a concept of robustness for mathematics, because it concerns a mathematical discipline (category theory) which is very important but which cannot so far claim to have reached a state of development rendering unlikely the future discovery of contradictions.
Chapter 8. Rerum Concordia Discors: Robustness and Discordant Multimodal Evidence
Jacob Stegenga (University of Toronto, Room 316, 91 Charles St W, Toronto, ON, Canada, M5S 1K7)
Rain today, I reckon, given the grey clouds above, the falling barometer, and after all, it is an autumn day in London. My conjecture is supported with multimodal evidence: the clouds, the barometer, the season. The term "multimodal evidence" will be unfamiliar to most, though it is a common intuition that multimodal evidence is valuable. A "mode" is a way of finding out about the world; a type of evidence; a technique or study design. We usually have evidence for or against a hypothesis which comes from a variety of different modes; I call this multimodal evidence. For example, when devising his laws of motion, Newton had evidence on the orbits of the moons of Jupiter and Saturn, the patterns of spring and neap tides at the solstice and equinox, and terrestrial dynamics.
Robustness – the state in which hypotheses are supported with concordant multimodal evidence – is one way in which the value of multimodal evidence has been explicated (§II). Another way in which multimodal evidence is said to be valuable is based on the notion of security (§III). An empirical challenge for robustness is that when multimodal evidence is available for a particular hypothesis, it is often discordant (§IV) – discordance is ubiquitous. A conceptual challenge is to know when and how modes are sufficiently independent to count as providing multimodal evidence (§V). A methodological challenge is that to know the impact multimodal evidence should have on our belief in a hypothesis, the multimodal evidence must be assessed and amalgamated by an amalgamation function (§VI). I argue that an amalgamation function for multimodal evidence should do the following: evidence from multiple modes should be assessed on prior criteria (quality of mode), relative criteria (relevance of mode to a given hypothesis) and posterior criteria (salience of evidence from particular modes and concordance/discordance of evidence between modes); the assessed evidence should be amalgamated; and the output of the function should be a constraint on our justified credence. Without principled methods of amalgamating multimodal evidence, appeals to multimodal evidence are vague and inconclusive. Such amalgamation functions could provide more rigorous guidance for our belief in a hypothesis when presented with multimodal evidence.
Chapter 9. Robustness of Results and Robustness of Derivations: the Internal Architecture of a Solid Experimental Proof
Léna Soler (Laboratoire d'Histoire des Sciences et de Philosophie – Archives Henri Poincaré, UMR 7117 CNRS – Nancy-Université, Université Nancy 2, 91 avenue de la Libération, BP 454, 54001 Nancy Cedex, France)
According to Wimsatt's definition, the robustness of a result is due to its being derivable from multiple, partially independent methods, and increases with the number of such methods. In the case of the experimental sciences, the multiple methods will amount to different types of experiments. But clearly, this holds only if the convergent derivations involved are genuine arguments, that is, if each of them can be considered as sufficiently reliable or solid. Thus, the issue of the robustness of results inevitably leads to a reflection on the robustness of methods.
What is it, then, that makes a method, and in particular an experimental procedure, robust? Despite possible worries of circularity, part of the answer lies, without doubt, in a sort of reversed formulation of Wimsatt's definition: the solidity of a method increases with the number of independent results, previously established as robust, that it enables one to derive. But this seems to be only part of the answer. Intuitively at least, one expects that the solidity of a method could also be linked to specific properties of this method, to features that are more "intrinsic" than the results it allows one to derive.
In this paper, I try to probe the nature of these 'intrinsic' characters, through a discussion of an example connected to the discovery of weak neutral currents in particle physics. More precisely, the method that will be investigated is an experimental procedure developed at the beginning of the 1970s, which uses a giant bubble chamber named Gargamelle, and which is commonly believed to have contributed to establishing the existence of weak neutral currents. I analyze the content of the Gargamelle experimental 'proof' and bring to light its internal architecture. Then I examine the relations between this architecture and the Wimsattian scheme of invariance under multiple determinations. Thereafter, I specify this scheme, and draw some general conclusions about the robustness of methods and results. Finally, some implications with respect to the issues of scientific realism and the contingency of scientific results are sketched.
Chapter 10. Multiple Means of Determination and Multiple Constraints of Construction: Robustness and Strategies for Modeling Macromolecular Objects
Frédéric Wieber (Laboratoire d'Histoire des Sciences et de Philosophie – Archives Henri Poincaré, UMR 7117 CNRS – Nancy-Université, Université Nancy 2, 91 avenue de la Libération, BP 454, 54001 Nancy Cedex, France).
The field of protein chemistry was transformed during the 1960s and 1970s by the development of theoretical and computational methods. In this paper, I present and analyze, as a case study, one of these procedures. My aim is to describe how protein scientists have used and mutually adjusted limited resources in order to construct what became, for them, an efficient procedure for modeling protein structure. In order to specify the modeling strategy they devised, as well as the characteristics of the models constructed, I discuss, first of all, the analytical framework proposed by Levins (1966), within which he analyzes modeling practices in population biology by delineating three strategies of modeling, and in which he introduces the concept of robustness analysis. I then describe the tension between the limitations protein scientists encountered, their aims in constructing models, and the complexity of proteinic objects. Next, I analyze precisely, using Levins' framework, the nature of the modeling procedure, by showing how the limited theoretical, empirical and computational resources used impact the nature of the protein models and are interrelated. I conclude by discussing Levins' robustness analysis and the more general concept of robustness introduced by Wimsatt. I show that Levins' analytical framework is an interesting tool for characterizing the modeling strategy used by protein scientists and for contrasting this strategy with the one Levins prefers as a population biology modeler. If the fruitfulness and efficiency of this last strategy are notably linked with robustness analysis, the fruitfulness of protein scientists' modeling strategy is associated not with robustness analysis but with the stabilization of the modeling procedure, which cannot be described using a general robustness scheme.
I propose to consider that this procedure has acquired stability within a process of mutual and iterative adjustment of interrelated theoretical, empirical and computational constraints.
Chapter 11. Understanding Scientific Practices: The Role of Robustness-Notions
Mieke Boon (University of Twente, Department of Philosophy, Cubicus, PO Box 217, 7500 AE Enschede, The Netherlands).
This article explores the role of 'robustness-notions' in an account of the engineering sciences. The engineering sciences aim at technological production of, and intervention with phenomena relevant to the (dis-)functioning of materials and technological devices, by means of scientific understanding thereof.
It is proposed that different kinds of robustness-notions enable and guide scientific research: (1) Robustness is a metaphysical belief that we have about the physical world – i.e., we believe that the world is robust in the sense that the same physical conditions will always produce the same effects. (2) 'Same conditions – same effects' functions as a regulative principle that enables and guides scientific research because it points to, and justifies, methodological notions. (3) Repetition, variance and multiple-determination function as methodological criteria for scientific methods that justify the acceptance of epistemological and ontological results. (4) Reproducibility and stability function as ontological criteria for the acceptance of phenomena described by A→B. (5) Reliability functions as an epistemological criterion for the acceptance of epistemological results, in particular law-like knowledge of a conditional form: "A→B, provided C_device, and unless other known and/or unknown causally relevant conditions."
The crucial question is how different kinds of robustness-notions are related and how they play their part in the production and acceptance of scientific results. The focus is on the production and acceptance of physical phenomena and the rule-like knowledge thereof. Based on an analysis of how philosophy of science traditionally justified scientific knowledge, I propose a general schema that specifies how inferences to the claim that a scientific result has a certain epistemological property (such as truth) are justified by scientific methods that meet specific methodological criteria. It is proposed that 'same conditions – same effects' as a regulative criterion justifies 'repetition, variation and multiple-determination' as methodological criteria for the production and acceptance of (ontological and epistemological) scientific results.
Chapter 12. The Robustness of Science and the Dance of Agency
Andrew Pickering (Dept of Sociology and Philosophy, University of Exeter, Amory Building, Rennes Drive, Exeter EX4 4RJ, UK; Dept of Sociology, Kyung Hee University, Seoul, Korea)
This essay examines the notion of 'robustness' from the perspective developed in my book, The Mangle of Practice. The central concept is that of an emergent and decentred dance of agency between scientists and the material world—nature, instruments, machines. The novel argument here is that in science such dances have the telos of extinguishing themselves—of making a clean split between human scientists and 'free-standing machines'—of making the world dual. That this end is sometimes more or less accomplished points to a degree of nonhuman stability in the material culture of science which is the ontological basis of its robustness.
I extend the discussion to include the epistemological components of science and their robustness, and conclude with a consideration of the relation between robustness, uniqueness and contingency. Ontological robustness is the achievement of a specific 'machinic grip' on the world, and I argue, with examples at both micro- and macro-scales, that we should not assume that there is one best machinic grip that science is destined to find. My suggestion is that a novel and non-representational sort of 'machinic incommensurability' continually bubbles up in science, at all scales, large and small.
Chapter 13. Dynamic Robustness and Design in Nature and Artifact
Thomas Nickles (Department of Philosophy, University of Nevada, Reno, NV, USA)
A goal of this volume is to build on the pathbreaking work by experts such as Bill Wimsatt and Andy Pickering in order to develop a more robust account of robustness. However, the idea may be so multifaceted that no single account will do. I shall canvass a few basic ideas of robustness, popular and technical, and then address such questions as: What is the relation of robustness to fragility or brittleness? Can a system be completely robust? Are decentralized, distributed systems potentially more robust than centralized ones? Which network topologies are more robust than others? What, if anything, do power laws have to do with robustness and with Wimsatt's "generative entrenchment"? Is there an interesting connection between robustness and design? Robustness and innovation? Robustness and scientific revolutions? Robustness, heuristics, experimental design, and novel prediction? Robustness and realism? My central claim, supported by a diverse body of literature, is that robustness is deeply related to fragility. Rather than vanquishing fragility, complex robustness shifts its location. More than that, complex robustness can actually generate fragility where none existed before.