Abstract
In the Introduction I have briefly mentioned the basic differences between the statement and the non-statement approaches. The advocates of the statement approach depict scientific theories as axiomatised deductively closed sets of sentences within some appropriate syntactic system, and discuss the “empirical interpretations” of these theories in terms of some set of correspondence rules or bridge principles. The defenders of the non-statement approach, on the other hand, do not view the formal formulation of scientific theories in some appropriate language as the most useful characterisation of “theories”. Rather, they depict these “theories” in terms of sets of mathematical structures that are the models of the theory in question.
Sections from this chapter are published as Ruttkamp (1999a).
Notes: Chapter 4
See Suppes (1954, p.244).
Suppes (quoted in Wójcicki in Humphreys, 1994, pp.148,149) writes: “The more I think about scientific practice and reflect on how to give an accurate account of the complicated processes that go into experimentation, the more I am persuaded that there are a large number of distinctions needed to describe experimentation thoroughly, especially as data are purified for quantitative, and even more statistical, analysis. It is a long way from running around the laboratory doing one thing and then another, to having a set of data as printout or on a computer screen ready for analysis. That process still needs much more thorough attention... gruesome details of exactly how data are purified and selected for analysis, not to speak of details of how they are generated, which itself may involve, as equipment becomes increasingly complicated, many different independent tests of reliability and accuracy of equipment”.
Issues concerned with this stage of science are addressed, for instance, by Peter Galison in his book entitled How experiments end (1987), and the details have now been worked out to unbelievable depths in his follow-up Image and logic (1997).
“The kind of co-ordinating definitions often described by philosophers have their place in popular philosophical expositions of theories, but in the actual practice of testing scientific theories a more elaborate and more sophisticated formal machinery for relating a theory to data is required” (Suppes in Morgenbesser, 1967, p.62).
Establishing a representation theorem for a theory means proving that there is a class of models of the theory such that every model of the theory is isomorphic to some member of this class. Suppes (1960, p.295) gives a few examples of such theorems, for instance, Cayley’s theorem that every group is isomorphic to a group of transformations, and Stone’s theorem that every Boolean algebra is isomorphic to a field of sets.
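The general shape of such a theorem can be sketched as follows (a sketch in standard model-theoretic notation, not the author's own formalism):

```latex
% A representation theorem for a theory T asserts that there is a
% distinguished class K of models such that every model of T is
% isomorphic to some member of K:
\exists\, \mathcal{K} \subseteq \mathrm{Mod}(T)\ \text{ such that }\
\forall\, \mathfrak{A} \in \mathrm{Mod}(T)\ \exists\, \mathfrak{B} \in \mathcal{K}:\
\mathfrak{A} \cong \mathfrak{B}.

% Cayley's theorem instantiates this schema for group theory: every group
% (G, \cdot) is isomorphic to a group of transformations of G, via the
% embedding g \mapsto (x \mapsto g \cdot x).
```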
The language used to formulate a particular theory is the same one used to formulate a particular reduct. Recall that a “reduct” in model-theoretic terms is created by omitting from the language and its interpretations some of the relations and functions originally contained in them. This kind of structure thus has the same domain as the model in question but contains only the extensions of the empirical predicates of the model.
Suppes (1988b, p.254) claims that one of the most important and valuable uses of representation theorems in philosophy of science is that they help to increase (scientific) understanding of the represented object.
If a representation theorem is found for one science in terms of a second, the first has been (formally) reduced to the second — e.g. Adams (1959) was the first to give a rigorous proof of reducing rigid body mechanics to particle mechanics.
The “type” of a model is determined by the individual constant symbols, as well as by the relation and function symbols of the axiomatic calculus of the theory in question.
See Mayo (1997) and Galison (1997) for recent discussion of this process.
See Ruttkamp (1997b), and the discussion related to these issues in Chapter 2.
Sneed (1983, p.350) claims that structuralism “is essentially a view about the logical form of the claims of empirical theories and the nature of the predicates that are used to make these claims”. (The notion of ‘predicates’ is taken in the usual set-theoretic sense of characterising the type or species of sets of structures.)
For formalised theories the entire (meta-) mathematical apparatus for studying theories and their models becomes available to the philosopher of science. One example of the tremendous usefulness of this approach is the study of verisimilitude.
Both Stegmüller and Sneed formulated reconstructions of parts of Kuhn’s theory, touching on the role of the scientific community in the development of scientific theories (Stegmüller, 1976; Sneed, 1976). Sneed claims that “... in order to make sense of what Prof Kuhn was telling us about scientific activity... we found it convenient to... employ a concept of scientific theory somewhat different from that commonly used by philosophers of science-in-general” (Sneed, 1976, p.119). He is referring here to the adoption of the “non-statement view” by him and Stegmüller (and their followers). (Of course, Kuhn’s work was not the only motivation in this regard, as Sneed himself acknowledges.) As already pointed out in the above, Sneed and Stegmüller both reconstructed parts of Kuhn’s philosophy of science, focusing especially on the notion of a member of some community “holding” a particular theory, which implies a concentration on the differences between normal and revolutionary science. Briefly, Sneed (1976, p.120) defines “normal change” as change in the body of empirical claims of a theory, while “revolutionary change” consists in the changing of theories themselves. These notions can be very successfully treated by the structuralist programme, and Stegmüller (1973) specifically showed that the relation of reduction can be of significant use in depicting the notion of scientific progress. Kuhn (1976) wrote an article in reaction entitled “Theory change as structure change: Comments on the Sneed formalism”.
Recall that in general, a model is a structure (interpretation) of the form ⟨A₁, ..., Aₘ, R₁, ..., Rₙ⟩, where the Aᵢ are the “basic sets” or domains of the model (the ontology of the theory), and the Rⱼ are relations on the Aᵢ. Remember also that, at least for a language with a sound set of rules, satisfaction of the axioms implies satisfaction of the theory, for any interpretation.
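Schematically, and in standard notation (a sketch, not the author's own formalism):

```latex
% A many-sorted structure with basic sets A_1, ..., A_m and relations
% R_1, ..., R_n over those sets:
\mathfrak{A} = \langle A_1, \ldots, A_m, R_1, \ldots, R_n \rangle,
\qquad R_j \subseteq A_{i_1} \times \cdots \times A_{i_{k_j}}.

% Soundness then gives: if \mathfrak{A} \models \Sigma for the axiom set
% \Sigma, then \mathfrak{A} \models \varphi for every sentence \varphi
% derivable from \Sigma, i.e. \mathfrak{A} \models \mathrm{Cn}(\Sigma).
```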
We all know that a theory usually has many different models, but they all have one thing in common, which Balzer, Moulines and Sneed (1987, p.3) identify as the same structure, while I emphasise also the fact that they are all models of the same (linguistically expressed) theory. A theory offers one formulation which binds together all these models (e.g. think of a theory as a set of field equations). That is why the model-theoretic approach is the one that I choose. This approach offers the possibility to focus on the linguistic nature of the theory as well as on its different models. Be that as it may, I do agree with Balzer, Moulines, and Sneed (1987, p.3) that what is meant by models sharing the same structure, is that they all share the same conceptual framework (i.e. in my terms, they all have the same logical type or signature) and they all satisfy the same laws (theory).
Sometimes also referred to as “conceptual determinations”.
“... this distinction may be understood as the model-theoretic explication of the distinction between the ‘analytic’ and the ‘synthetic’ components within a particular theory” (Moulines in Schurz & Dorn, 1991, p.318). Or perhaps, in my terms, this may be viewed as the distinction between the themata and related context-specific factors, in so far as these co-determine the logical type (signature) of structures, and the linguistic formulation of the empirical claims suggested by the interpretation of the empirical data in question.
See Balzer, Moulines, Sneed (1987, pp.19–20).
Briefly, the intensional description of I is a description in terms of the properties of I, while the corresponding extension of the set I denotes the elements of I, i.e. those elements of I that have these (intensional) properties.
In my terms, the elements of I would be representations of systems of the “real things”.
Without a distinction between theoretical and non-theoretical terms, structuralists simply say that a particular intended application is an element of M. If such a distinction is made, they say that a particular intended application belongs to the class of partial potential models, Mₚₚ, which is formally derivable from Mₚ.
Or of most depictions of models of theories in formal terms, for that matter.
Approximation has been left out of this discussion, simply because the inclusion of approximate relationships would only complicate matters needlessly, since this discussion is meant as a brief introduction to the structuralist programme. I think it suffices for my purposes to make it clear that within the structuralist programme all approximate relations can be defined formally and are definitely taken into account in their reconstructions of empirical theories.
See Moulines in Schurz & Dorn (1991, p.324).
This simply means that the set I has “a life of its own” (ibid.), in the sense that its endurance is not dependent on the endurance of its members. The issue is especially complex, because the “life” or nature of the class I that endures through time (or history) depends on the nature of the scientific community to which it is linked, which, in its turn, is also a “genidentical” entity.
Note, however, that although the set of intended applications cannot be depicted in purely formal terms, constraints and inter-theoretical links can be formulated formally by using structural descriptions of models of the theory.
“Intended” here refers not to the formulation stages of theory development as I have set them out in Chapter 2; rather, Balzer et al. want to focus on the particular application (interpretation in my terms) of a specific theory to a certain real system or “range of phenomena”.
Note again that Sneed and company take a theory element as the core of a theory plus its range of intended applications. (See Sneed (1976).)
See Sneed (1976).
“The intuitive idea is that a distinction may be drawn between what is ruled out by the structure of the theory’s models M and what is ruled out by restriction on the way that structure is applied ‘across’ a number of different applications C” (Sneed, 1976, p.124) — “Local applications [of theory T] may overlap in space and time, they may influence each other (even if they are separated in time and space), certain properties of T’s objects may remain the same if the objects are transferred from one application to another one. Any connection of this sort will be captured by what we call constraints” (Balzer, Moulines, Sneed, 1987, p.41).
See Balzer, Moulines, Sneed (1987, pp.46ff., Sections II.2.3 and II.4) for more detail.
The sections of the structuralist programme dealing with these issues are very technical; see Balzer, Moulines, and Sneed (1987, pp.57ff., Section II.3.2; pp.73ff., Section II.3.4) for more detail.
See also Sneed (1976), and Balzer, Moulines, Sneed (1987).
This is done (ibid.) in such a way that the whole array of theoretical components satisfies the constraints C.
See Balzer, Moulines, and Sneed (1987, pp.57ff).
All of the above is of course set out in idealised terms, since it is not, in this context, taken into account that the empirical claim associated with a particular theory element will always, according to the structuralists, be only approximately true. What is relevant here is not the overwhelming literature on the technical aspects of the question of approximate truth, but rather, and much more simply, investigating exactly what the structuralists envisage the theory core’s function to be in all of this. The briefest answer is, obviously, that the theory core identifies the theory content. More precisely, the theory core defines a set of possible situations or “ways things could be” (Sneed in Humphreys, 1994, p.195), called content(K). This notion of “ways things could be” again links up with the need I see for preferential analyses of groups of empirically equivalent models, discussed in Chapter 2. I shall not go into any more detail as far as these issues are concerned though, given the scope of this chapter.
See Beth (1949), (1961), and Van Fraassen (1970).
See Suppe (1967), (1973), and (1989).
Van Fraassen (1970) points out that Wilfrid Sellars has since the late fifties been arguing for precisely such a meaning structure for the language of science. See Sellars (1957, pp.225–308), and (1963, Chapters 4, 10 and 11).
See Chapter 2 again.
The context-dependency of mathematical models does, however, also enter into their view of scientific theories in so far as they, especially Suppe (1973, pp.151ff.), discuss the “extra-theoretical factors” determining for instance the experimental design of a theory. These factors include “regularities” (ibid.) such as other theories, laws or known regularities about the phenomena in question.
Giere (1983, p.271) explains that a physical system “... is defined by a set of state variables and system laws that specify the physically possible states of the system and perhaps also its possible evolutions”. Giere offers the example of classical thermodynamics which may be understood as defining an ideal gas in terms of three variables: pressure, volume, and temperature, and then specifying that these are related by the law PV = KT. (See also Suppe, 1973, p.132.)
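Giere's characterisation can be sketched in code: a physical system is given by its state variables together with a law that picks out the physically possible states. The function name, the value of the constant k, and the numerical tolerance below are illustrative assumptions, not taken from the text.

```python
# A physical system a la Giere: state variables (P, V, T) plus a system
# law (here the ideal gas law PV = kT) that specifies which states are
# physically possible.

def ideal_gas_law(state, k=1.0, tol=1e-9):
    """True iff the state (P, V, T) satisfies the law PV = kT."""
    P, V, T = state
    return abs(P * V - k * T) < tol

# The physically possible states are just those states the law admits:
candidate_states = [(2.0, 3.0, 6.0), (1.0, 1.0, 5.0)]
possible = [s for s in candidate_states if ideal_gas_law(s)]
```

The point of the sketch is only that the law acts as a filter on the state-space spanned by the variables, exactly as the quoted definition suggests.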
Van Fraassen (1970, pp.130–132) discusses three types of law. Laws of succession are relations of succession indicating the various sequences of states various physical systems will assume over time. These relations are such that the sequences may be deterministic or statistically determined, continuous or discrete. These laws (in so far as they are non-statistical) thus select the physically possible trajectories in a particular state-space. Laws of coexistence are equivalence relations indicating which states are equivalent to which others, if the associated law is deterministic; if it is statistical, it indicates which states are equally probable, i.e. it selects the physically possible subsets of the given state-space. Laws of interaction (either deterministic or statistical) determine which states result from the interaction between various systems; these laws are combinations of the first two kinds of law.
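A discrete, deterministic law of succession can be sketched as a map on a state-space whose orbits are the physically possible trajectories. The specific dynamics below (simple decay by a constant factor) is an illustrative assumption, not an example from the text.

```python
# A deterministic law of succession: the next state is fully determined
# by the current one, so iterating the law selects the physically
# possible trajectories through the state-space.

def succession(state, decay=0.5):
    """Law of succession: maps the current state to the next state."""
    return state * decay

def trajectory(initial, steps, law=succession):
    """The physically possible trajectory from `initial`, `steps` steps long."""
    states = [initial]
    for _ in range(steps):
        states.append(law(states[-1]))
    return states
```

A statistical law of succession would instead assign probabilities to the possible successor states; the deterministic case is simply the special case in which one successor gets probability one.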
See Beth, E., “Carnap’s views on the advantages of constructed systems over natural languages in the philosophy of science” (especially pp.479–480) in Schilpp, P.A. (ed.). 1963. The philosophy of Rudolf Carnap.
“Suppose that for a given kind of physical system, the theory specifies a set E of elementary statements, state-space H, and satisfaction function h. We call the triple L = <E, H, h> a semi-interpreted language” (Van Fraassen, 1970, p.335).
Especially non-relativistic physical theories typically use mathematical models to represent the behaviour of a certain kind of physical system — Van Fraassen (1970:328) gives as examples the use of Hilbert space in quantum mechanics, and the use of Euclidean 2n-space [sic] as phase-space for n particles in classical mechanics. (He probably means “... for a system with n degrees of freedom ...”. For n particles 6n-space is needed!)
This is much in line with my idea of an interpretative model of a theory and relations linking it to some real system via some empirical model offering a “snapshot” view of the real system at a specific time.
“For each elementary statement U there is a region h(U) of the state-spaces H such that U is true if and only if the system’s actual state is represented by an element of h(U). (We also say that these elements satisfy U...)” (Van Fraassen, 1970, p.329).
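The satisfaction function just quoted can be sketched as a mapping from elementary statements to regions of a state-space, with truth defined as membership of the actual state in the assigned region. The toy state-space, the statements, and the regions below are illustrative assumptions.

```python
# Van Fraassen's satisfaction function h: each elementary statement U is
# assigned a region h(U) of the state-space H, and U is true of a system
# iff the system's actual state lies in h(U).

H = set(range(10))                 # a toy discrete state-space
h = {
    "m is low":  {0, 1, 2},        # h(U): the region of H whose states satisfy U
    "m is high": {7, 8, 9},
}

def true_of(statement, actual_state):
    """An elementary statement U is true iff the actual state is in h(U)."""
    return actual_state in h[statement]
```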
This notion of a “satisfaction function” characterises the kind of relation I see involved in determining the possible isomorphic embeddings of empirical models into interpretative models of some theory.
See Van Fraassen (1970, p.337).
A science like physics could not exist at all if it did not use rich mathematical structures and the deductive methodology of mathematics, restricting itself to empirical models only. For example, no physical measuring process can ever produce an irrational number as result, although interpreting some of these results necessarily assumes the existence of such numbers.
See Niiniluoto (1999, pp.115ff.) for a discussion of the anti-realist aspects of constructive empiricism.
This is yet another appropriate moment to refer you at least to Mayo (1997) and Galison (1997).
“For our purposes, a scientific theory has two components. One is a family of [theoretical] models.... The second is a set of theoretical hypotheses that pick out things in the real world that may fit one or another of the models in the family” (Giere, 1991, p.29).
See my discussion of this issue in Chapter 5.
See Wartofsky (1979, p.19).
He (Giere, 1984:13) gives as example of a theoretical hypothesis the statement that “The positions and velocities of the earth and moon in the earth-moon system are very close to those of a two-body particle Newtonian model (with specified initial conditions)”.
Another example of a notion of approximation by degree, as noted by Giere (1985:80), can be found in Beth’s state-space approach in terms of the value r of a magnitude m at a time t in a given state-space. This notion also fits both my and Suppes’s approaches to these “empirical” relations.
And so they should be, given that they form part of human scientific knowledge which, from the beginning, is based on activities of abstraction, because that simply is how we humans know anything.
We have already agreed that fundamental laws are indeed too simple and abstract to be directly about any aspect of reality.
She also remarks in Cushing, Delaney, and Gutting (1984, p.135) that “... abstractness and scientific realism are two different issues, and not all varieties of abstractness bear equally on questions of descriptive completeness, accuracy, and truth. This is so with our notion [of abstraction], where notions become more and more abstract as less and less explanatory information about them is given”.
Both Cartwright and I view these models as idealisations, although we differ about the implications of the ideal nature of these models for the process of science, as will be discussed below.
Cartwright (1983, Chapter 8).
Cartwright’s “phenomenological laws” remind one very much of Suppes’s (1989) models of data.
See also Chalmers (1987, p.82) for confirmation of this interpretation.
See also Chapters 2 and 5.
Well, this is true of fundamental laws too — Newton says openly he offers no hypotheses concerning the reasons why his laws of gravitation are true.
See also my notes concerning Newton’s mechanics offering an explanation of Kepler’s laws in Chapter 2.
Cartwright claims (1983, p.150) that a model realistic in this second sense needs more bridge principles than one realistic in the first sense. The best explanation for this is, I think, the fact that she sees the mathematical representation as being closer to, or perhaps mainly identical to, the theory.
As she explains (Cartwright, 1983, pp.152–154): “The second definition of ‘simulacrum’ in the Oxford English Dictionary says that a simulacrum is ‘something having merely the form or appearance of a certain thing, without possessing its substance or proper qualities’”.
Cartwright illustrates this accusation with the following remarks: “Not all radiometers that meet Maxwell’s two descriptions have the distribution function Maxwell writes down; most have many other relevant features besides. This will probably continue to be true no matter how many further corrections we add. In general... the bridge law between the medium of a radiometer and a proposed distribution can hold only ceteris paribus” (Cartwright, 1983, p.155).
There are cases in which we believe phenomenological laws to be soundly deducible from a certain set of fundamental laws, but find that the actual deduction is extremely difficult. These cases, however, do not prove in any way either that phenomenological laws “typically” cannot be deduced from fundamental ones, or that the “all things being equal” and additional assumptions needed in such deductions may be found to “contradict” the original (set of) fundamental law(s).
The main means of communication in science still is language (together with diagrams, physical models, demonstrations, films, and so on).
© 2002 Springer Science+Business Media Dordrecht
Ruttkamp, E. (2002). Variations on the Non-Statement View of Science. In: A Model-Theoretic Realist Interpretation of Science. Synthese Library, vol 311. Springer, Dordrecht. https://doi.org/10.1007/978-94-017-0583-7_4