Sep 11, 2010

The gravitational force from invisible matter, known as dark matter, may have helped speed the formation of structure in the universe. Observations from the Hubble Space Telescope have revealed galaxies older than astronomers expected, reducing the interval between the big bang and the formation of galaxies or clusters of galaxies.


Beginning about 2 billion years after the big bang and continuing for roughly another 2 billion years, quasars formed as active giant black holes in the cores of galaxies. These quasars gave off radiation as they consumed matter from nearby galaxies. Few quasars appear close to Earth, so quasars must be a feature of the earlier universe.

A population of stars formed out of the interstellar gas and dust that contracted to form galaxies. This first population, known as Population II, was made up almost entirely of hydrogen and helium. These stars evolved and gave off heavier elements that were made through fusion in their cores or that were formed as the stars exploded as supernovas. The later generation of stars, to which the Sun belongs, is known as Population I and contains heavy elements formed by the earlier population. The Sun formed about 5 billion years ago and is almost halfway through its 11-billion-year lifetime.

About 4.6 billion years ago, our solar system formed. The oldest fossils of living organisms date from about 3.5 billion years ago and represent cyanobacteria. Life evolved, and 65 million years ago the dinosaurs and many other species became extinct, probably from a catastrophic meteor impact. Modern humans evolved no earlier than a few hundred thousand years ago, a blink of an eye on the cosmic timescale.

Will the universe expand forever or eventually stop expanding and collapse in on itself? Jay M. Pasachoff, professor of astronomy at Williams College in Williamstown, Massachusetts, confronts this question in this discussion of cosmology. Whether the universe will go on expanding forever depends on whether its density is high enough - whether it reaches the so-called critical density - to halt or reverse the expansion, and the answer to that question may, in turn, depend on the existence of something the German-born American physicist Albert Einstein once labelled the cosmological constant.

New technology allows astronomers to peer further into the universe than ever before. The science of cosmology, the study of the universe as a whole, has become an observational science. Scientists may now verify, modify, or disprove theories that were partially based on guesswork.

In the 1920s, the early days of modern cosmology, it took an astronomer all night at a telescope to observe a single galaxy. Current surveys of the sky will likely compile data for a million different galaxies within a few years. Building upon a century of advances in cosmology, our understanding of the universe should continue to accelerate.

Modern cosmology began with the studies of Edwin Hubble, who measured the speeds at which galaxies move toward or away from us in the mid-1920s. By observing redshift - the change in wavelength of the light that galaxies give off as they move away from us - Hubble realized that though the nearest galaxies are approaching us, all distant galaxies are receding. The most distant galaxies are receding most rapidly. This observation is consistent with the characteristics of an expanding universe. Since 1929 an expanding universe has been the first and most basic pillar of cosmology.
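In modern notation (a standard definition, not spelled out in the article), the redshift z and its Doppler interpretation for small velocities are

$$z = \frac{\lambda_{\text{observed}} - \lambda_{\text{emitted}}}{\lambda_{\text{emitted}}} \approx \frac{v}{c} \qquad (v \ll c),$$

so a larger shift in wavelength corresponds to a larger recession speed v.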

In 1990 the National Aeronautics and Space Administration (NASA) launched the Hubble Space Telescope (HST), named to honour the pioneer of cosmology. Appropriately, determining the rate at which the universe expands was one of the telescope's major tasks.

One of the HST's key projects was to study Cepheid variables (stars that vary regularly in brightness) and to measure distances in space. Another set of Hubble's observations focused on supernovae, exploding stars that can be seen at very great distances because they are so bright. Studies of supernovae in other galaxies reveal the distances to those galaxies.

The term big bang refers to the idea that the expanding universe can be traced back in time to an initial explosion. In the mid-1960s, physicists found important evidence of the big bang when they detected faint microwave radiation coming from every part of the sky. Astronomers think this radiation originated about 300,000 years after the big bang, when the universe thinned enough to become transparent. The existence of cosmic microwave background radiation, and its interpretation, is the second pillar of modern cosmology.

Also in the 1960s, astronomers realized that the lightest of the elements, including hydrogen, helium, and lithium, were formed mainly at the time of the big bang. Most important, deuterium (the form of hydrogen with a neutron added to normal hydrogen's single proton) was formed only in the era of nucleosynthesis. This era started about one second after the universe was formed and lasted through the first three minutes or so after the big bang. No sources of deuterium are known since that early epoch. The current ratio of deuterium to ordinary hydrogen depends on how dense the universe was at that early time, so studies of the deuterium that can now be detected indicate how much matter the universe contains. These studies of the origin of the light elements are the third pillar of modern cosmology.

Until recently many astronomers disagreed on whether the universe was expected to expand forever or eventually stop expanding and collapse in on itself in a “big crunch.”

At the General Assembly of the International Astronomical Union (IAU) held in August 2000, a consistent picture of cosmology emerged. This picture depends on the current measured value for the expansion rate of the universe and on the density of the universe as calculated from the abundances of the light elements. The most recent studies of distant supernovae seem to show that the universe's expansion is accelerating, not slowing. Astronomers have recently proposed a theoretical type of negative energy - which would provide a force that opposes the attraction of gravity - to explain the accelerating universe.

For decades scientists have debated the rate at which the universe is expanding. We know that the further away a galaxy is, the faster it moves away from us. The question is: How fast are galaxies receding for each unit of distance they are away from us? The current value, as announced at the IAU meeting, is 75 km/s/Mpc; that is, for each megaparsec of distance from us (where a megaparsec is 3.26 million light-years), the speed of expansion increases by 75 kilometres per second.
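As a quick check of the arithmetic, the sketch below applies Hubble's law, v = H0 × d, with the quoted value of H0, and also computes its reciprocal (the "Hubble time"), which sets a rough age scale for the expansion. The numbers and function names are illustrative, not from the article:

```python
# Hubble's law v = H0 * d, with H0 = 75 km/s/Mpc as quoted above.
MPC_KM = 3.0857e19            # kilometres in one megaparsec
H0 = 75.0                     # km/s per Mpc

def recession_speed(distance_mpc):
    """Recession speed in km/s for a galaxy at the given distance in Mpc."""
    return H0 * distance_mpc

print(recession_speed(100.0))   # a galaxy 100 Mpc away recedes at ~7,500 km/s

# The reciprocal of H0 sets a rough age scale for the expansion.
H0_per_s = H0 / MPC_KM                     # convert H0 to 1/s
hubble_time_yr = 1.0 / H0_per_s / 3.156e7  # seconds -> years
print(f"Hubble time: {hubble_time_yr:.2e} years")   # ~1.3e10, i.e. ~13 billion
```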

What’s out there, exactly?

In the picture of expansion held until recently, astronomers thought the universe contained just enough matter and energy so that it would expand forever but expand at a slower and slower rate as time went on. The density of matter and energy necessary for this to happen is known as the critical density.

Astronomers now think that only 5 percent or so of the critical density of the universe is made of ordinary matter. Another 25 percent or so of the critical density is made of dark matter, a type of matter that has gravity but that has not been otherwise detected. The accelerating universe, further, shows that the remaining 70 percent of the critical density is made of a strange kind of energy, perhaps that known as the cosmological constant, an idea tentatively invoked and then abandoned by Albert Einstein in equations for his general theory of relativity.
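The critical density itself follows from the expansion rate. A minimal sketch, using the standard Friedmann-model expression rho_c = 3H0²/(8πG), which the article does not give explicitly:

```python
import math

# Critical density rho_c = 3 * H0^2 / (8 * pi * G).
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
H0 = 75.0 / 3.0857e19      # 75 km/s/Mpc converted to 1/s

rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density ~ {rho_c:.1e} kg/m^3")   # ~1e-26 kg/m^3

# The composition described above, as fractions of the critical density:
ordinary, dark_matter, dark_energy = 0.05, 0.25, 0.70
print(ordinary + dark_matter + dark_energy)       # 1.0
```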

Some may be puzzled: Didn't we learn all about the foundations of physics when we were still at school? The answer is "yes" or "no," depending on the interpretation. We have become acquainted with concepts and general relations that enable us to comprehend an immense range of experiences and make them accessible to mathematical treatment. In a certain sense these concepts and relations are probably even final. This is true, for example, of the laws of light refraction, of the relations of classical thermodynamics as far as it is based on the concepts of pressure, volume, temperature, heat and work, and of the hypothesis of the nonexistence of a perpetual motion machine.

What, then, impels us to devise theory after theory? Why do we devise theories at all? The answer to the latter question is simply: Because we enjoy "comprehending," i.e., reducing phenomena by the process of logic to something already known or (apparently) evident. New theories are first of all necessary when we encounter new facts which cannot be "explained" by existing theories. Nevertheless, this motivation for setting up new theories is, so to speak, trivial, imposed from without. There is another, more subtle motive of no less importance. This is the striving toward unification and simplification of the premises of the theory as a whole (i.e., Mach's principle of economy, interpreted as a logical principle).

There exists a passion for comprehension, just as there exists a passion for music. That passion is rather common in children, but gets lost in most people later on. Without this passion, there would be neither mathematics nor natural science. Time and again the passion for understanding has led to the illusion that man is able to comprehend the objective world rationally, by pure thought, without any empirical foundations - in short, by metaphysics. I believe that every true theorist is a kind of tamed metaphysicist, no matter how pure a "positivist" he may fancy himself. The metaphysicist believes that the logically simple is also the real. The tamed metaphysicist believes that not all that is logically simple is embodied in experienced reality, but that the totality of all sensory experience can be "comprehended" on the basis of a conceptual system built on premises of great simplicity. The skeptic will say that this is a "miracle creed." Admittedly so, but it is a miracle creed which has been borne out to an amazing extent by the development of science.

The rise of atomism is a good example. How may Leucippus have conceived this bold idea? When water freezes and becomes ice—apparently something entirely different from water—why is it that the thawing of the ice forms something which seems indistinguishable from the original water? Leucippus is puzzled and looks for an "explanation." He is driven to the conclusion that in these transitions the "essence" of the thing has not changed at all. Maybe the thing consists of immutable particles and the change is only a change in their spatial arrangement. Could it not be that the same is true of all material objects which emerge again and again with nearly identical qualities?

This idea is not entirely lost during the long hibernation of occidental thought. Two thousand years after Leucippus, Bernoulli wonders why gas exerts pressure on the walls of a container. Should this be "explained" by mutual repulsion of the parts of the gas, in the sense of Newtonian mechanics? This hypothesis appears absurd, for the gas pressure depends on the temperature, all other things being equal. To assume that the Newtonian forces of interaction depend on temperature is contrary to the spirit of Newtonian mechanics. Since Bernoulli is aware of the concept of atomism, he is bound to conclude that the atoms (or molecules) collide with the walls of the container and in doing so exert pressure. After all, one has to assume that atoms are in motion; how else can one account for the varying temperature of gases?

A simple mechanical consideration shows that this pressure depends only on the kinetic energy of the particles and on their density in space. This should have led the physicists of that age to the conclusion that heat consists in random motion of the atoms. Had they taken this consideration as seriously as it deserved to be taken, the development of the theory of heat - in particular the discovery of the equivalence of heat and mechanical energy - would have been considerably facilitated.
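The "simple mechanical consideration" is not spelled out in the essay; in modern notation the standard kinetic-theory result it refers to reads

$$P = \tfrac{1}{3}\, n\, m \langle v^2 \rangle = \tfrac{2}{3}\, n \left\langle \tfrac{1}{2} m v^2 \right\rangle,$$

where n is the number of particles per unit volume: the pressure P depends only on the particles' mean kinetic energy and their density in space, as stated.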

This example is meant to illustrate two things. The theoretical idea (atomism in this case) does not arise apart from and independent of experience; nor can it be derived from experience by a purely logical procedure. It is produced by a creative act. Once a theoretical idea has been acquired, one does well to hold fast to it until it leads to an untenable conclusion.

As for my latest theoretical work, I do not feel justified in giving a detailed account of it before a wide group of readers interested in science. That should be done only with theories which have been adequately confirmed by experience. So far it is primarily the simplicity of its premises and its intimate connection with what is already known (viz., the laws of the pure gravitational field) that speak in favour of the theory to be discussed here. It may, however, be of interest to a wide group of readers to become acquainted with the train of thought which can lead to endeavours of such an extremely speculative nature. Moreover, it will be shown what kinds of difficulties are encountered and in what sense they have been overcome.

In Newtonian physics the elementary theoretical concept on which the theoretical description of material bodies is based is the material point, or particle. Thus matter is considered a priori to be discontinuous. This makes it necessary to consider the action of material points on one another as "action at a distance." Since the latter concept seems quite contrary to everyday experience, it is only natural that the contemporaries of Newton - and indeed Newton himself - found it difficult to accept. Owing to the almost miraculous success of the Newtonian system, however, the succeeding generations of physicists became used to the idea of action at a distance. Any doubt was buried for a long time to come.

Nonetheless, when, in the second half of the 19th century, the laws of electrodynamics became known, it turned out that these laws could not be satisfactorily incorporated into the Newtonian system. It is fascinating to muse: Would Faraday have discovered the law of electromagnetic induction if he had received a regular college education? Unencumbered by the traditional way of thinking, he felt that the introduction of the "field" as an independent element of reality helped him to coordinate the experimental facts. It was Maxwell who fully comprehended the significance of the field concept; he made the fundamental discovery that the laws of electrodynamics found their natural expression in the differential equations for the electric and magnetic fields. These equations implied the existence of waves, whose properties corresponded to those of light as far as they were known at that time.

This incorporation of optics into the theory of electromagnetism represents one of the greatest triumphs in the striving toward unification of the foundations of physics; Maxwell achieved this unification by purely theoretical arguments, long before it was corroborated by Hertz' experimental work. The new insight made it possible to dispense with the hypothesis of action at a distance, at least in the realm of electromagnetic phenomena; the intermediary field now appeared as the only carrier of electromagnetic interaction between bodies, and the field's behaviour was completely determined by contiguous processes, expressed by differential equations.

Now a question arose: Since the field exists even in a vacuum, should one conceive of the field as a state of a "carrier," or should it rather be endowed with an independent existence not reducible to anything else? In other words, is there an "ether" which carries the field; the ether being considered in the undulatory state, for example, when it carries light waves?

The question has a natural answer: Because one cannot dispense with the field concept, it is preferable not to introduce in addition a carrier with hypothetical properties. However, the pathfinders who first recognized the indispensability of the field concept were still too strongly imbued with the mechanistic tradition of thought to accept unhesitatingly this simple point of view. Nevertheless, in the course of the following decades this view imperceptibly took hold.

The introduction of the field as an elementary concept gave rise to an inconsistency of the theory as a whole. Maxwell's theory, although adequately describing the behaviour of electrically charged particles in their interaction with one another, does not explain the behaviour of electrical densities, i.e., it does not provide a theory of the particles themselves. They must therefore be treated as mass points on the basis of the old theory. The combination of the idea of a continuous field with that of material points discontinuous in space appears inconsistent. A consistent field theory requires continuity of all elements of the theory, not only in time but also in space, and in all points of space. Hence the material particle has no place as a fundamental concept in a field theory. Thus even apart from the fact that gravitation is not included, Maxwell's electrodynamics cannot be considered a complete theory.

Maxwell's equations for empty space remain unchanged if the spatial coordinates and the time are subjected to a particular kind of linear transformations - the Lorentz transformations ("covariance" with respect to Lorentz transformations). Covariance also holds, of course, for a transformation which is composed of two or more such transformations; this is called the "group" property of Lorentz transformations.

Maxwell's equations imply the "Lorentz group," but the Lorentz group does not imply Maxwell's equations. The Lorentz group may indeed be defined independently of Maxwell's equations as a group of linear transformations which leave a particular value of the velocity - the velocity of light - invariant. These transformations hold for the transition from one "inertial system" to another which is in uniform motion relative to the first. The most conspicuous novel property of this transformation group is that it does away with the absolute character of the concept of simultaneity of events distant from each other in space. On this account it is to be expected that all equations of physics are covariant with respect to Lorentz transformations (special theory of relativity). Thus it came about that Maxwell's equations led to a heuristic principle valid far beyond the range of the applicability or even validity of the equations themselves.
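For concreteness (a standard form, not displayed in the essay), a Lorentz transformation between two inertial systems in relative motion with velocity v along the x axis is

$$x' = \gamma (x - vt), \qquad t' = \gamma \left( t - \frac{vx}{c^2} \right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},$$

which leaves x² − c²t², and hence the velocity of light, invariant; composing two such transformations yields another of the same form, which is the group property mentioned above.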

Special relativity has this in common with Newtonian mechanics: The laws of both theories are supposed to hold only with respect to certain coordinate systems: those known as "inertial systems." An inertial system is a system in a state of motion such that "force-free" material points within it are not accelerated with respect to the coordinate system. However, this definition is empty if there is no independent means for recognizing the absence of forces. But such a means of recognition does not exist if gravitation is considered as a "field."

Let A be a system uniformly accelerated with respect to an "inertial system" I. Material points, not accelerated with respect to I, are accelerated with respect to A, the acceleration of all the points being equal in magnitude and direction. They behave as if a gravitational field exists with respect to A, for it is a characteristic property of the gravitational field that the acceleration is independent of the particular nature of the body. There is no reason to exclude the possibility of interpreting this behaviour as the effect of a "true" gravitational field (principle of equivalence). This interpretation implies that A is an "inertial system," even though it is accelerated with respect to another inertial system. (It is essential for this argument that the introduction of independent gravitational fields is considered justified even though no masses generating the field are defined. Therefore, to Newton such an argument would not have appeared convincing.) Thus the concepts of inertial system, the law of inertia and the law of motion are deprived of their concrete meaning - not only in classical mechanics but also in special relativity. Moreover, following up this train of thought, it turns out that with respect to A time cannot be measured by identical clocks; indeed, even the immediate physical significance of coordinate differences is generally lost. In view of all these difficulties, should one not try, after all, to hold on to the concept of the inertial system, relinquishing the attempt to explain the fundamental character of the gravitational phenomena which manifest themselves in the Newtonian system as the equivalence of inert and gravitational mass? Those who trust in the comprehensibility of nature must answer: No.

This is the gist of the principle of equivalence: In order to account for the equality of inert and gravitational mass within the theory, it is necessary to admit nonlinear transformations of the four coordinates. That is, the group of Lorentz transformations and hence the set of the "permissible" coordinate systems has to be extended.

What group of coordinate transformations can then be substituted for the group of Lorentz transformations? Mathematics suggests an answer which is based on the fundamental investigations of Gauss and Riemann: namely, that the appropriate substitute is the group of all continuous (analytical) transformations of the coordinates. Under these transformations the only thing that remains invariant is the fact that neighbouring points have nearly the same coordinates; the coordinate system expresses only the topological order of the points in space (including its four-dimensional character). The equations expressing the laws of nature must be covariant with respect to all continuous transformations of the coordinates. This is the principle of general relativity.

The procedure just described overcomes a deficiency in the foundations of mechanics which had already been noticed by Newton and was criticized by Leibnitz and, two centuries later, by Mach: Inertia resists acceleration, but acceleration relative to what? Within the frame of classical mechanics the only answer is: Inertia resists acceleration relative to space. This is a physical property of space - space acts on objects, but objects do not act on space. Such is probably the deeper meaning of Newton's assertion spatium est absolutum (space is absolute). Nevertheless, the idea disturbed some, in particular Leibnitz, who did not ascribe an independent existence to space but considered it merely a property of "things" (contiguity of physical objects). Had his justified doubts won out at that time, it hardly would have been a boon to physics, for the empirical and theoretical foundations necessary to follow up his idea were not available in the 17th century.

According to general relativity, the concept of space detached from any physical content does not exist. The physical reality of space is represented by a field whose components are continuous functions of four independent variables—the coordinates of space and time. It is just this particular kind of dependence that expresses the spatial character of physical reality.

Since the theory of general relativity implies the representation of physical reality by a continuous field, the concept of particles or material points cannot play a fundamental part, nor can the concept of motion. The particle can only appear as a limited region in space in which the field strength or the energy density are particularly high.

A relativistic theory has to answer two questions: 1) What is the mathematical character of the field? 2) What equations hold for this field?

Concerning the first question: From the mathematical point of view the field is essentially characterized by the way its components transform if a coordinate transformation is applied. Concerning the second question: The equations must determine the field to a sufficient extent while satisfying the postulates of general relativity. Whether or not this requirement can be satisfied depends on the choice of the field-type.

The attempt to comprehend the correlations among the empirical data on the basis of such a highly abstract program may at first appear almost hopeless. The procedure amounts, in fact, to putting the question: What most simple property can be required from what most simple object (field) while preserving the principle of general relativity? Viewed in formal logic, the dual character of the question appears calamitous, quite apart from the vagueness of the concept "simple." Moreover, as for physics there is nothing to warrant the assumption that a theory which is "logically simple" should also be "true."

Yet every theory is speculative. When the basic concepts of a theory are comparatively "close to experience" (e.g., the concepts of force, pressure, mass), its speculative character is not so easily discernible. If, however, a theory is such as to require the application of complicated logical processes in order to reach conclusions from the premises that can be confronted with observation, everybody becomes conscious of the speculative nature of the theory. In such a case an almost irresistible feeling of aversion arises in people who are inexperienced in epistemological analysis and who are unaware of the precarious nature of theoretical thinking in those fields with which they are familiar.

On the other hand, it must be conceded that a theory has an important advantage if its basic concepts and fundamental hypotheses are "close to experience," and greater confidence in such a theory is certainly justified. There is less danger of going completely astray, particularly since it takes so much less time and effort to disprove such theories by experience. Yet ever more, as the depth of our knowledge increases, we must give up this advantage in our quest for logical simplicity and uniformity in the foundations of physical theory. It has to be admitted that general relativity has gone further than previous physical theories in relinquishing "closeness to experience" of fundamental concepts in order to attain logical simplicity. This holds already for the theory of gravitation, and it is even more true of the new generalization, which is an attempt to comprise the properties of the total field. In the generalized theory the procedure of deriving from the premises of the theory conclusions that can be confronted with empirical data is so difficult that so far no such result has been obtained. In favour of this theory are, at this point, its logical simplicity and its "rigidity." Rigidity means here that the theory is either true or false, but not modifiable.

The greatest inner difficulty impeding the development of the theory of relativity is the dual nature of the problem, indicated by the two questions we have asked. This duality is the reason why the development of the theory has taken place in two steps so widely separated in time. The first of these steps, the theory of gravitation, is based on the principle of equivalence discussed above and rests on the following consideration: According to the theory of special relativity, light has a constant velocity of propagation. If a light ray in a vacuum starts from a point, designated by the coordinates x1, x2 and x3 in a three-dimensional coordinate system, at the time x4, it spreads as a spherical wave and reaches a neighbouring point (x1 + dx1, x2 + dx2, x3 + dx3) at the time x4 + dx4. Introducing the velocity of light, c, we write the expression:

dx1² + dx2² + dx3² − c²dx4² = 0

This expression represents an objective relation between neighbouring space-time points in four dimensions, and it holds for all inertial systems, provided the coordinate transformations are restricted to those of special relativity. The relation loses this form, however, if arbitrary continuous transformations of the coordinates are admitted in accordance with the principle of general relativity. The relation then assumes the more general form:

Σik gik dxi dxk = 0

The gik are certain functions of the coordinates which transform in a definite way if a continuous coordinate transformation is applied. According to the principle of equivalence, these gik functions describe a particular kind of gravitational field: a field which can be obtained by transformation of "field-free" space. The gik satisfy a particular law of transformation. Mathematically speaking, they are the components of a "tensor" with a property of symmetry which is preserved in all transformations; the symmetrical property is expressed as follows:

gik = gki

The idea suggests itself: May we not ascribe objective meaning to such a symmetrical tensor, even though the field cannot be obtained from the empty space of special relativity by a mere coordinate transformation? Although we cannot expect that such a symmetrical tensor will describe the most general field, it may describe the particular case of the "pure gravitational field." Thus it is evident what kind of field, at least for a special case, general relativity has to postulate: a symmetrical tensor field.

Hence only the second question is left: What kind of general covariant field law can be postulated for a symmetrical tensor field?

This question has not been difficult to answer in our time, since the necessary mathematical conceptions were already at hand in the form of the metric theory of surfaces, created a century ago by Gauss and extended by Riemann to manifolds of an arbitrary number of dimensions. The result of this purely formal investigation has been amazing in many respects. The differential equations which can be postulated as field law for gik cannot be of lower than second order, i.e., they must at least contain the second derivatives of the gik with respect to the coordinates. Assuming that no higher than second derivatives appear in the field law, it is mathematically determined by the principle of general relativity. The system of equations can be written in the form Rik = 0. The Rik transform in the same manner as the gik, i.e., they too form a symmetrical tensor.
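Einstein does not display the construction here; in modern notation (one common sign convention, supplied only for orientation) the Rik are built from the gik through the Christoffel symbols:

$$\Gamma^{l}_{ik} = \tfrac{1}{2} g^{lm} \left( \partial_i g_{mk} + \partial_k g_{mi} - \partial_m g_{ik} \right),$$

$$R_{ik} = \partial_l \Gamma^{l}_{ik} - \partial_i \Gamma^{l}_{lk} + \Gamma^{l}_{lm} \Gamma^{m}_{ik} - \Gamma^{l}_{im} \Gamma^{m}_{lk}.$$

Since the Γ contain first derivatives of the gik, the law Rik = 0 is indeed of second order in the gik and of no higher order.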

These differential equations completely replace the Newtonian theory of the motion of celestial bodies provided the masses are represented as singularities of the field. In other words, they contain the law of force as well as the law of motion while eliminating "inertial systems."

The fact that the masses appear as singularities indicates that these masses themselves cannot be explained by symmetrical gik fields, or "gravitational fields." Not even the fact that only positive gravitating masses exist can be deduced from this theory. Evidently a complete relativistic field theory must be based on a field of more complex nature, that is, a generalization of the symmetrical tensor field.

Before considering such a generalization, two remarks pertaining to gravitational theory are essential for the explanation to follow.

The first observation is that the principle of general relativity imposes exceedingly strong restrictions on the theoretical possibilities. Without this restrictive principle it would be practically impossible for anybody to hit on the gravitational equations, not even by using the principle of special relativity, even though one knows that the field has to be described by a symmetrical tensor. No amount of collection of facts could lead to these equations unless the principle of general relativity were used. This is the reason why all attempts to obtain a deeper knowledge of the foundations of physics seem doomed to me unless the basic concepts are in accordance with general relativity from the beginning. This situation makes it difficult to use our empirical knowledge, however comprehensive, in looking for the fundamental concepts and relations of physics, and it forces us to apply free speculation to a much greater extent than is presently assumed by most physicists. I do not see any reason to assume that the heuristic significance of the principle of general relativity is restricted to gravitation and that the rest of physics can be dealt with separately on the basis of special relativity, with the hope that later on the whole may be fitted consistently into a general relativistic scheme. I do not think that such an attitude, although historically understandable, can be objectively justified. The comparative smallness of what we know today as gravitational effects is not a conclusive reason for ignoring the principle of general relativity in theoretical investigations of a fundamental character. In other words, I do not believe that it is justifiable to ask: What would physics look like without gravitation?

The second point we must note is that the equations of gravitation are 10 differential equations for the 10 components of the symmetrical tensor gik. In the case of a non-general relativistic theory, a system is ordinarily not overdetermined if the number of equations is equal to the number of unknown functions. The manifold of solutions is such that within the general solution a certain number of functions of three variables can be chosen arbitrarily. For a general relativistic theory this cannot be expected as a matter of course. Free choice with respect to the coordinate system implies that out of the 10 functions of a solution, or components of the field, four can be made to assume prescribed values by a suitable choice of the coordinate system. In other words, the principle of general relativity implies that the number of functions to be determined by differential equations is not 10 but 10-4=6. For these six functions only six independent differential equations may be postulated. Only six out of the 10 differential equations of the gravitational field ought to be independent of each other, while the remaining four must be connected to those six by means of four relations (identities). Indeed there exist among the left-hand sides, Rik, of the 10 gravitational equations four identities - “Bianchi's identities" - which assure their "compatibility."
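In modern notation the four identities are the contracted Bianchi identities, which the essay names but does not write out:

$$\nabla^{i} \left( R_{ik} - \tfrac{1}{2} g_{ik} R \right) \equiv 0, \qquad k = 1, \dots, 4,$$

one identity for each value of k, holding for every symmetrical field gik; this is why only 10 − 4 = 6 of the 10 equations are independent.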

In a case like this - when the number of field variables is equal to the number of differential equations - compatibility is always assured if the equations can be obtained from a variational principle. This is indeed the case for the gravitational equations.

However, the 10 differential equations cannot be entirely replaced by six. The system of equations is indeed "overdetermined," but due to the existence of the identities it is overdetermined in such a way that its compatibility is not lost, i.e., the manifold of solutions is not critically restricted. The fact that the equations of gravitation imply the law of motion for the masses is intimately connected with this (permissible) overdetermination.

After this preparation it is now easy to understand the nature of the present investigation without entering into the details of its mathematics. The problem is to set up a relativistic theory for the total field. The most important clue to its solution is that there exists already the solution for the special case of the pure gravitational field. The theory we are looking for must therefore be a generalization of the theory of the gravitational field. The first question is: What is the natural generalization of the symmetrical tensor field?

This question cannot be answered by itself, but only in connection with the other question: What generalization of the field is going to provide the most natural theoretical system? The answer on which the theory under discussion is based is that the symmetrical tensor field must be replaced by a non-symmetrical one. This means that the condition gik=gki for the field components must be dropped. In that case the field has 16 instead of 10 independent components.
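The counting works because any tensor splits into a symmetrical and an antisymmetrical part (a standard decomposition, added here for clarity):

$$g_{ik} = \underbrace{\tfrac{1}{2}\left(g_{ik} + g_{ki}\right)}_{\text{symmetrical: 10 components}} + \underbrace{\tfrac{1}{2}\left(g_{ik} - g_{ki}\right)}_{\text{antisymmetrical: 6 components}},$$

for a total of 4 × 4 = 16 independent components.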

There remains the task of setting up the relativistic differential equations for a non-symmetrical tensor field. In the attempt to solve this problem one meets with a difficulty which does not arise in the case of the symmetrical field. The principle of general relativity does not suffice to determine completely the field equations, mainly because the transformation law of the symmetrical part of the field alone does not involve the components of the antisymmetrical part, or vice versa. Probably this is the reason why this kind of generalization of the field has hardly ever been tried before. The combination of the two parts of the field can only be shown to be a natural procedure if in the formalism of the theory only the total field plays a role, and not the symmetrical and antisymmetrical parts separately.

It turned out that this requirement can indeed be satisfied in a natural way. Nonetheless, even this requirement, together with the principle of general relativity, is still not sufficient to determine uniquely the field equations. Let us remember that the system of equations must satisfy a further condition: the equations must be compatible. It has been mentioned above that this condition is satisfied if the equations can be derived from a variational principle.

This has indeed been achieved, although not in so natural a way as in the case of the symmetrical field. It has been disturbing to find that it can be achieved in two different ways. These variational principles furnished two systems of equations - let us denote them by E1 and E2 - which were different from each other (although only slightly so), each of them exhibiting specific imperfections. Consequently even the condition of compatibility was insufficient to determine the system of equations uniquely.

It was, in fact, the formal defects of the systems E1 and E2 that indicated a possible way out. There exists a third system of equations, E3, which is free of the formal defects of the systems E1 and E2 and represents a combination of them in the sense that every solution of E3 is a solution of E1 as well as of E2. This suggests that E3 may be the system for which we have been looking. Why not postulate E3, then, as the system of equations? Such a procedure is not justified without further analysis, since the compatibility of E1 and that of E2 do not imply compatibility of the stronger system E3, where the number of equations exceeds the number of field components by four.

An independent consideration shows that irrespective of the question of compatibility the stronger system, E3, is the only really natural generalization of the equations of gravitation.

It seems, nonetheless, that E3 is not a compatible system in the same sense as are the systems E1 and E2, whose compatibility is assured by a sufficient number of identities, which means that every field that satisfies the equations for a definite value of the time has a continuous extension representing a solution in four-dimensional space. The system E3, however, is not extensible in the same way. Using the language of classical mechanics we might say: In the case of the system E3 the "initial condition" cannot be freely chosen. What really matters is the answer to the question: Is the manifold of solutions for the system E3 as extensive as must be required for a physical theory? This purely mathematical problem is as yet unsolved.

The skeptic will say: "It may be true that this system of equations is reasonable from a logical standpoint. However, this does not prove that it corresponds to nature." You are right, dear skeptic. Experience alone can decide on truth. Yet we have achieved something if we have succeeded in formulating a meaningful and precise question. Affirmation or refutation will not be easy, in spite of an abundance of known empirical facts. The derivation, from the equations, of conclusions which can be confronted with experience will require painstaking efforts and probably new mathematical methods.

Schrödinger's mathematical description of electron waves found immediate acceptance. The mathematical description matched what scientists had learned about electrons by observing them and their effects. In 1925, a year before Schrödinger published his results, German-British physicist Max Born and German physicist Werner Heisenberg developed a mathematical system called matrix mechanics. Matrix mechanics also succeeded in describing the structure of the atom, but it was entirely abstract: it gave no picture of the atom that physicists could verify observationally. Schrödinger's vindication of de Broglie's idea of electron waves therefore led most physicists to favour wave mechanics, though it was later shown that wave mechanics and matrix mechanics are mathematically equivalent.

To solve these problems, mathematicians use calculus, which deals with continuously changing quantities, such as the position of a point on a curve. Its simultaneous development in the 17th century by English mathematician and physicist Isaac Newton and German philosopher and mathematician Gottfried Wilhelm Leibniz enabled the solution of many problems that had been insoluble by the methods of arithmetic, algebra, and geometry. Among the advances that calculus made possible were the formulation of Newton's laws of motion and the theory of electromagnetism.

The physical sciences investigate the nature and behaviour of matter and energy on a vast range of size and scale. In physics itself, scientists study the relationships between matter, energy, force, and time in an attempt to explain how these factors shape the physical behaviour of the universe. Physics can be divided into many branches. Scientists study the motion of objects, a huge branch of physics known as mechanics that involves two overlapping sets of scientific laws. The laws of classical mechanics govern the behaviour of objects in the macroscopic world, which includes everything from billiard balls to stars, while the laws of quantum mechanics govern the behaviour of the particles that make up individual atoms.

The new math is new only in that the material is introduced at a much lower level than heretofore. Thus geometry, which was and is commonly taught in the second year of high school, is now frequently introduced, in an elementary fashion, in the fourth grade - in fact, naming and recognition of the common geometric figures, the circle and the square, occur in kindergarten. At an early stage, numbers are identified with points on a line, and the identification is used to introduce, much earlier than in the traditional curriculum, negative numbers and the arithmetic processes involving them.

The elements of set theory constitute the most basic and perhaps the most important topic of the new math. Even a kindergarten child can understand, without formal definition, the meaning of a set of red blocks, the set of fingers on the left hand, and the set of the child's ears and eyes. The technical word set is merely a synonym for many common words that designate an aggregate of elements. The child can understand that the set of fingers on the left hand and the set on the right hand match - that is, the elements, fingers, can be put into a one-to-one correspondence. The set of fingers on the left hand and the set of the child's ears and eyes do not match. Some concepts that are developed by this method are counting, equality of number, more than, and less than. The ideas of union and intersection of sets and the complement of a set can be similarly developed without formal definition in the early grades. The principles and formalism of set theory are extended as the child advances; upon graduation from high school, the student's knowledge is quite comprehensive.
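A minimal sketch of these ideas in code (the sets chosen echo the article's own examples; the notation is Python's, not the classroom's):

```python
left_hand = {"thumb", "index", "middle", "ring", "little"}
right_hand = {"thumb", "index", "middle", "ring", "little"}
ears_and_eyes = {"left ear", "right ear", "left eye", "right eye"}

# Two sets "match" when their elements can be put into one-to-one
# correspondence, i.e. when they contain the same number of elements.
print(len(left_hand) == len(right_hand))     # True: the hands match
print(len(left_hand) == len(ears_and_eyes))  # False: 5 versus 4

# Union, intersection and complement, developed informally in early grades.
vowels = {"a", "e", "i", "o", "u"}
first_letters = {"a", "b", "c", "d", "e"}
print(vowels | first_letters)   # union
print(vowels & first_letters)   # intersection: {'a', 'e'}
print(first_letters - vowels)   # complement of the vowels within first_letters
```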

The amount of new math and the particular topics taught vary from school to school. In addition to set theory and intuitive geometry, the material is usually chosen from the following topics: a development of the number systems, including methods of numeration, binary and other bases of notation, and modular arithmetic; measurement, with attention to accuracy and precision, and error study; studies of algebraic systems, including linear algebra, modern algebra, vectors, and matrices, with an axiomatic as well as traditional approach; logic, including truth tables, the nature of proof, Venn or Euler diagrams, relations, functions, and general axiomatics; probability and statistics; linear programming; computer programming and language; and analytic geometry and calculus. Some schools present differential equations, topology, and real and complex analysis.

Cosmology is the study of the general nature of the universe in space and in time - what it is now, what it was in the past and what it is likely to be in the future. Since the only forces at work between the galaxies that make up the material universe are the forces of gravity, the cosmological problem is closely connected with the theory of gravitation, in particular with its modern version as embodied in Albert Einstein's general theory of relativity. In the frame of this theory the properties of space, time and gravitation are merged into one harmonious and elegant picture.

The basic cosmological notion of general relativity grew out of the work of great mathematicians of the 19th century. In the middle of the last century two inquisitive mathematical minds - a Russian named Nikolai Lobachevski and a Hungarian named János Bolyai - discovered that the classical geometry of Euclid was not the only possible geometry: in fact, they succeeded in constructing a geometry that was fully as logical and self-consistent as the Euclidean. They began by overthrowing Euclid's axiom about parallel lines: namely, that only one parallel to a given straight line can be drawn through a point not on that line. Lobachevski and Bolyai both conceived a system of geometry in which a great number of lines parallel to a given line could be drawn through a point outside the line.

To illustrate the differences between Euclidean geometry and their non-Euclidean system, it is simplest to consider just two dimensions - that is, the geometry of surfaces. In our schoolbooks this is known as "plane geometry," because the Euclidean surface is a flat surface. Suppose, now, we examine the properties of a two-dimensional geometry constructed not on a plane surface but on a curved surface. For the system of Lobachevski and Bolyai we must take the curvature of the surface to be ‘negative’, which means that the curvature is not like that of the surface of a sphere but like that of a saddle. Now if we are to draw parallel lines or any figure (e.g., a triangle) on this surface, we must decide first of all how we will define a ‘straight line’, equivalent to the straight line of plane geometry. The most reasonable definition of a straight line in Euclidean geometry is that it is the path of the shortest distance between two points. On a curved surface the line, so defined, becomes a curved line known as a ‘geodesic’.

Considering a surface curved like a saddle, we find that, given a ‘straight’ line or geodesic, we can draw through a point outside that line a great many geodesics that will never intersect the given line, no matter how far they are extended. They are therefore parallel to it, by the definition of parallel. The possible parallels to the line fall within certain limits.

As a consequence of the overthrow of Euclid's axiom on parallel lines, many of his theorems are demolished in the new geometry. For example, the Euclidean theorem that the sum of the three angles of a triangle is 180 degrees no longer holds on a curved surface. On the saddle-shaped surface the angles of a triangle formed by three geodesics always add up to less than 180 degrees, the actual sum depending on the size of the triangle. Further, a circle on the saddle surface does not have the same properties as a circle in plane geometry. On a flat surface the circumference of a circle increases in proportion to the increase in diameter, and the area of a circle increases in proportion to the square of the increase in diameter. On a saddle surface, however, both the circumference and the area of a circle increase, with increasing diameter, at faster rates than on a flat surface.

After Lobachevski and Bolyai, the German mathematician Bernhard Riemann constructed another non-Euclidean geometry whose two-dimensional model is a surface of positive, rather than negative, curvature - that is, the surface of a sphere. In this case a geodesic line is simply a great circle around the sphere or a segment of such a circle, and since any two great circles must intersect at two points (the poles), there are no parallel lines at all in this geometry. Again the sum of the three angles of a triangle is not 180 degrees: in this case it is always more than 180. The circumference of a circle now increases at a rate slower than in proportion to its increase in diameter, and its area increases more slowly than the square of the diameter.
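These statements about circles can be made quantitative. On a surface of constant curvature with radius of curvature R, a circle whose radius r is measured along the surface has circumference (standard formulas, added here for clarity):

$$C_{\text{flat}} = 2\pi r, \qquad C_{\text{sphere}} = 2\pi R \sin(r/R) < 2\pi r, \qquad C_{\text{saddle}} = 2\pi R \sinh(r/R) > 2\pi r,$$

so measuring how circumference grows with radius reveals the sign of the curvature without ever leaving the surface.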

Now all this is not merely an exercise in abstract reasoning but bears directly on the geometry of the universe in which we live. Is the space of our universe ‘flat’, as Euclid assumed, or is it curved negatively (per Lobachevski and Bolyai) or curved positively (Riemann)? If we were two-dimensional creatures living in a two-dimensional universe, we could tell whether we were living on a flat or a curved surface by studying the properties of triangles and circles drawn on that surface. Similarly, as three-dimensional beings living in three-dimensional space, we should be able, by studying the geometrical properties of that space, to decide what the curvature of our space is. Riemann in fact developed mathematical formulas describing the properties of various kinds of curved space in three and more dimensions. In the early years of this century Einstein conceived the idea of the universe as a curved system in four dimensions, embodying time as the fourth dimension, and he proceeded to apply Riemann's formulas to test his idea.

Einstein showed that time can be considered a fourth coordinate supplementing the three coordinates of space. He connected space and time, thus establishing a ‘space-time continuum’, by means of the speed of light as a link between time and space dimensions. However, recognizing that space and time are physically different entities, he employed the imaginary number √−1, or i, to express the unit of time mathematically and make the time coordinate formally equivalent to the three coordinates of space.
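Written out (a standard presentation of the device, not a quotation from the article), taking x4 = ict makes the interval formally Euclidean in the four coordinates:

$$ds^2 = dx_1^2 + dx_2^2 + dx_3^2 + dx_4^2 = dx_1^2 + dx_2^2 + dx_3^2 - c^2\,dt^2.$$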

In his special theory of relativity Einstein made the geometry of the time-space continuum strictly Euclidean, that is, flat. The great idea that he introduced later in his general theory was that gravitation, whose effects had been neglected in the special theory, must make it curved. He saw that the gravitational effect of the masses distributed in space and moving in time was equivalent to curvature of the four-dimensional space-time continuum. In place of the classical Newtonian statement that ‘the sun produces a field of forces that impels the earth to deviate from straight-line motion and to move in a circle around the sun’, Einstein substituted a statement to the effect that ‘the presence of the sun causes a curvature of the space-time continuum in its neighbourhood’.

The motion of an object in the space-time continuum can be represented by a curve called the object's ‘world line’. Einstein declared, in effect: "The world line of the earth is a geodesic in the curved four-dimensional space around the sun." In other words, the

. . .earth’s ‘world line’ . . . corresponds to the shortest four-dimensional distance between the position of the earth in January . . . and its position in October . . .

Einstein's idea of the gravitational curvature of space-time was, of course, triumphantly affirmed by its explanation of the perturbations in the motion of Mercury at its closest approach to the sun and by the observed deflection of light rays by the sun's gravitational field. Einstein next attempted to apply the idea to the universe as a whole. Does it have a general curvature, similar to the local curvature in the sun's gravitational field? He now had to consider not a single centre of gravitational force but countless focal points in a universe full of matter concentrated in galaxies whose distribution fluctuates considerably from region to region in space. However, in the large-scale view the galaxies are spread uniformly throughout space as far out as our biggest telescopes can see, and we can justifiably ‘smooth out’ their matter to a general average (which comes to about one hydrogen atom per cubic metre). On this assumption the universe as a whole has a smooth general curvature.

If the space of the universe is curved, however, what is the sign of this curvature? Is it positive, as in our two-dimensional analogy of the surface of a sphere, or is it negative, as in the case of a saddle surface? Since we cannot consider space alone, how is this space curvature related to time?

Analysing the pertinent mathematical equations, Einstein came to the conclusion that the curvature of space must be independent of time, i.e., that the universe as a whole must be unchanging (though it changes internally). However, he found to his surprise that there was no solution of the equations that would permit a static cosmos. To repair the situation, Einstein was forced to introduce an additional hypothesis that amounted to the assumption that a new kind of force was acting among the galaxies. This hypothetical force had to be independent of mass (being the same for an apple, the moon and the sun) and to gain in strength with increasing distance between the interacting objects (as no other forces ever do in physics).

Einstein's new force, called ‘cosmic repulsion’, allowed two mathematical models of a static universe. One solution, which was worked out by Einstein himself and became known as Einstein's spherical universe, gave the space of the cosmos a positive curvature. Like a sphere, this universe was closed and thus had a finite volume. The space coordinates in Einstein's spherical universe were curved in the same way as the latitude or longitude coordinates on the surface of the earth. However, the time axis of the space-time continuum ran quite straight, as in the good old classical physics. This means that no cosmic event would ever recur. The two-dimensional analogy of Einstein's space-time continuum is the surface of a cylinder, with the time axis running parallel to the axis of the cylinder and the space axis perpendicular to it.

The other static solution based on the mysterious repulsion forces was discovered by the Dutch mathematician Willem de Sitter. In his model of the universe both space and time were curved. Its geometry was similar to that of a globe, with longitude serving as the space coordinate and latitude as time. Unhappily, astronomical observations contradicted both Einstein's and de Sitter's static models of the universe, and they were soon abandoned.

In the year 1922 a major turning point came in the cosmological problem. A Russian mathematician, Alexander A. Friedman (from whom the author of this article learned his relativity), discovered an error in Einstein's proof for a static universe. In carrying out his proof Einstein had divided both sides of an equation by a quantity that, Friedman found, could become zero under certain circumstances. Since division by zero is not permitted in algebraic computations, the possibility of a nonstatic universe could not be excluded under the circumstances in question. Friedman showed that two nonstatic models were possible. One pictured the universe as expanding with time; the other, contracting.

Einstein quickly recognized the importance of this discovery. In the last edition of his book The Meaning of Relativity he wrote: "The mathematician Friedman found a way out of this dilemma. He showed that it is possible, according to the field equations, to have a finite density in the whole (three-dimensional) space, without enlarging these field equations." Einstein remarked to me many years ago that the cosmic repulsion idea was the biggest blunder he had made in his entire life.

Almost at the very moment that Friedman was discovering the possibility of an expanding universe by mathematical reasoning, Edwin P. Hubble at the Mount Wilson Observatory on the other side of the world found the first evidence of actual physical expansion through his telescope. He made a compilation of the distances of a number of far galaxies, whose light was shifted toward the red end of the spectrum, and it was soon found that the extent of the shift was in direct proportion to a galaxy's distance from us, as estimated by its faintness. Hubble and others interpreted the red-shift as the Doppler effect - the well-known phenomenon of lengthening of wavelengths from any radiating source that is moving rapidly away (a train whistle, a source of light or whatever). To date there has been no other reasonable explanation of the galaxies' red-shift. If the explanation is correct, it means that the galaxies are all moving away from one another with increasing velocity as they move farther apart.

Thus Friedman and Hubble laid the foundation for the theory of the expanding universe. The theory was soon developed further by a Belgian theoretical astronomer, Georges Lemaître. He proposed that our universe started from a highly compressed and extremely hot state that he called the ‘primeval atom’. (Modern physicists would prefer the term ‘primeval nucleus’.) As this matter expanded, it gradually thinned out, cooled down and reaggregated in stars and galaxies, giving rise to the highly complex structure of the universe as we know it today.

Until a few years ago the theory of the expanding universe lay under the cloud of a very serious contradiction. The measurements of the speed of flight of the galaxies and their distances from us indicated that the expansion had started about 1.8 billion years ago. On the other hand, measurements of the age of ancient rocks in the earth by the clock of radioactivity (i.e., the decay of uranium to lead) showed that some of the rocks were at least three billion years old; more recent estimates based on other radioactive elements raise the age of the earth's crust to almost five billion years. Clearly a universe 1.8 billion years old could not contain five-billion-year-old rocks! Happily the contradiction has now been disposed of by Walter Baade's recent discovery that the distance yardstick (based on the periods of variable stars) was faulty and that the distances between galaxies are more than twice as great as they were thought to be. This change in distances raises the age of the universe to five billion years or more.

Friedman's solution of Einstein's cosmological equation permits two kinds of universe. We can call one the "pulsating" universe. This model says that when the universe has reached a certain maximum permissible expansion, it will begin to contract; that it will shrink until its matter has been compressed to a certain maximum density, possibly that of atomic nuclear material, which is a hundred million times denser than water; that it will then begin to expand again - and so on through the cycle ad infinitum. The other model is a "hyperbolic" one: it suggests that from an infinitely thin state an eternity ago the universe contracted until it reached the maximum density, from which it rebounded to an unlimited expansion that will go on indefinitely in the future.

The question whether our universe is ‘pulsating’ or ‘hyperbolic’ should be decidable from the present rate of its expansion. The situation is analogous to the case of a rocket shot from the surface of the earth. If the velocity of the rocket is less than seven miles per second - the ‘escape velocity’ - the rocket will climb only to a certain height and then fall back to the earth. (If it were completely elastic, it would bounce up again, . . . and so on.) On the other hand, a rocket shot with a velocity of more than seven miles per second will escape from the earth's gravitational field and disappear in space. The case of the receding system of galaxies is very similar to that of an escape rocket, except that instead of just two interacting bodies (the rocket and the earth) we have an unlimited number of them escaping from one another. We find that the galaxies are fleeing from one another at seven times the velocity necessary for mutual escape.
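The seven-miles-per-second figure quoted here is the ordinary escape-velocity formula, v = sqrt(2GM/r), applied to the earth. A small sketch, assuming standard modern values for the constants (none are given in the text):

```python
import math

# Escape velocity v = sqrt(2GM/r), evaluated for the earth.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2 (assumed value)
M_EARTH = 5.972e24   # mass of the earth, kg (assumed value)
R_EARTH = 6.371e6    # radius of the earth, m (assumed value)

v_escape = math.sqrt(2 * G * M_EARTH / R_EARTH)  # m/s
print(f"escape velocity: {v_escape / 1000:.1f} km/s "
      f"(~{v_escape / 1609.34:.1f} miles per second)")
# Prints roughly 11.2 km/s, i.e. about 7 miles per second, as the text states.
```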

Thus we may conclude that our universe corresponds to the ‘hyperbolic’ model, so that its present expansion will never stop. We must make one reservation. The estimate of the necessary escape velocity is based on the assumption that practically all the mass of the universe is concentrated in galaxies. If intergalactic space contained matter whose total mass was more than seven times that in the galaxies, we would have to reverse our conclusion and decide that the universe is pulsating. There has been no indication so far, however, that any matter exists in intergalactic space, and it could have escaped detection only if it were in the form of pure hydrogen gas, without other gases or dust.

Is the universe finite or infinite? This resolves itself into the question: Is the curvature of space positive or negative - closed like that of a sphere, or open like that of a saddle? We can look for the answer by studying the geometrical properties of its three-dimensional space, just as we examined the properties of figures on two-dimensional surfaces. The most convenient property to investigate astronomically is the relation between the volume of a sphere and its radius.

We saw that, in the two-dimensional case, the area of a circle increases with increasing radius at a faster rate on a negatively curved surface than on a Euclidean or flat surface; and that on a positively curved surface the relative rate of increase is slower. Similarly the increase of volume is faster in negatively curved space, slower in positively curved space. In Euclidean space the volume of a sphere would increase in proportion to the cube, or third power, of the increase in radius. In negatively curved space the volume would increase faster than this; in positively curved space, slower. Thus if we look into space and find that the volume of successively larger spheres, as measured by a count of the galaxies within them, increases faster than the cube of the distance to the limit of the sphere (the radius), we can conclude that the space of our universe has negative curvature, and therefore is open and infinite. Similarly, if the number of galaxies increases at a rate slower than the cube of the distance, we live in a universe of positive curvature - closed and finite.
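The test just described amounts to comparing galaxy counts against the Euclidean expectation that the number inside radius r grows as r cubed. A toy sketch of that comparison follows; the radii and counts are invented for illustration:

```python
# Toy curvature test: in flat (Euclidean) space the number of galaxies inside
# radius r grows as r^3. Counts growing faster than r^3 suggest negative
# curvature (open universe); slower, positive curvature (closed universe).
def classify(radii, counts):
    """Compare successive count ratios against the Euclidean r^3 expectation."""
    verdicts = []
    for i in range(1, len(radii)):
        euclidean = (radii[i] / radii[i - 1]) ** 3
        observed = counts[i] / counts[i - 1]
        if observed > euclidean:
            verdicts.append("negative curvature (open)")
        elif observed < euclidean:
            verdicts.append("positive curvature (closed)")
        else:
            verdicts.append("flat")
    return verdicts

# Hypothetical counts that happen to grow exactly as r^3 -> "flat" verdicts.
print(classify([1, 2, 4], [100, 800, 6400]))
```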

Following this idea, Hubble undertook to study the increase in number of galaxies with distance. He estimated the distances of the remote galaxies by their relative faintness: galaxies vary considerably in intrinsic brightness, but over a very large number of galaxies these variations are expected to average out. Hubble's calculations produced the conclusion that the universe is a closed system - a small universe only a few billion light-years in radius.

We know now that the scale he was using was wrong: with the new yardstick the universe would be more than twice as large as he calculated. Nevertheless, there is a more fundamental doubt about his result. The whole method is based on the assumption that the intrinsic brightness of a galaxy remains constant. What if it changes with time? We are seeing the light of the distant galaxies as it was emitted at widely different times in the past - 500 million, a billion, two billion years ago. If the stars in the galaxies are burning out, the galaxies must dim as they grow older. A galaxy two billion light-years away cannot be put on the same distance scale with a galaxy 500 million light-years away unless we take into account the fact that we are seeing the nearer galaxy at an older, and less bright, age. The remote galaxy is farther away than a mere comparison of the luminosity of the two would suggest.

When a correction is made for the assumed decline in brightness with age, the more distant galaxies are spread out to farther distances than Hubble assumed. In fact, the calculations of volume are changed so drastically that we may have to reverse the conclusion about the curvature of space. We are not sure, because we do not yet know enough about the evolution of galaxies. If it turns out that galaxies wane in intrinsic brightness by only a few per cent in a billion years, we will have to conclude that space is curved negatively and the universe is infinite.

There is, in fact, another line of reasoning that supports the side of infinity. Our universe seems to be hyperbolic and ever-expanding. Mathematical solutions of the fundamental cosmological equations indicate that such a universe is open and infinite.

We have reviewed the questions that dominated the thinking of cosmologists during the first half of this century: the conception of a four-dimensional space-time continuum, of curved space, of an expanding universe and of a cosmos that is either finite or infinite. Now we must consider the major present issue in cosmology: Is the universe in truth evolving, or is it in a steady state of equilibrium that has always existed and will go on through eternity? Most cosmologists take the evolutionary view. All the same, in 1951 a group at the University of Cambridge, whose chief spokesman has been Fred Hoyle, advanced the steady-state idea. Essentially their theory is that the universe is infinite in space and time, that it has neither a beginning nor an end, that the density of its matter remains constant, that new matter is steadily being created in space at a rate that exactly compensates for the thinning of matter by expansion, that as a consequence new galaxies are continually being born, and that the galaxies of the universe therefore range in age from mere youngsters to veterans of 5, 10, 20 and more billions of years. In my opinion this theory must be considered very questionable because of the simple fact (apart from other reasons) that the galaxies in our neighbourhood all seem to be of the same age as our own Milky Way. However, the issue is many-sided and fundamental, and can be settled only by extended study of the universe as far as we can observe it. Let us now summarize the evolutionary theory.

We assume that the universe started from a very dense state of matter. In the early stages of its expansion, radiant energy was dominant over the mass of matter. We can measure energy and matter on a common scale by means of the well-known equation E = mc², which says that the energy equivalent of matter is the mass of the matter multiplied by the square of the velocity of light. Conversely, energy can be translated into mass by dividing the energy quantity by c². Thus we can speak of the ‘mass density’ of energy. Now at the beginning the mass density of the radiant energy was incomparably greater than the density of the matter in the universe. Yet in an expanding system the density of radiant energy decreases faster than does the density of matter. The former thins out as the fourth power of the distance of expansion: as the radius of the system doubles, the density of radiant energy drops to one sixteenth. The density of matter declines as the third power; a doubling of the radius means an eightfold increase in volume, or an eightfold decrease in density.
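These two scaling laws are easy to check numerically: radiation density falls as the fourth power of the expansion factor, matter density as the third. A minimal sketch:

```python
# Density scaling with expansion: if the radius of the system grows by a
# factor f, matter density falls as f^-3 and radiation density as f^-4.
def matter_density(rho0, f):
    return rho0 / f**3

def radiation_density(rho0, f):
    return rho0 / f**4

# Doubling the radius: matter drops to 1/8, radiation to 1/16, as the text says.
print(matter_density(1.0, 2))     # 0.125
print(radiation_density(1.0, 2))  # 0.0625
```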

Assuming that the universe at the beginning was under absolute rule by radiant energy, we can calculate that the temperature of the universe was 250 million degrees when it was one hour old, dropped to 6,000 degrees (the present temperature of our sun's surface) when it was 200,000 years old and had fallen to about 100 degrees below the freezing point of water when the universe reached its 250-millionth birthday.
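The three figures quoted are mutually consistent with a single radiation-era cooling law of the form T ≈ C / sqrt(t). The constant C ≈ 1.5 × 10¹⁰ used below is an assumption, chosen because it reproduces the article's numbers; it is not a value stated in the text:

```python
import math

# Radiation-era cooling law T ≈ C / sqrt(t), with t in seconds and T in kelvin.
# C is an assumed constant (~1.5e10) that reproduces the article's figures.
C = 1.5e10
YEAR = 3.156e7  # seconds per year

def temperature(t_seconds):
    return C / math.sqrt(t_seconds)

print(temperature(3600))            # ~2.5e8 K: 250 million degrees at age 1 hour
print(temperature(200_000 * YEAR))  # ~6,000 K at age 200,000 years
print(temperature(250e6 * YEAR))    # ~170 K: about 100 degrees below freezing
```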

This particular birthday was a crucial one in the life of the universe. It was the point at which the density of ordinary matter became greater than the mass density of radiant energy, because of the more rapid fall of the latter. The switch from the reign of radiation to the reign of matter profoundly changed matter's behaviour. During the eons of its subjugation to the will of radiant energy (i.e., light), it must have been spread uniformly through space in the form of thin gas. But as soon as matter became gravitationally more important than the radiant energy, it began to acquire a more interesting character. James Jeans, in his classic studies of the physics of such a situation, proved half a century ago that a gravitating gas filling a very large volume is bound to break up into individual ‘gas balls’, the size of which is determined by the density and the temperature of the gas. Thus in the year 250,000,000 A.B.E. (after the beginning of expansion), when matter was freed from the dictatorship of radiant energy, the gas broke up into giant gas clouds, slowly drifting apart as the universe continued to expand. Applying Jeans's mathematical formula for the process to the gas filling the universe at that time, we find that these primordial balls of gas would have had just about the mass that the galaxies of stars possess today. They were then only ‘protogalaxies’ - cold, dark and chaotic. But their gas soon condensed into stars and formed the galaxies as we see them now.

A central question in this picture of the evolutionary universe is the problem of accounting for the formation of the varied kinds of matter composing it, i.e., the chemical elements . . . My belief is that at the start matter was composed simply of protons, neutrons and electrons. After five minutes the universe must have cooled enough to permit the aggregation of protons and neutrons into larger units, from deuterons (one neutron and one proton) up to the heaviest elements. This process must have ended after about 30 minutes, for by that time the temperature of the expanding universe must have dropped below the threshold of thermonuclear reactions among light elements, and the neutrons must have been used up in element-building or been converted to protons.

To many, the statement that the present chemical constitution of our universe was decided in half an hour five billion years ago will sound nonsensical. However, consider a spot of ground on the atomic proving ground in Nevada where an atomic bomb was exploded three years ago. Within one microsecond the nuclear reactions generated by the bomb produced a variety of fission products. Today, 100 million-million microseconds later, the site is still "hot" with the surviving fission products. The ratio of one microsecond to three years is the same as the ratio of half an hour to five billion years! If we can accept a time ratio of this order in the one case, why not in the other?
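The arithmetic of this comparison is easily verified:

```python
YEAR = 3.156e7  # seconds per year

# Ratio of three years to one microsecond (the bomb-site comparison)...
bomb_ratio = (3 * YEAR) / 1e-6
# ...versus the ratio of five billion years to half an hour (1800 seconds).
cosmic_ratio = (5e9 * YEAR) / 1800

print(f"{bomb_ratio:.2e}")    # ~9.5e13
print(f"{cosmic_ratio:.2e}")  # ~8.8e13 -- the same order, ~10^14, as claimed
```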

The late Enrico Fermi and Anthony L. Turkevich at the Institute for Nuclear Studies of the University of Chicago undertook a detailed study of thermonuclear reactions such as must have taken place during the first half hour of the universe's expansion. They concluded that the reactions would have produced about equal amounts of hydrogen and helium, making up 99 per cent of the total material, and about 1 per cent of deuterium. We know that hydrogen and helium do in fact make up about 99 per cent of the matter of the universe. This leaves us with the problem of building the heavier elements. I hold to the opinion that some of them were built by capture of neutrons. However, since the absence of any stable nucleus of atomic weight 5 makes it improbable that the heavier elements could have been produced in the first half hour in the abundances now observed, I would agree that the lion's share of the heavy elements may well have been formed later in the hot interiors of stars.

All the theories - of the origin, age, extent, composition and nature of the universe - are becoming more subject to test by new instruments and new techniques. . . . Nevertheless, we must not forget that the estimate of distances of the galaxies is still founded on the debatable assumption that the brightness of galaxies does not change with time. If galaxies diminish in brightness as they age, the calculations cannot be depended upon. Thus the question whether evolution is or is not taking place in the galaxies is of crucial importance at the present stage of our outlook on the universe.

In addition, certain branches of physical science focus on energy and its large-scale effects. Thermodynamics is the study of heat and the effects of converting heat into other kinds of energy. This branch of physics has a host of highly practical applications because heat is often used to power machines. Physicists also investigate electrical energy and the energy that is carried in electromagnetic waves. These include radio waves, light rays, and X rays - forms of energy that are closely related and that all obey the same set of rules. Chemistry is the study of the composition of matter and the way different substances interact - subjects that involve physics on an atomic scale. In physical chemistry, chemists study the way physical laws govern chemical change, while in other branches of chemistry the focus is on particular chemicals themselves. For example, inorganic chemistry investigates substances found in the nonliving world, and organic chemistry investigates carbon-based substances. Until the 19th century, these two areas of chemistry were thought to be separate and distinct, but today chemists routinely produce organic chemicals from inorganic raw materials. Organic chemists have learned how to synthesize many substances that are found in nature, together with hundreds of thousands that are not, such as plastics and pesticides. Many organic compounds, such as reserpine, a drug used to treat hypertension, cost less to synthesize from inorganic raw materials than to isolate from natural sources. Many synthetic medicinal compounds can be modified to make them more effective than their natural counterparts, with fewer harmful side effects.

The branch of chemistry known as biochemistry deals solely with substances found in living things. It investigates the chemical reactions that organisms use to obtain energy and the reactions they use to build themselves. Increasingly, this field of chemistry has become concerned not simply with chemical reactions themselves but also with how the shape of molecules influences the way they work. The result is the new field of molecular biology, one of the fastest-growing sciences today.

Physical scientists also study matter elsewhere in the universe, including the planets and stars. Astronomy is the science of the heavens, while astrophysics is a branch of astronomy that investigates the physical and chemical nature of stars and other objects. Astronomy deals largely with the universe as it appears today, but a related science called cosmology looks back in time to answer the greatest scientific questions of all: how the universe began and how it came to be as it is today.

The earth sciences examine the structure and composition of our planet, and the physical processes that have helped to shape it. Geology focuses on the structure of Earth, while geography is the study of everything on the planet's surface, including the physical changes that humans have brought about through, for example, farming, mining, or deforestation. Scientists in the field of geomorphology study Earth's present landforms, while mineralogists investigate the minerals in Earth's crust and the way they formed. Water dominates Earth's surface, making it an important subject for scientific research. Oceanographers carry out research in the oceans, while scientists working in the field of hydrology investigate water resources on land, a subject of vital interest in areas prone to drought. Glaciologists study Earth's icecaps and mountain glaciers, and the effects that ice has when it forms, melts, or moves. In atmospheric science, meteorology deals with day-to-day changes in weather, while climatology investigates changes in weather patterns over the longer term.

When living things die, their remains are sometimes preserved, creating a rich store of scientific information. Palaeontology is the study of plant and animal remains that have been preserved in sedimentary rock, often millions of years ago. Palaeontologists study things long dead, and their findings shed light on the history of evolution and on the origin and development of humans. A related science, called palynology, is the study of fossilized spores and pollen grains. Scientists study these tiny structures to learn the types of plants that grew in certain areas during Earth's history, which also helps identify what Earth's climates were like in the past.

The life sciences include all those areas of study that deal with living things. Biology is the general study of the origin, development, structure, function, evolution, and distribution of living things. Biology may be divided into botany, the study of plants; zoology, the study of animals; and microbiology, the study of microscopic organisms such as bacteria, viruses, and fungi. Many single-celled organisms play important roles in life processes and thus are important to more complex forms of life, including plants and animals.

Genetics is the branch of biology that studies the way in which characteristics are transmitted from an organism to its offspring. In the latter half of the 20th century, new advances made it easier to study and manipulate genes at the molecular level, enabling scientists to catalogue all the genes found in each cell of the human body. Exobiology, a new and still speculative field, is the study of possible extraterrestrial life. Although Earth remains the only place known to support life, many believe that it is only a matter of time before scientists discover life elsewhere in the universe.

While exobiology is one of the newest life sciences, anatomy is one of the oldest. It is the study of plant and animal structures, carried out by dissection or by using powerful imaging techniques. Gross anatomy deals with structures that are large enough to see, while microscopic anatomy deals with much smaller structures, down to the level of individual cells.

Physiology explores how living things work. Physiologists study processes such as cellular respiration and muscle contraction, as well as the systems that keep these processes under control. Their work helps to answer questions about one of the key characteristics of life, the fact that most living things maintain a steady internal state even when the environment around them constantly changes.

Together, anatomy and physiology form two of the most important disciplines in medicine, the science of treating injury and human disease. General medical practitioners have to be familiar with human biology as a whole, but medical science also includes a host of clinical specialties. They include sciences such as cardiology, urology, and oncology, which investigate particular organs and disorders, and pathology, the general study of disease and the changes that it causes in the human body.

As well as working with individual organisms, life scientists also investigate the way living things interact. The study of these interactions, known as ecology, has become a key area of study in the life sciences as scientists become increasingly concerned about the disrupting effects of human activities on the environment.

The social sciences explore human society past and present, and the way human beings behave. They include sociology, which investigates the way society is structured and how it functions, as well as psychology, which is the study of individual behaviour and the mind. Social psychology draws on research in both these fields. It examines the way society influences people's behaviour and attitudes.

Another social science, anthropology, looks at humans as a species and examines all the characteristics that make us what we are. These include not only how people relate to each other but also how they interact with the world around them, both now and in the past. As part of this work, anthropologists often carry out long-term studies of particular groups of people in different parts of the world. This kind of research helps to identify characteristics that all human beings share and those that are the products of local culture, learned and handed on from generation to generation.

The social sciences also include political science, law, and economics, which are products of human society. Although far removed from the world of the physical sciences, all these fields can be studied in a scientific way. Political science and law are uniquely human concepts, but economics has some surprisingly close parallels with ecology. This is because the laws that govern resource use, productivity, and efficiency do not operate only in the human world, with its stock markets and global corporations, but in the nonhuman world as well.

In technology, scientific knowledge is put to practical ends. This knowledge comes chiefly from mathematics and the physical sciences, and it is used in designing machinery, materials, and industrial processes. Overall, this work is known as engineering, a word dating back to the early days of the Industrial Revolution, when an ‘engine’ was any kind of machine.

Engineering has many branches, calling for a wide variety of different skills. For example, aeronautical engineers need expertise in the science of fluid flow, because aeroplanes fly through air, which is a fluid. Using wind tunnels and computer models, aeronautical engineers strive to minimize the air resistance generated by an aeroplane, while at the same time maintaining a sufficient amount of lift. Marine engineers also need detailed knowledge of how fluids behave, particularly when designing submarines that have to withstand extra stresses when they dive deep below the water’s surface. In civil engineering, stress calculations ensure that structures such as dams and office towers will not collapse, particularly if they are in earthquake zones. In computing, engineering takes two forms: hardware design and software design. Hardware design refers to the physical design of computer equipment (hardware). Software design is carried out by programmers who analyse complex operations, reducing them to a series of small steps written in a language recognized by computers.

In recent years, a completely new field of technology has developed from advances in the life sciences. Known as biotechnology, it involves such varied activities as genetic engineering, the manipulation of the genetic material of cells or organisms, and cloning, the formation of genetically uniform cells, plants, or animals. Although the field is still in its infancy, many scientists believe that biotechnology will play a major role in many fields, including food production, waste disposal, and medicine.

Science exists because humans have a natural curiosity and an ability to organize and record things. Curiosity is a characteristic shown by many other animals, but organizing and recording knowledge is a skill demonstrated by humans alone.

During prehistoric times, humans recorded information in a rudimentary way. They made paintings on the walls of caves, and they also carved numerical records on bones or stones. They may also have used other ways of recording numerical figures, such as making knots in leather cords, but because these records were perishable, no traces of them remain. Then, with the invention of writing about 6,000 years ago, a new and much more flexible system of recording knowledge appeared.

The earliest writers were the people of Mesopotamia, who lived in a part of present-day Iraq. Initially they used a pictographic script, inscribing tallies and lifelike symbols on tablets of clay. With the passage of time, these symbols gradually developed into cuneiform, a much more stylized script composed of wedge-shaped marks.

Because clay is durable, many of these ancient tablets still survive. They show that, when writing first appeared, the Mesopotamians already had a basic knowledge of mathematics, astronomy, and chemistry, and that they used symptoms to identify common diseases. During the following 2,000 years, as Mesopotamian culture became increasingly sophisticated, mathematics in particular became a flourishing science. Knowledge accumulated rapidly, and by 1000 BC the earliest private libraries had appeared.

Southwest of Mesopotamia, in the Nile Valley of northeastern Africa, the ancient Egyptians developed their own form of pictographic script, writing on papyrus or inscribing text in stone. Written records from 1500 BC show that, like the Mesopotamians, the Egyptians had a detailed knowledge of diseases. They were also keen astronomers and skilled mathematicians - a fact demonstrated by the almost perfect symmetry of the pyramids and by other remarkable structures they built.

For the peoples of Mesopotamia and ancient Egypt, knowledge was recorded mainly for practical needs. For example, astronomical observations enabled the development of early calendars, which helped in organizing the farming year. Yet in ancient Greece, often recognized as the birthplace of Western science, a new scientific enquiry began. Here, philosophers sought knowledge largely for its own sake.

Thales of Miletus was one of the first Greek philosophers to seek natural causes for natural phenomena. He travelled widely throughout Egypt and the Middle East and became famous for predicting a solar eclipse that occurred in 585 BC. At a time when people regarded eclipses as ominous, inexplicable, and frightening events, his prediction marked the start of rationalism, a belief that the universe can be explained by reason alone. Rationalism remains the hallmark of science to this day.

Thales and his successors speculated about the nature of matter and of Earth itself. Thales himself believed that Earth was a flat disk floating on water, but the followers of Pythagoras, one of ancient Greece's most celebrated mathematicians, believed that Earth was spherical. These followers also thought that Earth moved in a circular orbit - not around the Sun but around a central fire. Although flawed and widely disputed, this bold suggestion marked an important development in scientific thought: the idea that Earth might not be, after all, the centre of the universe. At the other end of the spectrum of scientific thought, the Greek philosopher Leucippus and his student Democritus of Abdera proposed that all matter is made up of indivisible atoms, more than 2,000 years before the idea became a part of modern science.

As well as investigating natural phenomena, ancient Greek philosophers also studied the nature of reasoning. At the two great schools of Greek philosophy in Athens - the Academy, founded by Plato, and the Lyceum, founded by Plato's pupil Aristotle - students learned how to reason in a structured way using logic. The methods taught at these schools included induction, which involves taking particular cases and using them to draw general conclusions, and deduction, the process of correctly inferring new facts from something already known.

In the two centuries that followed Aristotle's death in 322 BC, Greek philosophers made remarkable progress in a number of fields. By comparing the Sun's height above the horizon in two different places, the mathematician, astronomer, and geographer Eratosthenes calculated Earth's circumference, producing a figure accurate to within one per cent. Another celebrated Greek mathematician, Archimedes, laid the foundations of mechanics. He also pioneered the science of hydrostatics, the study of the behaviour of fluids at rest. In the life sciences, Theophrastus founded the science of botany, providing detailed and vivid descriptions of a wide variety of plant species as well as investigating the germination process in seeds.

By the 1st century BC, Roman power was growing and Greek influence had begun to wane. During the period of Roman dominance that followed, the Egyptian geographer and astronomer Ptolemy charted the known planets and stars, putting Earth firmly at the centre of the universe, and Galen, a physician of Greek origin, wrote important works on anatomy and physiology. Although skilled soldiers, lawyers, engineers, and administrators, the Romans had little interest in basic science. As a result, science made little progress in the days of the Roman Empire. In Athens, the Lyceum and Academy were closed down in AD 529, bringing the first flowering of rationalism to an end.

For more than nine centuries, from about AD 500 to 1400, Western Europe made only a minor contribution to scientific thought. European philosophers became preoccupied with alchemy, a secretive and mystical pseudoscience that held out the illusory promise of turning inferior metals into gold. Alchemy did lead to some discoveries, such as sulfuric acid, which was first described in the early 1300's, but elsewhere, particularly in China and the Arab world, much more significant progress in the sciences was made.

Chinese science developed in isolation from Europe, and followed a different pattern. Unlike the Greeks, who prized knowledge as an end in itself, the Chinese excelled at turning scientific discoveries to practical ends. The list of their technological achievements is dazzling: it includes the compass, invented in about AD 270; wood-block printing, developed around 700; and gunpowder and movable type, both invented around the year 1000. The Chinese were also capable mathematicians and excellent astronomers. In mathematics, they calculated the value of π (pi) to within seven decimal places by the year 600, while in astronomy, one of their most celebrated observations was that of the supernova, or stellar explosion, that took place in the Crab Nebula in 1054. China was also the source of the world's oldest portable star map, dating from about AD 940.

The Islamic world, which in medieval times extended as far west as Spain, also produced many scientific breakthroughs. The Arab mathematician Muhammad al-Khwarizmi introduced Hindu-Arabic numerals to Europe many centuries after they had been devised in southern Asia. Unlike the numerals used by the Romans, Hindu-Arabic numerals include zero, a mathematical device unknown in Europe at the time. The value of Hindu-Arabic numerals depends on their place: in the number 300, for example, the numeral three is worth ten times as much as in 30. Al-Khwarizmi also wrote on algebra (the word derives from the Arabic al-jabr), and his name survives in the word algorithm, a concept of great importance in modern computing.

In astronomy, Arab observers charted the heavens, giving many of the brightest stars the names we use today, such as Aldebaran, Altair, and Deneb. Arab scientists also explored chemistry, developing methods to manufacture metallic alloys and test the quality and purity of metals. As in mathematics and astronomy, Arab chemists left their mark in some of the names they used - alkali and alchemy, for example, are both words of Arabic origin. Arab scientists also played a part in developing physics. One of the most famous of these physicists, Alhazen, who worked in Egypt, published a book that dealt with the principles of lenses, mirrors, and other devices used in optics. In this work, he rejected the then-popular idea that eyes give out light rays. Instead, he correctly deduced that eyes work when light rays enter the eye from outside.

In Europe, historians often attribute the rebirth of science to a political event - the capture of Constantinople (now Istanbul) by the Turks in 1453. At the time, Constantinople was the capital of the Byzantine Empire and a major seat of learning. Its downfall led to an exodus of Greek scholars to the West. In the period that followed, many scientific works, including those originally from the Arab world, were translated into European languages. Through the invention of the movable type printing press by Johannes Gutenberg around 1450, copies of these texts became widely available.

The Black Death, a recurring outbreak of bubonic plague that began in 1347, disrupted the progress of science in Europe for more than two centuries. However, in 1543 two books were published that had a profound impact on scientific progress. One was De Corporis Humani Fabrica (On the Structure of the Human Body, 7 volumes, 1543), by the Belgian anatomist Andreas Vesalius. Vesalius studied anatomy in Italy, and his masterpiece, which was illustrated by superb woodcuts, corrected errors and misunderstandings about the body that had persisted since the time of Galen, more than 1,300 years before. Unlike Islamic physicians, whose religion prohibited them from dissecting human cadavers, Vesalius investigated the human body in minute detail. As a result, he set new standards in anatomical science, creating a reference work of unique and lasting value.

The other book of great significance published in 1543 was De Revolutionibus Orbium Coelestium (On the Revolutions of the Heavenly Spheres), written by the Polish astronomer Nicolaus Copernicus. In it, Copernicus rejected the idea that Earth was the centre of the universe, as proposed by Ptolemy in the 2nd century AD. Instead, he set out to prove that Earth, together with the other planets, follows orbits around the Sun. Other astronomers opposed Copernicus's ideas, and more ominously, so did the Roman Catholic Church. In the early 1600's, the church placed the book on a list of forbidden works, where it remained for more than two centuries. Despite this ban and despite the book's inaccuracies (for instance, Copernicus believed that Earth's orbit was circular rather than elliptical), De Revolutionibus remained a momentous achievement. It also marked the start of a conflict between science and religion that has dogged Western thought ever since.

In the first decade of the 17th century, the invention of the telescope provided independent evidence to support Copernicus's views. Italian physicist and astronomer Galileo Galilei used the new device to remarkable effect. He became the first person to observe satellites circling Jupiter, the first to make detailed drawings of the surface of the Moon, and the first to see how Venus waxes and wanes as it circles the Sun.

These observations of Venus helped to convince Galileo that Copernicus's Sun-centred view of the universe had been correct, but he fully understood the danger of supporting such heretical ideas. His Dialogue on the Two Chief World Systems, Ptolemaic and Copernican, published in 1632, was carefully crafted to avoid controversy. Even so, he was summoned before the Inquisition (a tribunal established by the pope for judging heretics) the following year and, under threat of torture, forced to recant.

Nicolaus Copernicus (1473-1543) developed the first heliocentric theory of the universe in the modern era, presented in De Revolutionibus Orbium Coelestium, published in the year of Copernicus's death. The system is entirely mathematical, in the sense of predicting the observed positions of celestial bodies on the basis of an underlying geometry, without exploring the mechanics of celestial motion. Its mathematical and scientific superiority over the Ptolemaic system was not as direct as popular history suggests: Copernicus's system adhered to circular planetary motion and let the planets run on 48 epicycles and eccentrics. It was not until the work of Kepler and Galileo that the system became markedly simpler than Ptolemaic astronomy.

The publication of Nicolaus Copernicus's De Revolutionibus Orbium Coelestium (On the Revolutions of the Heavenly Spheres) in 1543 is traditionally considered the inauguration of the scientific revolution. Ironically, Copernicus had no intention of introducing radical ideas into cosmology. His aim was only to restore the purity of ancient Greek astronomy by eliminating novelties introduced by Ptolemy. With such an aim in mind he modelled his own book, which would turn astronomy upside down, on Ptolemy's Almagest. At the core of the Copernican system, as with that of Aristarchus before him, is the concept of the stationary Sun at the centre of the universe, and the revolution of the planets, Earth included, around the Sun. The Earth was ascribed, in addition to an annual revolution around the Sun, a daily rotation around its axis.

Copernicus's greatest achievement is his legacy. By introducing mathematical reasoning into cosmology, he dealt a severe blow to Aristotelian commonsense physics. His concept of an Earth in motion launched the notion of the Earth as a planet. His explanation that he had been unable to detect stellar parallax because of the enormous distance of the sphere of the fixed stars opened the way for future speculation about an infinite universe. Nevertheless, Copernicus still clung to many traditional features of Aristotelian cosmology. He continued to advocate the entrenched view of the universe as a closed world and to see the motion of the planets as uniform and circular. Thus, in evaluating Copernicus's legacy, it should be noted that he set the stage for far more daring speculations than he himself could make.

The heavy metaphysical underpinning of Kepler's laws, combined with an obscure style and a demanding mathematics, caused most contemporaries to ignore his discoveries. Even his Italian contemporary Galileo Galilei, who corresponded with Kepler and possessed his books, never referred to the three laws. Instead, Galileo provided the two important elements missing from Kepler's work: a new science of dynamics that could be employed in an explanation of planetary motion, and a staggering new body of astronomical observations. The observations were made possible by the invention of the telescope in Holland c.1608 and by Galileo's ability to improve on this instrument without having ever seen the original. Thus equipped, he turned his telescope skyward, and saw some spectacular sights.

The results of his discoveries were immediately published in the Sidereus nuncius (The Starry Messenger) of 1610. Galileo observed that the Moon was very similar to the Earth, with mountains, valleys, and oceans, and not at all that perfect, smooth spherical body it was claimed to be. He also discovered four moons orbiting Jupiter. As for the Milky Way, instead of being a stream of light, it was, rather, a large aggregate of stars. Later observations resulted in the discovery of sunspots, the phases of Venus, and that strange phenomenon which would later be designated as the rings of Saturn.

Having announced these sensational astronomical discoveries--which reinforced his conviction of the reality of the heliocentric theory--Galileo resumed his earlier studies of motion. He now attempted to construct a comprehensive new science of mechanics necessary in a Copernican world, and the results of his labours were published in Italian in two epoch-making books: Dialogue Concerning the Two Chief World Systems (1632) and Discourses and Mathematical Demonstrations Concerning the Two New Sciences (1638). His studies of projectiles and free-falling bodies brought him very close to the full formulation of the laws of inertia and acceleration (the first two laws of Isaac Newton). Galileo's legacy includes both the modern notion of "laws of nature" and the idea of mathematics as nature's true language. He contributed to the mathematization of nature and the geometrization of space, as well as to the mechanical philosophy that would dominate the 17th and 18th centuries. Perhaps most important, it is largely due to Galileo that experiments and observations serve as the cornerstone of scientific reasoning.

Today, Galileo is remembered equally well because of his conflict with the Roman Catholic church. His uncompromising advocacy of Copernicanism after 1610 was responsible, in part, for the placement of Copernicus's De Revolutionibus on the Index of Forbidden Books in 1616. At the same time, Galileo was warned not to teach or defend Copernicanism in public. The election of Galileo's friend Maffeo Barberini as Pope Urban VIII in 1624 filled Galileo with the hope that such a verdict could be revoked. With perhaps some unwarranted optimism, Galileo set to work to complete his Dialogue (1632). However, Galileo underestimated the power of the enemies he had made during the previous two decades, particularly some Jesuits who had been the target of his acerbic tongue. The outcome was that Galileo was summoned to Rome and there forced to abjure, on his knees, the views he had expressed in his book. Ever since, Galileo has been portrayed as a victim of a repressive church and a martyr in the cause of freedom of thought; as such, he has become a powerful symbol.

Despite his passionate advocacy of Copernicanism and his fundamental work in mechanics, Galileo continued to accept the age-old views that planetary orbits were circular and the cosmos an enclosed world. These beliefs, as well as a reluctance to apply mathematics rigorously to astronomy as he had previously applied it to terrestrial mechanics, prevented him from arriving at the correct law of inertia. Thus, it remained for Isaac Newton to unite heaven and Earth in his immense intellectual achievement, the Philosophiae naturalis principia mathematica (Mathematical Principles of Natural Philosophy), which was published in 1687. The first book of the Principia contained Newton's three laws of motion. The first expounds the law of inertia: every body persists in a state of rest or uniform motion in a straight line unless compelled to change such a state by an impressed force. The second is the law of acceleration, according to which the change of motion of a body is proportional to the force acting upon it and takes place in the direction of the straight line along which that force is impressed. The third, and most original, law ascribes to every action an opposite and equal reaction. These laws governing terrestrial motion were extended to include celestial motion in book 3 of the Principia, where Newton formulated his most famous law, the law of gravitation: every body in the universe attracts every other body with a force directly proportional to the product of their masses and inversely proportional to the square of the distance between them.
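In modern notation the law of gravitation reads F = G m1 m2 / r². A small illustrative sketch follows; the constants and the Earth-Moon values used are standard modern figures, assumed here for illustration rather than taken from the text:

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r^2.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2 (assumed modern value)

def gravitational_force(m1, m2, r):
    """Attractive force (newtons) between two masses (kg) a distance r (m) apart."""
    return G * m1 * m2 / r**2

# Example: the Earth-Moon attraction, using standard modern values.
print(f"{gravitational_force(5.972e24, 7.348e22, 3.844e8):.3e} N")  # ~1.98e20 N
```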

The Principia is deservedly considered one of the greatest scientific masterpieces of all time. In 1704 Newton published his second great work, the Opticks, in which he formulated his corpuscular theory of light and his theory of colours. In later editions Newton appended a series of "queries" concerning various related topics in natural philosophy. These speculative, and sometimes metaphysical, statements on such issues as light, heat, ether, and matter became most productive during the 18th century, when the book and the experimental method it propagated became immensely popular.

The 17th century French scientist and mathematician René Descartes was also one of the most influential thinkers in Western philosophy. Descartes stressed the importance of skepticism in thought and proposed the idea that existence had a dual nature: one physical, the other mental. The latter concept, known as Cartesian dualism, continues to engage philosophers today. This passage from Discourse on Method (first published in his Philosophical Essays in 1637) contains a summary of his thesis, which includes the celebrated phrase “I think, therefore I am.”

Then examining attentively what I was, and seeing that I could pretend that I had no body and that there was no world or place that I [was] in, but that I could not, for all that, pretend that I did not exist, and that, on the contrary, from the very fact that I thought of doubting the truth of other things, it followed very evidently and very certainly that I existed; while, on the other hand, if I had only ceased to think, although all the rest of what I had ever imagined had been true, I would have had no reason to believe that I existed; I thereby concluded that I was a substance, of which the whole essence or nature consists in thinking, and which, in order to exist, needs no place and depends on no material thing; so that this “I,” that is to say, the mind, by which I am what I am, is entirely distinct from the body, and even that it is easier to know than the body, and moreover that even if the body were not, it would not cease to be all that it is.

After this, I considered in general what is needed for a proposition to be true and certain; for, since I had just found one which I knew to be so, I thought that I ought also to know what this certainty consisted of. And having noticed that there is nothing at all in this, I think, therefore I am, which assures me that I am speaking the truth, except that I see very clearly that in order to think one must exist, I judged that I could take it to be a general rule that the things we conceive very clearly and very distinctly are all true, but that there is nevertheless some difficulty in being able to recognize for certain which are the things we conceive distinctly.

Following this, reflecting on the fact that I had doubts, and that consequently my being was not perfect, for I saw clearly that it was a greater perfection to know than to doubt, I decided to inquire from what place I had learned to think of something more perfect than myself; and I clearly recognized that this must have been from some nature which was in fact more perfect. As for the notions I had of several other things outside myself, such as the sky, the earth, light, heat and a thousand others, I had not the same concern to know their source, because, seeing nothing in them which seemed to make them superior to myself, I could believe that, if they were true, they were dependencies of my nature, in as much as it had some perfection; and, if they were not, that I held them from nothing, that is to say that they were in me because of an imperfection in my nature. But I could not make the same judgement concerning the idea of a being more perfect than myself; for to hold it from nothing was something manifestly impossible; and because it is no less contradictory that the more perfect should proceed from and depend on the less perfect, than it is that something should emerge out of nothing, I could not hold it from myself; with the result that it remained that it must have been put into me by a being whose nature was truly more perfect than mine and which even had in itself all the perfection of which I could have any idea, which is to say, in a word, which was God. To which I added that, since I knew some perfections that I did not have, I was not the only being which existed (I shall freely use here, with your permission, the terms of the School) but that there must be some other more perfect being, upon whom I depended, and from whom I had acquired all I had; for, if I had been alone and independent of all others, so as to have had from myself this small portion of perfection that I had by participation in the perfection of God, I could have given myself, by the same reason, all the remainder of perfection that I knew myself to lack, and thus to be myself infinite, eternal, immutable, omniscient, all powerful, and finally to have all the perfections that I could observe to be in God. For, following the reasonings by which I had proved the existence of God, in order to understand the nature of God as far as my own nature was capable of doing, I had only to consider, concerning all the things of which I found in myself some idea, whether it was a perfection or not to have them: and I was assured that none of those which indicated some imperfection was in him, but that all the others were. So I saw that doubt, inconstancy, sadness and similar things could not be in him, seeing that I myself would have been very pleased to be free from them. Then, further, I had ideas of many sensible and bodily things; for even supposing that I was dreaming, and that everything I saw or imagined was false, I could not, nevertheless, deny that the ideas were really in my thoughts.
But, because I had already recognized in myself very clearly that intelligent nature is distinct from the corporeal, considering that all composition is evidence of dependency, and that dependency is manifestly a defect, I thence judged that it could not be a perfection in God to be composed of these two natures, and that, consequently, he was not so composed; but that, if there were any bodies in the world or any intelligence or other natures which were not wholly perfect, their existence must depend on his power, in such a way that they could not subsist without him for a single instant.

I set out after that to seek other truths; and turning to the object of the geometers [geometry], which I conceived as a continuous body, or a space extended indefinitely in length, width and height or depth, divisible into various parts, which could have various figures and sizes and be moved or transposed in all sorts of ways—for the geometers take all that to be in the object of their study—I went through some of their simplest proofs. And having observed that the great certainty that everyone attributes to them is based only on the fact that they are clearly conceived according to the rule I spoke of earlier, I noticed also that they had nothing at all in them which might assure me of the existence of their object. Thus, for example, I very well perceived that, supposing a triangle to be given, its three angles must be equal to two right-angles, but I saw nothing, for all that, which assured me that any such triangle existed in the world; whereas, reverting to the examination of the idea I had of a perfect Being, I found that existence was comprised in the idea in the same way that the equality of the three angles of a triangle to two right angles is comprised in the idea of a triangle or, as in the idea of a sphere, the fact that all its parts are equidistant from its centre, or even more obviously so; and that consequently it is at least as certain that God, who is this perfect Being, is, or exists, as any geometric demonstration can be.

The impact of the Newtonian accomplishment was enormous. Newton's two great books resulted in the establishment of two traditions that, though often mutually exclusive, nevertheless permeated every area of science. The first was the mathematical and reductionist tradition of the Principia, which, like René Descartes's mechanical philosophy, propagated a rational, well-regulated image of the universe. The second was the experimental tradition of the Opticks, somewhat less demanding than the mathematical tradition and, owing to the speculative and suggestive queries appended to the Opticks, highly applicable to chemistry, biology, and the other new scientific disciplines that began to flourish in the 18th century. This is not to imply that everyone in the scientific establishment was, or would be, a Newtonian. Newtonianism had its share of detractors. Rather, the Newtonian achievement was so great, and its applicability to other disciplines so strong, that although Newtonian science could be argued against, it could not be ignored. In fact, in the physical sciences an initial reaction against universal gravitation occurred. For many, the concept of action at a distance seemed to hark back to those occult qualities with which the mechanical philosophy of the 17th century had done away. By the second half of the 18th century, however, universal gravitation would be proved correct, thanks to the work of Leonhard Euler, A. C. Clairaut, and Pierre Simon de Laplace, the last of whom announced the stability of the solar system in his masterpiece Celestial Mechanics (1799-1825).

Newton's influence was not confined to the domain of the natural sciences. The philosophes of the 18th-century Enlightenment sought to apply scientific methods to the study of human society. To them, the empiricist philosopher John Locke was the first person to attempt this. They believed that in his Essay Concerning Human Understanding (1690) Locke did for the human mind what Newton had done for the physical world. Although Locke's psychology and epistemology were to come under increasing attack as the 18th century advanced, other thinkers such as Adam Smith, David Hume, and Abbé de Condillac would aspire to become the Newtons of the mind or the moral realm. These confident, optimistic men of the Enlightenment argued that there must exist universal human laws that transcend differences of human behaviour and the variety of social and cultural institutions. Labouring under such an assumption, they sought to uncover these laws and apply them to the new society they hoped to bring about.

As the 18th century progressed, the optimism of the philosophes waned and a reaction began to set in. Its first manifestation occurred in the religious realm. The mechanistic interpretation of the world--shared by Newton and Descartes--had, in the hands of the philosophes, led to materialism and atheism. Thus, by mid-century the stage was set for a revivalist movement, which took the form of Methodism in England and pietism in Germany. By the end of the century the romantic reaction had begun. Fuelled in part by religious revivalism, the romantics attacked the extreme rationalism of the Enlightenment, the impersonalization of the mechanistic universe, and the contemptuous attitude of "mathematicians" toward imagination, emotions, and religion.

The romantic reaction, however, was not antiscientific; its adherents rejected a specific type of the mathematical sciences, not the entire enterprise. In fact, the romantic reaction, particularly in Germany, would give rise to a creative movement--the Naturphilosophie--that in turn would be crucial for the development of the biological and life sciences in the 19th century, and would nourish the metaphysical foundation necessary for the emergence of the concepts of energy, forces, and conservation.

In classical physics, external reality consisted of inert and inanimate matter moving in accordance with wholly deterministic natural laws, and wholes were constituted by collections of discrete atomized parts. Classical physics was also premised, however, on a dualistic conception of reality as consisting of abstract disembodied ideas existing in a domain separate from and superior to sensible objects and movements. The notion that the material world experienced by the senses was inferior to the immaterial world experienced by mind or spirit has been blamed for frustrating the progress of physics up to at least the time of Galileo. Nevertheless, in one very important respect it also made the first scientific revolution possible. Copernicus, Galileo, Kepler, and Newton firmly believed that the immaterial geometrical and mathematical ideas that inform physical reality had a prior existence in the mind of God and that doing physics was a form of communion with these ideas.

Even though instruction at Cambridge was still dominated by the philosophy of Aristotle, some freedom of study was permitted in the student's third year. Newton immersed himself in the new mechanical philosophy of Descartes, Gassendi, and Boyle; in the new algebra and analytical geometry of Vieta, Descartes, and Wallis; and in the mechanics and Copernican astronomy of Galileo. At this stage Newton showed no great talent. His scientific genius emerged suddenly when the plague closed the University in the summer of 1665 and he had to return to Lincolnshire. There, within 18 months he began revolutionary advances in mathematics, optics, physics, and astronomy.

During the plague years Newton laid the foundation for elementary differential and integral CALCULUS, several years before its independent discovery by the German philosopher and mathematician LEIBNIZ. The "method of fluxions," as he termed it, was based on his crucial insight that the integration of a function (or finding the area under its curve) is merely the inverse procedure to differentiating it (or finding the slope of the curve at any point). Taking differentiation as the basic operation, Newton produced simple analytical methods that unified a host of disparate techniques previously developed on a piecemeal basis to deal with such problems as finding areas, tangents, the lengths of curves, and their maxima and minima. Even though Newton could not fully justify his methods--rigorous logical foundations for the calculus were not developed until the 19th century--he receives the credit for developing a powerful tool of problem solving and analysis in pure mathematics and physics. Isaac Barrow, a Fellow of Trinity College and Lucasian Professor of Mathematics in the University, was so impressed by Newton's achievement that when he resigned his chair in 1669 to devote himself to theology, he recommended that the 27-year-old Newton take his place.
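
Newton's key insight - that integration and differentiation are inverse operations - can be illustrated with a short numerical sketch in Python (the function, step sizes, and values below are illustrative choices, not anything drawn from Newton's own work):

def f(x):
    return x ** 2  # an example curve

def F(t, n=20000):
    # Approximate the integral of f from 0 to t with n trapezoids
    # (the area under the curve up to t).
    h = t / n
    area = 0.5 * (f(0) + f(t)) * h
    for i in range(1, n):
        area += f(i * h) * h
    return area

# Differentiating the accumulated area recovers f itself:
x0, dx = 2.0, 1e-5
slope = (F(x0 + dx) - F(x0 - dx)) / (2 * dx)
print(slope, f(x0))  # both are approximately 4.0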

Newton's initial lectures as Lucasian Professor dealt with optics, including his remarkable discoveries made during the plague years. He had reached the revolutionary conclusion that white light is not a simple, homogeneous entity, as natural philosophers since Aristotle had believed. When he passed a thin beam of sunlight through a glass prism, he noted the oblong spectrum of colours--red, yellow, green, blue, violet--that formed on the wall opposite. Newton showed that the spectrum was too long to be explained by the accepted theory of the bending (or refraction) of light by dense media. The old theory said that all rays of white light striking the prism at the same angle would be equally refracted. Newton argued that white light is really a mixture of many different types of rays, that the different types of rays are refracted at slightly different angles, and that each different type of ray is responsible for producing a given spectral colour. A so-called crucial experiment confirmed the theory. Newton selected out of the spectrum a narrow band of light of one colour. He sent it through a second prism and observed that no further elongation occurred. All the selected rays of one colour were refracted at the same angle.

These discoveries led Newton to the logical, but erroneous, conclusion that telescopes using refracting lenses could never overcome the distortions of chromatic dispersion. He therefore proposed and constructed a reflecting telescope, the first of its kind, and the prototype of the largest modern optical telescopes. In 1671 he donated an improved version to the Royal Society of London, the foremost scientific society of the day. As a consequence, he was elected a fellow of the society in 1672. Later that year Newton published his first scientific paper in the Philosophical Transactions of the society. It dealt with the new theory of light and colour and is one of the earliest examples of the short research paper.

Newton's paper was well received, but two leading natural philosophers, Robert Hooke and Christiaan Huygens, rejected Newton's naive claim that his theory was simply derived with certainty from experiments. In particular they objected to what they took to be Newton's attempt to prove by experiment alone that light consists in the motion of small particles, or corpuscles, rather than in the transmission of waves or pulses, as they both believed. Although Newton's subsequent denial of the use of hypotheses was not convincing, his ideas about scientific method won universal assent, along with his corpuscular theory, which reigned until the wave theory was revived in the early 19th century.

The debate soured Newton's relations with Hooke. Newton withdrew from public scientific discussion for about a decade after 1675, devoting himself to chemical and alchemical researches. He delayed the publication of a full account of his optical researches until after the death of Hooke in 1703. Newton's Opticks appeared the following year. It dealt with the theory of light and colour and with Newton's investigations of the colours of thin sheets, of "Newton's rings," and of the phenomenon of diffraction of light. To explain some of his observations he had to graft elements of a wave theory of light onto his basically corpuscular theory.

Newton's greatest achievement was his work in physics and celestial mechanics, which culminated in the theory of universal gravitation. Even though Newton also began this research in the plague years, the story that he discovered universal gravitation in 1666 while watching an apple fall from a tree in his garden is a myth. By 1666, Newton had formulated early versions of his three LAWS OF MOTION. He had also discovered the law stating the centrifugal force (or force away from the centre) of a body moving uniformly in a circular path. However, he still believed that the earth's gravity and the motions of the planets might be caused by the action of whirlpools, or vortices, of small corpuscles, as Descartes had claimed. Moreover, although he knew the law of centrifugal force, he did not have a correct understanding of the mechanics of circular motion. He thought of circular motion as the result of a balance between two forces--one centrifugal, the other centripetal (toward the centre)--rather than as the result of one force, a centripetal force, which constantly deflects the body away from its inertial path in a straight line.

Newton's great insight of 1666 was to imagine that the Earth's gravity extended to the Moon, counterbalancing its centrifugal force. From his law of centrifugal force and Kepler's third law of planetary motion, Newton deduced that the centrifugal (and hence centripetal) force of the Moon or of any planet must decrease as the inverse square of its distance from the centre of its motion. For example, if the distance is doubled, the force becomes one-fourth as much; if distance is trebled, the force becomes one-ninth as much. This theory agreed with Newton's data to within about 11%.
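
Newton's reasoning can be rerun with modern figures (the constants below are present-day values, used purely for illustration): the Moon lies about 60 Earth radii away, so its centripetal acceleration should be roughly the surface gravity g diluted by the square of that factor.

import math

g = 9.81              # surface gravity, m/s^2
R_earth = 6.371e6     # Earth's radius, m
r_moon = 3.844e8      # Earth-Moon distance, m
T = 27.32 * 86400     # sidereal month, s

# Centripetal acceleration of the Moon in its (nearly circular) orbit
a_orbit = 4 * math.pi ** 2 * r_moon / T ** 2

# Surface gravity reduced by the inverse square of the distance
a_inverse_square = g * (R_earth / r_moon) ** 2

print(a_orbit)           # ~2.72e-3 m/s^2
print(a_inverse_square)  # ~2.70e-3 m/s^2 - close agreement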

In 1679, Newton returned to his study of celestial mechanics when his adversary Hooke drew him into a discussion of the problem of orbital motion. Hooke is credited with suggesting to Newton that circular motion arises from the centripetal deflection of inertially moving bodies. Hooke further conjectured that since the planets move in ellipses with the Sun at one focus (Kepler's first law), the centripetal force drawing them to the Sun should vary as the inverse square of their distances from it. Hooke could not prove this theory mathematically, although he boasted that he could. Not to be shown up by his rival, Newton applied his mathematical talents to proving Hooke's conjecture. He showed that if a body obeys Kepler's second law (which states that the line joining a planet to the sun sweeps out equal areas in equal times), then the body is being acted upon by a centripetal force. This discovery revealed for the first time the physical significance of Kepler's second law. Given this discovery, Newton succeeded in showing that a body moving in an elliptical path and attracted to one focus must indeed be drawn by a force that varies as the inverse square of the distance. Later even these results were set aside by Newton.

In 1684 the young astronomer Edmond Halley, tired of Hooke's fruitless boasting, asked Newton whether he could prove Hooke's conjecture and to his surprise was told that Newton had solved the problem a full 5 years before but had now mislaid the paper. At Halley's constant urging Newton reproduced the proofs and expanded them into a paper on the laws of motion and problems of orbital mechanics. Finally Halley persuaded Newton to compose a full-length treatment of his new physics and its application to astronomy. After 18 months of sustained effort, Newton published (1687) the Philosophiae naturalis principia mathematica (The Mathematical Principles of Natural Philosophy), or Principia, as it is universally known.

By common consent the Principia is the greatest scientific book ever written. Within the framework of an infinite, homogeneous, three-dimensional, empty space and a uniformly and eternally flowing "absolute" time, Newton fully analysed the motion of bodies in resisting and nonresisting media under the action of centripetal forces. The results were applied to orbiting bodies, projectiles, pendula, and free-fall near the Earth. He further demonstrated that the planets were attracted toward the Sun by a force varying as the inverse square of the distance and generalized that all heavenly bodies mutually attract one another. By further generalization, he reached his law of universal gravitation: every piece of matter attracts every other piece with a force proportional to the product of their masses and inversely proportional to the square of the distance between them.
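
In modern notation (not Newton's own, which was geometrical), the law is usually written:

F = Gm1m2/r²,

where m1 and m2 are the two masses, r the distance between them, and G a universal constant of proportionality, first measured long after Newton's death.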

Given the law of gravitation and the laws of motion, Newton could explain a wide range of hitherto disparate phenomena such as the eccentric orbits of comets, the causes of the tides and their major variations, the precession of the Earth's axis, and the perturbation of the motion of the Moon by the gravity of the Sun. Newton's one general law of nature and one system of mechanics reduced to order most of the known problems of astronomy and terrestrial physics. The work of Galileo, Copernicus, and Kepler was united and transformed into one coherent scientific theory. The new Copernican world-picture finally had a firm physical basis.

Because Newton repeatedly used the term "attraction" in the Principia, mechanical philosophers attacked him for reintroducing into science the idea that mere matter could act at a distance upon other matter. Newton replied that he had only intended to show the existence of gravitational attraction and to discover its mathematical law, not to inquire into its cause. He no more than his critics believed that brute matter could act at a distance. Having rejected the Cartesian vortices, he reverted in the early 1700s to the idea that some sort of material medium, or ether, caused gravity. But Newton's ether was no longer a Cartesian-type ether acting solely by impacts among particles. The ether had to be extremely rare so it would not obstruct the motions of the planets, and yet very elastic or springy so it could push large masses toward one another. Newton postulated that the new ether consisted of particles endowed with very powerful short-range repulsive forces. His unreconciled ideas on forces and ether deeply influenced later natural philosophers in the 18th century when they turned to the phenomena of chemistry, electricity and magnetism, and physiology.

With the publication of the Principia, Newton was recognized as the leading natural philosopher of the age, but his creative career was effectively over. After suffering a nervous breakdown in 1693, he retired from research to seek a government position in London. In 1696 he became Warden of the Royal Mint and in 1699 its Master, an extremely lucrative position. He oversaw the great English recoinage of the 1690s and pursued counterfeiters with ferocity. In 1703 he was elected president of the Royal Society and was reelected each year until his death. He was knighted (1708) by Queen Anne, the first scientist to be so honoured for his work.

As any overt appeal to metaphysics became unfashionable, the science of mechanics was increasingly regarded, says Ivor Leclerc, as 'an autonomous science,' and any alleged role of God as a 'deus ex machina.' At the beginning of the nineteenth century, Pierre-Simon de Laplace, along with a number of other great French mathematicians, advanced the view that the science of mechanics constituted a complete view of nature. Since this science, on its own epistemological terms, had revealed itself to be the fundamental science, the hypothesis of God was, they concluded, unnecessary.

Pierre-Simon de Laplace (1749-1827) is recognized for eliminating not only the theological component of classical physics but the 'entire metaphysical component' as well. The epistemology of science requires, he held, that we proceed by inductive generalisations from observed facts to hypotheses that are 'tested by observed conformity of the phenomena.' What was unique about Laplace's view of hypotheses was his insistence that we cannot attribute reality to them. Although concepts like force, mass, motion, cause, and laws are obviously present in classical physics, they exist in Laplace's view only as quantities. Physics is concerned, he argued, with quantities that we associate as a matter of convenience with concepts, and the truths about nature are only quantities.

The seventeenth-century view of physics as a philosophy of nature, or as natural philosophy, was displaced by the view of physics as an autonomous science: the science of nature. This view, which was premised on the doctrine of positivism, promised to subsume all of nature within a mathematical analysis of entities in motion and claimed that the true understanding of nature was revealed only in the mathematical descriptions. Since the doctrine of positivism assumed that the knowledge we call physics resides only in the mathematical formalisms of physical theory, it disallowed the prospect that the vision of physical reality revealed in physical theory could have any other meaning. In the history of science, the irony is that positivism, which was intended to banish metaphysical concerns from the domain of science, served to perpetuate a seventeenth-century metaphysical assumption about the relationship between physical reality and physical theory.

The conviction remains that these discoveries have more potential to transform our conception of the 'way things are' than any previous discovery in the history of science, for their implications extend well beyond the domain of the physical sciences, and the best efforts of large numbers of thoughtful people will be required to understand them.

In less contentious areas, European scientists made rapid progress on many fronts in the 17th century. Galileo himself investigated the laws governing falling objects, and discovered that the duration of a pendulum's swing is constant for any given length. He explored the possibility of using this to control a clock, an idea that his son put into practice in 1641. Two years later another Italian, the mathematician and physicist Evangelista Torricelli, made the first barometer. In doing so he discovered atmospheric pressure and produced the first artificial vacuum known to science. In 1650 German physicist Otto von Guericke invented the air pump. He is best remembered for carrying out a demonstration of the effects of atmospheric pressure. Von Guericke joined two large, hollow bronze hemispheres, and then pumped out the air within them to form a vacuum. To illustrate the strength of the vacuum, von Guericke showed how two teams of eight horses pulling in opposite directions could not separate the hemispheres. Yet the hemispheres fell apart as soon as air was let in.

Throughout the 17th century major advances occurred in the life sciences, including the discovery of the circulatory system by the English physician William Harvey and the discovery of microorganisms by the Dutch microscope maker Antoni van Leeuwenhoek. In England, Robert Boyle established modern chemistry as a full-fledged science, while in France, philosopher and scientist René Descartes made numerous discoveries in mathematics, as well as advancing the case for rationalism in scientific research.

However, the century's greatest achievements came in 1665, when the English physicist and mathematician Isaac Newton fled from Cambridge to his rural birthplace in Woolsthorpe to escape an epidemic of the plague. There, in the course of a single year, he made a series of extraordinary breakthroughs, including new theories about the nature of light and gravitation and the development of calculus. Newton is perhaps best known for his proof that the force of gravity extends throughout the universe and that all objects attract each other with a precisely defined and predictable force. Gravity holds the Moon in its orbit around the Earth and is the principal cause of the Earth’s tides. These discoveries revolutionized how people viewed the universe and they marked the birth of modern science.

Newton's work demonstrated that nature was governed by basic rules that could be identified using the scientific method. This new approach to nature and discovery liberated 18th-century scientists from passively accepting the wisdom of ancient writings or religious authorities that had never been tested by experiment. In what became known as the Age of Reason, or the Age of Enlightenment, scientists in the 18th century began to apply rational thought, careful observation, and experimentation to solve a variety of problems.

Advances in the life sciences saw the gradual erosion of the theory of spontaneous generation, a long-held notion that life could spring from nonliving matter. It also brought the beginning of scientific classification, pioneered by the Swedish naturalist Carolus Linnaeus, who classified close to 12,000 living plants and animals into a systematic arrangement.

By 1700 the first steam engine had been built. Improvements in the telescope enabled German-born British astronomer Sir William Herschel to discover the planet Uranus in 1781. Throughout the 18th century science began to play an increasing role in everyday life. New manufacturing processes revolutionized the way that products were made, heralding the Industrial Revolution. In An Inquiry Into the Nature and Causes of the Wealth of Nations, published in 1776, British economist Adam Smith stressed the advantages of division of labour and advocated the use of machinery to increase production. He urged governments to allow individuals to compete within a free market in order to produce fair prices and maximum social benefit. Smith’s work for the first time gave economics the stature of an independent subject of study and his theories greatly influenced the course of economic thought for more than a century.

With knowledge in all branches of science accumulating rapidly, scientists began to specialize in particular fields. Specialization did not mean that discoveries narrowed in scope as well: from the 19th century onward, research began to uncover principles that unite the universe as a whole.

In chemistry, one of these discoveries was a conceptual one: that all matter is made of atoms. Originally debated in ancient Greece, atomic theory was revived in a modern form by the English chemist John Dalton in 1803. Dalton provided clear and convincing chemical proof that such particles exist. He discovered that each atom has a characteristic mass and that atoms remain unchanged when they combine with other atoms to form compound substances. Dalton used atomic theory to explain why substances always combine in fixed proportions - a field of study known as quantitative chemistry. In 1869 Russian chemist Dmitry Mendeleyev used Dalton’s discoveries about atoms and their behaviour to draw up his periodic table of the elements.
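
A simple worked example of fixed proportions: in water, oxygen and hydrogen always combine in an unvarying ratio by mass - roughly eight grams of oxygen for every gram of hydrogen - however the water is prepared, a regularity atomic theory explains naturally.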

Other 19th-century discoveries in chemistry included the world's first synthetic fertilizer, manufactured in England in 1842. In 1846 German chemist Christian Schoenbein accidentally developed the powerful and unstable explosive nitrocellulose. The discovery occurred after he had spilled a mixture of nitric and sulfuric acids and then mopped it up with a cotton apron. After the apron had been hung up to dry, it exploded. He later learned that the cellulose in the cotton apron combined with the acids to form a highly flammable explosive.

In 1828 the German chemist Friedrich Wöhler showed that it was possible to make carbon-containing organic compounds from inorganic ingredients, a breakthrough that opened an entirely new field of research. By the end of the 19th century, hundreds of organic compounds had been synthesized, including mauve, magenta, and other synthetic dyes, as well as aspirin, still one of the world's most useful drugs.

In physics, the 19th century is remembered chiefly for research into electricity and magnetism, which was pioneered by physicists such as Michael Faraday and James Clerk Maxwell of Great Britain. In 1831 Faraday demonstrated that a moving magnet could set an electric current flowing in a conductor. This experiment and others he performed led to the development of electric motors and generators. While Faraday's genius lay in discovery by experiment, Maxwell produced theoretical breakthroughs of even greater note. Maxwell's development of the electromagnetic theory of light took many years. It began with the paper 'On Faraday's Lines of Force' (1855-1856), in which Maxwell built on the ideas of British physicist Michael Faraday. Faraday explained that electric and magnetic effects result from lines of force that surround conductors and magnets. Maxwell drew an analogy between the behaviour of the lines of force and the flow of a liquid, deriving equations that represented electric and magnetic effects. The next step toward Maxwell's electromagnetic theory was the publication of the paper 'On Physical Lines of Force' (1861-1862). Here Maxwell developed a model for the medium that could carry electric and magnetic effects. He devised a hypothetical medium that consisted of a fluid in which magnetic effects created whirlpool-like structures. These whirlpools were separated by cells created by electric effects, so the combination of magnetic and electric effects formed a honeycomb pattern.

Maxwell could explain all known effects of electromagnetism by considering how the motion of the whirlpools, or vortices, and cells could produce magnetic and electric effects. He showed that the lines of force behave like the structures in the hypothetical fluid. Maxwell went further, considering what would happen if the fluid could change density, or be elastic. The movement of a charge would set up a disturbance in an elastic medium, forming waves that would move through the medium. The speed of these waves would be equal to the ratio of the value for an electric current measured in electrostatic units to the value of the same current measured in electromagnetic units. German physicists Friedrich Kohlrausch and Wilhelm Weber had calculated this ratio and found it the same as the speed of light. Maxwell inferred that light consists of waves in the same medium that causes electric and magnetic phenomena.
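
In modern terms the ratio Maxwell seized on appears as a combination of the magnetic and electric constants, and a short check in Python (using present-day SI values) reproduces the coincidence:

import math

mu_0 = 4 * math.pi * 1e-7     # magnetic constant, H/m (classical value)
epsilon_0 = 8.8541878128e-12  # electric constant, F/m

# Speed of electromagnetic waves predicted from the two constants
c = 1 / math.sqrt(mu_0 * epsilon_0)
print(c)  # ~2.998e8 m/s, the measured speed of light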

Maxwell found supporting evidence for this inference in work he did on defining basic electrical and magnetic quantities in terms of mass, length, and time. In the paper 'On the Elementary Relations of Electrical Quantities' (1863), he wrote that the ratio of the two definitions of any quantity based on electric and magnetic forces is always equal to the velocity of light. He considered that light must consist of electromagnetic waves but first needed to prove this by abandoning the vortex analogy and developing a mathematical system. He achieved this in 'A Dynamical Theory of the Electromagnetic Field' (1864), in which he developed the fundamental equations that describe the electromagnetic field. These equations showed that light is propagated in two waves, one magnetic and the other electric, which vibrate perpendicular to each other and perpendicular to the direction in which they are moving (like a wave travelling along a string). Maxwell first published this solution in 'Note on the Electromagnetic Theory of Light' (1868) and summed up all of his work on electricity and magnetism in Treatise on Electricity and Magnetism in 1873.

The treatise also suggested that a whole family of electromagnetic radiation must exist, of which visible light was only one part. In 1888 German physicist Heinrich Hertz made the sensational discovery of radio waves, a form of electromagnetic radiation with wavelengths too long for our eyes to see, confirming Maxwell’s ideas. Unfortunately, Maxwell did not live long enough to see this vindication of his work. He also did not live to see the ether (the medium in which light waves were said to be propagated) disproved with the classic experiments of German-born American physicist Albert Michelson and American chemist Edward Morley in 1881 and 1887. Maxwell had suggested an experiment much like the Michelson-Morley experiment in the last year of his life. Although Maxwell believed the ether existed, his equations were not dependent on its existence, and so remained valid.

Maxwell's other major contribution to physics was to provide a mathematical basis for the kinetic theory of gases, which explains that gases behave as they do because they are composed of particles in constant motion. Maxwell built on the achievements of German physicist Rudolf Clausius, who in 1857 and 1858 had shown that a gas must consist of molecules in constant motion colliding with each other and with the walls of their container. Clausius developed the idea of the mean free path, which is the average distance that a molecule travels between collisions.
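
In modern notation the mean free path is usually written:

λ = 1/(√2 π d² n),

where d is the effective diameter of a molecule and n the number of molecules per unit volume; denser gases or larger molecules mean shorter paths between collisions.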

Maxwell's development of the kinetic theory of gases was stimulated by his success in the similar problem of Saturn's rings. It dates from 1860, when he used a statistical treatment to express the wide range of velocities (speeds and the directions of the speeds) that the molecules in a quantity of gas must inevitably possess. He arrived at a formula to express the distribution of velocity in gas molecules, relating it to temperature. He showed that gases store heat in the motion of their molecules, so the molecules in a gas will speed up as the gas's temperature increases. Maxwell then applied his theory with some success to viscosity (how much a gas resists movement), diffusion (how gas molecules move from an area of higher concentration to an area of lower concentration), and other properties of gases that depend on the nature of the molecules' motion.
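
In modern notation, Maxwell's result for the fraction of molecules with speeds near v is usually written:

f(v) = 4π (m/2πkT)^3/2 v² exp(-mv²/2kT),

where m is the mass of a molecule, T the temperature, and k the Boltzmann constant; raising T shifts the distribution toward higher speeds, exactly as described above.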

Maxwell's kinetic theory did not fully explain heat conduction (how heat travels through a gas). Austrian physicist Ludwig Boltzmann modified Maxwell’s theory in 1868, resulting in the Maxwell-Boltzmann distribution law, showing the number of particles (n) having an energy (E) in a system of particles in thermal equilibrium. It has the form:

n = n0 exp(-E/kT),

where n0 is the number of particles having the lowest energy, k the Boltzmann constant, and T the thermodynamic temperature.

If the particles can only have certain fixed energies, such as the energy levels of atoms, the formula gives the number ni of particles having energy Ei above the ground-state energy. In certain cases several distinct states may have the same energy, and the formula then becomes: ni = gi n0 exp(-Ei/kT),

where gi is the statistical weight of the level of energy Ei, i.e., the number of states having energy Ei. The distribution of energies obtained by the formula is called a Boltzmann distribution.
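
A short sketch in Python shows how the formula weights level populations at a given temperature (the energy levels and statistical weights below are invented for illustration):

import math

k = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0         # temperature, K

levels = [0.0, 2e-21, 4e-21]  # energies E_i above the ground state, J
weights = [1, 1, 2]           # statistical weights g_i (assumed)

# Population of each level relative to the ground state, n_i / n_0
for E, g in zip(levels, weights):
    print(E, g * math.exp(-E / (k * T)))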

Maxwell's thermodynamic relations and Boltzmann's formulation contributed to a succession of refinements of kinetic theory, which proved fully applicable to all properties of gases. It also led Maxwell to an accurate estimate of the size of molecules and to a method of separating gases in a centrifuge. The kinetic theory was derived using statistics, so it also revised opinions on the validity of the second law of thermodynamics, which states that heat cannot flow from a colder to a hotter body of its own accord. In the case of two connected containers of gases at the same temperature, it is statistically possible for the molecules to diffuse so that the faster-moving molecules all concentrate in one container while the slower molecules gather in the other, making the first container hotter and the second colder. Maxwell conceived this hypothesis, which is known as Maxwell's demon. Although this event is very unlikely, it is possible, and the second law is therefore not absolute, but highly probable.

Maxwell is generally considered the greatest theoretical physicist of the 1800s. He combined a rigorous mathematical ability with great insight, which enabled him to make brilliant advances in the two most important areas of physics at that time. In building on Faraday's work to discover the electromagnetic nature of light, Maxwell not only explained electromagnetism but also paved the way for the discovery and application of the whole spectrum of electromagnetic radiation that has characterized modern physics. Physicists now know that this spectrum also includes radio, infrared, ultraviolet, and X-ray waves, to name a few. In developing the kinetic theory of gases, Maxwell gave the final proof that the nature of heat resides in the motion of molecules.

Maxwell's famous equations, devised in 1864, use mathematics to explain the interaction between electric and magnetic fields. His work demonstrated the principles behind electromagnetic waves, which are created when electric and magnetic fields oscillate simultaneously. Maxwell realized that light was a form of electromagnetic energy, but he also thought that the complete electromagnetic spectrum must include many other forms of waves as well.
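
In the modern vector notation (a later reformulation, due largely to Oliver Heaviside, rather than Maxwell's original form), the four equations read:

∇ · E = ρ/ε0
∇ · B = 0
∇ × E = -∂B/∂t
∇ × B = μ0J + μ0ε0 ∂E/∂t

where E and B are the electric and magnetic fields, ρ the charge density, J the current density, and ε0 and μ0 the electric and magnetic constants.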

With the discovery of radio waves by German physicist Heinrich Hertz in 1888 and X rays by German physicist Wilhelm Roentgen in 1895, Maxwell’s ideas were proved correct. In 1897 British physicist Sir Joseph J. Thomson discovered the electron, a subatomic particle with a negative charge. This discovery countered the long-held notion that atoms were the basic unit of matter.

As in chemistry, these 19th-century discoveries in physics proved to have immense practical value. No one was more adept at harnessing them than American physicist and prolific inventor Thomas Edison. Working from his laboratories in Menlo Park, New Jersey, Edison devised the carbon-granule microphone in 1877, which greatly improved the recently invented telephone. He also invented the phonograph, the electric light bulb, several kinds of batteries, and the electric meter. Edison was granted more than 1,000 patents for electrical devices, a phenomenal feat for a man who had no formal schooling.

In the earth sciences, the 19th century was a time of controversy, with scientists debating Earth's age. Estimates ranged from less than 100,000 years to several hundred million years. In astronomy, greatly improved optical instruments enabled important discoveries to be made. The first observation of an asteroid, Ceres, took place in 1801. Astronomers had long noticed that Uranus exhibited an unusual orbit. French astronomer Urbain Jean Joseph Leverrier predicted that another planet nearby caused Uranus's odd orbit. Using mathematical calculations, he narrowed down where such a planet would be located in the sky. In 1846, with the help of German astronomer Johann Galle, Leverrier discovered Neptune. The Irish astronomer William Parsons, the third Earl of Rosse, became the first person to see the spiral form of galaxies beyond our own. He did this with the Leviathan, a 183-cm (72-in) reflecting telescope built on the grounds of his estate in Parsonstown (now Birr), Ireland, in the 1840s. His observations were hampered by Ireland's damp and cloudy climate, but his gigantic telescope remained the world's largest for more than 70 years.

In the 19th century the study of microorganisms became increasingly important, particularly after French biologist Louis Pasteur revolutionized medicine by correctly deducing that some microorganisms are involved in disease. In the 1880's Pasteur devised methods of immunizing people against diseases by deliberately treating them with weakened forms of the disease-causing organisms themselves. Pasteur’s vaccine against rabies was a milestone in the field of immunization, one of the most effective forms of preventive medicine the world has yet seen. In the area of industrial science, Pasteur invented the process of pasteurization to help prevent the spread of disease through milk and other foods.

Pasteur’s work on fermentation and spontaneous generation had considerable implications for medicine, because he believed that the origin and development of disease are analogous to the origin and process of fermentation. That is, disease arises from germs attacking the body from outside, just as unwanted microorganisms invade milk and cause fermentation. This concept, called the germ theory of disease, was strongly debated by physicians and scientists around the world. One of the main arguments against it was the contention that the role germs played during the course of disease was secondary and unimportant; the notion that tiny organisms could kill vastly larger ones seemed ridiculous to many people. Pasteur’s studies convinced him that he was right, however, and in the course of his career he extended the germ theory to explain the causes of many diseases.

Pasteur also determined the natural history of anthrax, a fatal disease of cattle. He proved that anthrax is caused by a particular bacillus and suggested that animals could be given anthrax in a mild form by vaccinating them with attenuated (weakened) bacilli, thus providing immunity from potentially fatal attacks. To prove his theory, Pasteur began by vaccinating 25 sheep; a few days later he inoculated these and 25 unvaccinated sheep with an especially virulent culture, and he left 10 sheep untreated. He predicted that the unvaccinated 25 sheep would all perish, and he concluded the experiment dramatically by showing, to a sceptical crowd, the carcasses of the 25 sheep lying side by side.

Pasteur spent the rest of his life working on the causes of various diseases, including septicaemia, cholera, diphtheria, fowl cholera, tuberculosis, and smallpox - and their prevention by means of vaccination. He is best known for his investigations concerning the prevention of rabies, otherwise known in humans as hydrophobia. After experimenting with the saliva of animals suffering from this disease, Pasteur concluded that the disease rests in the nerve centres of the body; when an extract from the spinal column of a rabid dog was injected into the bodies of healthy animals, symptoms of rabies were produced. By studying the tissues of infected animals, particularly rabbits, Pasteur was able to develop an attenuated form of the virus that could be used for inoculation.

In 1885, a young boy and his mother arrived at Pasteur’s laboratory; the boy had been bitten badly by a rabid dog, and Pasteur was urged to treat him with his new method. At the end of the treatment, which lasted ten days, the boy was being inoculated with the most potent rabies virus known; he recovered and remained healthy. Since that time, thousands of people have been saved from rabies by this treatment.

Pasteur's research on rabies resulted, in 1888, in the founding of a special institute in Paris for the treatment of the disease. This became known as the Institut Pasteur, and it was directed by Pasteur himself until he died. (The institute still flourishes and is one of the most important centres in the world for the study of infectious diseases and other subjects related to microorganisms, including molecular genetics.) By the time of his death in Saint-Cloud on September 28, 1895, Pasteur had long since become a national hero and had been honoured in many ways. He was given a state funeral at the Cathedral of Notre-Dame, and his body was placed in a permanent crypt in his institute.

Also during the 19th century, the Austrian monk Gregor Mendel laid the foundations of genetics, although his work, published in 1866, was not recognized until after the century had closed. Nevertheless, the British scientist Charles Darwin towers above all other scientists of the 19th century. His publication of On the Origin of Species in 1859 marked a major turning point for both biology and human thought. His theory of evolution by natural selection (independently and simultaneously developed by British naturalist Alfred Russel Wallace) initiated a violent controversy that has not yet subsided. Particularly controversial was Darwin's theory that humans resulted from a long process of biological evolution from apelike ancestors. The greatest opposition to Darwin's ideas came from those who believed that the Bible was an exact and literal statement of the origin of the world and of humans. Although the public initially castigated Darwin's ideas, by the late 1800s most biologists had accepted that evolution occurred, although not all agreed on the mechanism, known as natural selection, that Darwin proposed.

In the 20th century, scientists achieved spectacular advances in the fields of genetics, medicine, social sciences, technology, and physics.

At the beginning of the 20th century, the life sciences entered a period of rapid progress. Mendel's work in genetics was rediscovered in 1900, and by 1910 biologists had become convinced that genes are located in chromosomes, the threadlike structures that contain proteins and deoxyribonucleic acid (DNA). During the 1940's American biochemists discovered that DNA taken from one kind of bacterium could influence the characteristics of another. From these experiments, it became clear that DNA is the chemical that makes up genes and is thus the key to heredity.

After American biologist James Watson and British biophysicist Francis Crick established the structure of DNA in 1953, geneticists became able to understand heredity in chemical terms. Since then, progress in this field has been astounding. Scientists have identified the complete genome, or genetic catalogue, of the human body. In many cases, scientists now know how individual genes become activated and what effects they have in the human body. Genes can now be transferred from one species to another, sidestepping the normal processes of heredity and creating hybrid organisms that are unknown in the natural world.

At the turn of the 20th century, Dutch physician Christiaan Eijkman showed that disease can be caused not only by microorganisms but by a dietary deficiency of certain substances now called vitamins. In 1909 German bacteriologist Paul Ehrlich introduced the world's first bactericide, a chemical designed to kill specific kinds of bacteria without killing the patient's cells as well. Following the discovery of penicillin in 1928 by British bacteriologist Sir Alexander Fleming, antibiotics joined medicine's chemical armoury, making the fight against bacterial infection almost a routine matter. Antibiotics cannot act against viruses, but vaccines have been used to great effect to prevent some of the deadliest viral diseases. Smallpox, once a worldwide killer, was completely eradicated by the late 1970's, and in the United States the number of polio cases dropped from 38,000 in the 1950's to less than 10 a year by the 21st century. By the middle of the 20th century scientists believed they were well on the way to treating, preventing, or eradicating many of the most deadly infectious diseases that had plagued humankind for centuries. Nevertheless, by the 1980's the medical community's confidence in its ability to control infectious diseases had been shaken by the emergence of new types of disease-causing microorganisms. New cases of tuberculosis developed, caused by bacteria strains that were resistant to antibiotics. New, deadly infections for which there was no known cure also appeared, including the viruses that cause haemorrhagic fever and the human immunodeficiency virus (HIV), the cause of acquired immunodeficiency syndrome.

In other fields of medicine, the diagnosis of disease has been revolutionized by the use of new imaging techniques, including magnetic resonance imaging and computed tomography. Scientists were also on the verge of success in curing some diseases using gene therapy, in which the insertion of a normal or genetically altered gene into a patient's cells replaces nonfunctional or missing genes.

Improved drugs and new tools have made surgical operations that were once considered impossible now routine. For instance, drugs that suppress the immune system enable the transplant of organs or tissues with a reduced risk of rejection. Endoscopy permits the diagnosis and surgical treatment of a wide variety of ailments using minimally invasive surgery. Advances in high-speed fiberoptic connections permit surgery on a patient using robotic instruments controlled by surgeons at another location. Known as 'telemedicine', this form of medicine makes it possible for skilled physicians to treat patients in remote locations or places that lack medical help.

In the 20th century the social sciences emerged from relative obscurity to become prominent fields of research. Austrian physician Sigmund Freud founded the practice of psychoanalysis, creating a revolution in psychology that led him to be called the ‘Copernicus of the mind.’ In 1948 the American biologist Alfred Kinsey published Sexual Behaviour in the Human Male, which proved to be one of the best-selling scientific works of all time. Although criticized for his methodology and conclusions, Kinsey succeeded in making human sexuality an acceptable subject for scientific research.

The 20th century also brought dramatic discoveries in the field of anthropology, with new fossil finds helping to piece together the story of human evolution. A completely new and surprising source of anthropological information became available from studies of the DNA in mitochondria, cell structures that provide energy to fuel the cell’s activities. Mitochondrial DNA has been used to track certain genetic diseases and to trace the ancestry of a variety of organisms, including humans.

In the field of communications, Italian electrical engineer Guglielmo Marconi sent his first radio signal across the Atlantic Ocean in 1901. American inventor Lee De Forest invented the triode, or vacuum tube, in 1906. The triode eventually became a key component in nearly all early radio, radar, television, and computer systems. In the mid-1920's Scottish engineer John Logie Baird developed the Baird Televisor, a primitive television that provided the first transmission of a recognizable moving image. In the 1920's and 1930's American electronic engineer Vladimir Kosma Zworykin significantly improved the television's picture and reception. In 1935 British physicist Sir Robert Watson-Watt used reflected radio waves to locate aircraft in flight. Radar signals have since been reflected from the Moon, planets, and stars to learn their distance from Earth and to track their movements.

In 1947 American physicists John Bardeen, Walter Brattain, and William Shockley invented the transistor, an electronic device used to control or amplify an electrical current. Transistors are much smaller and far less expensive than triodes, require less power to operate, and are considerably more reliable. Since their first commercial use in hearing aids in 1952, transistors have replaced triodes in virtually all applications.

During the 1950's and early 1960's minicomputers were developed using transistors rather than triodes. Earlier computers, such as the electronic numerical integrator and computer (ENIAC), first introduced in 1946 by American physicist John W. Mauchly and American electrical engineer John Presper Eckert, Jr., used as many as 18,000 triodes and filled a large room. However, the transistor initiated a trend toward microminiaturization, in which individual electronic circuits can be reduced to microscopic size. This drastically reduced the computer's size, cost, and power requirements and eventually enabled the development of electronic circuits with processing speeds measured in billionths of a second.

Further miniaturization led in 1971 to the first microprocessor - a computer on a chip. When combined with other specialized chips, the microprocessor becomes the central arithmetic and logic unit of a computer smaller than a portable typewriter. With their small size and a price less than that of a used car, today's personal computers are many times more powerful than the physically huge, multimillion-dollar computers of the 1950's. Once used only by large businesses, computers are now used by professionals, small retailers, and students to perform a wide variety of everyday tasks, such as keeping data on clients, tracking budgets, and writing school reports. People also use computers to communicate with each other over worldwide communications networks, such as the Internet and the World Wide Web, to send and receive E-mail, to shop, or to find information on just about any subject.

During the early 1950's public interest in space exploration developed. The focal event that opened the space age was the International Geophysical Year from July 1957 to December 1958, during which hundreds of scientists around the world coordinated their efforts to measure the Earth’s near-space environment. As part of this study, both the United States and the Soviet Union announced that they would launch artificial satellites into orbit for nonmilitary space activities.

When the Soviet Union launched the first Sputnik satellite in 1957, the feat spurred the United States to intensify its own space exploration efforts. In 1958 the National Aeronautics and Space Administration (NASA) was founded for the purpose of developing human spaceflight. Throughout the 1960's NASA experienced its greatest growth. Among its achievements, NASA designed, manufactured, tested, and eventually used the Saturn rocket and the Apollo spacecraft for the first manned landing on the Moon in 1969. In the 1960's and 1970's, NASA also developed the first robotic space probes to explore the planets Mercury, Venus, and Mars. The success of the Mariner probes paved the way for the unmanned exploration of the outer planets in Earth's solar system.

In the 1970's through 1990's, NASA focussed its space exploration efforts on a reusable space shuttle, which was first deployed in 1981. In 1998 the space shuttle, along with its Russian counterpart known as Soyuz, became the workhorses that enabled the construction of the International Space Station.

In 1900 the German physicist Max Planck proposed the then sensational idea that energy is not infinitely divisible but is always given off in set amounts, or quanta. Five years later, German-born American physicist Albert Einstein successfully used quanta to explain the photoelectric effect, which is the release of electrons when metals are bombarded by light. This, together with Einstein's special and general theories of relativity, challenged some of the most fundamental assumptions of the Newtonian era.
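
In modern notation, each quantum of radiation of frequency ν carries energy:

E = hν,

where h is the Planck constant; in Einstein's account of the photoelectric effect, the maximum kinetic energy of an ejected electron is hν - W, where W is the energy needed to free an electron from the metal.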

Unlike the laws of classical physics, quantum theory deals with events that occur on the smallest of scales. Quantum theory explains how subatomic particles form atoms, and how atoms interact when they combine to form chemical compounds. Quantum theory deals with a world where the attributes of any single particle can never be completely known - an idea known as the uncertainty principle, put forward by the German physicist Werner Heisenberg in 1927. The principle states that the product of the uncertainty in the measured value of a component of momentum (Δpx) and the uncertainty in the corresponding co-ordinate (Δx) is of the same order of magnitude as the Planck constant. In its most precise form:

Δpx · Δx ≥ h/4π

where Δx represents the root-mean-square value of the uncertainty. For most purposes one can assume:

Δpx · Δx = h/2π

The principle can be derived exactly from quantum mechanics, a physical theory that grew out of Planck's quantum theory and deals with the mechanics of atomic and related systems in terms of quantities that can be measured. Quantum mechanics has several mathematical forms, including 'wave mechanics' (Schrödinger) and 'matrix mechanics' (Born and Heisenberg), all of which are equivalent.

Nonetheless, it is most easily understood as a consequence of the fact that any measurement of a system must disturb the system under investigation, with a resulting lack of precision in measurement. For example, if it were possible to see an electron and thus measure its position, photons would have to be reflected from the electron. If a single photon could be used and detected with a microscope, the collision between the electron and photon would change the electron's momentum (the Compton effect); as a result, the wavelength of the photon is increased by an amount Δλ, where:

Δλ = (2h/m0c) sin²(½φ).

This is the Compton equation, where h is the Planck constant, m0 the rest mass of the particle, c the speed of light, and φ the angle between the directions of the incident and scattered photon. The quantity h/m0c is known as the Compton wavelength, symbol λC, which for an electron is equal to 0.002 43 nm.
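
A quick numerical check in Python (using standard values of the constants) reproduces the electron's Compton wavelength and shows that the shift for a photon scattered through 90° equals λC:

import math

h = 6.62607015e-34  # Planck constant, J s
m0 = 9.1093837e-31  # electron rest mass, kg
c = 2.99792458e8    # speed of light, m/s

lambda_C = h / (m0 * c)  # Compton wavelength of the electron
phi = math.radians(90)   # scattering angle
delta_lambda = (2 * h / (m0 * c)) * math.sin(phi / 2) ** 2

print(lambda_C * 1e9)      # ~0.00243 nm, as quoted above
print(delta_lambda * 1e9)  # equal to the Compton wavelength at 90 degrees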

A similar relationship applies to the determination of energy and time, thus:

ΔE · Δt ≥ h/4π.

The effects of the uncertainty principle are not apparent with large systems because of the small size of h. However, the principle is of fundamental importance in the behaviour of systems on the atomic scale. For example, the principle explains the inherent width of spectral lines: if the lifetime of an atom in an excited state is very short, there is a large uncertainty in its energy, and the line resulting from a transition is broad.
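
The order of magnitude is easy to estimate; assuming, purely for illustration, an excited-state lifetime of 10 nanoseconds:

import math

h = 6.62607015e-34  # Planck constant, J s
lifetime = 1e-8     # assumed excited-state lifetime, s

# Minimum energy spread from the relation ΔE · Δt >= h/4π,
# and the corresponding width of the spectral line in frequency
delta_E = h / (4 * math.pi * lifetime)
delta_nu = delta_E / h

print(delta_E)   # ~5.3e-27 J
print(delta_nu)  # ~8e6 Hz - the natural linewidth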

One consequence of the uncertainty principle is that it is impossible fully to predict the behaviour of a system and the macroscopic principle of causality cannot apply at the atomic level. Quantum mechanics gives a statistical description of the behaviour of physical systems.

Nevertheless, while there is uncertainty on the subatomic level, quantum physics successfully predicts the overall outcome of subatomic events, a fact that firmly relates it to the macroscopic world, that is, the one in which we live.

In 1934 Italian-born American physicist Enrico Fermi began a series of experiments in which he used neutrons (subatomic particles without an electric charge) to bombard atoms of various elements, including uranium. The neutrons combined with the nuclei of the uranium atoms to produce what he thought were elements heavier than uranium, known as transuranium elements. In 1939 other scientists demonstrated that in these experiments Fermi had not formed heavier elements, but instead had achieved the splitting, or fission, of the uranium atom's nucleus. These early experiments led to the development of nuclear fission as both an energy source and a weapon.

These fission studies, coupled with the development of particle accelerators in the 1950's, initiated a long and remarkable journey into the nature of subatomic particles that continues today. Far from being indivisible, atoms are now known to be made up of 12 fundamental particles known as quarks and leptons, which combine in different ways to make all the kinds of matter currently known.

Advances in particle physics have been closely linked to progress in cosmology. From the 1920's onward, when the American astronomer Edwin Hubble showed that the universe is expanding, cosmologists have sought to rewind the clock and establish how the universe began. Today, most scientists believe that the universe started with a cosmic explosion some time between 10 and 20 billion years ago. However, the exact sequence of events surrounding its birth, and its ultimate fate, are still matters of ongoing debate.

Beyond their assimilation into the paradigms of science, Descartes posited the existence of two categorically different domains of existence for immaterial ideas - the res extensa and the res cogitans, or the 'extended substance' and the 'thinking substance.' Descartes defined the extended substance as the realm of physical reality, within which primary mathematical and geometrical forms reside, and the thinking substance as the realm of human subjective reality. Given that Descartes distrusted the information from the senses to the point of doubting the perceived results of repeatable scientific experiments, how did he conclude that our knowledge of the mathematical ideas residing only in mind or in human subjectivity was accurate, much less the absolute truth? He did so by making a leap of faith - God constructed the world, said Descartes, in accordance with the mathematical ideas that our minds are capable of uncovering in their pristine essence. The truths of classical physics, as Descartes viewed them, were quite literally 'revealed' truths, and it was this seventeenth-century metaphysical presupposition that became in the history of science what we term the 'hidden ontology of classical epistemology.'

While classical epistemology would serve the progress of science very well, it also presented us with a terrible dilemma about the relationship between 'mind' and the 'world'. If there is no real or necessary correspondence between non-mathematical ideas in subjective reality and external physical reality, how do we know that the world in which 'we live, and love, and die' actually exists? Descartes's resolution of this dilemma took the form of an exercise. He asked us to direct our attention inward and to divest our consciousness of all awareness of external physical reality. If we do so, he concluded, the real existence of human subjective reality could be confirmed.

As it turned out, this resolution was considerably more problematic and oppressive than Descartes could have imagined. 'I think, therefore I am' may be a marginally persuasive way of confirming the real existence of the thinking self. However, the understanding of physical reality that obliged Descartes and others to doubt the existence of this self implied that the separation between the subjective world, or the world of life, and the real world of physical reality was 'absolute.'

Our proposed new understanding of the relationship between mind and world is framed within the larger context of the history of mathematical physics, the origins and extensions of the classical view of the foundations of scientific knowledge, and the various ways that physicists have attempted to obviate previous challenges to the efficacy of classical epistemology. This serves as background for a new understanding of the relationship between parts and wholes in quantum physics, as well as for a similar view of that relationship that has emerged in the so-called 'new biology' and in recent studies of the evolution of modern humans.

At the end of this arduous journey lie two conclusions. First, there is no basis in contemporary physics or biology for believing in the stark Cartesian division between mind and world, which some have rather aptly described as 'the disease of the Western mind'. Second, there is a new basis for dialogue between two cultures that are now badly divided and very much in need of an enlarged sense of common understanding and shared purpose. Let us briefly consider the legacy in Western intellectual life of the stark division between mind and world sanctioned by classical physics and formalized by Descartes.

The first scientific revolution of the seventeenth century freed Western civilization from the paralysing and demeaning forces of superstition, laid the foundations for rational understanding and control of the processes of nature, and ushered in an era of technological innovation and progress that provided untold benefits for humanity. But as classical physics progressively dissolved the distinction between heaven and earth and united the universe in a shared and communicable frame of knowledge, it presented us with a view of physical reality that was totally alien to the world of everyday life.

Descartes quickly realized that there was nothing in this view of nature that could explain or provide a foundation for the mental, or for all that we know from direct experience as distinctly human. In a mechanistic universe, he said, there is no privileged place or function for mind, and the separation between mind and matter is absolute. Descartes was also convinced, however, that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent analytic geometry.

A scientific understanding of these ideas could be derived, said Descartes, with the aid of precise deduction, and he also claimed that the contours of physical reality could be laid out in three-dimensional coordinates. Following the publication of Isaac Newton’s Principia Mathematica in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. The dream that the entire physical world could be known and mastered through the extension and refinement of mathematical theory became the central feature and guiding principle of scientific knowledge.

Descartes’s theory of knowledge starts with the quest for certainty, for an indubitable starting-point or foundation on the basis alone of which progress is possible. This method of investigating the extent of knowledge and its basis in reason or experience attempts to put knowledge upon a secure foundation by first inviting us to suspend judgement on any proposition whose truth can be doubted, even as a bare possibility. The standards of acceptance are gradually raised as we are asked to doubt the deliverances of memory, the senses, and even reason, all of which are in principle capable of letting us down. The process is eventually dramatized in the figure of the evil demon, or malin génie, whose aim is to deceive us, so that our senses, memories, and reasonings lead us astray. The task then becomes one of finding a demon-proof point of certainty, and Descartes produces this in the famous ‘Cogito ergo sum’: ‘I think, therefore I am’. It is on this slender basis that the correct use of our faculties has to be re-established, but it seems as though Descartes has denied himself any materials to use in reconstructing the edifice of knowledge. He has a basis, but no way of building on it without invoking principles that will not be demon-proof, and so will not meet the standards he had apparently set himself. It is possible to interpret him as using ‘clear and distinct ideas’ to prove the existence of God, whose benevolence then justifies our use of clear and distinct ideas (‘God is no deceiver’): this is the notorious Cartesian circle. Descartes’s own attitude to this problem is not quite clear; at times he seems more concerned with providing a stable body of knowledge that our natural faculties will endorse, rather than one that meets the more severe standards with which he starts out. For example, in the second set of Replies he shrugs off the possibility of ‘absolute falsity’ of our natural system of belief, in favour of our right to retain ‘any conviction so firm that it is quite incapable of being destroyed’. The need to add such natural belief to anything certified by reason eventually became the cornerstone of Hume’s philosophy, and the basis of most twentieth-century reactions to the method of doubt.

In his own time Descartes’s conception of the entirely separate substance of the mind was recognized to give rise to insoluble problems about the nature of the causal connection between mind and body. One response, occasionalism, reserves causal efficacy to the action of God: events in the world merely form occasions on which God acts so as to bring about the events normally accompanying them, and thought of as their effects. Although the position is associated especially with Malebranche, it is much older, having been developed by Islamic philosophers practising kalam, the process of adducing philosophical proofs to justify elements of religious doctrine. Kalam plays a parallel role in Islam to that which scholastic philosophy played in the development of Christianity, and its practitioners were known as the Mutakallimun. Cartesian dualism also gives rise to the problem, insoluble in its own terms, of ‘other minds’. Descartes’s notorious denial that nonhuman animals are conscious is a stark illustration of the problem.

In his conception of matter Descartes also gives preference to rational cogitation over anything derived from the senses. Since we can conceive of the nature of a ‘ball of wax’ surviving changes to its sensible qualities, matter is not an empirical concept, but eventually an entirely geometrical one, with extension and motion as its only physical nature. Descartes’s thought here is reflected in Leibniz’s view, as held later by Russell, that the qualities of sense experience have no resemblance to qualities of things, so that knowledge of the external world is essentially knowledge of structure rather than of filling. On this basis Descartes erects a remarkable physics. Since matter is in effect the same as extension, there can be no empty space or ‘void’; and since there is no empty space, motion is not a question of occupying previously empty space, but is to be thought of in terms of vortices (like the motion of a liquid).

Although the structure of Descartes’s epistemology, theory of mind, and theory of matter has been rejected many times, their relentless exposure of the hardest issues, their exemplary clarity, and even their initial plausibility all contrive to make him the central point of reference for modern philosophy.

It seems, nonetheless, that the radical separation between mind and nature formalized by Descartes served over time to allow scientists to concentrate on developing mathematical descriptions of matter as pure mechanism, in the absence of any concerns about its spiritual dimension or ontological foundations. In the meantime, attempts to rationalize, reconcile, or eliminate Descartes’s stark division between mind and matter became perhaps the most central feature of Western intellectual life.

Philosophers like John Locke, Thomas Hobbes, and David Hume tried to articulate some basis for linking the mathematically describable motions of matter with linguistic representations of external reality in the subjective space of mind. Jean-Jacques Rousseau reified nature as the ground of human consciousness in a state of innocence and proclaimed that “Liberty, Equality, Fraternity” are the guiding principles of this consciousness. Rousseau also deified the idea of the ‘general will’ of the people to achieve these goals and declared that those who do not conform to this will are social deviants.

Rousseau’s attempt to posit a ground for human consciousness by reifying nature was revived in a somewhat different form by the nineteenth-century Romantics in Germany, England, and the United States. Goethe and Friedrich Schelling proposed a natural philosophy premised on ontological monism (the idea that God, man, and nature are grounded in an indivisible spiritual Oneness) and argued for the reconciliation of mind and matter with an appeal to sentiment, mystical awareness, and quasi-scientific musing. In Goethe’s attempt to wed mind and matter, nature became a mindful agency that ‘loves illusion’, shrouds man in mist, ‘presses him to her heart’, and punishes those who fail to see the ‘light’. Schelling, in his version of cosmic unity, argued that scientific facts were at best partial truths and that the mindful creative spirit that unifies mind and matter is progressively moving toward self-realization and undivided wholeness.

Descartes believed there are two basic kinds of things in the world, a belief known as substance dualism. For Descartes, the principles of existence for these two groups of things—bodies and minds—are completely different from one another: Bodies exist by being extended in space, while minds exist by being conscious. According to Descartes, nothing can be done to give a body thought and consciousness. No matter how we shape a body or combine it with other bodies, we cannot turn the body into a mind, a thing that is conscious, because being conscious is not a way of being extended.

For Descartes, a person consists of a human body and a human mind causally interacting with one another. For example, the intentions of a human being may cause that person’s limbs to move. In this way, the mind can affect the body. In addition, the sense organs of a human being may be affected by light, pressure, or sound, external sources which in turn affect the brain, affecting mental states. Thus the body may affect the mind. Exactly how mind can affect body, and vice versa, is a central issue in the philosophy of mind, and is known as the mind-body problem. According to Descartes, this interaction of mind and body is peculiarly intimate. Unlike the interaction between a pilot and his ship, the connection between mind and body more closely resembles two substances that have been thoroughly mixed together.

The fatal flaw of pure reason is, of course, the absence of emotion, and purely rational explanations of the division between subjective reality and external reality had limited appeal outside the community of intellectuals. The figure most responsible for infusing our understanding of Cartesian dualism with emotional content was the philosopher of the death of God, Friedrich Nietzsche. After declaring that God and ‘divine will’ did not exist, Nietzsche reified the ‘essences’ of consciousness in the domain of subjectivity as the ground for individual ‘will’ and summarily dismissed all previous philosophical attempts to articulate the ‘will to truth’. The problem, claimed Nietzsche, is that earlier versions of the ‘will to truth’ disguise the fact that all alleged truths were arbitrarily created in the subjective reality of the individual and are expressions or manifestations of individual ‘will’.

In Nietzsche’s view, the separation between mind and matter is more absolute and total than had previously been imagined. Based on the assumption that there is no real or necessary correspondence between linguistic constructions of reality in human subjectivity and external reality, he declared that we are all locked in ‘a prison house of language’. The prison as he conceived it, however, was also a ‘space’ where the philosopher can examine the ‘innermost desires of his nature’ and articulate a new message of individual existence founded on will.

Those who fail to enact their existence in this space, says Nietzsche, are enticed into sacrificing their individuality on the nonexistent altars of religious beliefs and/or democratic or socialist ideals and become, therefore, members of the anonymous and docile crowd. Nietzsche also invalidated the knowledge claims of science in the examination of human subjectivity. Science, he said, not only exalts natural phenomena and favours reductionistic examinations of phenomena at the expense of mind; it also seeks to reduce mind to a mere material substance, and thereby to displace or subsume the separateness and uniqueness of mind with mechanistic descriptions that disallow any basis for the free exercise of individual will.

Nietzsche’s emotionally charged defence of intellectual freedom and his radical empowerment of mind as the maker and transformer of the collective fictions that shape human reality in a soulless mechanistic universe proved terribly influential on twentieth-century thought. Nietzsche sought to reinforce his view of the subjective character of scientific knowledge by appealing to an epistemological crisis over the foundations of logic and arithmetic that arose during the last three decades of the nineteenth century. Through a curious course of events, attempts by Edmund Husserl, a philosopher trained in higher mathematics and physics, to resolve this crisis resulted in a view of the character of human consciousness that closely resembled that of Nietzsche.

Friedrich Nietzsche is openly pessimistic about the possibility of knowledge: ‘We simply lack any organ for knowledge, for “truth”: we “know” (or believe or imagine) just as much as may be useful in the interests of the human herd, the species: and even what is called “utility” is ultimately also a mere belief, something imaginary and perhaps precisely that most calamitous stupidity of which we shall perish some day’ (The Gay Science).

This position is very radical. Nietzsche does not simply deny that knowledge, construed as the adequate representation of the world by the intellect, exists. He also refuses the pragmatist identification of knowledge and truth with usefulness: he writes that we think we know what we think is useful, and that we can be quite wrong about the latter.

Nietzsche’s view, his ‘perspectivism’, depends on his claim that there is no sensible conception of a world independent of human interpretation and to which interpretations would correspond if they were to constitute knowledge. He sums up this highly controversial position in The Will to Power: ‘Facts are precisely what there is not, only interpretations.’

It is often claimed that perspectivism is self-undermining. If the thesis that all views are interpretations is true then, it is argued, there is at least one view that is not an interpretation. If, on the other hand, the thesis is itself an interpretation, then there is no reason to believe that it is true, and it follows again that not every view is an interpretation.

But this refutation assumes that if a view, like perspectivism itself, is an interpretation, it is wrong. This is not the case. To call any view, including perspectivism, an interpretation is to say that it can be wrong, which is true of all views, and that is not a sufficient refutation. To show that perspectivism is actually false it is necessary to produce another view superior to it on specific epistemological grounds.

Perspectivism does not deny that particular views can be true. Like some versions of contemporary anti-realism, it attributes to specific approaches truth in relation to facts specified internally by those approaches themselves. But it refuses to envisage a single independent set of facts to be accounted for by all theories. Thus Nietzsche grants the truth of specific scientific theories; he does, however, deny that a scientific interpretation can possibly be ‘the only justifiable interpretation of the world’ (The Gay Science): neither the facts science addresses nor the methods it employs are privileged. Scientific theories serve the purposes for which they have been devised, but these have no priority over the many other purposes of human life. The existence of many purposes and needs relative to which the value of theories is established - another crucial element of perspectivism - is sometimes thought to imply a radical relativism according to which no standards for evaluating purposes and theories can be devised. This is correct only in that Nietzsche denies the existence of a single set of standards for determining epistemic value once and for all; he holds that specific views can be compared with and evaluated in relation to one another, and the ability to use criteria acceptable in particular circumstances does not presuppose the existence of criteria applicable in all. Agreement is therefore not always possible, since individuals may sometimes differ over the most fundamental issues dividing them.

But Nietzsche would not be troubled by this fact, which his opponents, too, have to confront; they only attempt, he would argue, to suppress it by insisting on the hope that all disagreements are in principle eliminable, even if our practice falls woefully short of the ideal. Nietzsche abandons that ideal. He considers irresoluble disagreement an essential part of human life.

Knowledge for Nietzsche is again material, but now based on desire and bodily needs more than on social refinements. Perspectives are to be judged not by their relation to the absolute but on the basis of their effects in a specific era. The possibility of any truth beyond such a local, pragmatic one becomes a problem in Nietzsche, since neither a noumenal realm nor an historical synthesis exists to provide an absolute criterion of adjudication for competing truth claims: what get called truths are simply beliefs that have been held for so long that we have forgotten their genealogy. In this Nietzsche reverses the Enlightenment dictum that truth is the way to liberation, suggesting that truth claims, in so far as they are considered absolute, foreclose debate and conceptual progress and cause rather than alleviate backwardness and unnecessary misery. Nietzsche moves back and forth, without resolution, between the positing of transhistorical truth claims, such as his claim about the will to power, and a kind of epistemic nihilism that calls into question not only the possibility of truth but the need and desire for it as well. But perhaps most importantly, Nietzsche introduces the notion that truth is a kind of human practice, a game whose rules are contingent rather than necessary. The evaluation of truth claims should be based on their strategic effects, not their ability to represent a reality conceived of as separate from and autonomous of human influence.

Perspectivism, for Nietzsche, is the view that all truth is truth from or within a particular perspective. The perspective may be a general human point of view, set by such things as the nature of our sensory apparatus, or it may be thought to be bound by culture, history, language, class or gender. Since there may be many perspectives, there are also different families of truth. The term is, of course, most frequently applied to Nietzsche’s own philosophy.

The best-known disciple of Husserl was Martin Heidegger, and the work of both figures greatly influenced that of the French atheistic existentialist Jean-Paul Sartre. The work of Husserl, Heidegger and Sartre became foundational to that of the principal architects of philosophical postmodernism: Jacques Lacan, Roland Barthes, Michel Foucault and Jacques Derrida. This direct linkage between the nineteenth-century crisis over epistemological foundations and the origins of philosophical postmodernism served to perpetuate the Cartesian two-world dilemma in an even more oppressive form.

In his main philosophical work, Being and Nothingness, Sartre examines the relationship between Being For-itself (consciousness) and Being In-itself (the non-conscious world). He rejects central tenets of the rationalist and empiricist traditions, calling the view that the mind or self is a thing or substance ‘Descartes’s substantialist illusion’, and claiming also that consciousness does not contain ideas or representations, which he calls idols invented by the psychologists. Sartre also attacks idealism in the forms associated with Berkeley and Kant, and concludes that his account of the relationship between consciousness and the world is neither realist nor idealist.

Sartre also discusses Being For-others, which comprises the aspects of experience pertaining to interactions with other minds. His views are subtle: roughly, he holds that one’s awareness of others is constituted by feelings of shame, pride, and so on.

Sartre’s rejection of ideas, and his denial of idealism, appear to commit him to direct realism in the theory of perception. This is not inconsistent with his claim to be neither realist nor idealist, since by ‘realist’ he means views which allow for the mutual independence or in-principle separability of mind and world. Against this Sartre emphasizes, after Heidegger, that perceptual experience has an active dimension, in that it is a way of interacting with and dealing with the world, rather than a way of merely contemplating it (‘activity, as spontaneous, unreflecting consciousness, constitutes a certain existential stratum in the world’). Consequently, he holds that experience is richer, and open to more aspects of the world, than empiricist writers customarily claim:

When I run after a streetcar . . . there is consciousness of-the-streetcar-having-to-be-overtaken, etc., . . . I am then plunged into the world of objects, it is they which constitute the unity of my consciousness, it is they which present themselves with values, with attractive and repellent qualities . . .

Relatedly, he insists that I experience material things as having certain potentialities-for-me (‘nothingness’). I see doors and bottles as openable, bicycles as ridable (these matters are linked ultimately to the doctrine of extreme existentialist freedom). Similarly, if my friend is not where I expect to meet her, then I experience her absence ‘as a real event’.

These phenomenological claims are striking and compelling, but Sartre pays insufficient attention to such things as illusions and hallucinations, which are normally cited as problems for direct realists. In his discussion of mental imagery, however, he describes the act of imaging as a ‘transformation’ of ‘psychic material’. This connects with his view that even a physical image such as a photograph of a tree does not figure as an object of consciousness when it is experienced as a tree-representation (rather than as a piece of coloured card). But even so, the fact remains that the photograph continues to contribute to the character of the experience. Given this, it is hard to see how Sartre avoids positing a mental analogue of a photograph for episodes of mental imaging, and harder still to reconcile this with his rejection of visual representations. It may be that he regards imaging as a debased and derivative form of perceptual knowledge, but this merely raises once more the issue of perceptual illusion and hallucination, and the problem of reconciling them with direct realism.

Much of Western religious and philosophical thought since the seventeenth century has sought to obviate this prospect with an appeal to ontology or to some conception of God or Being. Yet we continue to struggle, as philosophical postmodernism attests, with the terrible prospect raised by Nietzsche - we are locked in a prison house of our individual subjective realities in a universe that is as alien to our thought as it is to our desires. This universe may seem comprehensible and knowable in scientific terms, and science does seek in some sense, as Koyré puts it, to ‘find a place for everything.’ Nonetheless, the ghost of Descartes lingers in the widespread conviction that science does not provide a ‘place for man’ or for all that we know as distinctly human in subjective reality.

With The Gay Science (1882) Nietzsche began the crucial exploration of self-mastery, the relations between reason and power, and the revelation of the unconscious strivings after power that provide the actual energy for the apparent self-denial of the ascetic and the martyr. It was during this period that Nietzsche’s failed relationship with Lou Salomé precipitated the emotional crisis from which Also sprach Zarathustra (1883-5, trs. as Thus Spoke Zarathustra) signals a recovery. This work is frequently regarded as Nietzsche’s masterpiece. It was followed by Jenseits von Gut und Böse (1886, trs. as Beyond Good and Evil) and Zur Genealogie der Moral (1887, trs. as On the Genealogy of Morals).

In Thus Spake Zarathustra (1883-85), Friedrich Nietzsche introduced in eloquent poetic prose the concepts of the death of God, the superman, and the will to power. Vigorously attacking Christianity and democracy as moralities for the "weak herd," he argued for the "natural aristocracy" of the superman who, driven by the "will to power," celebrates life on earth rather than sanctifying it for some heavenly reward. Such a heroic man of merit has the courage to "live dangerously" and thus rise above the masses, developing his natural capacity for the creative use of passion.

Also known as radical theology, this movement flourished in the mid-1960s. As a theological movement it never attracted a large following, did not find a unified expression, and passed off the scene as quickly and dramatically as it had arisen. There is even disagreement as to who its major representatives were. Some identify two, and others three or four. Although small, the movement attracted attention because it was a spectacular symptom of the bankruptcy of modern theology and because it was a journalistic phenomenon. The very statement "God is dead" was tailor-made for journalistic exploitation. The representatives of the movement effectively used periodical articles, paperback books, and the electronic media. This movement gave expression to an idea that had been incipient in Western philosophy and theology for some time, the suggestion that the reality of a transcendent God at best could not be known and at worst did not exist at all. Philosopher Kant and theologian Ritschl denied that one could have a theoretical knowledge of the being of God. Hume and the empiricists for all practical purposes restricted knowledge and reality to the material world as perceived by the five senses. Since God was not empirically verifiable, the biblical world view was said to be mythological and unacceptable to the modern mind. Such atheistic existentialist philosophers as Nietzsche despaired even of the search for God; it was he who coined the phrase "God is dead" almost a century before the death of God theologians.

Mid-twentieth century theologians not associated with the movement also contributed to the climate of opinion out of which death of God theology emerged. Rudolf Bultmann regarded all elements of the supernaturalistic, theistic world view as mythological and proposed that Scripture be demythologized so that it could speak its message to the modern person.

Paul Tillich, an avowed anti-supernaturalist, said that the only nonsymbolic statement that could be made about God was that he was being itself. He is beyond essence and existence; therefore, to argue that God exists is to deny him. It is more appropriate to say God does not exist. At best Tillich was a pantheist, but his thought borders on atheism. Dietrich Bonhoeffer (whether rightly understood or not) also contributed to the climate of opinion with some fragmentary but tantalizing statements preserved in Letters and Papers from Prison. He wrote of the world and man "coming of age," of "religionless Christianity," of the "world without God," and of getting rid of the "God of the gaps" and getting along just as well as before. It is not always certain what Bonhoeffer meant, but if nothing else, he provided a vocabulary that later radical theologians could exploit.

It is clear, then, that as startling as the idea of the death of God was when proclaimed in the mid 1960s, it did not represent as radical a departure from recent philosophical and theological ideas and vocabulary as might superficially appear.

Just what was death of God theology? The answers are as varied as those who proclaimed God's demise. Since Nietzsche, theologians had occasionally used "God is dead" to express the fact that for an increasing number of people in the modern age God seems to be unreal. But the idea of God's death began to have special prominence in 1957 when Gabriel Vahanian published a book entitled God is Dead. Vahanian did not offer a systematic expression of death of God theology. Instead, he analysed those historical elements that contributed to the masses of people accepting atheism not so much as a theory but as a way of life. Vahanian himself did not believe that God was dead. But he urged that there be a form of Christianity that would recognize the contemporary loss of God and exert its influence through what was left. Other proponents of the death of God had the same assessment of God's status in contemporary culture, but were to draw different conclusions.

Thomas J J Altizer believed that God had actually died. But Altizer often spoke in exaggerated and dialectic language, occasionally with heavy overtones of Oriental mysticism. Sometimes it is difficult to know exactly what Altizer meant when he spoke in dialectical opposites such as "God is dead, thank God!" But apparently the real meaning of Altizer's belief that God had died is to be found in his belief in God's immanence. To say that God has died is to say that he has ceased to exist as a transcendent, supernatural being. Rather, he has become fully immanent in the world. The result is an essential identity between the human and the divine. God died in Christ in this sense, and the process has continued time and again since then. Altizer claims the church tried to give God life again and put him back in heaven by its doctrines of resurrection and ascension. But now the traditional doctrines about God and Christ must be repudiated because man has discovered after nineteen centuries that God does not exist. Christians must even now will the death of God by which the transcendent becomes immanent.

For William Hamilton the death of God describes the event many have experienced over the last two hundred years. They no longer accept the reality of God or the meaningfulness of language about him. Nontheistic explanations have been substituted for theistic ones. This trend is irreversible, and everyone must come to terms with the historical-cultural death of God. God's death must be affirmed and the secular world embraced as normative intellectually and good ethically. Indeed, Hamilton was optimistic about the world, because he was optimistic about what humanity could do and was doing to solve its problems.

Paul van Buren is usually associated with death of God theology, although he himself disavowed this connection. But his disavowal seems hollow in the light of his book The Secular Meaning of the Gospel and his article "Christian Education Post Mortem Dei." In the former he accepts empiricism and the position of Bultmann that the world view of the Bible is mythological and untenable to modern people. In the latter he proposes an approach to Christian education that does not assume the existence of God but does assume "the death of God" and that "God is gone."

Van Buren was concerned with the linguistic aspects of God's existence and death. He accepted the premise of empirical analytic philosophy that real knowledge and meaning can be conveyed only by language that is empirically verifiable. This is the fundamental principle of modern secularists and is the only viable option in this age. If only empirically verifiable language is meaningful, ipso facto all language that refers to or assumes the reality of God is meaningless, since one cannot verify God's existence by any of the five senses. Theism, belief in God, is not only intellectually untenable, it is meaningless. In The Secular Meaning of the Gospel van Buren seeks to reinterpret the Christian faith without reference to God. One searches the book in vain for even one clue that van Buren is anything but a secularist trying to translate Christian ethical values into that language game. There is a decided shift in van Buren's later book Discerning the Way, however.

In retrospect, it becomes clear that there was no single death of God theology, only death of God theologies. Their real significance was that modern theologies, by giving up the essential elements of Christian belief in God, had logically led to what were really antitheologies. When the death of God theologies passed off the scene, the commitment to secularism remained and manifested itself in other forms of secular theology in the late 1960s and the 1970s.

Nietzsche is unchallenged as the most insightful and powerful critic of the moral climate of the 19th century (and of what of it remains in ours). His exploration of unconscious motivation anticipated Freud. He is notorious for stressing the ‘will to power’ that is the basis of human nature, the ‘resentment’ that comes when it is denied its basis in action, and the corruptions of human nature encouraged by religions, such as Christianity, that feed on such resentment. But the powerful human being who escapes all this, the Übermensch, is not the ‘blond beast’ of later fascism: it is a human being who has mastered passion, risen above the senseless flux, and given creative style to his or her character. Nietzsche’s free spirits recognize themselves by their joyful attitude to eternal return. He frequently presents the creative artist rather than the warlord as his best exemplar of the type, but the disquieting fact remains that he seems to leave himself no words to condemn any uncaged beast of prey who finds his style by exerting repulsive power over others. This problem is not helped by Nietzsche’s frequently expressed misogyny, although in such matters the interpretation of his many-layered and ironic writings is not always straightforward. Similarly, such anti-Semitism as has been found in his work is balanced by an equally vehement denunciation of anti-Semitism, and an equal or greater dislike of the German character of his time.

Nietzsche’s current influence derives not only from his celebration of will, but more deeply from his scepticism about the notions of truth and fact. In particular, he anticipated many of the central tenets of postmodernism: an aesthetic attitude toward the world that sees it as a ‘text’; the denial of facts; the denial of essences; the celebration of the plurality of interpretations and of the fragmented self; as well as the downgrading of reason and the politicization of discourse. All awaited rediscovery in the late 20th century. Nietzsche also has the incomparable advantage over his followers of being a wonderful stylist, and his perspectivism is echoed in the shifting array of literary devices - humour, irony, exaggeration, aphorisms, verse, dialogue, parody - with which he explores human life and history.

Yet, as we have seen, the origins of the present division can be traced to the emergence of classical physics and the stark Cartesian division between mind and world. On the Cartesian view, mind and body are two separate substances; the self is, as it happens, associated with a particular body, but is self-subsisting and capable of independent existence, much like the ‘ego’ that we are tempted to imagine as a simple unique thing that makes up our essential identity. This dualism seemed sanctioned by classical physics. The tragedy of the Western mind, well represented in the work of a host of writers, artists, and intellectuals, is that the Cartesian division was perceived as incontrovertibly real.

Beginning with Nietzsche, those who wished to free the realm of the mental from the oppressive implications of the mechanistic world-view sought to undermine the alleged privileged character of the knowledge claims of physics with an attack on their epistemological authority. Husserl’s attempt, and failure, to save the classical view of correspondence by grounding the logic of mathematical systems in human consciousness not only resulted in a view of human consciousness that became characteristically postmodern; it also represents a direct link between the epistemological crisis over the foundations of logic and number in the late nineteenth century and the epistemological crisis occasioned by quantum physics beginning in the 1920s. The result was a set of disparate views on the existence of ontology and the character of scientific knowledge that fuelled the conflict between the two cultures.

If there were world enough and time enough, the conflict between the two cultures could be viewed as an interesting artifact of the richly diverse coordinative systems of higher education. Nevertheless, as the ecological crisis teaches us, the ‘world enough’ capable of sustaining the growing number of our life forms and the ‘time enough’ that remains to reduce and reverse the damage we are inflicting on this world are rapidly diminishing. We should therefore put an end to this absurd ‘betweenness’ and get on with the business of coordinating human knowledge in the interest of human survival, in a new age of enlightenment that could be far more humane and much more enlightened than any that has gone before.

There are now, nonetheless, significant advances in our understanding of the purposive mind. Cognitive science is an interdisciplinary approach to cognition that draws primarily on ideas from cognitive psychology, artificial intelligence, linguistics and logic. Some philosophers may be cognitive scientists; others concern themselves with the philosophy of cognitive psychology and cognitive science. Since the inauguration of cognitive science these disciplines have attracted much attention from certain philosophers of mind. This has changed the character of philosophy of mind, and there are areas where philosophical work on the nature of mind is continuous with scientific work. Yet the problems that make up this field - the nature of ‘thinking’ and of ‘mental properties’ - are those standardly and traditionally regarded as belonging to philosophy of mind rather than those that emerge from recent developments in cognitive science. The cognitive aspect of a sentence is what has to be understood in order to know what would make the sentence true or false; it is frequently identified with the truth condition of the sentence. Cognitive science itself is the scientific study of processes of awareness, thought, and mental organization, often by means of computer modelling or artificial intelligence research.

Whether a theory is scientific has to do only with its structure and the way it functions; just because a theory is scientific does not mean that the scientific community currently accredits it. Many theories, though technically scientific, have been rejected because the scientific evidence tells strongly against them.

Phenomenology - historically, the enquiry into the evolution of self-consciousness, developing from elementary sense experience to fully rational, free thought processes capable of yielding knowledge - is in its present sense associated with the work and school of Husserl. Following Brentano, Husserl realized that intentionality was the distinctive mark of consciousness, and saw in it a concept capable of overcoming traditional mind-body dualism. The study of consciousness, therefore, has two sides: a conscious experience can be regarded as an element in a stream of consciousness, but also as a representative of one aspect or ‘profile’ of an object. In spite of Husserl’s rejection of dualism, his belief that there is a subject-matter remaining after the epoché, or bracketing of the content of experience, associates him with the priority accorded to elementary experiences in the parallel doctrine of phenomenalism, and phenomenology has partly suffered from the eclipse of that approach to problems of experience and reality. Later phenomenologists such as Merleau-Ponty, however, do full justice to the world-involving nature of experience.

Phenomenological theories, in another sense, are empirical generalizations of the data of experience, of what is manifest in experience. More generally, the phenomenal aspects of things are the aspects that show themselves, rather than the theoretical aspects that are inferred or posited in order to account for them. Such theories merely describe the recurring processes of nature and do not refer to their causes; in the words of J. S. Mill, ‘objects are the permanent possibilities of sensation’. To inhabit a world of independent, external objects is, on this view, to be the subject of actual and possible orderly experiences. Espoused by Russell, the view issued in a programme of translating talk about physical objects and their locations into talk about possible experience. The attempt is widely supposed to have failed, and the priority the approach gives to experience has been much criticized. It is more common in contemporary philosophy to see experience as itself a construct from the actual way of the world, rather than the other way round.

Phenomenological theories are also called ‘scientific laws’, ‘physical laws’ and ‘natural laws’. Newton’s third law is one example: it says that every action has an equal and opposite reaction. ‘Explanatory theories’, by contrast, attempt to explain the observations rather than generalize them. Whereas laws are descriptions of empirical regularities, explanatory theories are conceptual constructions that explain why the data exist: atomic theory explains why we see certain observations, and the same could be said of DNA and relativity. Explanatory theories are particularly helpful in cases where the entities involved (like atoms, DNA . . . ) cannot be directly observed.
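To state the third law in modern notation (a standard textbook formulation, supplied here for illustration rather than drawn from the passage above): if body A exerts a force on body B, then B exerts on A a force equal in magnitude and opposite in direction,

\[
\vec{F}_{A \to B} \;=\; -\,\vec{F}_{B \to A}.
\]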

What is knowledge? How does knowledge get to have the content it has? The problem of defining knowledge in terms of true belief plus some favoured relation between the believer and the facts began with Plato, for whom knowledge is true belief plus a logos - an account of what it is that enables us to apprehend the principles and forms, i.e., an aspect of our own reasoning.

What makes a belief justified, and what measure of true belief is knowledge? According to most epistemologists, knowledge entails belief, so that one cannot know that such and such is the case unless one believes that it is. Nonetheless, there are arguments against all versions of the thesis that knowledge requires having a belief-like attitude toward the known. These arguments are given by philosophers who think that knowledge and belief, or some facsimile of it, are mutually incompatible (the incompatibility thesis), or by ones who say that knowledge does not entail belief, or vice versa, so that each may exist without the other, though the two may also coexist (the separability thesis). The incompatibility thesis hinges on the equation of knowledge with certainty, together with the assumption that when we believe in the truth of a claim we are not certain about its truth. Given that belief always involves uncertainty while knowledge never does, believing something would rule out knowing it. But there is no reason to grant that states of belief are always ones involving uncertainty: conscious beliefs clearly can involve confidence, and to suggest that we cease to believe things about which we are completely confident is bizarre.

A. D. Woozley (1953) defends a version of the separability thesis. Woozley’s version, which deals with psychological certainty rather than belief per se, is that knowledge can exist in the absence of confidence about the item known, although knowledge might also be accompanied by confidence. Woozley remarks that the test of whether I know something is ‘what I can do, where what I can do may include answering questions’. On the basis of this remark he suggests that even when people are unsure of the truth of a claim, they might know that the claim is true. We unhesitatingly attribute knowledge to people who give correct responses on examinations even if those people show no confidence in their answers. Woozley acknowledges, however, that it would be odd for those who lack confidence to claim knowledge: it would be peculiar to say, ‘I am unsure whether my answer is correct; still, I know it is correct.’ This tension Woozley explains using a distinction between conditions under which we are justified in making a claim (such as a claim to know something) and conditions under which the claim we make is true. While ‘I know such and such’ might be true even if I am unsure whether such and such holds, it would nonetheless be inappropriate for me to claim that I know such and such unless I were sure of the truth of my claim.

Colin Radford (1966) extends Woozley’s defence of the separability thesis. In Radford’s view, not only is knowledge compatible with a lack of certainty, it is also compatible with a complete lack of belief. He argues by example. In one example, Jean has forgotten that he learned some English history years prior, and yet he is able to give several correct responses to questions such as ‘When did the Battle of Hastings occur?’ Since he has forgotten that he ever learned English history, he considers his correct responses to be no more than guesses. Thus, when he says that the Battle of Hastings took place in 1066, he would deny having the belief that the Battle of Hastings took place in 1066.

Those who agree with Radford’s defence of the separability thesis will probably think of belief as an inner state that can be detected through introspection. That Jean lacks beliefs about English history is plausible on this Cartesian picture, since Jean does not find himself with any beliefs about English history when he seeks them out. One might criticize Radford, however, by rejecting the Cartesian view of belief. One could argue, on behaviourist grounds, that having beliefs is a matter of the way people are disposed to behave (and has not Radford already adopted a behaviourist conception of knowledge?). Since Jean gives the correct response when queried, a form of verbal behaviour, a behaviourist would be tempted to credit him with the belief that the Battle of Hastings occurred in 1066.

D. M. Armstrong (1973) takes a different tack against Radford. Jean does know that the Battle of Hastings took place in 1066; Armstrong grants Radford that point. In fact, Armstrong suggests that Jean believes that 1066 is not the date the Battle of Hastings occurred, for Armstrong equates the belief that such and such is just possible, but no more than just possible, with the belief that such and such is not the case. However, Armstrong insists, Jean also believes that the Battle did occur in 1066. After all, had Jean been mistaught that the Battle occurred in 1069, and had he forgotten being ‘taught’ this and subsequently ‘guessed’ that it took place in 1069, we would surely describe the situation as one in which Jean’s false belief about the Battle became a memory trace that was causally responsible for his guess. By parity of reasoning, in the actual case Jean’s true belief is the memory trace responsible for his correct guess. Thus while Jean consciously believes that the Battle did not occur in 1066, unconsciously he does believe it occurred in 1066. So, after all, Radford does not have a counterexample to the claim that knowledge entails belief.

Suppose that Jean’s memory had been sufficiently powerful to produce the relevant belief. As Radford says, Jean has every reason to suppose that his response is mere guesswork, and so he has every reason to consider his belief false. His belief would be an irrational one, and hence one about whose truth Jean would be ignorant.

The philosophy of religion is the attempt to understand the concepts involved in religious belief: existence, necessity, fate, creation, sin, justice, mercy, redemption, God. Until the 20th century the history of Western philosophy was closely intertwined with attempts to make sense of aspects of pagan, Jewish or Christian religion, while in other traditions, such as Hinduism, Buddhism or Taoism, there is even less distinction between religious and philosophical enquiry. The classic problem of conceiving an appropriate object of religious belief is that of understanding whether any term can be predicated of it: does it make any sense to talk of its creating things, willing events, or being one thing or many? The via negativa of theology claims that God can only be known by denying ordinary terms any application to him; another influential suggestion is that ordinary terms apply only metaphorically, and that there is no hope of cashing the metaphors. Once a description of a Supreme Being is hit upon, there remains the problem of providing any reason for supposing that anything answering to the description exists. The medieval period was the high-water mark for purported proofs of the existence of God, such as the Five Ways of Aquinas or the ontological argument. Such proofs have fallen out of general favour since the 18th century, although they still sway many people and some philosophers.

Generally speaking, even religious philosophers (or perhaps they especially) have been wary of popular manifestations of religion. Kant, himself a friend of religious faith, nevertheless distinguishes various perversions: theosophy (using transcendental conceptions that confuse reason), demonology (indulging an anthropomorphic mode of representing the Supreme Being), theurgy (a fanatical delusion that feeling can be communicated from such a being, or that we can exert an influence on it), and idolatry, or the superstitious delusion that one can make oneself acceptable to the Supreme Being by other means than that of having the moral law at heart (Critique of Judgement). These warm, conversational tendencies have, however, been increasingly important in modern theology.

Since Feuerbach there has been a growing tendency for philosophy of religion either to concentrate upon the social and anthropological dimensions of religious belief, or to treat it as a manifestation of various explicable psychological urges. Another reaction is to retreat into a celebration of purely subjective existential commitments. Still, the ontological argument continues to attract attention, and modern anti-foundationalist trends in epistemology are not entirely hostile to cognitive claims based on religious experience.

Still, the problem of reconciling the subjective or psychological nature of mental life with its objective and logical content preoccupied Husserl, whose first major work on the problem was the elephantine Logische Untersuchungen (1900-1, trs. as Logical Investigations, 1970). Unable to keep a subjective and a naturalistic approach to knowledge together, Husserl eventually abandoned naturalism in favour of a kind of transcendental idealism. The precise nature of his change is disguised by a penchant for new and impenetrable terminology, but the ‘bracketing’ of external questions leaves the phenomenologist with the acknowledged implications of a solipsistic, disembodied Cartesian ego as a starting-point, with it thought of as inessential that the thinking subject is either embodied or surrounded by others. By the time of the Cartesian Meditations (first published in French as Méditations cartésiennes, 1931; trs. 1960), however, a shift in priorities had begun, with the embodied individual, surrounded by others, rather than the disembodied Cartesian ego, now returned to a fundamental position. The extent to which this desirable shift undermines the programme of phenomenology identified with Husserl’s earlier approach remains unclear, although later phenomenologists such as Merleau-Ponty have worked fruitfully from the later standpoint.

Pythagoras established, and was the central figure in, a school of philosophy, religion, and mathematics; he was apparently viewed by his followers as semi-divine. For his followers, the language of the regular solids (symmetrical three-dimensional forms in which all sides are the same regular polygon) and of numbers contrasted with ordinary language. The language of mathematical and geometrical forms seemed closed, precise and pure: provided one understood the axioms and notation, the meaning conveyed was invariant from one mind to another. For the Pythagoreans, this was the language empowering the mind to leap beyond the confusion of sense experience into the realm of immutable and eternal essences. This mystical insight made Pythagoras the figure from antiquity most revered by the creators of classical physics, and it continues to have great appeal for contemporary physicists as they struggle with the epistemological implications of the quantum mechanical description of nature.

Pythagoras (c.570 BC) was the son of Mnesarchus of Samos, but emigrated (c.531 BC) to Croton in southern Italy. Here he founded a religious society, but was forced into exile and died at Metapontum. Membership of the society entailed self-discipline, silence, and the observance of his taboos, especially against eating flesh and beans. Pythagoras taught the doctrine of metempsychosis, or the cycle of reincarnation, and was supposed able to remember former existences. The soul, which has its own divinity and may have existed as an animal or plant, can however gain release by a religious dedication to study, after which it may rejoin the universal world-soul. Pythagoras is usually, but doubtfully, credited with having discovered the basis of acoustics, the numerical ratios underlying the musical scale, thereby initiating the arithmetical interpretation of nature. This tremendous success inspired the view that the whole of the cosmos should be explicable in terms of harmonia or number. The view represents a magnificent break from the Milesian attempt to ground physics in a common stuff shared by all things, concentrating instead on form, meaning that physical nature receives an intelligible grounding in geometrical structure. The view is vulgarized in the doctrine usually attributed to Pythagoras, that all things are numbers. The association of abstract qualities with numbers reached remarkable heights, with occult attachments, for instance, between justice and the number four, and mystical significances, especially of the number ten. Cosmologically, Pythagoras explained the origin of the universe in mathematical terms, as the imposition of limit on the limitless by a kind of injection of a unit. Followers of Pythagoras included Philolaus, the earliest cosmologist known to have understood that the earth is a moving planet. It is also likely that the Pythagoreans discovered the irrationality of the square root of two.
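To make the acoustical claim concrete (these particular ratios are the standard attribution to the Pythagoreans, supplied for illustration rather than taken from the passage above): stopping a vibrating string at simple whole-number fractions of its length yields the consonant intervals,

\[
\text{octave} = 2:1, \qquad \text{perfect fifth} = 3:2, \qquad \text{perfect fourth} = 4:3.
\]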

The Pythagoreans considered numbers to be among the building blocks of the universe. In fact, one of the most central beliefs of Pythagoras’ mathematikoi, his inner circle, was that reality was mathematical in nature. This made numbers valuable tools, and over time even the knowledge of a number’s name came to be associated with power. If you could name something you had a degree of control over it, and to have power over the numbers was to have power over nature.

One, for example, stood for the mind, emphasizing its oneness. Two was opinion, taking a step away from the singularity of mind. Three was wholeness (a whole needs a beginning, a middle and an end to be more than a one-dimensional point), and four represented the stable squareness of justice. Five was marriage, being the sum of three and two, the first odd (male) and even (female) numbers. (Three was the first odd number because the number one was considered by the Greeks to be so special that it could not form part of an ordinary grouping of numbers.)

The allocation of interpretations went on up to ten, which for the Pythagoreans was the number of perfection. Not only was it the sum of the first four numbers, but when a series of ten dots is arranged in rows of 1, 2, 3, 4, each above the next, it forms a perfect triangle, the simplest of the two-dimensional shapes. So convinced were the Pythagoreans of the importance of ten that they assumed there had to be a tenth body in the heavens on top of the known ones, an anti-Earth, never seen because it was constantly behind the Sun. The power of the number ten may also have been linked with ancient Jewish thought, where it appears in a number of guises: the ten commandments, and the ten components of the Jewish mystical cabbala tradition.
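As a quick arithmetical check (standard number theory, not part of the text above): ten is the fourth triangular number, which is why ten dots stack into rows of 1, 2, 3 and 4 to form the triangle described here,

\[
1 + 2 + 3 + 4 \;=\; 10 \;=\; \frac{4 \times 5}{2}.
\]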

Such numerology, ascribing a natural or supernatural significance to numbers, can also be seen in Christian works, and continued in some new-age traditions. In the Opus majus, written in 1266, the English scientist-friar Roger Bacon wrote that: ‘Moreover, although a manifold perfection of number is found according to which ten is said to be perfect, and seven, and six, yet most of all does three claim itself perfection.’

Ten, as we have already seen, was allocated to perfection. Seven was the number of planets according to the ancient Greeks, and the Pythagoreans had designated it the number of the universe. Six also has a mathematical significance, as Bacon points out, because if you break it down into the factors that can be multiplied together to make it - one, two, and three - they also add up to six:

1 × 2 × 3 = 6 = 1 + 2 + 3
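In modern terminology (a later formalization, not Bacon’s own), this property makes six a ‘perfect number’: one equal to the sum of its proper divisors. On that definition the next perfect number is 28,

\[
6 = 1 + 2 + 3, \qquad 28 = 1 + 2 + 4 + 7 + 14.
\]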

Bearing in mind the concern of the Pythagoreans to keep irrational numbers to themselves, it might seem amazing that they could cope with the values involved in this discovery at all. After all, since the square root of 2 can’t be represented as a ratio, we have to use a decimal fraction to write it out. Indeed, it would be amazing, were it true that the Greeks had a grasp of irrational numbers in that form. In fact, though you might find it mentioned that the Pythagoreans did, to talk about them understanding numbers in this way totally misrepresents the way they thought.
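The discovery alluded to here is usually reconstructed along the following lines (a standard modern proof by contradiction, supplied for illustration rather than anything the Pythagoreans wrote down). Suppose the square root of two were a ratio of whole numbers in lowest terms:

\[
\sqrt{2} = \frac{p}{q} \;\Longrightarrow\; p^{2} = 2q^{2} \;\Longrightarrow\; p \text{ is even, say } p = 2k \;\Longrightarrow\; 2k^{2} = q^{2} \;\Longrightarrow\; q \text{ is even.}
\]

Then p and q share the factor 2, contradicting the assumption of lowest terms; so no such ratio exists.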

This conception of a rational order underlying nature later became fused with Christian doctrine, in which the logos is God's instrument in the development (and redemption) of the world. The notion survives in the idea of laws of nature, if these are conceived of as independent guides of the natural course of events, existing beyond the temporal world that they order. The theory of knowledge, for its part, has as its central questions the origin of knowledge, the place of experience in generating knowledge and the place of reason in doing so, the relationship between knowledge and certainty and between knowledge and the impossibility of error, the possibility of universal scepticism, and the changing forms of knowledge that arise from new conceptualizations of the world.

One group of problems concerns the relation between mental and physical properties; collectively they are called 'the mind-body problem'. This has been the central question of the philosophy of mind since Descartes formulated it three centuries ago, and for many people understanding the place of mind in nature is the greatest philosophical problem. Mind is often thought to be the last domain that stubbornly resists scientific understanding, and philosophers differ over whether they find that a cause for celebration or scandal. The mind-body problem in the modern era was given its definitive shape by Descartes, although the dualism that he espoused is far more widespread and far older, occurring in some form wherever there is a religious or philosophical tradition by which the soul may have an existence apart from the body. While most modern philosophers of mind would reject the imaginings that lead us to think that this makes sense, there is no consensus over the way to integrate our understanding of people as bearers of physical properties on the one hand and as subjects of mental lives on the other.

Those who are convinced that the discoveries of non-locality have more potential to transform our conceptions of the 'way things are' than any previous discovery maintain, nonetheless, that these implications extend well beyond the domain of the physical sciences, and that the best efforts of many thoughtful people will be required to understand them.

Perhaps the most startling and potentially revolutionary of these implications in human terms is a view of the relationship between mind and world that is utterly different from that sanctioned by classical physics. René Descartes was among the first to realize that mind or consciousness in the mechanistic world-view of classical physics appeared to exist in a realm separate from nature. Philosophy quickly realized that there was nothing in this view of nature that could explain or provide a foundation for the mental, or for all that we know from direct experience to be distinctly human. In a mechanistic universe, he said, there is no privileged place or function for mind, and the separation between mind and matter is absolute. Descartes was also convinced, however, that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent algebraic geometry.

Descartes's theory of knowledge starts with the quest for certainty, for an indubitable starting-point or foundation on the basis of which alone progress is possible. His method, sometimes known as the method of hyperbolic (extreme) doubt, or Cartesian doubt, is the method of investigating how much of knowledge and its basis in reason or experience survives such doubt, and is used by Descartes in the first two Meditations. The sought-for certainty is eventually found in the celebrated 'Cogito ergo sum': I think, therefore I am. By finding the point of certainty in my own awareness of my own self, Descartes gives a first-person twist to the theory of knowledge that dominated the following centuries in spite of various counter-attacks on behalf of social and public starting-points. The metaphysics associated with this priority is the famous Cartesian dualism, or separation of mind and matter into two different but interacting substances. Descartes rigorously, and rightly, saw that it required divine dispensation to certify any relationship between the two realms thus divided, and to prove the reliability of the senses he invoked a 'clear and distinct perception' of highly dubious proofs of the existence of a benevolent deity. This has not met general acceptance: as Hume drily puts it, 'to have recourse to the veracity of the supreme Being, to prove the veracity of our senses, is surely making a very unexpected circuit.'

In his own time Descartes's conception of the entirely separate substance of the mind was recognized to give rise to insoluble problems about the nature of the causal connection between mind and body. One response, parallelism, treats the two systems as running in parallel: when I stub my toe, this does not cause pain, but there is a harmony between the mental and the physical (perhaps due to God) that ensures that there will be a simultaneous pain; when I form an intention and then act, the same benevolence ensures that my action is appropriate to my intention. The theory has never been widely popular, and in its application to mind-body problems many philosophers would say that it rests on a misconceived 'Cartesian dualism' of 'subjective knowledge' and 'physical theory.'

It also produces the problem, insoluble in its own terms, of 'other minds.' Descartes's notorious denial that nonhuman animals are conscious is a stark illustration of the problem. In his conception of matter Descartes also gives preference to rational cogitation over anything derived from the senses. Since we can conceive of the matter of a ball of wax surviving changes to its sensible qualities, matter is not an empirical concept, but ultimately an entirely geometrical one, with extension and motion as its only physical nature. Descartes's thought here is reflected in Leibniz's view, held later by Russell, that the qualities of sense experience have no resemblance to the qualities of things, so that knowledge of the external world is essentially knowledge of structure rather than of filling. On this basis Descartes builds a remarkable physics. Since matter is in effect the same as extension there can be no empty space or 'void', and since there is no empty space motion is not a question of occupying previously empty space, but is to be thought of in terms of vortices (like the motion of a liquid).

Although the structures of Descartes's epistemology, theory of mind, and theory of matter have often been rejected, their relentless exposure of the hardest issues, their exemplary clarity, and even their initial plausibility all contrive to make him the central point of reference for modern philosophy.

A scientific understanding of these ideas could be derived, said Descartes, with the aid of precise deduction, and he also claimed that the contours of physical reality could be laid out in three-dimensional coordinates. Following the publication of Isaac Newton's Principia Mathematica in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. The dream that the entire physical world could be known and mastered through the extension and refinement of mathematical theory became the central feature and guiding principle of scientific knowledge.

The radical separation between mind and nature formalized by Descartes served over time to allow scientists to concentrate on developing mathematical descriptions of matter as pure mechanism, without any concern for its spiritual dimensions or ontological foundations. Meanwhile, attempts to rationalize, reconcile, or eliminate Descartes's stark division between mind and matter became the central preoccupation of Western intellectual life.

Philosophers like John Locke, Thomas Hobbes, and David Hume tried to articulate some basis for linking the mathematically describable motions of matter with linguistic representations of external reality in the subjective space of mind. Descartes's compatriot Jean-Jacques Rousseau reified nature as the ground of human consciousness in a state of innocence and proclaimed that 'Liberty, Equality, Fraternity' are the guiding principles of this consciousness. Rousseau also lent near-divine authority to the idea of the 'general will' of the people to achieve these goals, and declared that those who do not conform to this will are social deviants.

The Enlightenment idea of deism, which imagined the universe as clockwork and God as the clockmaker, provided grounds for believing in a divine agency at the moment of creation. It also implied, however, that all the creative forces of the universe were exhausted at its origin, that the physical substrates of mind were subject to the same natural laws as matter, and that the only means of mediating the gap between mind and matter was pure reason. Traditional Judeo-Christian theism, which had previously been based on both reason and revelation, responded to the challenge of deism by debasing rationality as a test of faith and embracing the idea that the truths of spiritual reality can be known only through divine revelation. This engendered a conflict between reason and revelation that persists to this day. It also laid the foundation for the fierce competition between the mega-narratives of science and religion as frame tales for mediating relations between mind and matter and the manner in which the special character of each should be ultimately defined.

Rousseau's attempt to posit a ground for human consciousness by reifying nature was revived in a different form by the nineteenth-century Romantics in Germany, England, and the United States. Goethe and Friedrich Schelling proposed a natural philosophy premised on ontological monism (the idea that God, man, and nature are grounded in an indivisible spiritual Oneness) and argued for the reconciliation of mind and matter with an appeal to sentiment, mystical awareness, and quasi-scientific musings. In Goethe's attempt to wed mind and matter, nature becomes a mindful agency that 'loves illusion,' 'shrouds man in mist,' 'presses him to her heart,' and punishes those who fail to see the 'light.' Schelling, in his version of cosmic unity, argued that scientific facts were at best partial truths and that the mindful spirit that unites mind and matter is progressively moving toward self-realization and undivided wholeness.

The flaw of pure reason is, of course, the absence of emotion, and a ground for external reality built on reason alone had limited appeal outside the community of intellectuals. The figure most responsible for infusing our understanding of Cartesian dualism with emotional content was the 'death of God' philosopher Friedrich Nietzsche. After declaring that God and 'divine will' do not exist, Nietzsche announced that the defining knowledge of the age is the knowledge that God is dead. The death of God he called the greatest event in modern history and the cause of extreme danger. Yet there is a paradox contained in these words. He never said that there was no God, but that the Eternal had been vanquished by Time and that the Immortal had suffered death at the hands of mortals. 'God is dead': it is like a cry mingled of despair and triumph, reducing, by comparison, the whole story of atheism and agnosticism before and after him to the level of respectable mediocrity, making it sound like a collection of announcements by those who regret that they are unable to invest in an unsafe proposition. This is the very essence of Nietzsche's spiritual core of existence, and what flows from it is despair and hope in a new greatness of man, visions of catastrophe and glory, the icy brilliance of analytical reason fathoming with affected irreverence those depths until now hidden by awe and fear, and, side by side with it, ecstatic invocations of a ritual healer.

Nietzsche reified the 'existence' of consciousness in the domain of subjectivity as the ground for individual 'will' and summarily dismissed all previous philosophical attempts to articulate the 'will to truth.' The problem, claimed Nietzsche, is that earlier versions of the 'will to truth' disguise the fact that all alleged truths were arbitrarily created in the subjective reality of the individual and are expressions or manifestations of individual 'will.'

In Nietzsche's view, the separation between 'mind' and 'matter' is more absolute and total than had previously been imagined. Based on the assumption that there are no necessary correspondences between linguistic constructions of reality in human subjectivity and external reality, he declared that we are all locked in 'a prison house of language.' The prison as he conceived it, however, was also a 'space' where the philosopher can examine the 'innermost desires of his nature' and articulate a new message of individual existence founded on will.

Those who fail to enact their existence in this space, says Nietzsche, are enticed into sacrificing their individuality on the nonexistent altars of religious beliefs and democratic or socialist ideals, and become, therefore, members of the anonymous and docile crowd. Nietzsche also invalidated the knowledge claims of science in the examination of human subjectivity. Science, he said, not only exalts natural phenomena and favours reductionistic examination of phenomena at the expense of mind; it also seeks to reduce mind to a mere material substance, and thereby to displace or subsume the separateness and uniqueness of mind with mechanistic descriptions that disallow any basis for the free exercise of individual will.

Nietzsche's emotionally charged defence of intellectual freedom and his radical empowerment of mind as the maker and transformer of the collective fictions that shape human reality in a soulless scientific universe proved terribly influential on twentieth-century thought. Nietzsche sought to reinforce his view of the subjective character of scientific knowledge by appealing to an epistemological crisis over the foundations of logic and arithmetic that arose during the last three decades of the nineteenth century. As it turned out, the attempts to resolve this crisis resulted in paradoxes of recursion and self-reference that threatened to undermine both the efficacy of the correspondence between mathematical theory and physical reality and the privileged character of scientific knowledge.

Nietzsche appealed to this crisis in an effort to reinforce his assumption that, in the absence of ontology, all knowledge (including scientific knowledge) was grounded only in human consciousness. As the crisis continued, a philosopher trained in higher mathematics and physics, Edmund Husserl, attempted to preserve the classical view of correspondence between mathematical theory and physical reality by deriving the foundations of logic and number from consciousness in ways that would preserve self-consistency and rigour. There is thus a direct link between these early challenges to the efficacy of classical epistemology and the tradition in philosophical thought that culminated in philosophical postmodernism.

Since Husserl's epistemology, like that of Descartes and Nietzsche, was grounded in human subjectivity, a better understanding of his attempt to preserve the classical view of correspondence not only reveals more about the legacy of Cartesian dualism; it also suggests that the hidden ontology of classical epistemology was more responsible for the deep division and conflict between the two cultures - humanists and social scientists on one side, scientists and engineers on the other - than we had previously imagined. The central question in this late-nineteenth-century debate over the status of the mathematical description of nature was the following: Is the foundation of number and logic grounded in classical epistemology, or must we assume, in the absence of any ontology, that the rules of number and logic are grounded only in human consciousness? In order to frame this question in its proper context, we should first examine in more detail the intimate and ongoing dialogue between physics and metaphysics in Western thought.

Through a curious course of events, Husserl's attempts to resolve this crisis resulted in a view of the character of human consciousness that closely resembled that of Nietzsche.

For Nietzsche, however, all the activities of human consciousness share the predicament of psychology. There can be, for him, no 'pure' knowledge, only the satisfaction, however sophisticated, of the ever-varying intellectual needs of the will to know. He therefore demands that man should accept moral responsibility for the kind of questions he asks, and that he should realize what values are implied in the answers he seeks - and in this he was more Christian than all our post-Faustian Fausts of truth and scholarship. 'The desire for truth,' he says, 'is itself in need of critique. Let this be the definition of my philosophical task. By way of experiment, I shall question for once the value of truth.' And does he not? He protests that, in an age that is as uncertain of its values as is his and ours, the search for truth will issue in either trivialities or catastrophe. We might wonder how he would react to the pious hope of our day that the intelligence and moral conscience of politicians will save the world from the disastrous products of our scientific explorations and engineering skills. It is perhaps not too difficult to guess; for he knew that there was a fatal link between the moral resolution of scientists to follow the scientific search wherever, by its own momentum, it will take us, and the moral debility of societies not altogether disinclined to 'apply' the results, however catastrophic. Believing that there was a hidden identity among all the expressions of the 'Will to Power', he saw the element of moral nihilism in the ethics of our science: its determination not to let 'higher values' interfere with its highest value - Truth (as it conceives it). Thus he said that the goal of knowledge pursued by the natural sciences means perdition.

In these regions of his mind dwells the terror that he may have helped to bring about the very opposite of what he desired. When this terror comes to the fore, he is much afraid of the consequences of his teaching. Perhaps the best will be driven to despair by it, and the very worst will accept it? And once he put into the mouth of some imaginary titanic genius what is his most terrible prophetic utterance: 'Oh grant madness, you heavenly powers, madness that at last I may believe in myself . . . I am consumed by doubts, for I have killed the Law. . . . If I am not more than the Law, then I am the most abject of all men.'

Still, 'God is dead,' and, sadly, he had to think the meanest thought: he saw in the real Christ an illegitimate son of the Will to Power, a frustrated rabbi who set out to save himself and the underdog human from the intolerable strain of impotently resenting the Caesars - not to be Caesar was now proclaimed a spiritual distinction - a newly invented form of power, the power to be powerless.

Nietzsche, for the nineteenth century, brings to its perverse conclusion a line of religious thought and experience linked with the names of St. Paul, St. Augustine, Pascal, Kierkegaard, and Dostoevsky, minds for whom God was not simply the creator of an order of nature within which man has his clearly defined place, but who came rather to challenge their natural being, making demands that appeared absurd in the light of natural reason. These men are of the family of Jacob: having wrestled with God for His blessing, they ever after limp through life with the framework of Nature incurably out of joint. Nietzsche is just such a wrestler, except that within him the shadow of Jacob merges with the shadow of Prometheus. Like Jacob, Nietzsche too believed that he prevailed against God in that struggle, and won a new name for himself, the name of Zarathustra. Yet the words he spoke on his mountain to the angel of the Lord were: 'I will not let thee go, except thou curse me.' Or, in words that Nietzsche did in fact speak: 'I have on purpose devoted my life to exploring the whole contrast to a truly religious nature. I know the Devil and all his visions of God.' 'God is dead' is the very core of Nietzsche's spiritual existence, and what follows is despair and hope in a new greatness of man.

The best-known disciple of Husserl was Martin Heidegger, and the work of both figures greatly influenced that of the French atheistic existentialist Jean-Paul Sartre. Sartre's first novel, La Nausée, was published in 1938 (trs. as Nausea, 1949). L'Imaginaire (1940, trs. as The Psychology of the Imagination, 1948) is a contribution to phenomenal psychology. Briefly captured by the Germans, Sartre spent the closing years of the war in Paris, where L'Être et le néant, his major purely philosophical work, was published in 1943 (trs. as Being and Nothingness, 1956). The lecture L'Existentialisme est un humanisme (1946, trs. as Existentialism is a Humanism, 1947) consolidated Sartre's position as France's leading existentialist philosopher.

Sartre's philosophy is concerned entirely with the nature of human life and the structures of consciousness. As a result it gains expression in his novels and plays as well as in more orthodox academic treatises. Its immediate ancestor is the phenomenological tradition of his teachers, and Sartre can most simply be seen as concerned to rebut the charge of idealism as it is laid at the door of phenomenology. The agent is not a spectator of the world, but, like everything in the world, constituted by acts of intentionality. The self thus constituted is historically situated, but as an agent whose own mode of finding itself in the world makes for responsibility and emotion. Responsibility is, however, a burden that we frequently cannot bear, and bad faith arises when we deny our own authorship of our actions, seeing them instead as forced responses to situations not of our own making.

Sartre thus locates the essential nature of human existence in the capacity for choice, although choice, being equally incompatible with determinism and with the existence of a Kantian moral law, implies a synthesis of consciousness (being for-itself) and the objective (being in-itself) that is forever unstable. The unstable and constantly disintegrating nature of free will generates anguish. For Sartre our capacity to make negative judgements is one of the fundamental puzzles of consciousness. Like Heidegger he took the 'ontological' approach of relating it to the nature of non-being, a move that decisively differentiated him from the Anglo-American tradition of modern logic.

The work of Husserl, Heidegger, and Sartre became foundational to that of the principal architects of philosophical postmodernism: the deconstructionists Jacques Lacan, Roland Barthes, Michel Foucault, and Jacques Derrida. This direct linkage between the nineteenth-century crisis over the epistemological foundations of mathematical physics and the origins of philosophical postmodernism served to perpetuate the Cartesian two-world dilemma in an even more oppressive form.

The American Romantics envisioned a unified spiritual reality that manifested itself as a personal ethos sanctioning radical individualism and breeding aversion to the emergent materialism of the Jacksonian era. They were also more inclined than their European counterparts, as the examples of Thoreau and Whitman attest, to embrace scientific descriptions of nature. However, the Americans too dissolved the distinction between mind and matter with an appeal to ontological monism, alleging that mind could free itself from the constraints of matter through some form of mystical awareness.

Since scientists during the nineteenth century were engrossed with uncovering the workings of external reality and knew virtually nothing about the physical substrates of human consciousness, the business of examining the dynamic functioning and structural foundations of mind became the province of social scientists and humanists. Adolphe Quételet proposed a 'social physics' that could serve as the basis for a new discipline called sociology, and his contemporary Auguste Comte concluded that a true scientific understanding of social reality was inevitable. Mind, in the view of these figures, was a separate and distinct mechanism subject to the lawful workings of a mechanical social reality.

Still, physicists like Planck and Einstein understood and embraced holism as an inescapable condition of our physical existence. According to Einstein's general theory of relativity, wrote Planck, 'each individual particle of a system, in a certain sense, at any one time, exists simultaneously in every part of the space occupied by the system'. And the system, as Planck made clear, is the entire cosmos. As Einstein put it, 'physical reality must be described in terms of continuous functions in space. The material point, therefore, can hardly be conceived any more as the basic concept of the theory.'

More formal European philosophers, such as Immanuel Kant, sought to reconcile representations of external reality in mind with the motions of matter based on the dictates of pure reason. This impulse was also apparent in the utilitarian ethics of Jeremy Bentham and John Stuart Mill, in the historical materialism of Karl Marx and Friedrich Engels, and in the pragmatism of Charles Sanders Peirce, William James, and John Dewey. These thinkers were painfully aware, however, of the inability of reason to posit a self-consistent basis for bridging the gap between mind and matter, and each remained obliged to conclude that the realm of the mental exists only in the subjective reality of the individual.

The mechanistic paradigm of the late nineteenth century was the one Einstein came to know when he studied physics. Most physicists believed that it represented an eternal truth, but Einstein was open to fresh ideas. Inspired by Mach’s critical mind, he demolished the Newtonian ideas of space and time and replaced them with new, ‘relativistic’ notions.

As for Newton, the British mathematician and physicist, Hume called him 'the greatest and rarest genius that ever arose for the ornament and instruction of the species.' His mathematical discoveries are usually dated to between 1665 and 1666, when he was secluded in Lincolnshire, the university being closed because of the plague. His great work, the Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy, usually referred to as the Principia), was published in 1687.

Yet throughout his career Newton engaged in scientific correspondence and controversy. The often-quoted remark, 'If I have seen further it is by standing on the shoulders of Giants', occurs in a conciliatory letter to Robert Hooke (1635-1703). Newton was in fact echoing the remark of Bernard of Chartres in 1120: 'We are dwarfs standing on the shoulders of giants'. The dispute with Leibniz over the invention of the calculus is his best-known quarrel, and certainly the least edifying, with Newton himself appointing the committee of the Royal Society that judged the question of precedence, and then writing the report, the Commercium Epistolicum, awarding himself the victory. Although he was an emblem of the 'age of reason', Newton was also deeply interested in alchemy, prophecy, gnostic wisdom, and theology.

The philosophical influence of the Principia was incalculable, and from Locke's Essay onward philosophers recognized Newton's work as a new paradigm of scientific method, without being entirely clear what different parts reason and observation play in the edifice. Although Newton ushered in so much of the scientific world-view, in the General Scholium at the end of the Principia he argues that 'it is not to be conceived that mere mechanical causes could give birth to so many regular motions', and hence that his discoveries pointed to the operations of God, 'to discourse of whom from phenomena does certainly belong to natural philosophy.' Newton confesses that he has 'not been able to discover the cause of those properties of gravity from phenomena': Hypotheses non fingo (I do not feign hypotheses). It was left to Hume to argue that the kind of thing Newton does, namely place the events of nature into law-like orders and patterns, is the only kind of thing that scientific enquiry can ever do.

'Action at a distance' has been a much contested concept in the history of physics. Aristotelian physics holds that every motion requires a conjoined mover: action can therefore never occur at a distance, but needs a medium enveloping the body, parts of which transmit its motion and push it from behind (antiperistasis). Although natural motions like free fall and magnetic attraction (quaintly called 'coition') were recognized in the post-Aristotelian period, the rise of the 'corpuscularian' philosophy again banned attraction, or unmediated action at a distance: the classic argument is that 'matter cannot act where it is not'. Boyle, who expounded the view in his Sceptical Chymist (1661) and The Origin of Forms and Qualities (1666), held that all material substances are composed of minute corpuscles, themselves possessing shape, size, and motion. The different properties of materials arise from different combinations and collisions of corpuscles: chemical properties, such as solubility, would be explicable by the mechanical interactions of corpuscles, just as the capacity of a key to turn a lock is explained by their respective shapes. In Boyle's hands the idea is opposed to the Aristotelian theory of elements and principles, which he regarded as untestable and sterile. His approach is a precursor of modern chemical atomism and had immense influence on Locke. Locke, however, recognized the need for a different kind of force to guarantee the cohesion of atoms, and both this and the interaction between such atoms were criticized by Leibniz.

Cartesian physical theory also postulated 'subtle matter' to fill space and provide the medium for force and motion. Its successor, the aether, was postulated in order to provide a medium for transmitting forces and causal influences between objects that are not in direct contact. Even Newton, whose treatment of gravity might seem to leave it conceived of as action at a distance, supposed that an intermediary must be postulated, although he could make no hypothesis as to its nature. Locke, having originally said that bodies act on each other 'manifestly by impulse and nothing else', changed his mind and struck out the words 'and nothing else', although impulse remains 'the only way that we can conceive bodies operate in'. In the Metaphysical Foundations of Natural Science Kant clearly sets out the view that the way in which bodies impel each other is no more natural, or intelligible, than the way in which they act at a distance; in particular he repeats the point half-understood by Locke, that any conception of solid, massy atoms requires understanding the force that makes them cohere as a single unity, which cannot itself be understood in terms of elastic collisions. In many cases contemporary field theories admit of alternative equivalent formulations, one with action at a distance, one with local action only.

Until recently, most approaches to the philosophy of science were 'cognitive'. This includes logical positivism, for nearly all of those who wrote about the nature of science would have agreed that science ought to be 'value-free'. This was a particular emphasis of the first positivists, as it would be of their twentieth-century successors. Science, so it was said, deals with 'facts', and facts and values are irreducibly distinct. Facts are objective; they are what we seek in our knowledge of the world. Values are subjective: they bear the mark of human interest; they are the radically individual products of feeling and desire. Value cannot, therefore, be inferred from fact, and fact ought not to be influenced by value. There were philosophers, notably some in the Kantian tradition, who viewed the relation of the human knower to the world rather differently. But such was the legacy of three centuries of largely empiricist reflection on the 'new' sciences ushered in by Galileo Galilei (1564-1642), the Italian scientist whose distinction belongs to the history of physics and astronomy rather than to natural philosophy.

The philosophical importance of Galileo's science rests largely upon the following closely related achievements: (1) his stunningly successful arguments against Aristotelian science; (2) his proofs that mathematics is applicable to the real world; (3) his conceptually powerful use of experiments, both actual and employed regulatively; (4) his treatment of causality, replacing appeal to hypothesized natural ends with a quest for efficient causes; and (5) his unwavering confidence in the new style of theorizing that would come to be known as 'mechanical explanation'.

A century later, the maxim that scientific knowledge is 'value-laden' seems almost as entrenched as its opposite was earlier. It is supposed that the gap between fact and value has been breached, and philosophers of science seem quite at home with the thought that science and values may be closely intertwined after all. What has happened to bring about such an apparently radical change? What are its implications for the objectivity of science, the prized characteristic that, from Plato's time onwards, has been assumed to set off real knowledge (epistēmē) from mere opinion (doxa)? To answer these questions adequately, one would first have to know something of the reasons behind the decline of logical positivism, as well as of the diversity of the philosophies of science that have succeeded it.

More generally, the interdisciplinary field of cognitive science is burgeoning on several fronts. Contemporary philosophical reflection about the mind - which has been quite intensive - has been influenced by this empirical inquiry, to the extent that the boundary lines between them are blurred in places.

Nonetheless, the philosophy of mind at its core remains a branch of metaphysics, traditionally conceived. Philosophers continue to debate foundational issues in terms not radically different from those in vogue in previous eras. Many issues in the metaphysics of science hinge on the notion of ‘causation’. This notion is as important in science as it is in everyday thinking, and much scientific theorizing is concerned specifically to identify the ‘causes’ of various phenomena. However, there is little philosophical agreement on what it is to say that one event is the cause of some other.

Modern discussion of causation starts with the Scottish philosopher, historian, and essayist David Hume (1711-76), who argued that causation is simply a matter of constant conjunction. Hume denies that we have innate ideas, that the causal relation is observably anything other than 'constant conjunction', that there are observable necessary connections anywhere, and that there is either an empirical or a demonstrative proof for the assumptions that the future will resemble the past and that every event has a cause. He likewise denies that the dispute between advocates of free will and determinism is irresolvable, that extreme scepticism is coherent, and that we can find an experiential source for our ideas of self, substance, or God.

According to Hume (1978), one event causes another if and only if events of the type to which the first event belongs regularly occur in conjunction with events of the type to which the second event belongs. This formulation, however, leaves a number of questions open. First, there is the problem of distinguishing genuine 'causal laws' from 'accidental regularities'. Not all regularities are sufficiently law-like to underpin causal relationships: being a screw in my desk could well be constantly conjoined with being made of copper, without its being true that these screws are made of copper because they are in my desk. Second, the idea of constant conjunction does not give a 'direction' to causation. Causes need to be distinguished from effects, but knowing that A-type events are constantly conjoined with B-type events does not tell us which of 'A' and 'B' is the cause and which the effect, since constant conjunction is itself a symmetric relation. Third, there is a problem about 'probabilistic causation'. When we say that causes and effects are constantly conjoined, do we mean that the effects are always found with the causes, or is it enough that the causes make the effects probable?

Many philosophers of science during the past century have preferred to talk of 'explanation' rather than causation. According to the covering-law model of explanation, something is explained if it can be deduced from premises which include one or more laws. As applied to the explanation of particular events this implies that one particular event can be explained if it is linked by a law to some other particular event. However, while they are often treated as separate theories, the covering-law account of explanation is at bottom little more than a variant of Hume's constant conjunction account of causation. This affinity shows up in the fact that the covering-law account faces essentially the same difficulties as Hume's: (1) in appealing to deduction from 'laws', it needs to explain the difference between genuine laws and accidentally true regularities; (2) it seems to allow the explanation of causes by effects, as well as of effects by causes - after all, it is as easy to deduce the height of a flag-pole from the length of its shadow and the laws of optics as the other way around; (3) are the laws invoked in explanation required to be exceptionless and deterministic, or is it acceptable, say, to appeal to the merely probabilistic fact that smoking makes cancer more likely, in explaining why some particular person develops cancer?
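
Schematically, the covering-law (deductive-nomological) model is often set out as follows; this rendering is the standard textbook form, not a formula drawn from the text above:

\[
\underbrace{L_1, \ldots, L_n}_{\text{laws}}, \;\; \underbrace{C_1, \ldots, C_m}_{\text{particular conditions}} \;\vdash\; E
\]

where E is the particular event to be explained. The flag-pole difficulty is precisely that the deduction runs just as smoothly in reverse: the shadow's length plus the same laws of optics yields the pole's height, yet we would not say that the shadow explains the pole.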

Nevertheless, one of the central aims of the philosophy of science is to provide explicit and systematic accounts of the theories and explanatory strategies exploited in the sciences. Another common goal is to construct philosophically illuminating analyses or explications of the central theoretical concepts invoked in one or another science. In the philosophy of biology, for example, there is a rich literature aimed at understanding teleological explanations, and there has been a great deal of work on the structure of evolutionary theory and on such crucial concepts as fitness and biological function. By introducing 'teleological considerations', such an account views beliefs as states with a biological purpose and analyses their truth conditions specifically as those conditions that they are biologically supposed to covary with.

A teleological theory of representation needs to be supplemented with a philosophical account of biological purpose, generally a selectionist account, according to which item 'F' has purpose 'G' if and only if it is now present as a result of past selection by some process which favoured items with 'G'. So a given belief type will have the purpose of covarying with 'P', say, if and only if some mechanism has selected it because it has covaried with 'P' in the past.

Along the same lines, teleological theories hold that 'r' represents 'x' if it is r's function to indicate (i.e., covary with) 'x'; teleological theories differ depending on the theory of functions they import. Perhaps the most important distinction is that between historical theories of functions and a-historical theories. Historical theories individuate functional states (hence, contents) in a way that is sensitive to the historical development of the state, i.e., to factors such as the way the state was 'learned', or the way it evolved. A historical theory might hold that the function of 'r' is to indicate 'x' only if the capacity to token 'r' was developed (selected, learned) because it indicates 'x'. Thus, a state physically indistinguishable from 'r' (physical states being a-historical) but lacking r's historical origins would not represent 'x' according to historical theories.

The American philosopher of mind Jerry Alan Fodor (1935-) is known for a resolute 'realism' about the nature of mental functioning, taking the analogy between thought and computation seriously. Fodor believes that mental representations should be conceived as individual states with their own identities and structures, like formulae transformed by processes of computation or thought. His views are frequently contrasted with those of 'holists' such as the American philosopher Donald Davidson (1917-2003), or 'instrumentalists' about mental ascription, such as the British philosopher of logic and language Michael Anthony Eardley Dummett (1925-). In recent years he has become a vocal critic of some of the aspirations of cognitive science.

Nonetheless, the teleological solution is continually pressed on points concerning 'causation' and 'content', and ultimately a fundamental difficulty has to be considered. Suppose that there is a causal path from A's to 'A's and a causal path from B's to 'A's, and our problem is to find some difference between B-caused 'A's and A-caused 'A's in virtue of which the former but not the latter misrepresent. Perhaps the two paths differ in their counterfactual properties. In particular, although A's and B's both cause 'A's as a matter of fact, perhaps we can assume that only A's would cause 'A's in - as one might say - 'optimal circumstances'. We could then hold that a symbol expresses its 'optimal property', viz., the property that would causally control its tokening in optimal circumstances. Correspondingly, when the tokening of a symbol is causally controlled by properties other than its optimal property, the tokens that eventuate are ipso facto wild.

Suppose, then, that this story about 'optimal circumstances' is proposed as part of a naturalized semantics for mental representations. In that case it is, of course, essential that it be possible to specify the optimal circumstances for tokening a mental representation in terms that are not themselves either semantical or intentional. (It would not do, for example, to identify the optimal circumstances for tokening a symbol as those in which the tokens are true; that would be to assume precisely the sort of semantical notion that the theory is supposed to naturalize.) The suggestion - to put it in a nutshell - is that appeals to 'optimality' should be buttressed by appeals to 'teleology': optimal circumstances are the ones in which the mechanisms that mediate symbol tokening are functioning 'as they are supposed to'. In the case of mental representations, these would paradigmatically be circumstances where the mechanisms of belief fixation are functioning as they are supposed to.

So, then: the teleology of the cognitive mechanisms determines the optimal conditions for belief fixation, and the optimal conditions for belief fixation determine the content of beliefs. So the story goes.

To put this objection in slightly different words: the teleology story perhaps strikes one as plausible in that it understands one normative notion - truth - in terms of another normative notion - optimality. But this appearance is spurious: there is no guarantee that the kind of optimality that teleology reconstructs has much to do with the kind of optimality that the explication of 'truth' requires. When mechanisms of repression are working 'optimally' - when they are working 'as they are supposed to' - what they deliver are likely to be 'falsehoods'.

Or again: there is no obvious reason why conditions that are optimal for the tokening of one sort of mental symbol need be optimal for the tokening of other sorts. Perhaps the optimal conditions for fixing beliefs about very large objects are different from those for fixing beliefs about very small ones, which are different again from those for fixing beliefs about sounds or sights. But this raises the possibility that if we are to say which conditions are optimal for the fixation of a belief, we shall have to know what the content of the belief is - what it is a belief about. Our explication of content would then require a notion of optimality, whose explication in turn requires a notion of content, and the resulting pile would clearly be unstable.

Functional role theories, by contrast, hold that r's representing 'x' is grounded in the functional role 'r' has in the representing system, i.e., in the relations imposed by specified cognitive processes between 'r' and the other representations in the system's repertoire. Functional role theories take their cue from such common-sense ideas as that people cannot believe that cats are furry if they do not know that cats are animals or that fur is like hair.

That said, nowhere is the new period of collaboration between philosophy and other disciplines more evident than in the new subject of cognitive science. Cognitive science from its very beginning has been 'interdisciplinary' in character, and is in effect the joint property of psychology, linguistics, philosophy, computer science, and anthropology. There is, therefore, a great variety of different research projects within cognitive science, but its central area, its hard core, rests on the assumption that the mind is best viewed as analogous to a digital computer. The basic idea behind cognitive science is that recent developments in computer science and artificial intelligence have enormous importance for our conception of human beings. The basic inspiration for cognitive science went something like this: human beings do information processing; computers are designed precisely to do information processing; therefore, one way to study human cognition - perhaps the best way to study it - is to study it as a matter of computational information processing. Some cognitive scientists think that the computer is just a metaphor for the human mind; others think that the mind is literally a computer program. But it is fair to say that without the computational model there would not have been a cognitive science as we now understand it.

An Essay Concerning Human Understanding is the first modern systematic presentation of empiricist epistemology, and as such had important implications for the natural sciences and for philosophy of science generally. Like his predecessor Descartes, the English philosopher John Locke (1632-1704) began his account of knowledge from the conscious mind aware of ideas. Unlike Descartes, however, he was concerned not to build a system based on certainty, but to identify the mind's scope and limits. The premise upon which Locke built his account, including his account of the natural sciences, is that the ideas which furnish the mind are all derived from experience. He thus totally rejected any kind of innate knowledge. In this he consciously opposed Descartes (1596-1650), who had argued that it is possible to come to knowledge of fundamental truths about the natural world - indeed, of the essential nature of both 'mind' and 'matter' - through reason alone. Locke accepted Descartes's criterion of clear and distinct ideas as the basis for knowledge, but denied any source for them other than experience. From information that came in via the five senses (ideas of sensation) and from ideas engendered by inner experience (ideas of reflection) came the building blocks of the understanding.

Locke combined his commitment to 'the new way of ideas' with a native espousal of the 'corpuscular philosophy' of the Irish scientist Robert Boyle (1627-92). This, in essence, was an acceptance of a revised, more sophisticated version of the account of matter and its properties that had been advocated by the ancient atomists and recently supported by Galileo (1564-1642) and Pierre Gassendi (1592-1655). Boyle argued from theory and experiment that there were powerful reasons to justify some kind of corpuscular account of matter and its properties. He called the latter qualities, which he distinguished as primary and secondary. (The distinction between primary and secondary qualities may be reached by two rather different routes, either from the nature or essence of matter or from the nature and essence of experience, though in practice these have tended to run together. The former considerations make the distinction seem like an a priori, or necessary, truth about the nature of matter, while the latter make it appear to be an empirical hypothesis.) Locke, too, accepted this account, arguing that the ideas we have of the primary qualities of bodies resemble those qualities as they are in the object, whereas the ideas of the secondary qualities, such as colour, taste, and smell, do not resemble their causes in the object.

There is no strong connection between acceptance of the primary-secondary quality distinction and empiricism: Descartes too had argued strongly for it, and it won almost universal acceptance among natural philosophers; Locke simply embraced it within his more comprehensive empirical philosophy. But Locke's empiricism did have major implications for the natural sciences, as he well realized. His account begins with an analysis of experience. All ideas, he argues, are either simple or complex. Simple ideas are those like the red of a particular rose or the roundness of a snowball. Complex ideas, such as our ideas of the rose or the snowball, are combinations of simple ideas. We may create new complex ideas in our imagination - a parallelogram, for example. But simple ideas can never be created by us: we just have them or not, and characteristically they are caused, for example by the impact on our senses of rays of light or vibrations of sound in the air coming from a particular physical object. Since we cannot create simple ideas, they are determined by our experience, and our knowledge is in a very strict and uncompromising way limited. Moreover, our experiences are always of the particular, never of the general. It is this particular simple idea or that particular complex idea that we apprehend. We never in that sense apprehend a universal truth about the natural world, but only particular instances. It follows from this that all claims to generality about that world - for example, all claims to identify what were then beginning to be called the laws of nature - must to that extent go beyond our experience and thus be less than certain.

The classic discussion of this problem is due to David Hume, and appears in both his major philosophical works, the Treatise (1739) and the Enquiry (1777). Where we are accustomed to talk of laws, Hume contends, three ideas are involved:

1. That there should be a regular concomitance between events of the type of the cause and those of the type of the effect.

2. That the cause event should be contiguous with the effect event.

3. That the cause event should necessitate the effect event.

Tenets (1) and (2) occasion no difficulty for Hume, since he believes that there are patterns of sensory impressions unproblematically related to the ideas of regular concomitance and of contiguity. But the third requirement is deeply problematic, in that the idea of necessity that figures in it seems to have no sensory impression correlated with it. However carefully and attentively we scrutinize a causal process, we do not seem to observe anything that might be the observed correlate of the idea of necessity. We do not observe any kind of activity, power, or necessitation. All we ever observe is one event following another that is logically independent of it. Nor is the necessity logical, since, as Hume observes, one can jointly assert the existence of the cause and deny the existence of the effect, as specified in the causal statement or the law of nature, without contradiction. What, then, are we to make of the seemingly central notion of necessity that is deeply embedded in the very idea of causation, or lawfulness? To this query Hume gives an ingenious and telling answer. There is an impression corresponding to the idea of causal necessity, but it is a psychological phenomenon: our expectation that an event similar to those we have already observed to be correlated with the cause-type of event will occur in this case too. Where does that impression come from? It is created, as a kind of mental habit, by the repeated experience of regular concomitance between events of the type of the effect and events of the type of the cause. The idea of necessity thus reduces to the impression of expected regular concomitance, and the law of nature asserts nothing but the existence of the regular concomitance itself.

At this point in our narrative, the question arises as to whether this factor of life in nature, thus interpreted, corresponds to anything that we observe in nature. All philosophy is an endeavour to obtain a self-consistent understanding of things observed. Thus its development is guided in two ways: one is the demand for coherent self-consistency, and the other is the elucidation of things observed. With what direct observations, then, are we to conduct such comparisons? Should we turn to science? No. There is no way in which the scientific endeavour can detect the aliveness of things: its methodology rules out the possibility of such a finding. On this point the English mathematician and philosopher Alfred North Whitehead (1861-1947) comments that science can find no individual enjoyment in nature and no creativity in nature; it finds mere rules of succession. These negations are true of natural science; they are inherent in its methodology. The reason for this blindness of physical science lies in the fact that such science deals with only half the evidence provided by human experience. It divides the seamless coat - or, to change the metaphor into a happier form, it examines the coat, which is superficial, and neglects the body, which is fundamental.

Whitehead claims that the methodology of science makes it blind to a fundamental aspect of reality, namely the primacy of experience; it neglects half of the evidence. Working within Descartes's dualistic framework of matter and mind as separate and incommensurate, science limits itself to the study of objectivised phenomena, neglecting the subject and the mental events that constitute his or her experience.

Both the adoption of the Cartesian paradigm and the neglect of mental events are reason enough to suspect 'blindness', but there is no need to rely on suspicion: the blindness is clearly evident. Scientific discoveries, impressive as they are, are fundamentally superficial: science can express the regularities observed in nature, but it cannot explain the reasons for their occurrence. Consider, for example, Newton's law of gravity. It shows that such apparently disparate phenomena as the falling of an apple and the revolution of the earth around the sun are aspects of the same regularity: gravity. According to this law the gravitational attraction between two objects decreases in proportion to the square of the distance between them. Why is that so? Newton could not provide an answer. Simpler still, why does space have three dimensions? Why is time one-dimensional? Whitehead notes, 'None of these laws of nature gives the slightest evidence of necessity. They are [merely] the modes of procedure which within the scale of observation do in fact prevail.'
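
In modern notation the regularity in question reads (the formula is the standard one, not drawn from Whitehead's text):

\[
F = G \, \frac{m_1 m_2}{r^2}
\]

where F is the attractive force between masses m_1 and m_2, r is the distance between them, and G is the gravitational constant. The law records that the exponent on r is 2; as Whitehead stresses, it offers no reason why it must be.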

This analysis reveals that the capacity of science to fathom the depths of reality is limited. For example, if reality is in fact made up of discrete units, and these units have the fundamental character of being ‘the pulsing throbs of experience’, then science may be in a position to discover the discreteness, but it has no access to the subjective side of nature since, as the Austrian physicist Erwin Schrödinger (1887-1961) points out, we ‘exclude the subject of cognizance from the domain of nature that we endeavour to understand’. It follows that in order to find ‘the elucidation of things observed’ in relation to the experiential or aliveness aspect, we cannot rely on science; we need to look elsewhere.

If, instead of relying on science, we rely on our immediate observation of nature and of ourselves, we find, first, that this stark division [i.e., Descartes’] between mentality and nature has no ground in our fundamental observation: we find ourselves living within nature. Secondly, we should conceive mental operations as among the factors which make up the constitution of nature. Thirdly, we should reject the notion of idle wheels in the process of nature: every factor makes a difference, and that difference can only be expressed in terms of the individual character of that factor.

Whitehead proceeds to analyse our experiences in general, and our observations of nature in particular, and ends up with ‘mutual immanence’ as a central theme. This mutual immanence is obvious in the case of my own experience: I am a part of the universe, and, since I experience the universe, the experienced universe is part of me. Whitehead gives an example: ‘I am in the room, and the room is an item in my present experience. But my present experience is what I am now’. A generalization of this relationship to the case of any actual occasion yields the conclusion that ‘the world is included within the occasion in one sense, and the occasion is included in the world in another sense’. The idea that each actual occasion appropriates its universe follows naturally from such considerations.

The description of an actual entity as being a distinct unit is, therefore, only one part of the story. The other, complementary part is this: The very nature of each and every actual entity is one of interdependence with all the other actual entities in the universe. Each and every actual entity is a process of prehending or appropriating all the other actual entities and creating one new entity out of them all, namely, itself.

There are two general strategies for distinguishing laws from accidentally true generalizations. The first stands by Hume’s idea that causal connections are mere constant conjunctions, and then seeks to explain why some constant conjunctions are better than others. That is, this first strategy accepts the principle that causation involves nothing more than certain events always happening together with certain others, and then seeks to explain why some such patterns - the ‘laws’ - matter more than others - the ‘accidents’. The second strategy, by contrast, rejects the Humean presupposition that causation involves nothing more than happenstantial co-occurrence, and instead postulates a relation of ‘necessitation’, a kind of ‘cement’, which links events that are connected by law, but not those events (like having a screw in my desk and being made of copper) that are only accidentally conjoined.

There are a number of versions of the first, Humean strategy. The most successful, originally proposed by the Cambridge mathematician and philosopher F.P. Ramsey (1903-30) and later revived by the American philosopher David Lewis (1941-2002), holds that laws are those true generalizations that can be fitted into an ideal system of knowledge. The thought is that the laws are those patterns that are explicable in terms of basic science, either as fundamental principles themselves or as consequences of those principles, while accidents, although true, have no such explanation. Thus, ‘All water at standard pressure boils at 100°C’ is a consequence of the laws governing molecular bonding, but the fact that all the screws in my desk are copper is not part of the deductive structure of any satisfactory science. Ramsey neatly encapsulated this idea by saying that laws are ‘consequences of those propositions which we should take as axioms if we knew everything and organized it as simply as possible in a deductive system’.

Advocates of the alternative, non-Humean strategy object that the difference between laws and accidents is not a ‘linguistic’ matter of deductive systematization, but rather a ‘metaphysical’ contrast between the kinds of links they report. They argue that there is a link in nature between being at 100°C and boiling, but not between being ‘in my desk’ and being ‘made of copper’, and that this has nothing to do with how the description of this link may fit into theories. According to the forthright Australian philosopher D.M. Armstrong (1983), the most prominent defender of this view, the real difference between laws and accidents is simply that laws report relationships of natural ‘necessitation’, while accidents only report that two types of events happen to occur together.

Armstrong’s view may seem intuitively plausible, but it is arguable that the notion of necessitation simply restates the problem rather than solving it. Armstrong says that necessitation involves something more than constant conjunction: if two events are related by necessitation, then it follows that they are constantly conjoined, but two events can be constantly conjoined without being related by necessitation, as when the constant conjunction is just a matter of accident. So necessitation is a stronger relation than constant conjunction. However, Armstrong and other defenders of this view say very little about what this extra strength amounts to, except that it distinguishes laws from accidents. Armstrong’s critics argue that a satisfactory account ought to cast more light than this on the nature of laws.
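
Armstrong’s proposal is often put schematically as follows; the notation is standard in the literature on laws, though details vary from author to author. A law is a relation of necessitation N holding between universals F and G, and it entails, but is not entailed by, the corresponding regularity:

\[ N(F,G) \;\Rightarrow\; \forall x\,(Fx \rightarrow Gx). \]

The critics’ complaint can then be put sharply: beyond this one-way entailment, nothing further is said about what N itself is.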

Hume said that the earlier of two causally related events is always the cause, and the later the effect. However, there are a number of objections to using the earlier-later ‘arrow of time’ to analyse the directional ‘arrow of causation’. For a start, it seems possible in principle that some causes and effects could be simultaneous. What is more, the idea that time is directed from ‘earlier’ to ‘later’ itself stands in need of philosophical explanation - and one of the most popular explanations is precisely that the ‘movement’ from earlier to later derives from causation, taking ‘earlier’ to be the direction in which causes lie and ‘later’ the direction of effects. If we accept any such explanation, then we will clearly need to find some account of the direction of causation which does not itself assume the direction of time.

A number of such accounts have been proposed. David Lewis (1979) has argued that the asymmetry of causation derives from an ‘asymmetry of over-determination’. The over-determination of present events by past events - consider a person who dies after simultaneously being shot and struck by lightning - is a very rare occurrence; by contrast, the multiple ‘over-determination’ of present events by future events is absolutely normal. This is because the future, unlike the past, will always contain multiple traces of any present event. To use Lewis’s example, when the president presses the red button in the White House, the future effects include not only the dispatch of nuclear missiles, but also the fingerprint on the button, his trembling, the further depletion of his gin bottle, the recording of the button’s click on tape, the emission of light waves bearing the image of his action through the window, the passage of the signal current along the wire, and so on, and so on.

Lewis relates this asymmetry of over-determination to the asymmetry of causation as follows. If we suppose the cause of a given effect to have been absent, then this implies the effect would have been absent too, since (apart from freak occurrences like the lightning-shooting case) there will not be any other causes left to ‘fix’ the effect. By contrast, if we suppose a given effect of some cause to have been absent, this does not imply the cause would have been absent, for there are still all the other traces left to ‘fix’ the cause. Lewis argues that these counterfactual considerations suffice to show why causes are different from effects.
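
Behind this argument lies Lewis’s counterfactual analysis of causation, which can be sketched as follows; this is a simplified statement of his 1973 analysis, which the asymmetry of over-determination is meant to underwrite. An event e causally depends on a distinct event c just in case

\[ \neg O(c) \;\Box\!\!\rightarrow\; \neg O(e), \]

read ‘if c had not occurred, e would not have occurred’, where \( \Box\!\!\rightarrow \) is the counterfactual conditional. The asymmetry of over-determination makes such counterfactuals true from cause to effect but not from effect to cause.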

Other philosophers appeal to a probabilistic variant of Lewis’s asymmetry. Following the philosopher of science and probability theorist Hans Reichenbach (1891-1953), they note that the different causes of any given type of effect are normally probabilistically independent of each other; by contrast, the different effects of any given type of cause are normally probabilistically correlated. For example, both obesity and high excitement can cause heart attacks, but this does not imply that fat people are more likely to get excited than thin ones; yet the fact that both lung cancer and nicotine-stained fingers can result from smoking does imply that lung cancer is more likely among people with nicotine-stained fingers. So this account distinguishes effects from causes by the fact that the former, but not the latter, are probabilistically dependent on each other.
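
A small numerical illustration may help; the figures are invented for the example, not drawn from any study. Suppose smoking C has probability \( P(C) = 0.3 \), raises the probability of lung cancer E1 from 0.05 to 0.4, and raises the probability of stained fingers E2 from 0.02 to 0.8, the two effects being independent given C. Then

\[ P(E_1) = 0.3(0.4) + 0.7(0.05) = 0.155, \qquad P(E_1 \mid E_2) = \frac{0.3(0.4)(0.8) + 0.7(0.05)(0.02)}{0.3(0.8) + 0.7(0.02)} \approx 0.38, \]

so observing stained fingers more than doubles the probability of lung cancer, even though neither effect causes the other.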

However, there is another course of thought in the philosophy of science: the tradition of ‘negative’ or ‘eliminative’ induction. From the English statesman and philosopher Francis Bacon (1561-1626), and in modern times the philosopher of science Karl Raimund Popper (1902-1994), we have the idea of using logic to bring falsifying evidence to bear on hypotheses about what must universally be the case. Many thinkers accept in essence Popper’s solution to the problem of demarcating proper science from its imitators, namely that the former results in genuinely falsifiable theories whereas the latter do not, and considerations of falsifiability underlie many people’s objections to such ideologies as psychoanalysis and Marxism.
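
The logical engine of falsification is simply modus tollens: if hypothesis H entails an observable prediction E and E fails, then H is refuted,

\[ \bigl((H \rightarrow E) \land \neg E\bigr) \;\rightarrow\; \neg H. \]

No comparable rule allows any finite number of confirming observations to establish H, which is why Popper took falsifiability rather than verifiability as the mark of genuine science.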

Hume was interested in the processes by which we acquire knowledge: the processes of perceiving and thinking, of feeling and reasoning. He recognized that much of what we claim to know derives from other people secondhand, thirdhand, or worse; moreover, our perceptions and judgements can be distorted by many factors - by what we are studying, as well as by the very act of study itself. The main reason, however, behind his emphasis on ‘probabilities and those other measures of evidence on which life and action entirely depend’ is this: it is evident that all reasonings concerning ‘matter of fact’ are founded on the relation of cause and effect, and that we can never infer the existence of one object from another unless they are connected together, either mediately or immediately.

When we apparently observe a whole sequence, say of one ball hitting another, what exactly do we observe? And in the much commoner cases, when we wonder about the unobserved causes or effects of the events we observe, what precisely are we doing?

Hume recognized that a notion of ‘must’ or necessity is a peculiar feature of causal relations, inferences, and principles, and he challenges us to explain and justify the notion. He argued that there is no observable feature of events, nothing like a physical bond, which can properly be labelled the ‘necessary connection’ between a given cause and its effect: events simply are, they merely occur, and there is no ‘must’ or ‘ought’ about them. However, repeated experience of pairs of events sets up a habit of expectation in us, such that when one of the pair occurs we inescapably expect the other. This expectation makes us infer the unobserved cause or unobserved effect of the observed event, and we mistakenly project this mental inference onto the events themselves. There is no necessity observable in causal relations; all that can be observed is regular sequence. There is necessity in causal inferences, but only in the mind. Once we realize that causation is a relation between pairs of events, we also realize that often we are not present for the whole sequence which we want to divide into ‘cause’ and ‘effect’. Our understanding of the causal relation is thus intimately linked with the role of causal inference, because only causal inferences entitle us to ‘go beyond what is immediately present to the senses’. But now two very important assumptions emerge behind the causal inference: the assumption that like causes, in like circumstances, will always produce like effects, and the assumption that ‘the course of nature will continue uniformly the same’ - or, briefly, that the future will resemble the past. Unfortunately, this last assumption lacks either empirical or a priori proof; that is, it can be conclusively established neither by experience nor by thought alone.

Hume frequently endorsed a standard seventeenth-century view that all our ideas are ultimately traceable, by analysis, to sensory impressions of an internal or external kind. Accordingly, he claimed that all his theses are based on ‘experience’, understood as sensory awareness together with memory, since only experience establishes matters of fact. But is our belief that the future will resemble the past properly construed as a belief concerning only a matter of fact? As the English philosopher Bertrand Russell (1872-1970) remarked in the early twentieth century, the real problem Hume raises is whether future futures will resemble future pasts, in the way that past futures really did resemble past pasts. Hume declares that ‘if . . . the past may be no rule for the future, all experience becomes useless, and can give rise to no inference or conclusion’. And yet, he held, the supposition cannot stem from innate ideas, since there are no innate ideas in his view; nor can it stem from any abstract formal reasoning. For one thing, the future can surprise us, and no formal reasoning seems able to embrace such contingencies; for another, even animals and unthinking people conduct their lives as if they assume the future resembles the past: dogs return for buried bones, children avoid a painful fire, and so forth. Hume is not deploring the fact that we have to conduct our lives on the basis of probabilities, and he is not saying that inductive reasoning could or should be avoided or rejected. Rather, he accepted inductive reasoning but tried to show that whereas formal reasoning of the kind associated with mathematics cannot establish or prove matters of fact, factual or inductive reasoning lacks the ‘necessity’ and ‘certainty’ associated with mathematics. His position is therefore clear: because ‘every effect is a distinct event from its cause’, only investigation can settle whether any two particular events are causally related. Causal inferences cannot be drawn with the force of logical necessity familiar to us from deductive reasoning, but, although they lack such force, they should not be discarded. In the context of causation, inductive inferences are inescapable and invaluable. What, then, makes ‘past experience’ the standard of our future judgement? The answer is ‘custom’: it is a brute psychological fact, without which even animal life of a simple kind would be more or less impossible. ‘We are determined by custom to suppose the future conformable to the past’ (Hume, 1978). Nevertheless, whenever we need to calculate likely events we must supplement and correct such custom by self-conscious reasoning.

Nonetheless, the causal theory of reference will fail once it is recognized that all representation must occur under some aspect, and that the extensionality of causal relations is inadequate to capture the aspectual character of reference. The only kind of causation that could be adequate to the task of reference is intentional causation, or mental causation; but the causal theory of reference cannot concede that reference is ultimately achieved by some mental device, since the whole approach behind the causal theory was to try to eliminate the traditional mentalism of theories of reference and meaning in favour of objective causal relations in the world. The causal theory, though at present by far the most influential theory of reference, will prove to be a failure for these reasons.

If mental states are identical with physical states, presumably the relevant physical states are various sorts of neural states. Our concepts of mental states such as thinking, sensing, and feeling are, of course, different from our concepts of neural states, of whatever sort. But that is no problem for the identity theory. As J.J.C. Smart (1962), who first argued for the identity theory, emphasized, the requisite identities do not depend on our concepts of mental states or the meanings of mental terms. For ‘a’ to be identical with ‘b’, ‘a’ and ‘b’ must have exactly the same properties, but the terms ‘a’ and ‘b’ need not mean the same. The principle at work here is the indiscernibility of identicals: if ‘A’ is identical with ‘B’, then every property that ‘A’ has, ‘B’ has, and vice versa. This is sometimes known as Leibniz’s Law.
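
Stated compactly in second-order notation - a standard formulation, not Smart’s own wording - Leibniz’s Law reads:

\[ \forall a\,\forall b\,\bigl(a = b \;\rightarrow\; \forall P\,(Pa \leftrightarrow Pb)\bigr). \]

Note that the law concerns properties, not descriptions: the terms ‘a’ and ‘b’ may differ in meaning while naming one and the same thing, which is why the identity theorist can grant that mental and neural concepts differ.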

But a problem does seem to arise about the properties of mental states. Suppose pain is identical with a certain firing of c-fibres. Although a particular pain is the very same state as a neural firing, we identify that state in two different ways: as a pain and as a neural firing. The state will therefore have certain properties in virtue of which we identify it as a pain and others in virtue of which we identify it as a neural firing. The properties in virtue of which we identify it as a pain will be mental properties, whereas those in virtue of which we identify it as a neural firing will be physical properties. This has seemed to many to lead to a kind of dualism at the level of the properties of mental states. Even if we reject dualism of substances and take people simply to be physical organisms, those organisms still have both mental and physical states. Similarly, even if we identify those mental states with certain physical states, those states will nonetheless have both mental and physical properties. So disallowing dualism with respect to substances and their states simply leads to its reappearance at the level of the properties of those states.

There are two broad categories of mental property. Mental states such as thoughts and desires, often called ‘propositional attitudes’, have ‘content’ that can be described by ‘that’ clauses. For example, one can have a thought, or desire, that it will rain. These states are said to have intentional properties, or ‘intentionality’. Sensations, such as pains and sense impressions, lack intentional content and have instead qualitative properties of various sorts.

The problem about mental properties is widely thought to be most pressing for sensations, since the painful quality of pains and the red quality of visual sensations seem to be irretrievably non-physical. And if mental states do actually have non-physical properties, the identity of mental states with physical states would not sustain a thoroughgoing mind-body materialism.

The Cartesian doctrine that the mental is in some way non-physical is so pervasive that even advocates of the identity theory sometimes accepted it. The idea that the mental is non-physical underlies, for example, the insistence by some identity theorists that mental properties are really neutral as between being mental and physical. To be neutral in this way, a property would have to be neutral as to whether it is mental at all. Only if one thought that being mental meant being non-physical would one hold that defending materialism required showing that the ostensibly mental properties are neutral as regards whether or not they are mental.

But holding that mental properties are non-physical has a cost that is usually not noticed. A phenomenon is mental only if it has some distinctively mental property. So, strictly speaking, a materialist who claims that mental properties are non-physical must deny that mental phenomena exist at all. This is the eliminative-materialist position advanced by the American philosopher Richard Rorty (1979).

According to Rorty (1931-2007), ‘mental’ and ‘physical’ are incompatible terms. Nothing can be both mental and physical, so mental states cannot be identical with bodily states. Rorty traces this incompatibility to our views about incorrigibility: we take reports of one’s own mental states to be incorrigible, but not reports of physical occurrences. But he also argues that we can imagine a people who describe themselves and each other using terms just like our mental vocabulary, except that those people do not take the reports made with that vocabulary to be incorrigible. Since Rorty takes a state to be a mental state only if one’s reports about it are taken to be incorrigible, his imaginary people do not ascribe mental states to themselves or each other. Nonetheless, the only difference between their language and ours is that we take as incorrigible certain reports which they do not. So their language has no less descriptive or explanatory power than ours. Rorty concludes that our mental vocabulary is idle, and that there are no distinctively mental phenomena.

This argument hinges on building incorrigibility into the meaning of the term ‘mental’. If we do not, the way is open to interpret Rorty’s imaginary people as simply having a different theory of mind from ours, on which reports of one’s own mental states are not incorrigible. Their reports would thus be about mental states, as construed by their theory. Rorty’s thought experiment would then provide reason to conclude not that our mental terminology is idle, but only that this alternative theory of mental phenomena is correct. His thought experiment would thus sustain the non-eliminativist view that mental states are bodily states. Whether Rorty’s argument supports his eliminativist conclusion or the standard identity theory, therefore, depends solely on whether or not one holds that the mental is in some way non-physical.

Paul M. Churchland (1981) advances a different argument for eliminative materialism. According to Churchland, the common-sense concepts of mental states contained in our present folk psychology are, from a scientific point of view, radically defective. But we can expect that eventually a more sophisticated theoretical account will replace those folk-psychological concepts, showing that mental phenomena, as described by current folk psychology, do not exist. Since that account would be integrated into the rest of science, we would have a thoroughgoing materialist treatment of all phenomena. This argument, unlike Rorty’s, does not rely on assuming that the mental is non-physical.

But even if current folk psychology is mistaken, that does not show that mental phenomena do not exist, but only that they are not the way folk psychology describes them as being. We could conclude that they do not exist only if the folk-psychological claims that turn out to be mistaken actually define what it is for a phenomenon to be mental. Otherwise, the new theory would be about mental phenomena, and would help show that they are identical with physical phenomena. Churchland’s argument, like Rorty’s, depends on a special way of defining the mental, which we need not adopt. It is likely that any argument for eliminative materialism will require some such definition, without which the argument would instead support the identity theory.

Despite initial appearances, the distinctive properties of sensations may be neutral as between being mental or physical: in a phrase borrowed from the English philosopher and classicist Gilbert Ryle (1900-76), they are ‘topic neutral’. My having a sensation of red consists in my being in a state that is similar, in respects that we need not specify, to what occurs in me when I am in the presence of certain stimuli. Because the respect of similarity is not specified, the property is neither distinctively mental nor distinctively physical. But everything is similar to everything else in some respect or other. So leaving the respect of similarity unspecified makes this account too weak to capture the distinguishing properties of sensations.

A more sophisticated reply to the difficulty about mental properties is due independently to the Australian philosopher David Malet Armstrong (1926-) and the American philosopher David Lewis (1941-2002), who argued that for a state to be a particular sort of intentional state or sensation is for that state to bear characteristic causal relations to other particular occurrences. The properties in virtue of which we identify states as thoughts or sensations will still be neutral as between being mental or physical, since anything can bear a causal relation to anything else. But causal connections have a better chance than similarity in some unspecified respect of capturing the distinguishing properties of sensations and thoughts.
