Citation metadata

Author: David Topper
Editor: Brian S. Baigrie
Date: Mar. 30, 2012
Publisher: Charles Scribner's Sons
Document Type: Topic overview
Length: 20,379 words
Content Level: Level 4
Lexile Measure: 1130L


If we think of physics as originating in the minds of individuals, it is customary to begin with the early Greek philosophers. Perhaps the first of these philosophers was Thales of Miletus (c. 625-c. 547 BC), who spoke of all things as being composed of water. Later thinkers used a fourfold division: earth, water, air, and fire. In one sense they might be considered the first physicists, since their quest was analogous to the present-day search for the smallest building blocks of matter. But in another sense they were more like chemists, since the concept of a chemical element is a closer analog to these substances; indeed, the idea of earth, water, air, and fire as the elements of nature continued into the eighteenth century.

Pre-1543 Roots

Two early groups who had a major impact on the development of physics were the Pythagoreans and the atomists. Pythagoras (c. 560-c. 480 BC) and his followers set the stage for mathematizing the world by asserting that the essence of all things is number (specifically, geometry). Although their view of mathematics bordered on the spiritual (in somewhat the way that we speak of lucky numbers), a look at any of today's physics textbooks, pages filled with many mathematical formulas, will make clear the extent of their insight. In contrast, Leucippus of Miletus (fl. fifth century BC) and Democritus of Abdera (fl. late fifth century BC) conceived of a universe composed only of lifeless matter moving randomly in a void. This matter could be broken down into many invisibly small indivisible (atomic) units. According to the atomists, neither mind nor spirit existed in nature: This was a materialist view of the world.


The science of physics as taught from the late ancient world for centuries--well into the seventeenth century--was based primarily on the writings of Aristotle (384-322 BC). Although specific aspects of his ideas and thoughts relating to physics were often challenged, the overall picture was not displaced until the works of Galileo, Newton, and others during the Scientific Revolution of the sixteenth and seventeenth centuries. By then, Aristotle's physics was seen as a belief system grounded on ancient texts, taught by scholars in universities, but having no basis in experience or experiment. Yet that was not how it began.

Aristotle's physics was mainly a science of motion, and this science was based on common sense. If you push an object lying near you, it moves as long as you push it; no push, no motion. So a push (or a force) produces motion (or speed). The harder you push (i.e., the greater the force), the faster the object moves. Stated as a proportion: Speed (S) is proportional to force (F); in contemporary notation, S ∝ F. Aristotle also realized that the medium was a factor in motion, retarding or resisting it; thus the same force would not produce the same motion in, say, water as in air. So the greater the resistance of the medium (R), the slower the speed; thus S is inversely proportional to R (or S ∝ 1/R). Combining the two proportions, we obtain S ∝ F/R, which is Aristotle's law for such horizontal motion. The truth or falsehood of this law forms a central thread in the physics of motion from ancient times through the Scientific Revolution.
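Aristotle's proportion can be sketched numerically. This is a modern illustration, not an ancient calculation; the constant k and all units are arbitrary assumptions made only to show the law's behavior:

```python
# Aristotle's law of (horizontal) motion: speed is proportional to the
# applied force and inversely proportional to the resistance of the
# medium.  The constant k and the units are arbitrary illustrative choices.
def aristotle_speed(force, resistance, k=1.0):
    """S = k * F / R -- Aristotle's law for horizontal motion."""
    return k * force / resistance

# Doubling the force doubles the speed ...
assert aristotle_speed(2.0, 1.0) == 2 * aristotle_speed(1.0, 1.0)
# ... while doubling the resistance of the medium halves it.
assert aristotle_speed(1.0, 2.0) == aristotle_speed(1.0, 1.0) / 2

# As the resistance R approaches zero, the predicted speed grows without
# bound -- the deduction that led Aristotle to reject the vacuum.
for r in (1.0, 0.1, 0.01):
    print(f"R = {r:5.2f}  ->  S = {aristotle_speed(1.0, r):8.1f}")
```

Note how the last loop anticipates the next paragraph: the formula has no sensible value at R = 0.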

One important consequence of Aristotle's law was the deduction that a vacuum (or void) is impossible. For if there is no resisting medium (R = 0), the law implies that any force (however small) produces an infinite speed--a result that Aristotle correctly interpreted as physically impossible (how can something be in two places at once?) and hence meaningless. The result was the dictum, echoed down through the ages, that "nature abhors a vacuum." As a result, atomism, which implied a void, was also rejected.

Next, consider vertical motion. Heavy objects fall. Fire, which is not heavy, rises. A bubble rises in water. Since some things go up and some go down, Aristotle dismissed an external power (or force) as the cause of these motions. Therefore, the power must be internal. Because Aristotle accepted the four-element theory, he proposed that each element possessed a natural tendency to move toward its natural place. Working within the Earth-centered universe of the ancient Greeks, Aristotle argued that substances composed mainly of earth move downward toward the center of Earth (which is the center of the universe), thus forming a sphere. Above and on Earth is water; above the water is air, forming a shell around Earth; and beyond that presumably is a shell of fire going to the orbit of the Moon. Just as an acorn knows how to grow into an oak tree, these elements know how to move toward their natural places. Thus gravity (from the Latin word for heavy) is not caused by an attraction (or a pull) of Earth. Although some ancient Greeks had considered attractive (and repulsive) forces, Aristotle dismissed these notions as unscientific and mystical because they implied the idea of action-at-a-distance (forces operating without direct contact), and he did not believe in such invisible powers operating over distances. Later such powers or forces were called occult (i.e., hidden from ordinary sensation). Gravity for Aristotle was a natural tendency of heavy objects to seek the center of Earth, just as air has levity and seeks its natural place above water.

One obvious prediction from Aristotle's law of motion was that a heavy object should fall faster than a lighter one, since the former is more strongly seeking its natural place. This deduction also follows from common sense: Hold two objects of different weight in your hands and feel the heavier one pushing toward the floor with greater force than the lighter one. Accordingly, when you release them, the heavier one should reach the floor first since its speed of falling should be greater. All this follows from Aristotle's law. Hence this law was potentially quantifiable, but as far as we know, neither Aristotle nor his students performed any quantitative experiments, although they may have tried to demonstrate qualitatively the law of falling bodies by dropping objects of different weights.

In the ancient cosmos, Earth was at the center of the sphere of the stars, for the heavens were pictured as they appear to us. We still reproduce this experience in a planetarium by projecting lights onto a hemisphere. Aristotle argued further that Earth was a sphere. Thus there was symmetry to the cosmos: the spherical Earth at the center mirroring the sphere of the stars. Before Aristotle there was no consensus on Earth's shape, but after him there was little debate. Contrary to popular historical mythology, Earth was known to be a sphere from Aristotle's day right through the time of Columbus and Magellan.

Not only was a vacuum impossible within the realm of the earthly world of the four elements below the Moon; this was also true in the celestial cosmos beyond the Moon, the abode of the planets. So Aristotle postulated the existence of a very diffuse substance to fill that world. This fifth element was given various names over the ages, the most common one being the aether. This aether had to be very diffuse (more so than air) since the planets are not retarded in their motions around Earth.

Natural motion in the sublunary or terrestrial realm is vertical (up and down), toward and away from the center of the universe, which is the center of Earth. But the planets move around Earth, so their natural motion must be circular. Thus different laws of motion governed the celestial realm. Just as there were different elements in the two realms, making them essentially different, each had its own laws of physics. This overall picture of the cosmos and its laws, based on a commonsense view of our experience of the world, was easily accepted and assimilated by various cultures over the next millennium and a half.

But there were some problems with Aristotle's earthly physics. One major problem involved nonnatural or violent motion. Throw a rock (earth) up and it continues moving even after it leaves your hand; why, if a force is required to make it move? Or throw a rock (i.e., a projectile) away from you; what keeps it moving? The explanations of violent motion in the writings of Aristotle are not always clear and consistent, but in essence the argument goes something like this. Since an internal power makes the rock fall, something external keeps it moving after leaving your hand. This can only be the air (the medium) surrounding the rock. Just as water flows around a moving boat and forms swirling vortices at the back, so air similarly flows around the rock; the vortices push it farther. This external push eventually stops, because of friction, and then, by gravity alone, the rock falls. In other words, projectile motion is composed of two parts: a straight line away from the source of motion and a vertical fall. The reason for the division of the motion into two distinct parts was Aristotle's belief that two powers cannot operate on an object at the same time. Diagrams illustrating this two-part motion appear well into the seventeenth century.

Another problem with Aristotle's law of motion involved the subjects of electricity and magnetism, both of which seemed to imply action-at-a-distance. Electricity in the ancient world was static electricity. Magnetism involved the use of natural magnets (called lodestones), which attracted (or repelled) each other as well as pieces of metal. In Aristotle's framework such forces must have an external cause in direct contact with the body (as the air is for a projectile), so, to account for their behavior, he proposed an invisible substance around and in direct contact with magnets. An invisible substance was not without some evidence; after all, air is invisible but certainly exists. So why can there not be more diffuse substances, such as electrical or magnetic aethers?

Finally, there was the question of falling bodies. Perhaps the most widely performed experiment in the history of science is this one. If two quite different weights (e.g., a 1-pound and a 20-pound sphere) are dropped simultaneously from a considerable height (e.g., about 100 feet), the 20-pound sphere will reach the ground before the 1-pound sphere. So qualitatively Aristotle's law is confirmed. However, the actual difference in speed is much less than his law predicts. Since the difference in weight is 20-fold, when the 20-pound sphere hits the ground the 1-pound sphere should have fallen only about 5 feet; in fact, the 1-pound sphere will be only a few feet from the ground! Quantitatively, Aristotle's law breaks down. But without actually performing such an experiment, the intuitive "fact" that the speed of fall is proportional to weight would not be challenged.
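The arithmetic behind this comparison can be made explicit. The heights and weights follow the example above; treating weight as the force in Aristotle's S ∝ F is the only assumption:

```python
# Aristotle's prediction for the tower experiment described above:
# speed of fall is proportional to weight, so in the time the 20-pound
# sphere falls 100 feet, the 1-pound sphere should fall 1/20 as far.
height = 100.0             # feet, the drop height in the example
heavy, light = 20.0, 1.0   # pounds

predicted_light_fall = height * (light / heavy)
print(f"Aristotle predicts the light sphere has fallen "
      f"{predicted_light_fall:.0f} ft")  # 5 ft, i.e. still 95 ft up
# In a real drop, the light sphere is only a few feet behind when the
# heavy one lands: the small gap is due to air resistance, not weight.
```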

Aristotle's ideas were pondered long after his death through the Lyceum, a school he founded and which continued to flourish into the first century BC. The third head of the Lyceum was Strato (d. c. 268 BC), who is said to have made several revisions and critiques of his master's ideas, specifically involving falling bodies. Strato argued that all bodies have weight, and that air and fire go up because they float. More important, he seems to have been the first to conceive of acceleration (i.e., change in speed) for falling bodies. He noted that when water falls, it begins as a continuous stream but then breaks into drops that get farther and farther apart. Thus, rather than falling at a constant speed, as Aristotle said, the water seems to be speeding up as it falls. Strato also spent some time at the famous school of learning and research in Alexandria (in present-day Egypt); founded about 307 BC, this museum (literally, a temple to the muses) and library (one of the largest in the world at the time) became a center of intellectual activity for several centuries. Most of the famous scientists of the era studied there at some time in their lives.


For Aristotle, "physics" was primarily the science of the sublunary (terrestrial) world. Mathematics applied to this world was confined to the use of some proportions of quantities (such as speed and force). Exact mathematics, however, was applied only to the heavens: namely, the motions of the Sun, Moon, planets, and stars. Aristotle's qualitative physics was justified in the sense that he was studying the everyday "real" world, where objects move in a medium. Such a complex world is more difficult than an abstract one to deal with mathematically. Yet there was a tradition in Greek science of conceiving of a mathematical order to terrestrial physics, probably beginning with Pythagoras. This framework came to fruition in Alexandria after Aristotle, in the third century BC.

Archimedes (c. 287-212 BC) made a series of brilliant applications of mathematics to physics problems in mechanics (such as his law of the lever and the center of gravity) and hydrostatics (the equilibrium of fluids). His abstract approach to physical problems establishes him as one of the first mathematical physicists. But he had minimal impact on mechanics in the late ancient world and the Middle Ages; not until the sixteenth and seventeenth centuries was his approach revived.

Euclid (fl. 295 BC), who worked in Alexandria, was primarily a mathematician, and in his 13-book Elements he synthesized much of ancient geometry. What is remarkable about this work is that geometry as taught throughout the world today differs little from Euclid's original formulation. Geometry, of course, is an abstract, deductive, and mathematical system. Archimedes had applied this deductive approach to some problems in mechanics. Euclid did similarly for the study of light and began the science of geometrical optics.

Optics in the ancient world was the study of light and vision. One debate, probably originating in early Greek thought, concerned whether vision begins in the object or the eye, that is, whether light merely comes passively to the eye or we actively probe the world with the eye emitting some invisible substance. Even though Euclid believed the latter, he abstracted the emitting substance into a straight and narrow (mathematical) ray. He conceived of vision as a cone, with the eye at the vertex and the perceived object at the base; he then deduced, for example, how the sizes of objects are proportional to their distances from the eye.
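Euclid's proportion between apparent size and distance can be checked with a little trigonometry. This is a modern restatement: Euclid reasoned with the visual cone geometrically, not with arctangents, and the function name and units below are illustrative assumptions:

```python
import math

def visual_angle(size, distance):
    """Angle (radians) subtended at the eye by an object of a given size."""
    return 2 * math.atan(size / (2 * distance))

# For objects small relative to their distance, the visual angle is very
# nearly proportional to size/distance, as Euclid's proportions assume:
near = visual_angle(1.0, 10.0)
far = visual_angle(1.0, 20.0)
print(f"ratio of angles at distances 10 vs 20: {near / far:.3f}")  # close to 2
```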

Ptolemy (c. 100-c. 170), who also thought that the eye produces rays, continued this geometrical approach, and the result was the major work on optics in the ancient world, his book Optics. In it are the laws of reflection and the production of images by various mirrors. There is a rudimentary study of the bending of light (refraction) as it moves between different media. He did not discover the modern law involving the sines of angles (Snell's law), but it is significant that he apparently tested some of his results by performing physical experiments. This may be one of the first cases of real experimentation in physics that we know of.

It was in the application of mathematics to astronomy that Ptolemy was most famous, having written the Almagest (from the Arabic for "greatest"), which remained the textbook for astronomy through the time of Copernicus in the sixteenth century. Ptolemy and others at Alexandria, from the founding of the museum and library, had applied geometry to the motion of the heavens using a system of circles that in the geocentric model was able to predict with considerable accuracy the positions of the Sun, Moon, planets, and stars. This, along with mechanics and optics, constituted the exact sciences at the time.


Some Greek writings in science survived the end of the ancient world, preserved in handbooks and encyclopedias by the Romans, the early Christians, and (by the seventh century AD) Islamic culture. Many important works were preserved in Arabic translation, coming back to the West and translated into Latin during the revival of learning and the rise of the universities in the twelfth and thirteenth centuries.

As Aristotle's physics was copied, translated, retranslated, and digested, not surprisingly some critiques arose. Noteworthy are those involving projectile motion. John Philoponus of Alexandria (fl. sixth century AD) made some searing criticisms. He argued that a medium could not simultaneously push and pull a projectile; instead, it acted only to resist motion. Thus an internal power is required to keep a projectile moving. Unlike the internal power of gravity, which all bodies have by nature, this "impressed force" is given to the body by the initial mover. He argued further that the speed of a moving object is not inversely proportional to the resistance of the medium but that the resistance should be subtracted from the impressed force; thus his law of motion may be written as S ∝ F - R. From a mathematical viewpoint, this meant that if the resistance is zero, the speed is proportional to the force; physically, this implied that a void (or vacuum) could exist since the speed would not be infinite. He also noted that falling bodies do not fall with speeds proportional to their weight, but that the differences are very small. He explained the result as due to the medium acting similarly on the two weights. Most significantly, it seems that he actually performed this experiment.
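The contrast between the two laws is easy to tabulate. The units and the force value below are arbitrary illustrative assumptions; only the functional forms come from the text:

```python
# Aristotle: S proportional to F / R;  Philoponus: S proportional to F - R.
def aristotle(force, resistance):
    return force / resistance   # diverges as R -> 0

def philoponus(force, resistance):
    return force - resistance   # finite at R = 0: a vacuum is possible

# In a hypothetical void (R = 0), Philoponus's law gives a finite speed
# proportional to the impressed force, where Aristotle's blows up.
assert philoponus(10.0, 0.0) == 10.0

print(f"{'R':>5} {'Aristotle':>10} {'Philoponus':>11}")
for r in (5.0, 1.0, 0.1):
    print(f"{r:5.1f} {aristotle(10.0, r):10.1f} {philoponus(10.0, r):11.1f}")
```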

Philoponus's ideas were transmitted through Arabic translation by Avempace (Ibn Bajjah, d. 1139), surfacing in the West in the work of Jean Buridan (c. 1295-c. 1358) at the University of Paris. Buridan bestowed the name impetus on the internal power given to a moving body. He further proposed a quantification of impetus: namely, that it was proportional to the weight and speed of the body. This definition anticipated the modern concept of momentum, which is the product of mass and velocity. Buridan made the interesting hypothesis that perhaps the heavenly bodies moved by impetus; with little or no resistance from the aether, their impetus could keep them moving almost indefinitely. Importantly, this idea entails the possibility that the physics of motion of terrestrial bodies applies similarly to the heavens, contradicting Aristotle's idea of the two distinct realms. Later the concept of one law of motion for the heavens and Earth was fundamental to the view of nature conceived in the Scientific Revolution. Yet it must be remembered that Buridan was still working within an Aristotelian framework, in that he believed that motion required a continuous cause (whether internal or external).

The study of motion and its causal forces is called dynamics. Kinematics is the study of motion independently of forces, and it was also explored in the late Middle Ages. In the first half of the fourteenth century at Merton College of Oxford University, scholars introduced a clear concept of acceleration (possibly not explored since Strato in the third century BC) based on the notion of instantaneous speed. This was transformed into a visual format by the important work of Buridan's colleague in Paris, Nicole Oresme (c. 1320-82), who showed how to graph such motion using a vertical line for speed and a horizontal line for time. Oresme's figures are an early form of modern scientific graphing techniques. They are usually considered the first graphs, but in fact, the first graph was really the musical staff invented a few centuries earlier, since it entails a vertical direction (for pitch) and horizontal lines (for time).
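The Merton scholars' key kinematic result, the mean-speed theorem, falls out of Oresme's speed-versus-time picture: the distance covered under uniform acceleration from rest equals the area of the triangle under the graph, which is the same as traveling the whole time at the mean speed. A numerical check, in modern notation with arbitrary units:

```python
# Mean-speed theorem: a body uniformly accelerated from rest to final
# speed v covers the same distance as a body moving the whole time at
# the mean speed v/2.  In Oresme's graph this is the area of a triangle.
a, t = 2.0, 5.0             # acceleration and elapsed time (arbitrary units)
v_final = a * t

# Area under Oresme's speed-time triangle:
distance_triangle = 0.5 * v_final * t
# Travel at the mean speed for the same time:
distance_mean = (v_final / 2) * t

assert distance_triangle == distance_mean
print(f"distance = {distance_triangle} either way")
```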

Alhazen (Ibn al-Haytham, c. 965-c. 1040) did the most important medieval work in optics. He produced evidence that information in vision begins in the object, not the eye, but he retained the idea of a cone of vision and showed how it would work geometrically. When it was translated into Latin around 1200, his Optics had a major impact on Western optics.



It is customary to date the Scientific Revolution from 1543, the date of publication of Copernicus's book putting forth a heliocentric cosmology. The concept of a Sun-centered system was taken seriously only by a few scholars, but they turned out to be some of the best minds of that era. Perhaps the major challenge to this heliocentric concept was the need to explain motion on Earth (such as falling bodies) if Earth really moves. Posed as a question: Why does all motion on Earth act as if Earth were at rest? Galileo Galilei (1564-1642) met this challenge with his concept of inertia, which probably developed out of Buridan's idea of impetus.

Galileo realized, as had Buridan, that an object would move indefinitely if there were no resisting medium. But Galileo carried this idea further by arguing that it was not necessary to postulate an internal power to explain this motion. Just as an object at rest remains at rest until pushed, so an object in motion (moving at a constant speed) will do so unless a force is applied. Thus rest and motion are natural states of an object. A force is required only to overcome this tendency of bodies to stay in their state; this tendency is, as it was later called, inertia. He then used inertia to explain how an object, say a stone dropped from a tower, will fall to the base even if Earth moves. It had always been assumed that the stone should fall behind the tower on a moving Earth.

But Galileo realized that two powers are operating on the stone as it falls from the tower: gravity (the vertical component) and inertia (the horizontal component) since the stone is moving with the moving Earth. These two powers operate simultaneously (contrary to Aristotle), with the result that the stone moves as if it were a projectile thrown from the tower at rest. Because the tower is moving with Earth, as the stone follows its projectile path, the tower catches up to the stone when it hits the ground. Hence the falling stone falls to the bottom of the tower. That is why objects really do behave as if Earth does not move. This "experiment," which he did in his mind (called a thought experiment), contains the idea of the relativity of motion.

In thinking about motion, Galileo idealized the cases he considered by assuming no resisting medium. Thus when it came to falling bodies, he realized that they would accelerate until they hit the ground. From this, and inspired by Archimedes, he deduced the law of falling bodies (in a vacuum): The distance (d) an object falls is proportional to the time (t) of fall squared, that is, d ∝ t². Newton would later use this important law to work out his law of gravity. Galileo applied it to projectile motion. In the case of a stone falling from a moving tower, Galileo, as we have seen, rejected Aristotle's belief that two powers could not operate at the same time. Therefore, he applied simultaneously his laws of falling bodies and inertia and discovered the mathematical law of projectile motion: that the path is symmetrical and parabolic. This had an important application to warfare since it explained how cannons should be aimed. It was published for the first time in his last book, Two New Sciences (1638), written a few years before he died.
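Combining the two laws as Galileo did, uniform horizontal motion by inertia and uniform acceleration downward, gives the parabola directly. This is a sketch in modern notation; the value of g and the launch speed are illustrative assumptions:

```python
# Galileo's projectile: horizontal position grows linearly with time
# (inertia), while the vertical drop grows with time squared (d ∝ t²).
g = 9.8        # m/s^2, the modern value, used only for illustration
v_x = 10.0     # horizontal launch speed (assumed)

def trajectory(t):
    x = v_x * t             # inertia: constant horizontal speed
    y = -0.5 * g * t * t    # free fall: distance proportional to t^2
    return x, y

# Eliminating t shows y is quadratic in x -- the path is a parabola:
#   y = -(g / (2 * v_x^2)) * x^2
for t in (0.0, 1.0, 2.0, 3.0):
    x, y = trajectory(t)
    assert abs(y - (-(g / (2 * v_x**2)) * x**2)) < 1e-9
    print(f"t = {t:.0f} s   x = {x:5.1f}   y = {y:6.1f}")
```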

Galileo's work was almost entirely in kinematics. Gravity for him was just a name for the fact that on Earth, bodies fall; he did not know the cause. But he was sure that whatever its origin, gravity was localized around Earth because he did not believe in action-at-a-distance (or occult powers). Therefore, gravity did not extend to the Moon, as some had speculated in attempts to explain the tides on Earth. Instead, Galileo argued, erroneously as it turned out, that the motion of Earth causes the tides. He also seemed to believe that inertia on a large scale was circular; thus the Moon goes around Earth only by its circular inertia, and the planets move around the Sun in the same way. Although wrong, this concept was at least a way of uniting the laws of terrestrial motion with those of the heavens, an anti-Aristotelian presumption that was essential if Earth really was just another planet.

The French philosopher and mathematician René Descartes (1596-1650) believed that he could arrive at an understanding of the foundational laws of the world by pure reason alone. Like Aristotle and Galileo, he did not believe in occult powers. The cosmos he invented was a world of matter and motion, as conceived by the atomists, but without the void. Descartes's writings contain the first explicit presentation of linear inertia. Thus, for bodies to move in circular paths (such as the planets), a medium was constantly required to deflect them from moving in otherwise straight lines. Accordingly, Descartes's cosmos was a world filled with an aether of colliding particles whose vortex motions propelled objects in circular motions around the Sun, since he also adopted the Copernican model. Such a picture of the cosmos became associated with the concept of a mechanical world view, where all motion is caused by direct contact between parts, like the gears and wheels of a machine.

Fundamental to such a worldview was a law of impact for colliding particles. Descartes initiated this by defining an entity he called motion, which was the product of the size of a body and its speed, a quantity he believed was unchanged by collisions. His idea of motion (like Buridan's impetus) anticipated the modern concept of momentum, mv (mass multiplied by velocity), but without clear definitions of either mass or velocity as a vector. Descartes believed that in the beginning God had imparted a fixed amount of such motion to the universe, and since then it has worked like clockwork without running down. The idea of a clockwork universe developed from this conception.

Since Descartes was more interested in the "big picture," he did not develop further his law of impact. It was left to his disciple, the brilliant Dutch mathematician Christiaan Huygens (1629-95), to work this out and produce what today is the law of conservation of momentum. Significantly, he used Galileo's principle of relative motion to correct Descartes's erroneous rule of collision, which, appropriately for a Dutchman, he conceived of in terms of boats moving along canals. In the course of this work, he discovered that another quantity was conserved, specifically in collisions between elastic bodies; the quantity was body size times velocity squared, which would later (as mv²) form the basis of the concept of kinetic energy.
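Huygens's two conserved quantities can be verified for a one-dimensional elastic collision. The outgoing-velocity formulas are the modern textbook results, not Huygens's own derivation, and the masses and speeds are arbitrary:

```python
# 1-D elastic collision: both momentum (m*v) and the quantity m*v^2
# (Huygens's discovery, later the basis of kinetic energy) are conserved.
def elastic_collision(m1, v1, m2, v2):
    """Outgoing velocities for a perfectly elastic head-on collision."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

m1, v1, m2, v2 = 2.0, 3.0, 1.0, -1.0
u1, u2 = elastic_collision(m1, v1, m2, v2)

# Momentum is conserved ...
assert abs((m1 * v1 + m2 * v2) - (m1 * u1 + m2 * u2)) < 1e-9
# ... and so is m*v^2 (in elastic collisions only, as Huygens found).
assert abs((m1 * v1**2 + m2 * v2**2) - (m1 * u1**2 + m2 * u2**2)) < 1e-9
print(f"after the collision: u1 = {u1:.3f}, u2 = {u2:.3f}")
```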

Since Huygens was working within the framework of Descartes (i.e., a Cartesian framework), a problem such as a planet orbiting the Sun or the Moon orbiting Earth involved reconciling linear inertia with rotational motion. Huygens's insight into this problem was his realization that for any rotating object the tendency to pull away from the center (which he named centrifugal force) is due to the inertia of the object. Representing inertia as a straight-line tangent to a circle, Huygens was able to derive a mathematical expression for this force: For any rotating object with velocity (v) at distance (r) from the center, the centrifugal force is proportional to the quantity v²/r. Unbeknown to him, and of course independently, in England Isaac Newton (1642-1727) found the same relationship. Huygens went on to use this formula to derive equations for the motions of pendulums, and from this work he designed one of the first precision clocks.

If Galileo illustrates the revival of mathematical physics as first expressed in the work of Archimedes, Huygens and Newton represent its coming to fruition. Whether true or not, the story of Newton in the mid-1660s watching an apple fall and conjecturing that the force on the apple may also be applied to the Moon around Earth contains the essence of what became his theory of gravity. He derived the same relationship for the force on a rotating object as had Huygens, except that his force pointed the opposite way: toward the center; he called it the centripetal force. Newton assumed that the Moon's natural tendency to move in a straight line (inertia) was deflected by the centripetal pull of gravity toward Earth. Since the Moon continually "falls" by Galileo's law of falling bodies, he deduced that this centripetal pull or force (F) on the Moon was inversely proportional to its distance (d) from Earth, squared. This was the inverse-square law of gravity: namely, F ∝ 1/d². In comparing this force on the Moon with the force of gravity on Earth, he found them to be very close. Thus Newton was the first to conceive of the possibility of an artificial satellite, which he once illustrated in a manuscript published after his death.
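Newton's "Moon test" can be redone with modern numbers: if gravity falls off as 1/d² and the Moon is about 60 Earth radii away, its centripetal acceleration v²/r should be about g divided by 60². All the numerical values below are modern approximations, assumed for illustration:

```python
import math

# Newton's Moon test with modern approximate values (all assumed here).
g = 9.81                   # m/s^2, gravity at Earth's surface
r_moon = 3.84e8            # m, mean Earth-Moon distance
T = 27.3 * 24 * 3600       # s, sidereal month
earth_radii = 60.3         # Moon's distance in Earth radii

# Centripetal acceleration of the Moon: a = v^2 / r, with v the orbital speed.
v = 2 * math.pi * r_moon / T
a_moon = v**2 / r_moon

# Inverse-square prediction: surface gravity diluted by (distance in radii)^2.
a_predicted = g / earth_radii**2

print(f"Moon's centripetal acceleration: {a_moon:.5f} m/s^2")
print(f"inverse-square prediction:       {a_predicted:.5f} m/s^2")
assert abs(a_moon - a_predicted) / a_predicted < 0.05  # agree within ~5%
```

The close agreement is exactly the comparison that convinced Newton gravity reaches the Moon.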

This was an extraordinary result, but does it mean that gravity extends to the Moon? If so, would this not entail action-at-a-distance, or occult forces? Besides this conceptual problem, there was a mathematical one: In making his calculation Newton had used a circle for the orbit of the Moon. But he was convinced that Johannes Kepler (1571-1630) was right; the heavenly orbits are not circular, as was still assumed by Galileo and Descartes, but instead, are elliptical. It was a formidable problem to deduce the force law from an ellipse.

Sometime later Newton solved it, and it appeared along with many other solutions in his masterpiece, Mathematical Principles of Natural Philosophy (1687), usually just called Principia or the Principia (Latin for "Principles"). As had Descartes, Newton asserted the law of linear inertia. He also presented a clear definition of mass as the amount of matter possessed by an object independent of its weight caused by gravity. Using Descartes's concept of motion now as the product of mass and velocity, he put forward the basis of what would become his law of motion: that the force on a moving body is proportional to the change in motion. This means, first, that for a moving object, a force produces a change in velocity; without the force, the speed remains constant. (The contrast with Aristotle's law, where continuous force produced speed, was stark.) Second, the resistance to this change of motion (or speed) is due to the mass of the object (not necessarily the medium). Newton went on to show that the parabolic motions of projectiles as well as the elliptical orbits of the planets entailed the inverse-square law of gravity. And contrary to Galileo's attempt at explaining the tides by the motion of Earth, Newton showed mathematically that the tides are caused by the gravitational attractions between Earth, Sun, and Moon. It was a brilliant synthesis of the works of Galileo, Descartes, and Kepler. Newton had truly created one law of physics uniting the heavens and Earth.

Of course, all of this mathematical physics took place seemingly in a vacuum. How, therefore, could gravity extend across space and not be an occult force? Newton never really answered this question. At different times he gave contradictory answers. In Principia, he stated that our knowledge of the mathematics of gravity was sufficient, going so far as to imply that the question went beyond the bounds of science. Yet he did hint that God might be the ultimate cause. Elsewhere he postulated that some special form of aether might explain the gravitational attraction throughout the universe. However, this conjecture was at odds with Principia. In the middle section of the book, he had worked on the problem of motion in media, essentially applying his laws to continuous bodies rather than just point masses. In doing so he began the task, completed in the next centuries, of working out the laws of fluid motion, heat flow, and so on. In his rudimentary calculations he had shown that the planets would not move according to Kepler's laws of motion if a medium were assumed; in short, Newton had mathematically disproved a Cartesian (or apparently any) aether theory of the solar system. The Newtonian universe evidently was empty and gravity was an occult force. In time, many scientists (especially mathematical physicists) became accustomed to working with forces operating across space, without questioning the source of such forces.


The seventeenth century saw a leap in the invention of scientific instruments, principally the telescope, microscope, thermometer, barometer, air pump, and precision clock, all of which are still in use. With instruments went experiments.

Experimentation refers to the active manipulation of the world to extract empirical data rather than a passive recording of data. From this point of view, there was little experimentation in science up to this time (with Philoponus's work on falling bodies and Ptolemy's work in optics perhaps being exceptions).

Of course, the science of astronomy, from the beginning, was based on empirical knowledge, but it was not experimental. The invention of the optical telescope (as well as the microscope) in the seventeenth century signaled a change in the range of sense data; further developments in the twentieth century of radio, x-ray, infrared, and other telescopes extended that range and raised doubts about how passive this work is. Nevertheless, there is a meaningful distinction between such extracting of data (really, the gathering of light in all its forms) and the more active manipulation entailed in direct experimentation.

Galileo neither invented the telescope nor was the first to place it in the service of astronomy. But he was the first to study the heavens systematically, using his skills as an artist (he was trained in art and taught drawing) and an observer. As a result, he discovered the mountains of the Moon, the moons of Jupiter, the phases of Venus, and sunspots, among other sights that supported the Sun-centered system.

The subject of Galileo as an experimenter is epitomized by the story of his dropping two different weights from the leaning tower of Pisa. Supposedly, he showed that the weights would hit the ground at about the same time, confirming his theory that bodies fall independent of their weight. Many historians have relegated this tale to the realm of mythology, like the story of Newton and the apple. We know that Galileo presented a logical argument (or thought experiment) for why bodies in a vacuum fall at the same speed. Since three identical weights fall at the same speed, if we combine two of them (thus making one weight twice as heavy as the other), the two will still fall at the same speed (since nothing has changed physically). Thus all weights will fall at the same speed. Because Galileo arrived at this result by logic, and because any physical experiment would involve air and hence the weights would not necessarily fall at the same speed, it has been argued that Galileo would not and hence did not perform it. But not all historians agree. We know from more recent studies of Galileo's notebooks that he performed many experiments by, for example, rolling small spheres down inclined planes and launching them as projectiles. These experiments were done meticulously and verified his law of the parabolic shape of a projectile's path, which he had also arrived at by logical, mathematical deduction. So there is every reason to assume that Galileo actually performed an experiment with falling weights, at least to show that Aristotle's law is quantitatively wrong.
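Galileo's logical argument, and the complication that air introduces, can be sketched numerically. This is a modern illustration under simple assumptions (quadratic drag, hypothetical masses and heights), not a reconstruction of any experiment Galileo performed:

```python
import math

def fall_time(mass, height, g=9.81, drag_coeff=0.0, dt=1e-4):
    """Numerically integrate a body falling from rest through a given
    height. Air resistance is modeled as a quadratic drag force
    F_drag = drag_coeff * v**2. With drag_coeff = 0 (a vacuum), mass
    cancels out of the equations and the fall time is mass-independent."""
    v, y, t = 0.0, 0.0, 0.0
    while y < height:
        a = g - (drag_coeff / mass) * v * v  # net downward acceleration
        v += a * dt
        y += v * dt
        t += dt
    return t

# In a vacuum both weights land together, as the thought experiment demands.
t_light = fall_time(mass=1.0, height=10.0)
t_heavy = fall_time(mass=10.0, height=10.0)
print(abs(t_light - t_heavy) < 1e-9)  # True: identical fall times

# With air resistance the heavier body arrives first, which is why a real
# drop could show Aristotle's law to be only *quantitatively* wrong.
print(fall_time(1.0, 10.0, drag_coeff=0.05) >
      fall_time(10.0, 10.0, drag_coeff=0.05))  # True
```

In the vacuum case the computed time agrees with the closed-form result t = sqrt(2h/g); the drag coefficient used here is an arbitrary illustrative value.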

Robert Hooke (1635-1703) was also a meticulous observer and an ingenious thinker. It was said that as a boy he constructed many imaginative mechanical toys; once, when seeing a clock being dismantled, he made a wooden replica. His ingenuity was revealed through his design, invention, and perfection of scientific instruments, such as the compound microscope, wheel barometer, reflecting telescope, crosshair sight, air pump, and spring control for the balance wheel in watches. As curator of experiments at the Royal Society, he was responsible for performing experiments at weekly meetings. He anticipated qualitatively Newton's hypothesis that the Moon "falls" by a combination of inertia and a force directed toward Earth; he also proposed the idea of universal gravity, but he did not carry through the mathematical demonstrations.

Like Galileo, Hooke was an artist whose drawing talent is revealed in the marvelous illustrations in his Micrographia (1665), one of the first major works on microscopic observations. The book also contains his work on optics. Especially important are his observations of the colors in thin, transparent films (e.g., oil on water). He examined these colors in many situations: mica, soap bubbles, air layers between glass sheets, and many more. He realized that the colors were periodic since the spectrum repeated. This work inspired Newton and Huygens to study the phenomenon, from which they worked out a theory of what today are called Newton's rings. Hooke's concept of light as a pulse in a medium was one of the wavelike hypotheses of the seventeenth century.

Huygens also used the wave model for light, partially because he worked within the Cartesian framework and this implied that space is full. Descartes assumed that light was an instantaneous pressure through a medium. Vision he viewed as analogous to a blind man using a cane, where the feeling of an object is transmitted immediately through the cane to the hand. Huygens then conceived of light as a pulse in the aether and developed the concept of a wavefront, where each point is a center of a spherical wave with the summation of these waves forming a wavefront; this model allowed a mathematical formulation from which he derived the laws of reflection and refraction. For him all this was a mechanical explanation of light. But these wavelike models were not really wave theories in the modern sense since they were not based on periodic waves or undulations in a medium.

Like Galileo, Huygens usually derived his results first by mathematical deduction and then tested them by experiment. He stressed the importance of experience and experiment, for he did not agree with Descartes that truth could be deduced by reason alone. He performed experiments in optics and mechanics. For the latter he tested his theories of colliding bodies, falling bodies, rotating bodies, and the motion of bodies in media. He once demonstrated his laws of collision before the Royal Society in London. He invented the pendulum clock and, independently of Hooke, he assembled a watch controlled by a spiral spring attached to the balance, an idea that Hooke believed had been stolen from him. Huygens and his brother were expert lens grinders who built microscopes and telescopes of superior quality. With their telescope Huygens discovered the rings of Saturn, which he interpreted as evidence of the Cartesian vortex theory.

Just as in Principia, where Newton assumed a world of empty space, so in his initial theory of light Newton contradicted the aether explanations of the time, especially regarding color. The established theories assumed that colored light, such as the rainbow, was a modification of white light. In a famous set of experiments sending rays of light through prisms and observing their colored spectra, Newton demonstrated that white light is a mixture of colors and that the prisms do not produce colors by modifying the incoming light; instead, the prism separates the colors already existing in the white light, with the distinct colors refracting by different amounts. From this he concluded that light must have a particlelike or corpuscular nature. His later work on thin films involved detailed experimental measurements of the thickness of the films and the diameter of the rings of color. His explanation of these Newton's rings employed a hypothesis about vibrations, implying a more wavelike model. Newton published this work in his second great book, Opticks (1704). He and Huygens thus bequeathed to the next generation rather ambiguous notions about the nature of light (both particle-like and wavelike), nevertheless based on solid experimentation.

The breakthrough in the optics of vision was primarily the work of Kepler. From the classical and medieval texts he borrowed the geometrical construction of light rays but combined this with the realization that vision involves light coming from the object to the eye. He showed that each point of the object is the apex of a cone of light coming to the eye, and that the lens focuses the light on the retina of the eye. This was the basis of the modern theory of the physiology of the eye as a camera with focusing properties. It also separated physics and physiology, so that geometrical optics was henceforth studied independent of vision.

Eighteenth Century


That the eighteenth century was a golden age of mechanics is epitomized by the fact that what today is taught to physics and engineering students as Newtonian physics actually has less to do with the work of Newton than with the mechanics of Euler and Lagrange. Newton had worked out the physics of motion of two bodies acting as mass-points in a vacuum and interacting by a gravitational force (the inverse-square law). He then applied this to the motions in the solar system (e.g., the Sun and Earth, or the Sun and a comet). The eighteenth-century mathematical physicists extended this work, through the development of calculus, to three bodies (e.g., the Sun, Earth, and Moon) and more.

In the middle section of Principia, Newton attempted to extend his work to include finite bodies and resisting media (i.e., ordinary objects and fluid media on Earth). Some cases he attempted were the speed of sound in a gas, a fluid flowing through a hole, and a sphere moving in a medium. But Newton had little success in solving these formidable problems; such motion is much more complex than a few mass points moving in empty space. Yet he set the problems for the next century, which auspiciously produced several brilliant thinkers who corrected his work and developed the subject of fluid mechanics. They also went beyond, to other areas, such as the motions of vibrating strings and elastic bodies. Virtually all of this work was done as mathematical deductions from a priori (i.e., nonexperimental) principles, as if inspired as much by Archimedes as Newton, but, of course, grounded on fundamental physical insights. Using mathematical symbols to represent physical quantities, these scholars were able to deduce relations between these quantities.

The Swiss mathematician Leonhard Euler (1707-83) has been called the most prolific mathematician of all time. He applied much of that mathematical acumen to physics, clarifying and simplifying concepts and problems. The form of Newton's law written in textbooks today, namely F = ma, was first formulated by Euler in midcentury (only by hindsight is Newton's law in Newton's work). In fluid mechanics Euler realized the important part played by pressure in fluid flow, and worked out an equation of flow for an ideal fluid. He made contributions to the motion of sound in air and the elasticity of materials, and showed that the center of mass of the solar system (not just the Sun) should be used in calculating Kepler's elliptical orbits.

The publication in 1788 of Analytical Mechanics by the Frenchman Joseph Louis Lagrange (1736-1813) was another step in the quest for a pure mathematical physics of motion, for he attempted a complete algebraic approach to mechanics so as to reduce the subject to a few mathematical formulas. He was proud that his book contained no diagrams and few examples or applications. A later mathematician called it a "scientific poem." It contained what today is called Lagrange's equation, an abstract formulation of Newton's law.

Building on Lagrange's work, another French mathematician, Pierre-Simon de Laplace (1749-1827), solved a problem left by Newton, the stability of the solar system. Being unable to solve the motion of three bodies interacting by gravity, Newton concluded that the solar system is extremely unstable and probably held together by God's power. Laplace showed that the tides affect Earth's rotation. He then worked on the orbits of the planets, finding upper and lower bounds to their "eccentricity" (the elongation of their elliptical orbits); he went on to calculate how the gravitational forces among the planets affect their motions and deduced that, contrary to Newton, the solar system is extremely stable. This was published in his monumental five-volume work, Traité de mécanique céleste (1798-1827). Laplace is supposed to have said that God is therefore just a hypothesis. This pronouncement is probably not a statement of atheism but rather an expression that the laws of mechanics (and hence the celestial machine that God made) work perfectly without the need for the constant intervention that Newton assumed.


Rational mechanics growing from Newton's Principia represented only one branch of physics. Other subjects were being explored starting in the seventeenth century: heat, light, electricity, magnetism, and chemical phenomena. Newton's Opticks, with an emphasis on experiment and accompanied by much speculation about the underlying structure of things, was often the source of inspiration for work on these subjects. Yet most of the experiments performed in the eighteenth century, until late in the century, were qualitative.

The concept of an imponderable (i.e., something weightless) was a key hypothetical entity in this tradition. Probably the first imponderable was fire; later there was the aether. In the eighteenth century most imponderables were conceived of as fluids. In heat theory there was caloric, a fluid that explained why, when hot and cold bodies are in contact, the cold one gets hot, and vice versa--because caloric flows from the hot to the cold body. Since the weight of a body does not change when heated, caloric is weightless. The concept of a calorie as a unit of food energy is a remnant of this hypothesis.

Although Newton's primary concept of light was particulate, he had speculated about a medium to explain Newton's rings. Various wavelike hypotheses were further explored throughout the century, and these required the existence of an optical aether. Indeed, imponderables were often conceived of as various types of aethers. There was also speculation about the ultimate composition of these fluids or aethers. Sometimes they were thought to be continuous, but more often they were assumed to be composed of very small ("subtle") particles, which could penetrate the particles of ordinary matter. In this way the Cartesian tradition continued into the eighteenth century.

Forces accompanied the particles of aether. The aether particles were attracted to matter but were mutually repulsive. The case of static electricity shows how this theory worked. For this, two fluids were required, each with mutually repelling particles but such that the particles of the two fluids attracted each other. Call one fluid (+) and the other (-), and consider two bodies. If both bodies have the (+) fluid, they will repel each other; similarly, if they both have the (-) fluid. But if one body has the (+) fluid and the other the (-) fluid, they will be attracted to each other. A similar hypothesis of two magnetic fluids was used to explain magnetism. Of course, the (+) and (-) symbols are still used today. Early in the century only static electricity was known; later, electric currents were created. Clearly, such currents, almost by definition, are easily conceptualized as flowing fluids.

One of the most important quantitative experiments in electricity was performed by Charles Augustin Coulomb (1736-1806) when he measured the electrical force between two electrical charges. Using a torsion balance (a device that measures very small forces by the twisting of a suspended wire) and reasoning by analogy with Newton's law of gravity, he found that the electrical force also obeys an inverse-square law. Henry Cavendish (1731-1810) in England later used a similar torsion balance to measure the gravitational force between two masses (actually, he was trying to measure the density of Earth). Coulomb's result, called Coulomb's law, was independent of any speculation about electrical fluids or aethers.
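Coulomb's result can be stated compactly in modern form. The sketch below uses today's SI constant k, a convention that postdates Coulomb; the sign convention (positive force for repulsion, negative for attraction) mirrors the two-fluid (+)/(-) picture described above:

```python
# Coulomb's inverse-square law: F = k * q1 * q2 / r**2.
K = 8.9875517923e9  # Coulomb's constant in N*m^2/C^2 (a modern SI value)

def coulomb_force(q1, q2, r):
    """Force between two point charges separated by distance r.
    Positive = repulsion (like charges), negative = attraction."""
    return K * q1 * q2 / r**2

# Doubling the separation quarters the force: the inverse-square signature.
f_near = coulomb_force(1e-6, 1e-6, 0.1)
f_far = coulomb_force(1e-6, 1e-6, 0.2)
print(round(f_near / f_far, 9))  # 4.0

# Opposite "fluids" attract: the force comes out negative.
print(coulomb_force(1e-6, -1e-6, 0.1) < 0)  # True
```

The charge values and distances are arbitrary illustrative numbers, not measurements from Coulomb's apparatus.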


Much of what constituted science in the seventeenth century took place outside the universities. Steeped in Aristotelianism, many university professors were opposed to the "New Philosophy," as science was often called, and its strong critique of Aristotle. Renaissance humanism, with its study of classical texts, had previously taken place among informal groups outside the universities. So in the same way the new scientists began organizing themselves into various groups, such as the Accademia dei Lincei (the Academy of the Lynx) in Rome, where Galileo was a member, or the Accademia del Cimento (the Academy of Experiment) in Florence. Similar informal groups arose in other countries.

The major organizations founded in the seventeenth century were the Royal Society of London, a private organization that was granted official sanction in 1662, and the Académie Royale des Sciences (the Royal Academy of Sciences) in Paris, officially established in 1666 by Louis XIV's minister of finance. Financed by the government, the French Academy admitted only the scientific elite. In contrast, the Royal Society was open to all interested in science. It began in 1660 when a group in London formed a scientific club they called the "College for the Promoting of Physico-Mathematical Experimental Learning." Two years later it received the king's seal (but not his money) and was renamed "the Royal Society for the Improvement of Natural Knowledge," with Baconian utilitarianism implicit in its title. An important brainchild of the society was the first scientific journal, the Philosophical Transactions; it still exists, just as the society is the oldest such organization still in existence.

Smaller organizations were formed in major cities in other countries, such as the Berlin Academy and the St. Petersburg Academy, Euler being a notable member of the latter. They also spawned numerous local (provincial) societies that grew and flourished throughout the next two centuries. By the end of the nineteenth century, there were more than 100 such societies in towns throughout Great Britain. The nineteenth century constitutes a transition in the teaching, learning, and organization of science. Importantly, the universities returned as centers of science, a process that continued into the twentieth century. During the French Revolution the Académie was suspended temporarily, but afterward it returned as the chief scientific institution in France. The revolution also secularized the educational system and established an engineering school, the École Polytechnique, with a strong emphasis on applied science. The curriculum was rather narrow, educating "specialists" without a broad range of learning. Mathematics was emphasized; both Lagrange and Laplace initially taught there.

In contrast, the German system of university education was based on Naturphilosophie, a sweeping view of science encompassing all of nature and viewing the world as a whole. Curriculum innovations around 1830 included the creation of the physics seminar and training in both theoretical and experimental science. Also important was the rise of laboratories, used for both research and teaching, some with government funding. As a result, physics emerged as a discipline first in Germany and then, as the German model was adopted by other universities, in France, the United States, and Great Britain. In short, science was becoming a profession.

The 1830s also saw a reform in British education from within. In the early part of the century the universities played a minor role in science. But public lectures on science were quite popular, delivered at various "institutes" throughout the country. The American Count Rumford (Benjamin Thompson, 1753-1814), who had emigrated after the American Revolution, founded the Royal Institution in 1799; his aim was to educate the lower classes. Also, the Society for the Diffusion of Useful Knowledge was formed in 1825; later came the Imperial College of Science and Technology and the science museums.

The reforms at Cambridge and Oxford universities originated in an undergraduate club at Cambridge (a notable member being Charles Babbage, most famous for his early conceptualization of computing devices). The club attempted to reform the mathematics curriculum, which was based on the antiquated and cumbersome system of Newtonian calculus, whereas the continental system, based on the method of Gottfried Wilhelm Leibniz (1646-1716), was much more elegant and easier to use. Babbage (1792-1871) was a radical thinker with a more general critique in mind, publishing in 1830 his Reflections on the Decline of Science in England. Noting that amateurs traditionally ran science, Babbage argued that professionally trained teachers were now needed to attract talented students. Moreover, he charged, the Royal Society was a bastion of privilege run by men with little talent for science; and there were growing complaints about the lack of discussion and debate after papers were presented at its meetings. Babbage's critique was widely discussed and debated. It led to university reform and, in 1831, the creation of the British Association for the Advancement of Science (BAAS). Membership in the BAAS was based not on the ability to pay but on an interest in science; it attracted the emerging professionals. Also, debate was encouraged at its annual meetings, which took place in different cities throughout the United Kingdom and Commonwealth. Similar societies followed in other countries, such as the American Association for the Advancement of Science (AAAS) and the more elitist National Academy of Sciences.

Thus the nineteenth century bequeathed to us our modern image of science as a subject taught at a college or university, consisting of lectures, seminars, and labs, and ultimately producing, especially for those proceeding to graduate school, the professional scientist.

Nineteenth Century


When humans begin to think, they may also think about thinking. Witness Immanuel Kant's (1724-1804) Critique of Pure Reason, questioning the limits of reason. When they think about the physical world, they may think about their thinking about the world. Are there limits to what we can know? Aristotle's teacher Plato pondered this question. He believed that reason could reach absolute knowledge about matters of philosophy (ethics and beauty) but not the physical world (science). At most we can construct hypotheses that work (such as models of the heavenly motions) but whose truth is unknowable. This attitude toward scientific knowledge is sometimes called phenomenalism because it is based on the principle that we can account for a phenomenon (since the model fits the empirical data) but cannot penetrate its reality.

Aristotle, in contrast, was a realist, in that he was convinced that by induction from sense data we could arrive at truths about the world. However, this empirical side of Aristotle got lost in the subsequent centuries of copying, translating, and assimilating his works. By the late Middle Ages, Aristotelianism was a system of rational deductions about the world.

During the Scientific Revolution, the issue arose again; recall the difference between Bacon's emphasis on induction and Descartes's use of deduction. Yet for most practicing scientists there was more of a mutual interplay between theory and experiment. The question of phenomenalism or realism played itself out around the question of the reality of a moving Earth and the physics that accompanied it. Galileo, Descartes, Kepler, Huygens, Newton, and other major scientists were convinced of the truth of the Copernican system. Those who opposed it argued either from a physical viewpoint (using the relativity of motion to assert the "truth" of both models) or a theological one (saying that human knowledge is limited in such matters unless supported by divine revelation).

But even with the victory of the Copernican system in the wake of the scientific revolution, realism was not completely triumphant. True, the reality of a moving Earth was settled, but other matters of a fundamental nature remained, such as the reality of imponderables, atoms, and especially forces. Newton himself had introduced phenomenalism at the end of Principia. Refusing to explain the cause of gravity, he said it was sufficient that we know the law of gravity and how it accounts for the motions of the planets and tides around Earth, an argument that would have pleased Plato.

Championed by Newton, phenomenalism threaded its way through the eighteenth and into the nineteenth century, especially among mathematical physicists. An eloquent example in this tradition is the famous textbook The Analytical Theory of Heat (1822) by the Frenchman Joseph Fourier (1768-1830), containing equations for heat flow and what became Fourier series. He makes it clear at the start that like Newton on gravity, he is saying nothing about the physical nature of heat and thus avoiding the debate over heat as caloric or the motion of particles. Using only observable and measurable quantities, Fourier deduces the mathematical relationships among them, and this constitutes a sufficient and complete theory of heat.

Of course, independently of any debate over the nature of reality, the sciences in general were seen as making progress. Science coupled with technology was initiating the industrial revolution, and accompanied by the professionalization of science, Bacon's vision for the attainment of a complete science and the prosperity of humankind seemed to be on the horizon. The doctrine of progress, with science as the role model, pervaded the intellectual, economic, and social worlds of the nineteenth century.

The philosophical writings of Auguste Comte (1798-1857) contributed to this vision. His study at the École Polytechnique was possibly the source of his conception of a society run by an elite corps of scientists and engineers. He adopted the term positivism to designate his particular dream. Both society and the individual must progress through three stages: theological, metaphysical, and positive (scientific). The theological or religious phase is dominated by belief systems and a world pervaded by spirits; during the metaphysical stage, rational thought emerges but is limited, since empirical knowledge is eschewed. But the third stage is one of positive knowledge, inductive and grounded in empirical reality.

Positivism (and the accompanying appellative positivist) entered the lexicon of everyday speech but not without much confusion. The way the term is commonly used would make it seem synonymous with realism. The usual interpretation of a positivist is one who believes that all knowledge is encompassed by a scientific worldview, so there is no need for either theological or philosophical thought; this is certain because our scientific knowledge of the world is complete. Journalists, popular science writers, and especially theologians use positivism and positivist this way.

But, perhaps ironically, the term was originally coupled with phenomenalism. Theorists who truly believed in Comte's goal of trying to ground science on empirical data independently of preconceived ideas concluded that such positive knowledge came at a price--that of abandoning a complete picture of the real world. Some even went so far as to reject the concepts of the aether and the atom as nonscientific.

Positivism as it flowed into the twentieth century came with two meanings. One was the common Comtean version, which was often seen as synonymous with realism and a form of scientism. The other, sometimes called critical positivism to distinguish it from the former, branched into various tributaries, such as operationalism (a form of phenomenalism where knowledge is based on "operational definitions") and instrumentalism (where knowledge is limited to readings of instruments without seeing the reality hidden behind). Thus the phenomenalism versus realism debate, originating in the ancient world, made its way into the twentieth century.


The nineteenth century is often called Darwin's Century because of the revolution in biology initiated by his theory of evolution. But less well known is the fact that enormous progress was made in physics, too, especially in the studies of heat (thermodynamics), electricity and magnetism, and the kinetic theory of gases. A combination of theoretical and experimental work resulted in key discoveries, such as the electromagnetic nature of light and the law of conservation of energy. Conceptually, this work saw a unification of disparate areas of physics, a change as momentous as the unification of earthly and celestial physics during the Scientific Revolution of the seventeenth century.

In 1800, the English polymath Thomas Young (1773-1829) announced his discovery of the interference of light. In his famous experiment Young split one beam of light in two by sending it through two pinholes and onto a screen; as the two beams interfered, alternate fringes of light and dark appeared on the screen, indicating that the beams could cancel each other out so as to produce darkness. To him this sealed the question of the nature of light: Is it a particle or a wave? Interference patterns could be produced only by waves. Further work on the wave model by Augustin Fresnel (1788-1827) in 1818 convinced the Académie of the reality of light waves. Furthermore, it was taken for granted that waves must exist in a medium, and hence the wave theory of light reinforced the notion of an aether filling space. So one of the oldest of the imponderables remained part of mainstream science throughout the nineteenth century.
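The fringe pattern Young observed follows from the modern two-slit formula, in which the intensity on the screen varies as the square of the cosine of the phase difference between the two beams. The slit separation and wavelength below are illustrative modern values, not Young's own measurements:

```python
import math

def fringe_intensity(theta, slit_separation, wavelength):
    """Idealized two-slit interference (equal-amplitude point sources):
    relative intensity = cos^2(pi * d * sin(theta) / lambda).
    Bright fringes occur where d*sin(theta) is a whole number of
    wavelengths; darkness where the beams arrive half a wavelength apart."""
    phase = math.pi * slit_separation * math.sin(theta) / wavelength
    return math.cos(phase) ** 2

d, lam = 1e-4, 5e-7  # 0.1 mm slit separation, 500 nm (green) light

bright = math.asin(lam / d)        # first-order maximum: d*sin(theta) = lambda
dark = math.asin(0.5 * lam / d)    # first minimum: d*sin(theta) = lambda/2

print(round(fringe_intensity(bright, d, lam), 6))  # 1.0  (bright fringe)
print(round(fringe_intensity(dark, d, lam), 6))    # 0.0  (the beams cancel)
```

The second result is the crucial one: light added to light yielding darkness, which no particle model of the time could explain.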

Theories of electricity and magnetism in the eighteenth century, such as the two fluids model, were also grounded on the concept of imponderables. But the Danish physicist Hans Christian Ørsted (1777-1851) made an important discovery in 1820. Inspired by Kant's concept of the unity of forces, Ørsted reasoned that since both electricity and magnetism obey Coulomb's law of forces, perhaps the same underlying forces are responsible for the two seemingly different phenomena. Postulating that electricity and magnetism are not independent phenomena, he performed an experiment to test his idea. Bringing a magnet near a current-carrying wire, he discovered that the magnet was affected by the wire; in short, the wire was producing magnetic forces in circles around the wire. Kant was right; there is a unity to forces.

From a Newtonian viewpoint, however, Ørsted's discovery was puzzling, since forces were always conceived of as acting in a straight line, not in circles. For this reason the French scientist André Marie Ampère (1775-1836) wrote a series of mathematical and experimental papers showing how electric and magnetic forces may be reduced to Newtonian action-at-a-distance.

The Englishman Michael Faraday (1791-1867) was not averse to Kantian concepts. Lacking a mathematical education, he had to rely on his physical intuition, and for this the mental image of Kantian forces was especially appealing. Accordingly, he performed an experiment that schoolchildren still perform every year: On a piece of paper placed over a magnet, he sprinkled iron filings and created beautiful swirling patterns. These patterns, which he called lines of force, were visual representations of the invisible world of Kantian forces. He later used the term field for this picture, a term adopted by others in the development of what became field theory.

Believing in the unity of forces and contemplating Ørsted's discovery, Faraday reasoned that the reverse should also be true, namely, one should be able to produce electricity from magnetism (a phenomenon now known as electromagnetic induction). He verified this experimentally in 1831. Independently, in the previous year, the American Joseph Henry (1797-1878) performed but did not publish an analogous experiment also producing electromagnetic induction. So after over two millennia as separate phenomena, electricity and magnetism were fused as one. From a practical standpoint, the experimental work of Ørsted, Faraday, and Henry led to the invention of both the electric motor and the generator, an important example of basic science leading to technological applications.

The reverse was true in heat theory. The role of the steam engine in the industrial revolution stimulated work in the science of heat. In the 1820s, the French engineer Sadi Carnot (1796-1832) found the important relationship between heat and mechanical work, showing how to maximize the amount of work generated by heat in an engine. The relationship was correct even though Carnot's theory was based on the concept of heat as the flow of caloric.

Earlier Rumford had raised doubts about the concept of caloric, noting that when heat is generated by friction there seems to be no end to its production; this implies a seemingly infinite amount of caloric fluid, which is meaningless. He speculated that maybe heat is just a product of motion itself, but the idea was not widely developed. The breakdown of the caloric theory in midcentury was fostered instead by the rise of Kantian ideas now applied to experiments involving heat. The experiments in the 1840s of the Englishman James P. Joule (1818-89) showed that a specific amount of work produced a specific amount of heat, which today is called the mechanical equivalent of heat. He also showed that specific amounts of heat are produced by specific amounts of electricity (as when heat is produced in a current-carrying wire). He pictured these transformations among heat, mechanical work (or motion), and electricity in terms of the unity of forces. Moreover, and importantly, his discovery of these equivalences implied something else about the transformations, namely the conservation of these forces.
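Joule's equivalence can be put in modern numbers. The following sketch uses the modern conversion factor of about 4.186 joules per calorie (not Joule's original figure), and the falling-weight quantities are invented for illustration:

```python
# Illustrative sketch of the mechanical equivalent of heat.
# The conversion factor is the modern value, and the 1 kg / 10 m
# falling-weight example is an assumption made for illustration.

J_PER_CAL = 4.186  # joules of mechanical work per calorie of heat

def work_to_heat_calories(work_joules):
    """Calories of heat produced when mechanical work is fully dissipated."""
    return work_joules / J_PER_CAL

# A 1 kg weight falling 10 m does m*g*h = 1 * 9.81 * 10 = 98.1 J of work;
# dissipated entirely as friction, this yields about 23.4 calories of heat.
heat = work_to_heat_calories(1.0 * 9.81 * 10.0)
```

The fixed ratio between the two quantities, whatever the process, is precisely what suggested to Joule that something was being conserved across the transformation.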

Later the term energy was used to identify the entity conserved in such processes. Young had first used the word energy in the scientific sense as a name for the quantity ½mv² (later called kinetic energy), which played a key role in the mechanics of motion. Since these Kantian forces were of a different nature from the action-at-a-distance forces of Newton, another word was needed to characterize them, and energy eventually was used.

In Germany the eclectic scientist Hermann von Helmholtz (1821-94) also pondered such transformations. In a landmark paper of 1847 he synthesized an array of ideas and experiments on the notion of conservation; although he still used the term force (Kraft in German), conceptually his paper was a clear statement of what became the law of conservation of energy. He noted not only convertibility and conservation among motion, heat, electricity, and magnetism, but also realized that chemical forces in batteries produce electricity and hence exhibit another transformation; similarly, in living organisms, "animal heat" (today, the energy from calories) produces motion, and so on.

For those working within the Kantian framework, these forces or forms of energy were fundamental entities in nature. But a materialist may ask, for example: "Heat is the energy of what?" Thus when radiant heat was discovered and was shown to be a form of invisible light (it is infrared light), it was classified as a wave (with longer wavelengths than visible red light) in the aether. Ultraviolet light was discovered at about the same time. This wave theory of heat, along with the competing theory based on Kantian forces, led to the collapse of the caloric theory. Of course, there was always the phenomenalist option, as Fourier put forward in his Analytical Theory of Heat, in which the mathematical model was seen as sufficient explanation.

A consolidation of heat theory took place in the 1850s, mainly through the work of William Thomson (later Lord Kelvin, 1824-1907) in Scotland and especially Rudolf Clausius (1822-88) in Germany. From their work emerged the two laws of thermodynamics: first, the law of conservation of energy, where heat is converted into work and vice versa; and second, the law of what Clausius called entropy, which increases as the form of energy changes when heat flows from hot to cold. These laws provided the basis of the modern science of thermodynamics.
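Clausius's entropy can be illustrated with a simple calculation: when heat flows between two reservoirs, the hot body loses entropy Q/T_hot while the cold body gains Q/T_cold, and the total always increases. The temperatures and heat quantity below are purely illustrative:

```python
# Net entropy change when heat q (in joules) flows from a hot reservoir
# at t_hot to a cold one at t_cold (temperatures in kelvin).
# Because t_hot > t_cold, the gain q/t_cold outweighs the loss q/t_hot,
# so the total entropy of the pair always increases.

def total_entropy_change(q, t_hot, t_cold):
    """Entropy change of the combined system for heat flow hot -> cold."""
    return -q / t_hot + q / t_cold

# 1000 J flowing from a 400 K body to a 300 K body:
ds = total_entropy_change(1000.0, 400.0, 300.0)  # positive
```

That the result is positive for any hot-to-cold flow is the quantitative content of Clausius's second law.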

The modern theory of electromagnetism matured in the latter decades of the century. Thomson initiated the construction of a mathematical framework for Faraday's physical images. Then the Scottish physicist James Clerk Maxwell (1831-79) brought this mathematical field theory to fruition in a monumental trilogy of papers and a book. Hendrik Lorentz (1853-1928) in Holland later formulated the modern version of Maxwell's equations.

One of Maxwell's core discoveries was the relationship between light and electromagnetism. Faraday had speculated on such a possibility because, among other things, electricity produces sparks. What Maxwell deduced mathematically, however, was even more profound: that light itself is a form of electromagnetic wave; light is the visible part of what otherwise is a spectrum of invisible waves. Hence electricity, magnetism, and now light were unified in the nineteenth century.

Maxwell died before realizing the significance of his prediction and the possibility of testing it. Moreover, his work was not very well received on the Continent, where theorists, in the tradition of Ampère, preferred Newtonian forces across space over electromagnetic fields. But the German physicist Heinrich Hertz (1857-94) took the idea of invisible waves seriously, and in a series of experiments performed between 1886 and 1888, he detected such waves beyond the infrared (really microwaves) and showed that they behaved as visible light. This work, coupled with Lorentz's formulation, consolidated modern electromagnetic theory. Out of this grew the technology of radio waves and the rest.

The consolidation was primarily a mathematical description of electromagnetic phenomena based on the concept of the field. Analogous to the mid-nineteenth-century question about energy ("The energy of what?"), late in the century one could ask, "An electromagnetic field of what?" Hertz himself put it more generally: "What is Maxwell's theory?" Faraday, who introduced the field concept, saw it as a replacement for the aether. Based on Kantian notions, he believed the lines of force were real entities in the field; hence there was no need for further aethereal foundations. Maxwell also saw field theory as supplanting action-at-a-distance, but on the nature of the field he was ambivalent. Two concepts are found in his writings. Electromagnetic waves are waves in the aether, as light is assumed to be; so the field is grounded in the aether. But he also implied, as Fourier had with regard to heat, that the mathematical theory is the complete theory. The latter was Hertz's conclusion when he famously wrote: "Maxwell's theory is Maxwell's system of equations." So a phenomenalist (or positivist) interpretation of Maxwell's work accompanied the theory as it was bequeathed to the next century.

Thermodynamics, too, carried phenomenalist overtones. Today's physics textbooks resonate with this: Thermodynamics is discussed in terms of abstract engines, with heat going in and out, work being produced, and so on, without speculations or specifications on the nature of the engines. But the explication of thermodynamics is invariably followed by the kinetic theory of gases. Historically, kinetic theory developed in parallel with thermodynamics in the nineteenth century, picking up on Rumford's speculation that heat is the motion of matter. Of course, any idea about molecules or atoms was mere speculation at the time. Chemists, for example, were divided on the issue. Even though in 1803 the English chemist John Dalton (1766-1844) introduced the atomic model to determine the atomic weights of known elements, many chemists held that only measurable entities, such as weight and volume, were permissible in chemistry. So the kinetic theorists (Clausius, Thomson, and Maxwell, among others) knew that they were working on shaky ground. The results, however, were suggestive, for they were able to deduce various gas laws by assuming that gases are composed of very small, fast-moving, and colliding molecules or atoms. They even predicted the size of an atom. The Austrian physicist Ludwig Boltzmann (1844-1906) then tried to derive the concept of entropy from kinetic theory; his calculations involved the use of statistics and probability and laid the groundwork for statistical mechanics. The result was another definition of entropy as a measure of the degree of disorder of a system. This need for statistics led to an important question: Are the statistics necessary because of limitations in our knowledge of the world, or is the world itself of a statistical (or random) nature? The question was left for the next century to ponder.

There is a well-worn myth that in the late nineteenth century, almost all progress had been made in our knowledge of the physical world. It is true that a few such pronouncements were made, but there was no widespread belief in this notion. Indeed, at the end of the century a series of discoveries raised new questions. In 1895, x-rays were discovered; in 1896, radioactivity; in 1897, the electron. At first, x-rays were thought to be particles of matter; it was not obvious that they are high-energy waves. The study of radioactivity eventually opened the door to the emergence of nuclear physics, an entirely new subject for the next century. The electron, if indeed it was a fundamental particle of matter, had the extraordinary property of being thousands of times smaller than the predicted atom, which, by definition, was the smallest possible thing! At the time, then, it was not clear where or how these discoveries fitted into the scheme of things.

Nevertheless, immeasurable progress was seen in the unification of physics, and there was a widespread belief that physics was based securely on Newton's laws of mechanics, Maxwell's equations of electromagnetism, and the laws of thermodynamics. But it was also acknowledged that the substructures were not very secure. Newton's laws and hence gravity were still viewed as action-at-a-distance, whereas electromagnetism employed field theory. Most believed in a materialist aether as the infrastructure of physical theory. Some put forward an electromagnetic worldview, believing that electricity replaced matter as the fundamental reality; other antimaterialists, called energeticists, also drawing on the Kantian tradition, thought that energy was the foundation of reality. In short, despite the century of progress, fundamental questions lingered and new discoveries were left to ponder as physics drifted into the twentieth century.

Twentieth Century


In 1905, Albert Einstein (1879-1955) published a paper that formed the basis of what became known as the special theory of relativity. Galileo's idea of the relativity of motion was one of its essential assumptions; stated as a principle, it asserted that for a person moving in a straight line at constant speed (so that there is no acceleration), no mechanical experiment can be performed to detect such motion, and hence rest and motion are equivalent. Einstein expanded this principle to include electrodynamics as well as mechanics in the frame of reference. He also made another assumption, namely that the speed of light (c) in a vacuum is always the same, independent of the speed of its source. The property that c is the same for anyone measuring it is called invariance. The origin of Einstein's idea about the speed of light has puzzled historians, but it was possibly a consequence of a thought experiment he performed at about the age of 16 when he asked himself what it would be like to ride a beam of light.

One deduction from these assumptions was that the aether is, as Einstein said, a "superfluous" entity. This was a consequence of the principle of relativity, for an aether at rest with respect to the universe would constitute an absolute frame of reference; a measure of one's speed in this aether would therefore contradict the principle of relativity. Because of Einstein's rejection of the aether, textbooks and popular books on relativity often begin with a discussion of the Michelson-Morley experiment, performed in Cleveland, Ohio, in 1887. The physicist Albert Michelson (1852-1931) and the chemist Edward W. Morley (1838-1923) tried to detect Earth's motion through the aether with a delicate apparatus that bounced simultaneous beams of light in perpendicular directions; they expected the beams to arrive back at different times because of the motion of Earth through the aether. But the beams always came back at the same time. Since neither scientist doubted the reality of the aether, the null result was at once disappointing and puzzling. In the 1890s, the Irishman George E. FitzGerald (1851-1901) and Lorentz postulated independently that the null result was due to contraction of the experimental apparatus by a very small amount in the direction of motion through the aether. Called the Lorentz-FitzGerald contraction, it was often invoked to explain the experimental result without abandoning the aether. But historical evidence reveals that the Michelson-Morley experiment had no influence on the origin of Einstein's theory and that, in fact, he may not even have known of it at the time. Einstein met Michelson in 1931, at which time he conceded that Michelson's experiment supported his theory of relativity.

The twin postulates of relativity and the invariant speed of light lead to some surprising and unusual features of the physical world. First, events that are simultaneous in one reference frame (say, in a moving aircraft) are not simultaneous in another frame (say, on the ground). From this it follows that the measurement of time is also different in the two frames; specifically, clocks on the aircraft run more slowly as measured from the ground. Hence the theory predicts that in the moving reference frame time itself slows more and more as speed increases. The second prediction is that the lengths of objects in the moving frame decrease by an amount similar to that proposed by the Lorentz-FitzGerald contraction. But Einstein's deduction was arrived at independently of the Michelson-Morley experiment. The contrast is illuminating.

Einstein's result is a deduction from first principles; the other is an induction from an experimental result devised to salvage the aether. Third, there is an increase in the mass of objects in the moving frame; that is, they get heavier. All these changes have a limit because c is the maximum speed the frame (and hence any moving object) can attain. At the speed of light, time itself would stop and objects would shrink to zero length while their mass would increase to infinity; since these outcomes are impossible and physically meaningless, c is the limit of all things. Fourth, as a consequence of the mass increase, Einstein deduced the famous equation E = mc², which implies the interconvertibility of mass and energy.
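All four effects follow from a single quantity, the Lorentz factor γ = 1/√(1 − v²/c²). A minimal numerical sketch, in which the 0.8c speed and the 1 kg, 1 m, and 1 s reference quantities are assumptions chosen purely for illustration:

```python
import math

# The special-relativistic effects above all flow from the Lorentz
# factor gamma = 1 / sqrt(1 - v^2/c^2). The 0.8c example speed and the
# unit mass, length, and time below are illustrative assumptions.

C = 299_792_458.0  # speed of light in m/s

def gamma(v):
    """Lorentz factor; grows without bound as v approaches c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

v = 0.8 * C
g = gamma(v)                 # exactly 5/3 at v = 0.8c
ground_time = g * 1.0        # 1 s on the moving clock reads 5/3 s on the ground
contracted_len = 1.0 / g     # a 1 m rod measures only 0.6 m from the ground
moving_mass = g * 1.0        # a 1 kg mass measures 5/3 kg in motion
rest_energy = 1.0 * C ** 2   # E = mc^2: the energy locked in 1 kg of matter
```

As v approaches C, gamma diverges, which is the quantitative form of the claim that time would stop, lengths shrink to zero, and mass grow without limit at the speed of light.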

There are Kantian overtones to this result. Kant had reduced matter to force, out of which grew the concept of energy; so the link between matter and energy, implicit in nineteenth-century physics, was formalized and quantified by Einstein. There are also positivistic overtones to the theory. Einstein returned to basics--time, length, mass--and our measurements of them. In his formative years he was highly influenced by the positivist writings of Ernst Mach (1838-1916), who aimed to purge science of nonobservable entities, and this is reflected in his rejection of absolute motion and the almost operational way in which these parameters are conceived. Einstein was also profoundly influenced by the phenomenalism of thermodynamics, and this too is reflected in his 1905 paper.

There was scant experimental evidence for Einstein's theory in 1905. Only the prediction of the increase in mass was verifiable, for electrons in certain experiments were known to move near the speed of light. Although these experiments actually revealed that the mass of the electrons was increasing, the quantitative results did not fit Einstein's formula. One of the reasons the experiments were not interpreted as supporting Einstein's theory (even qualitatively) is that the measured increase in mass was believed to be due to aether being dragged along with the speeding electrons. This was a mechanical and materialist explanation. The contrast with Einstein's thinking is enlightening. He was not proposing any physical cause for the changes in time, length, mass, and energy. It was a decade before more precise experiments on speeding electrons quantitatively verified Einstein's prediction. Later, E = mc² was confirmed in atomic and nuclear processes, and with the invention of atomic clocks it was shown that time indeed slows down as the speed of a clock increases, precisely as Einstein predicted.

As conceived in 1905, the theory of relativity was restricted to objects moving at constant speeds. Around 1907, Einstein began thinking about expanding or generalizing the theory to encompass all motion, so that the relativity principle would apply to accelerating frames of reference. The insight came when Einstein realized that for a person in free fall (in a vacuum) there is no experience of gravity. It is as if gravity in this accelerating frame of reference is turned off. Today, astronauts in orbit demonstrate this by their weightlessness despite the fact that they are accelerating (falling forever) around Earth.

The central idea of what came to be known as the general theory of relativity was simple: Gravitational force is equivalent to inertial force experienced by a mass in an accelerated frame of reference. Since the experience of a person in free fall is equivalent to one floating in empty space, a person in a gravitational field (experiencing a gravitational force) is equivalent to one accelerating in space (experiencing only an inertial force). So, perhaps gravity could be explained by inertia. It was a beautiful vision, for, if true, action-at-a-distance could be eliminated. To bring it to fruition, however, was a formidable task. Einstein was required to learn a whole new field of mathematics, that of non-Euclidean geometry. In the nineteenth century, three mathematicians independently put forward the idea that there were other valid systems of geometry besides the one Euclid synthesized around 300 BC. By the early twentieth century, non-Euclidean geometries were common in mathematics but were believed to be irrelevant to physics, since physical space (the infinite Newtonian universe) was Euclidean; other geometries were just imaginary worlds. But Einstein found that he could fulfill his quest to explain gravity by inertia if space around matter was assumed to be non-Euclidean. The mathematical problem was monumental and he spent years trying to crack it, with much help from his close friend, Marcel Grossmann (1878-1936), a mathematician. He arrived at a formula for general relativity late in 1915, a decade after special relativity was first published. The paper on general relativity was published in 1916 and made some further surprising and unusual predictions.

The fundamental idea, which is a consequence of using non-Euclidean geometry, is that space is curved around matter, so that gravity is due to the curving, bending, or warping of space into a fourth dimension. The apparent Newtonian attraction of matter to matter is an illusion; thus, for example, the Moon orbits Earth because it is caught in curved space around Earth. Einstein thus reduced gravitational force to four-dimensional curved space-time. Newton's trilogy (space, matter, and force) was now a duality (space-time and matter-energy).

This was the conceptual component of the theory. The first empirical test was already in hand, for the formula for general relativity implied that the elliptical orbit of Mercury should rotate by a very small amount. This behavior of Mercury had indeed been observed in the mid-nineteenth century but had never been explained adequately. Second, if space is curved around matter, light should be bent as it passes any mass. On an astronomical scale, the light from a star should bend around the Sun, and this could be tested during a solar eclipse. This was done during an eclipse in 1919, and Einstein's prediction was confirmed. The third prediction was that time should slow down as gravity increases. Thus a clock on a mountain should run faster, by a very, very small amount, than one at sea level. This, like the time prediction from special relativity, was later tested using atomic clocks, and again Einstein was right.

There was (and still is) a conceptual elegance to general relativity, even though the mathematics is cumbersome. Moreover, that he could deduce properties about the world from first principles arrived at by pure thought led Einstein toward an almost Cartesian view of science later in life: a position that, he conceded, was far from the original Machian core of special relativity.


The positivists' philosophical protestations notwithstanding, atomism was affirmed in the twentieth century following the discovery of x-rays, radioactivity, and the electron, along with the kinetic theory of gases. Not surprisingly, Einstein made a significant contribution. In a second paper of 1905, on Brownian motion (i.e., the zigzag motion of small particles suspended in a fluid), he derived an estimate of the size of molecules. The French physicist Jean Perrin (1870-1942) used Einstein's paper for a series of experiments, published in 1909, that confirmed the atomic theory.

Quantum theory is based on both the atomic (particulate) picture of matter and a particulate picture of electromagnetic radiation. The German physicist Max Planck (1858-1947) introduced the concept of a discrete unit (or quantum) of radiation in 1900. He derived the formula E = hf, where E is the energy of radiation, f its frequency, and h a constant (later called Planck's constant), for the quantum of energy. Einstein, in a third paper of 1905, on the photoelectric effect, used Planck's concept of a quantum of energy to explain the photoelectric emissions of electrons from metals. He hypothesized that light is composed of discrete quanta of energy obeying Planck's formula. Recent historical research indicates that Planck introduced quanta merely as a mathematical means of accounting for experimental data on radiation; thus he did not consider them a physical reality, as Einstein did. However, this historical interpretation, which bestows credit for the genesis of quantum theory more on Einstein than on Planck, remains controversial.
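Planck's relation can be made concrete with a quick calculation. In the sketch below the constants are modern values, and the 500 nm "green light" wavelength is an assumption chosen for illustration:

```python
# Planck's relation E = hf for a single quantum (photon) of radiation.
# The constants are modern values; the 500 nm example wavelength is an
# illustrative assumption, not a figure from the text.

H = 6.62607015e-34   # Planck's constant, in joule-seconds
C = 2.99792458e8     # speed of light, in m/s

def photon_energy(frequency_hz):
    """Energy of one quantum of radiation at the given frequency."""
    return H * frequency_hz

f_green = C / 500e-9              # frequency of 500 nm light, ~6e14 Hz
e_green = photon_energy(f_green)  # roughly 4e-19 J per photon
```

The minute size of h is why the granularity of light escapes everyday notice: an ordinary lamp emits on the order of 10^19 such quanta every second.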

Experimental work by the American physicists Robert A. Millikan (1868-1953) in 1916 and Arthur H. Compton (1892-1962) in 1923 confirmed Einstein's hypothesis on the particulate nature of light. Millikan confirmed Einstein's equation (E = hf) for the photoelectric effect; by varying the frequency of the light and measuring the energy of the ejected electrons, he obtained an accurate value for Planck's constant. Compton studied the collision of x-rays with electrons and found that x-rays behaved as particles transferring momentum to the electrons according to the principle of conservation of momentum. These "particles" of light became known as photons. Einstein realized the radical nature of his hypothesis in 1905, writing to a friend at the time that the idea was "very revolutionary," apparently even more so than relativity. It certainly contradicted the nineteenth-century certitude that light is a wave. But even after the experimental evidence for photons there remained the interference of light, which seems explicable only on a wave model. Millikan, who won a Nobel Prize partially for confirming Einstein's hypothesis, actually performed his experiment to disprove the particle model; and although the results confirmed the particulate nature of light, he continued to believe that something was wrong with the theory because of the fact of interference. This paradox that light seemingly displays a binary nature became known as the wave-particle duality.

Meanwhile, the atomic theory of matter was being developed. In 1913, the Danish physicist Niels Bohr (1885-1962) applied the quantum hypothesis to atomic structure. Bohr built upon the atomic model of the New Zealand-born experimentalist Ernest Rutherford (1871-1937), working at Cambridge, McGill University in Montreal, and Manchester University in England. In 1911, Rutherford's experiments showed that the atom was composed of a positively charged nucleus with (negative) electrons around it. Bohr pictured this as an analog to the solar system, with electrons orbiting the nucleus as planets orbit the Sun, and electrical attraction replacing gravity. He applied the quantum hypothesis to the atom by assuming that only certain discrete orbits, corresponding to discrete (quantized) energies, were permissible, and that the electrons could radiate energy only when jumping between orbits. This model provided an interpretation of the spectra of hydrogen and formed the basis for the development of the atomic model of the chemical elements.

What became known as quantum mechanics emerged in the 1920s out of the wave-particle duality and Bohr's quantized atom. The French physicist Louis de Broglie (1892-1987) provided theoretical insights into both. In 1924, he suggested that since light exhibits both particle and wave properties, then by symmetry perhaps matter does, too. Thus electrons should have a wave nature. He carried this idea further by proposing that the restricted (quantized) atomic orbits may reveal the wave nature of the electrons; just as a taut guitar string can vibrate only in an integral number of loops, so each electron orbit contains an integral number of wavelengths. In the late 1920s, the wave nature of electrons was confirmed in several experiments. George P. Thomson (1892-1975), the son of J. J. Thomson (1856-1940), performed one. He used an interference-like apparatus for electrons and showed that they exhibit wave properties--ironically just as, about 30 years before, his father's experiment revealed the particulate nature of electrons. The term matter waves was introduced, by analogy with light waves.

Building on this idea, the Austrian physicist Erwin Schrödinger (1887-1961) constructed a mathematical theory of the electron and other subatomic particles. Published in 1926 and called wave mechanics, it is an extremely abstract model in which particles or matter waves are not actually impulses in a medium but mathematical entities that appear in a wavelike equation. Schrödinger's equation was successful for computing atomic energies and properties of the hydrogen atom. But its physical interpretation was problematic. It did not yield information on individual particles but only probability distributions of systems of particles.

Schrödinger saw the statistical nature of his equation as inherent in the theory. But a more radical notion was put forward by the German physicist Werner Heisenberg (1901-76). While studying with Bohr at his Institute for Theoretical Physics in Copenhagen, Heisenberg proposed that the probability factor was not a deficiency of the theory but instead disclosed the way nature is. He expressed this in his principle of indeterminacy (sometimes misleadingly called uncertainty), which asserts that the more accurately the position of a particle is measured, the less well specified is the velocity, and vice versa. This means that contrary to Newtonian determinism, it is impossible to measure simultaneously and with perfect accuracy the position and velocity of a particle. To Heisenberg this indeterminacy was a property of the subatomic world.
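The trade-off has a quantitative form, Δx · Δp ≥ ħ/2. A minimal numerical sketch, using modern constants; the atom-sized confinement region is an assumption chosen for illustration:

```python
# Heisenberg's indeterminacy relation: delta_x * delta_p >= hbar / 2.
# Pinning down the position of an electron forces a minimum spread in
# its momentum, and hence in its velocity. Constants are modern values;
# the 1e-10 m (atom-sized) confinement example is illustrative.

HBAR = 1.054571817e-34      # reduced Planck constant, J*s
M_ELECTRON = 9.1093837e-31  # electron mass, kg

def min_velocity_spread(delta_x):
    """Smallest velocity uncertainty allowed for a position uncertainty delta_x."""
    delta_p = HBAR / (2.0 * delta_x)
    return delta_p / M_ELECTRON

# Confining an electron to an atom-sized region (~1e-10 m) forces a
# velocity spread of several hundred kilometers per second.
dv = min_velocity_spread(1e-10)
```

For everyday objects the same formula gives spreads far too small to detect, which is why indeterminacy matters only in the subatomic world.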

Bohr went further. He interpreted Heisenberg's principle as implying that electrons and photons do not have an actual existence until they are measured. This resolved the wave-particle duality, and he expressed this idea in his principle of complementarity. From a Newtonian viewpoint waves and particles are direct opposites. But if the wave or particle behavior of electrons or photons comes into existence only with the act of measuring, there is no contradiction, since the electron or photon is really neither a wave nor a particle. They only exhibit either of these properties if "asked" by an act of measurement. This viewpoint was developed at his Institute and became known through his many students as the Copenhagen interpretation of quantum mechanics.

There certainly were positivistic overtones to these ideas, with emphasis on the role of measurement and the corresponding limitations of knowledge about the real world, and Einstein objected strongly to this. At first this may seem surprising, since Mach had profoundly influenced him in his early work. But by the 1920s, Einstein had abandoned Machian principles, mainly because of the success of general relativity. He firmly believed in the existence of an independent, objective reality. Any statistical or probabilistic feature of a theory is due to the limitation of knowledge of complex or many-particle systems, not to the nature of reality itself. The measuring apparatus, not the world itself, limits us. Quantum mechanics, therefore, is an incomplete theory. He made this point in his famous statement that "God does not play dice." Late in life he said that he had spent much effort in trying to understand the nature of light, and even though many scientists will tell you they know what light is, neither he nor they really know.


The discovery of radioactivity in 1896 and the nucleus of the atom in 1911 led to the creation of the entirely new field of nuclear physics in the twentieth century. It came to fruition in the 1930s, when the realization of a nuclear chain reaction had immediate application to the development of the atomic bomb and to our understanding of the energy of the stars. Later, nuclear energy became a source of power for major cities throughout the world.

After the discovery of radioactivity, radium and other radioactive substances were isolated. Rutherford's experiments showed that the radioactive process leads to the transmutation (radioactive decay) of one element into another. This meant that elements were not separate and immutable as was commonly thought. Many scientists saw this as impossible, for it implied that elements were self-destructing. Rutherford's other discovery, of the nucleus of the atom, revealed that the mass of an atom is concentrated at its center. Later, the nucleus of the hydrogen atom was identified as the proton, and it was assumed that the nuclei of the higher elements, helium and above, contained protons and electrons. By 1932, only three elementary particles were known: the electron, proton, and photon. But in that year the neutron was discovered; it had no charge and its mass was about the same as that of the proton. Heisenberg proposed that neutrons were in the nucleus, which solved some problems with atomic weights in the older theory. Thus the hydrogen atom contained one proton in its nucleus with one orbiting electron, the higher elements having both protons and neutrons in their nuclei.

In 1938, Hans Bethe (1906-2005) put forward a theory of how stellar energy is generated by the fusion of protons to form helium and other higher elements. He wanted to solve an old problem of solar power. Kelvin had assumed that the stars were formed by the gravitational collapse of hot, rotating gases; his calculations showed that the Sun had enough energy to last about 20 million years. But Rutherford's work around 1905 on radioactive dating showed that Earth alone was 2 to 3 billion years old. Apparently, there was some other source of energy for the Sun and the other stars.

By the mid-1920s, it was known, through the study of stellar spectra, that stars are composed mainly of hydrogen and helium. So Bethe proposed that stellar energy comes from the fusion of protons. His calculation showed that the fusion of hydrogen into other elements results in an enormous release of energy, enough for our Sun to shine for billions of years. Bethe's paper laid the foundation for the study of stellar energy. Work has since continued on how the fusion process takes place in stars and its implications for their evolution and for the formation of supernovae, neutron stars, and black holes.

The other less benign application of nuclear physics also began in 1938. Physicists in German laboratories discovered nuclear fission, the breakup of uranium nuclei into smaller nuclei when bombarded by neutrons. When it was found that this fission process produced more neutrons, it was conjectured that this could lead to a chain reaction, since the excess neutrons would produce new fissions and the process would continue to diverge. If the process continued uncontrolled, it would create a violent explosion. This is the basis of what is called an atomic bomb (although it is really a nuclear bomb). If, however, there were some way to control the reaction, it could be used as a source of energy, and this realization led to the development of nuclear reactors and power plants.

These discoveries came during the time of the emigration of many scientists from Nazi-held territories to the United States and Great Britain. Based on the belief that Germany was working on an atomic bomb, projects began in both the United States and Great Britain toward beating Germany in this race. A letter from Einstein to President Roosevelt in 1939, warning him that Germany might be working on an atomic bomb, was a powerful catalyst for the launch of the Manhattan Project. Begun in December 1941, it was a complex secret enterprise involving universities and laboratories throughout the country. General Leslie R. Groves (1896-1970) was placed in charge, and he chose J. Robert Oppenheimer (1904-67) as scientific director. Scientists gladly joined the project. There were several reasons for the enthusiasm that went into this work. From a scientific perspective, the discoveries since the late 1930s were surely opening a new frontier in physics, and this project was at the forefront of such work. But there was also the real threat from Hitler and Nazism, which motivated many to drop whatever they were doing and join the project. Many of the refugee scientists from Europe (such as Bethe and Edward Teller [1908-2003]) worked passionately on this project. Indeed, it may have been the greatest ensemble of scientists working together ever seen.

The first bomb was tested in the summer of 1945 near Alamogordo, New Mexico. But the bomb was never used against the German enemy; instead, Japan was the target. A bomb was dropped on Hiroshima on August 6, 1945; a second on Nagasaki on August 9. The first was equivalent to about 12,500 tons of TNT; the second, about 22,000 tons. Today, the warhead of just one Minuteman missile is almost 50 times as powerful as the second bomb.

With the discovery of nuclear power, science and the world were never the same. Perhaps for the first time, ethics consciously entered the world of science. Most scientists involved in the Manhattan Project returned to their previous fields, with some working to control nuclear technology and the spread of nuclear weapons. Yet others continued working on the development of nuclear reactors or military weapons. The bombs dropped on Japan were fission bombs. The idea of a fusion (hydrogen) bomb, called the super, was explored during the war but was developed and tested only after the war by the United States and the Soviet Union, and this fueled the arms race.

Physics Today: Quarks, Superstrings, and the Theory of Everything

In the nineteenth century, the idea of an atom was just that, an idea. The first experimental evidence for any such building block of nature was the discovery in 1897 of the electron, which was ironically much smaller than the proposed atom. After the discovery of the neutron in 1932, the atoms of the chemical elements were shown to be built up from the subatomic particles: electrons, protons, and neutrons. So the elements were not elementary. But even the "elementary" particles known today are probably not elementary. The Greek philosopher Thales of Miletus thought that everything was reducible to water; two modern candidates for these irreducible entities are quarks and strings.

In 1935, the Japanese theoretical physicist Hideki Yukawa (1907-81) published a paper applying quantum theory to the nucleus in which he predicted another particle with a mass between those of the electron and the proton. A new particle detected in cosmic rays, later called the muon, was at first thought to be the one predicted by Yukawa. But shortly after the war, Yukawa's particle was detected and was named the pion; it turned out that there are two pions, one with a positive (+) charge and the other negative (-). There are also two (+ and -) muons.

After World War II, high-energy accelerators (often popularly called atom smashers) were built, and with these scientists opened up the world of elementary particles previously available only from cosmic rays. Soon other particles appeared--the kaon, the lambda, the sigma, the xi, and so on. This was the beginning of the proliferation of hundreds of such elementary particles that were discovered (some would say created) over the next few decades. By the 1960s, the aggregation was called the "elementary particle zoo."

A zoo was an appropriate metaphor: Just as animals are classified into a logical taxonomic system, attempts were made to find a similar order to the myriad particles, especially since they decayed (or transformed) into one another. One attempt was to assign numbers (0, +1, -1, +2, -2) to the particles as the basis for a conservation rule to be applied as they decayed from one to another. The decay process is written as an equation, and the sums of the particle numbers on both sides should balance. Since this process systematized a rather strange phenomenon, the assigned number was called strangeness. The concept came from the American physicist Murray Gell-Mann (b. 1929) who went on to propose that the elementary particles are actually composed of even smaller entities, which he called quarks. On the one hand, the idea of a quark made sense: That the world is composed of hundreds of fundamental particles is not likely if one assumes that there is a simplicity principle at work in nature. Moreover, the fact that the particles decay into one another seriously questions their fundamental nature. On the other hand, a key reason for the resistance to the idea of something smaller than an elementary particle was that it entailed fractional electrical charges, which was assumed to be impossible. Yet Gell-Mann followed through with the idea, proposing, for example, two quarks: one with a + 2/3 charge (called the up quark) and the other with a - 1/3 charge (called down). Thus a proton (+1 charge) is made up of two up quarks and one down quark (since + 2/3 + 2/3 - 1/3 = +1). Similarly, a neutron is made up of two down quarks and one up quark. Like strangeness, the assigned numbers (called quantum numbers) balanced.
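The charge bookkeeping described above can be sketched directly. The quark charges (+2/3 and -1/3, in units of the elementary charge) are from Gell-Mann's scheme; the helper function and its name are illustrative.

```python
from fractions import Fraction

# Electric charges of the two lightest quarks, in units of the
# elementary charge e, following Gell-Mann's assignments.
CHARGE = {"up": Fraction(2, 3), "down": Fraction(-1, 3)}

def baryon_charge(quarks):
    """Total charge of a three-quark combination (exact rational sum)."""
    return sum(CHARGE[q] for q in quarks)

proton = baryon_charge(["up", "up", "down"])     # +2/3 + 2/3 - 1/3 = +1
neutron = baryon_charge(["down", "down", "up"])  # -1/3 - 1/3 + 2/3 = 0
print(proton, neutron)  # 1 0
```

Using exact fractions rather than floating-point numbers keeps the sums exact, which is the point of the quantum-number bookkeeping: the totals must balance precisely, not approximately.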

So at first there were three quarks: up, down, and strange. But further application of this process to the elementary particle zoo required more quarks. By the mid-1970s, there were three more, called charm, top, and bottom. Of course the names are not to be taken seriously, since they are just terms assigned to the quantum numbers. However, some important unanswered questions are raised by the quark model. Are these six quarks the fundamental building blocks of nature? Are quarks to be conceived of as real things? Or are they, as a phenomenalist might say, just assigned labels for following classification rules for the elementary particles?

An alternative or even rival theory, conceived at about the same time, pictured elementary particles as very small vibrating strings. At least since the work of de Broglie, it was known that the electron had wave characteristics. The same was true for the other elementary particles. So it was not unreasonable to propose such a model: Small filaments of string replaced pointlike particles, with different vibrations producing different particles. Indeed, it seemed that perhaps even quarks could be incorporated into string theory. Of course, these strings were extremely small, 100 billion billion times smaller than a proton!

But initially, there were problems with the string theory, so that it was almost abandoned. The seemingly simple visual picture of a vibrating string was deceptive: The theory used an abstract and cumbersome mathematical system involving a 10- or perhaps a 26-dimensional space. Also, it accounted for only some nuclear particles. Just before physicists gave up on it, however, it was found that by modifying the strings, turning them into superstrings, the theory could account for the rest of the elementary particles (such as electrons). But probably the key reason it was and still is taken seriously is the possibility that the theory may also account for gravity. If so, it would fulfill Einstein's quest.

Einstein spent the last 22 years of his life in the United States working mainly on one problem in physics, the search for a unified field theory. Nineteenth-century physics had begun the unification process, but it was not complete. His general theory of relativity had successfully explained gravity as a warping of space-time into a fourth dimension. The theory therefore accounts for large-scale phenomena in which gravity is the primary force: the solar system, galaxies, black holes, and so on. But there is also the electrical force, which plays a major role at the atomic level. Since the Sun and Earth, for example, are neutral, only gravity is relevant when considering their interaction. At the atomic level the gravitational attraction between, say, two electrons is negligible, so only the electrical forces come into play for atomic interactions.
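The claim that gravity is negligible at the atomic level can be made quantitative. Since both Newton's gravitational force and Coulomb's electrical force fall off as 1/r², their ratio for two electrons is independent of distance; the sketch below computes it with modern constants.

```python
# Ratio of gravitational to electrical force between two electrons.
# Both forces scale as 1/r^2, so the ratio does not depend on distance.
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
K = 8.988e9       # Coulomb constant, N m^2 C^-2
M_E = 9.109e-31   # electron mass, kg
Q_E = 1.602e-19   # elementary charge, C

ratio = (G * M_E**2) / (K * Q_E**2)
print(f"{ratio:.1e}")  # roughly 2e-43: gravity is utterly negligible here
```

The ratio is on the order of 10⁻⁴³, which is why atomic physics can ignore gravity entirely, while astronomy, dealing with electrically neutral bodies, can ignore the electrical force.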

Einstein's goal was to explain electrical force as a property of space, just as he had done for gravity. If electrical force could thus be reduced to geometry, perhaps one geometrical equation could account for both the gravitational and electrical fields. This would unify the fields--hence the phrase unified field theory. The Kantian overtones are transparent. Einstein never achieved his goal, although he worked on it to the very end. Notes on this problem were found on the night table next to his deathbed.

During the more than two decades during which Einstein pursued his quest, many colleagues saw him as an old fool wasting his time chasing a dream. "Einstein is completely cuckoo," Oppenheimer once wrote. Moreover, the problem was more complex than unifying two forces; by the 1930s, there were known to be four forces in nature. The other two forces were in the nucleus: the strong force holding the nucleus together, and the weak force accounting for radioactivity. Starting about the 1970s, as physicists began pursuing quarks and superstrings, the quest for a unified field theory was again seriously taken up by several noted thinkers, such as Stephen Hawking (b. 1942). In time, this became known as the theory of everything. The terminology, however, is inappropriate and confusing. It does not mean that the theory, if achieved, will account for all phenomena in the universe. The result will be a theory of fundamental physics that will unite gravity with the three other forces; or, to put it another way, unite relativity with quantum physics. It will be a complete description of fundamental physics.

Other theories uniting general relativity and quantum field theory are being researched in the early twenty-first century. Several of these "string theories" have been proposed. One theory that encompasses many of these ideas is M-theory, which was proposed in 1995 and is now supported by Hawking as the best theory for completely understanding the workings of the universe.

Scientific Biography: Archimedes

What posterity remembers most about Archimedes (c. 287-212 BC), and from which the term eureka experience comes, probably never happened. But it remains a grand story. Archimedes was a native of Syracuse, a Greek city in southeastern Sicily (present-day Italy), who often worked for the king, Hiero. One day the king asked him if a recently fashioned crown was really made of gold or whether the goldsmith had cheated and added some silver. A solution to the problem came to Archimedes while he was immersed in the public bath. He realized that the volume of water he displaced was equal to the volume of his body and hence he could measure the volume of the crown and from this calculate its density. He was so enthralled by his insight that he ran home naked shouting, "Eureka, eureka" (i.e., "I found it!").
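The logic of the legend can be made concrete: weigh the crown, measure the water it displaces, and compare the resulting density with that of pure gold. The masses, volumes, and tolerance below are made up for illustration; only the densities of gold and silver are real values.

```python
# Illustrative check of the crown legend: density = mass / displaced volume.
# The crown's mass and displaced volumes are invented for the example.
DENSITY_GOLD = 19.3    # g/cm^3
DENSITY_SILVER = 10.5  # g/cm^3 (silver's lower density gives it away)

def is_pure_gold(mass_g, displaced_volume_cm3, tolerance=0.5):
    """Return True if the measured density matches pure gold's."""
    density = mass_g / displaced_volume_cm3
    return abs(density - DENSITY_GOLD) < tolerance

# A 1000 g crown of pure gold displaces about 51.8 cm^3 of water;
# a crown adulterated with silver, at the same mass, displaces more.
print(is_pure_gold(1000, 51.8))  # True
print(is_pure_gold(1000, 60.0))  # False (density ~16.7, too low for gold)
```

Because silver is roughly half as dense as gold, any substitution makes the crown noticeably bulkier for its weight, which is exactly what the displaced water reveals.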

Most of his life he probably lived in Syracuse, but he spent time in Alexandria studying with some followers of Euclid. From various manuscripts and fragments of manuscripts we know that he was one of the most brilliant mathematicians in the ancient world; some scholars rank him at the top. Noteworthy are his approximation method for finding the ratio of the circumference to the diameter of a circle (what was later called pi) and his system for representing very large numbers.

He also invented a number of mechanical devices: a planetarium model to represent the motions of the Sun, Moon, and planets; a water screw for raising water; ballistic instruments for warfare; and systems of compound pulleys. From the latter comes the famous Archimedean aphorism: "Give me a place to stand and I will move the Earth." He was said to have invented a system of mirrors that reflected so much sunlight that they could burn enemy ships; this is usually considered a fanciful tale, although in recent years it has been tested with some success.

Yet despite his various gadgets of warfare, the Romans defeated Syracuse in 212 BC. Numerous accounts say that a Roman soldier killed Archimedes, although they differ on the details. One of the more colorful legends says that he admonished the soldier for getting too close to a mathematical diagram he was working on.

He requested that his gravestone carry a diagram of a cylinder containing a sphere. This illustrated the mathematical problem of finding the ratio of their volumes. Archimedes had solved this problem and he was obviously very proud of his solution.
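The result Archimedes was so proud of can be verified directly: a cylinder circumscribing a sphere (so its height equals the sphere's diameter) has exactly 3/2 the sphere's volume, whatever the radius. The sketch below checks this numerically.

```python
import math

# Archimedes' gravestone result: a cylinder circumscribing a sphere
# (height equal to the sphere's diameter) has 3/2 the sphere's volume.
def volume_ratio(r=1.0):
    v_cylinder = math.pi * r**2 * (2 * r)    # V = pi r^2 h, with h = 2r
    v_sphere = (4.0 / 3.0) * math.pi * r**3  # V = (4/3) pi r^3
    return v_cylinder / v_sphere

print(volume_ratio())  # 1.5, independent of the radius (up to rounding)
```

Algebraically the pi and r³ factors cancel, leaving 2 ÷ (4/3) = 3/2, which is why the ratio is the same for every sphere.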

Scientific Biography: Francis Bacon and Physics as Empowerment

Francis Bacon (1561-1626) was an essayist, philosopher, and statesman, achieving the status of lord high chancellor at the peak of his political career. He was a prolific and splendid writer of history, law, politics, morals, and especially science. Bacon realized that he lived in an age of major scientific changes, that Aristotelianism was dead, and that therefore a new science would emerge outside the universities (which were the bastions of Aristotelian ideas). Yet not all seventeenth-century scientists were thoroughly modern; these "natural philosophers" could be involved in various "magical" pursuits, such as alchemy (Newton) or astrology (Kepler). Out of this tradition came the notion of occult powers (such as Newton's conception of gravity) and the idea that humans can have power over and control nature, a belief fundamental to Bacon's vision of science.

He stressed the utilitarian side of science: that the burdens of life will be lightened through our knowledge and control of nature. To achieve this it is necessary to use the right method. He emphasized the process of induction from a close inspection of nature, whereby the causes of things are extracted by an exhaustive collecting of data. He was wary of mathematical deductions (i.e., derivations or proofs from first principles) and distrustful of seemingly arbitrary hypotheses. Although this was a rather narrow view, it was broadened by his inclusion of engineering, ballistics, and various mechanical arts as part of his vision for the advancement of knowledge and progress.

Bacon was not a scientist; he dabbled in experiments but made no discovery. But he was a prophet of science, its methodology and organization. Charles Darwin said that his own work was based on the Baconian method. The Royal Society extolled the utilitarian value of science, sometimes called Baconian utilitarianism. His prestige and influence, especially in England, carried this optimistic vision into the twentieth century.

Immanuel Kant and the Unity of Forces

Immanuel Kant (1724-1804) is known today as an academic philosopher; his Critique of Pure Reason (1781), on the limits of human reason, is still widely read in philosophy courses. But in the late eighteenth century he was also a scientist, teaching science at the University of Königsberg. He was particularly interested in the philosophical underpinnings of science, or what he called, in the title of his book on the subject, Metaphysical Foundations of Natural Science (1786). Science for him was primarily Newtonian physics.

Newton's universe, as Kant saw it, was composed of three entities: empty space, matter, and force. Space was geometrical, Euclidean space; matter, whose essential property was inertia, was ultimately atomistic; and force, namely action-at-a-distance, operated between chunks of matter (whether planets or apples or atoms). This represented one stream of Newtonianism; the other, wary of occult powers, introduced various aethers or imponderables to explain the mechanisms of apparent cases of action-at-a-distance. But by the time of Kant's Foundations, a century after Principia, mathematical scientists working in that stream of Newtonian physics were more accustomed to manipulating forces, and hence the taint of occultism had dissipated. This was the starting point for Kant.

Space and force could readily be understood in conceptual terms, but Kant asked, "What is matter?" In the first sentence of Critique, he affirms that all knowledge begins with experience (sense data), however much reason ultimately prevails in our cognition of the world. So in Foundations he reflects on our empirical knowledge of matter. Consider a stone (a chunk of matter): It exists by filling a (three-dimensional) volume of space. Remove the stone and the (empty) space remains. We know that the stone fills the space by experience; the volume of space cannot be penetrated when the stone occupies it. So a force, specifically a repulsive force filling a volume of space, is the empirical attribute of the stone. But, of course, if that repulsive force alone were the essence of the stone, it would not continue to fill that space but instead would explode; thus an attractive force within the stone, balancing the repulsive force, is necessary. This, then, is the essence of matter: an equilibrium between attractive and repulsive forces in a volume of space, and when the attractive forces extend across and fill space they constitute what is conceived of as action-at-a-distance. Newton's trinity (space, matter, and force) is reduced to a duality: Force and space alone are the fundamental entities in the universe.

Kant's vision had a profound impact on physical theory in the nineteenth century, particularly the conceptualization of the unity and the convertibility of forces and the concept of field theory.

Science and Society: The Emigration of Scientists

Germany played a major role in the development and professionalization of physics beginning in the nineteenth century, so that by the early twentieth century, German universities and laboratories were at the forefront of the revolution taking place as relativity and quantum physics changed our picture of the world. Berlin, Leipzig, and Göttingen rivaled Cambridge and Leiden (in the Netherlands) as centers of scientific activity, with physicists often moving among the universities. These exchanges--sometimes yearly sojourns, sometimes summer programs, sometimes just weekly seminars--were important mechanisms for the free interchange and interplay of ideas. By the 1920s, this intercourse became international as physicists crossed the Atlantic in both directions. The development of several first-rate physics faculties and laboratories in the United States fueled this process. Of at least symbolic significance, Einstein was enticed by Robert Millikan, director of physics at the California Institute of Technology, to spend the winter quarter there as a visiting professor. This arrangement began in the winter of 1930-31. Other distinguished physicists, such as Bohr, Schrödinger, and Einstein's friend Paul Ehrenfest (1880-1933), had made this pilgrimage previously. Especially important in this international exchange were fellowships from the Rockefeller Foundation in the 1920s, which funded the swapping of students. For example, between 1924 and 1930, 135 postdoctoral fellowships were given to European physicists to study at a different university, and one-third came to the United States. This spurred the internationalization of physics in the twentieth century.

But all this changed dramatically in 1933. In January of that year, Hitler came to power. At the time, Einstein was in California, on his third annual trip there, and upon hearing the news he realized that he would never return to Germany. Accepting one of the first positions at the newly founded Institute for Advanced Study at Princeton, Einstein never again set foot on German soil. By the spring of 1933, numerous professors were dismissed from their university posts, mainly because of Jewish ancestry or involvement in anti-Nazi activities. The famous Princeton mathematician John von Neumann ([1903-57] who was Hungarian by birth) was in Germany in the summer of 1933 and he wrote of the "horrible" situation there, calling the expulsions "German madness." He was prophetic when he said it "will ruin German science for a generation--at least...." By April 1936, more than 1600 scholars (one-third of them scientists) were dismissed from German universities and institutions.

The result was probably the greatest "brain drain" in modern history, as scholars fled Germany and its allied countries to other European countries or to England, Canada, and especially the United States. Between 1933 and 1941, more than 100 physicists came from Europe to the United States, eight of whom were or would be Nobel prize winners. The following contrasting numbers of Nobel prizes in science are stark: From 1901 through 1939, there were 15 awarded to U.S. scientists and 35 to German scientists; whereas from 1943 through 1959, 42 were awarded to U.S. scientists and only eight to German scientists. (There were no awards during the war.) This was an extraordinary exodus.

In retrospect, the internationalism of science in the early twentieth century probably prepared and facilitated the mass movement of scientists from Nazi Germany in the 1930s. Organizations of professors were formed in England and the United States to assist refugee scientists, to find them jobs, and to support them with financial aid. Some positions were only temporary and the immigrant scientists had to search for further positions after a year or so. But in time, the new physicists and their families were assimilated into the fabric of their adopted societies. They enriched the scientific communities in the United States and Great Britain, and many became involved in the war effort against their former country and its allies.

Scientific Biography: J. Robert Oppenheimer

There is a legend, whether true or not, that while watching the atomic test at Alamogordo, Oppenheimer quoted lines from the Hindu sacred text the Bhagavad-Gita, which he knew in the original Sanskrit: "I am become Death, the destroyer of worlds." Several years later, during a conversation with President Truman, he said he had "blood on his hands."

J. Robert Oppenheimer was a precocious child who studied and read widely. As a student at Harvard, he studied classical languages as well as physics and chemistry, graduating summa cum laude in three years. He spent the next four years in Europe studying theoretical physics with some of the most eminent men of the time. When he received his Ph.D. in 1927, he had already published several papers in quantum physics. Returning to the United States in 1929, he taught at universities in California, making a mark as an exceptional teacher and a physicist. His intellectual life had been mainly academic, but with the rise of Fascism in Europe in the 1930s, he became involved with some left-wing groups.

When the U.S. government put General Leslie R. Groves in charge of the Manhattan Project aimed at building the bomb, he consulted Oppenheimer. Oppenheimer was involved with specific work in nuclear physics in California but also knew about work at laboratories elsewhere. He suggested to Groves that work on the bomb should be organized in one place, and even proposed the remote site of Los Alamos, New Mexico, where he had a ranch. Groves not only accepted his idea but also made him scientific director. Although Oppenheimer was not an experimental physicist and had little administrative experience, he proved to be a brilliant laboratory organizer. He arrived at Los Alamos in March 1943.

In May 1945, before the first test took place, Oppenheimer and three other scientists were consultants on a committee of statesmen and military representatives about the planned use of the bomb. Although there was initially some disagreement, on one point they were unanimous: They believed that dropping the bomb would swiftly end the war and result in fewer lives being lost than would result from the planned land invasion of Japan. So, it seems that the issue became how the bomb was to be used, not whether to use it at all. One idea was to drop a bomb on an isolated island as a warning. Oppenheimer wondered if such "an enormous nuclear firecracker" would convince the Japanese government to surrender. There was also the possibility that the first try would be a dud. The military wanted the bomb dropped on a military target, and it appears they convinced the scientists, including Oppenheimer. As far as is known, the scientists made no moral protestations. Thus in August 1945, Japan was bombed, twice.

A week later, Oppenheimer wrote this to a former teacher: "You will believe that this undertaking has not been without its misgivings; they are heavy on us today, when the future, which has so many elements of high promise, is yet only a stone's throw from despair. Thus the good which this work has perhaps contributed to make in the ending of the war looms very large to us, because it is there for sure" (Robert Oppenheimer: Letters and Recollections, 1995).

Oppenheimer went back to teaching in late 1945; in two years, he assumed the directorship of the Institute for Advanced Study in Princeton, where Einstein was. But he remained involved with the politics of physics because of his concern with atomic weapons. Being occupied with various attempts to control the use of atomic energy, he focused on the need for international control of weapons. He wrote and lectured widely on the topic, perhaps because of the "blood on his hands." From 1946 through 1952, he was chairman of the General Advisory Committee of the Atomic Energy Commission, which aimed at civilian control of atomic energy. He was especially ambivalent about the crash program for developing the super (hydrogen) bomb.

With the rise of anti-communist hysteria during the McCarthy era, Oppenheimer's left-wing past came back to haunt him. In December 1953, he lost his security clearance. His loyalty was questioned because of his past activities as well as his present resistance to the hydrogen bomb. Publicly disgraced, he was not officially exonerated until 1963, when he received the Fermi Award as a gesture of reconciliation by the Atomic Energy Commission.

Words to Know

Static electricity
The well-known phenomenon of electricity usually created by friction in dry air.
Vector
A quantity, such as velocity, that has both magnitude and direction; it is often symbolized as an arrow, whose length corresponds to the magnitude.
Occult force
A derisive term used in reference to those qualities of bodies, such as the healing characteristic of some substances, that are not manifest in ordinary sensation.
Eccentricity of orbits
Essentially, the departure of a planet's orbit from an exact circle, which determines the elliptical shape of the orbit. Eccentricity is a measure of how elongated the orbit is. For example, a circle has zero eccentricity; Earth's is 0.0167.
Photoelectric effect
The phenomenon in which light falling on some surfaces (such as metals) results in the emission of electrons, confirmed by a series of experiments from 1887 through 1904. The energy of the emitted electrons varies with the frequency of the light but does not depend on its intensity, contrary to the prediction of the wave theory.
Atomic spectrum
The unique spectrum produced by each chemical element, based on its atomic structure. Just as white light sent through a prism results in a colored spectrum, so light from an incandescent substance produces a spectrum of light emitted at particular wavelengths (i.e., with particular colors).
Cosmic rays
The high-energy radiation from space, beyond x-rays. The source of cosmic rays remains uncertain, but they are believed to originate in supernova explosions, the deaths of massive stars.
