Compared with earlier humans, Neanderthals had a high degree of cultural sophistication. They appear to have performed symbolic rituals, such as the burial of their dead. Neanderthal fossils - including a number of fairly complete skeletons - are quite common compared with those of earlier forms of Homo, in part because of the Neanderthal practice of intentional burial. Neanderthals also produced sophisticated types of stone tools known as Mousterian, which involved creating blanks (rough forms) from which several types of tools could be made.


Along with many physical similarities, Neanderthals differed from modern humans in several ways. The typical Neanderthal skull had a low forehead, a large nasal area (suggesting a large nose), a forward-projecting nasal and cheek region, a prominent brow ridge with a bony arch over each eye, a nonprojecting chin, and an obvious space behind the third molar (in front of the upward turn of the lower jaw).

Neanderthals were more heavily built and more prominently boned than modern humans. Other Neanderthal skeletal features included a bowing of the limb bones in some individuals, broad scapulae (shoulder blades), hip joints turned outward, a long and thin pubic bone, lower leg and arm bones that were short relative to the upper bones, and large surfaces on the joints of the toes and limb bones. Together, these traits made for a powerful, compact body of short stature; males averaged 1.7 m. (5 ft. 7 in.) tall and 84 kg. (185 lb.), and females averaged 1.5 m. (5 ft.) tall and 80 kg. (176 lb.). The short, stocky build of Neanderthals conserved heat and helped them withstand the extremely cold conditions that prevailed in temperate regions beginning about 70,000 years ago. The last known Neanderthal fossils come from western Europe and date from approximately 36,000 years ago.

At the same time as Neanderthal populations grew in number in Europe and parts of Asia, other populations of nearly modern humans arose in Africa and Asia. Scientists also commonly refer to these fossils, which are distinct from but similar to those of Neanderthals, as archaic. Fossils from the Chinese sites of Dali, Maba, and Xujiayao display the long, low cranium and large face typical of archaic humans, yet they also have features similar to those of modern people in the region. At the cave site of Jebel Irhoud, Morocco, scientists have found fossils with the long skull typical of archaic humans but also the modern traits of a higher forehead and flatter midface. Fossils of humans from East African sites older than 100,000 years, such as Ngaloba in Tanzania and Eliye Springs in Kenya, also seem to show a mixture of archaic and modern traits.

The oldest known fossils that possess skeletal features typical of modern humans date from between 130,000 and 90,000 years ago. Several key features distinguish the skulls of modern humans from those of archaic species. These features include a much smaller brow ridge, if any; a globe-shaped braincase; and a flat or only slightly projecting face of reduced size, located under the front of the braincase. Among all mammals, only humans have a face positioned directly beneath the frontal lobe (forward-most area) of the brain. As a result, modern humans tend to have a higher forehead than did Neanderthals and other archaic humans. The cranial capacity of modern humans ranges from about 1,000 to 2,000 cu. cm. (60 to 120 cu. in.), with the average being about 1,350 cu. cm. (80 cu. in.).

Scientists have found both fragmentary and nearly complete cranial fossils of early anatomically modern Homo sapiens from the sites of Singa, Sudan; Omo, Ethiopia; Klasies River Mouth, South Africa; and Skhūl Cave, Israel. Based on these fossils, many scientists conclude that modern H. sapiens had evolved in Africa by 130,000 years ago and began spreading to diverse parts of the world, by way of a route through the Near East, sometime before 90,000 years ago.

Paleoanthropologists are engaged in an ongoing debate about where modern humans evolved and how they spread around the world. Differences in opinion rest on the question of whether the evolution of modern humans took place in a small region of Africa or over a broad area of Africa and Eurasia. By extension, opinions differ as to whether modern human populations from Africa displaced all existing populations of earlier humans, eventually resulting in their extinction.

Those who think modern humans originated exclusively in Africa and then spread around the world support what is known as the out of Africa hypothesis. Those who think modern humans evolved over a large region of Eurasia and Africa support the so-called multi-regional hypothesis.

Researchers have conducted many genetic studies and carefully assessed fossils to figure out which of these hypotheses agrees more with scientific evidence. The results of this research do not entirely confirm or reject either one. Therefore, some scientists think a compromise between the two hypotheses is the best explanation. The debate between these views has implications for how scientists understand the concept of race in humans. At issue is whether the physical differences among modern humans evolved deep in the past or relatively recently. According to the out of Africa hypothesis, also known as the replacement hypothesis, early populations of modern humans migrated out of Africa to other regions and entirely replaced existing populations of archaic humans. The replaced populations would have included the Neanderthals and any surviving groups of Homo erectus. Supporters of this view note that many modern human skeletal traits evolved relatively recently - within the past 200,000 years or so - suggesting a single, common origin. Additionally, the anatomical similarities shared by all modern human populations far outweigh those shared by premodern and modern humans within particular geographic regions. Furthermore, biological research suggests that most new species of organisms, including mammals, arise from small, geographically isolated populations.

According to the multi-regional hypothesis, also known as the continuity hypothesis, the evolution of modern humans began when Homo erectus spread throughout much of Eurasia around one million years ago. Regional populations retained some unique anatomical features for hundreds of thousands of years, but they also mated with populations from neighbouring regions, exchanging heritable traits with each other. This exchange of heritable traits is known as gene flow.

Through gene flow, populations of H. erectus passed on a variety of increasingly modern characteristics, such as increases in brain size, across their geographic range. Gradually this would have resulted in the evolution of more modern-looking humans throughout Africa and Eurasia. The physical differences among people today, then, would result from hundreds of thousands of years of regional evolution. This is the concept of continuity. For instance, modern East Asian populations have some skull features that scientists also see in H. erectus fossils from that region.

Critics of the multi-regional hypothesis claim that it wrongly advocates a scientific belief in race and could be used to encourage racism. Supporters of the theory point out, however, that their position does not imply that modern races evolved in isolation from each other, or that racial differences justify racism. Instead, the theory holds that gene flow linked different populations together. These links allowed progressively more modern features, no matter where they arose, to spread from region to region and eventually become universal among humans.

Scientists have weighed the out of Africa and multi-regional hypotheses against both genetic and fossil evidence. The results do not unanimously support either one, but weigh more heavily in favour of the out of Africa hypothesis.

Geneticists have studied the amount of difference in the DNA (deoxyribonucleic acid) of different populations of humans. DNA is the molecule that contains our heritable genetic code. Differences in human DNA result from mutations in DNA structure. Some mutations result from exposure to external agents, such as solar radiation or certain chemical compounds, while others occur naturally at random.

Geneticists have calculated rates at which mutations can be expected to occur over time. Dividing the total number of genetic differences between two populations by an expected rate of mutation provides an estimate of the time when the two populations last shared a common ancestor. Many estimates of evolutionary ancestry rely on studies of the DNA in cell structures called mitochondria. This DNA is referred to as mtDNA (mitochondrial DNA). Unlike DNA from the nucleus of a cell, which codes for most of the traits an organism inherits from both parents, mtDNA passes only from a mother to her offspring. MtDNA also accumulates mutations about ten times faster than does DNA in the cell nucleus (the location of most DNA). The structure of mtDNA changes so quickly that scientists can easily measure the differences between one human population and another. Two closely related populations should have only minor differences in their mtDNA. Conversely, two very distantly related populations should have large differences in their mtDNA.
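The logic of this calculation can be written out as a simple molecular-clock formula. The sketch below is purely illustrative; the symbols are defined here and the numbers are hypothetical, not taken from any particular study:

    t = D / (2µ)

Here D is the total number of genetic differences observed between the two populations and µ is the expected mutation rate per lineage (differences accumulated per year). The factor of 2 reflects that both lineages accumulate mutations independently after splitting from their common ancestor. For example, with illustrative values of D = 40 differences and µ = 0.0001 differences per year per lineage, the estimated divergence time would be t = 40 / (2 × 0.0001) = 200,000 years.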

MtDNA research into modern human origins has produced two major findings. First, the entire amount of variation in mtDNA across human populations is small in comparison with that of other animal species. This implies that all human mtDNA originated from a single ancestral lineage - specifically, from a single female - relatively recently and has been accumulating mutations ever since. Most estimates of the mutation rate of mtDNA suggest that this female ancestor lived about 200,000 years ago. Second, the mtDNA of African populations varies more than that of peoples on other continents. This suggests that the mtDNA of African populations has been accumulating mutations for a longer time than that of populations in any other region. Because all living people appear to have inherited their mtDNA from one woman in Africa, sometimes called the Mitochondrial Eve, geneticists and anthropologists have concluded from this evidence that modern humans originated in a small population in Africa and spread out from there.

MtDNA studies have weaknesses, however, including the following four. First, the estimated rate of mtDNA mutation varies from study to study, and some estimates put the date of origin closer to 850,000 years ago, the time of Homo erectus. Second, mtDNA makes up a small part of the total genetic material that humans inherit. The rest of our genetic material - about 400,000 times more than the amount of mtDNA - came from many individuals living at the time of the African Eve, conceivably from many different regions. Third, the time at which modern mtDNA began to diversify does not necessarily coincide with the origin of modern human biological traits and cultural abilities. Fourth, the smaller amount of modern mtDNA diversity outside of Africa could result from times when European and Asian populations declined in numbers, perhaps due to climate changes.

Regardless of these criticisms, many geneticists continue to favour the out of Africa hypothesis of modern human origins. Studies of nuclear DNA also suggest an African origin for a variety of genes. Furthermore, in a remarkable series of studies in the late 1990s, scientists recovered mtDNA from the first Neanderthal fossil found in Germany and from two other Neanderthal fossils. In each case, the mtDNA does not closely match that of modern humans. This finding suggests that at least some Neanderthal populations had diverged from the line leading to modern humans by 500,000 to 600,000 years ago, and it may also suggest that Neanderthals represent a separate species from modern H. sapiens. In another study, however, mtDNA extracted from a 62,000-year-old Australian H. sapiens fossil was found to differ significantly from modern human mtDNA, suggesting a much wider range of mtDNA variation within H. sapiens than was previously believed. According to the Australian researchers, this finding lends support to the multi-regional hypothesis because it shows that different populations of H. sapiens, possibly including Neanderthals, could have evolved independently in different parts of the world.

As with genetic research, fossil evidence also does not entirely support or refute either of the competing hypotheses of modern human origins. However, many scientists see the balance of evidence favouring an African origin of modern H. sapiens within the past 200,000 years. The oldest known modern-looking skulls come from Africa and date from perhaps 130,000 years ago. The next oldest come from the Near East, where they date from about 90,000 years ago. Fossils of modern humans in Europe do not exist from before about 40,000 years ago. In addition, the first modern humans in Europe - often referred to as Cro-Magnon people - had elongated lower leg bones, as did African populations that were adapted to warm, tropical climates. This suggests that populations from warmer regions replaced those in colder European regions, such as the Neanderthals.

Fossils also show that populations of modern humans lived at the same time and in the same regions as did populations of Neanderthals and Homo erectus, but that each retained its distinctive physical features. The different groups overlapped in the Near East and Southeast Asia for between about 30,000 and 50,000 years. The maintenance of physical differences for this amount of time implies that archaic and modern humans either could not or generally did not interbreed. To some scientists, this also means that the Neanderthals belong to a separate species, H. neanderthalensis, and that migratory populations of modern humans entirely replaced archaic humans in both Europe and eastern Asia.

On the other hand, fossils of archaic and modern humans in some regions show continuity in certain physical characteristics. These similarities may indicate multi-regional evolution. For example, both archaic and modern skulls of eastern Asia have flatter cheek and nasal areas than do skulls from other regions. By contrast, the same parts of the face project forward in the skulls of both archaic and modern humans of Europe. Assuming that these traits were influenced primarily by genetic inheritance rather than environmental factors, archaic humans may have given rise to modern humans in some regions or at least interbred with migrant modern-looking humans.

Each of the competing major hypotheses of modern human origins has its strengths and weaknesses. Genetic evidence appears to support the out of Africa hypothesis. In the western half of Eurasia and in Africa, this hypothesis also seems the better explanation, particularly in regard to the apparent replacement of Neanderthals by modern populations. At the same time, the multi-regional hypothesis appears to explain some of the regional continuity found in East Asian populations.

Therefore, many paleoanthropologists advocate a theory of modern human origins that combines elements of the out of Africa and multi-regional hypotheses. Humans with modern features may have first emerged in Africa, or may have come together there as a result of gene flow with populations from other regions. These African populations may then have replaced archaic humans in certain regions, such as western Europe and the Near East. Elsewhere, however - especially in East Asia - gene flow may have occurred among local populations of archaic and modern humans, resulting in distinct and enduring regional characteristics.

All three of these views - the two competing positions and the compromise - acknowledge the strong biological unity of all people. In the multi-regional hypothesis, this unity results from hundreds of thousands of years of continued gene flow among all human populations. According to the out of Africa hypothesis, on the other hand, similarities among all living human populations result from a recent common origin. The compromise position accepts both of these as reasonable and compatible explanations of modern human origins.

The story of human evolution is as much about the development of cultural behaviour as it is about changes in physical appearance. The term culture, in anthropology, traditionally refers to all human creations and activities governed by social customs and rules. It includes elements such as technology, language, and art. Human cultural behaviour depends on the social transfer of information from one generation to the next, which in turn depends on a sophisticated system of communication, such as language.

The term culture has often been used to distinguish the behaviour of humans from that of other animals. However, some nonhuman animals also appear to have forms of learned cultural behaviours. For instance, different groups of chimpanzees use different techniques to capture termites for food using sticks. Also, in some regions chimps use stones or pieces of wood for cracking open nuts. Chimps in other regions do not practice this behaviour, although their forests have similar nut trees and materials for making tools. These regional differences resemble traditions that people pass from generation to generation. Traditions are a fundamental aspect of culture, and paleoanthropologists assume that the earliest humans also had some types of traditions.

Fossils indicate that the evolutionary line leading to us had achieved a substantially upright posture by around four million years ago, then began to increase in body size and in relative brain size around 2.5 million years ago. Those protohumans are generally known as Australopithecus africanus, Homo habilis, and Homo erectus, which apparently evolved into each other in that sequence. Although Homo erectus, the stage reached around 1.7 million years ago, was close to us modern humans in body size, its brain was still barely half the size of ours. Stone tools became common around 2.5 million years ago, but they were merely the crudest of flaked or battered stones.

Human history at last took off around 50,000 years ago. The earliest definite signs of that surge come from East African sites with standardized stone tools and the first preserved jewellery (ostrich-shell beads). Similar developments soon appear in the Near East and in southern Europe, then (some 40,000 years ago) in southwestern Europe, where abundant artifacts are associated with fully modern skeletons of people termed Cro-Magnons. Thereafter, the garbage preserved at archaeological sites rapidly becomes more and more interesting and leaves no doubt that we are dealing with biologically and behaviourally modern humans.

Human technology developed from the first stone tools, in use by two and a half million years ago, to the laser printers of the 1990s. The rate of development was imperceptibly slow at the beginning, when hundreds of thousands of years passed with no discernible change in our stone tools and with no surviving evidence for artefacts made of other materials. Today, technology advances so rapidly that it is reported in the daily newspaper.

Yet, in this long history of accelerating development, one can single out two especially significant jumps. The first, occurring between 100,000 and 50,000 years ago, probably was made possible by genetic changes in our bodies: namely, by the evolution of the modern anatomy permitting modern speech or modern brain function, or both. That jump led to bone tools, single-purpose stone tools, and compound tools. The second jump resulted from our adoption of a sedentary lifestyle, which happened at different times in different parts of the world, as early as 13,000 years ago in some areas and not even today in others. For the most part, that adoption was linked to our adoption of food production, which required us to remain close to our crops, orchards, and stored food surpluses.

However, modern humans differ from other animals, and probably many early human species, in that they actively teach each other and can pass on and accumulate unusually large amounts of knowledge. People also have a uniquely long period of learning before adulthood, and the physical and mental capacity for language. Language in all its forms - spoken, signed, and written - provides a medium for communicating vast amounts of information, much more than any other animal appears to be able to transmit through gestures and vocalizations.

Scientists have traced the evolution of human cultural behaviour through the study of archaeological artifacts, such as tools, and related evidence, such as the charred remains of cooked food. Artifacts show that throughout much of human evolution, culture has developed slowly. During the Palaeolithic, or early Stone Age, basic techniques for making stone tools changed very little for periods of well more than a million years.

Human fossils also provide information about how culture has evolved and what effects it has had on human life. For example, over the past 30,000 years, the basic anatomy of humans has undergone only one prominent change: the bones of the average human skeleton have become much smaller and thinner. Innovations in the making and use of tools and in obtaining food - results of cultural evolution - may have led to more efficient and less physically taxing lifestyles, and thus caused changes in the skeleton.

Paleoanthropologists and archaeologists have studied many topics in the evolution of human cultural behaviour. These have included the evolution of (1) social life; (2) subsistence (the acquisition and production of food); (3) the making and using of tools; (4) environmental adaptation; (5) symbolic thought and its expression through language, art, and religion; and (6) the development of agriculture and the rise of civilizations.

One of the first physical changes in the evolution of humans from apes - a decrease in the size of male canine teeth - also indicates a change in social relations. Male apes sometimes use their large canines to threaten (or sometimes fight with) other males of their species, usually over access to females, territory, or food. The evolution of small canines in Australopiths implies that males had either developed other methods of threatening each other or become more cooperative. In addition, both male and female Australopiths had small canines, indicating a reduction of sexual dimorphism from that in apes. Yet, although sexual dimorphism in canine size decreased in Australopiths, males were still much larger than females. Thus, male Australopiths might have competed aggressively with each other based on sheer size and strength, and the social life of humans may not have differed much from that of apes until later times.

Scientists believe that several of the most important changes from apelike to characteristically human social life occurred in species of the genus Homo, whose members show even less sexual dimorphism. These changes, which may have occurred at different times, included (1) prolonged maturation of infants, including an extended period during which they required intensive care from their parents; (2) special bonds of sharing and exclusive mating between particular males and females, called pair-bonding; and (3) the focus of social activity at a home base, a safe refuge in a special location known to family or group members.

Humans, who have a large brain, have a prolonged period of infant development and childhood because the brain takes a long time to mature. Since the australopith brain was not much larger than that of a chimp, some scientists think that the earliest humans had a more apelike rate of growth, which is far more rapid than that of modern humans. This view is supported by studies of australopith fossils looking at tooth development - a good indicator of overall body development.

In addition, the human brain becomes very large as it develops, so a woman must give birth to a baby at an early stage of development in order for the infant's head to fit through her birth canal. Thus, human babies require a long period of care to reach a stage of development at which they depend less on their parents. In contrast with a modern female, a female australopith could give birth to a baby at an advanced stage of development because its brain would not be too large to pass through the birth canal. The need to give birth early - and therefore to provide more infant care - may have evolved around the time of the middle Homo species Homo ergaster. This species had a brain significantly larger than that of the Australopiths, but a narrow birth canal.

Pair-bonding, usually of a short duration, occurs in a variety of primate species. Some scientists speculate that prolonged bonds developed in humans along with increased sharing of food. Among primates, humans have a distinct type of food-sharing behaviour. People will delay eating food until they have returned with it to the location of other members of their social group. This type of food sharing may have arisen at the same time as the need for intensive infant care, probably by the time of H. ergaster. By devoting himself to a particular female and sharing food with her, a male could increase the chances of survival for his own offspring.

Humans have lived as foragers for millions of years. Foragers obtain food when and where it is available over a broad territory. Modern-day foragers (also known as hunter-gatherers), such as the San people of the Kalahari Desert in southern Africa, set up central campsites, or home bases, and divide work duties between men and women. Women gather readily available plant and animal foods, while men take on the often less successful task of hunting. For most of the time since the ancestors of modern humans diverged from the ancestors of the living great apes, around seven million years ago, all humans on Earth fed themselves exclusively by hunting wild animals and gathering wild plants, as the Blackfeet still did in the 19th century. It was only within the last 11,000 years that some peoples turned to what is termed food production: that is, domesticating wild animals and plants and eating the resulting livestock and crops. Today, most people on Earth consume food that they produced themselves or that someone else produced for them. At current rates of change, within the next decade the few remaining bands of hunter-gatherers will abandon their ways, disintegrate, or die out, thereby ending our millions of years of commitment to the hunter-gatherer lifestyle. Those few peoples who remained hunter-gatherers into the 20th century escaped replacement by food producers because they were confined to areas not fit for food production, especially deserts and Arctic regions. Within the present decade, even they will have been seduced by the attractions of civilization, settled down under pressure from bureaucrats or missionaries, or succumbed to germs.



Nevertheless, female and male family members and relatives bring their food together to share at their home base. The modern form of the home base - which also serves as a haven for raising children and caring for the sick and elderly - may have first developed with middle Homo after about 1.7 million years ago. However, the first evidence of hearths and shelters - common to all modern home bases - comes from only after 500,000 years ago. Thus, a modern form of social life may not have developed until late in human evolution.

Human subsistence refers to the types of food humans eat, the technology used in and methods of obtaining or producing food, and the ways in which social groups or societies organize themselves for getting, making, and distributing food. For millions of years, humans probably fed on-the-go, much as other primates do. The lifestyle associated with this feeding strategy is generally organized around small, family-based social groups that take advantage of different food sources at different times of year.

The early human diet probably resembled that of closely related primate species. The great apes eat mostly plant foods. Many primates also eat easily obtained animal foods such as insects and bird eggs. Among the few primates that hunt, chimpanzees will prey on monkeys and even small gazelles. The first humans probably also had a diet based mostly on plant foods. In addition, they undoubtedly ate some animal foods and might have done some hunting. Human subsistence began to diverge from that of other primates with the production and use of the first stone tools. With this development, the meat and marrow (the inner, fat-rich tissue of bones) of large mammals became a part of the human diet. Thus, with the advent of stone tools, the diet of early humans became distinguished in an important way from that of apes.

Scientists have found broken and butchered fossil bones of antelopes, zebras, and other comparably sized animals at the oldest archaeological sites, which date from about 2.5 million years ago. With the evolution of late Homo, humans began to hunt even the largest animals on Earth, including mastodons and mammoths, members of the elephant family. Agriculture and the domestication of animals arose only in the recent past, with H. sapiens.

Paleoanthropologists have debated whether early members of the modern human genus were aggressive hunters, peaceful plant gatherers, or opportunistic scavengers. Many scientists once thought that predation and the eating of meat had strong effects on early human evolution. This hunting hypothesis suggested that early humans in Africa survived particularly arid periods by aggressively hunting animals with primitive stone or bone tools. Supporters of this hypothesis thought that hunting and competition with carnivores powerfully influenced the evolution of human social organization and behaviour; toolmaking; anatomy, such as the unique structure of the human hand; and intelligence.

Beginning in the 1960s, studies of apes cast doubt on the hunting hypothesis. Researchers discovered that chimpanzees cooperate in hunts of at least small animals, such as monkeys. Hunting did not, therefore, entirely distinguish early humans from apes, and hunting alone may not have determined the path of early human evolution. Some scientists instead argued in favour of the importance of food-sharing in early human life. According to a food-sharing hypothesis, cooperation and sharing within family groups - instead of aggressive hunting - strongly influenced the path of human evolution.

Scientists once thought that archaeological sites as much as two million years old provided evidence to support the food-sharing hypothesis. Some of the oldest archaeological sites were places where humans brought food and stone tools together. Scientists thought that these sites represented home bases, with many of the social features of modern hunter-gatherer campsites, including the sharing of food between pair-bonded males and females.

Criticism of the food-sharing hypothesis resulted from more careful study of animal bones from the early archaeological sites. Microscopic analysis of these bones revealed the marks of human tools and carnivore teeth, indicating that both humans and potential predators, such as hyenas, cats, and jackals, were active at these sites. This evidence suggested that what scientists had thought were home bases where early humans shared food were in fact food-processing sites that humans abandoned to predators. Thus, evidence did not clearly support the idea of food-sharing among early humans.

The new research also suggested a different view of early human subsistence: that early humans scavenged meat and bone marrow from dead animals and did little hunting. According to this scavenging hypothesis, early humans opportunistically took parts of animal carcasses left by predators, and then used stone tools to remove marrow from the bones.

Observations that many animals, such as antelope, often die off in the dry season make the scavenging hypothesis quite plausible. Early toolmakers would have had plenty of opportunity to scavenge animal fat and meat during dry times of the year. However, other archaeological studies - and a better appreciation of the importance of hunting among chimpanzees - suggest that the scavenging hypothesis is too narrow. Many scientists now believe that early humans both scavenged and hunted. Evidence of carnivore tooth marks on bones cut by early human toolmakers suggests that the humans scavenged at least the larger of the animals they ate. They also ate a variety of plant foods. Some disagreement remains, however, as to how much early humans relied on hunting, especially the hunting of smaller animals.

Scientists debate when humans first began hunting on a regular basis. For instance, elephant fossils found with tools made by middle Homo once led researchers to the idea that members of this species were hunters of big game. However, the simple association of animal bones and tools at the same site does not necessarily mean that early humans had killed the animals or eaten their meat. Animals may die in many ways, and natural forces can accidentally place fossils next to tools. Recent excavations at Olorgesailie, Kenya, show that H. erectus cut meat from elephant carcasses but do not reveal whether these humans were regular or specialized hunters.

Humans who lived outside of Africa - especially in colder temperate climates - almost certainly needed to eat more meat than their African counterparts. Humans in temperate Eurasia would have had to learn which plants they could safely eat, and the number of available plant foods would drop significantly during the winter. Still, although scientists have found very few fossils of edible or eaten plants at early human sites, early inhabitants of Europe and Asia probably did eat plant foods in addition to meat.

Sites that provide the clearest evidence of early hunting include Boxgrove, England, where about 500,000 years ago people trapped a great number of large game animals between a watering hole and the side of a cliff and then slaughtered them. At Schöningen, Germany, a site about 400,000 years old, scientists have found wooden spears with sharp ends that were well designed for throwing and probably used in hunting large animals.

Neanderthals and other archaic humans seem to have eaten whatever animals were available at a particular time and place. So, for example, in European Neanderthal sites, the number of bones of reindeer (a cold-weather animal) and red deer (a warm-weather animal) changed depending on what the climate had been like. Neanderthals probably also combined hunting and scavenging to obtain animal protein and fat.

For at least the past 100,000 years, various human groups have eaten foods from the ocean or coast, such as shellfish and some sea mammals and birds. Others began fishing in interior rivers and lakes. Between probably 90,000 and 80,000 years ago people in Katanda, in what is now the Democratic Republic of the Congo, caught large catfish using a set of barbed bone points, the oldest known specialized fishing implements. The oldest stone tips for arrows or spears date from about 50,000 to 40,000 years ago. These technological advances, probably first developed by early modern humans, indicate an expansion in the kinds of foods humans could obtain.

Beginning 40,000 years ago humans began making even more significant advances in hunting dangerous animals and large herds, and in exploiting ocean resources. People cooperated in large hunting expeditions in which they killed great numbers of reindeer, bison, horses, and other animals of the expansive grasslands that existed at that time. In some regions, people became specialists in hunting certain kinds of animals. The familiarity these people had with the animals they hunted appears in sketches and paintings on cave walls, dating from as much as 32,000 years ago. Hunters also used the bones, ivory, and antlers of their prey to create art and beautiful tools. In some areas, such as the central plains of North America that once teemed with a now-extinct type of large bison (Bison occidentalis), hunting may have contributed to the extinction of entire species.

The making and use of tools alone probably did not distinguish early humans from their ape predecessors. Instead, humans made the important breakthrough of using one tool to make another. Specifically, they developed the technique of precisely hitting one stone against another, known as knapping. Stone toolmaking characterized the period sometimes referred to as the Stone Age, which began at least 2.5 million years ago in Africa and lasted until the development of metal tools within the last 7,000 years (at different times in different parts of the world). Although early humans may have made stone tools before 2.5 million years ago, toolmakers may not have remained long enough in one spot to leave clusters of tools that an archaeologist would notice today.

The earliest simple form of stone toolmaking involved breaking and shaping an angular rock by hitting it with a palm-sized round rock known as a hammerstone. Scientists refer to tools made in this way as Oldowan, after Olduvai Gorge in Tanzania, a site from which many such tools have come. The Oldowan tradition lasted for about one million years. Oldowan tools include large stones with a chopping edge, and small, sharp flakes that could be used to scrape and slice. Sometimes Oldowan toolmakers used anvil stones (flat rocks found or placed on the ground) on which hard fruits or nuts could be broken open. Chimpanzees are known to do this today.

Scientists once thought that Oldowan toolmakers intentionally produced several different types of tools. It now appears that differences in the shapes of larger tools were byproducts of detaching flakes from a variety of natural rock shapes. Learning the skill of Oldowan toolmaking required careful observation, but not necessarily instruction or language. Thus, Oldowan tools were simple, and their makers used them for such purposes as cutting up animal carcasses, breaking bones to obtain marrow, cleaning hides, and sharpening sticks for digging up edible roots and tubers.

Oldowan toolmakers sought out the best stones for making tools and carried them to food-processing sites. At these sites, the toolmakers would butcher carcasses and eat the meat and marrow, thus avoiding any predators that might return to a kill. This behaviour of bringing food and tools together contrasts with an eat-as-you-go strategy of feeding commonly seen in other primates.

The Acheulean toolmaking tradition, which began sometime between 1.7 million and 1.5 million years ago, produced increasingly symmetrical tools, most of which scientists refer to as hand-axes and cleavers. Acheulean toolmakers, such as Homo erectus, also worked with much larger pieces of stone than did Oldowan toolmakers. The symmetry and size of later Acheulean tools show increased planning and design - and thus probably increased intelligence - on the part of the toolmakers. The Acheulean tradition continued for more than 1.35 million years.

The next significant advances in stone toolmaking were made by at least 200,000 years ago. One of these methods, known as the prepared core technique (called the Levallois technique in Europe), involved carefully and exactingly knocking off small flakes around one surface of a stone and then striking it from the side to produce a preformed tool blank, which could then be worked further. Within the past 40,000 years, modern humans developed the most advanced stone toolmaking techniques. The so-called prismatic-blade core technique involved removing the top from a stone, leaving a flat platform, and then breaking off multiple blades down the sides of the stone. Each blade had a triangular cross-section, giving it excellent strength. Using these blades as blanks, people made exquisitely shaped spearheads, knives, and numerous other kinds of tools. The most advanced stone tools also exhibit distinct and consistent regional differences in style, indicating a high degree of cultural diversity.

Early humans experienced dramatic shifts in their environments over time. Fossilized plant pollen and animal bones, along with the chemistry of soils and sediments, reveal much about the environmental conditions to which humans had to adapt.

By eight million years ago, the continents of the world, which move over very long periods, had come to the positions they now occupy. However, the crust of the Earth has continued to move since that time. These movements have dramatically altered landscapes around the world. Important geological changes that affected the course of human evolution include those in southern Asia that formed the Himalayan mountain chain and the Tibetan Plateau, and those in eastern Africa that formed the Great Rift Valley. The formation of major mountain ranges and valleys led to changes in wind and rainfall patterns. In many areas dry seasons became more pronounced, and in Africa conditions became generally cooler and drier.

By five million years ago, the amount of fluctuation in global climate had increased. Temperature fluctuations became quite pronounced during the Pliocene Epoch (five million to 1.6 million years ago). During this time the world entered a period of intense cooling called an ice age, which began about 2.8 million years ago. Ice ages cycle through colder phases known as glacials (times when glaciers form) and warmer phases known as interglacials (during which glaciers melt). During the Pliocene, glacial and interglacial phases each lasted about 40,000 years. The Pleistocene Epoch (1.6 million to 10,000 years ago), in contrast, had much larger and longer ice age fluctuations. For instance, beginning about 700,000 years ago, these fluctuations repeated roughly every 100,000 years.

Between five million and two million years ago, a mixture of forests, woodlands, and grassy habitats covered most of Africa. Eastern Africa entered a significant drying period around 1.7 million years ago, and after one million years ago large parts of the African landscape turned to grassland. So the early Australopiths and early Homo lived in relatively wooded places, whereas Homo ergaster and H. erectus lived in areas of Africa that were more open. Early human populations encountered many new and different environments when they spread beyond Africa, including colder temperatures in the Near East and bamboo forests in Southeast Asia. By about 1.4 million years ago, populations had moved into the temperate zone of northeast Asia, and by 800,000 years ago they had dispersed into the temperate latitudes of Europe. Although these first excursions to latitudes of 40° north and higher may have occurred during warm climate phases, these populations also must have encountered long seasons of cold weather.

All of these changes - dramatic shifts in the landscape, changing rainfall and drying patterns, and temperature fluctuations - posed challenges to the immediate and long-term survival of early human populations. Populations in different environments evolved different adaptations, which in part explains why more than one species existed at the same time during much of human evolution.

Some early human adaptations to new climates involved changes in physical (anatomical) form. For example, the physical adaptation of having a tall, lean body such as that of H. ergaster - with lots of skin exposed to cooling winds - would have dissipated heat very well. This adaptation probably helped the species to survive in the hotter, more open environments of Africa around 1.7 million years ago. Conversely, the short, wide bodies of the Neanderthals would have conserved heat, helping them to survive in the ice age climates of Europe and western Asia.

Increases in the size and complexity of the brain, however, made early humans progressively better at adapting through changes in cultural behaviour. The largest of these brain-size increases occurred over the past 700,000 years, a period during which global climates and environments fluctuated dramatically. Human cultural behaviour also evolved more quickly during this period, most likely in response to the challenges of coping with unpredictable and changeable surroundings.

Humans have always adapted to their environments by adjusting their behaviour. For instance, early Australopiths moved both in the trees and on the ground, which probably helped them survive environmental fluctuations between wooded and more open habitats. Early Homo adapted by making stone tools and transporting their food over long distances, thereby increasing the variety and quantities of different foods they could eat. An expanded and flexible diet would have helped these toolmakers survive unexpected changes in their environment and food supply.

When populations of H. erectus moved into the temperate regions of Eurasia, they faced new challenges to survival. During the colder seasons they had either to move away or to seek shelter, such as in caves. Some of the earliest definitive evidence of cave dwellers dates from around 800,000 years ago at the site of Atapuerca in northern Spain. This site may have been home to early H. heidelbergensis populations. H. erectus also used caves for shelter.

Eventually, early humans learned to control fire and to use it to create warmth, cook food, and protect themselves from other animals. The oldest known fire hearths date from between 450,000 and 300,000 years ago, at sites such as Bilzingsleben, Germany; Vértesszőlős, Hungary; and Zhoukoudian (Chou-k'ou-tien), China. African sites as old as 1.6 million to 1.2 million years contain burned bones and reddened sediments, but many scientists find such evidence too ambiguous to prove that humans controlled fire. Early populations in Europe and Asia may also have worn animal hides for warmth during glacial periods. The oldest known bone needles, which indicate the development of sewing and tailored clothing, date from about 30,000 to 26,000 years ago.

Behaviour relates directly to the development of the human brain, and particularly the cerebral cortex, the part of the brain that allows abstract thought, beliefs, and expression through language. Humans communicate through the use of symbols - ways of referring to things, ideas, and feelings that communicate meaning from one individual to another but that need not have any direct connection to what they identify. For instance, a word - one type of symbol - usually has no direct connection to the thing or idea it represents; the relationship between the two is purely abstract.

People can also paint abstract pictures or play pieces of music that evoke emotions or ideas, even though emotions and ideas have no form or sound. In addition, people can conceive of and believe in supernatural beings and powers - abstract concepts that symbolize real-world events such as the creation of Earth and the universe, the weather, and the healing of the sick. Thus, symbolic thought lies at the heart of three hallmarks of modern human culture: language, art, and religion.

In language, people creatively join words together in an endless variety of sentences - each with a distinct meaning - according to a set of mental rules, or grammar. Language provides the ability to communicate complex concepts. It also allows people to exchange information about both past and future events, about objects that are not present, and about complex philosophical or technical concepts.

Language gives people many adaptive advantages, including the ability to plan, to communicate the location of food or dangers to other members of a social group, and to tell stories that unify a group, such as mythologies and histories. However, words, sentences, and languages cannot be preserved like bones or tools, so the evolution of language is one of the most difficult topics to investigate through scientific study.

It appears that modern humans have an inborn instinct for language. Under normal conditions it is almost impossible for a person not to develop language, and people everywhere go through the same stages of increasing language skill at about the same ages. While people appear to have inborn genetic information for developing language, they learn specific languages based on the cultures from which they come and the experiences they have in life.

The ability of humans to have language depends on the complex structure of the modern brain, which has many interconnected, specific areas dedicated to the development and control of language. The complexity of the brain structures necessary for language suggests that it probably took a long time to evolve. While paleoanthropologists would like to know when these important parts of the brain evolved, endocasts (inside impressions) of early human skulls do not provide enough detail to show this.

Some scientists think that even the early Australopiths had some ability to understand and use symbols. Support for this view comes from studies with chimpanzees. A few chimps and other apes have been taught to use picture symbols or American Sign Language for simple communication. Nevertheless, it appears that language - as well as art and religious ritual - became a vital aspect of human life only during the past 100,000 years, primarily within our own species.

Humans also express symbolic thought through many forms of art, including painting, sculpture, and music. The oldest known object of possible symbolic and artistic value dates from about 250,000 years ago and comes from the site of Berekhat Ram, Israel. Scientists have interpreted this object, a figure carved into a small piece of volcanic rock, as a representation of the outline of a female body. Only a few other possible art objects are known from between 200,000 and 50,000 years ago. These items, from western Europe and usually attributed to Neanderthals, include two simple pendants - a tooth and a bone with bored holes - and several grooved or polished fragments of tooth and bone.

Sites dating from at least 400,000 years ago contain fragments of red and black pigment. Humans might have used these pigments to decorate bodies or perishable items, such as wooden tools or clothing of animal hides, but such evidence would not have survived to today. Solid evidence of the sophisticated use of pigments for symbolic purposes - such as in religious rituals - comes only from after 40,000 years ago. From early in this period, researchers have found carefully made crayons used in painting and evidence that humans burned pigments to create a range of colours.

People began to create and use advanced types of symbolic objects between about 50,000 and 30,000 years ago. Much of this art appears to have been used in rituals - possibly ceremonies to ask spirit beings for a successful hunt. The archaeological record shows a tremendous blossoming of art between 30,000 and 15,000 years ago. During this period people adorned themselves with intricate jewellery of ivory, bone, and stone. They carved beautiful figurines representing animals and human forms. Many carvings, sculptures, and paintings depict stylized images of the female body. Some scientists think such female figurines represent fertility.

Early wall paintings made sophisticated use of texture and colour. The area of what is now southern France contains many famous sites of such paintings. These include the caves of Chauvet, which contain art more than 30,000 years old, and Lascaux, in which paintings date from as much as 18,000 years ago. In some cases, artists painted on walls that can be reached only with special effort, such as by crawling. The act of getting to these paintings gives them a sense of mystery and ritual, as it must have to the people who originally viewed them, and archaeologists refer to some of the most extraordinary painted chambers as sanctuaries. Yet no one knows for sure what meanings these early paintings and engravings had for the people who made them.

Graves from Europe and western Asia indicate that the Neanderthals were the first humans to bury their dead. Some sites contain very shallow graves, which group or family members may have dug simply to remove corpses from sight. In other cases it appears that groups may have observed rituals of grieving for the dead or communicating with spirits. Some researchers have claimed that grave goods, such as meaty animal bones or flowers, had been placed with buried bodies, suggesting that some Neanderthal groups might have believed in an afterlife. In a large proportion of Neanderthal burials, the corpse had its legs and arms drawn in close to its chest, which could indicate a ritual burial position.

Other researchers have challenged these interpretations, however. They suggest that perhaps the Neanderthals had practical rather than religious reasons for positioning dead bodies. For instance, a body manipulated into a fetal position would need only a small hole for burial, making the job of digging a grave easier. In addition, the animal bones and flower pollen near corpses could have been deposited by accident or without religious intention.

Many scientists once thought that fossilized bones of cave bears (a now-extinct species of large bear) found in Neanderthal caves indicated that these people had what has been referred to as a cave bear cult, in which they worshipped the bears as powerful spirits. However, after careful study researchers concluded that the cave bears probably died while hibernating and that Neanderthals did not collect their bones or worship them. Considering current evidence, the case for religion among Neanderthals remains controversial.

One of the most important developments in human cultural behaviour occurred when people began to domesticate (control the breeding of) plants and animals. Domestication and the advent of agriculture led to the development of dozens of staple crops (foods that form the basis of an entire diet) in temperate and tropical regions around the world. Almost the entire population of the world today depends on just four of these major crops: wheat, rice, corn, and potatoes.

The growth of farming and animal herding initiated one of the most remarkable changes ever in the relationship between humans and the natural environment. The change first began just 10,000 years ago in the Near East and has accelerated very rapidly since then. It also occurred independently in other places, including areas of Mexico, China, and South America. Since the first domestication of plants and animals, many species over large areas of the planet have come under human control. The overall number of plant and animal species has decreased, while the populations of a few species needed to support large human populations have grown immensely. In areas dominated by people, interactions between plants and animals usually fall under the control of a single species - Homo sapiens.

The rise of civilizations - the large and complex types of societies in which most people still live today - developed along with surplus food production. People of high status eventually used food surpluses as a way to pay for labour and to create alliances among groups, often against other groups. In this way, large villages could grow into city-states (urban centres that governed themselves) and eventually empires covering vast territories. With surplus food production, many people could work exclusively in political, religious, or military positions, or in artistic and various skilled vocations. Command of food surpluses also enabled rulers to control labourers, such as in slavery. All civilizations developed based on such hierarchical divisions of status and vocation.

The earliest civilization arose more than 7,000 years ago in Sumer, in what is now Iraq. Sumer grew powerful and prosperous by 5,000 years ago, when it centred on the city-state of Ur. The region containing Sumer, known as Mesopotamia, was the same area in which people had first domesticated animals and plants. Other centres of early civilization include the Nile Valley of northeast Africa, the Indus Valley of South Asia, the Yellow River Valley of East Asia, the Oaxaca and Mexico valleys and the Yucatán region of Central America, and the Andean region of South America.

All early civilizations had some common features. Some of these included a bureaucratic political body, a military, a body of religious leadership, large urban centres, monumental buildings and other works of architecture, networks of trade, and food surpluses created through extensive systems of farming. Many early civilizations also had systems of writing, numbers and mathematics, and astronomy (with calendars); road systems; a formalized body of law; and facilities for education and the punishment of crimes. With the rise of civilizations, human evolution entered a phase vastly different from all that came before. Before this time, humans had lived in small, family-centred groups essentially exposed to and controlled by forces of nature. Several thousand years after the rise of the first civilizations, most people now live in societies of millions of unrelated people, all separated from the natural environment by houses, buildings, automobiles, and numerous other inventions and technologies. Culture will continue to evolve quickly and in unforeseen directions, and these changes will, in turn, influence the physical evolution of Homo sapiens and any other human species to come.

One direction such thinking has taken is evolutionary ethics - the attempt to base ethical reasoning on presumed facts about evolution. The movement is particularly associated with Spencer. Given the premise that later elements in an evolutionary path are better than earlier ones, the application of the principle then requires seeing western society, laissez-faire capitalism, or another object of approval as more evolved than more 'primitive' social forms. Neither the principle nor the application commands much respect. The version of evolutionary ethics called 'social Darwinism' emphasized the struggle for natural selection and drew the conclusion that we should glorify and assist such struggle, usually by enhancing competitive and aggressive relations between people in society, or between societies themselves. More recently, the relations between evolution and ethics have been rethought in the light of biological discoveries concerning altruism and kin selection.

Sociobiology nevertheless deserves mention. It is the academic discipline best known through the work of Edward O. Wilson, who coined the term in his Sociobiology: The New Synthesis (1975). The approach to human behaviour is based on the premise that all social behaviour has a biological basis, and it seeks to understand that basis in terms of genetic encoding for features that are themselves selected for through evolutionary history. The philosophical problem is essentially one of methodology: finding criteria for identifying features that can usefully be explained in this way, and finding criteria for assessing the various genetic stories that might provide such explanations. Among the features proposed for this kind of explanation are male dominance, male promiscuity versus female fidelity, propensities to sympathy and other emotions, and limited altruism. The approach has been accused of ignoring the influence of environmental and social factors in moulding people's characteristics - for example, at the limit of silliness, by postulating a 'gene for poverty'. However, there is no need for the approach to commit such errors, since the feature explained sociobiologically may be indexed to environmental considerations: for instance, it may be a propensity to develop some feature in some social or other environment (or even a propensity to develop propensities . . . ). The remaining problem is to separate genuine explanation from speculative, merely suggestive stories, which may or may not identify real selective mechanisms.

Scientists are often portrayed as unbiased observers who use the scientific method to conclusively confirm or falsify various theories - experts with no preconceptions who gather data and logically derive theories from objective observations. On this view, one great strength of science is that it is self-correcting, because scientists readily abandon theories once they have been shown to be useless or irrational. Although many people have accepted this eminent view of science, it is almost completely untrue. Data can neither conclusively confirm nor conclusively falsify theories, there really is no single thing that is the scientific method, data become subjective in practice, and scientists have displayed a surprisingly fierce loyalty to their theories. There have been many misconceptions about what science is and what science is not.

Science is the systematic study of phenomena that others can examine, test, and verify. The word science is derived from the Latin word scire, meaning 'to know.' From its beginnings, science has developed into one of the greatest and most influential fields of human endeavour. Today different branches of science investigate almost everything that can be observed or detected, and science as a whole shapes the way we understand the universe, our planet, ourselves, and other living things.

Science develops through objective analysis, instead of through personal belief. Knowledge gained in science accumulates as time goes by, building on work performed earlier. Some of this knowledge, such as our understanding of numbers, stretches back to the time of ancient civilizations, when scientific thought first began. Other scientific knowledge - such as our understanding of genes that cause cancer or of quarks (the smallest known building blocks of matter) - dates back less than 50 years. However, in all fields of science, old or new, researchers use the same systematic approach, known as the scientific method, to add to what is known.

During scientific investigations, scientists put together and compare new discoveries and existing knowledge. Commonly, new discoveries extend what is currently accepted, providing further evidence that existing ideas are correct. For example, in 1676 the English physicist Robert Hooke discovered that elastic objects, such as metal springs, stretch in proportion to the force that acts on them. Despite all the advances made in physics since 1676, this simple law still holds true.
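
As a rough illustration (not Hooke's own formulation), the proportionality can be written F = kx and checked numerically; the spring constant below is an invented value chosen only for the sketch:

```python
# A minimal sketch of Hooke's law, F = k * x: the force on an elastic object
# is proportional to its extension. The spring constant is hypothetical.

def spring_force(k: float, x: float) -> float:
    """Force in newtons for a spring of constant k (N/m) stretched x metres."""
    return k * x

k = 50.0  # assumed spring constant, N/m
for x in (0.1, 0.2, 0.4):
    print(f"extension {x:.1f} m -> force {spring_force(k, x):.1f} N")
# Doubling the extension doubles the force, as Hooke observed.
```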

Scientists use existing knowledge in new scientific investigations to predict how things will behave. For example, a scientist who knows the exact dimensions of a lens can predict how the lens will focus a beam of light. In the same way, by knowing the exact makeup and properties of two chemicals, a researcher can predict what will happen when they combine. Sometimes scientific predictions go much further by describing objects or events that are not yet known. An outstanding instance occurred in 1869, when the Russian chemist Dmitry Mendeleyev drew up a periodic table of the elements arranged to illustrate patterns of recurring chemical and physical properties. Mendeleyev used this table to predict the existence and describe the properties of several elements unknown in his day, and when those elements were discovered in later years, his predictions proved correct.

In science, new findings can be just as important when current ideas are shown to be wrong. A classic case of this occurred early in the 20th century, when the German geologist Alfred Wegener suggested that the continents were once connected, a theory known as continental drift. At the time, most geologists discounted Wegener's ideas, because they believed that the Earth's crust was fixed. However, following the discovery of plate tectonics in the 1960's, in which scientists found that the Earth's crust is actually made of moving plates, continental drift became an important part of geology.

Through advances like these, scientific knowledge is constantly added to and refined. As a result, science gives us an ever more detailed insight into the way the world around us works.

For a large part of recorded history, science had little bearing on people's everyday lives. Scientific knowledge was gathered for its own sake, and it had few practical applications. However, with the dawn of the Industrial Revolution in the 18th century, this rapidly changed. Today, science affects the way we live, largely through technology - the use of scientific knowledge for practical purposes.

Some forms of technology have become so well established that forgetting the great scientific achievements that they represent is easy. The refrigerator, for example, owes its existence to a discovery that liquids take in energy when they evaporate, a phenomenon known as latent heat. The principle of latent heat was first exploited in a practical way in 1876, and the refrigerator has played a major role in maintaining public health ever since. The first automobile, dating from the 1880's, used many advances in physics and engineering, including reliable ways of generating high-voltage sparks, while the first computers emerged in the 1940's from simultaneous advances in electronics and mathematics.

Other fields of science also play an important role in the things we use or consume every day. Research in food technology has created new ways of preserving and flavouring what we eat. Research in industrial chemistry has created a vast range of plastics and other synthetic materials, which have thousands of uses in the home and in industry. Synthetic materials are easily formed into complex shapes and can be used to make machine, electrical, and automotive parts, scientific and industrial instruments, decorative objects, containers, and many other items. Alongside these achievements, science has also created technology that helps save human life. The kidney dialysis machine enables many people to survive kidney diseases that would once have proved fatal, and artificial valves allow sufferers of coronary heart disease to return to active living. Biochemical research is responsible for the antibiotics and vaccinations that protect us from infectious diseases, and for a wide range of other drugs used to combat specific health problems. As a result, the majority of people on the planet now live longer and healthier lives than ever before.

However, scientific discoveries can also have a negative impact on human affairs. Over the last hundred years, some technological advances that make life easier or more enjoyable have proved to have unwanted and often unexpected long-term effects. Industrial and agricultural chemicals pollute the global environment, even in places as remote as Antarctica, and city air is contaminated by toxic gases from vehicle exhausts. The increasing pace of innovation means that products become rapidly obsolete, adding to a rising tide of waste. Most significantly of all, the burning of fossil fuels such as coal, oil, and natural gas releases into the atmosphere carbon dioxide and other substances known as greenhouse gases. These gases have altered the composition of the entire atmosphere, producing global warming and the prospect of major climate change in years to come.

Science has also been used to develop technology that raises complex ethical questions. This is particularly true in the fields of biology and medicine. Research involving genetic engineering, cloning, and in vitro fertilization gives scientists the unprecedented power to create new life, or to devise new forms of living things. At the other extreme, science can also generate technology that is deliberately designed to harm or to kill. The fruits of this research include chemical and biological warfare, and nuclear weapons, by far the most destructive weapons that the world has ever known.

Scientific research can be divided into basic science, also known as pure science, and applied science. In basic science, scientists working primarily at academic institutions pursue research simply to satisfy the thirst for knowledge. In applied science, scientists at industrial corporations conduct research to achieve some kind of practical or profitable gain.

In practice, the division between basic and applied science is not always clear-cut. This is because discoveries that initially seem to have no practical use often develop one as time goes by. For example, superconductivity, the ability to conduct electricity with no resistance, was little more than a laboratory curiosity when Dutch physicist Heike Kamerlingh Onnes discovered it in 1911. Today superconducting electromagnets are used in several important applications, from diagnostic medical equipment to powerful particle accelerators.

Scientists study the origin of the solar system by analysing meteorites and collecting data from satellites and space probes. They search for the secrets of life processes by observing the activity of individual molecules in living cells. They observe the patterns of human relationships in the customs of aboriginal tribes. In each of these varied investigations the questions asked and the means employed to find answers are different. All the inquiries, however, share a common approach to problem solving known as the scientific method. Scientists may work alone or they may collaborate with other scientists. Always, a scientist’s work must measure up to the standards of the scientific community. Scientists submit their findings to science forums, such as science journals and conferences, to subject the findings to the scrutiny of their peers.

Whatever the aim of their work, scientists use the same underlying steps to organize their research: (1) they make detailed observations about objects or processes, either as they occur in nature or as they take place during experiments; (2) they collect and analyse the information observed; and (3) they formulate a hypothesis that explains the behaviour of the phenomena observed.

A scientist begins an investigation by observing an object or an activity. Observations typically involve one or more of the human senses - hearing, sight, smell, taste, and touch. Scientists typically use tools to aid in their observations. For example, a microscope helps them view objects too small to be seen with the unaided human eye, while a telescope helps them view objects too far away to be seen by the unaided eye.

Scientists typically apply their observation skills in an experiment. An experiment is any kind of trial that enables scientists to control and change at will the conditions under which events occur. It can be something extremely simple, such as heating a solid to see when it melts, or something far more complex, such as bouncing a radio signal off the surface of a distant planet. Scientists typically repeat experiments, sometimes many times, in order to be sure that the results were not affected by unforeseen factors.

Most experiments involve real objects in the physical world, such as electric circuits, chemical compounds, or living organisms. However, with the rapid progress in electronics, some experiments can now be carried out using computer simulations instead. If they are carefully constructed, these simulations or models can accurately predict how real objects will behave.

One advantage of a simulation is that it allows experiments to be conducted without any risks. Another is that it can alter the apparent passage of time, speeding up or slowing natural processes. This enables scientists to investigate things that happen very gradually, such as evolution in simple organisms, or ones that happen almost instantaneously, such as collisions or explosions.
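
As a minimal sketch of the time-compression idea, the toy model below steps an invented population through a century of simulated time almost instantly; the growth rate and starting numbers are assumptions for illustration, not data from any real experiment:

```python
# A toy simulation: a century of simulated growth computed in a fraction of
# a second. Both the starting population and the growth rate are invented.

population = 1000.0
growth_rate = 0.02          # assumed 2% growth per simulated year
for year in range(1, 101):  # 100 simulated years, computed instantly
    population *= 1 + growth_rate
print(f"population after 100 simulated years: {population:.0f}")
```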

During an experiment, scientists typically make measurements and collect results as they work. This information, known as data, can take many forms. Data may be a set of numbers, such as daily measurements of the temperature in a particular location, or a description of side effects in an animal that has been given an experimental drug. Scientists typically use computers to arrange data in ways that make the information easier to understand and analyse. Data may be arranged into a diagram such as a graph that shows how one quantity (body temperature, for instance) varies in relation to another quantity (days since starting a drug treatment). A scientist flying in a helicopter may collect information about the location of a migrating herd of elephants in Africa during different seasons of a year. The data collected may be in the form of geographic coordinates that can be plotted on a map to provide the position of the elephant herd at any given time during a year.

Scientists use mathematics to analyse the data and help them interpret their results. The types of mathematics used include statistics, which is the analysis of numerical data, and probability, which calculates the likelihood that any particular event will occur.
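
A minimal sketch of both kinds of mathematics, using Python's standard library; the temperature readings and the dice example are invented for illustration:

```python
# Statistics: summarizing a hypothetical set of daily temperature readings.
import statistics

readings = [21.3, 22.1, 20.8, 23.0, 22.5, 21.9]  # invented sample data, deg C
print("mean:", round(statistics.mean(readings), 2))
print("standard deviation:", round(statistics.stdev(readings), 2))

# Probability: the likelihood of at least one six in four rolls of a fair die.
p_no_six = (5 / 6) ** 4
print("P(at least one six in 4 rolls):", round(1 - p_no_six, 3))
```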

Once an experiment has been carried out, data collected and analysed, scientists look for whatever pattern their results produce and try to formulate a hypothesis that explains all the facts observed in an experiment. In developing a hypothesis, scientists employ methods of induction to generalize from the experiment’s results to predict future outcomes, and deduction to infer new facts from experimental results.

Formulating a hypothesis may be difficult for scientists because there may not be enough information provided by a single experiment, or the experiment's conclusion may not fit old theories. Sometimes scientists do not have any prior idea of a hypothesis before they start their investigations, but often scientists start out with a working hypothesis that will be proved or disproved by the results of the experiment. Scientific hypotheses can be useful, just as hunches and intuition can be useful in everyday life. However, they can also be problematic because they tempt scientists, either deliberately or unconsciously, to favour data that support their ideas. Scientists generally take great care to avoid bias, but it remains an ever-present threat. Throughout the history of science, numerous researchers have fallen into this trap, either because they saw a promise of self-advancement or because they firmly believed their ideas to be true.

If a hypothesis is borne out by repeated experiments, it becomes a theory - an explanation that seems to fit with the facts consistently. The ability to predict new facts or events is a key test of a scientific theory. In the 17th century German astronomer Johannes Kepler proposed three theories concerning the motions of planets. Kepler’s theories of planetary orbits were confirmed when they were used to predict the future paths of the planets. On the other hand, when theories fail to provide suitable predictions, these failures may suggest new experiments and new explanations that may lead to new discoveries. For instance, in 1928 British microbiologist Frederick Griffith discovered that the genes of dead virulent bacteria could transform harmless bacteria into virulent ones. The prevailing theory at the time was that genes were made of proteins. Nevertheless, studies performed by Canadian-born American bacteriologist Oswald Avery and colleagues in the 1930's repeatedly showed that the transforming gene was active even in bacteria from which protein was removed. The failure to prove that genes were composed of proteins spurred Avery to construct different experiments and by 1944 Avery and his colleagues had found that genes were composed of deoxyribonucleic acid (DNA), not proteins.

If other scientists do not have access to scientific results, the research may as well not have been performed at all. Scientists need to share the results and conclusions of their work so that other scientists can debate the implications of the work and use it to spur new research. Scientists communicate their results with other scientists by publishing them in science journals and by networking with other scientists to discuss findings and debate issues.

In science, publication follows a formal procedure that has set rules of its own. Scientists describe research in a scientific paper, which explains the methods used, the data collected, and the conclusions that can be drawn. In theory, the paper should be detailed enough to enable any other scientist to repeat the research so that the findings can be independently checked.

Scientific papers usually begin with a brief summary, or abstract, that describes the findings that follow. Abstracts enable scientists to consult papers quickly, without having to read them in full. At the end of most papers is a list of citations - bibliographic references that acknowledge earlier work that has been drawn on in the course of the research. Citations enable readers to work backwards through a chain of research advancements to verify that each step is soundly based.

Scientists typically submit their papers to the editorial board of a journal specializing in a particular field of research. Before the paper is accepted for publication, the editorial board sends it out for peer review. During this procedure a panel of experts, or referees, assesses the paper, judging whether or not the research has been carried out in a fully scientific manner. If the referees are satisfied, publication goes ahead. If they have reservations, some of the research may have to be repeated, but if they identify serious flaws, the entire paper may be rejected from publication.

The peer-review process plays a critical role because it ensures high standards of scientific method. However, it can be a contentious area, as it allows subjective views to become involved. Because scientists are human, they cannot avoid developing personal opinions about the value of each other’s work. Furthermore, because referees tend to be senior figures, they may be less than welcoming to new or unorthodox ideas.

Once a paper has been accepted and published, it becomes part of the vast and ever-expanding body of scientific knowledge. In the early days of science, new research was always published in printed form, but today scientific information spreads by many different means. Most major journals are now available via the Internet (a network of linked computers), which makes them quickly accessible to scientists all over the world.

When new research is published, it often acts as a springboard for further work. Its impact can then be gauged by seeing how often the published research appears as a cited work. Major scientific breakthroughs are cited thousands of times a year, but at the other extreme, obscure pieces of research may be cited rarely or not at all. However, citation is not always a reliable guide to the value of scientific work. Sometimes a piece of research will go largely unnoticed, only to be rediscovered in subsequent years. Such was the case for the work on genes done by American geneticist Barbara McClintock during the 1940s. McClintock discovered a new phenomenon in corn cells known as ‘transposable genes’, sometimes referred to as jumping genes. McClintock observed that a gene could move from one chromosome to another, where it would break the second chromosome at a particular site, insert itself there, and influence the function of an adjacent gene. Her work was largely ignored until the 1960s when scientists found that transposable genes were a primary means for transferring genetic material in bacteria and more complex organisms. McClintock was awarded the 1983 Nobel Prize in physiology or medicine for her work in transposable genes, more than 35 years after doing the research.

In addition to publications, scientists form associations with other scientists from particular fields. Many scientific organizations arrange conferences that bring together scientists to share new ideas. At these conferences, scientists present research papers and discuss their implications. In addition, science organizations promote the work of their members by publishing newsletters and Web sites; networking with journalists at newspapers, magazines, and television stations to help them understand new findings; and lobbying lawmakers to promote government funding for research.

The oldest surviving science organization is the Accademia dei Lincei, in Italy, which was established in 1603. The same century also saw the inauguration of the Royal Society of London, founded in 1662, and the Académie des Sciences de Paris, founded in 1666. American scientific societies date back to the 18th century, when American scientist and diplomat Benjamin Franklin founded a philosophical club in 1727. In 1743 this organization became the American Philosophical Society, which still exists today.

In the United States, the American Association for the Advancement of Science (AAAS) plays a key role in fostering the public understanding of science and in promoting scientific research. Founded in 1848, it has nearly 300 affiliated organizations, many of which originally developed from AAAS special-interest groups.

Since the late 19th century, communication among scientists has also been improved by international organizations, such as the International Bureau of Weights and Measures, founded in 1875, the International Research Council, founded in 1919, and the World Health Organization, founded in 1948. Other organizations act as international forums for research in particular fields. For example, the Intergovernmental Panel on Climate Change (IPCC), established in 1988, assesses research on how climate change occurs and what effects such change is likely to have on humans and their environment.

Classifying sciences involves arbitrary decisions because the universe is not easily split into separate compartments. This article divides science into five major branches: mathematics, physical sciences, earth sciences, life sciences, and social sciences. A sixth branch, technology, draws on discoveries from all areas of science and puts them to practical use. Each of these branches itself consists of numerous subdivisions. Many of these subdivisions, such as astrophysics or biotechnology, combine overlapping disciplines, creating yet more areas of research.

In the 20th century, mathematics made rapid advances on all fronts. The foundations of mathematics became more solidly grounded in logic, while at the same time mathematics advanced the development of symbolic logic. Philosophy was not the only field to progress with the help of mathematics. Physics, too, benefited from the contributions of mathematicians to relativity theory and quantum theory. Indeed, mathematics achieved broader applications than ever before, as new fields developed within mathematics (computational mathematics, game theory, and chaos theory) and other branches of knowledge, including economics and physics, achieved firmer grounding through the application of mathematics. Even the most abstract mathematics seemed to find application, and the boundaries between pure mathematics and applied mathematics grew ever fuzzier.

Mathematicians searched for unifying principles and general statements that applied to large categories of numbers and objects. In algebra, the study of structure continued with a focus on structural units called rings, fields, and groups, and at mid-century it extended to the relationships between these categories. Algebra became an important part of other areas of mathematics, including analysis, number theory, and topology, as the search for unifying theories moved ahead. Topology - the study of the properties of objects that remain constant during transformation, or stretching - became a fertile research field, bringing together geometry, algebra, and analysis. Because of the abstract and complex nature of most 20th-century mathematics, most of the remaining sections of this article will discuss practical developments in mathematics with applications in more familiar fields.

Until the 20th century the centres of mathematics research in the West were all located in Europe. Although the University of Göttingen in Germany, the University of Cambridge in England, the French Academy of Sciences and the University of Paris, and the University of Moscow in Russia retained their importance, the United States rose in prominence and reputation for mathematical research, especially the departments of mathematics at Princeton University and the University of Chicago.

At the Second International Congress of Mathematicians held in Paris in 1900, German mathematician David Hilbert spoke to the assembly. Hilbert was a professor at the University of Göttingen, the former academic home of Gauss and Riemann. Hilbert’s speech at Paris was a survey of 23 mathematical problems that he felt would guide the work being done in mathematics during the coming century. These problems stimulated a great deal of the mathematical research of the 20th century, and many of the problems were solved. When news breaks that another “Hilbert problem” has been solved, mathematicians worldwide impatiently await further details.

Hilbert contributed to most areas of mathematics, starting with his classic Grundlagen der Geometrie (Foundations of Geometry), published in 1899. Hilbert's work created the field of functional analysis (the analysis of functions as a group), a field that occupied many mathematicians during the 20th century. He also contributed to mathematical physics. From 1915 on he fought to have Emmy Noether, a noted German mathematician, hired at Göttingen. When the university refused to hire her because of objections to the presence of a woman in the faculty senate, Hilbert countered that the senate was not the changing room for a swimming pool. Noether later made major contributions to ring theory in algebra and wrote a standard text on abstract algebra.

In some ways pure mathematics became more abstract in the 20th century, as it joined forces with the field of symbolic logic in philosophy. The scholars who bridged the fields of mathematics and philosophy early in the century were Alfred North Whitehead and Bertrand Russell, who worked together at Cambridge University. They published their major work, Principia Mathematica (Principles of Mathematics), in three volumes from 1910 to 1913. In it they demonstrated the principles of mathematical logic and attempted to show that all of mathematics could be deduced from a few premises and definitions by the rules of formal logic. In the late 19th century, German mathematician Gottlob Frege had provided the system of notation for mathematical logic and paved the way for the work of Russell and Whitehead. Mathematical logic influenced the direction of 20th-century mathematics, including the work of Hilbert.

Hilbert proposed that the underlying consistency of all mathematics could be demonstrated within mathematics. Nevertheless, logician Kurt Gödel in Austria proved that the goal of establishing the completeness and consistency of every mathematical theory is impossible. Despite its negative conclusion Gödel’s Theorem, published in 1931, opened up new areas in mathematical logic. One area, known as recursion theory, played a major role in the development of computers.

Several revolutionary theories, including relativity and quantum theory, challenged existing assumptions in physics in the early 20th century. The work of a number of mathematicians contributed to these theories. Among them was Noether, whose gender had denied her a paid position at the University of Göttingen. Noether's mathematical formulations on invariants (quantities that remain unchanged as other quantities change) contributed to Einstein's theory of relativity. The Russian-born German mathematician Hermann Minkowski contributed to relativity the notion of the space-time continuum, with time as a fourth dimension. Hermann Weyl, a student of Hilbert's, investigated the geometry of relativity and applied group theory to quantum mechanics. Weyl's investigations helped advance the field of topology. Early in the century Hilbert quipped, “Physics is getting too difficult for physicists.”

Hungarian-born American mathematician John von Neumann built a solid mathematical basis for quantum theory with his text Mathematische Grundlagen der Quantenmechanik (1932, Mathematical Foundations of Quantum Mechanics). This investigation led him to explore algebraic operators and groups associated with them, opening a new area now known as von Neumann algebras. Von Neumann, however, is probably best known for his work in game theory and computers.

During World War II (1939-1945) mathematicians and physicists worked together on developing radar, the atomic bomb, and other technology that helped defeat the Axis powers. Polish-born mathematician Stanislaw Ulam solved the problem of how to initiate fusion in the hydrogen bomb. Von Neumann participated in numerous U.S. defence projects during the war.

Mathematics plays an important role today in cosmology and astrophysics, especially in research into big bang theory and the properties of black holes, antimatter, elementary particles, and other unobservable objects and events. Stephen Hawking, among the best-known cosmologists of the 20th century, was appointed Lucasian Professor of Mathematics at the University of Cambridge in 1979, a position once held by Newton.

Mathematics formed an alliance with economics in the 20th century as the tools of mathematical analysis, algebra, probability, and statistics illuminated economic theories. A specialty called econometrics links enormous numbers of equations to form mathematical models for use as forecasting tools.

Game theory began in mathematics but had immediate applications in economics and military strategy. This branch of mathematics deals with situations in which some sort of decision must be made to maximize a profit—that is, to win. Its theoretical foundations were supplied by von Neumann in a series of papers written during the 1930s and 1940s. Von Neumann and economist Oskar Morgenstern published results of their investigations in The Theory of Games and Economic Behaviour (1944). John Nash, the Princeton mathematician profiled in the motion picture A Beautiful Mind, shared the 1994 Nobel Prize in economics for his work in game theory.

Mathematicians, physicists, and engineers contributed to the development of computers and computer science. Nevertheless, the early, theoretical work came from mathematicians. English mathematician Alan Turing, working at Cambridge University, introduced the idea of a machine that could perform mathematical operations and solve equations. The Turing machine, as it became known, was a precursor of the modern computer. Through his work Turing brought together the elements that form the basis of computer science: symbolic logic, numerical analysis, electrical engineering, and a mechanical vision of human thought processes.
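
The sketch below is a toy illustration of the idea, not Turing's own construction: a tape, a read/write head, and a small transition table that together increment a binary number. The states and the encoding are our own assumptions:

```python
# A minimal sketch of a Turing-style machine: a tape of symbols, a head
# position, and a state table. This toy machine adds 1 to a binary number.

def run_turing_machine(tape):
    tape = list(tape)
    head = len(tape) - 1          # start at the rightmost symbol
    state = "carry"
    while state != "halt":
        symbol = tape[head] if 0 <= head < len(tape) else "_"
        if state == "carry":
            if symbol == "1":
                tape[head] = "0"  # 1 + 1 = 10: write 0, carry to the left
                head -= 1
            elif symbol == "0":
                tape[head] = "1"  # absorb the carry and stop
                state = "halt"
            else:                 # ran off the left edge: prepend the carry
                tape.insert(0, "1")
                state = "halt"
    return "".join(tape)

print(run_turing_machine("1011"))  # binary 11 + 1 = 12 -> "1100"
```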

Computer theory is the third area with which von Neumann is associated, in addition to mathematical physics and game theory. He established the basic principles on which computers operate. Turing and von Neumann both recognized the usefulness of the binary arithmetic system for storing computer programs.

The first large-scale digital computers were pioneered in the 1940s. In 1945 von Neumann, then at the Institute for Advanced Study in Princeton, set out the stored-program design adopted for EDVAC (Electronic Discrete Variable Automatic Computer). Engineers John Eckert and John Mauchly built ENIAC (Electronic Numerical Integrator and Calculator), which began operation at the University of Pennsylvania in 1946. As increasingly complex computers are built, the field of artificial intelligence has drawn attention. Researchers in this field attempt to develop computer systems that can mimic human thought processes.

Mathematician Norbert Wiener, working at the Massachusetts Institute of Technology (MIT), also became interested in automatic computing and developed the field known as cybernetics. Cybernetics grew out of Wiener’s work on increasing the accuracy of bombsights during World War II. From this came a broader investigation of how information can be translated into improved performance. Cybernetics is now applied to communication and control systems in living organisms.

Computers have exercised an enormous influence on mathematics and its applications. As ever more complex computers are developed, their applications proliferate. Computers have given great impetus to areas of mathematics such as numerical analysis and finite mathematics. Computer science has suggested new areas for mathematical investigation, such as the study of algorithms. Computers also have become powerful tools in areas as diverse as number theory, differential equations, and abstract algebra. In addition, the computer has made possible the solution of several long-standing problems in mathematics, such as the four-colour theorem, first proposed in the mid-19th century.

The four-colour theorem states that four colours are sufficient to colour any map, given that any two countries with a contiguous boundary require different colours. Mathematicians at the University of Illinois finally confirmed the theorem in 1976 by means of a large-scale computer analysis that reduced the number of possible maps to fewer than 2,000 cases. The program they wrote ran thousands of lines in length and took more than 1,200 hours to run. Many mathematicians, however, do not accept the result as a proof because it has not been checked by hand; verification by hand would require far too many human hours. Some mathematicians also object to the solution's lack of elegance. This complaint has been paraphrased: "A good mathematical proof is like a poem - this is a telephone directory."
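
For illustration only, the sketch below greedily colours an invented four-country "map" represented as a border graph; greedy colouring demonstrates what map colouring means but proves nothing about the theorem itself:

```python
# A toy map-colouring sketch. The countries and borders are invented.
# Each country takes the smallest colour not used by an already-coloured
# neighbour; bordering countries therefore always differ in colour.

borders = {
    "A": ["B", "C", "D"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["A", "C"],
}

colours = {}
for country in borders:
    used = {colours[n] for n in borders[country] if n in colours}
    colours[country] = next(c for c in range(4) if c not in used)

print(colours)  # no two bordering countries share a colour
```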

Hilbert inaugurated the 20th century by proposing 23 problems that he expected to occupy mathematicians for the next 100 years. A number of these problems, such as the Riemann hypothesis about prime numbers, remain unsolved in the early 21st century. Hilbert claimed, “If I were to awaken after having slept for a thousand years, my first question would be: Has the Riemann hypothesis been proven?”

The existence of old problems, along with new problems that continually arise, ensures that mathematics research will remain challenging and vital through the 21st century. Influenced by Hilbert, the Clay Mathematics Institute in Cambridge, Massachusetts, announced the Millennium Prize in 2000 for solutions to mathematics problems that have long resisted solution. Among the seven problems is the Riemann hypothesis. An award of $1 million awaits the mathematician who solves any of these problems.

Minkowski, Hermann (1864-1909), German mathematician, born in Russia, who developed the concept of the space-time continuum. He attended and then taught at German universities. To the three dimensions of space, Minkowski added the concept of a fourth dimension, time. This concept developed from Albert Einstein's 1905 relativity theory, and became, in turn, the framework for Einstein's 1916 general theory of relativity.

Gravitation is one of the four fundamental forces of nature, along with electromagnetism and the weak and strong nuclear forces, which hold together the particles that make up atoms. Gravitation is by far the weakest of these forces and, as a result, is not important in the interactions of atoms and nuclear particles or even of moderate-sized objects, such as people or cars. Gravitation is important only when very large objects, such as planets, are involved. This is true for several reasons. First, the force of gravitation reaches great distances, while nuclear forces operate only over extremely short distances and decrease in strength very rapidly as distance increases. Second, gravitation is always attractive. In contrast, electromagnetic forces between particles can be repulsive or attractive depending on whether the particles both have a positive or negative electrical charge, or they have opposite electrical charges (see Electricity). These attractive and repulsive forces tend to cancel each other out, leaving only a weak net force. Gravitation has no repulsive force and, therefore, no such cancellation or weakening.

After presenting his general theory of relativity in 1915, German-born American physicist Albert Einstein tried in vain to unify his theory of gravitation with one that would include all the fundamental forces in nature. Einstein discussed his special and general theories of relativity and his work toward a unified field theory in a 1950 Scientific American article. At the time, he was not convinced that he had discovered a valid solution capable of extending his general theory of relativity to other forces. He died in 1955, leaving this problem unsolved.

Gravitation plays a crucial role in most processes on the earth. The ocean tides are caused by the gravitational attraction of the moon and the sun on the earth and its oceans. Gravitation drives weather patterns by making cold air sink and displace less dense warm air, forcing the warm air to rise. The gravitational pull of the earth on all objects holds the objects to the surface of the earth. Without it, the spin of the earth would send them floating off into space.

The gravitational attraction of every bit of matter in the earth for every other bit of matter amounts to an inward pull that holds the earth together against the pressure forces tending to push it outward. Similarly, the inward pull of gravitation holds stars together. When a star's fuel nears depletion, the processes producing the outward pressure weaken and the inward pull of gravitation eventually compresses the star to a very compact size (see Star, Black Hole).

Freefall: Falling objects accelerate in response to the force exerted on them by Earth's gravity. Different objects accelerate at the same rate, regardless of their mass. The accompanying illustration showed the speed at which a ball and a cat would be moving, and the distance each would have fallen, at intervals of a tenth of a second during a short fall.

If an object held near the surface of the earth is released, it will fall and accelerate, or pick up speed, as it descends. This acceleration is caused by gravity, the force of attraction between the object and the earth. The force of gravity on an object is also called the object's weight. This force depends on the object's mass, or the amount of matter in the object. The weight of an object is equal to the mass of the object multiplied by the acceleration due to gravity.

A bowling ball that weighs 16 lb. is being pulled toward the earth with a force of 16 lb. In the metric system, the bowling ball is pulled toward the earth with a force of 71 newtons (a metric unit of force abbreviated N). The bowling ball also pulls on the earth with a force of 16 lb. (71 N), but the earth is so massive that it does not move appreciably. In order to hold the bowling ball up and keep it from falling, a person must exert an upward force of 16 lb. (71 N) on the ball. This upward force acts to oppose the 16 lb. (71 N) downward weight force, leaving a total force of zero. The total force on an object determines the object's acceleration.

If the pull of gravity is the only force acting on an object, then all objects, regardless of their weight, size, or shape, will accelerate in the same manner. At the same place on the earth, the 16 lb. (71 N) bowling ball and a 500 lb. (2200 N) boulder will fall with the same rate of acceleration. As each second passes, each object will increase its downward speed by about 9.8 m/sec (32 ft/sec), resulting in an acceleration of 9.8 m/sec/sec (32 ft/sec/sec). In principle, a rock and a feather both would fall with this acceleration if there were no other forces acting. In practice, however, air friction exerts a greater upward force on the falling feather than on the rock and makes the feather fall more slowly than the rock.
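
A quick sketch of these figures, assuming gravity is the only force acting: speed grows as v = gt and the distance fallen as d = gt²/2, whatever the object's mass:

```python
# Free fall with gravity as the only force: the same numbers apply to the
# ball and the cat of the earlier illustration, since mass cancels out.

g = 9.8  # acceleration due to gravity, m/sec/sec
for tenth in range(1, 6):
    t = tenth / 10
    speed = g * t                  # v = g * t
    distance = 0.5 * g * t ** 2    # d = g * t**2 / 2
    print(f"t = {t:.1f} s: speed = {speed:.2f} m/sec, fallen = {distance:.3f} m")
```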

The mass of an object does not change as it is moved from place to place, but the acceleration due to gravity, and therefore the object's weight, will change because the strength of the earth's gravitational pull is not the same everywhere. The earth's pull and the acceleration due to gravity decrease as an object moves farther away from the centre of the earth. At an altitude of 4000 miles (6400 km) above the earth's surface, for instance, the bowling ball that weighed 16 lb (71 N) at the surface would weigh only about 4 lb (18 N). Because of the reduced weight force, the rate of acceleration of the bowling ball at that altitude would be only one quarter of the acceleration rate at the surface of the earth. The pull of gravity on an object also changes slightly with latitude. Because the earth is not perfectly spherical, but bulges at the equator, the pull of gravity is about 0.5 percent stronger at the earth's poles than at the equator.
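
The quarter-weight figure follows from the inverse-square law, since weight scales as (R/(R+h))², where R is the earth's radius; here is a minimal check, using the approximate radius implied by the text's example:

```python
# Inverse-square check of the altitude example: at one earth radius up,
# weight falls to (R / (R + h))**2 = 1/4 of its surface value.

R = 6400.0   # approximate radius of the earth, km (assumed)
h = 6400.0   # altitude quoted in the text, km
surface_weight_lb = 16.0

factor = (R / (R + h)) ** 2
print(f"weight at {h:.0f} km altitude: {surface_weight_lb * factor:.0f} lb")  # about 4 lb
```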

The ancient Greek philosophers developed several theories about the force that caused objects to fall toward the earth. In the 4th century BC, the Greek philosopher Aristotle proposed that all things were made from some combination of the four elements: earth, air, fire, and water. Objects that were similar in nature attracted one another, and as a result, objects with more earth in them were attracted to the earth. Fire, by contrast, was dissimilar and therefore tended to rise from the earth. Aristotle also developed a cosmology, that is, a theory describing the universe, that was geocentric, or earth-centred, with the moon, sun, planets, and stars moving around the earth on spheres. The Greek philosophers, however, did not propose a connection between the force behind planetary motion and the force that made objects fall toward the earth.

At the beginning of the 17th century, the Italian physicist and astronomer Galileo discovered that all objects fall toward the earth with the same acceleration, regardless of their weight, size, or shape, when gravity is the only force acting on them. Galileo also had a theory about the universe, which he based on the ideas of the Polish astronomer Nicolaus Copernicus. In the mid-16th century, Copernicus had proposed a heliocentric, or sun-centred system, in which the planets moved in circles around the sun, and Galileo agreed with this cosmology. However, Galileo believed that the planets moved in circles because this motion was the natural path of a body with no forces acting on it. Like the Greek philosophers, he saw no connection between the force behind planetary motion and gravitation on earth.

In the late 16th and early 17th centuries the heliocentric model of the universe gained support from observations by the Danish astronomer Tycho Brahe, and his student, the German astronomer Johannes Kepler. These observations, made without telescopes, were accurate enough to determine that the planets did not move in circles, as Copernicus had suggested. Kepler calculated that the orbits had to be ellipses (slightly elongated circles). The invention of the telescope made even more precise observations possible, and Galileo was one of the first to use a telescope to study astronomy. In 1609 Galileo observed that moons orbited the planet Jupiter, a fact that could not reasonably fit into an earth-centred model of the heavens.

The new heliocentric theory changed scientists' views about the earth's place in the universe and opened the way for new ideas about the forces behind planetary motion. However, it was not until the late 17th century that Isaac Newton developed a theory of gravitation that encompassed both the attraction of objects on the earth and planetary motion.

Gravitational forces: Because the Moon has significantly less mass than Earth, the weight of an object on the Moon's surface is only one-sixth the object's weight on Earth's surface. The accompanying graph showed how much an object that weighs w on Earth would weigh at different points between the Earth and Moon. Since the Earth and Moon pull in opposite directions, there is a point, about 346,000 km (215,000 mi) from Earth, where the opposite gravitational forces would cancel, and the object's weight would be zero.
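
The cancellation point can be recovered by setting the two pulls equal, M_earth/x² = M_moon/(d - x)², which gives x = d/(1 + √(M_moon/M_earth)); the sketch below uses standard approximate masses and distance, which are assumptions rather than figures from the text:

```python
# Locating the point between Earth and Moon where the two pulls cancel.
# Masses and mean distance are standard textbook approximations.
import math

M_earth = 5.97e24   # kg (assumed)
M_moon = 7.35e22    # kg (assumed)
d = 384_400.0       # mean Earth-Moon distance, km (assumed)

x = d / (1 + math.sqrt(M_moon / M_earth))
print(f"gravitational pulls cancel about {x:,.0f} km from Earth")  # ~346,000 km
```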

To develop his theory of gravitation, Newton first had to develop the science of forces and motion called mechanics. Newton proposed that the natural motion of an object is motion at a constant speed in a straight line, and that it takes a force to slow, speed up, or change the path of an object. Newton also invented calculus, a new branch of mathematics that became an important tool in the calculations of his theory of gravitation.

Newton proposed his law of gravitation in 1687 and stated that every particle in the universe attracts every other particle in the universe with a force that depends on the product of the two particles' masses divided by the square of the distance between them. The gravitational force between two objects can be expressed by the following equation: F = GMm/d², where F is the gravitational force, G is a constant known as the universal constant of gravitation, M and m are the masses of each object, and d is the distance between them. Newton considered a particle to be an object with a mass that was concentrated in a small point. If the mass of one or both particles increases, then the attraction between the two particles increases. For instance, if the mass of one particle is doubled, the force of attraction between the two particles is doubled. If the distance between the particles increases, then the attraction decreases as the square of the distance between them. Doubling the distance between two particles, for instance, will make the force of attraction one quarter as great as it was.
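
The law translates directly into a short calculation; the masses and distance below are arbitrary illustrative values:

```python
# Newton's law of gravitation, F = G * M * m / d**2, with a quick check of
# the doubling behaviour described above.

G = 6.674e-11  # universal constant of gravitation, N*m**2/kg**2

def gravitational_force(M, m, d):
    """Attractive force in newtons between masses M and m (kg) at distance d (m)."""
    return G * M * m / d ** 2

f = gravitational_force(10.0, 5.0, 2.0)
print(gravitational_force(20.0, 5.0, 2.0) / f)  # doubling one mass doubles F -> 2.0
print(gravitational_force(10.0, 5.0, 4.0) / f)  # doubling the distance -> 0.25
```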

According to Newton, the force acts along a line between the two particles. In the case of two spheres, it acts along the line between their centres. The attraction between objects with irregular shapes is more complicated. Every bit of matter in the irregular object attracts every bit of matter in the other object. A simpler description is possible near the surface of the earth where the pull of gravity is approximately uniform in strength and direction. In this case there is a point in an object (even an irregular object) called the centre of gravity, at which all the force of gravity can be considered to be acting.

Newton's law applies to all objects in the universe, from raindrops in the sky to the planets in the solar system. It is therefore known as the universal law of gravitation. In order to know the strength of gravitational forces in general, however, it became necessary to find the value of G, the universal constant of gravitation. Scientists needed to perform an experiment, but gravitational forces are very weak between objects found in a common laboratory and thus hard to observe. In 1798 the English chemist and physicist Henry Cavendish finally measured G with a very sensitive experiment in which he nearly eliminated the effects of friction and other forces. The value he found was 6.754 × 10⁻¹¹ N·m²/kg², close to the currently accepted value of 6.670 × 10⁻¹¹ N·m²/kg² (a decimal point followed by 10 zeros and then the number 6670). This value is so small that the force of gravitation between two objects with a mass of 1 metric ton each, 1 metre from each other, is about 67 millionths of a newton, or about 15 millionths of a pound.
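
A one-line check of that figure, using the value of G quoted above:

```python
# Two 1-metric-ton masses one metre apart attract with roughly 67 millionths
# of a newton, confirming the figure in the text.

G = 6.670e-11  # N*m**2/kg**2, the value quoted above
F = G * 1000.0 * 1000.0 / 1.0 ** 2
print(f"{F:.2e} N")  # about 6.67e-05 N, i.e. ~67 millionths of a newton
```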

Gravitation may also be described in a completely different way. A massive object, such as the earth, may be thought of as producing a condition in space around it called a gravitational field. This field causes objects in space to experience a force. The gravitational field around the earth, for instance, produces a downward force on objects near the earth's surface. The field viewpoint is an alternative to the viewpoint that objects can affect each other across a distance. This way of thinking about interactions has proved to be very important in the development of modern physics.

Newton's law of gravitation was the first theory to describe the motion of objects on the earth accurately as well as the planetary motion that astronomers had long observed. According to Newton's theory, the gravitational attraction between the planets and the sun holds the planets in elliptical orbits around the sun. The earth's moon and moons of other planets are held in orbit by the attraction between the moons and the planets. Newton's law led to many new discoveries, the most important of which was the discovery of the planet Neptune. Scientists had noted unexplainable variations in the motion of the planet Uranus for many years. Using Newton's law of gravitation, the French astronomer Urbain Leverrier and the British astronomer John Couch Adams each independently predicted the existence of a more distant planet that was perturbing the orbit of Uranus. Neptune was discovered in 1846, in an orbit close to its predicted position.

Frames of Reference: A situation can appear different when viewed from different frames of reference. Try to imagine how an observer's perceptions could change from frame to frame in the accompanying illustration.

Scientists used Newton's theory of gravitation successfully for many years. Several problems began to arise, however, involving motion that did not follow the law of gravitation or Newtonian mechanics. One problem was the observed but unexplained deviations in the orbit of Mercury (which could not be caused by the gravitational pull of another orbiting body).

Another problem with Newton's theory involved reference frames, that is, the conditions under which an observer measures the motion of an object. According to Newtonian mechanics, two observers making measurements of the speed of an object will measure different speeds if the observers are moving relative to each other. A person on the ground observing a ball that is on a train passing by will measure the speed of the ball as the same as the speed of the train. A person on the train observing the ball, however, will measure the ball's speed as zero. According to the traditional ideas about space and time, then, there could not be a constant, fundamental speed in the physical world because all speed is relative. However, near the end of the 19th century the Scottish physicist James Clerk Maxwell proposed a complete theory of electric and magnetic forces that contained just such a constant, which he called c. This constant speed was 300,000 km/sec (186,000 mi/sec) and was the speed of electromagnetic waves, including light waves. This feature of Maxwell's theory caused a crisis in physics because it indicated that speed was not always relative.

Albert Einstein resolved this crisis in 1905 with his special theory of relativity. An important feature of Einstein's new theory was that no particle, and even no information, could travel faster than the fundamental speed c. In Newton's gravitation theory, however, information about gravitation moved at infinite speed. If a star exploded into two parts, for example, the change in gravitational pull would be felt immediately by a planet in a distant orbit around the exploded star. According to Einstein's theory, such forces were not possible.

Though Newton's theory contained several flaws, it is still very practical for use in everyday life. Even today, it is sufficiently accurate for dealing with earth-based gravitational effects such as in geology (the study of the formation of the earth and the processes acting on it), and for most scientific work in astronomy. Only when examining exotic phenomena such as black holes (points in space with a gravitational force so strong that not even light can escape them) or in explaining the big bang (the origin of the universe) is Newton's theory inaccurate or inapplicable.

The gravitational attraction of objects for one another is the easiest fundamental force to observe and was the first fundamental force to be described with a complete mathematical theory by the English physicist and mathematician Sir Isaac Newton. A more accurate theory called general relativity was formulated early in the 20th century by the German-born American physicist Albert Einstein. Scientists recognize that even this theory is not correct for describing how gravitation works in certain circumstances, and they continue to search for an improved theory.

Gravitation plays a crucial role in most processes on the earth. The ocean tides are caused by the gravitational attraction of the moon and the sun on the earth and its oceans. Gravitation drives weather patterns by making cold air sink and displace less dense warm air, forcing the warm air to rise. The gravitational pull of the earth on all objects holds the objects to the surface of the earth. Without it, the spin of the earth would send them floating off into space.

The gravitational attraction of every bit of matter in the earth for every other bit of matter amounts to an inward pull that holds the earth together against the pressure forces tending to push it outward. Similarly, the inward pull of gravitation holds stars together. When a star's fuel nears depletion, the processes producing the outward pressure weaken and the inward pull of gravitation eventually compresses the star to a very compact size (see Star, Black Hole).

If the pull of gravity is the only force acting on an object, then all objects, regardless of their weight, size, or shape, will accelerate in the same manner. At the same place on the earth, a 16 lb (71 N) bowling ball and a 500 lb (2200 N) boulder will fall with the same rate of acceleration. As each second passes, each object will increase its downward speed by about 9.8 m/sec (32 ft/sec), resulting in an acceleration of 9.8 m/sec/sec (32 ft/sec/sec). In principle, a rock and a feather both would fall with this acceleration if there were no other forces acting. In practice, however, air friction exerts a greater upward force on the falling feather than on the rock and makes the feather fall more slowly than the rock.
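
The arithmetic in this paragraph can be laid out directly; this sketch tabulates the speed and distance of an object falling from rest, ignoring air friction:

```python
# Free fall with no air resistance: every object, whatever its mass,
# gains about 9.8 m/sec of downward speed each second.
g = 9.8  # acceleration due to gravity, m/sec/sec

for t in range(1, 4):               # after 1, 2, and 3 seconds
    speed = g * t                   # v = g*t
    distance = 0.5 * g * t ** 2     # d = (1/2)*g*t^2
    print(f"t = {t} sec: speed = {speed:.1f} m/sec, "
          f"distance fallen = {distance:.1f} m")
```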

The mass of an object does not change as it is moved from place to place, but the acceleration due to gravity, and therefore the object's weight, will change because the strength of the earth's gravitational pull is not the same everywhere. The earth's pull and the acceleration due to gravity decrease as an object moves farther away from the centre of the earth. At an altitude of 4000 miles (6400 km) above the earth's surface, for instance, the bowling ball that weighed 16 lb (71 N) at the surface would weigh only about 4 lb (18 N). Because of the reduced weight force, the rate of acceleration of the bowling ball at that altitude would be only one quarter of the acceleration rate at the surface of the earth. The pull of gravity on an object also changes slightly with latitude. Because the earth is not perfectly spherical, but bulges at the equator, the pull of gravity is about 0.5 percent stronger at the earth's poles than at the equator.
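
Since the pull falls off as the square of the distance from the earth's centre, the bowling-ball figures in the paragraph can be reproduced in a few lines (round values from the text):

```python
# Inverse-square falloff of weight with distance from the earth's centre.
earth_radius_km = 6400.0     # round value used in the text
surface_weight_lb = 16.0     # the bowling ball at the surface

for altitude_km in (0.0, 6400.0):   # surface, and one earth radius up
    d = earth_radius_km + altitude_km
    weight = surface_weight_lb * (earth_radius_km / d) ** 2
    print(f"altitude {altitude_km:6.0f} km: weight {weight:4.1f} lb")

# Doubling the distance from the centre (altitude 6400 km) cuts the
# weight to one quarter: about 4 lb, as stated above.
```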

The ancient Greek philosophers developed several theories about the force that caused objects to fall toward the earth. In the 4th century BC, the Greek philosopher Aristotle proposed that all things were made from some combination of the four elements, earth, air, fire, and water. Objects that were similar in nature attracted one another, and as a result, objects with more earth in them were attracted to the earth. Fire, by contrast, was dissimilar and therefore tended to rise from the earth. Aristotle also developed a cosmology, that is, a theory describing the universe, that was geocentric, or earth-centred, with the moon, sun, planets, and stars moving around the earth on spheres. The Greek philosophers, however, did not propose a connection between the force behind planetary motion and the force that made objects fall toward the earth.

At the beginning of the 17th century, the Italian physicist and astronomer Galileo discovered that all objects fall toward the earth with the same acceleration, regardless of their weight, size, or shape, when gravity is the only force acting on them. Galileo also had a theory about the universe, which he based on the ideas of the Polish astronomer Nicolaus Copernicus. In the mid-16th century, Copernicus had proposed a heliocentric, or sun-centred system, in which the planets moved in circles around the sun, and Galileo agreed with this cosmology. However, Galileo believed that the planets moved in circles because this motion was the natural path of a body with no forces acting on it. Like the Greek philosophers, he saw no connection between the force behind planetary motion and gravitation on earth.

In the late 16th and early 17th centuries the heliocentric model of the universe gained support from observations by the Danish astronomer Tycho Brahe and his student, the German astronomer Johannes Kepler. These observations, made without telescopes, were accurate enough to determine that the planets did not move in circles, as Copernicus had suggested. Kepler calculated that the orbits had to be ellipses (slightly elongated circles). The invention of the telescope made even more precise observations possible, and Galileo was one of the first to use a telescope to study astronomy. In 1609 Galileo observed that moons orbited the planet Jupiter, a fact that could not reasonably fit into an earth-centred model of the heavens.

The new heliocentric theory changed scientists' views about the earth's place in the universe and opened the way for new ideas about the forces behind planetary motion. However, it was not until the late 17th century that Isaac Newton developed a theory of gravitation that encompassed both the attraction of objects on the earth and planetary motion.

Gravitational Forces: Because the Moon has significantly less mass than Earth, the weight of an object on the Moon’s surface is only one-sixth the object’s weight on Earth’s surface. This graph shows how much an object that weighs w on Earth would weigh at different points between Earth and the Moon. Since Earth and the Moon pull in opposite directions, there is a point, about 346,000 km (215,000 mi) from Earth, where the opposing gravitational forces would cancel, and the object's weight would be zero.
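
The crossover point in the caption follows from Newton's law: the two pulls cancel where G·Me/d² equals G·Mm/(D−d)². A short sketch, assuming a mean earth-moon distance of about 384,400 km and an earth-to-moon mass ratio of about 81.3 (standard values, not given in the caption):

```python
import math

# Distance d from Earth at which Earth's pull equals the Moon's:
# G*Me/d^2 = G*Mm/(D - d)^2  =>  d = D / (1 + sqrt(Mm/Me)).
D = 384_400.0       # km, mean centre-to-centre earth-moon distance
mass_ratio = 81.3   # Me / Mm

d = D / (1 + math.sqrt(1 / mass_ratio))
print(f"{d:,.0f} km from Earth")   # ~346,000 km, matching the caption
```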

To develop his theory of gravitation, Newton first had to develop the science of forces and motion called mechanics. Newton proposed that the natural motion of an object is motion at a constant speed in a straight line, and that it takes a force to slow an object down, speed it up, or change its path. Newton also invented calculus, a new branch of mathematics that became an important tool in the calculations of his theory of gravitation.

Newton proposed his law of gravitation in 1687 and stated that every particle in the universe attracts every other particle in the universe with a force that depends on the product of the two particles' masses divided by the square of the distance between them. The gravitational force between two objects can be expressed by the following equation: F = GMm/d², where F is the gravitational force, G is a constant known as the universal constant of gravitation, M and m are the masses of each object, and d is the distance between them. Newton considered a particle to be an object with a mass that was concentrated in a small point. If the mass of one or both particles increases, then the attraction between the two particles increases. For instance, if the mass of one particle is doubled, the force of attraction between the two particles is doubled. If the distance between the particles increases, then the attraction decreases as the square of the distance between them. Doubling the distance between two particles, for instance, will make the force of attraction one quarter as great as it was.
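
A minimal sketch of the law as a Python function, using the currently accepted value of G quoted later in this article; the trailing checks confirm the doubling rules just described:

```python
G = 6.670e-11  # universal constant of gravitation, N·m²/kg²

def gravitational_force(M, m, d):
    """Newton's law of gravitation: F = G*M*m/d**2 (kg, kg, m -> N)."""
    return G * M * m / d ** 2

# Two 1-metric-ton objects 1 metre apart attract with about
# 67 millionths of a newton (see the Cavendish discussion below).
f = gravitational_force(1000.0, 1000.0, 1.0)
print(f)                                              # ~6.67e-05 N

# Doubling one mass doubles the force...
print(gravitational_force(2000.0, 1000.0, 1.0) / f)   # 2.0
# ...and doubling the distance cuts it to one quarter.
print(gravitational_force(1000.0, 1000.0, 2.0) / f)   # 0.25
```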

According to Newton, the force acts along a line between the two particles. In the case of two spheres, the force acts along the line between their centres. The attraction between objects with irregular shapes is more complicated. Every bit of matter in the irregular object attracts every bit of matter in the other object. A simpler description is possible near the surface of the earth where the pull of gravity is approximately uniform in strength and direction. In this case there is a point in an object (even an irregular object) called the centre of gravity, at which all the force of gravity can be considered to be acting.
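
Where the pull of gravity is uniform, the centre of gravity is simply the mass-weighted average of the positions of all the bits of matter. A one-dimensional sketch with made-up masses:

```python
# Centre of gravity of point masses in uniform gravity: the
# mass-weighted average position. Hypothetical 1-D example.
masses = [2.0, 3.0, 5.0]       # kg
positions = [0.0, 1.0, 4.0]    # m

total_mass = sum(masses)
centre = sum(m * x for m, x in zip(masses, positions)) / total_mass
print(centre)   # 2.3 m - gravity acts as if all 10 kg sat at this point
```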

Newton's law affects all objects in the universe, from raindrops in the sky to the planets in the solar system. It is therefore known as the universal law of gravitation. In order to know the strength of gravitational forces overall, however, it became necessary to find the value of G, the universal constant of gravitation. Scientists needed to perform an experiment, but gravitational forces are very weak between objects found in a common laboratory and thus hard to observe. In 1798 the English chemist and physicist Henry Cavendish finally measured G with a very sensitive experiment in which he nearly eliminated the effects of friction and other forces. The value he found was 6.754 × 10⁻¹¹ N·m²/kg², close to the currently accepted value of 6.670 × 10⁻¹¹ N·m²/kg² (a decimal point followed by 10 zeros and then the number 6670). This value is so small that the force of gravitation between two objects with a mass of 1 metric ton each, 1 metre from each other, is about 67 millionths of a newton, or about 15 millionths of a pound.

Gravitation may also be described in a completely different way. A massive object, such as the earth, may be thought of as producing a condition in space around it called a gravitational field. This field causes objects in space to experience a force. The gravitational field around the earth, for instance, produces a downward force on objects near the earth's surface. The field viewpoint is an alternative to the viewpoint that objects can act on each other directly across a distance. This way of thinking about interactions has proved to be very important in the development of modern physics.
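
In this picture the field strength at the earth's surface is g = GM/r², which reproduces the familiar 9.8 m/sec/sec. A quick check, using standard values for the earth's mass and radius (assumed here, not given in the article):

```python
# Gravitational field strength at the earth's surface: g = G*M/r^2.
G = 6.670e-11       # N·m²/kg²
M_earth = 5.97e24   # kg (standard value, assumed)
r_earth = 6.37e6    # m  (mean radius, assumed)

g = G * M_earth / r_earth ** 2
print(f"{g:.2f} m/sec/sec")   # ~9.8, the acceleration discussed earlier
```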

Einstein's general relativity theory predicts special gravitational conditions. The Big Bang theory, which describes the origin and early expansion of the universe, is one conclusion based on Einstein's theory that has been verified in several independent ways.

Another conclusion suggested by general relativity, as well as other relativistic theories of gravitation, is that gravitational effects move in waves. Astronomers have observed a loss of energy in a pair of neutron stars (stars composed of densely packed neutrons) that are orbiting each other. The astronomers theorize that energy-carrying gravitational waves are radiating from the pair, depleting the stars of their energy. Very violent astrophysical events, such as the explosion of stars or the collision of neutron stars, can produce gravitational waves strong enough that they may eventually be directly detectable with extremely precise instruments. Astrophysicists are designing such instruments with the hope that they will be able to detect gravitational waves by the beginning of the 21st century.

Another gravitational effect predicted by general relativity is the existence of black holes. The idea of a star with a gravitational force so strong that light cannot escape from its surface can be traced to Newtonian theory. Einstein modified this idea in his general theory of relativity. Because light cannot escape from a black hole, any object - a particle, spacecraft, or wave - trying to escape would have to outrun light. But light moves outward at the speed c, and according to relativity, c is the highest attainable speed, so nothing can pass it. The black holes that Einstein envisioned, then, allow no escape whatsoever. An extension of this argument shows that when gravitation is this strong, nothing can even stay in the same place, but must move inward. Even the surface of a star must move inward, and must continue the collapse that created the strong gravitational force. What remains then is not a star, but a region of space from which a tremendous gravitational force emerges.
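
The Newtonian version of the argument can be made quantitative: setting the escape speed √(2GM/r) equal to c and solving for r gives r = 2GM/c², which happens to coincide with the critical radius of general relativity (the Schwarzschild radius). A sketch for a star of one solar mass, using standard values assumed here:

```python
# Radius at which the Newtonian escape speed reaches c:
# sqrt(2*G*M/r) = c  =>  r = 2*G*M/c**2.
G = 6.670e-11     # N·m²/kg²
c = 3.0e8         # m/sec, speed of light
M_sun = 1.99e30   # kg, mass of the Sun (standard value)

r = 2 * G * M_sun / c ** 2
print(f"{r / 1000:.1f} km")   # ~3 km: the Sun compressed to this size
                              # would trap its own light
```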

Einstein's theory of gravitation revolutionized 20th-century physics. Another important revolution that took place was quantum theory. Quantum theory states that physical interactions, or the exchange of energy, cannot be made arbitrarily small. There is a minimal interaction that comes in a packet called the quantum of an interaction. For electromagnetism the quantum is called the photon. Like the other interactions, gravitation also must be quantized. Physicists call a quantum of gravitational energy a graviton. In principle, gravitational waves arriving at the earth would consist of gravitons. In practice, gravitational waves would consist of apparently continuous streams of gravitons, and individual gravitons could not be detected.

Einstein's theory did not include quantum effects. For most of the 20th century, theoretical physicists were unsuccessful in their attempts to formulate a theory that resembles Einstein's theory but also includes gravitons. Despite the lack of a complete quantum theory, it is possible to make some partial predictions about quantized gravitation. In the 1970s, British physicist Stephen Hawking showed that quantum mechanical processes in the strong gravitational pull just outside of black holes would create particles and quanta that move away from the black hole, thereby robbing it of energy.

Astronomy is the study of the universe and the celestial bodies, gas, and dust within it. Astronomy includes observations and theories about the solar system, the stars, the galaxies, and the general structure of space. Astronomy also includes cosmology, the study of the universe and its past and future. People who study astronomy are called astronomers, and they use a wide variety of methods to perform their research. These methods usually involve ideas of physics, so most astronomers are also astrophysicists, and the terms astronomer and astrophysicist are essentially interchangeable. Some areas of astronomy also use techniques of chemistry, geology, and biology.

Astronomy is the oldest science, dating back thousands of years to when primitive people noticed objects in the sky overhead and watched the way the objects moved. In ancient Egypt, the first appearance of certain stars each year marked the onset of the seasonal flood, an important event for agriculture. In 17th-century England, astronomy provided methods of keeping track of time that were especially useful for accurate navigation. Astronomy has a long tradition of practical results, such as our current understanding of the stars, day and night, the seasons, and the phases of the Moon. Much of today's research in astronomy does not address immediate practical problems. Instead, it involves basic research to satisfy our curiosity about the universe and the objects in it. One day such knowledge may be of practical use to humans.

Astronomers use tools such as telescopes, cameras, spectrographs, and computers to analyse the light that astronomical objects emit. Amateur astronomers observe the sky as a hobby, while professional astronomers are paid for their research and usually work for large institutions such as colleges, universities, observatories, and government research institutes. Amateur astronomers make valuable observations, but are often limited by lack of access to the powerful and expensive equipment of professional astronomers.

A wide range of astronomical objects is accessible to amateur astronomers. Many solar system objects—such as planets, moons, and comets—are bright enough to be visible through binoculars and small telescopes. Small telescopes are also sufficient to reveal some of the beautiful detail in nebulas—clouds of gas and dust in our galaxy. Many amateur astronomers observe and photograph these objects. The increasing availability of sophisticated electronic instruments and computers over the past few decades has made powerful equipment more affordable and allowed amateur astronomers to expand their observations to much fainter objects. Amateur astronomers sometimes share their observations by posting their photographs on the World Wide Web, a network of information based on connections between computers.

Amateurs often undertake projects that require numerous observations over days, weeks, months, or even years. By searching the sky over a long period of time, amateur astronomers may observe things in the sky that represent sudden change, such as new comets or novas (stars that brighten suddenly). This type of consistent observation is also useful for studying objects that change slowly over time, such as variable stars and double stars. Amateur astronomers observe meteor showers, sunspots, and groupings of planets and the Moon in the sky. They also participate in expeditions to places in which special astronomical events - such as solar eclipses and meteor showers - are most visible. Several organizations, such as the Astronomical League and the American Association of Variable Star Observers, provide meetings and publications through which amateur astronomers can communicate and share their observations.

Professional astronomers usually have access to powerful telescopes, detectors, and computers. Most work in astronomy includes three parts, or phases. Astronomers first observe astronomical objects by guiding telescopes and instruments to collect the appropriate information. Astronomers then analyse the images and data. After the analysis, they compare their results with existing theories to determine whether their observations match with what theories predict, or whether the theories can be improved. Some astronomers work solely on observation and analysis, and some work solely on developing new theories.

Astronomy is such a broad topic that astronomers specialize in one or more parts of the field. For example, the study of the solar system is a different area of specialization than the study of stars. Astronomers who study our galaxy, the Milky Way, often use techniques different from those used by astronomers who study distant galaxies. Many planetary astronomers, such as scientists who study Mars, may have geology backgrounds and not consider themselves astronomers at all. Solar astronomers use different telescopes than nighttime astronomers use, because the Sun is so bright. Theoretical astronomers may never use telescopes at all. Instead, these astronomers use existing data or sometimes only previous theoretical results to develop and test theories. A growing field of astronomy is computational astronomy, in which astronomers use computers to simulate astronomical events, as the sketch below illustrates. Examples of events for which simulations are useful include the formation of the earliest galaxies of the universe and the explosion of a star to make a supernova.
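
As a taste of what such simulations look like, here is a deliberately tiny sketch (nothing like professional code): a single body circling a star under an inverse-square pull, stepped forward in time with the simple Euler-Cromer method, in units chosen so that GM = 1:

```python
import math

# Minimal orbital simulation: one body around a star, inverse-square
# gravity, Euler-Cromer time stepping, units chosen so G*M = 1.
x, y = 1.0, 0.0      # starting position (orbit radius 1)
vx, vy = 0.0, 1.0    # starting speed for a circular orbit
dt = 0.001           # time step

for _ in range(int(2 * math.pi / dt)):   # roughly one full orbit
    r = math.hypot(x, y)
    ax, ay = -x / r ** 3, -y / r ** 3    # acceleration toward the star
    vx += ax * dt                        # update velocity first...
    vy += ay * dt
    x += vx * dt                         # ...then position (Euler-Cromer)
    y += vy * dt

print(f"after one orbit: x = {x:.3f}, y = {y:.3f}")  # close to (1, 0)
```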

Astronomers learn about astronomical objects by observing the energy they emit. These objects emit energy in the form of electromagnetic radiation. This radiation travels throughout the universe in the form of waves and can range from gamma rays, which have extremely short wavelengths, to visible light, to radio waves, which are very long. The entire range of these different wavelengths makes up the electromagnetic spectrum.

Astronomers gather different wavelengths of electromagnetic radiation depending on the objects that are being studied. The techniques of astronomy are often very different for studying different wavelengths. Conventional telescopes work only for visible light and the parts of the spectrum near visible light, such as the shortest infrared wavelengths and the longest ultraviolet wavelengths. Earth’s atmosphere complicates studies by absorbing many wavelengths of the electromagnetic spectrum. Gamma-ray astronomy, X-ray astronomy, infrared astronomy, ultraviolet astronomy, radio astronomy, visible-light astronomy, cosmic-ray astronomy, gravitational-wave astronomy, and neutrino astronomy all use different instruments and techniques.

Observational astronomers use telescopes or other instruments to observe the heavens. The astronomers who do the most observing, however, probably spend more time using computers than they do using telescopes. A few nights of observing with a telescope often provide enough data to keep astronomers busy for months analysing the data.

Until the 20th century, all observational astronomers studied the visible light that astronomical objects emit. Such astronomers are called optical astronomers, because they observe the same part of the electromagnetic spectrum that the human eye sees. Optical astronomers use telescopes and imaging equipment to study light from objects. Professional astronomers today hardly ever actually look through telescopes. Instead, a telescope sends an object’s light to a photographic plate or to an electronic light-sensitive computer chip called a charge-coupled device, or CCD. CCDs are about 50 times more sensitive than film, so today's astronomers can record in a minute an image that would have taken about an hour to record on film.

Telescopes may use either lenses or mirrors to gather visible light, permitting direct observation or photographic recording of distant objects. Those that use lenses are called refracting telescopes, since they use the property of refraction, or bending, of light (see Optics: Reflection and Refraction). The largest refracting telescope is the 40-in (1-m) telescope at the Yerkes Observatory in Williams Bay, Wisconsin, founded in the late 19th century. Lenses bend different colours of light by different amounts, so different colours focus differently. Images produced by large lenses can be tinged with colour, often limiting the observations to those made through filters. Filters limit the image to one colour of light, so the lens bends all of the light in the image the same amount and makes the image more accurate than an image that includes all colours of light. Also, because light must pass through a lens, a lens can be supported only at its very edges, and large, heavy lenses sag and absorb light because of their thickness. For these reasons, all the large telescopes in current use are made with other techniques.

Reflecting telescopes, which use mirrors, are easier to make than refracting telescopes and reflect all colours of light equally. All the largest telescopes today are reflecting telescopes. The largest single telescopes are the Keck telescopes at Mauna Kea Observatory in Hawaii. The Keck telescope mirrors are 394 in (10.0 m) in diameter. Mauna Kea Observatory, at an altitude of 4,205 m (13,796 ft), is especially high. The air at the observatory is clear, so many major telescope projects are located there.

The Hubble Space Telescope (HST), a reflecting telescope that orbits Earth, has returned the clearest images of any optical telescope. The main mirror of the HST is only 94 in (2.4 m) across, far smaller than that of the largest ground-based reflecting telescopes. Turbulence in the atmosphere prevents ground-based telescopes from observing objects as clearly as the HST can. HST images of visible light are about five times finer than any produced by ground-based telescopes. Giant telescopes on Earth, however, collect much more light than the HST can. Examples of such giant telescopes include the twin 32-ft (10-m) Keck telescopes in Hawaii and the four 26-ft (8-m) telescopes in the Very Large Telescope array in the Atacama Desert in northern Chile (the nearest city is Antofagasta, Chile). Often astronomers use space and ground-based telescopes in conjunction.

Astronomers usually share telescopes. Many institutions with large telescopes accept applications from any astronomer who wishes to use the instruments, though others have limited sets of eligible applicants. The institution then divides the available time among successful applicants and assigns each astronomer an observing period. Astronomers can collect data from telescopes remotely. Data from Earth-based telescopes can be sent electronically over computer networks. Data from space-based telescopes reach Earth through radio waves collected by antennas on the ground.

Gamma rays have the shortest wavelengths. Special telescopes in orbit around Earth, such as the National Aeronautics and Space Administration’s (NASA’s) Compton Gamma-Ray Observatory, gather gamma rays before Earth’s atmosphere absorbs them. X rays, the next shortest wavelengths, also must be observed from space. NASA’s Chandra X-ray Observatory (CXO) is a school-bus-sized spacecraft scheduled to begin studying X rays from orbit in 1999. It is designed to make high-resolution images. See also Gamma-Ray Astronomy; X-Ray Astronomy.

Ultraviolet light has wavelengths longer than X rays, but shorter than visible light. Ultraviolet telescopes are similar to visible-light telescopes in the way they gather light, but the atmosphere blocks most ultraviolet radiation. Most ultraviolet observations, therefore, must also take place in space. Most of the instruments on the Hubble Space Telescope (HST) are sensitive to ultraviolet radiation. Humans cannot see ultraviolet radiation, but astronomers can create visual images from ultraviolet light by assigning particular colours or shades to different intensities of radiation.

Infrared astronomers study parts of the infrared spectrum, which consists of electromagnetic waves with wavelengths ranging from just longer than visible light to 1,000 times longer than visible light. Earth’s atmosphere absorbs infrared radiation, so astronomers must collect infrared radiation from places where the atmosphere is very thin, or from above the atmosphere. Observatories for these wavelengths are located on certain high mountaintops or in space (see Infrared Astronomy). Most infrared wavelengths can be observed only from space. Every warm object emits some infrared radiation. Infrared astronomy is useful because objects that are not hot enough to emit visible or ultraviolet radiation may still emit infrared radiation. Infrared radiation also passes through interstellar and intergalactic gas and dust more easily than radiation with shorter wavelengths. Further, the brightest part of the spectrum from the farthest galaxies in the universe is shifted into the infrared. The Next Generation Space Telescope, which NASA plans to launch in 2006, will operate especially in the infrared.

Radio waves have the longest wavelengths. Radio astronomers use giant dish antennas to collect and focus signals in the radio part of the spectrum (see Radio Astronomy). These celestial radio signals, often from hot bodies in space or from objects with strong magnetic fields, come through Earth's atmosphere to the ground. Radio waves penetrate dust clouds, allowing astronomers to see into the centre of our galaxy and into the cocoons of dust that surround forming stars.

Sometimes astronomers study emissions from space that are not electromagnetic radiation. Some of the particles of interest to astronomers are neutrinos, cosmic rays, and gravitational waves. Neutrinos are tiny particles with no electric charge and very little or no mass. The Sun and supernovas emit neutrinos. Most neutrino telescopes consist of huge underground tanks of liquid. These tanks capture a few of the many neutrinos that strike them, while the vast majority of neutrinos pass right through the tanks.

Cosmic rays are electrically charged particles that come to Earth from outer space at almost the speed of light. They are made up of negatively charged particles called electrons and positively charged nuclei of atoms. Astronomers do not know where most cosmic rays come from, but they use cosmic-ray detectors to study the particles. Cosmic-ray detectors are usually grids of wires that produce an electrical signal when a cosmic ray passes close to them.

Gravitational waves are a predicted consequence of the general theory of relativity developed by German-born American physicist Albert Einstein. Since the 1960s astronomers have been building detectors for gravitational waves. Older gravitational-wave detectors were huge instruments that surrounded a carefully measured and positioned massive object suspended from the top of the instrument. Lasers trained on the object were designed to measure the object’s movement, which theoretically would occur when a gravitational wave hit the object. At the end of the 20th century, these instruments had picked up no gravitational waves. Gravitational waves should be very weak, and the instruments were probably not yet sensitive enough to register them. In the 1970s and 1980s American physicists Joseph Taylor and Russell Hulse observed indirect evidence of gravitational waves by studying systems of double pulsars. A new generation of gravitational-wave detectors, developed in the 1990s, uses interferometers to measure distortions of space that would be caused by passing gravitational waves.

Some objects emit radiation more strongly in one wavelength than in another, but a set of data across the entire spectrum of electromagnetic radiation is much more useful than observations in any one wavelength. For example, the supernova remnant known as the Crab Nebula has been observed in every part of the spectrum, and astronomers have used all the discoveries together to make a complete picture of how the Crab Nebula is evolving.

Whether astronomers take data from a ground-based telescope or have data radioed to them from space, they must then analyze the data. Usually the data are handled with the aid of a computer, which can carry out various manipulations the astronomer requests. For example, some of the individual picture elements, or pixels, of a CCD may be more sensitive than others. Consequently, astronomers sometimes take images of blank sky to measure which pixels appear brighter. They can then take these variations into account when interpreting the actual celestial images. Astronomers may write their own computer programs to analyse data or, as is increasingly the case, use certain standard computer programs developed at national observatories or elsewhere.
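
That pixel-sensitivity correction is, in essence, division by a normalized "flat field". A toy sketch with NumPy and invented numbers:

```python
import numpy as np

# Blank-sky exposure: the light is uniform, so any variation between
# pixels reflects pixel sensitivity. Values are hypothetical.
sky = np.array([[1.0, 1.2],
                [0.8, 1.0]])
flat = sky / sky.mean()        # relative sensitivity of each pixel

raw = np.array([[100.0, 132.0],
                [ 72.0, 110.0]])   # raw image of a celestial object
corrected = raw / flat             # even out the pixel-to-pixel response
print(corrected)
```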

Often an astronomer uses observations to test a specific theory. Sometimes, a new experimental capability allows astronomers to study a new part of the electromagnetic spectrum or to see objects in greater detail or through special filters. If the observations do not verify the predictions of a theory, the theory must be discarded or, if possible, modified.

On a clear night, far from city lights, up to about 3,000 stars are visible at a time with the unaided eye. A view at night may also show several planets and perhaps a comet or a meteor shower. Increasingly, human-made light pollution is making the sky less dark, limiting the number of visible astronomical objects. During the daytime the Sun shines brightly. The Moon and bright planets are sometimes visible early or late in the day but are rarely seen at midday.

Earth moves in two basic ways: It turns in place, and it revolves around the Sun. Earth turns around its axis, an imaginary line that runs down its centre through its North and South poles. The Moon also revolves around Earth. All of these motions produce day and night, the seasons, the phases of the Moon, and solar and lunar eclipses.

Earth is about 12,000 km (about 7,000 mi) in diameter. As it revolves, or moves in a circle, around the Sun, Earth spins on its axis. This spinning movement is called rotation. Earth’s axis is tilted 23.5° with respect to the plane of its orbit. Each time Earth rotates on its axis, it goes through one day, a cycle of light and dark. Humans artificially divide the day into 24 hours and then divide the hours into 60 minutes and the minutes into 60 seconds.

Earth revolves around the Sun once every year, or 365.25 days (most people use a 365-day calendar and take care of the extra 0.25 day by adding a day to the calendar every four years, creating a leap year). The orbit of Earth is almost, but not quite, a circle, so Earth is sometimes a little closer to the Sun than at other times. If Earth were upright as it revolved around the Sun, each point on Earth would have exactly 12 hours of light and 12 hours of dark each day. Because Earth is tilted, however, the northern hemisphere sometimes points toward the Sun and sometimes points away from the Sun. This tilt is responsible for the seasons. When the northern hemisphere points toward the Sun, the northernmost regions of Earth see the Sun 24 hours a day. The whole northern hemisphere gets more sunlight and gets it at a more direct angle than the southern hemisphere does during this period, which lasts for half of the year. The second half of this period, when the northern hemisphere points most directly at the Sun, is the northern hemisphere's summer, which corresponds to winter in the southern hemisphere. During the other half of the year, the southern hemisphere points more directly toward the Sun, so it is spring and summer in the southern hemisphere and fall and winter in the northern hemisphere.

One revolution of the Moon around Earth takes a little more than 27 days 7 hours. The Moon rotates on its axis in this same period of time, so the same face of the Moon is always presented to Earth. Over a period a little longer than 29 days 12 hours, the Moon goes through a series of phases, in which the amount of the lighted half of the Moon we see from Earth changes. These phases are caused by the changing angle of sunlight hitting the Moon. (The period of phases is longer than the period of revolution of the Moon, because the motion of Earth around the Sun changes the angle at which the Sun’s light hits the Moon from night to night.)
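
The relation between the two periods can be written down directly: each day the Moon gains on the Sun's apparent position at a rate of 1/27.32 minus 1/365.25 revolutions, so the cycle of phases takes the reciprocal of that difference. A quick check:

```python
# Synodic month (cycle of phases) from the sidereal month (one
# revolution of the Moon) and the length of Earth's year:
# 1/synodic = 1/sidereal - 1/year.
sidereal_month = 27.32   # days (27 d 7 h, to two decimals)
year = 365.25            # days

synodic_month = 1 / (1 / sidereal_month - 1 / year)
print(f"{synodic_month:.2f} days")   # ~29.53, a little over 29 d 12 h
```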

The Moon’s orbit around Earth is tilted 5° from the plane of Earth’s orbit. Because of this tilt, when the Moon is at the point in its orbit when it is between Earth and the Sun, the Moon is usually a little above or below the Sun. At that time, the Sun lights the side of the Moon facing away from Earth, and the side of the Moon facing toward Earth is dark. This point in the Moon’s orbit corresponds to a phase of the Moon called the new moon. A quarter moon occurs when the Moon is at right angles to the line formed by the Sun and Earth. The Sun lights the side of the Moon closest to it, and half of that side is visible from Earth, forming a bright half-circle. When the Moon is on the opposite side of Earth from the Sun, the face of the Moon visible from Earth is lit, showing the full moon in the sky.

Because of the tilt of the Moon's orbit, the Moon usually passes above or below the Sun at new moon and above or below Earth's shadow at full moon. Sometimes, though, the full moon or new moon crosses the plane of Earth's orbit. By a coincidence of nature, even though the Moon is about 400 times smaller than the Sun, it is also about 400 times closer to Earth than the Sun is, so the Moon and Sun look almost the same size from Earth. If the Moon lines up with the Sun and Earth at new moon (when the Moon is between Earth and the Sun), it blocks the Sun’s light from Earth, creating a solar eclipse. If the Moon lines up with Earth and the Sun at the full moon (when Earth is between the Moon and the Sun), Earth’s shadow covers the Moon, making a lunar eclipse.
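
The 400-to-1 coincidence can be checked by computing the angle each body covers in the sky, using round values for their diameters and distances (figures assumed here, not given in the article):

```python
import math

# Apparent angular size = 2 * atan(diameter / (2 * distance)).
bodies = {
    "Sun":  (1_392_000.0, 149_600_000.0),   # diameter, distance in km
    "Moon": (3_476.0, 384_400.0),
}
for name, (diameter, distance) in bodies.items():
    angle = math.degrees(2 * math.atan(diameter / (2 * distance)))
    print(f"{name}: {angle:.2f} degrees")   # both come out near 0.5°
```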

A total solar eclipse is visible from only a small region of Earth. During a solar eclipse, the complete shadow of the Moon that falls on Earth is only about 160 km (about 100 mi) wide. As Earth, the Sun, and the Moon move, however, the Moon’s shadow sweeps out a path up to 16,000 km (10,000 mi) long. The total eclipse can only be seen from within this path. A total solar eclipse occurs about every 18 months. Off to the sides of the path of a total eclipse, a partial eclipse, in which the Sun is only partly covered, is visible. Partial eclipses are much less dramatic than total eclipses. The Moon’s orbit around Earth is elliptical, or egg-shaped. The distance between Earth and the Moon varies slightly as the Moon orbits Earth. When the Moon is farther from Earth than usual, it appears smaller and may not cover the entire Sun during an eclipse. A ring, or annulus, of sunlight remains visible, making an annular eclipse. An annular solar eclipse also occurs about every 18 months. Additional partial solar eclipses are also visible from Earth in between.

During a lunar eclipse, the Moon passes into Earth's shadow. When the Moon is completely in the shadow, the total lunar eclipse is visible from everywhere on the half of Earth from which the Moon is visible at that time. As a result, more people see total lunar eclipses than see total solar eclipses.

In an open place on a clear dark night, streaks of light may appear in a random part of the sky about once every 10 minutes. These streaks are meteors - bits of rock burning up in Earth's atmosphere. The bits of rock are called meteoroids, and when these bits survive Earth’s atmosphere intact and land on Earth, they are known as meteorites.

Every month or so, Earth passes through the orbit of a comet. Dust from the comet remains in the comet's orbit. When Earth passes through the band of dust, the dust and bits of rock burn up in the atmosphere, creating a meteor shower. Many more meteors are visible during a meteor shower than on an ordinary night. The most observed meteor shower is the Perseid shower, which occurs each year on August 11th or 12th.

Humans have picked out landmarks in the sky and mapped the heavens for thousands of years. Maps of the sky helped people navigate, measure time, and track celestial events. Now astronomers methodically map the sky to produce a universal format for the addresses of stars, galaxies, and other objects of interest.

Some of the stars in the sky are brighter and more noticeable than others are, and some of these bright stars appear to the eye to be grouped together. Ancient civilizations imagined that groups of stars represented figures in the sky. The oldest known representations of these groups of stars, called constellations, are from ancient Sumer (now Iraq) from about 4000 BC. The constellations recorded by ancient Greeks and Chinese resemble the Sumerian constellations. The northern hemisphere constellations that astronomers recognize today are based on the Greek constellations. Explorers and astronomers developed and recorded the official constellations of the southern hemisphere in the 16th and 17th centuries. The International Astronomical Union (IAU) officially recognizes 88 constellations. The IAU defined the boundaries of each constellation, so the 88 constellations divide the sky without overlapping.

A familiar group of stars in the northern hemisphere is called the Big Dipper. The Big Dipper is part of an official constellation - Ursa Major, or the Great Bear. Groups of stars that are not official constellations, such as the Big Dipper, are called asterisms. While the stars in the Big Dipper appear in approximately the same part of the sky, they vary greatly in their distance from Earth. This is true for the stars in all constellations or asterisms—the stars making up the group do not really occur close to each other in space; they merely appear together as seen from Earth. The patterns of the constellations are figments of humans’ imagination, and different artists may connect the stars of a constellation in different ways, even when illustrating the same myth.

Astronomers use coordinate systems to label the positions of objects in the sky, just as geographers use longitude and latitude to label the positions of objects on Earth. Astronomers use several different coordinate systems. The two most widely used are the altazimuth system and the equatorial system. The altazimuth system gives an object’s coordinates with respect to the sky visible above the observer. The equatorial coordinate system designates an object’s location with respect to Earth’s entire night sky, or the celestial sphere.

One of the ways astronomers give the position of a celestial object is by specifying its altitude and its azimuth. This coordinate system is called the altazimuth system. The altitude of an object is equal to its angle, in degrees, above the horizon. An object at the horizon would have an altitude of 0°, and an object directly overhead would have an altitude of 90°. The azimuth of an object is equal to its angle in the horizontal direction, with north at 0°, east at 90°, south at 180°, and west at 270°. For example, if an astronomer were looking for an object at 23° altitude and 87° azimuth, the astronomer would know to look low in the sky and almost directly east.

As Earth rotates, astronomical objects appear to rise and set, so their altitudes and azimuths are constantly changing. An object’s altitude and azimuth also vary according to an observer’s location on Earth. Therefore, astronomers almost never use altazimuth coordinates to record an object’s position. Instead, astronomers with altazimuth telescopes translate coordinates from equatorial coordinates to find an object. Telescopes that use an altazimuth mounting system may be simple to set up, but they require many calculated movements to keep them pointed at an object as it moves across the sky. These telescopes fell out of use with the development of the equatorial coordinate and mounting system in the early 1800s. However, computers have made altazimuth systems popular again. Altazimuth mounting systems are simple and inexpensive, and - with computers to do the required calculations and control the motor that moves the telescope - they are practical.

The equatorial coordinate system is a coordinate system fixed on the sky. In this system, a star keeps the same coordinates no matter what the time is or where the observer is located. The equatorial coordinate system is based on the celestial sphere. The celestial sphere is a giant imaginary globe surrounding Earth. This sphere has north and south celestial poles directly above Earth’s North and South poles. It has a celestial equator, directly above Earth’s equator. Another important part of the celestial sphere is the line that marks the movement of the Sun with respect to the stars throughout the year. This path is called the ecliptic. Because Earth is tilted with respect to its orbit around the Sun, the ecliptic is not the same as the celestial equator. The ecliptic is tilted 23.5° to the celestial equator and crosses the celestial equator at two points on opposite sides of the celestial sphere. The crossing points are called the vernal (or spring) equinox and the autumnal equinox. The vernal equinox and autumnal equinox mark the beginning of spring and fall, respectively. The points at which the ecliptic and celestial equator are farthest apart are called the summer solstice and the winter solstice, which mark the beginning of summer and winter, respectively.

As Earth rotates on its axis each day, the stars and other distant astronomical objects appear to rise in the eastern part of the sky and set in the west. They seem to travel in circles around Earth’s North or South poles. In the equatorial coordinate system, the celestial sphere turns with the stars (but this movement is really caused by the rotation of Earth). The celestial sphere makes one complete rotation every 23 hours 56 minutes, which is four minutes shorter than a day measured by the movement of the Sun. A complete rotation of the celestial sphere is called a sidereal day. Because the sidereal day is shorter than a solar day, the stars that an observer sees from any location on Earth change slightly from night to night. The difference between a sidereal day and a solar day occurs because of Earth’s motion around the Sun.
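
The four-minute difference falls out of simple bookkeeping: over a year Earth turns 366.25 times relative to the stars but only 365.25 times relative to the Sun, since one apparent turn is consumed by the orbit itself. A sketch:

```python
# Length of the sidereal day from the solar day: one extra turn per
# year relative to the stars, so
# sidereal = solar * 365.25 / 366.25.
solar_day_hours = 24.0
sidereal_day_hours = solar_day_hours * 365.25 / 366.25

hours = int(sidereal_day_hours)
minutes = round((sidereal_day_hours - hours) * 60)
print(f"{hours} h {minutes} min")   # 23 h 56 min
```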

The equivalent of longitude on the celestial sphere is called right ascension, and the equivalent of latitude is declination. Specifying the right ascension of a star is equivalent to measuring the east-west distance from a line called the prime meridian that runs through Greenwich, England, for a place on Earth. Right ascension starts at the vernal equinox. Longitude on Earth is given in degrees, but right ascension is given in units of time - hours, minutes, and seconds. This is because the celestial equator is divided into 24 equal parts, each called an hour of right ascension rather than being written as 15°. Each hour is made up of 60 minutes, and each minute of 60 seconds. Measuring right ascension in units of time makes it easier for astronomers to determine the best time for observing an object. A particular line of right ascension will be at its highest point in the sky above a particular place on Earth four minutes earlier each day, so keeping track of the movement of the celestial sphere with an ordinary clock would be complicated. Astronomers have special clocks that keep sidereal time (24 sidereal hours are equal to 23 hours 56 minutes of familiar solar time). Astronomers compare the current sidereal time to the right ascension of the object they wish to view. The object will be highest in the sky when the sidereal time equals the right ascension of the object.
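
The bookkeeping the sidereal clock enables is a one-line subtraction: an object crosses its highest point when the sidereal time equals its right ascension. A sketch using the coordinates of Sirius quoted below and an invented current sidereal time:

```python
# Sidereal hours until an object is highest in the sky: the object
# transits when sidereal time equals its right ascension.
ra_hours = 6 + 45 / 60      # Sirius: right ascension 6 h 45 m
sidereal_now = 22.0         # hypothetical current sidereal time

wait = (ra_hours - sidereal_now) % 24
print(f"{wait:.2f} sidereal hours until transit")   # 8.75
```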

The direction perpendicular to right ascension - and the equivalent to latitude on Earth - is declination. Declination is measured in degrees. These degrees are divided into arcminutes and arcseconds. One arcminute is equal to 1/60 of a degree, and one arcsecond is equal to 1/60 of an arcminute, or 1/3600 of a degree. The celestial equator is at declination 0°, the north celestial pole is at declination 90°, and the south celestial pole has a declination of -90°. Each star has a right ascension and a declination that mark its position in the sky. The brightest star, Sirius, for example, has right ascension 6 hours 45 minutes (abbreviated as 6h 45m) and declination -16 degrees 43 arcminutes.

Stars are so far away from Earth that the main star motion we see results from Earth’s rotation. Stars do move in space, however, and these proper motions slightly change the coordinates of the nearest stars over time. The effects of the Sun and the Moon on Earth also cause slight changes in Earth’s axis of rotation. These changes, called precession, cause a slow drift in right ascension and declination. To account for precession, astronomers redefine the celestial coordinates every 50 years or so.

Solar systems, both our own and those located around other stars, are a major area of research for astronomers. A solar system consists of a central star orbited by planets or smaller rocky bodies. The gravitational force of the star holds the system together. In our solar system, the central star is the Sun. It holds all the planets, including Earth, in their orbits and provides light and energy necessary for life. Our solar system is just one of many. Astronomers are just beginning to be able to study other solar systems.

Our solar system contains the Sun, nine planets (of which Earth is third from the Sun), and the planets’ satellites. It also contains asteroids, comets, and interplanetary dust and gas.

Until the end of the 18th century, humans knew of five planets—Mercury, Venus, Mars, Jupiter, and Saturn—in addition to Earth. When viewed without a telescope, planets appear to be dots of light in the sky. They shine steadily, while stars seem to twinkle. Twinkling results from turbulence in Earth's atmosphere. Stars are so far away that they appear as tiny points of light. A moment of turbulence can change that light for a fraction of a second. Even though they look the same size as stars to unaided human eyes, planets are close enough that they take up more space in the sky than stars do. The disks of planets are big enough to average out variations in light caused by turbulence and therefore do not twinkle.

Between 1781 and 1930, astronomers found three more planets - Uranus, Neptune, and Pluto. This brought the total number of planets in our solar system to nine. In order of increasing distance from the Sun, the planets in our solar system are Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, and Pluto.

Astronomers call the inner planets - Mercury, Venus, Earth, and Mars - the terrestrial planets. Terrestrial (from the Latin word terra, meaning “Earth”) planets are Earthlike in that they have solid, rocky surfaces. The next group of planets - Jupiter, Saturn, Uranus, and Neptune - is called the Jovian planets, or the giant planets. The word Jovian has the same Latin root as the word Jupiter. Astronomers call these planets the Jovian planets because they resemble Jupiter in that they are giant, massive planets made almost entirely of gas. The mass of Jupiter, for example, is 318 times the mass of Earth. The Jovian planets have no solid surfaces, although they probably have rocky cores several times more massive than Earth. Rings of chunks of ice and rock surround each of the Jovian planets. The rings around Saturn are the most familiar.

Pluto, the outermost planet, is tiny, with a mass about one five-hundredth the mass of Earth. Pluto seems out of place, with its tiny, solid body out beyond the giant planets. Many astronomers believe that Pluto is really just the largest, or one of the largest, of a group of icy objects in the outer solar system. These objects orbit in a part of the solar system called the Kuiper Belt. Even if astronomers decide that Pluto belongs to the Kuiper Belt objects, it will probably still be called a planet for historical reasons.

Most of the planets have moons, or satellites. Earth's Moon has a diameter about one-fourth the diameter of Earth. Mars has two tiny chunks of rock, Phobos and Deimos, each only about 10 km (about 6 mi) across. Jupiter has at least 17 satellites. The largest four, known as the Galilean satellites, are Io, Europa, Ganymede, and Callisto. Ganymede is even larger than the planet Mercury. Saturn has at least 18 satellites. Saturn’s largest moon, Titan, is also larger than the planet Mercury and is enshrouded by a thick, opaque, smoggy atmosphere. Uranus has at least 17 moons, and Neptune has at least 8 moons. Pluto has one moon, called Charon. Charon is more than half as big as Pluto.

Comets and asteroids are rocky and icy bodies that are smaller than planets. The distinction between comets, asteroids, and other small bodies in the solar system is a little fuzzy, but generally a comet is icier than an asteroid and has a more elongated orbit. The orbit of a comet takes it close to the Sun, then back into the outer solar system. When comets near the Sun, some of their ice turns from solid material into gas, releasing some of their dust. Comets have long tails of glowing gas and dust when they are near the Sun. Asteroids are rockier bodies and usually have orbits that keep them always at about the same distance from the Sun.

Both comets and asteroids have their origins in the early solar system. While the solar system was forming, many small, rocky objects called planetesimals condensed from the gas and dust of the early solar system. Millions of planetesimals remain in orbit around the Sun. A large spherical cloud of such objects out beyond Pluto forms the Oort cloud. The objects in the Oort cloud are considered comets. When our solar system passes close to another star or drifts closer than usual to the centre of our galaxy, the change in gravitational pull may disturb the orbit of one of the icy comets in the Oort cloud. As this comet falls toward the Sun, the ice turns into vapour, freeing dust from the object. The gas and dust form the tail or tails of the comet. The gravitational pull of large planets such as Jupiter or Saturn may swerve the comet into an orbit closer to the Sun. The time needed for a comet to make a complete orbit around the Sun is called the comet’s period. Astronomers believe that comets with periods longer than about 200 years come from the Oort cloud. Short-period comets, those with periods less than about 200 years, probably come from the Kuiper Belt, a ring of planetesimals beyond Neptune. The material in comets is probably from the very early solar system, so astronomers study comets to find out more about our solar system’s formation.

When the solar system was forming, some of the planetesimals came together more toward the centre of the solar system. Gravitational forces from the giant planet Jupiter prevented these planetesimals from forming full-fledged planets. Instead, the planetesimals broke up to create thousands of minor planets, or asteroids, that orbit the Sun. Most of them are in the asteroid belt, between the orbits of Mars and Jupiter, but thousands are in orbits that come closer to Earth or even cross Earth's orbit. Scientists are increasingly aware of potential catastrophes if any of the largest of these asteroids hits Earth. Perhaps 2,000 asteroids larger than 1 km (0.6 mi) in diameter are potential hazards.

The Sun is the nearest star to Earth and is the centre of the solar system. It is only 8 light-minutes away from Earth, meaning light takes only eight minutes to travel from the Sun to Earth. The next nearest star is 4 light-years away, so light from this star, Proxima Centauri (part of the triple star Alpha Centauri), takes four years to reach Earth. The Sun's closeness means that the light and other energy we get from the Sun dominate Earth’s environment and life. The Sun also provides a way for astronomers to study stars. They can see details and layers of the Sun that are impossible to see on more distant stars. In addition, the Sun provides a laboratory for studying hot gases held in place by magnetic fields. Scientists would like to create similar conditions (hot gases contained by magnetic fields) on Earth. Creating such environments could be useful for studying basic physics.
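
The light-minute figure is a direct division of distance by the speed of light; a quick check using the mean earth-sun distance (a standard value assumed here, not given in the paragraph):

```python
# Light-travel time from the Sun to Earth.
c_km_per_sec = 300_000.0           # speed of light
sun_distance_km = 149_600_000.0    # mean earth-sun distance (assumed)

minutes = sun_distance_km / c_km_per_sec / 60
print(f"{minutes:.1f} light-minutes")   # ~8.3, the "8 light-minutes"

# For scale, one light-year expressed in kilometres:
light_year_km = c_km_per_sec * 3600 * 24 * 365.25
print(f"1 light-year = {light_year_km:.2e} km")   # ~9.5e12 km
```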

The Sun produces its energy by fusing hydrogen into helium in a process called nuclear fusion. In nuclear fusion, two atomic nuclei merge to form a heavier nucleus and release energy (see Nuclear Energy: Nuclear Fusion). The Sun and stars of similar mass start off with enough hydrogen to shine for about 10 billion years. The Sun is less than halfway through its lifetime.

Although most telescopes are used mainly to collect the light of faint objects so that they can be studied, telescopes for planetary and other solar system studies are also used to magnify images. Astronomers use some of the observing time of several important telescopes for planetary studies. Overall, planetary astronomers must apply and compete for observing time on telescopes with astronomers seeking to study other objects. Some planetary objects can be studied as they pass in front of, or occult, distant stars. The atmosphere of Neptune's moon Triton and the shapes of asteroids can be investigated in this way, for example. The fields of radio and infrared astronomy are useful for measuring the temperatures of planets and satellites. Ultraviolet astronomy can help astronomers study the magnetic fields of planets.

During the space age, scientists have developed telescopes and other devices, such as instruments to measure magnetic fields or space dust, that can leave Earth's surface and travel close to other objects in the solar system. Robotic spacecraft have visited all of the planets in the solar system except Pluto. Some missions have targeted specific planets and spent much time studying a single planet, and some spacecraft have flown past a number of planets.

Astronomers use different telescopes to study the Sun than they use for nighttime studies because of the extreme brightness of the Sun. Telescopes in space, such as the Solar and Heliospheric Observatory (SOHO) and the Transition Region and Coronal Explorer (TRACE), are able to study the Sun in regions of the spectrum other than visible light. X rays, ultraviolet, and radio waves from the Sun are especially interesting to astronomers. Studies in various parts of the spectrum give insight into giant flows of gas in the Sun, into how the Sun's energy leaves the Sun to travel to Earth, and into what the interior of the Sun is like. Astronomers also study solar-terrestrial relations - the relation of activity on the Sun with magnetic storms and other effects on Earth. Some of these storms and effects can affect radio reception, cause electrical blackouts, or damage satellites in orbit.

Our solar system began forming about 5 billion years ago, when a cloud of gas and dust between the stars in our Milky Way Galaxy began contracting. A nearby supernova - an exploding star - may have started the contraction, but most astronomers believe a random change in density in the cloud caused the contraction. Once the cloud - known as the solar nebula - began to contract, the contraction occurred faster and faster. The gravitational energy caused by this contraction heated the solar nebula. As the cloud became smaller, it began to spin faster, much as a spinning skater will spin faster by pulling in his or her arms. This spin kept the nebula from forming a sphere; instead, it settled into a disk of gas and dust.

In this disk, small regions of gas and dust began to draw closer and stick together. The objects that resulted, which were usually less than 500 km (300 mi) across, are the planetesimals. Eventually, some planetesimals stuck together and grew to form the planets. Scientists have made computer models of how they believe the early solar system behaved. The models show that it is usual for a solar system to produce one or two huge planets like Jupiter along with several other, much smaller planets.

The largest region of gas and dust wound up in the centre of the nebula and formed the protosun (proto comes from the Greek word for "first" and is used to distinguish between an object and its forerunner). The increasing temperature and pressure in the middle of the protosun vaporized the dust and eventually allowed nuclear fusion to begin, marking the formation of the Sun. The young Sun gave off a strong solar wind that drove off most of the lighter elements, such as hydrogen and helium, from the inner planets. The inner planets then solidified and formed rocky surfaces. As the solar wind lost strength, Jupiter's gravitational pull proved strong enough to keep its shroud of hydrogen and helium gas. Saturn, Uranus, and Neptune also kept their layers of light gases.

The theory of solar system formation described above accounts for the appearance of the solar system as we know it. Examples of this appearance include the fact that the planets all orbit the Sun in the same direction and that almost all the planets rotate on their axes in the same direction. The recent discoveries of distant solar systems with different properties could lead to modifications in the theory, however.

Studies in the visible, the infrared, and the shortest radio wavelengths have revealed disks around several young stars in our galaxy. One such object, Beta Pictoris (about 62 light-years from Earth), has revealed a warp in the disk that could be a sign of planets in orbit. Astronomers are hopeful that, in the cases of these young stars, they are studying the early stages of solar system formation.

Although astronomers have long assumed that many other stars have planets, they have been unable to detect these other solar systems until recently. Planets orbiting around stars other than the Sun are called extrasolar planets. Planets are small and dim compared to stars, so they are lost in the glare of their parent stars and are invisible to direct observation with telescopes.

Astronomers have tried to detect other solar systems by searching for the way a planet affects the movement of its parent star. The gravitational attraction between a planet and its star pulls the star slightly toward the planet, so the star wobbles slightly as the planet orbits it. Throughout the mid and late 1900s, several observatories tried to detect wobbles in the nearest stars by watching the stars' movement across the sky. Wobbles were reported in several stars, but later observations showed that the results were false.

In the early 1990s, studies of a pulsar revealed at least two planets orbiting it. Pulsars are compact stars that give off pulses of radio waves at very regular intervals. The pulsar, designated PSR 1257+12, is about 1,000 light-years from Earth. This pulsar's pulses sometimes came a little early and sometimes a little late in a periodic pattern, revealing that an unseen object was pulling the pulsar toward and away from Earth. The environment of a pulsar, which emits X rays and other strong radiation that would be harmful to life on Earth, is so extreme that these objects would have little resemblance to planets in our solar system.

The wobbling of a star changes the star's light that reaches Earth. When the star moves away from Earth, even slightly, each wave of light must travel farther to Earth than the wave before it. This increases the distance between waves (called the wavelength) as the waves reach Earth. When a star's planet pulls the star closer to Earth, each successive wavefront has less distance to travel to reach Earth. This shortens the wavelength of the light that reaches Earth. This effect is called the Doppler effect. No star moves fast enough for the change in wavelength to result in a noticeable change in colour, which depends on wavelength, but the changes in wavelength can be measured with precise instruments. Because the planet's effect on the star is very small, astronomers must analyze the starlight carefully to detect a shift in wavelength. They do this by first using a technique called spectroscopy to separate the white starlight into its component colours, as water vapour does to sunlight in a rainbow. Stars emit light over a continuous range of wavelengths, called the star's spectrum. This spectrum has dark lines, called absorption lines, at wavelengths at which atoms in the outermost layers of the star absorb light.
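
For the small speeds involved, the shift is proportional to velocity: v = c * (observed wavelength - rest wavelength) / rest wavelength. A brief Python sketch, using the hydrogen-alpha line at 656.28 nanometres as an example rest wavelength:

C = 299_792_458.0  # speed of light, m/s

def radial_velocity(rest_nm, observed_nm):
    # Positive result: the star is moving away (wavelength stretched).
    return C * (observed_nm - rest_nm) / rest_nm

print(radial_velocity(656.28, 656.29))  # a 0.01 nm shift corresponds to ~4,600 m/s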

Astronomers know what the exact wavelength of each absorption line is for a star that is not moving. By seeing how far the movement of a star shifts the absorption lines in its spectrum, astronomers can calculate how fast the star is moving. If the motion fits the model of the effect of a planet, astronomers can calculate the mass of the planet and how close it is to the star. These calculations can provide only a lower limit to the planet's mass, because astronomers cannot tell at what angle the planet's orbit is tilted. Astronomers would need to know that angle to calculate the planet's mass accurately. Because of this uncertainty, some of the giant extrasolar planets may be failed stars known as brown dwarfs rather than true planets. Most astronomers believe, however, that many of the suspected planets are true planets.

Between 1995 and 1999 astronomers discovered more than a dozen extrasolar planets. Astronomers now know of far more planets outside our solar system than inside it. Most of these planets, surprisingly, are more massive than Jupiter and orbit so close to their parent stars that some of them have "years" (the time it takes to orbit the parent star once) as short as just a few days on Earth. These solar systems are so different from our solar system that astronomers are still trying to reconcile them with the current theory of solar system formation. Some astronomers suggest that the giant extrasolar planets formed much farther away from their stars and were later thrown into the inner solar systems by some gravitational interaction.

Stars are an important topic of astronomical research. Stars are balls of gas that shine or used to shine because of nuclear fusion in their cores. The most familiar star is the Sun. The nuclear fusion in stars produces a force that pushes the material in a star outward. However, the gravitational attraction of the star’s material for itself pulls the material inward. A star can remain stable as long as the outward pressure and gravitational force balance. The properties of a star depend on its mass, its temperature, and its stage in evolution.

Astronomers study stars by measuring their brightness or, with more difficulty, their distances from Earth. They measure the “colour” of a star—the differences in the star’s brightness from one part of the spectrum to another—to determine its temperature. They also study the spectrum of a star’s light to determine not only the temperature, but also the chemical makeup of the star’s outer layers.

Many different types of stars exist. Some types of stars are really just different stages of a star’s evolution. Some types are different because the stars formed with much more or much less mass than other stars, or because they formed close to other stars. The Sun is a type of star known as a main-sequence star. Eventually, main-sequence stars such as the Sun swell into giant stars and then evolve into tiny, dense, white dwarf stars. Main-sequence stars and giants have a role in the behaviour of most variable stars and novas. A star much more massive than the Sun will become a supergiant star, then explode as a supernova. A supernova may leave behind a neutron star or a black hole.

In about 1910 Danish astronomer Ejnar Hertzsprung and American astronomer Henry Norris Russell independently worked out a way to graph basic properties of stars. On the horizontal axis of their graphs, they plotted the temperatures of stars. On the vertical axis, they plotted the brightness of stars in a way that allowed the stars to be compared. (One plotted the absolute brightness, or absolute magnitude, of a star, a measurement of brightness that takes into account the distance of the star from Earth. The other plotted stars in a nearby galaxy, all about the same distance from Earth.) The resulting Hertzsprung-Russell diagram, also called an H-R diagram or a colour-magnitude diagram (where colour relates to temperature), is a basic tool of astronomers.

On an H-R diagram, the brightest stars are at the top and the hottest stars are at the left. Hertzsprung and Russell found that most stars fell on a diagonal line across the H-R diagram from upper left to lower right. This line is called the main sequence. The diagonal line of main-sequence stars indicates that temperature and brightness of these stars are directly related: the hotter a main-sequence star is, the brighter it is. The Sun is a main-sequence star, located in about the middle of the graph. More faint, cool stars exist than hot, bright ones, so the Sun is brighter and hotter than most of the stars in the universe.

At the upper right of the H-R diagram, above the main sequence, stars are brighter than main-sequence stars of the same colour. The only way stars of a certain colour can be brighter than other stars of the same colour is if the brighter stars are also bigger. Bigger stars are not necessarily more massive, but they do have larger diameters. Stars that fall in the upper right of the H-R diagram are known as giant stars or, for even brighter stars, supergiant stars. Supergiant stars have both larger diameters and larger masses than giant stars.

Giant and supergiant stars represent stages in the lives of stars after they have burned most of their internal hydrogen fuel. Stars swell as they move off the main sequence, becoming giants and—for more massive stars—supergiants.

A few stars fall in the lower left portion of the H-R diagram, below the main sequence. Just as giant stars are larger and brighter than main-sequence stars, these stars are smaller and dimmer. These smaller, dimmer stars are hot enough to be white or blue-white in colour and are known as white dwarfs.

White dwarf stars are only about the size of Earth. They represent stars with about the mass of the Sun that have burned as much hydrogen as they can. The gravitational force of a white dwarf’s mass is pulling the star inward, but electrons in the star resist being pushed together. The gravitational force is able to pull the star into a much denser form than it was in when the star was burning hydrogen. The final stage of life for all stars like the Sun is the white dwarf stage.

Many stars vary in brightness over time. These variable stars come in a variety of types. One important type is called a Cepheid variable, named after the star Delta Cephei, which is a prime example of a Cepheid variable. These stars vary in brightness as they swell and contract over a period of weeks or months. Their average brightness depends on how long the period of variation takes. Thus astronomers can determine how bright the star is merely by measuring the length of the period. By comparing how intrinsically bright these variable stars are with how bright they look from Earth, astronomers can calculate how far away these stars are from Earth. Since they are giant stars and are very bright, Cepheid variables in other galaxies are visible from Earth. Studies of Cepheid variables tell astronomers how far away these galaxies are and are very useful for determining the distance scale of the universe. The Hubble Space Telescope (HST) can determine the periods of Cepheid stars in galaxies farther away than ground-based telescopes can see. Astronomers are developing a more accurate idea of the distance scale of the universe with HST data.
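
A sketch of the calculation: the period gives the star's intrinsic (absolute) brightness, and the distance modulus, m - M = 5 * log10(d / 10 parsecs), then gives the distance. The period-luminosity coefficients in the Python below are illustrative placeholders, not a definitive calibration:

import math

def absolute_magnitude(period_days):
    # Assumed, illustrative period-luminosity relation for Cepheids.
    return -2.43 * (math.log10(period_days) - 1.0) - 4.05

def distance_parsecs(apparent_mag, absolute_mag):
    # Invert the distance modulus: m - M = 5 * log10(d / 10 pc).
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

M = absolute_magnitude(10.0)      # a Cepheid pulsing with a 10-day period
print(distance_parsecs(15.0, M))  # its distance in parsecs if it appears at magnitude 15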

Cepheid variables are only one type of variable star. Stars called long-period variables vary in brightness as they contract and expand, but these stars are not as regular as Cepheid variables. Mira, a star in the constellation Cetus (the whale), is a prime example of a long-period variable star. Variable stars called eclipsing binary stars are really pairs of stars. Their brightness varies because one member of the pair appears to pass in front of the other, as seen from Earth. Variable stars called R Coronae Borealis stars vary because they occasionally give off clouds of carbon dust that dim them.

Sometimes stars brighten drastically, becoming as much as 100 times brighter than they were. These stars are called novas (Latin for "new stars"). They are not really new, just much brighter than they were earlier. A nova is a binary, or double, star in which one member is a white dwarf and the other is a giant or supergiant. Matter from the large star falls onto the small star. After a thick layer of the large star’s atmosphere has collected on the white dwarf, the layer burns off in a nuclear fusion reaction. The fusion produces a huge amount of energy, which, from Earth, appears as the brightening of the nova. The nova gradually returns to its original state, and material from the large star again begins to collect on the white dwarf.

Sometimes stars brighten many times more drastically than novas do. A star that had been too dim to see can become one of the brightest stars in the sky. These stars are called supernovas. Sometimes supernovas that occur in other galaxies are so bright that, from Earth, they appear as bright as their host galaxy.

There are two types of supernova. One type is an extreme case of a nova, in which matter falls from a giant or supergiant companion onto a white dwarf. In the case of a supernova, the white dwarf gains so much fuel from its companion that the star increases in mass until strong gravitational forces cause it to become unstable. The star collapses and the core explodes, vaporizing much of the white dwarf and producing an immense amount of light. Only bits of the white dwarf remain after this type of supernova occurs.

The other type of supernova occurs when a supergiant star uses up all its nuclear fuel in nuclear fusion reactions. The star uses up its hydrogen fuel, but the core is hot enough that it provides the initial energy necessary for the star to begin "burning" helium, then carbon, and then heavier elements through nuclear fusion. The process stops when the core is mostly iron, which is too heavy for the star to "burn" in a way that gives off energy. With no such fuel left, the inward gravitational attraction of the star's material for itself has no outward balancing force, and the core collapses. As it collapses, the core releases a shock wave that tears apart the star's atmosphere. The core continues collapsing until it forms either a neutron star or a black hole, depending on its mass.

Only a handful of supernovas are known in our galaxy. The last Milky Way supernova seen from Earth was observed in 1604. In 1987 astronomers observed a supernova in the Large Magellanic Cloud, one of the Milky Way's satellite galaxies (see Magellanic Clouds). This supernova became bright enough to be visible to the unaided eye and is still under careful study from telescopes on Earth and from the Hubble Space Telescope. A supernova in the process of exploding emits radiation in the X-ray, ultraviolet, and radio ranges, and studies in these parts of the spectrum are especially useful for astronomers studying supernova remnants.

Neutron stars are the collapsed cores sometimes left behind by supernova explosions. Pulsars are a special type of neutron star. Pulsars and neutron stars form when the remnant of a star left after a supernova explosion collapses until it is about 10 km (about 6 mi) in radius. At that point, the neutrons - electrically neutral atomic particles - of the star resist being pressed together further. When the force produced by the neutrons balances the gravitational force, the core stops collapsing. At that point, the star is so dense that a teaspoonful has the mass of a billion metric tons.
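
That teaspoon figure can be checked with simple arithmetic, assuming a neutron star of about 1.4 solar masses (a typical value) packed into a 10 km radius:

import math

M_SUN = 1.99e30        # kg
mass = 1.4 * M_SUN     # an assumed, typical neutron-star mass
radius_m = 10_000      # 10 km in metres
volume = 4.0 / 3.0 * math.pi * radius_m ** 3

density = mass / volume              # ~7e17 kg per cubic metre
teaspoon_m3 = 5e-6                   # a teaspoon holds about 5 millilitres
print(density * teaspoon_m3 / 1000)  # mass in metric tons: a few billion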

Neutron stars become pulsars when the magnetic field of a neutron star directs a beam of radio waves out into space. The star is so small that it rotates from one to a few hundred times per second. As the star rotates, the beam of radio waves sweeps out a path in space. If Earth is in the path of the beam, radio astronomers see the rotating beam as periodic pulses of radio waves. This pulsing is the reason these stars are called pulsars.

Some neutron stars are in binary systems with an ordinary star neighbour. The gravitational pull of a neutron star pulls material off its neighbour. The material heats up as it falls onto the neutron star, causing it to emit X rays. The neutron star's magnetic field directs the X rays into a beam that sweeps into space and may be detected from Earth. Astronomers call these stars X-ray pulsars.

Gamma-ray spacecraft detect bursts of gamma rays about once a day. The bursts come from sources in distant galaxies, so they must be extremely powerful for us to be able to detect them. A leading model used to explain the bursts is the merger of two neutron stars in a distant galaxy with a resulting hot fireball. A few such explosions have been seen and studied with the Hubble and Keck telescopes.

Black holes are objects that are so massive and dense that their immense gravitational pull does not even let light escape. If the core left over after a supernova explosion has a mass of more than about five times that of the Sun, the force holding up the neutrons in the core is not large enough to balance the inward gravitational force. No outward force is large enough to resist the gravitational force. The core of the star continues to collapse. When the core's mass is sufficiently concentrated, the gravitational force of the core is so strong that nothing, not even light, can escape it. The gravitational force is so strong that classical physics no longer applies, and astronomers use Einstein's general theory of relativity to explain the behaviour of light and matter under such strong gravitational forces. According to general relativity, space around the core becomes so warped that nothing can escape, creating a black hole. A star with a mass ten times the mass of the Sun would become a black hole if it were compressed to about 60 km (about 37 mi) or less in diameter.
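
The critical size comes from the Schwarzschild radius, r = 2GM/c^2. A short Python check with rounded constants:

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8      # speed of light, m/s
M_SUN = 1.99e30  # solar mass, kg

def schwarzschild_radius_km(mass_kg):
    # Radius below which the given mass becomes a black hole.
    return 2 * G * mass_kg / C ** 2 / 1000

r = schwarzschild_radius_km(10 * M_SUN)
print(r, 2 * r)  # ~30 km radius, so a diameter of roughly 60 km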

Astronomers have various ways of detecting black holes. When a black hole is in a binary system, matter from the companion star spirals into the black hole, forming a disk of gas around it. The disk becomes so hot that it gives off X rays that astronomers can detect from Earth. Astronomers use X-ray telescopes in space to find X-ray sources, and then they look for signs that an unseen object of more than about five times the mass of the Sun is causing gravitational tugs on a visible object. By 1999 astronomers had found about a dozen potential black holes.

The basic method that astronomers use to find the distance of a star from Earth uses parallax. Parallax is the change in apparent position of a distant object when viewed from different places. For example, imagine a tree standing in the centre of a field, with a row of buildings at the edge of the field behind the tree. If two observers stand at the two front corners of the field, the tree will appear in front of a different building for each observer. Similarly, a nearby star's position appears different when seen from different angles.

Parallax also allows human eyes to judge distance. Each eye sees an object from a different angle. The brain compares the two pictures to judge the distance to the object. Astronomers use the same idea to calculate the distance to a star. Stars are very far away, so astronomers must look at a star from two locations as far apart as possible to get a measurement. The movement of Earth around the Sun makes this possible. By taking measurements six months apart from the same place on Earth, astronomers take measurements from locations separated by the diameter of Earth's orbit. That is a separation of about 300 million km (186 million mi). The nearest stars will appear to shift slightly with respect to the background of more distant stars. Even so, the greatest stellar parallax is only about 0.77 seconds of arc, an amount 4,600 times smaller than a single degree. Astronomers calculate a star's distance in parsecs by dividing 1 by the parallax measured in seconds of arc. Distances of stars are usually measured in parsecs. A parsec is 3.26 light-years, and a light-year is the distance that light travels in a year, or about 9.5 trillion km (5.9 trillion mi). Proxima Centauri, the Sun's nearest neighbour, has a parallax of 0.77 seconds of arc. This measurement indicates that Proxima Centauri's distance from Earth is about 1.3 parsecs, or 4.2 light-years. Because Proxima Centauri is the Sun's nearest neighbour, it has a larger parallax than any other star.
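
The rule that distance in parsecs is 1 divided by the parallax in seconds of arc takes only a few lines of Python:

LY_PER_PARSEC = 3.26

def distance_from_parallax(parallax_arcsec):
    # Returns (parsecs, light-years) for a parallax in seconds of arc.
    parsecs = 1.0 / parallax_arcsec
    return parsecs, parsecs * LY_PER_PARSEC

print(distance_from_parallax(0.77))  # Proxima Centauri: ~1.3 parsecs, ~4.2 light-years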

Astronomers can measure stellar parallaxes for stars up to about 500 light-years away, which is only about 2 percent of the distance to the centre of our galaxy. Beyond that distance, the parallax angle is too small to measure.

A European Space Agency spacecraft named Hipparcos (an acronym for High Precision Parallax Collecting Satellite), launched in 1989, gave a set of accurate parallaxes across the sky that was released in 1997. This set of measurements has provided a uniform database of stellar distances for more than 100,000 stars and a somewhat less accurate database of more than 1 million stars. These parallax measurements provide the base for measurements of the distance scale of the universe. Hipparcos data are leading to more accurate age calculations for the universe and for objects in it, especially globular clusters of stars.

Astronomers use a star's light to determine the star's temperature, composition, and motion. Astronomers analyze a star's light by looking at its intensity at different wavelengths. Blue light has the shortest visible wavelengths, at about 400 nanometers. (A nanometer, abbreviated nm, is one billionth of a metre, or about one twenty-five-millionth of an inch.) Red light has the longest visible wavelengths, at about 650 nm. A law of radiation known as Wien's displacement law (developed by German physicist Wilhelm Wien) links the wavelength at which the most energy is given out by an object and its temperature. A star like the Sun, whose surface temperature is about 6000 K (about 5730°C or about 10,350°F), gives off the most radiation in yellow-green wavelengths, with decreasing amounts in shorter and longer wavelengths. Astronomers put filters of different standard colours on telescopes to allow only light of a particular colour from a star to pass. In this way, astronomers determine the brightness of a star at particular wavelengths. From this information, astronomers can use Wien's law to determine the star's surface temperature.
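
Wien's law itself is simple: the peak wavelength equals a constant (about 0.0029 metre-kelvins) divided by the temperature. In Python:

B_WIEN = 2.898e-3  # Wien's displacement constant, metre-kelvins

def peak_wavelength_nm(temperature_k):
    # Wavelength, in nanometres, at which a body of this temperature radiates most.
    return B_WIEN / temperature_k * 1e9

print(peak_wavelength_nm(6000))  # ~483 nm, near the middle of the visible range
print(peak_wavelength_nm(3500))  # ~828 nm, in the infrared, for a cool M star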

Astronomers can see the different wavelengths of light of a star in more detail by looking at its spectrum. The continuous rainbow of colour of a star's spectrum is crossed by dark lines, or spectral lines. In the early 19th century, German physicist Joseph von Fraunhofer identified such lines in the Sun's spectrum, and they are still known as Fraunhofer lines. American astronomer Annie Jump Cannon divided stars into several categories by the appearance of their spectra. She labelled them with capital letters according to how dark their hydrogen spectral lines were. Later astronomers reordered these categories according to decreasing temperature. The categories are O, B, A, F, G, K, and M, where O stars are the hottest and M stars are the coolest. The Sun is a G star. An additional spectral type, L stars, was suggested in 1998 to accommodate some cool stars studied using new infrared observational capabilities. Detailed study of spectral lines shows the physical conditions in the atmospheres of stars. Careful study of spectral lines shows that some stars have broader lines than others of the same spectral type. The broad lines indicate that the outer layers of these stars are more diffuse, meaning that these layers are larger, but spread more thinly, than the outer layers of other stars. Stars with large diffuse atmospheres are called giants. Giant stars are not necessarily more massive than other stars - the outer layers of giant stars are just more spread out.

Many stars have thousands of spectral lines from iron and other elements near iron in the periodic table. Other stars of the same temperature have relatively few spectral lines from such elements. Astronomers interpret these findings to mean that two different populations of stars exist. Some formed long ago, before supernovas produced the heavy elements, and others formed more recently and incorporated some heavy elements. The Sun is one of the more recent stars. Spectral lines can also be studied to see if they change in wavelength or are different in wavelength from sources of the same lines on Earth. These studies tell us, according to the Doppler effect, how much the star is moving toward or away from us. Such studies of starlight can tell us about the orbits of stars in binary systems or about the pulsations of variable stars, for example.

Astronomers study galaxies to learn about the structure of the universe. Galaxies are huge collections of billions of stars. Our Sun is part of the Milky Way Galaxy. Galaxies also contain dark strips of dust and may contain huge black holes at their centres. Galaxies exist in different shapes and sizes. Some galaxies are spirals, some are oval, or elliptical, and some are irregular. The Milky Way is a spiral galaxy. Galaxies tend to group together in clusters.

Our Sun is only one of about 400 billion stars in our home galaxy, the Milky Way. On a dark night, far from outdoor lighting, a faint, hazy, whitish band spans the sky. This band is the Milky Way Galaxy as it appears from Earth. The Milky Way looks splotchy, with darker regions interspersed with lighter ones.

The Milky Way Galaxy is a pinwheel-shaped flattened disk about 75,000 light-years in diameter. The Sun is located on a spiral arm about two-thirds of the way out from the centre. The galaxy spins, but the centre spins faster than the arms. At Earth’s position, the galaxy makes a complete rotation about every 200 million years.

When observers on Earth look toward the brightest part of the Milky Way, which is in the constellation Sagittarius, they look through the galaxy’s disk toward its centre. This disk is composed of the stars, gas, and dust between Earth and the galactic centre. When observers look in the sky in other directions, they do not see as much of the galaxy’s gas and dust, and so can see objects beyond the galaxy more clearly.

The Milky Way Galaxy has a core surrounded by its spiral arms. A spherical cloud containing about 100 examples of a type of star cluster known as a globular cluster surrounds the galaxy. Still farther out is a galactic corona. Astronomers are not sure what types of particles or objects occupy the corona, but these objects do exert a measurable gravitational force on the rest of the galaxy. Galaxies contain billions of stars, but the space between stars is not empty. Astronomers believe that almost every galaxy probably has a huge black hole at its centre.

The space between stars in a galaxy consists of low-density gas and dust. The dust is largely carbon given off by red-giant stars. The gas is largely hydrogen, which accounts for 90 percent of the atoms in the universe. Hydrogen exists in two main forms in the universe. Astronomers give complete hydrogen atoms, with a nucleus and an electron, a designation of the Roman numeral I, or HI. Ionized hydrogen, hydrogen made up of atoms missing their electrons, is given the designation II, or HII. Clouds, or regions, of both types of hydrogen exist between the stars. HI regions are too cold to produce visible radiation, but they do emit radio waves that are useful in measuring the movement of gas in our own galaxy and in distant galaxies. The HII regions form around hot stars. These regions emit diffuse radiation in the visual range, as well as in the radio, infrared, and ultraviolet ranges. The cloudy light from such regions forms beautiful nebulas such as the Great Orion Nebula.

Astronomers have located more than 100 types of molecules in interstellar space. These molecules occur only in trace amounts among the hydrogen. Still, astronomers can use these molecules to map galaxies. By measuring the density of the molecules throughout a galaxy, astronomers can get an idea of the galaxy's structure. Interstellar dust sometimes gathers to form dark nebulae, which appear in silhouette against background gas or stars from Earth. The Horsehead Nebula, for example, is the silhouette of interstellar dust against a background HI region. See also Interstellar Matter.

The first known black holes were the collapsed cores of supernova stars, but astronomers have since discovered signs of much larger black holes at the centres of galaxies. These galactic black holes contain millions of times as much mass as the Sun. Astronomers believe that huge black holes such as these provide the energy of mysterious objects called quasars. Quasars are very distant objects that are moving away from Earth at high speed. The first ones discovered were very powerful radio sources, but scientists have since discovered quasars that don’t strongly emit radio waves. Astronomers believe that almost every galaxy, whether spiral or elliptical, has a huge black hole at its centre.

Astronomers look for galactic black holes by studying the movement of galaxies. By studying the spectrum of a galaxy, astronomers can tell if gas near the centre of the galaxy is rotating rapidly. By measuring the speed of rotation and the distance from various points in the galaxy to the centre of the galaxy, astronomers can determine the amount of mass in the centre of the galaxy. Measurements of many galaxies show that gas near the centre is moving so quickly that only a black hole could be dense enough to concentrate so much mass in such a small space. Astronomers suspect that a significant black hole occupies even the centre of the Milky Way. The clear images from the Hubble Space Telescope have allowed measurements of motions closer to the centres of galaxies than previously possible, and have led to the confirmation in several cases that giant black holes are present.
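
For a rough sense of the arithmetic, treating the orbiting gas as moving in a circle gives an enclosed mass of M = v^2 * r / G. The speed and radius in the Python below are illustrative numbers, not measurements of any particular galaxy:

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.99e30      # solar mass, kg
PARSEC_M = 3.086e16  # metres in one parsec

def enclosed_mass_solar(speed_km_s, radius_pc):
    # Mass, in solar masses, needed to hold gas in a circular orbit at this speed and radius.
    v = speed_km_s * 1000.0
    r = radius_pc * PARSEC_M
    return v ** 2 * r / G / M_SUN

print(enclosed_mass_solar(200, 1))  # ~9 million solar masses within 1 parsec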

Galaxies are classified by shape. The three types are spiral, elliptical, and irregular. Spiral galaxies consist of a central mass with one, two, or three arms that spiral around the centre. An elliptical galaxy is oval, with a bright centre that gradually, evenly dims to the edges. Irregular galaxies are not symmetrical and do not look like spiral or elliptical galaxies. Irregular galaxies vary widely in appearance. A galaxy that has a regular spiral or elliptical shape but has some special oddity is known as a peculiar galaxy. For example, some peculiar galaxies are stretched and distorted from the gravitational pull of a nearby galaxy.

Spiral galaxies are flattened pinwheels in shape. They can have from one to three spiral arms coming from a central core. The Great Andromeda Spiral Galaxy is a good example of a spiral galaxy. The shape of the Milky Way is not visible from Earth, but astronomers have measured that the Milky Way is also a spiral galaxy. American astronomer Edwin Hubble further classified spiral galaxies by the tightness of their spirals. In order of increasingly open arms, Hubble's types are Sa, Sb, and Sc. Some galaxies have a straight, bright, bar-shaped feature across their centre, with the spiral arms coming off the bar or off a ring around the bar. With a capital B for the bar, the Hubble types of these galaxies are SBa, SBb, and SBc.

Many clusters of galaxies have giant elliptical galaxies at their centres. Smaller elliptical galaxies, called dwarf elliptical galaxies, are much more common than giant ones. Most of the two dozen galaxies in the Milky Way’s Local Group of galaxies are dwarf elliptical galaxies.

Astronomers classify elliptical galaxies by how oval they look, ranging from E0 for very round to E3 for intermediately oval to E7 for extremely elongated. Beyond E7 in this scheme lies the class S0, also known as a lenticular galaxy, a shape with an elongated disk but no spiral arms. Because astronomers can see other galaxies only from the perspective of Earth, the shape astronomers see is not necessarily the exact shape of a galaxy. For instance, they may be viewing it from an end, and not from above or below.

Some galaxies have no structure, while others have some trace of structure but do not fit the spiral or elliptical classes. All of these galaxies are called irregular galaxies. The two small galaxies that are satellites to the Milky Way Galaxy are both irregular. They are known as the Magellanic Clouds. The Large Magellanic Cloud shows signs of having a bar in its centre. The Small Magellanic Cloud is more formless. Studies of stars in the Large and Small Magellanic Clouds have been fundamental for astronomers’ understanding of the universe. Each of these galaxies provides groups of stars that are all at the same distance from Earth, allowing astronomers to compare the absolute brightness of these stars.

In the late 1920s American astronomer Edwin Hubble discovered that all but the nearest galaxies to us are receding, or moving away from us. Further, he found that the farther away from Earth a galaxy is, the faster it is receding. He made his discovery by taking spectra of galaxies and measuring the amount by which the wavelengths of spectral lines were shifted. He measured distance in a separate way, usually from studies of Cepheid variable stars. Hubble discovered that essentially all the spectra of all the galaxies were shifted toward the red, or had redshifts. The redshifts of galaxies increased with increasing distance from Earth. After Hubble’s work, other astronomers made the connection between redshift and velocity, showing that the farther a galaxy is from Earth, the faster it moves away from Earth. This idea is called Hubble’s law and is the basis for the belief that the universe is uniformly expanding. Other uniformly expanding three-dimensional objects, such as a rising cake with raisins in the batter, also demonstrate the consequence that the more distant objects (such as the other raisins with respect to any given raisin) appear to recede more rapidly than nearer ones. This consequence is the result of the increased amount of material expanding between these more distant objects.

Hubble's law states that there is a straight-line, or linear, relationship between the speed at which an object is moving away from Earth and the distance between the object and Earth. The speed at which an object is moving away from Earth is called the object’s velocity of recession. Hubble’s law indicates that as velocity of recession increases, distance increases by the same proportion. Using this law, astronomers can calculate the distance to the most distant galaxies, given only measurements of their velocities calculated by observing how much their light is shifted. Astronomers can accurately measure the redshifts of objects so distant that the distance between Earth and the objects cannot be measured by other means.

The constant of proportionality that relates velocity to distance in Hubble's law is called Hubble's constant, or H. Hubble's law is often written v=Hd, or velocity equals Hubble's constant multiplied by distance. Thus determining Hubble's constant will give the speed of the universe's expansion. The inverse of Hubble’s constant, or 1/H, theoretically provides an estimate of the age of the universe. Astronomers now believe that Hubble’s constant has changed over the lifetime of the universe, however, so estimates of expansion and age must be adjusted accordingly.

The value of Hubble's constant probably falls between 64 and 78 kilometres per second per megaparsec (between 40 and 48 miles per second per megaparsec). A megaparsec is 1 million parsecs, and a parsec is 3.26 light-years. The Hubble Space Telescope studied Cepheid variables in distant galaxies to get an accurate measurement of the distance between the stars and Earth and so refine the value of Hubble's constant. The value found was 72 kilometres per second per megaparsec (45 miles per second per megaparsec), with an uncertainty of only 10 percent.
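
With that value in hand, Hubble's law and the rough 1/H age estimate reduce to a few lines of Python (ignoring, as the text notes, any change in the expansion rate over time):

H = 72.0           # Hubble's constant, km/s per megaparsec
MPC_KM = 3.086e19  # kilometres in one megaparsec

def distance_mpc(velocity_km_s):
    # Hubble's law, inverted: d = v / H.
    return velocity_km_s / H

print(distance_mpc(7200))  # a galaxy receding at 7,200 km/s lies ~100 megaparsecs away

age_seconds = MPC_KM / H            # 1/H expressed in seconds
print(age_seconds / 3.156e7 / 1e9)  # ~13.6 billion years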

The actual age of the universe depends not only on Hubble's constant but also on how much the gravitational pull of the mass in the universe slows the universe's expansion. Some data from studies that use the brightness of distant supernovas to assess distance indicate that the universe's expansion is speeding up instead of slowing. Astronomers invented the term "dark energy" for the unknown cause of this accelerating expansion and are actively investigating these topics.

The ultimate goal of astronomers is to understand the structure, behaviour, and evolution of all of the matter and energy that exists. Astronomers call the set of all matter and energy the universe. The universe is infinite in space, but astronomers believe it does have a finite age. Astronomers accept the theory that about 14 billion years ago the universe began as an explosive event resulting in a hot, dense, expanding sea of matter and energy. This event is known as the big bang. Astronomers cannot observe that far back in time. Many astronomers believe, however, the theory that within the first fraction of a second after the big bang, the universe went through a tremendous inflation, expanding many times in size, before it resumed a slower expansion.

As the universe expanded and cooled, various forms of elementary particles of matter formed. By the time the universe was one second old, protons had formed. For approximately the next 1,000 seconds, in the era of nucleosynthesis, all the nuclei of deuterium (hydrogen with both a proton and neutron in the nucleus) that are present in the universe today formed. During this brief period, some nuclei of lithium, beryllium, and helium formed as well.

When the universe was about 1 million years old, it had cooled to about 3000 K (about 2730°C or about 4940°F). At that temperature, the protons and heavier nuclei formed during nucleosynthesis could combine with electrons to form atoms. Before electrons combined with nuclei, the travel of radiation through space was very difficult. Radiation in the form of photons (packets of light energy) could not travel very far without colliding with electrons. Once protons and electrons combined to form hydrogen, photons became able to travel through space. The radiation carried by the photons had the characteristic spectrum of a hot gas. Since the time this radiation was first released, it has cooled and is now about 3 K (about -270°C or -454°F). It is called the primeval background radiation and has been definitively detected and studied, first by radio telescopes and then by the Cosmic Background Explorer (COBE) and Wilkinson Microwave Anisotropy Probe (WMAP) spacecraft. COBE, WMAP, and ground-based radio telescopes detected tiny deviations from uniformity in the primeval background radiation; these deviations may be the seeds from which clusters of galaxies grew.
